If I have a robot with wheel encoders that runs around, then, theoretically, I can calculate where the robot is relative to its starting point. I.e. its starting point is the origin, it has a position vector (x,y) for its current position relative to that point, and a direction vector indicating its current direction of travel. The x,y axes will be arbitrary. OK, that's a well-known problem, so let's not worry about cumulative encoder errors, choice of x,y axes etc. Just take it as read that the robot knows where it is in relation to its start point. Let's assume this robot is happy just running around, creating a map of its environment, avoiding stuff, maze solving, eating cheese and being a thoroughly excellent robot.
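Just to pin down what "knows where it is" means, here's a minimal dead-reckoning sketch for a differential-drive robot. All the names and units are my own assumptions for illustration (wheel distances per update, wheel base in the same units); it's the standard odometry integration, not any particular robot's firmware:

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Advance a differential-drive pose estimate by one encoder reading.

    d_left / d_right are the distances each wheel travelled since the
    last update; wheel_base is the distance between the two wheels.
    theta is the heading in radians relative to the initial x axis.
    """
    d_center = (d_left + d_right) / 2.0        # distance moved by the robot centre
    d_theta = (d_right - d_left) / wheel_base  # change in heading (radians)
    # Integrate using the heading at the midpoint of the step
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta

# Starting at the origin, facing along the x axis, both wheels move 10 units:
pose = update_pose(0.0, 0.0, 0.0, d_left=10.0, d_right=10.0, wheel_base=12.0)
# straight-line motion: x advances by 10, y and heading stay at 0
```

Calling this once per encoder tick (or timer interval) gives each robot its own (x, y, heading) relative to wherever it was switched on, which is exactly the situation described above.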
Next I place (i.e. switch on) another, similar robot, but at a completely different start point and pointing in a different direction. This robot is also happy just running around doing the same stuff.
The only difference is that their starting points (i.e. each robot's map origin of 0,0) are in different places and their initial x,y axes point in different directions. This could be fixed by a translation (so they share the same origin) and a rotation (so they share the same x,y axes). Then both of their maps would be identical and they could co-operate by sharing their map data.
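That translation-plus-rotation is just a rigid 2D transform. Here's a sketch of what "fixing it" would look like in code, assuming we somehow already know the offset (dx, dy) and the angle between the two sets of axes (finding those is the whole question, of course):

```python
import math

def transform_point(x, y, dx, dy, angle):
    """Map a point from robot B's frame into robot A's frame.

    (dx, dy) is B's origin expressed in A's coordinates, and angle is
    the rotation (radians) from B's x axis to A's x axis.  Rotate first,
    then translate.
    """
    xr = x * math.cos(angle) - y * math.sin(angle)
    yr = x * math.sin(angle) + y * math.cos(angle)
    return xr + dx, yr + dy

# B's point (1, 0), with B's origin at A's (5, 0) and B's axes rotated 90 degrees,
# lands at A's (5, 1):
merged = transform_point(1.0, 0.0, dx=5.0, dy=0.0, angle=math.pi / 2)
```

Run every point in robot B's map through this and the two maps share one origin and one set of axes.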
So let's assume that all of the above is 'true and makes sense' - it does to me, but then I'm only human!!
So my question is: since each robot only has a sense of position relative to its own start point, how do they exchange 'enough' data to zero their start points in to the same position? I know there are various hardware systems out there that use beacons (either placed on the floor or screwed into the ceiling) - but my wife would go berserk if I wired up the house in this way!! Fine if you've got robots running around a warehouse. It's also not very portable.
I think the problem can be simplified to a degree, in that we don't care about the absolute physical location of the robots - we only care about their location relative to each other and relative to a common, if arbitrary, start point.
Here are some of my thoughts....
You could have very low-intensity IR transceivers on each robot so that once they first get into close contact they can share their info. I.e. they both know where they are relative to their own start points, and their direction of travel relative to their own initial x-y axes. They also know that they are very close to each other. So if you sub-divide the floor space into 6-inch squares, you could say that they are both in the same square (you could refine this by knowing the radius of each robot's base, to be more exact). Therefore one of the robots could re-adjust its origin, and its initial x-y axes, to be in sync with the other robot. Now they have a common map of the world relative to a shared origin.
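Here's a sketch of the re-adjustment step at the rendezvous. It assumes a simplification on top of the 6-inch-square idea: at the moment they exchange data, the two robots are at (approximately) the same spot and facing the same way - say they briefly drive nose-to-tail. Each sends its pose (x, y, heading) in its own frame, and robot B computes the transform that maps its map into robot A's frame:

```python
import math

def frame_b_to_a(pose_a, pose_b):
    """Compute the transform mapping robot B's map into robot A's map.

    Assumes both robots occupy the same spot and face the same way at
    the moment of exchange.  Each pose is (x, y, heading) in that
    robot's own frame.  Returns (dx, dy, angle): rotate a B-frame point
    by angle, then translate by (dx, dy), to get the A-frame point.
    """
    xa, ya, ta = pose_a
    xb, yb, tb = pose_b
    angle = ta - tb                      # rotation between the two x axes
    c, s = math.cos(angle), math.sin(angle)
    dx = xa - (xb * c - yb * s)          # translation that makes the two
    dy = ya - (xb * s + yb * c)          # reported positions coincide
    return dx, dy, angle

# A is at (2, 3) heading 90 deg in its frame; B is at (1, 0) heading 0 in its frame.
# Since they are physically at the same spot, B's frame must be rotated 90 deg
# and shifted so that B's (1, 0) lands on A's (2, 3):
dx, dy, angle = frame_b_to_a((2.0, 3.0, math.pi / 2), (1.0, 0.0, 0.0))
```

The heading match is what pins down the rotation - a single shared point on its own only gives you the translation. If the robots can't physically align, a second rendezvous at a different spot would do the same job.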
I don't think that RF, Bluetooth etc. will help because, as I see it, you need really low-power transmitters so that communication only happens when the robots share a common point.
When you first switch on a new robot, its primary goal could be to chase any other moving targets, so that it can get in sync with the 'robot pack'. Once in sync, the robots could collaborate, via a master laptop link if necessary, to share their common view of the world. Divide and conquer!!!
Can anyone else think of another easy way for two robots to agree on an origin and x/y axes for their map data WITHOUT requiring ANY external devices?