Why do people not use an optical mouse as an XY encoder for a robot that has to work on standard robot arenas?
Had you spent less than a second on Google, you would have seen that they do - depending on your search terms, you get anywhere from 200,000 to more than 4 million hits on the subject.
One of the people who have been hacking around with mice is benji, and I have seen a number of different implementations of the idea.
I'm tempted to ask in return, "Why don't you use an optical mouse for XY?" It should be pretty accurate, no?
As with most sensors, a good deal of the success (or otherwise) lies in the implementation of both hardware and software - even the best sensor can be F'd up completely by bad hardware and/or software, while even very crude sensors can give precise results if implemented right.
As an example, just consider the Wii Balance Board (the platform Wii Fit runs on), which really is nothing more than a bathroom scale with four load cells, but with clever interfacing and programming it can deduce the slightest change in your posture, your 2D CoG (centre of gravity) and a host of other things - that's smooth interfacing.
If you can find a way to interface a microcontroller with an optical mouse, it should give good XY measurements with respect to the initial position. Theoretically. A minimal sketch of that delta-integration idea follows below.
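To make the idea concrete, here is a minimal Python sketch - assuming (my setup, not anything from the thread) a Linux host that exposes the mouse as /dev/input/mice; on a microcontroller you would clock in the same 3-byte PS/2 packets yourself. It just integrates the raw dx/dy deltas into a running XY position:

```python
import struct

# Dead reckoning from raw mouse deltas. /dev/input/mice emits 3-byte
# PS/2-style packets: button flags, then signed dx and dy.
# Reading it usually requires root or membership in the "input" group.
x = y = 0  # accumulated position in mouse counts

with open("/dev/input/mice", "rb") as mouse:
    while True:
        flags, dx, dy = struct.unpack("Bbb", mouse.read(3))
        x += dx
        y += dy  # PS/2 convention: positive dy is "away from the user"
        # Divide by the sensor's CPI (counts per inch) for real distance.
        print(f"x={x} counts, y={y} counts")
```

Keep in mind the counts only become real distance once you know the sensor's CPI, and that a single mouse cannot tell rotation from translation - if the robot turns on the spot, the integrated XY goes wrong, which is why some builds mount two mice and solve for heading as well.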
I have only seen one implementation that relied on the mouse itself (with just the optics changed) to sort out the optical flow. Usually, people interface the camera chip directly and write their own decoding, suited to the surface(s) they want to read - a sketch of that route follows below.
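For that direct route, here is a rough sketch of what the register polling looks like - assuming (again, my choice) an ADNS-3080 mouse-camera breakout on a Raspberry Pi's SPI bus with chip-select on a spare GPIO; the register addresses come from the ADNS-3080 datasheet, and the pin choice and no_cs handling may need adjusting for your setup:

```python
import time
import spidev
import RPi.GPIO as GPIO

CS_PIN = 25            # hypothetical wiring; any free GPIO will do
REG_PRODUCT_ID = 0x00  # reads back 0x17 on an ADNS-3080
REG_MOTION = 0x02      # bit 7 set = motion since the last read
REG_DELTA_X = 0x03     # signed 8-bit counts since the last read
REG_DELTA_Y = 0x04

GPIO.setmode(GPIO.BCM)
GPIO.setup(CS_PIN, GPIO.OUT, initial=GPIO.HIGH)

spi = spidev.SpiDev()
spi.open(0, 0)
spi.no_cs = True           # we toggle chip-select ourselves
spi.mode = 3               # the ADNS-3080 talks SPI mode 3
spi.max_speed_hz = 500000

def read_reg(addr):
    """Read one sensor register: send the address, wait, clock out the data."""
    GPIO.output(CS_PIN, GPIO.LOW)
    spi.xfer2([addr & 0x7F])   # MSB clear = read access
    time.sleep(50e-6)          # t_SRAD: the sensor needs time to fetch the value
    val = spi.xfer2([0x00])[0]
    GPIO.output(CS_PIN, GPIO.HIGH)
    return val

def to_signed(b):
    return b - 256 if b > 127 else b

x = y = 0
while True:
    if read_reg(REG_MOTION) & 0x80:  # reading Motion latches the deltas
        x += to_signed(read_reg(REG_DELTA_X))
        y += to_signed(read_reg(REG_DELTA_Y))
        print(f"x={x} y={y}")
    time.sleep(0.01)
```

A nice side effect of talking to the chip directly is that the ADNS-3080 can also dump its raw 30x30 pixel frame, which is a handy way of checking how well it actually sees a given surface.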
There is a huge difference in how well optical mice work, though. Some have a hard time with black surfaces, some with white, some don't like glass, some don't like skin (don't use a mouse while having body tequilas) etc., and that's an important factor as well. So don't just grab the cheapest mouse - test friends' mice (and your own) on different "difficult" surfaces, and test how much you can vary the height and the angle while still getting a good reading.