JPEG processing is optional on this camera. it can output raw bitmaps as well, meaning it would be relatively easy to do image processing on board a microcontroller. (storing the images may be a little more problematic....)
i'm not actually planning to use this camera at the moment as i have found other ways to solve my video input problems.
i was just pointing it out as an option for anyone else who is considering doing similar things.
the CMU cam was also an option that i looked at. it uses a parallel-interface camera not dissimilar to the Transchip one, if i remember correctly.
to explain why i decided the CMU cam was not for me, i should go into a little more detail about what i'm trying to do.
i want my bot to have a vision-based distance sensor. i'm essentially building a laser ranging module. (so i was simplifying things in my earlier post when i said video digitizer.)
i have one of those DIY thingies that draws a straight line on your wall by passing a laser through a piece of curved glass.
i point the laser line thingy straight ahead about an inch above the ground.
i point my camera the same direction but higher up.
take a picture with the laser switched on, take another with the laser switched off, and subtract one image from the other. you are left with an image of just the line.
you can then work out how far away things are from how high up the picture the laser line appears.
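the subtract-and-scan step is only a few lines of C++. here's a sketch (the frame layout, the `Frame` struct, and the threshold value are my own assumptions for illustration, not from any particular camera driver):

```cpp
#include <cstdint>
#include <vector>

// grayscale frame stored row-major: pixels[y * width + x]
struct Frame {
    int width, height;
    std::vector<uint8_t> pixels;
    uint8_t at(int x, int y) const { return pixels[y * width + x]; }
};

// subtract the laser-off frame from the laser-on frame, then for each column
// return the row where the brightest difference (the laser line) appears.
// a difference below `threshold` means no laser was seen in that column (-1).
std::vector<int> findLaserRows(const Frame& on, const Frame& off, int threshold) {
    std::vector<int> rows(on.width, -1);
    for (int x = 0; x < on.width; ++x) {
        int best = threshold;
        for (int y = 0; y < on.height; ++y) {
            int diff = int(on.at(x, y)) - int(off.at(x, y));
            if (diff > best) { best = diff; rows[x] = y; }
        }
    }
    return rows;
}
```

notice that the result is just one number per column, which is exactly the compact range data i want to send over the serial port instead of a whole image.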
i wanted to build a module with similar properties to the CMU cam but that did not transfer the whole image over the serial port, just the range information.
this would be far quicker, meaning the bot could do its ranging in real time, rather than having to stop and think about it.
here's a link that explains the maths but i'm implementing it in quite a different way: http://www.seattlerobotics.org/encoder/200110/vision.htm
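the core of the maths is simple triangulation: the laser spot's row in the image gives you an angle, and the known vertical gap between camera and laser turns that angle into a distance. a sketch of that conversion (all the parameter names and values here are assumptions you'd calibrate for your own setup, not figures from the linked article):

```cpp
#include <cmath>

// triangulated distance to the laser spot.
//   pixelRow        - row where the laser line appears (0 = top of image)
//   centerRow       - row of the optical centre of the image
//   radiansPerPixel - vertical angle covered by one pixel
//   angleOffset     - tilt of the camera relative to the laser plane
//   height          - vertical gap between camera and laser (same unit as result)
double laserDistance(int pixelRow, double centerRow, double radiansPerPixel,
                     double angleOffset, double height) {
    double theta = (pixelRow - centerRow) * radiansPerPixel + angleOffset;
    return height / std::tan(theta);
}
```

in practice you'd calibrate radiansPerPixel and angleOffset by measuring the laser row at a couple of known distances and solving for them.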
i have abandoned the camera-to-microcontroller approach to this problem and am now going for a USB camera plugged into a little embedded processor, meaning i can solve most of the problems in software rather than in hardware and microcontroller code. (ok, some of that's software too, but you get my point....)
while i'm confident the microcontroller route would work and i may go back to it one day, the way i'm doing it now is far quicker and easier.
on a modern processor, programming in C++, it's actually quite simple. (i'm using a Linksys NSLU2 http://www.nslu2-linux.org/ with a Logitech USB QuickCam.)
i still have to fine-tune some of the inaccuracies caused by the distorting effect of the camera lens, but i'm confident i can fix that with a little maths.