
Author Topic: Looking at robot sensors in terms of dimensions (WARNING: ABSTRACT POST!)


Offline WaterPig Master (Topic starter)

Hi there,

After reading a mathematical novel called 'Flatland', I began to think about robots and sensors in terms of the dimensions they deal with. For example, we as 3D beings perceive the world in only semi-3D: our perception is closer to 2D, since it is based on a planar sensor (the retina). Your average floor-trundling robot lives in what is effectively a 2D world, as it can only travel around, not up or down. And most commonly, robots use 1D sensors (a single point, such as a photoresistor).

Now, our semi-3D perception comes from having two physically separate grids of sensors spaced a set distance apart, which allows us to see in fake 3D. A 2D robot that uses (for example) two IR emitter/detector pairs is using the same kind of set-up: two 1D sensors, spaced a set distance apart.

In exactly the same way that our two 2D sensor arrays give the impression of one 3D sensor, the robot's two 1D sensors effectively give it one 2D sensor array.
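To make that concrete, here's a rough sketch (Python) of the geometry, assuming each of the two 1D sensors can be read as a straight-line distance and the pair is mounted a known baseline apart; the function name and numbers are just illustrative:

Code:
import math

def locate(d1, d2, baseline):
    # Estimate the (x, y) position of an object from two 1D distance
    # readings. Sensor A sits at (0, 0), sensor B at (baseline, 0);
    # intersecting the two range circles gives the object's position.
    x = (d1**2 - d2**2 + baseline**2) / (2 * baseline)
    y_squared = d1**2 - x**2
    if y_squared < 0:
        return None  # inconsistent readings: the circles don't intersect
    return x, math.sqrt(y_squared)  # take the solution in front of the sensors

# Example: object 10 cm from sensor A, 12 cm from B, sensors 8 cm apart
print(locate(10.0, 12.0, 8.0))

So the known baseline is exactly what promotes two 1D readings into a 2D position estimate, just as it promotes our two 2D retinas into depth perception.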

I'm convinced there is some way to use this way of thinking for some kind of super-amazing sensor system, but I can't figure out how. What do you think?

Sorry for the pretty directionless post; I wanted to get other people's thoughts on what I'd been thinking about!
Thanks,
Barnaby

Offline rbtying

Two 2D sensors (cameras) at a set distance let you generate "fake 3D" point clouds similar to what humans see; it's the basis of stereoscopic computer vision. The problem seems to be more the processing power needed to get it to work, though.
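Once the expensive part (matching the same feature in both images) is done, the depth recovery itself is one line of geometry. Here's a minimal sketch in Python of the pinhole stereo model, with the function name and numbers just illustrative:

Code:
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Pinhole stereo model: Z = f * B / d, where f is the focal length
    # in pixels, B the camera baseline in metres, and d the horizontal
    # disparity of a matched feature in pixels.
    if disparity_px <= 0:
        return float("inf")  # zero disparity = point at infinity
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 6 cm baseline, 35 px disparity -> 1.2 m
print(depth_from_disparity(700.0, 0.06, 35.0))

The per-pixel correspondence search is where all the processing power goes, not this formula.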

 

