
occupancy grid - cell size vs. sensor resolution



I have a logic question regarding occupancy grid cell size vs. sensor resolution.

I use IR and sonar sensors. Both have a theoretical resolution of 1 cm.

Let's say I use a cell size of 10x10 cm in my occupancy grid, and there is a 1x1 cm object in this cell.
I scan the object in this cell 10 times, and I scan 11 other, unoccupied points in the cell once each. --> The cell would end up marked unoccupied even though I found the object.

So I think:
- the cell size should not be larger than the smallest object
and / or
- the cell size should match the sensor resolution

Or, more likely, I'm missing something, since everyone else uses cell sizes bigger than 1 cm.

What am I missing?


Your theory has two problems:

1) How precisely do you know the position of the sensor itself? If the robot moves, you will likely accumulate more than 1 cm of pose error.

2) The occupancy-grid math maintains a probability that there's something in a particular cell, not a binary vote count. If the beam samples the cell stochastically, then a hit should update the cell with an evidence weight greater than a single nominal reading (!), scaled by the ratio of the cell area to the area the beam actually covers. Each successive stochastic scan of that cell then successively refines the estimate of the cell's occupancy.
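One common way to cash out that area-ratio idea is a log-odds occupancy update in which a miss is down-weighted by the beam/cell area ratio (equivalently, a hit carries more relative weight than a miss). This is a minimal sketch, not the poster's exact method; the probabilities `P_HIT` and `P_MISS` and the 1 cm / 10 cm geometry are illustrative assumptions.

```python
import math

# Illustrative geometry: 1x1 cm beam footprint inside a 10x10 cm cell.
CELL_AREA = 10.0 * 10.0   # cm^2
BEAM_AREA = 1.0 * 1.0     # cm^2 covered by one reading

# Nominal inverse-sensor-model probabilities for a single reading (assumed).
P_HIT = 0.7    # reading says "occupied"
P_MISS = 0.4   # reading says "free"

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def update_cell(log_odds, hit):
    """Update one cell's log-odds with an area-weighted reading.

    A miss only tells us about the ~1 cm^2 the beam actually covered,
    so its evidence is scaled down by the beam/cell area ratio.
    """
    if hit:
        return log_odds + logit(P_HIT)
    return log_odds + (BEAM_AREA / CELL_AREA) * logit(P_MISS)

# The scenario from the question: 10 hits on the 1x1 cm object,
# 11 misses on other spots in the same cell.
l = 0.0
for _ in range(10):
    l = update_cell(l, hit=True)
for _ in range(11):
    l = update_cell(l, hit=False)

p = 1.0 / (1.0 + math.exp(-l))  # back to probability
print(p > 0.5)  # the cell stays occupied despite more misses than hits
```

With this weighting, the 11 misses barely move the estimate, because each one only rules out about 1% of the cell.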

Note that you have to choose both the sensors and the filter characteristics to match the desired application. If you want 10 cm cells and your beam width is 1 cm, you either have to sweep each cell multiple times on every scan, or average over more scans, to get a good measurement for the cell. This is a beam-width vs. cell-size trade-off; the size of the object matters much less here.
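The averaging point can be simulated directly: treat each reading as a 1x1 cm footprint landing somewhere in a 10x10 cm cell and average the hit rate. A quick sketch (the object position and sample counts are made up for illustration):

```python
import random

random.seed(0)

CELL = 10                  # cell edge length, cm
OBJECT_SPOTS = {(3, 7)}    # one 1x1 cm object somewhere in the cell

def one_reading():
    """Simulate one 1 cm beam footprint landing at a random spot."""
    spot = (random.randrange(CELL), random.randrange(CELL))
    return spot in OBJECT_SPOTS  # True = hit

def occupancy_estimate(n_samples):
    """Average many beam readings into one occupied-fraction estimate."""
    hits = sum(one_reading() for _ in range(n_samples))
    return hits / n_samples

# With ~20 samples the estimate is very noisy; averaging thousands of
# readings converges toward the true occupied fraction (1/100 here).
print(occupancy_estimate(20), occupancy_estimate(10000))
```

The same beam sweeping the same cell many more times is what turns a noisy per-reading signal into a stable per-cell probability.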

Thanks for the explanation. It's clear now.

