Author Topic: Wavefront propagation source code  (Read 13733 times)


Offline JesseWelling

  • Expert Roboticist
  • Supreme Robot
  • *****
  • Posts: 707
  • Helpful? 0
  • Only You Can Build A Robot!
Re: Wavefront propagation source code
« Reply #30 on: September 12, 2007, 04:25:42 PM »
My idea is based on the certainty grid idea as well.

First consider each map square as the certainty that something is in that square. Everything starts at 0.
Then as your robot takes a scan, you increment all the squares where you measured an object (yes, you have to use some trig; a lookup table / piecewise linear interpolation should be used). But you also decrement every square that is between your robot and its measurement. And you should probably saturate at around 100.

Of course you aren't going to do an on-line raytrace of each scan line; instead you pre-compute each scan line and create an array of references to each square that it passes through, so that if you know the distance to the object, you know which squares up to the object you should decrement, and then you increment the final square.
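A minimal sketch of that update in C, assuming an 8-bit certainty grid saturating at 100 and a precomputed list of the cells each scan line passes through (the names and sizes here are made up for illustration, not from the post):

```c
#define GRID_W   32
#define GRID_H   32
#define MAX_CERT 100                 /* saturation value */

unsigned char grid[GRID_H][GRID_W];  /* certainty that a cell is occupied */

typedef struct { int x, y; } Cell;

/* ray: precomputed cells the scan line passes through, nearest first.
   hit_index: index of the cell where the rangefinder measured an object. */
void apply_scan(const Cell *ray, int hit_index)
{
    /* every cell the beam passed through is probably empty */
    for (int i = 0; i < hit_index; i++) {
        unsigned char *c = &grid[ray[i].y][ray[i].x];
        if (*c > 0)
            (*c)--;
    }
    /* the cell the beam stopped in is probably occupied */
    unsigned char *h = &grid[ray[hit_index].y][ray[hit_index].x];
    if (*h < MAX_CERT)
        (*h)++;
}
```

Because the rays are precomputed per scan angle, the online cost per reading is just one walk down an array of cell references, which is what makes this feasible without a full raytrace.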

Of course I'm working on a 400MHz ARM processor so this may not be entirely practical for most people...

This also has the advantage that it works well with noisy sensors, because it's the accumulated evidence that counts, not the stray reading... which will get wiped out eventually.

Then you can layer a fast VFH or ND algorithm with a window (size would depend on your sensor range) for local pathfinding, and over that layer a D* or wavefront for global pathfinding (the global pathfinder feeds waypoints to the local pathfinder, so the global pathfinder helps the local pathfinder avoid local-minima confusion). There are also some clever tricks I plan to play on the D* algorithm, because I don't want a brand new plan on every new piece of data, so I'll only force a re-planning if the lateral path error from the last waypoint to the next waypoint (caused by the local path planner avoiding objects) exceeds a certain threshold.

That's the rough idea I've been putting together, but it does have the weakness of not being able to find new openings where there were previously none, because the local planner will go straight to the new waypoint and never exceed the lateral path error threshold... So I think I need to keep track of a rough delta value, and if a square changes from saturation (100 or whatever) to near 0 in a short amount of time, I need to force a re-plan of the global path planner.

How's all that sound to you guys? :-[
« Last Edit: September 12, 2007, 04:32:21 PM by JesseWelling »

Offline HDL_CinC_Dragon

  • Supreme Robot
  • *****
  • Posts: 1,261
  • Helpful? 5
Re: Wavefront propagation source code
« Reply #31 on: September 12, 2007, 05:05:40 PM »
OO! Question! Admin! Pick me! PICK ME!

Can it scan more than just 1 grid spot away if it had a longer-distance rangefinder? So instead of scanning 1 grid location away, it can scan 2 grid locations?
United States Marine Corps
Infantry
Returns to society: 2014JAN11

Offline JesseWelling

  • Expert Roboticist
  • Supreme Robot
  • *****
  • Posts: 707
  • Helpful? 0
  • Only You Can Build A Robot!
Re: Wavefront propagation source code
« Reply #32 on: September 12, 2007, 05:12:19 PM »
HDL: Wouldn't that be a function of the dimensions of the grid vs the distance of your sensors?

Offline dunk

  • Expert Roboticist
  • Supreme Robot
  • *****
  • Posts: 1,086
  • Helpful? 21
Re: Wavefront propagation source code
« Reply #33 on: September 12, 2007, 06:03:05 PM »
Quote
dunk, your idea won't work well for really large maps . . . the forget delay would need to be a function of map size, but i can see cases where:

A) the delay time is guessed/tweaked by the programmer
B) control oscillations from new paths suddenly appearing because of forgetting obstacles
C) moving objects near the robot (highest probability of error) would be the freshest in memory
D) very large maps would need a very long forget delay - or the robot would forget the first half of the map while it explores the second half

i don't see why you have an issue with large maps as long as the forget delay is indeed a function of map size.
the degrading of occupancy probability should be very gradual compared to the input from sensors (positive or negative).
map points should only really be degraded when they can no longer be trusted. this will indeed require some tweaking by the programmer, but then what doesn't?
bear in mind that, like JesseWelling, i'll be running this on something with a lot more power than your average microcontroller, so i'm not really exploring the time or memory constraints which large maps involve in this discussion. (which i probably should be, as that's where the conversation started....)
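For what it's worth, that gradual degrading might look something like this in C (the rates are placeholder numbers; the only point is that decay is much slower than sensor updates, with the decay tick interval scaled up for bigger maps):

```c
#define SENSOR_STEP 4  /* certainty change per scan hit or miss */
#define DECAY_STEP  1  /* certainty lost per decay tick, called far less often */

/* drift every cell back toward "unknown" (0) by one small step */
void decay_map(int *cells, int n)
{
    for (int i = 0; i < n; i++) {
        if (cells[i] > DECAY_STEP)
            cells[i] -= DECAY_STEP;
        else
            cells[i] = 0;   /* never decay below "unknown" */
    }
}
```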

for point C), that is precisely why sensor readings are cumulative.
presumably an object moving through the field of view will create more "empty" readings than "occupied" ones so the average will be "empty".

point B) is valid but there have to be some challenges left to solve....


dunk.

Offline paulstreats

  • Supreme Robot
  • *****
  • Posts: 1,381
  • Helpful? 21
Re: Wavefront propagation source code
« Reply #34 on: September 12, 2007, 06:10:19 PM »
Question 1:
Can you use some external flash or EEPROM memory on these MCUs? Maybe buy the now-cheap 1GB thumb drives, rig it to work with the MCU, and bingo! Near limitless (relatively :P) memory for your bot... would something like that work?


You can use flash etc. with an MCU with some time and effort, but you must beware of their limitations....
I had a call 6 months ago from an accountant friend of mine saying that he had a USB flash stick that was 128MB (small, but still..) but he had only used 7MB of space on the device and it wouldn't let him save any more. After about an hour of playing about, I realised the problem was that although his files were small, the device could only hold 256 files regardless of their size, i.e. it has a smallish allocation table that wouldn't really be suitable for storing arrays. Most memory devices operate this way, by allowing you to program a certain amount of information into predetermined spots, e.g. 256 data areas of 1MB each = 256MB - but you can't have 512 data areas of half a MB. Some algorithms need to be created to store more than one array segment in any one area of external memory.

I'm currently working on this because I think it is an important part of using and storing sensor data.

Offline paulstreats

  • Supreme Robot
  • *****
  • Posts: 1,381
  • Helpful? 21
Re: Wavefront propagation source code
« Reply #35 on: September 12, 2007, 08:36:12 PM »
So I know that the propagation type software can only work if your robot knows how far it's actually moved in a particular direction...

I did have some systems set up to evaluate and move a robot built for my virtual robot software, but after reading this thread and the tutorial (and therefore thinking about the subject more), there is a difference between my virtual robot moving forwards 5 pixels and a real robot moving forwards any distance: you can see motors/servos move, but unless you have encoders etc. you don't get linear motion, if any at all. To overcome this, I've just bought some accelerometers, which will probably need an MCU of their own to coordinate properly....

My idea is similar to dunk's, with a degrading occupancy, but I'm also playing with a moveable grid system. You break an area down into the grid, and if the robot moves 2 pixels left then the grid does also. So the map is not set into statically defined boundaries, which means that your robot isn't either (obviously waypoints have to be reconfigured as well).... It seems to work within my predefined space with my virtual robot, but you would need an unbounded coordinate system to work in the real world, so I'm looking back at some of my theory programming projects like 'Langton's ant' to see how it defines space within a processor. I've got a few ideas, but none that I want to stick with at the moment.

It's certainly a deeper subject than I first thought :o

Offline h3ro

  • Full Member
  • ***
  • Posts: 110
  • Helpful? 0
Re: Wavefront propagation source code
« Reply #36 on: September 13, 2007, 02:29:53 AM »
Quote
You can use flash etc. with an MCU with some time and effort, but you must beware of their limitations....
I had a call 6 months ago from an accountant friend of mine saying that he had a USB flash stick that was 128MB (small, but still..) but he had only used 7MB of space on the device and it wouldn't let him save any more. After about an hour of playing about, I realised the problem was that although his files were small, the device could only hold 256 files regardless of their size, i.e. it has a smallish allocation table that wouldn't really be suitable for storing arrays. Most memory devices operate this way, by allowing you to program a certain amount of information into predetermined spots, e.g. 256 data areas of 1MB each = 256MB - but you can't have 512 data areas of half a MB. Some algorithms need to be created to store more than one array segment in any one area of external memory.

Would it be better to use a small HD from a laptop? They are not that much bigger, but would they work with a microcontroller?

Offline JesseWelling

  • Expert Roboticist
  • Supreme Robot
  • *****
  • Posts: 707
  • Helpful? 0
  • Only You Can Build A Robot!
Re: Wavefront propagation source code
« Reply #37 on: September 13, 2007, 06:06:05 AM »
h3ro: They use more power than you'd like for a mobile robot.

CF cards are way better because they use an IDE interface, so they are pretty much a standard HD, but they use way less power, and there is no 'seek' time because there is no spinning disk. And there is no limitation on how you can use the space, because it is a HD and not a USB mass storage interface. But then if you are using a micro, you have to write (or port) your own file system if you have more than one map  :P

I think we are rambling off topic a bit though...

Offline AdminTopic starter

  • Administrator
  • Supreme Robot
  • *****
  • Posts: 11,652
  • Helpful? 169
    • Society of Robots
Re: Wavefront propagation source code
« Reply #38 on: September 13, 2007, 08:31:45 AM »
jesse . . . yeah, your algorithm is crazy processor/memory intensive, but it's interesting how you are solving it. the latest in SLAM technology is comparing old maps to new maps and then match-fitting through probability to prevent drift. it's all still complicated for me, but i'm learning!

Quote
Can it scan more than just 1 grid spot away if it had a longer distance rangefinder? So instead of scanning 1 grid locale away, it can scan 2 grid locations?
i didn't implement this out of laziness, but yes it's more than possible and it would be more optimal too. the problem is that you can't scan behind other objects, so it's not very useful in very cluttered environments.

Quote
for point C), that is precisely why sensor readings are cumulative.
presumably an object moving through the field of view will create more "empty" readings than "occupied" ones so the average will be "empty".
for a map that drifts with encoder errors, cumulative sensor readings aren't very useful . . . if i can prevent the drift, then this will work for stationary vs moving objects very well.

Quote
but I'm also playing with a moveable grid system. You break an area down into the grid, and if the robot moves 2 pixels left then the grid does also. So the map is not set into statically defined boundaries, which means that your robot isn't either (obviously waypoints have to be reconfigured as well)....
very interesting idea! i think this can be added to my wavefront with very little effort, too! hope you don't mind me 'borrowing' your idea :P
hmmmm, thinking a bit more . . . when you shift the grid, new empty squares come in, possibly giving a false clear path. i will probably still need to use a large map to negate that possibility. and the other problem: waypoints. perhaps just count the travel distance, and then use other sensors to find the goal visually?


as for the other ideas about using a thumbdrive and harddrive . . . interfacing them requires a PC, negating the need for a microcontroller being the primary processor . . .

Offline MadMax

  • Full Member
  • ***
  • Posts: 58
  • Helpful? 0
Re: Wavefront propagation source code
« Reply #39 on: September 13, 2007, 04:06:56 PM »
use some kind of threshold variable...

for example (I've got no idea how to implement the '2 pixels' thing)

Code: [Select]

int pixelsAwayFromGrid = 0;  // how far the robot has drifted off the grid
int threshold = 2;           // pixels of drift tolerated before shifting
int WavefrontArray[5][5];

bool CheckOffset() {
  if (pixelsAwayFromGrid > threshold) {
    //let's assume the grid is 5x5
    //and because I'm lazy, I only have the algorithm to move the map
    //to the right (a compass sensor could fix it)
    for (int x = 3; x >= 0; x--) {    //shift each column one step right
      for (int y = 0; y < 5; y++) {
        WavefrontArray[x + 1][y] = WavefrontArray[x][y];
      }
    }
    for (int y = 0; y < 5; y++) {
      WavefrontArray[0][y] = 0;       //the column entering the grid is unknown
    }
    pixelsAwayFromGrid = 0;           //reset before returning, not after
    return true;
  } else {
    return false;
  }
}

void main() {
  //something happens so you need to call the function
  if (CheckOffset() == true) {
    //do some scanning
  } else {
    // or not
  }
}

maybe you could check its position in between... For example, if you knew that on the original left border of the map there should be a wall, you could check whether it's the same image.

Offline paulstreats

  • Supreme Robot
  • *****
  • Posts: 1,381
  • Helpful? 21
Re: Wavefront propagation source code
« Reply #40 on: September 13, 2007, 04:23:08 PM »
Quote
as for the other ideas about using a thumbdrive and harddrive . . . interfacing them requires a PC, negating the need for a microcontroller being the primary processor . . .
 
 

I've just ordered some I2C-compatible EEPROM to help with data storage. I couldn't find anywhere to buy any normal RAM chips, but I know they exist because I've used them once before.

Offline JesseWelling

  • Expert Roboticist
  • Supreme Robot
  • *****
  • Posts: 707
  • Helpful? 0
  • Only You Can Build A Robot!
Re: Wavefront propagation source code
« Reply #41 on: September 13, 2007, 04:59:26 PM »
admin: I'm not really doing anything that hasn't been done before; it's just how I'm putting the big blocks together that's different. So the map aging and how VFH and D* work together is the new part.

You should check out VFH, because it has the whole sliding window over the environment too, which I might implement with finer granularity (4cm x 4cm) for VFH and then bigger squares (1m x 1m) for the global path planner to reduce memory. I've also found a paper on dynamic resolution for Field D*, which would be good for sparse environments, but I'm still reading through Field D* and wrapping my mind around it.

About the SLAM statistical matching: I'm not as interested in mapping as I am in obstacle avoidance and pathfinding, so I'm going with a simple (or simpler) solution for mapping. But for localization, because of where I work, I'm willing to work more on signal processing, so I plan on working out an encoder/IMU/GPS system to get better accuracy.
« Last Edit: September 13, 2007, 05:01:58 PM by JesseWelling »

Offline paulstreats

  • Supreme Robot
  • *****
  • Posts: 1,381
  • Helpful? 21
Re: Wavefront propagation source code
« Reply #42 on: September 13, 2007, 06:13:27 PM »

hmmmm thinking a bit more . . . when you shift the grid, new empty squares come in possibly giving a false clear path. will probably still need to use a large map to negate that possibility. and the other problem, way points. perhaps just count the travel distance, and then use other sensors to find the goal visually?


2 maps are definitely needed - 1 infinite-scale map and 1 small-scale (current area) map, which need to update each other accordingly.
Since the large-scale map will likely start at 0x - 0y, it is possible to travel below these coordinates, so there needs to be some way of storing this data outside of a predefined array. This is not a necessary step, but something that would make the program more universal.
Using int variables rather than bits will allow for more than just marking an obstacle in the map: you can also record a different value for a light spot etc. (photovore), and it gives the ability to set waypoints on the large-scale map.
Part of my interest in this is to build it in as a character trait of my robot - to explore uncharted territory when bored.
I hope to be able to send my map data to an LCD screen so I can actually see the world that the robot sees; if not, it can be retrieved with HyperTerminal and fed into a custom Java app.

I hope to have the wavefront and mapping systems on my robot in about 4 weeks (with the bored-explore behaviour).
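One way to get that "more than an obstacle bit per cell" is to pack a certainty value and feature flags into each int; the bit layout below is invented for illustration, not from the post:

```c
#include <stdint.h>

/* low byte: obstacle certainty 0..100; upper bits: feature flags */
#define CELL_CERT_MASK 0x00FFu
#define CELL_LIGHT     0x0100u  /* bright spot seen here (photovore) */
#define CELL_WAYPOINT  0x0200u  /* waypoint set on the large-scale map */

/* overwrite the certainty byte, leaving the flags untouched */
uint16_t cell_set_cert(uint16_t cell, unsigned cert)
{
    return (uint16_t)((cell & ~CELL_CERT_MASK) | (cert & CELL_CERT_MASK));
}

unsigned cell_cert(uint16_t cell)
{
    return cell & CELL_CERT_MASK;
}
```

This keeps the map at one small int per cell while still carrying the extra attributes, which matters when the whole grid has to fit in MCU RAM or EEPROM.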

Offline Ro-Bot-X

  • Contest Winner
  • Supreme Robot
  • ****
  • Posts: 1,431
  • Helpful? 25
  • Store: RoBotXDesigns.ca
Re: Wavefront propagation source code
« Reply #43 on: September 14, 2007, 10:25:57 AM »
There was a game called Diablo. It was an isometric view from behind the character, and the setting was an underground dungeon. The game had a map that revealed itself as the character explored the area. After getting far from some visited parts of the map, something like a fog would cover them. The walls remained, but other creatures would become invisible. Only the line of sight was completely visible.

I think something like this should be in our robots too. There is the line of sight (camera plus sonars and infrared), so that part of the map is actively updated. This should be the small map, with a finer grid. As the robot travels, the big map will become larger with the newly explored part. On the old visited parts the clutter can fade away, but the walls should stay in place.
Check out the uBotino robot controller!

Offline kshv01

  • Beginner
  • *
  • Posts: 2
  • Helpful? 0
Re: Wavefront propagation source code
« Reply #44 on: August 13, 2011, 02:40:39 AM »
Hello all,

I am currently working on a project for one of my uni's course units. It involves making a robot that needs to move from its base in a 2m x 2m arena, and then when let loose it has to go and find small polystyrene balls which have been scattered around the arena.

Can I use the wavefront algorithm to actually do this? I have already got my IR rangefinders etc. Now it's the coding part. How would I go about it if I am to use the wavefront algo? Because instead of ignoring a detected obstacle, I have to actually go and get it; the aim is to collect the polystyrene balls and then come back to the base the robot was launched from.

i would appreciate if someone could help!

Thank you !!

Please see an image of the arena attached. I must point out here that the balls are going to be scattered anywhere over the place, and not placed in the center as shown in the picture.

Offline AdminTopic starter

  • Administrator
  • Supreme Robot
  • *****
  • Posts: 11,652
  • Helpful? 169
    • Society of Robots
Re: Wavefront propagation source code
« Reply #45 on: August 15, 2011, 12:06:22 PM »
Nah, the wavefront is best for cluttered areas. What you'd want is called the bug algorithm, which is much simpler and easier to learn.

Offline kshv01

  • Beginner
  • *
  • Posts: 2
  • Helpful? 0
Re: Wavefront propagation source code
« Reply #46 on: August 17, 2011, 09:00:48 AM »
thanks admin!

btw, have you got any good sources or references which I could have a look at? Google doesn't give a lot of help..

Thanks

Offline AdminTopic starter

  • Administrator
  • Supreme Robot
  • *****
  • Posts: 11,652
  • Helpful? 169
    • Society of Robots
Re: Wavefront propagation source code
« Reply #47 on: August 20, 2011, 12:00:51 PM »
The bug algorithm is nothing more than photovore + obstacle avoidance.

Sensor(s) move your robot toward a specific object, while another sensor says 'something is in the way, go around it first'.

pseudocode:
look for object to get
if object on left, turn left
if object on right, turn right
if object straight ahead, go straight ahead
if wall in the way, turn left
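That pseudocode, fleshed out as a single C decision step (the two sensor inputs are abstracted into made-up arguments):

```c
typedef enum { TURN_LEFT, TURN_RIGHT, GO_STRAIGHT } Action;

/* wall_ahead: 1 if an obstacle sensor says something is in the way.
   target_dir: -1 object seen to the left, +1 to the right, 0 dead ahead. */
Action bug_step(int wall_ahead, int target_dir)
{
    if (wall_ahead)          /* go around the obstacle first */
        return TURN_LEFT;
    if (target_dir < 0)      /* object on left, turn left */
        return TURN_LEFT;
    if (target_dir > 0)      /* object on right, turn right */
        return TURN_RIGHT;
    return GO_STRAIGHT;      /* object straight ahead */
}
```

Called in the robot's main loop, this gives the photovore-plus-obstacle-avoidance behaviour with no map or memory at all, which is why it suits a small open arena.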

 

