Society of Robots - Robot Forum


Title: Architecture for Autonomous Robot with 3d vision
Post by: markmac on January 27, 2014, 04:05:05 PM
As the subject line says, I'm interested in understanding the software architecture to use for an autonomous robot, ideally one built on a hexapod platform. I've got no real robotics experience, but I do have considerable programming experience, including applications integration, embedded systems, and motor control. I watched the recent DARPA competition (anybody want to watch some paint dry?) and got inspired to try something myself, but I'm hoping for some pointers on how to connect the dots. Ideally the hardware would be based on the Raspberry Pi or something similar.
Here's my initial concept of how the robot might work (with some hand waving):
 A 3D vision system (Kinect Fusion?) and/or a sonar rangefinder would provide point-cloud data, which is in turn processed (using PCL from pointclouds.org?) to create a 3D model that represents the robot's concept of its world. It seems like a VR game engine would be ideal if there were some way of getting the point-cloud data into the game in real time. Kinematics for the robot would then be tied to the game avatar. Ideally such a game-based system might also easily offer a virtual "God" view for monitoring the robot's activities.
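For concreteness, here is roughly what I mean by the first step, back-projecting a depth image into a point cloud with the pinhole camera model (just a minimal sketch; the intrinsics fx, fy, cx, cy below are placeholder values, the real ones would come from calibrating the sensor):

    import numpy as np

    # Assumed pinhole intrinsics for a 640x480 depth image; real values
    # come from calibration of the actual sensor.
    fx = fy = 525.0          # focal length in pixels (a typical Kinect figure)
    cx, cy = 319.5, 239.5    # optical center

    def depth_to_cloud(depth):
        """depth: (480, 640) array of range values in meters."""
        v, u = np.indices(depth.shape)   # pixel row/column grids
        z = depth
        x = (u - cx) * z / fx            # back-project each pixel
        y = (v - cy) * z / fy
        pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]        # drop invalid (zero-depth) pixels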
I've seen some CAD-type robot kinematic simulation packages out there, but my first impression is that they aren't well suited to embedded autonomous applications. Additionally, a game engine should be much more optimized, so the robot wouldn't have to sit and run a lengthy simulation before each move (as the DARPA competition robots appeared to do).
So to summarize:
  1. What Linux-compatible software do I need to make sense of the Kinect data?
  2. Is a good open-source hexapod kinematics library available?
  3. Is there any real-time simulation software (i.e., a game engine) available for this application?

Please pardon any stupid assumptions I've made. I know this is an ambitious project to even contemplate.

Title: Re: Architecture for Autonomous Robot with 3d vision
Post by: jwatte on January 27, 2014, 08:13:34 PM
If you want something that is open source, has a lot of libraries, and can be programmed in C/C++ and Python, you probably want the Robot Operating System (ROS), which has been known to work on the Raspberry Pi as well as other systems.
If you're going to do very heavy computer vision, the RPi probably doesn't have enough power. Actually, no single computer with fewer than four GPUs would be sufficient for some of the current state-of-the-art computer vision algorithms :-)
Title: Re: Architecture for Autonomous Robot with 3d vision
Post by: CJAlbertson on February 04, 2014, 01:51:31 AM

Quote from: markmac on January 27, 2014, 04:05:05 PM
So to summarize:
  1. What Linux-compatible software do I need to make sense of the Kinect data?
  2. Is a good open-source hexapod kinematics library available?
  3. Is there any real-time simulation software (i.e., a game engine) available for this application?

Hexapods really don't have much payload capacity. But yes, there is hexapod source code out there, though you may be disappointed: the IK is simple, since each leg's joints are coplanar and you can solve it with basic geometry.
http://kkulhanek.blogspot.com/2013/01/inverse-kinematics-for-3-dof-hexapod.html
With only 3 DOF per leg there is a unique IK solution, so you don't have to choose among multiple poses.
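As a sketch of that geometry (the link lengths here are made-up placeholders, and the angle sign conventions depend on your frames and servo mounting):

    import math

    # Assumed link lengths in mm; substitute your leg's dimensions.
    COXA, FEMUR, TIBIA = 30.0, 80.0, 120.0

    def leg_ik(x, y, z):
        """Return (coxa, hip, knee) angles in radians for a foot target
        (x, y, z) in the leg's frame: x forward, y outward, z up
        (negative when the foot is below the hip)."""
        coxa = math.atan2(x, y)              # hip yaw aims the leg at the target
        r = math.hypot(x, y) - COXA          # horizontal reach past the coxa link
        d = math.hypot(r, z)                 # femur joint to foot, straight line
        if d > FEMUR + TIBIA:
            raise ValueError("target out of reach")
        # Law of cosines on the planar femur/tibia two-link chain:
        knee = math.acos((FEMUR**2 + TIBIA**2 - d**2) / (2 * FEMUR * TIBIA))
        hip = math.atan2(z, r) + math.acos((FEMUR**2 + d**2 - TIBIA**2) / (2 * FEMUR * d))
        return coxa, hip, knee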
Google will find lots of source code for the Phoenix hexapod. The basic Arduino-based hexapod software is online but very unsophisticated; it is really only set up for teleoperation (remote control), not for autonomous operation.

The other problem is knowing exactly where the robot is located. "Leg odometry" is very inexact; legs slip a LOT. You get the best location data from wheels with encoders on the motors. Your 3D sensor data is not as useful as you'd like if you don't know the location of the sensor. Of course you can use the sensor data to help refine your location, if you already have a map. It's a chicken-and-egg problem called SLAM (simultaneous localization and mapping); you can Google SLAM algorithms.

You will need multiple ways to know your location: wheel encoders, or, if you must use a hexapod, counting steps and heading and dead-reckoning from those. Accelerometers, gyros, and magnetic compasses also help a LOT. Feed all of this into a Kalman filter (Google will find loads of info on Kalman filters) and the filter will give you a good location estimate.
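A minimal one-dimensional sketch of that idea, fusing an odometry prediction with an absolute position fix (the noise values Q and R are placeholders you would tune for real sensors):

    # State: position estimate x with variance p.
    Q = 0.05   # process noise: how much we distrust odometry per step
    R = 2.0    # measurement noise: how much we distrust a position fix

    def predict(x, p, odom_delta):
        # Dead-reckoning step: move the estimate, grow the uncertainty.
        return x + odom_delta, p + Q

    def update(x, p, z):
        # Blend in a measurement z; gain k is how much we trust it.
        k = p / (p + R)
        return x + k * (z - x), (1 - k) * p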

The above is how self-driving cars work, apart from the motion planning.

Game software is a good idea for visualization. You can use a game engine to make a nice display, but ROS already has the Gazebo simulator integrated into it. At the level of complexity you are looking at, I think you want a ROS-based solution rather than an Arduino or bare microcontroller.
http://www.ros.org/core-components/

I would start with a small wheeled robot. Get it to the point where it "knows" its location from simple sensors like wheel encoders, an IMU, and GPS. This will take weeks of work and some study; you may have to re-learn linear algebra. Then you can add sensors like a camera, sonar, or an IR rangefinder, or, if you have a spare $3K, a laser scanner. Later, try to make all of this fit into the payload capacity of a hexapod (or quadcopter, or whatever). The key is to fuse multiple sensors.

But BEFORE you do anything else, watch this online class (and do the exercises). It covers the basics of how the Google self-driving car works, and you need to do the exact same thing:
https://www.udacity.com/course/cs373
Title: Re: Architecture for Autonomous Robot with 3d vision
Post by: CJAlbertson on February 04, 2014, 02:12:01 AM
Quote from: jwatte on January 27, 2014, 08:13:34 PM
Actually, no single computer with fewer than four GPUs would be sufficient for some of the current state-of-the-art computer vision algorithms :-)

Yes, that is what I was getting at when I said to start with wheels. A stack of quad-core i7 notebook PCs is not going to fit on a hexapod. More realistically, you would use WiFi to send the computation to a server and use an ARM-based controller on the robot. ROS allows the robot software to be distributed over a network of computers, since it is all based on message passing.
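A minimal sketch of that split using rospy: a lightweight node on the robot's ARM board publishes sensor data, and heavier nodes on a server subscribe over the network (pointing ROS_MASTER_URI at one master is what distributes it; the topic name here is a made-up placeholder):

    #!/usr/bin/env python
    import rospy
    from std_msgs.msg import String

    # Runs on the robot; a server-side node subscribes to the same topic.
    rospy.init_node('onboard_sensor')
    pub = rospy.Publisher('scan_points', String, queue_size=10)
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='...sensor payload...'))
        rate.sleep()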

I have about the same goals as you do. I'm finding I don't really need a real robot; a sensor mounted on a camera tripod works better for development, and I just move it by hand. I also built a hexapod leg, just one leg mounted to a large test jig. I test its weight-carrying ability by having it push down on a postage scale. So far the weak link is the servo bearings: they have a VERY short life because of the high eccentric loads. I'm glad I tested one leg first. I find you really do need an all-ball-bearing design, but mine is good enough to test my IK software.

I have a $20 wheeled platform now, and an IMU chip ($3 from China). None of this stuff is integrated yet.

Bottom line: the hard part is knowing where you are. Try having your robot drive a 2-meter square and see if it can come back to the origin. Not an easy task.
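The square test boils down to dead reckoning like this differential-drive pose update from encoder deltas (a minimal sketch; wheel slip and encoder noise are exactly what make the robot miss the origin):

    import math

    def update_pose(x, y, theta, d_left, d_right, wheel_base):
        """Advance the pose by one pair of left/right encoder distances."""
        d = (d_left + d_right) / 2.0              # distance moved by the center
        dtheta = (d_right - d_left) / wheel_base  # heading change
        mid = theta + dtheta / 2.0                # midpoint-heading approximation
        return (x + d * math.cos(mid),
                y + d * math.sin(mid),
                theta + dtheta)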
Title: Re: Architecture for Autonomous Robot with 3d vision
Post by: Kohanbash on February 04, 2014, 10:16:57 AM
Here are some of the common modules from http://kohanbash.com/software-modules-form-robot/ :
[Image: "Software Modules that Form a Robot" (http://kohanbash.com/wordpress/wp-content/uploads/2014/01/kohanbash.com-Software-Modules-that-Form-a-Robot.jpg)]

For the Kinect there are a bunch of Linux drivers available. A good first place to look is http://openkinect.org/wiki/Main_Page ; there are also ROS drivers for the Kinect at http://wiki.ros.org/kinect
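For example, a quick smoke test with the libfreenect Python wrapper from the OpenKinect project linked above (sync_get_depth() grabs a single 640x480 raw depth frame):

    import freenect
    import numpy as np

    # Grab one raw 11-bit depth frame plus its timestamp.
    depth, timestamp = freenect.sync_get_depth()
    print(depth.shape, depth.dtype, np.median(depth))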

Title: Re: Architecture for Autonomous Robot with 3d vision
Post by: CJAlbertson on February 04, 2014, 12:14:13 PM
Thanks for the lead. I just looked up the Kinect. I had planned on using an "IP camera," the kind used for security systems over WiFi. I may still do that, because they are small and don't use much power. But the Kinect and its open library can return a depth map, and the unit costs only $100. I had not known the API was that easy to use.
Title: Re: Architecture for Autonomous Robot with 3d vision
Post by: Kohanbash on February 04, 2014, 01:18:52 PM
Just a note that you need to be careful with IP cameras and WiFi.

Many generic (webcam-quality) ones can be used over WiFi; however, proper gigabit Ethernet cameras require jumbo packets, which are not WiFi-friendly, so they will require an Ethernet cable.