Author Topic: Complexity of robot vision  (Read 5469 times)


Offline SeekingVision (Topic starter)

  • Jr. Member
  • **
  • Posts: 14
  • Helpful? 0
Complexity of robot vision
« on: January 05, 2015, 07:58:48 AM »
I've been working on a simple (ha ha) vision program so that my robot can navigate my house and recognize where it currently is. Even just creating a useful program to find the edges of items in an image is much more difficult than I thought it would be. I am now starting to incorporate blob detection into the edge identifier to complete lines that have breaks in them. Does anyone have advice on how to create a reliable edge detection algorithm?
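For reference, here is a minimal sketch of the kind of edge detector being discussed, in pure Python with no libraries. It uses the standard Sobel kernels; the threshold value is only an assumed starting point to tune, and the function name is my own.

```python
def sobel_edges(img, threshold=128):
    """Return a 2-D list of 0/1 edge flags for a grayscale image.

    img is a 2-D list of brightness values in 0..255. A pixel is marked
    an edge when its Sobel gradient magnitude exceeds the threshold.
    """
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges
```

A real implementation would smooth the image first (noise produces spurious edges), which is part of why this is harder than it looks.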
Robots are our children and will inherit the Earth.

Offline Ibaeni

  • Jr. Member
  • **
  • Posts: 28
  • Helpful? 4
Re: Complexity of robot vision
« Reply #1 on: January 19, 2015, 04:31:46 PM »
I've had some experience with this through one of my school friends. He has used the Xbox Kinect for obstacle detection, and also cameras with the Raspberry Pi. We're working on a quadcopter that will eventually implement these. Although I don't have much knowledge, I have some sources that may be useful to you.

These are two links to a project that uses the Kinect to build 3D maps

The next two may also be useful

Offline mklrobo

  • Supreme Robot
  • *****
  • Posts: 558
  • Helpful? 15
  • From Dream to Design at the speed of Imagination!
Re: Complexity of robot vision
« Reply #2 on: February 19, 2015, 04:07:40 PM »
 :) Hello!
I viewed one of the videos in the robot movies section, and one robot recognized
signs. Maybe you could use signs throughout the house to help the robot know
where it is. Good luck!   ;D ;D ;D

Offline Spud

  • Beginner
  • *
  • Posts: 4
  • Helpful? 0
Re: Complexity of robot vision
« Reply #3 on: June 15, 2015, 03:34:25 PM »
I've been working on image recognition software; the attached text file is my code for edge detection. The variable that holds the image is called SIFInt04, an array defined as uint SIFInt04[321,241].
This is extracted directly from my AI code, so it won't work on its own; it should be used as a reference only.
I use this function for my robot to auto-navigate, and it works fairly well at detecting the edges of doors and skirting boards.
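Since the attachment isn't reproduced here, this is only a rough sketch of the kind of function described: a simple edge pass over a 321x241 uint-style array. The array name SIFInt04 and its shape come from the post; the neighbor-difference test and the threshold of 30 are my own assumptions.

```python
W, H = 321, 241  # image dimensions from the post

def detect_edges(SIFInt04, threshold=30):
    """Mark a pixel as an edge when it differs sharply from its right
    or lower neighbor; returns a set of (x, y) edge coordinates."""
    edges = set()
    for x in range(W - 1):
        for y in range(H - 1):
            if (abs(SIFInt04[x][y] - SIFInt04[x + 1][y]) > threshold or
                    abs(SIFInt04[x][y] - SIFInt04[x][y + 1]) > threshold):
                edges.add((x, y))
    return edges
```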

Offline artbyrobot1

  • Jr. Member
  • **
  • Posts: 42
  • Helpful? 2
Re: Complexity of robot vision
« Reply #4 on: July 12, 2015, 08:48:01 AM »
In my experience with writing image recognition algorithms, I had the robot turn each pixel of the image into an RGB number; if that number matched the color it was looking for, it called that pixel a 1, and if not, a 0. In this way it turned images into a series of 1's and 0's, and then it compared that against a list of objects on file, also stored as 1's and 0's, to see if the patterns matched. If they did, it knew what object it was looking at. You can also set it to call it a match if it was within 95% of the desired pattern of 1's and 0's. This was a simple solution, but it enabled me to teach the robot to read, identify objects on screen, etc.
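A hedged sketch of the approach described above: binarize by whether each pixel matches a target color, then accept a template match at 95% or better agreement. The per-channel tolerance and the helper names are my own inventions, not from the original code.

```python
def binarize(pixels, target, tol=30):
    """Map each (r, g, b) pixel to 1 if every channel is within tol of
    the target color, else 0."""
    return [1 if all(abs(c - t) <= tol for c, t in zip(p, target)) else 0
            for p in pixels]

def matches(pattern, template, min_agreement=0.95):
    """True when at least min_agreement of the bits agree."""
    same = sum(1 for a, b in zip(pattern, template) if a == b)
    return same / len(template) >= min_agreement
```

The weakness of this scheme is that it is sensitive to lighting and position: shift the object a few pixels and the bit patterns stop lining up, which is why the 95% slack helps.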

Offline cyberjeff

  • Full Member
  • ***
  • Posts: 114
  • Helpful? 7
Re: Complexity of robot vision
« Reply #5 on: July 20, 2015, 07:44:29 PM »
IMHO this is being approached in the wrong way.

I've toyed with OpenCV on the Pi and Beaglebone and it is frustrating. What is lacking is depth and color perception.

There is very little available on stereo vision, or other ways of finding depth.  I find this a little strange, as object and focus detection are fairly mature in camera technology.

Consider a lidar scanner. What you would wind up with is a two dimensional slice that looks like a map. I think that will require less processing, but I don't think it is an easy task either. Add in compass heading and you'll cut the task down.
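A sketch of the lidar idea above: each (angle, distance) reading from a 2-D scan, combined with the robot's compass heading, becomes a point in a world-aligned map slice. The reading format and the angle convention (radians, counterclockwise) are assumptions.

```python
import math

def scan_to_points(readings, heading, position=(0.0, 0.0)):
    """Convert (angle, distance) lidar readings into world (x, y) points.

    heading is the robot's compass heading in radians; adding it rotates
    each beam from the robot's frame into the world frame, so successive
    scans line up regardless of which way the robot is facing.
    """
    px, py = position
    points = []
    for angle, dist in readings:
        world_angle = angle + heading
        points.append((px + dist * math.cos(world_angle),
                       py + dist * math.sin(world_angle)))
    return points
```

This is the "cut the task down" part: with the compass folded in, map matching becomes a 2-D translation search instead of a rotation-and-translation search.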

Offline GrimBot

  • Jr. Member
  • **
  • Posts: 22
  • Helpful? 4
Re: Complexity of robot vision
« Reply #6 on: December 23, 2015, 05:01:42 PM »
I saw that stereo vision came up in the conversation here. I've only looked into it a little while trying to figure out depth perception, but I was curious whether you folks thought it would be possible to get the same depth data with just one camera: take an image from one position, then tilt or slightly rotate to take another picture, and compare the two to try to figure out depth.

I don't think it would be as fast as ready-made stereo-vision hardware, and it might be heavier on the processor, but maybe you don't need it to be fast for your application and don't care if the hardware needs to change its viewing angle to get a second opinion on the camera image.
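One note on the single-camera idea: a pure rotation in place produces almost no parallax, so the camera needs to *translate* sideways by a known baseline between the two shots. Depth then follows from the same relation as two-camera stereo, depth = focal_length × baseline / disparity. A minimal sketch, assuming the pixel disparity has already been found by matching a feature between the two images (the numbers in the test are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth in metres of a point seen in both images.

    focal_px: camera focal length in pixels.
    baseline_m: sideways camera translation between shots, in metres.
    disparity_px: how far the point shifted between the two images.
    """
    if disparity_px <= 0:
        raise ValueError("point must shift between the two images")
    return focal_px * baseline_m / disparity_px
```

The hard part in practice is the feature matching that yields the disparity, not this formula; that is what ready-made stereo hardware accelerates.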

If it sounds like I don't know what I'm talking about, you are probably right. I only have a basic understanding of this stuff.

