Society of Robots - Robot Forum

Software => Software => Topic started by: airman00 on August 30, 2008, 04:21:38 PM

Title: Line Following Using Vision Sensors
Post by: airman00 on August 30, 2008, 04:21:38 PM
The line following course is as follows: a line that is limited to a maximum of 90-degree turns but can have turns of other angles lower than 90. The line can break, and then the robot must travel straight over that break in the line and pick the line up again on the other end. Plus, there are victims (green and silver paper markers) on the line, and the robot must identify the victims by lighting an LED when it's above a victim.

Do you guys think that a CMUcam or any other camera sensor can do all that by itself?

,Eric
Title: Re: Line Following Using Vision Sensors
Post by: Ro-Bot-X on August 30, 2008, 05:08:21 PM
I don't know about the CMU camera, but the AVRcam works like this:
You can set up to 8 color blobs that the camera will find, and it returns a bounding box for each color you are searching for. So, for line following, you set a black color to search and the camera will return the bounding boxes for each spot where it detects it in a frame. Then you move the bot to keep the bounding boxes in the middle of the frame. You also search for the green color (victims), move the bot so the bounding box gets centered in the frame, and then light your LED.
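
Roughly, the control loop looks like this (the helper functions and numbers below are just placeholders for whatever camera and motor code you end up using):

Code: [Select]
/* Rough sketch of the centering loop described above.
 * get_blob_center_x(), green_blob_centered(), set_motor_speeds(),
 * led_on() and led_off() are placeholders for your own camera and
 * motor driver code. */
extern int  get_blob_center_x(void);   /* x of the line blob, pixels */
extern int  green_blob_centered(void); /* nonzero if victim centered */
extern void set_motor_speeds(int left, int right);
extern void led_on(void);
extern void led_off(void);

#define FRAME_WIDTH  176               /* assumed camera resolution  */
#define FRAME_CENTER (FRAME_WIDTH / 2)
#define KP           2                 /* proportional gain, tune it */
#define BASE_SPEED   60

void follow_line_step(void)
{
    int error = get_blob_center_x() - FRAME_CENTER; /* >0: line is to the right */

    /* Steer toward the line by slowing one side and speeding up the other. */
    set_motor_speeds(BASE_SPEED + KP * error,
                     BASE_SPEED - KP * error);

    /* Victim check: light the LED while the green blob is centered. */
    if (green_blob_centered())
        led_on();
    else
        led_off();
}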
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on August 30, 2008, 06:46:21 PM
Thanks for the reply, Ro-Bot-X.

The CMUcam works the same way.
Title: Re: Line Following Using Vision Sensors
Post by: Admin on September 01, 2008, 05:55:47 PM
The original CMUcam cannot track more than a single color at a time.

You'd have to cycle through the colors to do this.
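
Roughly, you'd alternate which color the camera tracks each frame; something like this, where cmucam_track() is a placeholder wrapper for the TC (track color) command and the color bounds are made-up values you would have to calibrate:

Code: [Select]
/* Rough sketch: track one color per frame and alternate between the
 * line color and the victim color. Both helper functions and all the
 * RGB bounds are placeholders. */
extern void cmucam_track(int rmin, int rmax, int gmin, int gmax,
                         int bmin, int bmax);
extern void read_bounding_box(void);   /* placeholder: parse the camera reply */

void poll_camera(void)
{
    static int frame = 0;

    if (frame++ % 2 == 0)
        cmucam_track(0, 40, 0, 40, 0, 40);      /* dark line    */
    else
        cmucam_track(0, 60, 120, 240, 0, 60);   /* green victim */

    read_bounding_box();   /* handle whichever color was just tracked */
}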
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on September 01, 2008, 06:05:06 PM
The original CMUcam cannot track more than a single color at a time.

You'd have to cycle through the colors to do this.

awww
I think I might upgrade to the CMUcam2 after I finish documenting the original CMUcam. It's more or less the same commands, right?
Title: Re: Line Following Using Vision Sensors
Post by: Admin on September 01, 2008, 06:16:03 PM
The commands are a bit different, but you shouldn't have to modify your code more than 5-10%.
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on October 16, 2008, 09:41:18 PM
I am having difficulty getting past one obstacle with the camera line following. The course has one 25-degree ramp that the robot must follow the black line up. My worry is that when the robot starts up the ramp, right at the very beginning, the camera points away from the line and screws up.

Also, does anybody have any strategies or tips on line following using a camera?
Title: Re: Line Following Using Vision Sensors
Post by: Admin on October 16, 2008, 09:50:05 PM
Quote
My worry is that when the robot starts up the ramp, right at the very beginning, the camera points away from the line and screws up.
The MOBOT course at CMU has that problem . . . 45-degree downward ramps; many robots speed down the ramp and lose the line when it suddenly turns at the bottom of the ramp.

Solutions:
use a wide-angle lens
bring the camera higher up, away from the line (for a wider view)
add a servo for the camera to rotate like on my ERP
set up the control algorithm so that it's more likely to rotate the robot than move forward (see the sketch below)
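
For that last point, a rough sketch (the helper names and numbers are placeholders, not real code):

Code: [Select]
/* "Rotate first" policy: if the line is far from the frame center,
 * spin in place; only drive forward when it is roughly centered.
 * Helper names and constants are placeholders. */
extern int  get_blob_center_x(void);
extern void set_motor_speeds(int left, int right);

#define FRAME_CENTER 88      /* assumed 176-pixel-wide frame       */
#define DEADBAND     15      /* "centered enough" margin, pixels   */
#define TURN_SPEED   50
#define DRIVE_SPEED  60

void turn_biased_step(void)
{
    int error = get_blob_center_x() - FRAME_CENTER;

    if (error > DEADBAND)
        set_motor_speeds( TURN_SPEED, -TURN_SPEED);  /* spin right */
    else if (error < -DEADBAND)
        set_motor_speeds(-TURN_SPEED,  TURN_SPEED);  /* spin left  */
    else
        set_motor_speeds(DRIVE_SPEED,  DRIVE_SPEED); /* go forward */
}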
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on October 16, 2008, 09:52:36 PM
Solutions:
add a servo for the camera to rotate like on my ERP

How would the robot know when to pan the camera and how much to pan it?

Btw are you going to release your ERP line following code?
Title: Re: Line Following Using Vision Sensors
Post by: Admin on October 16, 2008, 10:27:14 PM
Here is how it works:

The head centers itself onto the white line

The body centers itself to the head.

So the camera is always looking at the line, and the body is always trying to equalize with the head. It's almost just like my Stampy code, but without the oscillation.

I'll be releasing my ERP code in probably two weeks or so (don't quote me on that!). It uses alpha version Axon code, so I have to update it all, test it, and make it user friendly.
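
Very roughly, in C it looks something like this (the helper names, gains and servo numbers below are made up; this is not the actual ERP code, and the signs depend on how the camera and servo are mounted):

Code: [Select]
/* Sketch of the "head follows the line, body follows the head" idea. */
extern int  get_blob_center_x(void);    /* line position in the frame */
extern int  servo_get(void);            /* current pan pulse width    */
extern void servo_set(int us);
extern void set_motor_speeds(int left, int right);

#define FRAME_CENTER  88      /* assumed 176-pixel-wide frame       */
#define SERVO_CENTER  1500    /* pan servo neutral, microseconds    */
#define K_HEAD        2       /* head-tracking gain                 */
#define K_BODY        1       /* body-alignment gain                */
#define BASE_SPEED    60

void head_body_step(void)
{
    /* 1. Head: pan the camera so the line stays centered in the frame. */
    int cam_error = get_blob_center_x() - FRAME_CENTER;
    servo_set(servo_get() - K_HEAD * cam_error);

    /* 2. Body: steer so the pan servo drifts back to its neutral point. */
    int head_error = servo_get() - SERVO_CENTER;
    set_motor_speeds(BASE_SPEED + K_BODY * head_error / 10,
                     BASE_SPEED - K_BODY * head_error / 10);
}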
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on October 16, 2008, 10:39:43 PM
Here is how it works:
The head centers itself onto the white line

Do I need one servo for panning only or do I need two servos for panning and tilting?
Title: Re: Line Following Using Vision Sensors
Post by: Admin on October 16, 2008, 10:54:52 PM
For the MOBOT competition I disabled the vertical servo, so just one should be fine for you.
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on October 17, 2008, 08:09:39 AM
For the MOBOT competition I disabled the vertical servo, so just one should be fine for you.

But my robot has to go up a ramp, not down one. Does that change anything, or do I just need a horizontal servo?
Title: Re: Line Following Using Vision Sensors
Post by: Admin on October 17, 2008, 09:52:24 AM
No idea, never made a line follower go up a ramp :P
(well actually I tried, but it didn't have the torque . . .)

I say run some experiments! ;D


But my intuition says it shouldn't be a problem . . .
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on October 17, 2008, 10:14:04 AM
I'll make a 1:1 scale CAD and post it up here in about an hour or two.

I'm only unsure about how high to place the camera and at what angle the camera should be pointing down.

Here's the CAD:
(http://i273.photobucket.com/albums/jj202/erobot/Line%20Follower%20using%20Vision/Line_Follower.jpg)

It has two motorized wheels on each side of the robot and an Axon microcontroller; I'll add a battery soon. One servo is connected to the CMUcam on top.
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on October 18, 2008, 07:25:28 PM
This is what I mean
(http://i273.photobucket.com/albums/jj202/erobot/Line%20Follower%20using%20Vision/Line_Follower_ramp.jpg)

You see how, as it starts to go up the ramp, the camera is "looking" at a part of the line which is way too far away. How do I figure out the optimum angle for the camera to be pointing downward at the line?
Title: Re: Line Following Using Vision Sensors
Post by: Ro-Bot-X on October 18, 2008, 07:31:06 PM
You can use a tilt sensor (some weight attached to a potentiometer) that causes the tilt servo to adjust the camera to a proper angle.
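
Roughly something like this (the helper names and calibration numbers are placeholders):

Code: [Select]
/* Read a pendulum-style tilt sensor (weight on a pot) with the ADC
 * and map it to the camera tilt servo so the camera keeps roughly
 * the same downward angle on a ramp. All names and constants here
 * are placeholders you would calibrate yourself. */
extern int  adc_read(int channel);     /* 0..1023 ADC reading */
extern void tilt_servo_set(int us);    /* servo pulse width   */

#define POT_LEVEL     512     /* ADC reading when the robot is level */
#define SERVO_LEVEL   1500    /* tilt servo pulse when level, us     */
#define US_PER_COUNT  2       /* crude calibration factor            */

void compensate_tilt(void)
{
    int tilt = adc_read(0) - POT_LEVEL;                /* + = nose up      */
    tilt_servo_set(SERVO_LEVEL - US_PER_COUNT * tilt); /* tip camera down  */
}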
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on October 18, 2008, 07:59:51 PM
You can use a tilt sensor (some weight attached to a potentiometer) that causes the tilt servo to adjust the camera to a proper angle.
Yeah, I guess, but then that's extra cost: another servo and another tilt sensor.

I was wondering if there is some way to calculate a "magic angle" that will see the line both on a flat surface and on a ramp.
Title: Re: Line Following Using Vision Sensors
Post by: Admin on October 18, 2008, 11:10:34 PM
Hmmmm, but when it's up on the ramp, it's as if it's on level terrain . . . the only problem you'd have is right before it gets on the ramp, when the line is too close.

You have to decide not just the optimal angle, but also the optimal height from the ground . . . hook up your cam to your PC to view what it sees and make a subjective comparison.

Then you need to account for speed . . . the faster your robot goes, or the slower it processes data, the further ahead it needs to look to make up for the lag time.

Look at my ERP for a good example; it worked fine for me at the angle/height/speed I chose.
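
For the speed/lag part, a rough back-of-the-envelope calculation (all numbers below are placeholders; measure your own):

Code: [Select]
/* Look far enough ahead to cover the distance traveled during one
 * control lag, then aim the camera at that point. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* All numbers are placeholders. */
    double speed_mps  = 0.3;   /* robot speed, m/s                 */
    double lag_s      = 0.2;   /* camera + processing + motor lag  */
    double margin_m   = 0.05;  /* extra look-ahead margin, m       */
    double cam_height = 0.12;  /* camera height above the floor, m */

    double lookahead = speed_mps * lag_s + margin_m;
    double angle_deg = atan2(cam_height, lookahead) * 180.0 / 3.14159265358979;

    printf("look ahead %.2f m -> aim the camera about %.1f deg below horizontal\n",
           lookahead, angle_deg);
    return 0;
}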
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on October 19, 2008, 09:53:58 AM
Here's a quick look at the final CAD of the robot:
(http://i273.photobucket.com/albums/jj202/erobot/Line%20Follower%20using%20Vision/line_follower-1.jpg)

The line following robot with vision uses a CMUcam, four 150:1 Micro Metal Gearmotor HP motors, an Axon microcontroller, a 6V NiMH battery pack, and one panning servo for the camera. The chassis is HDPE (spray-painted blue) and is a two-decker, with the upper deck supported by four hex spacers.

I haven't decided yet if I am going to make a step-by-step tutorial or just documentation. Also, I'll be making a tutorial on interfacing the CMUcam to the Axon.

More updates to follow.
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on January 10, 2009, 11:25:46 PM
Sorry guys - it seems as if I will not be able to make the SoR Robot Contest this time around. I've been way busy with other projects of mine and have scheduled this robot build for the February/March time frame.
I'll still be writing a tutorial on it though.

Oh well, I guess I'll just have to let someone else get an Axon. :P
Title: Re: Line Following Using Vision Sensors
Post by: Admin on January 23, 2009, 04:26:12 AM
The more robots you try to build at the same time, the fewer robots you actually build at the same time :P

Anyway, where is my CMUcam + Axon tutorial that I bribed you to do? :P
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on January 23, 2009, 07:46:14 AM
The more robots you try to build at the same time, the fewer robots you actually build at the same time :P

Anyway, where is my CMUcam + Axon tutorial that I bribed you to do? :P
Working on it, working on it. :P
The competition for the line follower with vision is in May and I've been busy finishing up some other projects. You will for sure have it by May and most likely have it by February/March.
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on February 21, 2009, 09:21:13 PM
UPDATES!!!

I got the CMUcam to work with the Axon. I even built the chassis, but I haven't mounted the Axon yet. Notice the LED spotlight on the CMUcam. I also mounted a rangefinder on the robot.

Enjoy the video:
[youtube]QBCuAPl8qug[/youtube]
Title: Re: Line Following Using Vision Sensors
Post by: Admin on February 26, 2009, 09:20:35 PM
I'm curious about the spotlight. Do you find it helps?

Got an ETA on code release? (no rush, just curious)
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on February 26, 2009, 10:03:16 PM
I'm curious about the spotlight. Do you find it helps?

Got an ETA on code release? (no rush, just curious)
Yep, it definitely helps.

I'll probably release the code next week. I just got my motor drivers, so I'm trying to finish up on the robot.
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on March 02, 2009, 12:47:16 PM
Update:
Vision Camera Robot Following a Red Line (http://www.youtube.com/watch?v=Shkur2WrWgA#lq-lq2-hq)

I did red because black is just too easy :P

I'll be releasing the code to do tracking sometime tonight.
Title: Re: Line Following Using Vision Sensors
Post by: superchiku on March 02, 2009, 12:51:30 PM
Which cam is it, and why do you do the rotating motion with the cam? I think it is not required.
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on March 02, 2009, 01:04:40 PM
Which cam is it, and why do you do the rotating motion with the cam? I think it is not required.
CMUcam

Yeah, I need to work on my algorithm, but I need the rotating motion for sharp turns.
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on March 15, 2009, 06:51:03 PM
New video
Line Follower with Improved Algorithm (http://www.youtube.com/watch?v=EcD3hsuit8k#lq-hq-vhq)

Source code will be released tomorrow evening - right now I'm just editing my comments so they make sense.
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on March 15, 2009, 07:56:08 PM
I released it early. Code is finally up!

Here it is :
http://www.societyofrobots.com/member_tutorials/node/321 (http://www.societyofrobots.com/member_tutorials/node/321)
Title: Re: Line Following Using Vision Sensors
Post by: superchiku on March 16, 2009, 10:59:22 AM
Although I personally think the CMUcam is a waste of money, it's very good for beginners in image processing. But if you put in a little effort, you can do more than what the CMUcam can do with OpenCV and a small webcam, which is way cheaper, though admittedly a little more difficult to implement.

This is my OpenCV program for red blob colour tracking. It really helps. Instead of moving a servo, here I use the parallel port to control some LEDs, which light up when you move the blob to the left and switch off when you move the blob to the right.

Code: [Select]
// cvdemo3.cpp : red-blob tracker. Thresholds the red channel, finds the
// blob's centre of gravity, and drives LEDs on the parallel port when the
// blob is in the left half of the frame.
//

#include "stdafx.h"
#include <stdio.h>
#include "cv.h"
#include "highgui.h"

// inpout32.dll parallel port access
short _stdcall Inp32(short PortAddress);
void _stdcall Out32(short PortAddress, short data);

int _tmain(int argc, _TCHAR* argv[])
{
    // capture from the first camera
    CvCapture* capture = cvCaptureFromCAM(0);

    // test whether a frame can be captured
    if (!capture || !cvGrabFrame(capture))
    {
        printf("could not capture\n");
        return 1;
    }
    printf("captured\n");

    cvNamedWindow("Hello World", CV_WINDOW_AUTOSIZE);

    while (1)
    {
        // capture a frame
        IplImage* hw = cvQueryFrame(capture);
        if (!hw)
            break;

        double fps = cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
        int height1 = hw->height;
        int width1  = hw->width;

        // keep only strongly red pixels: suppress bright/white pixels,
        // drop the blue and green channels, then threshold the red channel
        CvScalar s;
        for (int i = 0; i < height1; i++)
        {
            for (int j = 0; j < width1; j++)
            {
                s = cvGet2D(hw, i, j);                 // pixel is (B, G, R)
                if (s.val[0] >= 140 && s.val[1] >= 140 && s.val[2] >= 140)
                    s.val[2] = 0;                      // bright/white, not red
                s.val[0] = 0;                          // blue off
                s.val[1] = 0;                          // green off
                s.val[2] = (s.val[2] < 120) ? 0 : 255; // binary red mask
                cvSet2D(hw, i, j, s);
            }
        }

        // centre of gravity of the red mask
        int count = 0;
        float xpixel = 0, ypixel = 0;
        for (int i = 0; i < height1; i++)
        {
            for (int j = 0; j < width1; j++)
            {
                s = cvGet2D(hw, i, j);
                if (s.val[2] > 120)
                {
                    count++;
                    xpixel += j;
                    ypixel += i;
                }
            }
        }

        if (count > 0)
        {
            float cx = xpixel / count;
            float cy = ypixel / count;
            printf("height=%d, width=%d, centre of gravity x=%f, y=%f, FPS=%e\n",
                   height1, width1, cx, cy, fps);

            // blob centre in the left half of the frame -> LEDs on, else off
            if (cx < 178)
                Out32(0x378, 255);
            else
                Out32(0x378, 0);

            // mark the centre of gravity on the displayed image
            cvCircle(hw, cvPoint((int)cx, (int)cy), 10, cvScalar(0, 255, 0), 1);
        }

        cvShowImage("Hello World", hw);

        // quit on Esc
        if ((char)cvWaitKey(10) == 27)
            break;
    }

    cvDestroyWindow("Hello World");
    cvReleaseCapture(&capture);
    return 0;
}

Title: Re: Line Following Using Vision Sensors
Post by: chelmi on March 16, 2009, 01:33:56 PM
Although I personally think the CMUcam is a waste of money, it's very good for beginners in image processing. But if you put in a little effort, you can do more than what the CMUcam can do with OpenCV and a small webcam, which is way cheaper, though admittedly a little more difficult to implement.

With your solution you need a PC to run the OpenCV code. The CMUcam is completely independent. If you want an autonomous solution, you will need a powerful processor to process the video, and you will end up with something similar to the CMUcam, only more expensive.

BTW, I find your way of writing quite difficult to understand. Remember that some of us are not native English speakers (like myself). Typos every two words and nonexistent punctuation make it really painful to read. Please make an effort ;)
Title: Re: Line Following Using Vision Sensors
Post by: superchiku on March 16, 2009, 01:48:14 PM
Sorry for that. Actually I type very fast and my keyboard is a little faulty, so some keys have to be pressed hard to register. I'll keep that in mind. Actually, what you said is right. See, you've got to have some creativity. The CMUcam gives images to an MCU which processes them; the processing speed will be very low compared to a PC, and the response time will become high for more complex functions.


What I do is the processing on my PC, and then I send serial or parallel commands via wireless to the MCU, whose job is just to read these commands and control the motors. That leads to much faster processing and response than the CMUcam.

In my view I save a lot of money and also gain a lot of knowledge by using OpenCV, since I can implement higher and more complex algorithms like face recognition, blob tracking, etc., which on any MCU would take much, much more time to process. So I always opt for the PC for any kind of image processing.
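
Roughly, the MCU side of that split is nothing more than this (the helper names are placeholders):

Code: [Select]
/* The PC does the vision with OpenCV and sends one command byte over
 * a wireless serial link; the MCU only parses the byte and drives the
 * motors. uart_getc() and set_motor_speeds() are assumed wrappers. */
extern int  uart_getc(void);                 /* returns -1 if no byte waiting */
extern void set_motor_speeds(int left, int right);

void poll_link(void)
{
    int c = uart_getc();
    switch (c) {
        case 'F': set_motor_speeds( 60,  60); break;  /* forward    */
        case 'L': set_motor_speeds(-50,  50); break;  /* turn left  */
        case 'R': set_motor_speeds( 50, -50); break;  /* turn right */
        case 'S': set_motor_speeds(  0,   0); break;  /* stop       */
        default:  break;                              /* no/unknown byte */
    }
}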
Title: Re: Line Following Using Vision Sensors
Post by: chelmi on March 16, 2009, 02:18:19 PM
What I do is the processing on my PC, and then I send serial or parallel commands via wireless to the MCU, whose job is just to read these commands and control the motors. That leads to much faster processing and response than the CMUcam.

In my view I save a lot of money and also gain a lot of knowledge by using OpenCV, since I can implement higher and more complex algorithms like face recognition, blob tracking, etc., which on any MCU would take much, much more time to process. So I always opt for the PC for any kind of image processing.

Yep, but sometimes this solution is not possible. For instance, if your robot has to be completely autonomous (in a competition this is often the case), your solution is not applicable. Saying that the CMUcam is a complete waste of money is unfair in my opinion. And by the way, you can learn a lot by using the CMUcam; remember that the embedded processor is programmable.
Title: Re: Line Following Using Vision Sensors
Post by: superchiku on March 16, 2009, 02:29:54 PM
Agreed, but processing algorithms like face detection and motion tracking are overkill for any microcontroller, so it's better if they are not done on it.

And I guess if you use a computer the robot still remains autonomous, because only the program is running and nothing else. Remember, the MCU is nothing but a processor at heart, and our computer is the same too. To date I haven't seen any competition that says you can't use a computer for image processing on your robot, unless of course it was started by some dumb people.

Title: Re: Line Following Using Vision Sensors
Post by: Razor Concepts on March 16, 2009, 02:51:36 PM
But it's hard to build your robot around a laptop.
Title: Re: Line Following Using Vision Sensors
Post by: superchiku on March 16, 2009, 02:56:31 PM
thats why i said use wireless....and also its not that hard ...jst make a flat base and keep the lappy on it...that it...
Title: Re: Line Following Using Vision Sensors
Post by: chelmi on March 16, 2009, 03:18:39 PM
Agreed, but processing algorithms like face detection and motion tracking are overkill for any microcontroller, so it's better if they are not done on it.

And I guess if you use a computer the robot still remains autonomous, because only the program is running and nothing else. Remember, the MCU is nothing but a processor at heart, and our computer is the same too. To date I haven't seen any competition that says you can't use a computer for image processing on your robot, unless of course it was started by some dumb people.

I am not very familiar with robot competitions, but from what I have seen, a lot of them specifically forbid radio communication.

for instance: http://www.eurobot.org/eng/rules.php (http://www.eurobot.org/eng/rules.php)
http://groups.google.com/group/iaroc/web/rules?hl=en&pli=1 (http://groups.google.com/group/iaroc/web/rules?hl=en&pli=1)

The purpose of any competition like that is to stimulate the creativity and ingenuity of contestants. By imposing certain limitations they force people to find original solutions. I don't see why this is dumb.
Title: Re: Line Following Using Vision Sensors
Post by: superchiku on March 17, 2009, 10:42:41 AM
lol, and what's not original about using the laptop for the image processing? If they ban RF, then use Bluetooth; there are no limitations. I still think processing images on the computer is a way better choice than unnecessarily wasting money on the CMUcam, but then it's still my own perspective; yours will differ.
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on March 17, 2009, 10:51:10 AM
lol, and what's not original about using the laptop for the image processing? If they ban RF, then use Bluetooth; there are no limitations. I still think processing images on the computer is a way better choice than unnecessarily wasting money on the CMUcam, but then it's still my own perspective; yours will differ.
Bluetooth is a form of RF.
Title: Re: Line Following Using Vision Sensors
Post by: paulstreats on March 17, 2009, 11:04:07 AM
The compromise would be to use a higher-end microcontroller like a fast ARM9 with SRAM extensions. You can use a CMUcam with it, plus it's fast enough to do other image processing. (Many of them can run embedded Linux distros too, letting you run existing image processing tools on top.)

Look at something like this: http://www.tincantools.com/product.php?productid=16133&cat=0&page=1&featured (http://www.tincantools.com/product.php?productid=16133&cat=0&page=1&featured) - a 200 MHz microcontroller with 16 MB of memory that fits into a standard 40-pin DIP socket. Image processing is perfectly possible on it, maybe not at super-high resolutions but enough to get by with.

Quote
Agreed, but processing algorithms like face detection and motion tracking are overkill for any microcontroller, so it's better if they are not done on it.

It'll never happen unless somebody tries...
Title: Re: Line Following Using Vision Sensors
Post by: superchiku on March 17, 2009, 11:38:37 AM
Let's face it: for any normal competition there is no need to spend so much money on a 200 MHz MCU. That's exactly what I am saying: doing it on a laptop and communicating via wireless is a much cheaper and more permanent solution, whatever the algorithm.
Title: Re: Line Following Using Vision Sensors
Post by: chelmi on March 17, 2009, 01:41:09 PM
Let's face it: for any normal competition there is no need to spend so much money on a 200 MHz MCU. That's exactly what I am saying: doing it on a laptop and communicating via wireless is a much cheaper and more permanent solution, whatever the algorithm.

What is a "normal competition"?

Why is it so hard for you to understand that limitations are what make a competition interesting and fun! Trying to work around the rules to use a laptop for video processing is not a solution. If you don't like the rules, don't participate in that competition ;) But that doesn't make the competition stupid or your solution better. Most competitions impose rules to make the competition interesting. And I'm not only talking about robotics. Take the Formula 1 Championship for instance: they have very strict rules about the weight, the dimensions of the car, the engine... even telemetry is regulated. This is to keep the competition balanced and interesting. Same in robotics competitions. So YES, there is a need for completely embedded image processing solutions. Your solution is cheaper, but it is not a universal solution and in some cases not practical.

Chelmi
Title: Re: Line Following Using Vision Sensors
Post by: superchiku on March 17, 2009, 01:53:30 PM
lol, not practical? I can guarantee that I can do more things with the laptop and a webcam than any CMUcam or whatever cam can do. And what kind of limitations are you talking about? I haven't seen any competition which puts a limitation in place where you can't use a laptop for image processing. It would be illogical.

Even I love challenges, but I need fast processing and an instantaneous response. Tell me which MCU can do it better than a processor which runs at 2 GHz with 2 GB of DDR3 RAM. Look at the possibilities: I can implement more complex algorithms and better image processing modules and can still expect much faster processing. These modules are required if you want your robot to be bang on target. I don't want to sacrifice the accuracy and efficiency of my robot for any kind of self-processing camera.


And why the hell are we fighting? I love this way of image processing and you love yours. We just have different perspectives; no need to fight over it.
Title: Re: Line Following Using Vision Sensors
Post by: chelmi on March 17, 2009, 02:05:49 PM
lol, not practical? I can guarantee that I can do more things with the laptop and a webcam than any CMUcam or whatever cam can do. And what kind of limitations are you talking about? I haven't seen any competition which puts a limitation in place where you can't use a laptop for image processing. It would be illogical.

Even I love challenges, but I need fast processing and an instantaneous response. Tell me which MCU can do it better than a processor which runs at 2 GHz with 2 GB of DDR3 RAM. Look at the possibilities: I can implement more complex algorithms and better image processing modules and can still expect much faster processing. These modules are required if you want your robot to be bang on target. I don't want to sacrifice the accuracy and efficiency of my robot for any kind of self-processing camera.


And why the hell are we fighting? I love this way of image processing and you love yours. We just have different perspectives; no need to fight over it.

OK, OK, we're not fighting, it's an argument ;) Some competitions impose limitations on weight and dimensions. If your laptop doesn't fit, then it's not a practical solution, period.
Title: Re: Line Following Using Vision Sensors
Post by: superchiku on March 17, 2009, 02:25:33 PM
But never in image processing. People know that not everyone can buy the $200 CMUcam.
Title: Re: Line Following Using Vision Sensors
Post by: airman00 on March 17, 2009, 04:33:26 PM
People, you are hijacking this thread!

My outlook on it:
When you want it all embedded, use a CMUcam.
When you have the luxury of having a laptop always near the robot, use the wireless method.
Please make a new thread to continue the conversation. :P
Title: Re: Line Following Using Vision Sensors
Post by: paulstreats on March 17, 2009, 06:43:41 PM
Quote
People, you are hijacking this thread!

My outlook on it:
When you want it all embedded, use a CMUcam.
When you have the luxury of having a laptop always near the robot, use the wireless method.
Please make a new thread to continue the conversation.

Aaaaaw. Just 1 more post pleeeeeez.....

Quote
OK, OK, we're not fighting, it's an argument

It's not even an argument, it's a perfectly sound debate where both parties have good points to make.

My personal opinion is that we should aim for embedded designs. What's the point of a robot that needs an external laptop? (A robot should be able to function without external help, in my own opinion. I know it's not other people's, but there we go.)

Also, consider the cost. Does a CMUcam + 200 MHz microcontroller board cost more than a webcam and laptop? You are assuming that we already have a laptop to use; if not, we would have to buy one, probably making the overall cost higher than the embedded solution.
Also, the part about being bang on target depends on the resolution of the camera used before you even consider the rest of the hardware. I have plain camera modules, not webcams or CMUcams but plain camera ICs, so I'm not siding with either the webcam or the CMUcam.

The two sides of the debate are also beneficial as a whole. The computer-assisted side allows better image processing techniques to be developed, but the robot is either encumbered with a full computer or has to stay in range of a wireless link. The embedded side is playing catch-up (its techniques will always be a few years behind), but you have a more agile robot.
Title: Re: Line Following Using Vision Sensors
Post by: superchiku on March 18, 2009, 10:05:29 AM
See, we have to look at all aspects. Who doesn't have a laptop nowadays? Investing in a laptop is much better than investing in a CMUcam or a 200 MHz MCU; I think that's a complete waste.

But then again, it's my perspective. A laptop is nothing but an MCU with extra peripherals already attached, so no worries.

Sorry for that, airman...