
only with one camera . . . I have read through the Stereo Vision tutorial here, but I still cannot understand the equation for one camera.

Is there any method that can get a more accurate result from a 2D image? How do I "calibrate the camera so that I can relate pixels to real-world distances"?
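One common way to relate pixels to real-world distances with a single camera is to image a reference object of known size at a known distance once, estimate the focal length in pixel units from that, and then invert the pinhole model. A minimal sketch (the numbers below are made up for illustration):

```python
# Sketch of pixel-to-world scaling with a simple pinhole model.
# Assumption: we image a reference object of known width at a known
# distance once, to estimate the focal length in pixel units.

def focal_px(known_width_cm, known_dist_cm, measured_px):
    """f = (pixel width * distance) / real width (pinhole model)."""
    return measured_px * known_dist_cm / known_width_cm

def width_cm(measured_px, dist_cm, f_px):
    """Invert the model: real width = pixel width * Z / f."""
    return measured_px * dist_cm / f_px

# Example with made-up numbers: a 10 cm target at 70 cm spans 200 px.
f = focal_px(10.0, 70.0, 200.0)    # 1400 px
print(width_cm(100.0, 70.0, f))    # a 100 px object at 70 cm -> 5.0 cm
```

Note this only recovers size at a *known* depth; a single camera cannot recover depth itself without extra assumptions.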

(rotation about Y-axis)

Does the equation work when Zo is (0,0,0) or the angle "phi" is 0? (I want to use the other camera (cR) to check whether the target and the robotic arm are in the same plane.)

Could you please also recommend some "calibration software" to calculate the transform?

You can see the Z must be 70 cm, not 2.5 cm.

Quote: (rotation about Y-axis)
Do you mean X or Z? If it's the Y axis, your cameras are crooked for no reason at all . . .

I need to know how many degrees the base of the robotic arm needs to turn to bring the robotic arm and the target into the same plane, and then calculate the positions of each joint, together with the target point, in that plane in centimeters.
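Once the target's lateral offset and forward distance from the base axis are known, the base rotation that brings the arm's plane through the target is just an arctangent. A sketch, assuming x is the lateral offset and z the forward distance in the base frame (names are mine, not from the thread):

```python
import math

def base_rotation_deg(x_cm, z_cm):
    """Angle the base must turn so the arm's plane contains the target.
    Assumes x_cm is the lateral offset and z_cm the forward distance
    of the target from the base rotation axis."""
    return math.degrees(math.atan2(x_cm, z_cm))

print(base_rotation_deg(0.0, 70.0))   # target straight ahead -> 0 degrees
print(base_rotation_deg(70.0, 70.0))  # 45 degrees to the side
```

After this rotation the problem reduces to 2D: joints and target all lie in one vertical plane, where distances can be worked out in centimeters.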

As RoenZ said, parallel axes are simple, so I chose parallel axes in the end.

Does the value of the focal length influence the result much? I can only use the minimum focal length of my camera given in the user manual.
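For parallel-axis stereo the depth is Z = f·b/d (focal length times baseline over disparity), so Z scales linearly with f: a relative error in the focal length produces the same relative error in depth. A quick numeric check with invented values:

```python
def depth_cm(f_px, baseline_cm, disparity_px):
    """Parallel-axis stereo depth: Z = f * b / d."""
    return f_px * baseline_cm / disparity_px

# A 10% error in focal length gives a 10% error in Z:
print(depth_cm(1400, 10.0, 200))   # 70.0 cm
print(depth_cm(1540, 10.0, 200))   # 77.0 cm (f off by +10%)
```

So if the manual's minimum focal length is only approximate, expect the same percentage of bias in every depth estimate.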

4. "x_CamL" and "x_CamR" are the positions of point P in the left and right images, right? To keep them consistent, we can change the pixel units to e.g. cm units, right?
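If the focal length is in cm, the image coordinates must also be in cm for the similar-triangle equations to be consistent. One way is to re-center on the principal point and scale by the sensor's pixel pitch. A sketch (the pitch value is a made-up example; the real one comes from the sensor datasheet):

```python
# Convert an image x-coordinate from pixels to cm on the sensor,
# so it can be used with a focal length given in cm.
PIXEL_PITCH_CM = 0.0006   # 6 um pixels -- example value, check the datasheet

def px_to_cm(x_px, image_width_px):
    """Re-center on the principal point (image center) and scale to cm."""
    return (x_px - image_width_px / 2) * PIXEL_PITCH_CM

print(px_to_cm(520, 640))   # 200 px right of center -> 0.12 cm on sensor
```

Alternatively, you can keep everything in pixel units and express f in pixels; only the *ratio* x/f enters the equations, so either convention works as long as it is consistent.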

5. In RoenZ's replies #10 and #11, could you please explain why the result seems to make no sense? Where should the origin actually be?

I have to use a single camera for robot localization.

How do I set up an experiment to construct a lookup table from a discrete set of known sample points in the image, which represent the real joints of the robotic arm, and then interpolate an unknown point between them to get its approximate position in the real world?
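The lookup-table idea can be sketched as a regular grid of image points, each paired with a measured real-world coordinate, with bilinear interpolation in between. The grid values below are invented for illustration; in practice each node would come from physically placing the arm at a known position and recording where it appears in the image:

```python
import numpy as np

# Lookup table: sample pixel positions (xs, ys) and the measured
# real-world X coordinate (cm) at each grid node (values invented).
xs = np.array([0.0, 100.0, 200.0])        # sample pixel columns
ys = np.array([0.0, 100.0])               # sample pixel rows
world_x = np.array([[0.0, 10.0, 20.0],    # rows correspond to ys,
                    [0.0, 10.0, 20.0]])   # columns to xs

def bilinear(px, py, xs, ys, table):
    """Bilinearly interpolate the table at pixel position (px, py)."""
    i = np.clip(np.searchsorted(xs, px) - 1, 0, len(xs) - 2)
    j = np.clip(np.searchsorted(ys, py) - 1, 0, len(ys) - 2)
    tx = (px - xs[i]) / (xs[i + 1] - xs[i])
    ty = (py - ys[j]) / (ys[j + 1] - ys[j])
    top = (1 - tx) * table[j, i] + tx * table[j, i + 1]
    bot = (1 - tx) * table[j + 1, i] + tx * table[j + 1, i + 1]
    return (1 - ty) * top + ty * bot

print(bilinear(50.0, 50.0, xs, ys, world_x))   # halfway -> 5.0 cm
```

The same table structure works for Y and Z coordinates, one table per output dimension. The accuracy depends on how densely you sample and on how close the true mapping is to locally linear.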

But after I got the depth, I'm confused about how to detect what is the background and what is the object . . .

I would put a red ball on the end of the robot end-effector

then record the angles of each position
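The ball-and-angles procedure could be sketched as: detect the red marker's pixel centroid, then store it together with the joint angles that produced it, building up calibration samples. The "red" test below is a crude RGB threshold on a synthetic image, just to show the shape of the data; a real setup would use a proper color segmentation:

```python
import numpy as np

def red_centroid(img):
    """Return the (row, col) centroid of strongly red pixels, or None.
    Crude RGB threshold -- a sketch, not a robust detector."""
    mask = (img[..., 0] > 150) & (img[..., 1] < 80) & (img[..., 2] < 80)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

samples = []   # each entry: (shoulder_deg, elbow_deg, pixel_row, pixel_col)

img = np.zeros((100, 100, 3), dtype=np.uint8)
img[40:43, 60:63] = (255, 0, 0)        # fake red ball for the demo
r, c = red_centroid(img)
samples.append((30.0, 45.0, r, c))     # angles as read from the servos
print(samples[-1])                     # (30.0, 45.0, 41.0, 61.0)
```

Repeating this over many arm poses gives exactly the discrete sample set needed for the lookup-table approach discussed earlier in the thread.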

I assume you meant robot arm end-effector (grippers) localization?

But in your explanation, what do you mean by the angles? I have a joint at the shoulder and a joint at the elbow.

That is great, because I only need to know the positions of the target and the end-effector, right?

I mean I didn't get a cube shape after I segmented it. Is there any simple algorithm to get a smooth segmentation?
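A simple way to smooth a noisy binary segmentation is morphological closing (dilation followed by erosion), which fills small holes and gaps without changing the overall shape much. A minimal pure-numpy sketch using a plus-shaped (4-neighbour) structuring element:

```python
import numpy as np

def dilate(mask):
    """Plus-shaped (4-neighbour) binary dilation by OR-ing shifted copies."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    """Binary erosion as the dual of dilation."""
    return ~dilate(~mask)

def close_holes(mask):
    """Morphological closing: dilation followed by erosion."""
    return erode(dilate(mask))

m = np.ones((7, 7), dtype=bool)
m[3, 3] = False                 # a one-pixel hole in the segmentation
print(close_holes(m)[3, 3])     # True: the hole is filled
```

For heavier smoothing, libraries such as OpenCV or scipy.ndimage provide the same operations with larger structuring elements and more iterations.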

I'm confused about the similar-triangle equation marked in blue. Why is xL/f = (X + (b/2))/Z?
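That relation comes from placing the origin midway between the two cameras: the left camera then sits at X = -b/2, so in the left camera's frame point P has lateral coordinate X + b/2, and similar triangles through the pinhole give xL/f = (X + b/2)/Z. A numeric check with invented values (f and b in cm):

```python
# Numeric check of the parallel-axis stereo relations with the origin
# midway between the cameras: xL = f*(X + b/2)/Z for the left camera,
# xR = f*(X - b/2)/Z for the right camera.

f, b = 0.4, 10.0          # focal length and baseline, cm (made up)
X, Z = 5.0, 70.0          # true position of point P, cm

xL = f * (X + b / 2) / Z
xR = f * (X - b / 2) / Z

# Subtracting the two relations eliminates X and gives depth from disparity:
Z_est = f * b / (xL - xR)
X_est = Z_est * xL / f - b / 2
print(Z_est, X_est)       # recovers Z = 70.0 cm and X = 5.0 cm
```

So the b/2 term is just the offset between the world origin and the left camera's optical axis; with the origin at the left camera instead, the equation would be xL/f = X/Z.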

Hmmmm, your enlarged image doesn't look like the smaller images, nor does it have the blue-circled equation that you asked about . . .

Although I think they made a mistake in that red equation, because they misdefine T . . .