
Author Topic: Need help with identifying welding job intersections with computer vision  (Read 2915 times)


Offline yashmaniac (Topic starter)

  • Jr. Member
  • Posts: 9
  • Helpful? 0
Hi,
Before I begin, I'd like to mention that I'm a complete novice in the field of image processing. I just need to know if this is possible before putting time and energy into it.

Consider two plates that are to be welded together. My idea is to have a camera take video input, identify the profile of the intersection of the two plates using a processor, work out the 3-dimensional coordinates of that profile, and relay the information to another system.
My question: Is it possible to derive the shape of the profile and the 3D coordinates of the intersection from a video?
Also, if yes, could anyone give me an idea of what it would entail?
 

Offline mklrobo

  • Supreme Robot
  • Posts: 558
  • Helpful? 15
  • From Dream to Design at the speed of Imagination!
 8) Cool problem!   8)
I can offer only logical options; I am still stumped on programming the Axon, but I expect to overcome that soon.
The tutorials in this forum cover camera AI. I do not know whether there is enough information there for you to use to your advantage. There are some instructive videos in the video section that may offer options too.
Your request:
My question: Is it possible to derive the shape of the profile and the 3D coordinates of the intersection from a video?
My response would be: yes, but it may be expensive, depending on your objectives.
Industry uses lasers in a calibrated area to determine the dimensions of the pieces to be welded. Information from video alone may prove problematic because of scale and the possibility of optical illusions.
Scale, the working space, and the placement of the camera are all factors to consider.
Cameras can also pick up different spectral emissions, which may work to your advantage, as may depth perception.
Good Luck!  :) :D ;D

Offline yashmaniac (Topic starter)

  • Jr. Member
  • Posts: 9
  • Helpful? 0
Thanks,
  I did imagine something like that would be the issue. But would it be possible to get accurate coordinates using two or more cameras?
The optical illusions would be avoided, and most angles should be covered that way. Also, I was planning to place a sort of landmark in the field of view to act as a reference, since its distance from the camera would be known.
What I need to know is: would this method give coordinates accurate enough that, say, a robot arm could trace them and complete the weld?
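To make the two-camera idea concrete, here is a rough, untested OpenCV sketch of what I'm picturing. The projection matrices, the file names, and the seam-detection step are all placeholders: in practice the matrices would come from calibrating both cameras against the known landmark, and the seam detector would need to be far more selective than plain edge detection.

Code:
import numpy as np
import cv2

# Hypothetical 3x4 projection matrices for the two calibrated cameras,
# obtained beforehand using the known landmark (file names are placeholders).
P1 = np.loadtxt("camera1_projection.txt")
P2 = np.loadtxt("camera2_projection.txt")

def seam_pixels(gray):
    """Very rough stand-in for seam detection: keep strong edge pixels."""
    edges = cv2.Canny(gray, 50, 150)
    ys, xs = np.nonzero(edges)
    return np.vstack([xs, ys]).astype(np.float64)  # 2xN pixel coordinates

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

pts1 = seam_pixels(img1)
pts2 = seam_pixels(img2)

# In reality the two point sets must be matched point-for-point along the
# seam; truncating them to equal counts here is only for illustration.
n = min(pts1.shape[1], pts2.shape[1])
pts4d = cv2.triangulatePoints(P1, P2, pts1[:, :n], pts2[:, :n])
pts3d = (pts4d[:3] / pts4d[3]).T  # Nx3 points, in the landmark's units

print(pts3d[:5])

If something along these lines works, the achievable accuracy would come down mostly to the calibration quality and the baseline between the cameras rather than the code itself.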

Offline mklrobo

  • Supreme Robot
  • Posts: 558
  • Helpful? 15
  • From Dream to Design at the speed of Imagination!
 ;D Hello!
Your request:
Is it possible to derive the shape of the profile and the 3d coordinates of the intersection, from a video?
I had been thumbing through some tech papers, and ran across some options. The goal you seek may lie
in video forensics. (If this was CSI, we'd have the answer in an hour. Ha!  ;) )
If this could be done (cheaply), people in this forum, like Kobratek, could use it to convert parts to dimensions in no time. Obviously, more information is needed, but you may still be on the right track.
I had to provide an accurate scale of a part at work, so I put a grid in the background of the picture. This provided scale and angle perspective. In that context, if you had three cameras positioned along the X, Y, and Z axes relative to the target, that may be a starting point for your goal. If the cameras could be focused at set distances from their positions, that might give "layers" of perspective for spatial definition of the target. Sound could also be used as a depth gauge. Another option is projecting lasers (that only the camera can see) in different shapes to give depth perception. Consider this: if you used a laser that emitted a circle, producing many concentric ring projections (with the rings set at a measured distance from each other), this could help provide depth perspective and would be invisible to the user if infrared or ultraviolet lasers are used. Good luck!   ;D ;D
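To illustrate just the grid part (a minimal, untested OpenCV sketch; the 9x6 pattern, the 25 mm square size, and the calibration file names are all assumptions), a printed checkerboard behind the workpiece gives you both scale and the camera's viewing angle:

Code:
import numpy as np
import cv2

# Assumed checkerboard printed behind the workpiece: 9x6 inner corners,
# 25 mm squares (both values are made up for this sketch).
PATTERN = (9, 6)
SQUARE_MM = 25.0

# Assumed camera intrinsics from a one-time calibration (placeholder files).
K = np.loadtxt("camera_matrix.txt")        # 3x3 camera matrix
dist = np.loadtxt("dist_coeffs.txt")       # lens distortion coefficients

# 3D positions of the grid corners in the grid's own frame (Z = 0 plane).
obj_pts = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

img = cv2.imread("workpiece.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

found, corners = cv2.findChessboardCorners(gray, PATTERN)
if found:
    # Camera pose relative to the grid: this gives scale and viewing angle,
    # so seam pixels lying on the grid plane can be mapped to millimetres.
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners, K, dist)
    print("rotation (Rodrigues vector):", rvec.ravel())
    print("translation from grid origin (mm):", tvec.ravel())
else:
    print("grid not found in the image")

The ring-laser idea would serve the same purpose for parts that do not lie flat against the grid plane.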
« Last Edit: May 24, 2015, 04:51:37 PM by mklrobo »

Offline Spud

  • Beginner
  • Posts: 4
  • Helpful? 0
If the plates are flat and you know the depth, then you can use one camera at an angle. Measure the distance from the camera to the flat surface; the higher up the image a point appears, the further away it is. Edge detection could find the plate edges, and their position in the image could be used to determine where the joint is.
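A rough, untested sketch of that idea (the camera height, tilt angle, and focal length are assumed measured values, and the row-to-distance step only holds for points lying on the flat plate surface):

Code:
import math
import numpy as np
import cv2

# Assumed, measured setup values (all placeholders):
CAM_HEIGHT_MM = 400.0               # camera height above the plate surface
CAM_TILT = math.radians(35.0)       # optical-axis tilt below horizontal
FOCAL_PX = 800.0                    # focal length in pixels, from calibration

img = cv2.imread("plates.png", cv2.IMREAD_GRAYSCALE)
cy = img.shape[0] / 2.0             # assume the principal point is at centre

edges = cv2.Canny(img, 50, 150)     # plate edges show up as strong edges

def row_to_distance_mm(row):
    """Horizontal distance to a point imaged at this row, assuming the
    point lies on the flat plate surface (breaks down near the horizon)."""
    # Rays through higher rows are depressed less, so they land further away.
    depression = CAM_TILT - math.atan2(cy - row, FOCAL_PX)
    return CAM_HEIGHT_MM / math.tan(depression)

rows, cols = np.nonzero(edges)
for r, c in list(zip(rows, cols))[:5]:
    print(f"edge pixel ({c}, {r}) is roughly {row_to_distance_mm(r):.0f} mm away")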

Offline yashmaniac (Topic starter)

  • Jr. Member
  • Posts: 9
  • Helpful? 0
Thanks for the input.
It seems the more I try to get this concept to work, the more expensive it gets.
Quote from: Spud
If the plates are flat and you know the depth, then you can use one camera at an angle. Measure the distance from the camera to the flat surface; the higher up the image a point appears, the further away it is. Edge detection could find the plate edges, and their position in the image could be used to determine where the joint is.

I was planning a system that could identify the shapes of the materials to be welded. So if it were two pipes, the intersection could be traced and followed in 3D by a robotic arm; if it happened to be two plates, the intersection could be traced and coordinates generated without changing the program.

Quote from: mklrobo
I had to provide an accurate scale of a part at work, so I put a grid in the background of the picture. This provided scale and angle perspective. In that context, if you had three cameras positioned along the X, Y, and Z axes relative to the target, that may be a starting point for your goal. [...]
This seems plausible. The grid in the background could provide a reference scale. I could then place a camera at some distance along each coordinate axis. Hopefully, all I would have to do then is isolate the intersection in each camera feed and use the background to generate 3D coordinates. This is still too expensive for me to try out physically, but I hear ROS offers some simulation capability, so I may look into that.
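For the simulation side, something like this minimal node (untested, ROS 1 with cv_bridge; the topic name is a placeholder for whatever the simulated camera publishes) is roughly how I'd start wiring a camera feed into the seam-detection step:

Code:
#!/usr/bin/env python
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def image_callback(msg):
    # Convert the ROS image into an OpenCV array and run a placeholder
    # seam-detection step (plain edge detection for now).
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="mono8")
    edges = cv2.Canny(frame, 50, 150)
    rospy.loginfo("frame received, %d edge pixels", int((edges > 0).sum()))

if __name__ == "__main__":
    rospy.init_node("seam_detector")
    # "/camera/image_raw" is a placeholder topic name.
    rospy.Subscriber("/camera/image_raw", Image, image_callback)
    rospy.spin()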
Thanks a lot again!!

 

