Author Topic: Robotics exhibit: a feasibility assessment  (Read 559 times)


Offline Kethis256 (Topic starter)

  • Beginner
  • Posts: 1
  • Helpful? 0
Robotics exhibit: a feasibility assessment
« on: May 09, 2012, 09:15:55 AM »
I work for a science museum and recently decided I would like to make a robotics exhibit for them. Specifically, I would like to make a delta robot that can play board games against human opponents. The entire thing would be in a sturdy enclosure: humans are not able to touch the board/robot and must interact through a touch screen.

I made a delta robot out of LEGO parts and stared at it until I correctly rederived the kinematic and inverse kinematic equations. I also have already developed an AI for a number of board games.
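For other readers, the inverse kinematics of a rotary delta can be sketched as below: each of the three arms is the same single-arm problem after rotating the target point by 120°. This is a minimal Python sketch of the standard trigonometric solution, not necessarily the exact form the poster derived; the geometry parameters f, e, rf, re (base triangle side, effector triangle side, upper arm length, lower arm length) are placeholders for your robot's actual dimensions.

```python
from math import sqrt, atan2, pi, cos, sin

def arm_angle(x0, y0, z0, f, e, rf, re):
    """Shoulder angle (radians) for one delta arm lying in the YZ plane.
    Returns None if the target point is unreachable for this arm.
    Assumes z0 < 0 (effector below the base plate)."""
    t = 0.5 / sqrt(3.0)            # = tan(30 deg) / 2
    y1 = -t * f                    # y of the shoulder joint on this arm
    y0 = y0 - t * e                # shift effector joint onto the arm's plane
    a = (x0*x0 + y0*y0 + z0*z0 + rf*rf - re*re - y1*y1) / (2.0 * z0)
    b = (y1 - y0) / z0
    d = -(a + b*y1)**2 + rf*(b*b*rf + rf)   # discriminant
    if d < 0:
        return None
    yj = (y1 - a*b - sqrt(d)) / (b*b + 1.0)  # elbow joint position
    zj = a + b*yj
    return atan2(-zj, y1 - yj)

def delta_ik(x, y, z, f, e, rf, re):
    """Solve all three arms by rotating the target 0, 120, 240 degrees."""
    angles = []
    for k in range(3):
        c, s = cos(2.0 * pi * k / 3.0), sin(2.0 * pi * k / 3.0)
        angles.append(arm_angle(x*c + y*s, y*c - x*s, z, f, e, rf, re))
    return angles
```

A quick sanity check: a point on the central axis (x = y = 0) must give three identical shoulder angles by symmetry.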

The "board" for the board games will be an upward-facing monitor: changing the screen is all that is necessary to change the board. The pieces will be kept to the side, on a flat, level, featureless surface.

Picking up the pieces is achieved by suction: a servo retracts a syringe connected through tubing to a suction cup. (I've seen this in a YouTube video; it looked like it worked pretty well!) It of course follows that game pieces must have a flat top.

I have a few things I am not sure how to do just yet, and would like some input on them.

Delta robot motors: I assume that servos are too inaccurate for what I will need. Could stepper motors (which I gear down) be what I need to use? I am wary of anything that uses open-loop control. Determining a stepper motor's current position could be done by calibrating: moving the arms up to some mechanically limited position and zeroing the step count there, for example.
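Steppers are themselves open loop, but the homing idea in the paragraph above closes the gap: on every power-up, drive each arm toward its mechanical limit (ideally a limit switch just before the hard stop) and define that as step zero. A minimal sketch of that logic, where step_fn and limit_switch_fn are hypothetical stand-ins for your motor driver and switch input:

```python
def home_axis(step_fn, limit_switch_fn, max_steps=10000):
    """Drive one axis toward its limit switch, one step at a time, and
    return 0 as the new position counter once the switch trips.
    Raises if the switch never triggers (jam, wiring fault, etc.)."""
    for _ in range(max_steps):
        if limit_switch_fn():
            return 0               # this mechanical position is now step 0
        step_fn(-1)                # one step toward the limit
    raise RuntimeError("limit switch never triggered during homing")
```

From then on, counting steps from that reference gives an absolute position, as long as the motor never skips steps under load (which sizing the motors and gearing conservatively should prevent).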

Error creep: Placing and picking up the game pieces thousands of times will cause even minuscule errors to accrue in an unacceptable manner. I've dismissed the idea of using magnets to compensate as inelegant and impractical. My best idea for handling this is computer vision.

I could place a downward-facing camera on the side of the end effector, interfaced with OpenCV. The robot would move the camera above the expected position of the piece to be picked up. The camera could detect where the piece actually is, and the arm's pick-up position could be adjusted to match. The problem now is: how do I do that?

I think the simplest, most robust (if inelegant) option is to simply put a green dot in the center of each piece. To detect a piece, you would chroma key the camera image, run blob detection, then pick up the blob closest to the center.
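That pipeline is exactly what OpenCV's cv2.inRange + cv2.findContours would give you. To make the steps explicit, here is the same chroma-key / blob-detection / nearest-centroid logic written out in plain NumPy; the color thresholds are assumptions you would tune for your dot color and lighting.

```python
import numpy as np

def find_green_blob(img, expected):
    """img: HxWx3 RGB uint8 array. expected: (row, col) where the dot
    should be. Returns the centroid of the green blob closest to the
    expected position, or None if no green pixels were found."""
    # chroma key: keep pixels where green dominates (thresholds are guesses)
    mask = (img[:, :, 1] > 150) & (img[:, :, 0] < 100) & (img[:, :, 2] < 100)
    seen = np.zeros(mask.shape, dtype=bool)
    blobs = []
    # blob detection: 4-connected flood fill over the masked pixels
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        stack, pixels = [(sy, sx)], []
        seen[sy, sx] = True
        while stack:
            y, x = stack.pop()
            pixels.append((y, x))
            for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        ys, xs = zip(*pixels)
        blobs.append((sum(ys) / len(ys), sum(xs) / len(xs)))  # centroid
    if not blobs:
        return None
    # pick the blob centroid closest to where the piece should be
    return min(blobs, key=lambda c: (c[0] - expected[0])**2
                                  + (c[1] - expected[1])**2)
```

The offset between the chosen centroid and the image center, multiplied by a millimeters-per-pixel scale you calibrate once at the working height, gives the correction to apply to the pick-up position.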

I have never played with computer vision before, and would like all the advice/links to resources you could give me!

Rolling dice: Some games require rolling dice. While I could obviously virtualize this, I would like to make the process tangible, which means physically rolling physical dice and using the camera mounted on the end effector to count the pips. I am okay with having dedicated hardware to roll the dice, but obviously I have to keep the dice from rolling off the table, etc. Any ideas on how to achieve this?
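The pip-counting half is the same blob machinery as the green-dot detection: threshold, label connected components, count them. A sketch assuming white dice with dark pips viewed top-down (the threshold of 80 is a guess to calibrate; in practice you would also reject blobs that are too small or too large to be a pip):

```python
import numpy as np

def count_pips(gray):
    """gray: HxW uint8 grayscale crop of one die face.
    Counts dark blobs (pips) via thresholding + 4-connected labeling."""
    mask = gray < 80               # pips are dark on a light face
    seen = np.zeros(mask.shape, dtype=bool)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        count += 1                 # new blob found
        stack = [(sy, sx)]
        seen[sy, sx] = True
        while stack:
            y, x = stack.pop()
            for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
    return count
```

For containment, a shallow walled tray (a "dice tower" dropping into a fenced landing area is a common tabletop solution) keeps the dice in a known region for the camera to find.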

 

