Author Topic: Roborealm or OpenCV?  (Read 7726 times)

Offline SeagullOne (Topic starter)

  • Robot Overlord
  • ****
  • Posts: 248
  • Helpful? 0
  • Humans and Robots working together for our future.
    • Loren John Presley - Author, Artist, Roboteer
Roborealm or OpenCV?
« on: September 07, 2010, 09:48:43 AM »
I've really been getting into OpenCV lately. I've experimented with a face detection implementation in my project NINA, and it works great in my robot's human interaction program. It's really fun to watch it work.

One issue I'm having is that vision processing in OpenCV is noticeably slower than it was when I used RoboRealm. When I plug in the same stereoscopic head I've built for my robot and run both cameras through OpenCV, the frame rate drops considerably. With RoboRealm, the frame rate stayed smooth even with vision processing algorithms running on both cameras.

Should there be a difference? I'm seriously considering buying a license for RoboRealm and switching my robot's main vision application from OpenCV to RoboRealm.

It's been a while since I used RoboRealm (back before you had to purchase a license for it), so I don't know whether they've added features like face detection, which I would need for my robot's application.
I think the chauffeur did it.

.......

He did.

Offline garrettg84

  • Robot Overlord
  • ****
  • Posts: 187
  • Helpful? 8
  • Armchair Roboticist Extraordinaire
    • http://www.garrettgalloway.com/
Re: Roborealm or OpenCV?
« Reply #1 on: September 07, 2010, 01:11:11 PM »
I am new to robotics and most of this image processing. OpenCV appears to be a library, while RoboRealm looks more like an application suite. The difference in performance between the two may come down to how the OpenCV library and its API calls are being used. I am not attacking your programming skills, and I hope you do not interpret this comment that way. There may also be significant differences in how the two actually work: the algorithms used, and so on.

Have you done any profiling on the app you are using with OpenCV? Do you know whether the bottleneck is memory or CPU related? Often, in media-intensive applications like this, threading plays a big role in how well the application performs. If you are writing the script/code that drives OpenCV yourself, versus whatever RoboRealm does internally, it may be a threading issue.

The other piece to consider: is the frame rate of the analysis actually dropping, or just the frame rate of the displayed images? Consider the reverse as well: could RoboRealm simply be sending you clean video while not actually running the analysis that quickly (wasted resources)? Which matters more to you? I know that may sound snarky, but I'm serious. When giving a demonstration to a non-technical crowd, fluid video may be more important, especially if you are attempting to acquire funding; bigwigs love flashy stuff. Nerd types would understand that the majority of the resources are being poured into processing/analysis instead of into displaying current information.
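
One cheap way to tell the difference is to count completed analysis passes per second, independent of whatever the display is doing. Something like this (untested, and the two functions are just stand-ins for whatever your capture and vision calls are):
Code: [Select]
import time

def grab_frame():
    return None    # stand-in for your camera capture call

def analyze(frame):
    pass           # stand-in for your vision processing

frames = 0
start = time.time()
while True:
    analyze(grab_frame())
    frames += 1
    now = time.time()
    if now - start >= 1.0:
        # this number is the real analysis rate, regardless of what the window shows
        print "analysis frames per second:", frames
        frames = 0
        start = now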
-garrett

Offline SeagullOne (Topic starter)

  • Robot Overlord
  • ****
  • Posts: 248
  • Helpful? 0
  • Humans and Robots working together for our future.
    • Loren John Presley - Author, Artist, Roboteer
Re: Roborealm or OpenCV?
« Reply #2 on: September 07, 2010, 03:37:56 PM »
Thanks garrettg84!

I'm not a whiz when it comes to programming, but I know a little of this and a little of that. :)

I'm mainly concerned about the slow frame rate not because I want to see fluid video, but because my robot's responses to what it sees have to be fluid as well. For instance, if my robot is tracking a human face, I want the robot to keep up with the face's position so it doesn't lose the face too easily (or make the servo movements jittery because the frame rate is too slow).

I'm using Python to program with OpenCV, and I'm leaning toward sticking with OpenCV for now. If I do, though, I'm wondering how I can speed up the processing by optimizing my scripts... ???

How would I do some profiling on my application? A CPU meter, or something else maybe? Just to be sure it's not the CPU or memory that's bottlenecking the rate.

Thanks again!
I think the chauffeur did it.

.......

He did.

Offline garrettg84

  • Robot Overlord
  • ****
  • Posts: 187
  • Helpful? 8
  • Armchair Roboticist Extraordinaire
    • http://www.garrettgalloway.com/
Re: Roborealm or OpenCV?
« Reply #3 on: September 08, 2010, 07:35:01 AM »
I'm mainly concerned about the slow frame rate not because I want to see fluid video, but because my robot's responses to what it sees have to be fluid as well. For instance, if my robot is tracking a human face, I want the robot to keep up with the face's position so it doesn't lose the face too easily (or make the servo movements jittery because the frame rate is too slow).

I am still not sure whether the choppy frame rate is in the display (what you see) or in the actual frame rate of the analysis (what OpenCV sees). Do your functions/events/traps get called often enough, and do they return their results quickly enough? Does your bot's response actually appear jerky as is? Would you mind posting the source somewhere so that I can get a feel for your implementation?

I'm using Python to program with OpenCV, and I'm leaning toward sticking with OpenCV for now. If I do, though, I'm wondering how I can speed up the processing by optimizing my scripts... ???
Without the source, and given my total lack of experience with OpenCV, it's hard to advise on this. As a side note, scripting languages are generally slower than their compiled counterparts, so it may be in your interest to take a look at a compiled language like C. That said, with compiled libraries such as the one you are using, performance tends not to be an issue (depending on the implementation, of course) because the scripting language is used simply as glue.

How would I do some profiling on my application? A CPU meter, or something else maybe? Just to be sure it's not the CPU or memory that's bottlenecking the rate.

Profiling is typically done to see where your app is spending most of its time, and it surfaces all kinds of information about an application. If you are spending time in wait states, you are often I/O bound. If your app appears slow and all the available resources of a single CPU core are maxed out, it is likely a concurrency issue.
Profiling in python:
http://docs.python.org/library/profile.html
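
A quick way to try it without changing your code (the script name here is just a placeholder for whatever you saved yours as):
Code: [Select]
python -m cProfile -o vision.prof my_face_detect.py

and then, to read the results:
Code: [Select]
import pstats

stats = pstats.Stats('vision.prof')
stats.sort_stats('cumulative').print_stats(15)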

Just a side thought: here are some general performance tips that also briefly mention profiling.
General Performance Tips:
http://wiki.python.org/moin/PythonSpeed/PerformanceTips

Source code, or at least the main loop of your program, would be pretty important for figuring this out. Even if you post pseudocode, we may be able to find issues in the program flow.

You mentioned that you are using two cameras; that really makes me think this is a concurrency issue. Your code may be taking one frame from one cam, processing it, and then going back to grab the next frame from the other cam in a single loop. That is a simple way of getting the work done, but it would certainly make things appear choppy compared to processing the cameras in parallel. Splitting each capture-and-process step into its own thread would let you take advantage of a multi-core CPU, and because each cam is likely a different I/O source (yes, USB, and sometimes the same bus), it can also alleviate some of the I/O constraints.
http://www.devshed.com/c/a/Python/Basic-Threading-in-Python/
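
Just to make the idea concrete, here is a bare-bones sketch of that split: one capture thread per camera feeding a queue, with the display (and any heavy processing) kept in the main thread. It's untested, and it assumes the cv capture calls are safe to use from worker threads:
Code: [Select]
import threading
import Queue
import cv

def capture_loop(cam_index, frame_queue):
    # each camera gets its own thread and its own capture device
    capture = cv.CreateCameraCapture(cam_index)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, 320)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, 240)
    while True:
        frame = cv.QueryFrame(capture)
        if frame is None:
            break
        # QueryFrame reuses its internal buffer, so store a copy
        frame_queue.put(cv.CloneImage(frame))

left_q = Queue.Queue(maxsize=2)
right_q = Queue.Queue(maxsize=2)

for index, q in ((1, left_q), (2, right_q)):
    worker = threading.Thread(target=capture_loop, args=(index, q))
    worker.daemon = True   # let the program exit when the main loop ends
    worker.start()

cv.NamedWindow('Left Camera')
cv.NamedWindow('Right Camera')

# keep the display (and any heavy processing) in the main thread
while True:
    cv.ShowImage('Left Camera', left_q.get())
    cv.ShowImage('Right Camera', right_q.get())
    if cv.WaitKey(10) == 0x1b:   # ESC
        break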
-garrett

Offline SeagullOne (Topic starter)

  • Robot Overlord
  • ****
  • Posts: 248
  • Helpful? 0
  • Humans and Robots working together for our future.
    • Loren John Presley - Author, Artist, Roboteer
Re: Roborealm or OpenCV?
« Reply #4 on: September 08, 2010, 01:45:06 PM »
Thanks again!

Here is the source code I'm using for face detection:

Code: [Select]
import sys
import cv

def detect(image):
    image_size = cv.GetSize(image)
   
    # create grayscale version
    global grayscale
    grayscale = cv.CreateImage(image_size, 8, 1)
    cv.CvtColor(image, grayscale, cv.CV_BGR2GRAY)

    # create storage
    global storage
    storage = cv.CreateMemStorage(0)
   
    # equalize histogram
    cv.EqualizeHist(grayscale, grayscale)
       
    #detect objects
    global cascade
    cascade = cv.Load("C:\OpenCV2.1\data\haarcascades\haarcascade_frontalface_default.xml")
    global faces
    faces = cv.HaarDetectObjects(grayscale, cascade, storage, 1.2, 2, 0, (50, 50))

    if faces:
        for (x,y,w,h),n in faces:
            pt1 = (x,y)
            pt2 = (x+w,y+h)
            cv.Rectangle(image, pt1, pt2, 255)

if __name__ == "__main__":

    print "Press ESC to exit ..."
    # create windows
    cv.NamedWindow('Camera')
 
    # create capture device
    device = 1 # assume we want first device
    capture = cv.CreateCameraCapture(1)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, 320)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, 240)   


    # check if capture device is OK
    if not capture:
        print "Error opening capture device"
        sys.exit(1)
 
    while 1:
        # do forever
        # capture the current frame

        frame = cv.QueryFrame(capture)

        if frame is None:
            print "breaking"
            break
 
        # mirror
        cv.Flip(frame, None, 1)
 
        # face detection
        detect(frame)
 
        # display webcam image
        cv.ShowImage('Camera', frame)

        # handle events
        k = cv.WaitKey(10)
 
        if k == 0x1b: # ESC
            print 'ESC pressed. Exiting ...'
            break

This one just uses one camera. Here's the source code I'm working on for displaying the images from both cameras (no stereoscopic vision yet). I'm making progress on the stereo vision script, but it still needs a little debugging.

Code: [Select]
import sys
import cv

running = 1

capture_l = 0
capture_r = 1

left = cv.CreateCameraCapture(1)
right = cv.CreateCameraCapture(2)

cv.SetCaptureProperty(left, cv.CV_CAP_PROP_FRAME_WIDTH, 320)
cv.SetCaptureProperty(left, cv.CV_CAP_PROP_FRAME_HEIGHT, 240)

cv.SetCaptureProperty(right, cv.CV_CAP_PROP_FRAME_WIDTH, 320)
cv.SetCaptureProperty(right, cv.CV_CAP_PROP_FRAME_HEIGHT, 240)


if not left:
    print "Error opening left capture device"
    sys.exit(1)
if not right:
    print "Error opening right capture device"
    sys.exit(1)

while running == 1:

    frameL = cv.QueryFrame(left)
    frameR = cv.QueryFrame(right)

    if frameL is None:
        print "breaking: Failed to capture from Left Camera"
        break
    if frameR is None:
        print "breaking: Failed to capture from Right Camera"
        break

    cv.ShowImage('Left Camera', frameL)
    cv.ShowImage('Right Camera', frameR)

    k = cv.WaitKey(10)
   
    if k == 0x1b: # ESC
        print 'ESC pressed. Exiting ...'
        break

As I believe you supposed, they are running on the same thread... I'll definitely take a look at the threading information you posted and see if I can work that into the script. Then I'll see whether it appears to run more smoothly.

On a side note, I haven't actually tested my robot's movement for choppiness. I'm in the middle of reconstructing the frame. I'll work with what I have in the meantime, however.
I think the chauffeur did it.

.......

He did.

Offline garrettg84

  • Robot Overlord
  • ****
  • Posts: 187
  • Helpful? 8
  • Armchair Roboticist Extraordinaire
    • http://www.garrettgalloway.com/
Re: Roborealm or OpenCV?
« Reply #5 on: September 09, 2010, 06:13:27 AM »
Thanks again!
No problem, I enjoy coding and problem solving =)


I don't have syntax highlighting, and I don't have Python (or anything useful, for that matter) on this workstation. I will do my best to just add comments (look for '# #') to the source where things could be changed for improvement. From a pure program-flow perspective, though, it looks pretty solid. Good job!

If I were to make any MAJOR changes to the program flow, I would add threading to the design. I will assume this OpenCV library is thread safe.
1. Create a (global) queue that maintains the order of the captured images, so that you don't end up
    displaying images out of order.
        - A possible structure is an array of images, each with a flag marking whether it has been
          displayed yet.
2. Create a dispatcher (maybe a thread, maybe the main loop - your call): something that can spawn
    new threads to do the capture and processing.
        - Have the dispatcher monitor the queue (think: iterate from 0 to [queue length]).
        - For images marked as 'displayed' (or queue slots with no image yet, when starting fresh),
          have the dispatcher kick off a new thread to capture and process a new frame.
        - Pass the queue slot (think: array index) that has already been displayed to the newly
          spawned thread.
        - Put the image into the queue slot FIRST, then mark it as 'not displayed'.
3. Create a separate thread for displaying images in your output window.
        - Because the capture-and-process threads are dispatched circularly, the images SHOULD be
          MOSTLY in order.
        - Have this thread iterate over the queue and, for each element, wait in a loop until that
          element is marked 'not displayed' (remember, other threads are working on this...).
        - Once the element shows up as 'not displayed', display it, THEN mark it as 'displayed'.
4. Continue to use your main (thread) function to monitor and control the overarching process.
        - Maintain a way to stop the dispatcher and the image display thread.
        - Monitor for errors and react accordingly.

I hope I was clear enough with that description. The actual implementation might take fewer characters than what I wrote above, but as I said, I don't have Python here, or even anything to help with syntax highlighting, so I don't want to write code that won't execute.

On to the comments: things that could be changed now, without a complete redesign, for performance gains...

Here is the source code I'm using for face detection:

Code: [Select]
import sys
import cv

# # for the variables later suggested to be put in a more global location,
# # create them here and define them in your main program (not the loop portion).

def detect(image):
    # # Maybe hardcode this - no sense in calling a function every single
    # # face-detection go-around. I saw 320x240 hardcoded somewhere else in here.
    image_size = cv.GetSize(image)
   

    # # Maybe break out this image creation and variable from inside this function to
    # # something truly global, then simply copy that instance of the image to a local
    # # variable. This would again avoid additional unnecessary external API calls.
    # create grayscale version
    global grayscale
    grayscale = cv.CreateImage(image_size, 8, 1)
    cv.CvtColor(image, grayscale, cv.CV_BGR2GRAY)

    # create storage
    global storage
    storage = cv.CreateMemStorage(0)
   
    # # This looks like it could be broken out along with the creation of the grayscale image.
    # # In general, when going for speed/performance you want to avoid repetitive work.
    # equalize histogram
    cv.EqualizeHist(grayscale, grayscale)
       

    # # This 'cascade' file is loaded every single time this function runs. Move it out of the function to
    # # something global and use it where necessary. Windows will LIKELY cache the file in memory, so you
    # # aren't actually reading it from disk every time, but it still takes time to open and load a file.
    # # Note: the loading process probably also parses the contents of that file into an in-memory
    # # structure representing the data - which takes up CPU time as well.
    #detect objects
    global cascade
    cascade = cv.Load("C:\OpenCV2.1\data\haarcascades\haarcascade_frontalface_default.xml")
    global faces
    faces = cv.HaarDetectObjects(grayscale, cascade, storage, 1.2, 2, 0, (50, 50))

    if faces:
        for (x,y,w,h),n in faces:
            pt1 = (x,y)
            pt2 = (x+w,y+h)
            cv.Rectangle(image, pt1, pt2, 255)

if __name__ == "__main__":

    print "Press ESC to exit ..."
    # create windows
    cv.NamedWindow('Camera')
 
    # create capture device
    device = 1 # assume we want first device
    capture = cv.CreateCameraCapture(1)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, 320)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, 240)   


    # check if capture device is OK
    if not capture:
        print "Error opening capture device"
        sys.exit(1)
 
    while 1:
        # do forever
        # capture the current frame

        frame = cv.QueryFrame(capture)

        if frame is None:
            print "breaking"
            break
 
        # mirror
        cv.Flip(frame, None, 1)
 
        # face detection
        detect(frame)
 
        # display webcam image
        cv.ShowImage('Camera', frame)

        # handle events
        k = cv.WaitKey(10)
 
        if k == 0x1b: # ESC
            print 'ESC pressed. Exiting ...'
            break

This one just uses one camera. Here's the source code I'm working on for displaying the images from both cameras (no stereoscopic vision yet). I'm making progress on the stereo vision script, but it still needs a little debugging.

Code: [Select]
import sys
import cv

running = 1

capture_l = 0
capture_r = 1

left = cv.CreateCameraCapture(1)
right = cv.CreateCameraCapture(2)

cv.SetCaptureProperty(left, cv.CV_CAP_PROP_FRAME_WIDTH, 320)
cv.SetCaptureProperty(left, cv.CV_CAP_PROP_FRAME_HEIGHT, 240)

cv.SetCaptureProperty(right, cv.CV_CAP_PROP_FRAME_WIDTH, 320)
cv.SetCaptureProperty(right, cv.CV_CAP_PROP_FRAME_HEIGHT, 240)


if not left:
    print "Error opening left capture device"
    sys.exit(1)
if not right:
    print "Error opening right capture device"
    sys.exit(1)

while running == 1:

    frameL = cv.QueryFrame(left)
    frameR = cv.QueryFrame(right)

    if frameL is None:
        print "breaking: Failed to capture from Left Camera"
        break
    if frameR is None:
        print "breaking: Failed to capture from Right Camera"
        break

    cv.ShowImage('Left Camera', frameL)
    cv.ShowImage('Right Camera', frameR)

    k = cv.WaitKey(10)
   
    if k == 0x1b: # ESC
        print 'ESC pressed. Exiting ...'
        break

As I believe you supposed, they are running on the same thread... I'll definitely take a look at the threading information you posted and see if I can work that into the script. Then I'll see whether it appears to run more smoothly.

On a side note, I haven't actually tested my robot's movement for choppiness. I'm in the middle of reconstructing the frame. I'll work with what I have in the meantime, however.
-garrett

Offline jaime

  • Jr. Member
  • **
  • Posts: 30
  • Helpful? 1
Re: Roborealm or OpenCV?
« Reply #6 on: September 09, 2010, 09:37:27 AM »
You've been given some good suggestions.  I can help you reorganize your code, but first a few questions.

1 - the image you're getting from QueryFrame -- do its dimensions change?
2 - what version of Python are you using? Can you jump to 2.6 if you're on an earlier version?

As was mentioned above, if the image dimensions are constant, you can optimize a bit.  I do not know how much faster you can get this, though.  Outside of your main loop, you really only have one other loop.

Your code does not look too expensive, though.

If you decide you need real parallelism, then you *HAVE* to use 2.6.  Because of Python's global interpreter lock, only one Python thread can execute in the interpreter at any given time, so threads alone won't spread the work across multiple cores.  The multiprocessing module, new in 2.6, gets around this and will let you use multiple cores on your computer.
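
To show the shape of it, here is the bare skeleton of a multiprocessing version (the worker body is just a placeholder; in a real program you would open the camera and run the detection inside it):
Code: [Select]
import multiprocessing

def detect_worker(cam_index, results):
    # each process is a separate interpreter, so it has its own GIL
    # and can run flat-out on its own core
    results.put((cam_index, 'face positions would go here'))

if __name__ == '__main__':
    results = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=detect_worker, args=(i, results))
               for i in (1, 2)]
    for w in workers:
        w.start()
    print results.get()
    print results.get()
    for w in workers:
        w.join()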

jaime

Offline SeagullOne (Topic starter)

  • Robot Overlord
  • ****
  • Posts: 248
  • Helpful? 0
  • Humans and Robots working together for our future.
    • Loren John Presley - Author, Artist, Roboteer
Re: Roborealm or OpenCV?
« Reply #7 on: September 09, 2010, 09:50:48 AM »
Hi Jaime.

I am using python 2.6.
The dimensions are constant.

I still have a lot to learn about threading. I tried to implement some threading in the Python program and got it to work, but it ran at the same speed, lol. :D

However, after reading garrett's suggestions, I went back and reworked the face detection code. This time it works at the speed I want (though without threading).
It's fantastic!

Here is the final code (but if you guys think it can be optimized further, please let me know).
Code: [Select]
import sys
import cv

#Parameters for haar detection

min_size = (20, 20)
image_scale = 2
haar_scale = 1.2
min_neighbors = 2
haar_flags = cv.CV_HAAR_DO_CANNY_PRUNING

# create capture device
device = 1
capture = cv.CreateCameraCapture(1)
cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, 320)
cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, 240)

storage = cv.CreateMemStorage(0)

def face_detect(image, cascade):
    MakeImage = cv.CreateImage((320, 240), 8, 1)
    Scale_Image = cv.CreateImage((cv.Round(320 / image_scale),
                   cv.Round (240 / image_scale)), 8, 1)

    #convert to grayscale
    cv.CvtColor(image, MakeImage, cv.CV_BGR2GRAY)
    #scale image to increase processing
    cv.Resize(MakeImage, Scale_Image, cv.CV_INTER_LINEAR)

    cv.EqualizeHist(Scale_Image, Scale_Image)

    if(cascade):
        t = cv.GetTickCount()
        faces = cv.HaarDetectObjects(MakeImage, cascade, storage, 1.2, 2, 0, (50, 50))

        t = cv.GetTickCount() - t
        print "Detection Time =", t
        if faces:
            for (x,y,w,h),n in faces:
                pt1 = (x,y)
                pt2 = (x+w,y+h)
                cv.Rectangle(image, pt1, pt2, 255)

if __name__ == "__main__":

    print "Press ESC to exit ..."
    # create windows
    cv.NamedWindow('Camera')

    cascade = cv.Load("C:\OpenCV2.1\data\haarcascades\haarcascade_frontalface_default.xml")

    # check if capture device is OK
    if not capture:
        print "Error opening capture device"
        sys.exit(1)
 
    while 1:
        # do forever
        # capture the current frame

        frame = cv.QueryFrame(capture)

        if frame is None:
            print "breaking"
            break
 
        # mirror
        cv.Flip(frame, None, 1)
 
        # face detection
        face_detect(frame, cascade)
 
        # display webcam image
        cv.ShowImage('Camera', frame)

        # handle events
        k = cv.WaitKey(10)
 
        if k == 0x1b: # ESC
            print 'ESC pressed. Exiting ...'
            break

Next I'll see how this works using both cameras.
Thanks to you both.
I think the chauffeur did it.

.......

He did.

Offline garrettg84

  • Robot Overlord
  • ****
  • Posts: 187
  • Helpful? 8
  • Armchair Roboticist Extraordinaire
    • http://www.garrettgalloway.com/
Re: Roborealm or OpenCV?
« Reply #8 on: September 09, 2010, 07:23:59 PM »
I've only broken out a few more lines of code; instead of doing calculations and external API calls, your app now basically just copies an image from memory in both cases. It should be a modest improvement.

I am on my cell phone right now, but I should have my cable modem back up tomorrow. I'll download the OpenCV library and see what else I can do to speed this up, and possibly implement a threading model for it. Sadly, I don't really know what this code is doing under the hood - I'm not fluent in image processing, I'm bad at math, and I haven't the slightest clue about signal processing. I can work the logic, but I have no idea what it all means in there =(

Code: [Select]
import sys
import cv

#Parameters for haar detection

min_size = (20, 20)
image_scale = 2
haar_scale = 1.2
min_neighbors = 2
haar_flags = cv.CV_HAAR_DO_CANNY_PRUNING

# create capture device
device = 1
capture = cv.CreateCameraCapture(1)
cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, 320)
cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, 240)

storage = cv.CreateMemStorage(0)

# # Last chunk to move out of the loop
MakeImage = cv.CreateImage((320, 240), 8, 1)
Scale_Image = cv.CreateImage((cv.Round(320 / image_scale),
               cv.Round (240 / image_scale)), 8, 1)

def face_detect(image, cascade):
    # # Refer to the global images through local names for the work below.
    # # In Python this is just a cheap name binding - the point is that the
    # #     expensive creation calls above now run once instead of every frame.
    tmp_MakeImage = MakeImage
    tmp_Scale_Image = Scale_Image

    #convert to grayscale
    cv.CvtColor(image, tmp_MakeImage, cv.CV_BGR2GRAY)
    #scale image to increase processing
    cv.Resize(tmp_MakeImage, tmp_Scale_Image, cv.CV_INTER_LINEAR)

    cv.EqualizeHist(tmp_Scale_Image, tmp_Scale_Image)

    if(cascade):
        t = cv.GetTickCount()
        faces = cv.HaarDetectObjects(tmp_MakeImage, cascade, storage, 1.2, 2, 0, (50, 50))

        t = cv.GetTickCount() - t
        print "Detection Time =", t
        if faces:
            for (x,y,w,h),n in faces:
                pt1 = (x,y)
                pt2 = (x+w,y+h)
                cv.Rectangle(image, pt1, pt2, 255)

if __name__ == "__main__":

    print "Press ESC to exit ..."
    # create windows
    cv.NamedWindow('Camera')

    cascade = cv.Load("C:\OpenCV2.1\data\haarcascades\haarcascade_frontalface_default.xml")

    # check if capture device is OK
    if not capture:
        print "Error opening capture device"
        sys.exit(1)
 
    while 1:
        # do forever
        # capture the current frame

        frame = cv.QueryFrame(capture)

        if frame is None:
            print "breaking"
            break
 
        # mirror
        cv.Flip(frame, None, 1)
 
        # face detection
        face_detect(frame, cascade)
 
        # display webcam image
        cv.ShowImage('Camera', frame)

        # handle events
        k = cv.WaitKey(10)
 
        if k == 0x1b: # ESC
            print 'ESC pressed. Exiting ...'
            break
-garrett

Offline jaime

  • Jr. Member
  • **
  • Posts: 30
  • Helpful? 1
Re: Roborealm or OpenCV?
« Reply #9 on: September 10, 2010, 10:22:49 AM »
Feeling adventuresome?

I cleaned up this code to make it pythonic.  I'm sure I broke something.

I'll help you fix it if you're interested.

Code: [Select]
import sys
import cv


class CaptureDeviceCreationError(Exception):
    '''
    Raised when a capture device can not be created.
    '''

# Unused variables.  I wasn't sure if these could be removed.  One reason you
# cannot haphazardly remove variables is that other modules may import this
# module and look for them.
min_size = (20, 20)
haar_scale = 1.2
min_neighbors = 2
haar_flags = cv.CV_HAAR_DO_CANNY_PRUNING

# Parameters for haar detection
# Uppercase constants -- it's a programming convention.
IMAGE_SCALE = 2
WIDTH, HEIGHT = 320, 240
DEVICE = 1   

# Keycode for escape
KEY_ESC = 0x1b

def create_capture_device(index):
    '''
    Create the capture device.

    If the device can't be captured, a CaptureDeviceCreationError is raised.

    index:  Index of the camera to be used. If there is only one camera or it
    does not matter what camera to use -1 may be passed.
    '''
    capture = cv.CreateCameraCapture(index) # Did you mean to use DEVICE here instead of 1?
    # Making the assumption that None is returned if the camera can't be created.
    if not capture:
        raise CaptureDeviceCreationError()
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, WIDTH)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, HEIGHT) 
    return capture


def face_detect(image, cascade, capture):
    # I think make_image and scale_image can be reused, if so make this global
    # by moving the below statements just above this function definition.
    make_image = cv.CreateImage((WIDTH, HEIGHT), 8, 1)
    scale_image = cv.CreateImage((cv.Round(WIDTH / IMAGE_SCALE),
                                  cv.Round (HEIGHT / IMAGE_SCALE)), 8, 1)

    # convert to grayscale
    cv.CvtColor(image, make_image, cv.CV_BGR2GRAY)

    # scale image to increase processing
    cv.Resize(make_image, scale_image, cv.CV_INTER_LINEAR)

    cv.EqualizeHist(scale_image, scale_image)

    if cascade:
        t = cv.GetTickCount()
        faces = cv.HaarDetectObjects(make_image, cascade, storage, 1.2, 2, 0, (50, 50))

        t = cv.GetTickCount() - t
        print "Detection Time =", t
        for (x,y,w,h),n in faces:
            pt1 = (x,y)
            pt2 = (x+w,y+h)
            cv.Rectangle(image, pt1, pt2, 255)


def main():
    print "Press ESC to exit ..."
    # create windows
    cv.NamedWindow('Camera')

    cascade = cv.Load("C:\OpenCV2.1\data\haarcascades\haarcascade_frontalface_default.xml")

    capture = create_capture_device(DEVICE)
    storage = cv.CreateMemStorage(0)

    while 1:
        # do forever
        # capture the current frame

        frame = cv.QueryFrame(capture)
        if frame is None:
            return 'Could not QueryFrame'
 
        # mirror
        cv.Flip(frame, None, 1)
 
        # face detection
        face_detect(frame, cascade, capture)
 
        # display webcam image
        cv.ShowImage('Camera', frame)

        # handle events
        if cv.WaitKey(10) == KEY_ESC:
            print 'ESC pressed. Exiting ...'
            return 0


if __name__ == '__main__':
    sys.exit(main())


Offline SeagullOne (Topic starter)

  • Robot Overlord
  • ****
  • Posts: 248
  • Helpful? 0
  • Humans and Robots working together for our future.
    • Loren John Presley - Author, Artist, Roboteer
Re: Roborealm or OpenCV?
« Reply #10 on: September 11, 2010, 09:03:28 PM »
Thank you, Jaime!

I guess I have a lot to learn about python standards.  :P

I did have to rearrange a few items in the code to make it work, but it runs beautifully!

Code: [Select]
import sys
import cv

class CaptureDeviceCreationError(Exception):
    '''
    Raised when a capture device can not be created.
    '''

# Unused variables.  I wasn't sure if these could be removed.  One reason you
# cannot haphazardly remove variables is that other modules may import this
# module and look for them.
min_size = (20, 20)
haar_scale = 1.2
min_neighbors = 2
haar_flags = cv.CV_HAAR_DO_CANNY_PRUNING

# Parameters for haar detection
# Uppercase constants -- it's a programming convention.
IMAGE_SCALE = 2
WIDTH, HEIGHT = 320, 240
DEVICE = 1   

# Keycode for escape
KEY_ESC = 0x1b

def create_capture_device(index):
    '''
    Create the capture device.

    If the device can't be captured, a CaptureDeviceCreationError is raised.

    index:  Index of the camera to be used. If there is only one camera or it
    does not matter what camera to use -1 may be passed.
    '''
    capture = cv.CreateCameraCapture(index) # Did you mean to use DEVICE here instead of 1?
    # Making the assumption that None is returned if the camera can't be created.
    if not capture:
        raise CaptureDeviceCreationError()
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, WIDTH)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, HEIGHT) 
    return capture


make_image = cv.CreateImage((WIDTH, HEIGHT), 8, 1)
scale_image = cv.CreateImage((cv.Round(WIDTH / IMAGE_SCALE),
                                  cv.Round (HEIGHT / IMAGE_SCALE)), 8, 1)

def face_detect(image, cascade, capture):
    # convert to grayscale
    cv.CvtColor(image, make_image, cv.CV_BGR2GRAY)

    # scale image to increase processing
    cv.Resize(make_image, scale_image, cv.CV_INTER_LINEAR)

    cv.EqualizeHist(scale_image, scale_image)

    storage = cv.CreateMemStorage(0)

    if cascade:
        t = cv.GetTickCount()
        faces = cv.HaarDetectObjects(make_image, cascade, storage, 1.2, 2, 0, (50, 50))

        t = cv.GetTickCount() - t
        print "Detection Time =", t
        for (x,y,w,h),n in faces:
            pt1 = (x,y)
            pt2 = (x+w,y+h)
            cv.Rectangle(image, pt1, pt2, 255)


def main():
    print "Press ESC to exit ..."
    # create windows
    cv.NamedWindow('Camera')

    cascade = cv.Load("C:\OpenCV2.1\data\haarcascades\haarcascade_frontalface_default.xml")

    capture = create_capture_device(DEVICE)

    while 1:
        # do forever
        # capture the current frame

        frame = cv.QueryFrame(capture)
 
        # mirror
        cv.Flip(frame, None, 1)
 
        # face detection
        face_detect(frame, cascade, capture)
 
        # display webcam image
        cv.ShowImage('Camera', frame)

        # handle events
        if cv.WaitKey(10) == KEY_ESC:
            print 'ESC pressed. Exiting ...'
            return 0


if __name__ == '__main__':
    sys.exit(main())

Next, I'm going to see how it works using two webcams. I've already tried implementing a second, identical function for the right webcam on my robot's stereoscopic head, but I guess it wasn't such a good idea, because no matter what I do the program freezes up when I run it...
The idea of the code below is to see how fast the program runs with face detection on two webcam feeds. Eventually I'm going to implement basic stereo vision, but until I can get that working, I'm using what I've got so far.

Again, be warned that the code below freezes up my Python window; sometimes I need to log off and log back on to get it to go away...

Code: [Select]
import sys
import cv

class CaptureDeviceCreationError(Exception):
    '''
    Raised when a capture device can not be created.
    '''

min_size = (20, 20)
haar_scale = 1.2
min_neighbors = 2
haar_flags = cv.CV_HAAR_DO_CANNY_PRUNING

# Parameters for haar detection
IMAGE_SCALE = 2
WIDTH, HEIGHT = 320, 240
DEVICE = 1   

# Keycode for escape
KEY_ESC = 0x1b

def Left_Optic(index):
    '''
    Create the capture device.

    If the device can't be captured, a CaptureDeviceCreationError is raised.

    index:  Index of the camera to be used. If there is only one camera or it
    does not matter what camera to use -1 may be passed.
    '''
    Left = cv.CreateCameraCapture(1)
    if not capture:
        raise CaptureDeviceCreationError()
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, WIDTH)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, HEIGHT) 
    return Left

def Right_Optic(index):
    Right = cv.CreateCameraCapture(2)
    if not capture:
        raise CaptureDeviceCreationError()
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, WIDTH)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, HEIGHT) 
    return Right

make_image = cv.CreateImage((WIDTH, HEIGHT), 8, 1)
scale_image = cv.CreateImage((cv.Round(WIDTH / IMAGE_SCALE),
                                  cv.Round (HEIGHT / IMAGE_SCALE)), 8, 1)

def face_detect(image, cascade, Left, Right):
    # convert to grayscale
    cv.CvtColor(image, make_image, cv.CV_BGR2GRAY)

    # scale image to increase processing
    cv.Resize(make_image, scale_image, cv.CV_INTER_LINEAR)

    cv.EqualizeHist(scale_image, scale_image)

    storage = cv.CreateMemStorage(0)

    if cascade:
        t = cv.GetTickCount()
        faces = cv.HaarDetectObjects(make_image, cascade, storage, 1.2, 2, 0, (50, 50))

        t = cv.GetTickCount() - t
        print "Detection Time =", t
        for (x,y,w,h),n in faces:
            pt1 = (x,y)
            pt2 = (x+w,y+h)
            cv.Rectangle(image, pt1, pt2, 255)


def main():
    print "Press ESC to exit ..."
    # create windows
    cv.NamedWindow('Left Optic')
    cv.NamedWindow('Right Optic')

    cascade = cv.Load("C:\OpenCV2.1\data\haarcascades\haarcascade_frontalface_default.xml")

    captureL = Left_Optic(DEVICE)
    captureR = Right_Optic(DEVICE)
    while 1:
        # do forever
        # capture the current frame

        frameL = cv.QueryFrame(Left)
        frameR = cv.QueryFrame(Right)
        # mirror
        cv.Flip(frame, None, 1)
 
        # face detection
        face_detect(frame, cascade, Left, Right)
 
        # display webcam image
        cv.ShowImage('Left Optic', frameL)
        cv.ShowImage('Right Optic', frameR)
        # handle events
        if cv.WaitKey(10) == KEY_ESC:
            print 'ESC pressed. Exiting ...'
            return 0


if __name__ == '__main__':
    sys.exit(main())
I think the chauffeur did it.

.......

He did.

Offline garrettg84

  • Robot Overlord
  • ****
  • Posts: 187
  • Helpful? 8
  • Armchair Roboticist Extraordinaire
    • http://www.garrettgalloway.com/
Re: Roborealm or OpenCV?
« Reply #11 on: September 11, 2010, 10:38:39 PM »
Again, be warned that the code below freezes up my Python window; sometimes I need to log off and log back on to get it to go away...

Code: [Select]
import sys
import cv

class CaptureDeviceCreationError(Exception):
    '''
    Raised when a capture device can not be created.
    '''

min_size = (20, 20)
haar_scale = 1.2
min_neighbors = 2
haar_flags = cv.CV_HAAR_DO_CANNY_PRUNING

# Parameters for haar detection
IMAGE_SCALE = 2
WIDTH, HEIGHT = 320, 240
DEVICE = 1   

# Keycode for escape
KEY_ESC = 0x1b

# # one function will suffice for both
def capture_Optic(deviceIndex):
    '''
    Create the capture device.

    If the device can't be captured, a CaptureDeviceCreationError is raised.

    index:  Index of the camera to be used. If there is only one camera or it
    does not matter what camera to use -1 may be passed.
    '''
    capDevice = cv.CreateCameraCapture(deviceIndex)
    if not capture:
        raise CaptureDeviceCreationError()
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, WIDTH)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, HEIGHT) 
    return capDevice

make_image = cv.CreateImage((WIDTH, HEIGHT), 8, 1)
scale_image = cv.CreateImage((cv.Round(WIDTH / IMAGE_SCALE),
                                  cv.Round (HEIGHT / IMAGE_SCALE)), 8, 1)

def face_detect(image, cascade, Left, Right):
    # convert to grayscale
    cv.CvtColor(image, make_image, cv.CV_BGR2GRAY)

    # scale image to increase processing
    cv.Resize(make_image, scale_image, cv.CV_INTER_LINEAR)

    cv.EqualizeHist(scale_image, scale_image)

    storage = cv.CreateMemStorage(0)

    if cascade:
        t = cv.GetTickCount()
        faces = cv.HaarDetectObjects(make_image, cascade, storage, 1.2, 2, 0, (50, 50))

        t = cv.GetTickCount() - t
        print "Detection Time =", t
        for (x,y,w,h),n in faces:
            pt1 = (x,y)
            pt2 = (x+w,y+h)
            cv.Rectangle(image, pt1, pt2, 255)


def main():
    print "Press ESC to exit ..."
    # create windows
    cv.NamedWindow('Left Optic')
    cv.NamedWindow('Right Optic')

    cascade = cv.Load("C:\OpenCV2.1\data\haarcascades\haarcascade_frontalface_default.xml")

    # # Made these reflect the device to capture
    captureL = capture_Optic(1)
    captureR = capture_Optic(2)
    while 1:

        # do forever
        # capture the current frame
        frameL = cv.QueryFrame(Left)
        frameR = cv.QueryFrame(Right)

        # # Which frame, why do we mirror here?
        # mirror
        cv.Flip(frame, None, 1)
 
        # face detection
        face_detect(frameL, cascade, Left, Right)
        face_detect(frameR, cascade, Left, Right)
 
        # display webcam image
        cv.ShowImage('Left Optic', frameL)
        cv.ShowImage('Right Optic', frameR)
        # handle events
        if cv.WaitKey(10) == KEY_ESC:
            print 'ESC pressed. Exiting ...'
            return 0


if __name__ == '__main__':
    sys.exit(main())

OK, so a few modifications, but nothing big. There is one line that confuses me: the line that does the mirroring on 'frame'. We no longer have a variable 'frame'; it is either 'frameL' or 'frameR', I believe. This could be the reason you are experiencing the lockup.

By the way, I am still on my cell phone - no cable modem yet, so no bandwidth for downloading OpenCV to play with =(
-garrett

Offline jaime

  • Jr. Member
  • **
  • Posts: 30
  • Helpful? 1
Re: Roborealm or OpenCV?
« Reply #12 on: September 12, 2010, 07:39:29 PM »
Someone with a sense of adventure.  Great!

First, let's look at this chunk of code... You rewrote it, but you did not need to.

Code: [Select]
def create_capture_device(index):
    '''
    Create the capture device.

    If the device can't be captured, a CaptureDeviceCreationError is raised.

    index:  Index of the camera to be used. If there is only one camera or it
    does not matter what camera to use -1 may be passed.
    '''
    capture = cv.CreateCameraCapture(index)
    # Making the assumption that None is returned if the camera can't be created.
    if not capture:
        raise CaptureDeviceCreationError()
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, WIDTH)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, HEIGHT) 
    return capture

The code above creates a capture device.  Look closely at the comments and you'll see that you specify which camera to create with the index parameter.  So, to create two cameras, try this code below:

Code: [Select]
    captureL = create_capture_device(1)
    captureR = create_capture_device(2)

Now, you say that your code is "locking up".  How do you run your Python program?  Do you double-click it from Windows Explorer, or do you run it from a command prompt?

I have a strong feeling that you're running your code directly from Windows Explorer.  You should run it from a command prompt, because then you'll see the stack trace when your program crashes.  That will help you track down the problem.
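
For example (the folder and script name here are just placeholders for wherever you saved yours):
Code: [Select]
cd C:\path\to\your\script
python two_camera_face_detect.py

When it crashes, the traceback printed in that window will point at the exact line that failed.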

Looking at your code, you're using variables that have not been declared anywhere:

Code: [Select]
        frameL = cv.QueryFrame(Left)

I patched your program, but I'm not sure if it is fixed.

Code: [Select]
import sys
import cv

class CaptureDeviceCreationError(Exception):
    '''
    Raised when a capture device can not be created.
    '''

min_size = (20, 20)
haar_scale = 1.2
min_neighbors = 2
haar_flags = cv.CV_HAAR_DO_CANNY_PRUNING

# Parameters for haar detection
IMAGE_SCALE = 2
WIDTH, HEIGHT = 320, 240
DEVICE = 1   

# Keycode for escape
KEY_ESC = 0x1b


def create_capture_device(index):
    '''
    Create the capture device.

    If the device can't be captured, a CaptureDeviceCreationError is raised.

    index:  Index of the camera to be used. If there is only one camera or it
    does not matter what camera to use -1 may be passed.
    '''
    capture = cv.CreateCameraCapture(index)
    # Making the assumption that None is returned if the camera can't be created.
    if not capture:
        raise CaptureDeviceCreationError()
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, WIDTH)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, HEIGHT) 
    return capture


make_image = cv.CreateImage((WIDTH, HEIGHT), 8, 1)
scale_image = cv.CreateImage((cv.Round(WIDTH / IMAGE_SCALE),
                                  cv.Round (HEIGHT / IMAGE_SCALE)), 8, 1)

def face_detect(image, cascade, Left, Right):
    # convert to grayscale
    cv.CvtColor(image, make_image, cv.CV_BGR2GRAY)

    # scale image to increase processing
    cv.Resize(make_image, scale_image, cv.CV_INTER_LINEAR)

    cv.EqualizeHist(scale_image, scale_image)

    storage = cv.CreateMemStorage(0)

    if cascade:
        t = cv.GetTickCount()
        faces = cv.HaarDetectObjects(make_image, cascade, storage, 1.2, 2, 0, (50, 50))

        t = cv.GetTickCount() - t
        print "Detection Time =", t
        for (x,y,w,h),n in faces:
            pt1 = (x,y)
            pt2 = (x+w,y+h)
            cv.Rectangle(image, pt1, pt2, 255)


def main():
    print "Press ESC to exit ..."
    # create windows
    cv.NamedWindow('Left Optic')
    cv.NamedWindow('Right Optic')

    cascade = cv.Load("C:\OpenCV2.1\data\haarcascades\haarcascade_frontalface_default.xml")

    captureL = create_capture_device(1)
    captureR = create_capture_device(2)

    while 1:
        # do forever
        # capture the current frame

        frameL = cv.QueryFrame(captureL)
        frameR = cv.QueryFrame(captureR)
        # mirror
        cv.Flip(frameL, None, 1)
        cv.Flip(frameR, None, 1)
 
        # face detection
        face_detect(frameL, cascade, captureL, captureR)
        face_detect(frameR, cascade, captureL, captureR)
 
        # display webcam image
        cv.ShowImage('Left Optic', frameL)
        cv.ShowImage('Right Optic', frameR)
        # handle events
        if cv.WaitKey(10) == KEY_ESC:
            print 'ESC pressed. Exiting ...'
            return 0



if __name__ == '__main__':
    sys.exit(main())

Let's try running your program from a command prompt.  It will really help you get to the bottom of your problems.

Let me know how the next run goes.

I don't have the hardware necessary to test this, or I'd try it myself.  Feel free to mail me any necessary hardware  ;D ;D

jaime

Offline SeagullOne (Topic starter)

  • Robot Overlord
  • ****
  • Posts: 248
  • Helpful? 0
  • Humans and Robots working together for our future.
    • Loren John Presley - Author, Artist, Roboteer
Re: Roborealm or OpenCV?
« Reply #13 on: September 13, 2010, 10:13:32 AM »
Thanks to you both. :D

Here's what the code looks like. It runs beautifully.

Code: [Select]
#Detects faces using OpenCV on two cameras running at the same time
#Not stereoscopic vision, just two cameras running face detection
import sys
import cv

class CaptureDeviceCreationError(Exception):
    '''
    Raised when a capture device can not be created.
    '''
# These are standard parameters for Haar detection.
# They are not actually used in this script (the values are passed inline below),
# but they are good to keep around for reference and tweaking later.
min_size = (20, 20)
haar_scale = 1.2
min_neighbors = 2
haar_flags = cv.CV_HAAR_DO_CANNY_PRUNING

# Parameters for haar detection
IMAGE_SCALE = 2
WIDTH, HEIGHT = 320, 240
DEVICE_L = 0
DEVICE_R = 1

# Keycode for escape
KEY_ESC = 0x1b

#The main function for setting up a camera.
def capture_Optic(deviceIndex):
    '''
    Create the capture device.

    If the device can't be captured, an CaptureDeviceCreationError is raised.

    index:  Index of the camera to be used. If there is only one camera or it
    does not matter what camera to use -1 may be passed.
    '''
    capDevice = cv.CreateCameraCapture(deviceIndex)
    if not capDevice:
        raise CaptureDeviceCreationError()
    cv.SetCaptureProperty(capDevice, cv.CV_CAP_PROP_FRAME_WIDTH, WIDTH)
    cv.SetCaptureProperty(capDevice, cv.CV_CAP_PROP_FRAME_HEIGHT, HEIGHT) 
    return capDevice

make_image = cv.CreateImage((WIDTH, HEIGHT), 8, 1)
scale_image = cv.CreateImage((cv.Round(WIDTH / IMAGE_SCALE),
                                  cv.Round (HEIGHT / IMAGE_SCALE)), 8, 1)

def face_detect(image, cascade, captureL, captureR):
    # convert to grayscale
    cv.CvtColor(image, make_image, cv.CV_BGR2GRAY)

    # scale image to increase processing
    cv.Resize(make_image, scale_image, cv.CV_INTER_LINEAR)

    cv.EqualizeHist(scale_image, scale_image)

    storage = cv.CreateMemStorage(0)

    if cascade:
        # measure how long the detection is taking.
        t = cv.GetTickCount()
        faces = cv.HaarDetectObjects(make_image, cascade, storage, 1.2, 2, 0, (50, 50))

        t = cv.GetTickCount() - t
        print "Detection Time =", t
        # Draw a rectangle around any detected faces.
        for (x,y,w,h),n in faces:
            pt1 = (x,y)
            pt2 = (x+w,y+h)
            cv.Rectangle(image, pt1, pt2, 255)


def main():
    print "Press ESC to exit ..."
    # create windows
    cv.NamedWindow('Left Optic')
    cv.NamedWindow('Right Optic')

    #load the information for detecting faces.
    cascade = cv.Load("C:\OpenCV2.1\data\haarcascades\haarcascade_frontalface_default.xml")

    # Create capture devices for the left and right cameras
    captureL = capture_Optic(DEVICE_L)
    captureR = capture_Optic(DEVICE_R)
    while 1:

        # do forever
        # capture the current frame from the two cameras.
        frameL = cv.QueryFrame(captureL)
        frameR = cv.QueryFrame(captureR)
 
        # face detection
        face_detect(frameL, cascade, captureL, captureR)
        face_detect(frameR, cascade, captureL, captureR)
 
        # display webcam image
        cv.ShowImage('Left Optic', frameL)
        cv.ShowImage('Right Optic', frameR)
        # handle events
        if cv.WaitKey(10) == KEY_ESC:
            print 'ESC pressed. Exiting ...'
            return 0


if __name__ == '__main__':
    sys.exit(main())

At first the code was creating two capture devices from the same camera when the index numbers were set to 0 and 1. After changing the index numbers to 1 and 2, it captured from both cameras respectively. Not sure why that was the case, but it works now.  :)

I'm now working on stereo vision using Python and OpenCV. I've been studying a ton about how to do it, and the learning process is slow but sure.
I'm picking it up, nevertheless. Once I've finished learning and implementing it, and I have a working, calibrated, depth-perceiving stereoscopic vision system, I'm going to write a tutorial for SoR with bits of source code along the way. I'm thinking of titling it "Practical Stereo Vision: How to Set Up Stereo Vision in OpenCV and Python."

I mainly decided to pursue this because it was very hard for me to find examples and clarification on using OpenCV's Python API to set up stereo vision. I found some good tutorials that use C++, and a great site on Python and OpenCV that explains 3D reconstruction, but it's hard to follow and lacks example code showing how to call the functions properly. I keep having to go back and hunt for the right reference... so it's a little frustrating.

But that's the beauty of learning, after all.

So I'm going to write a tutorial on setting up stereo vision that explains the process practically, for beginners. I'll simplify the explanations and theory and provide bits of source code along the way (so readers can write it line by line and read how the process works as they go).
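
Just to give a taste of where I'm headed, here's the bare block-matching call I've pieced together from the docs so far. I haven't run this on my rig yet, the file names are placeholders, and I may well have some of the parameter names wrong:
Code: [Select]
import cv

# a rectified left/right pair, loaded as 8-bit grayscale
left = cv.LoadImage('left.png', cv.CV_LOAD_IMAGE_GRAYSCALE)
right = cv.LoadImage('right.png', cv.CV_LOAD_IMAGE_GRAYSCALE)

# block-matching disparity output is 16-bit signed, fixed point (x16)
disparity = cv.CreateMat(left.height, left.width, cv.CV_16SC1)

state = cv.CreateStereoBMState(cv.CV_STEREO_BM_BASIC, 16)
cv.FindStereoCorrespondenceBM(left, right, disparity, state)

# crude rescale just to get something visible in a window
display = cv.CreateMat(left.height, left.width, cv.CV_8UC1)
cv.ConvertScale(disparity, display, 255.0 / (16 * 16))
cv.ShowImage('Disparity', display)
cv.WaitKey(0)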

After looking into it, I doubt I would ever have been able to combine face detection and stereo vision in RoboRealm. RoboRealm still looks like a great application... just not for what I'm doing here.  :)
I think the chauffeur did it.

.......

He did.

Offline jaime

  • Jr. Member
  • **
  • Posts: 30
  • Helpful? 1
Re: Roborealm or OpenCV?
« Reply #14 on: September 13, 2010, 02:20:14 PM »
That's good stuff, SeagullOne.

If you need any help, or a Python code review in the future, please let me know.  I'll be glad to help.

Good luck with your studies.

jaime

 

