
Behavioural Robotics


Jon_Thompson:
Is anyone else here doing anything with behavioural robotics? My main interest is in this area, specifically the design of systems that adapt based on previous experience of their environment.

mstacho:
I looked into it a while back.  My work is (vaguely) dealing with something that comes right before behavioural robotics: we are working on adaptation and rapid learning for a specific context at the moment.  The idea being: if the robot has a learned representation from experience, cool, it should use it.  But if it doesn't, it shouldn't keep failing while learning until it gets it right.

MIKE

jwatte:

--- Quote ---it shouldn't keep failing while learning until it gets it right.
--- End quote ---

So how would it learn? From what I know, that's actually exactly what humans do when learning something as children.

Jon_Thompson:

--- Quote from: mstacho on March 14, 2013, 06:45:50 AM ---I looked into it a while back.  My work is (vaguely) dealing with something that comes right before behavioural robotics: we are working on adaptation and rapid learning for a specific context at the moment.  The idea being: if the robot has a learned representation from experience, cool, it should use it.  But if it doesn't, it shouldn't keep failing while learning until it gets it right.

MIKE

--- End quote ---


I get where you're coming from on that one!

I have a long-term fascination with the Eliza Effect. Does a machine have to be genuinely intelligent, or do we simply have to perceive it to be so? After all, we readily anthropomorphise inanimate objects.

My experiment involves building a kind of "pet" that people can play with. Though it is clearly a machine with tracks and bumpers and so on, if its range of reactions to situations is flexible, adaptable and natural enough, will they begin to imbue it with its own "personality" without it resembling a real animal? Above all, will it be fun to have around?

mstacho:
@jwatte: Although I agree that we learn by making mistakes, that is because a lot of our neural networks aren't structured.  We are, in effect, learning what the model is along with what the model does, and that leads to a ton of mistakes.  I feel that robots learning new skills can be given a leg up by having really excellent reactive algorithms working in parallel with learning algorithms.  This way, learning doesn't *enable* performance, it *enhances* it.
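To make the "enhances, not enables" idea concrete, here's a toy sketch of what I mean (all names and numbers are made up, and the lookup table just stands in for whatever learner you like): a hand-built reactive policy gives a sensible baseline, and a learned correction gets added on top once experience exists.

```python
# "Learning enhances, not enables": an engineered reactive baseline
# always produces a usable command; a learned per-state correction
# (here a plain dict, standing in for any learning algorithm) refines
# it as experience accumulates. All names are illustrative.

def reactive_policy(error):
    """Simple proportional controller as the engineered baseline."""
    return 0.8 * error

learned_correction = {}  # state -> correction, filled in over time

def act(state, error):
    base = reactive_policy(error)
    # With no learned entry, the correction is 0.0 and the robot
    # still behaves sensibly -- it never has to fail to get started.
    return base + learned_correction.get(state, 0.0)

# After some experience, a refinement for a known situation:
learned_correction["corner"] = 0.15
```

Before learning, `act` just returns the reactive command; afterwards, the same situation gets a slightly better response. The point is that the learner never gates basic competence.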

I come at it from an engineering background, though, so to me the best robot is the one that does the job, not the one that has the coolest software. :-P  Eventually, reactive algorithms fail and learning has to be done, I agree.  But we should give the robot the best abilities we can before it has to learn in order to maximize its usefulness.  At least, so my philosophy goes.

@Jon_Thompson: Definitely look up the (now old) "subsumption architecture" from Rodney Brooks.  I'm personally a fan of a hybrid approach: Brooks says that complex behaviour doesn't need complex control policies, and I agree, but I also feel that SOME complex behaviour does need, or can at least benefit from, more complex control.

Also look up http://en.wikipedia.org/wiki/Braitenberg_vehicle.  They're neat, and really go to show you how far you can get by just wiring sensors to motors.
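For a sense of just how little wiring is needed: here's the classic "vehicle 2b" (the light-seeking one) as a few lines of code, with hypothetical sensor readings in place of real hardware:

```python
# Braitenberg "vehicle 2b": each light sensor excites the motor on
# the OPPOSITE side, so the robot turns toward the light source.
# Sensor values and the gain are illustrative.

def braitenberg_2b(left_sensor, right_sensor, gain=1.0):
    """Crossed excitatory wiring: left sensor drives right motor."""
    left_motor = gain * right_sensor
    right_motor = gain * left_sensor
    return left_motor, right_motor
```

With a stronger reading on the left sensor, the right motor spins faster and the robot curves left, toward the light; to an observer it "likes" the light, with no model or planning anywhere in sight.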

MIKE
