Society of Robots - Robot Forum

Software => Software => Topic started by: leeguy92 on December 03, 2008, 07:06:56 AM

Title: ANN simulation on an AVR
Post by: leeguy92 on December 03, 2008, 07:06:56 AM
Hi, I want to build some robots which will have to run a "fully connected recurrent artificial neural network", i.e. one where each neuron's output serves as an input to every other neuron.

I was considering using an AVR MCU for this because:
it can be programmed in C
it has a hardware multiply instruction
it is faster than a PIC
it is a GCC target
it is cheaper than an AT91 (which I have never really used either)
(I have never used an AVR before, though.)

Another option is a Parallax Propeller, which lacks hardware multiply/divide and GCC support, but does have 8 processors. Unfortunately, it only has 32 KB of program/data RAM.


Will an AVR be able to do this at a reasonable speed (say, with 100 neurons)? Or should I go with another processor like an ARM?

Lee.
Title: Re: ANN simulation on an AVR
Post by: cosminprund on December 03, 2008, 08:14:07 AM
Define "raisonable speed". Define "run". Do you want to simply RUN the network or do you want to do training? Will the MCU do anything but run the network?

I've done some math, and a 1-instruction-per-clock-cycle MCU like the AVR should be able to apply the network to a set of inputs about 100 times per second per MIPS (100 fully connected neurons means roughly 100 x 100 = 10,000 multiply-accumulates per update, so at about one instruction per multiply-accumulate that comes out to 100 updates per second per MIPS). That's enough to run it but not enough to train it (that's my guesstimate).
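
To make that estimate concrete, here is roughly the inner loop I have in mind (just a C sketch; the sizes, names and 0/1 activations are my own assumptions):

Code:
#include <stdint.h>

#define N 100  /* number of neurons */

int8_t state[N];      /* current neuron outputs, 0 or 1           */
int8_t weight[N][N];  /* connection weights; note this table is
                         10,000 bytes, more than a small AVR's
                         SRAM can hold                             */

void update_network(void)
{
    int8_t next[N];
    for (uint8_t i = 0; i < N; i++) {
        int16_t sum = 0;
        for (uint8_t j = 0; j < N; j++)
            sum += (int16_t)(weight[i][j] * state[j]); /* the 10,000 MACs */
        next[i] = (sum > 0) ? 1 : 0; /* hard-threshold activation */
    }
    for (uint8_t i = 0; i < N; i++)
        state[i] = next[i];
}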

Why do you need so many neurons? What will your robot do?
Title: Re: ANN simulation on an AVR
Post by: leeguy92 on December 03, 2008, 11:01:24 AM
Yes, it's only to run the network... unless I implement some kind of real-time learning as described (sort of) in my other topic in this section.

The application is two 'tank' robots, which have to drive around an arena and fire at each other.

I want to give it as many neurons as possible (the constraint being training time on my desktop computer) because I want to see what kind of complex behaviours I can get.
By complex behaviours I mean things like hiding, and demonstrations of memory and sequenced actions (which the recurrent ANN provides, as far as I'm aware).
Title: Re: ANN simulation on an AVR
Post by: Admin on December 06, 2008, 01:39:55 AM
With the given RAM limitations on MCUs, I can't see neural nets being done on anything but FPGAs and PCs.
Title: Re: ANN simulation on an AVR
Post by: cosminprund on December 06, 2008, 09:29:42 AM
@leeguy92:
The more neurons you have, the harder it will be to get your training to converge to anything. I think 100 is excessive; I think a photovore can be done with as few as 4!

@Admin:
Training a neural network on an MCU would be really hard, but just running one is a different thing. You can keep all the required weights in the code section, so you only need RAM for the current state of the neurons. Since the state of one neuron can be represented with a single BIT, I'd say running a 100-neuron neural network on an MCU is feasible.
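
A sketch of what I mean, in AVR C since that's the chip being asked about (all names here are invented and the weight table is a placeholder; avr/pgmspace.h is what keeps the weights in flash):

Code:
#include <stdint.h>
#include <string.h>
#include <avr/pgmspace.h>

#define N 100

/* weights trained offline live in program memory, not RAM */
const int8_t weight[N][N] PROGMEM = {{0}}; /* placeholder values */

uint8_t state[(N + 7) / 8]; /* 100 one-bit states = 13 bytes of RAM */

static uint8_t get_bit(const uint8_t *bits, uint8_t j)
{
    return (bits[j >> 3] >> (j & 7)) & 1;
}

void update_network(void)
{
    uint8_t next[(N + 7) / 8] = {0};
    for (uint8_t i = 0; i < N; i++) {
        int16_t sum = 0;
        for (uint8_t j = 0; j < N; j++)
            if (get_bit(state, j)) /* binary inputs: add, no multiply */
                sum += (int8_t)pgm_read_byte(&weight[i][j]);
        if (sum > 0)
            next[i >> 3] |= (uint8_t)(1 << (i & 7));
    }
    memcpy(state, next, sizeof(state)); /* commit the new state */
}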
Title: Re: ANN simulation on an AVR
Post by: Admin on December 06, 2008, 09:45:12 AM
Quote
Since the state of 1 neuron can be represented with a single BIT
So how do you represent the connections between neurons? ;)
Title: Re: ANN simulation on an AVR
Post by: cosminprund on December 06, 2008, 09:54:57 AM
You hard-code them. You write one function for every single neuron, with the location of the state of every other neuron it connects to hard-coded, along with all the weights. This simply moves the weights from RAM to CODE (I'm thinking in PICmicro terms, where a literal constant is stored with the opcode of the instruction in program memory). I haven't looked at Atmel's AVRs yet, but a PIC easily has enough program memory to support this.
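
For example (illustrative connections and weights; the compiler turns the literals into immediates in program memory, so they cost no RAM):

Code:
#include <stdint.h>

uint8_t state[13]; /* bit-packed outputs of 100 neurons */
#define GET(j) ((state[(j) >> 3] >> ((j) & 7)) & 1)

/* hypothetical neuron #42 with three incoming connections */
uint8_t neuron42(void)
{
    int16_t sum = 0;
    if (GET(3))  sum += 7;  /* weight from neuron 3  */
    if (GET(17)) sum -= 2;  /* weight from neuron 17 */
    if (GET(64)) sum += 5;  /* weight from neuron 64 */
    /* ...one line per incoming connection... */
    return sum > 0;
}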
Title: Re: ANN simulation on an AVR
Post by: Admin on December 06, 2008, 10:04:00 AM
I assumed it was a growing and self-propagating neural net . . . a hard-coded net will definitely be limited!
Title: Re: ANN simulation on an AVR
Post by: cosminprund on December 06, 2008, 10:22:17 AM
That's why I asked about training vs. running a neural network. Training would pose problems much harder than the RAM limitation. For example, training requires some form of feedback to push the network in the right direction. Where would this feedback come from? Even if the feedback can be managed, altering the weights for the actual learning requires solving some serious equations, and doing that on a limited MCU might be challenging (again, my experience is with PICmicro MCUs, and the ones I'm working with don't have hardware multiplication!). Another huge problem would be convergence time.
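
To give an idea of the arithmetic: even the simplest update rule (the delta rule, much simpler than full backpropagation) needs one multiply per weight on every training step. A sketch with invented names, using a right-shift as a fixed-point learning rate:

Code:
#include <stdint.h>

/* delta rule: w[j] += eta * err * in[j], in 8-bit fixed point */
void train_step(int8_t w[], const int8_t in[], uint8_t n,
                int8_t err, uint8_t eta_shift)
{
    for (uint8_t j = 0; j < n; j++)
        w[j] += (int8_t)(((int16_t)err * in[j]) >> eta_shift);
    /* on a PIC with no hardware multiplier, that one multiply
       alone becomes a software loop */
}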

On the other hand, using a pre-trained neural network is not as bad as it sounds. As far as I know, all OCR software uses AI, but none of the end users do any training: they just use the software as shipped.
Title: Re: ANN simulation on an AVR
Post by: leeguy92 on December 13, 2008, 04:06:59 PM
Whoa....wha???? you have my interest...

Quote
I assumed it was a growing and self-propagating neural net

That sounds cool. How do you mean...?
Title: Re: ANN simulation on an AVR
Post by: Admin on December 15, 2008, 04:28:04 AM
Quote
That sounds cool. How do you mean...?
http://en.wikipedia.org/wiki/Neural_network#History_of_the_neural_network_analogy
Title: Re: ANN simulation on an AVR
Post by: leeguy92 on December 15, 2008, 05:56:57 PM
I meant the "growing" bit. Do you mean real-time learning? (In the past I have only trained ANNs in simulation.)
It seems like training in the real world would take hours at least, excluding battery charging/replacement time.
Title: Re: ANN simulation on an AVR
Post by: paulstreats on December 17, 2008, 01:29:16 PM
You could interface external memory...

Use one microcontroller to run the neural net, and another to provide additional learning, so it learns while being used... (Of course, it would be better for it to be pre-trained to start with.)

Also, are you planning on predefining a structure for the neural net, or just literally linking every neuron with every other neuron?

It's my understanding that predefining several neural net structures to act as separate systems, while allowing them growing space as it were, is the way to go... maybe not?