« Last post by mklrobo on June 23, 2015, 05:39:16 PM »
Axon Series: What is this thing?
HAPPY DAY! HAPPY DAY!
I reloaded the GNU tools and upgraded to Atmel Studio 6.2, and voila! I created a sample program the software already included, compiled it, and built it (5 seconds total), and the HEX file was ready.
Joy, joy!
It took almost FOREVER to download every server package that comes with the main software, but oh boy! This software has features out the gazoo! Very easy to make a project, compile, build, and produce a downloadable HEX file!
Now I have to see if my own project can be built and then downloaded; that will be the real test.
I did not need the project designer software; 6.2 must have created what was needed, with the pin settings and all.
But at least I have a HEX file. If nothing else, I could make the HEX file with 6.2, then download it with the 4.0 version using my bootloader. On to the next adventure!
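For anyone curious what is actually inside that HEX file: Atmel Studio emits Intel HEX records, and a bootloader essentially just streams those records to the chip. The field layout and checksum rule below come from the Intel HEX format; the little parsing helper itself is only an illustrative sketch, not anything the Axon tools ship with.

```python
def ihex_checksum(record_bytes):
    """Two's-complement checksum over all bytes of an Intel HEX record."""
    return (-sum(record_bytes)) & 0xFF

def parse_ihex_line(line):
    """Parse one Intel HEX line, e.g. ':10010000...40'.

    Layout after the ':' is byte-count, 16-bit address, record type,
    payload, then a one-byte checksum that makes the whole record sum
    to zero mod 256.
    """
    if not line.startswith(':'):
        raise ValueError("Intel HEX records start with ':'")
    data = bytes.fromhex(line[1:])
    count = data[0]
    addr = (data[1] << 8) | data[2]
    rectype = data[3]
    payload = data[4:4 + count]
    if ihex_checksum(data[:-1]) != data[-1]:
        raise ValueError("bad checksum")
    return rectype, addr, payload
```

A data record has type 0 and the end-of-file record is the familiar `:00000001FF` line at the bottom of every HEX file.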
« Last post by mklrobo on June 23, 2015, 05:30:38 AM »
I am working on making the HEX file, just to get the Axon programmed. Once I get that straightened out, I can proceed to work on the voice control. The only other way around using the Axon is to use a BeagleBone or Raspberry Pi to process the voice, then follow through with the appropriate command. I would have to find Dragon Dictate or a similar program to do this. I am hung up at the HEX file at the moment.
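If the Pi/BeagleBone route wins, the "process the voice, then follow through with the appropriate command" step could be sketched like this. The recognizer (Dragon-style or otherwise) is assumed to hand over a plain-text phrase; `COMMANDS` and `send` are hypothetical names standing in for the serial link to the Axon:

```python
# Hypothetical command table: recognized phrase -> one-byte command
# the robot-side firmware would act on.
COMMANDS = {
    "forward": b"F",
    "reverse": b"R",
    "stop":    b"S",
}

def dispatch(phrase, send=lambda cmd: cmd):
    """Map a recognized phrase to a command and hand it to `send`.

    `send` would normally write to the Axon's serial port (e.g. via
    pyserial); here it defaults to a pass-through so the logic can be
    exercised without hardware."""
    word = phrase.strip().lower()
    if word not in COMMANDS:
        return None          # unrecognized phrase: do nothing
    return send(COMMANDS[word])
```

The point of the table is that swapping recognizers later only changes where `phrase` comes from, not the dispatch logic.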
« Last post by APJ1234 on June 22, 2015, 06:43:48 PM »
Hey man, I just wanted to know if there has been any more progress on the project.
« Last post by mklrobo on June 22, 2015, 05:40:22 AM »
I am guilty myself; I went ahead and bought a Raspberry Pi 2, with the Wheezy package from Adafruit.
I have heard so many good things about the Pi, and it is frustrating to buy something and not be able to use it as easily as advertised. Everybody I know uses a Pi and has been programming it with no problem.
I loaded AVR Studio 6 yesterday, in hopes that that software could help me with the HEX file. I feel like I had to download half the world onto my computer. Now I have to figure out how it works, and it looks complicated.
I am in a real tough spot trying to get the Axon to program. I will just keep plugging away at it. The Axon would make a nice accessory to the Pi, were I to get either one to work.
« Last post by pedromatias on June 21, 2015, 04:07:14 PM »
First, I in no way implied that you were stupid. I have had the same problem as you; I need help with the programming, no doubt.
When I saw this forum, and read some of the posts, and the tutorials, it seemed easy to use the
Axon series. I invested a lot of time/money/effort in order to understand and program the Axon.
You are right: if you cannot compile the HEX file, the robot will never work. I have worked with small MCUs like this, and I am still stumped. I may have gotten a virus, downloaded something wrong, or whatever. I have found info pointing to a direct way to compile the object file to a HEX file, but I have been unsuccessful in making it work.
In reference to your videos, I would definitely do a dry run first, maybe making notes, then go through with your video. I believe other people are having the same problem; that is probably why you have not seen many people respond to your posts (and mine).
Once this problem is resolved, and other people see your video and my posts, that will pretty much eliminate anyone having any problems starting out. After that, more people will be encouraged to use the Axon, and post their projects on this forum.
I do not wish to discourage anyone from using the Axon. It seems problematic now, but it is being used everywhere, and is a very versatile MCU. If I knew the answer to help you, I would definitely tell you.
I am still working on the problem, even though I feel Hexed.
I know you didn't imply I was stupid! I merely said that my ignorance might look stupid to an experienced builder like you.
I have been googling a lot about robots and such, and I really wish there were a 50-dollar robot based on something like the Raspberry Pi or the Arduino. I think they are very "hip" now, and it would be easier to get parts and help! I really think the Raspberry Pi is the way to go if someone creates a new first-robot tutorial. It is programmable in Python, a language very suitable for beginners like me; it has plenty of projects we can make; it is less trouble for people outside the US to get the pieces; and it isn't very expensive. Plus, I could use the Pi for other projects, but I doubt I'll ever be able to use the pieces I bought for this robot, because I don't understand what the hell they are, hehehe. Thanks for replying, and again, I know you didn't say I was stupid, but I feel stupid whenever I read about robots, hahah.
« Last post by Rushmoore on June 21, 2015, 09:48:38 AM »
Hello! Thanks for your reply! Just a quick question: when you say "feed the audio through the voice changer," how would you actually do this, circuit-wise?
« Last post by mklrobo on June 21, 2015, 09:37:36 AM »
I would offer a suggestion, where permissible: SparkFun used to offer a voice recorder kit that recorded about 120 seconds (dedicated, small memory). I imagine you could use that kit to record your voice, but when playback is activated, feed the audio through the voice changer. This way, your original voice file is not corrupted, and you can always listen to the changes that were made, to have an avenue of comparison, so to speak. (You can hear how weird it gets!) You can do the same thing with a digital recorder; just feed the audio into the voice changer, and voila! (Or whatever you want.) Good luck!!!
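The Velleman board does its shifting in analog hardware, but the record-then-playback idea has a simple digital analogue if anyone wants to prototype the effect in software first. A crude nearest-neighbour resampling sketch (real pitch shifters use far fancier DSP; this just makes a clip sound higher or lower):

```python
def pitch_shift(samples, factor):
    """Crude digital 'voice changer': resample by `factor`.

    factor > 1 raises pitch (and shortens the clip); factor < 1
    lowers it (and lengthens the clip). Nearest-neighbour index
    lookup, good enough for a toy effect on a list of samples."""
    n = int(len(samples) / factor)
    return [samples[min(int(i * factor), len(samples) - 1)]
            for i in range(n)]
```

Because the original sample list is never modified, this mirrors the advice above: the recording stays intact, and you can compare as many altered playbacks as you like.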
« Last post by Rushmoore on June 21, 2015, 01:26:22 AM »
Before I begin, I confess that I am an electronics noob, and I apologize for asking such stupid questions. Ahem. I'll begin:
So I want to use the Velleman voice changer. The device modifies the speaker's voice in real time; however, I do not want this. I want to be able to record a message, then press a button to hear the modified voice. How can I do this?
« Last post by yashmaniac on June 20, 2015, 12:37:56 AM »
Thanks for the input. It seems the more I try to get this concept to work, the more expensive it gets.
If the plates are flat and you know the depth, then you can use one camera at an angle. Measure the distance from the camera to the flat surface. The higher up the image, the further away the object. Edge detection could find the plate edges, and their position in the image could be used to determine where they are.
I was planning a system that could identify the shapes of the materials to be welded. So if it were two pipes, the intersection could be traced and followed in 3D by a robotic arm. If it happened to be two plates, the intersection could be traced and coordinates could be generated without changing the program.
Is it possible to derive the shape of the profile, and the 3D coordinates of the intersection, from a video?
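The "one camera at an angle" idea quoted above reduces to simple pinhole geometry once the camera's height and tilt are known: each image row corresponds to one viewing angle, and that angle intersects the flat surface at a known distance. A sketch with purely illustrative numbers (camera height, tilt, field of view, and resolution below are assumptions, not measured values):

```python
import math

def ground_distance(row, cam_height_m=1.0, tilt_deg=30.0,
                    vfov_deg=45.0, image_height_px=480):
    """Distance along the ground to the point seen at pixel `row`.

    Pinhole-camera sketch: camera at `cam_height_m`, tilted down by
    `tilt_deg`. Rows below the image centre view the ground at a
    steeper angle, so they map to nearer points; rows higher up map
    to farther points, matching the rule of thumb in the post."""
    deg_per_px = vfov_deg / image_height_px
    angle = tilt_deg + (row - image_height_px / 2) * deg_per_px
    if angle <= 0:
        return float('inf')   # at or above the horizon: no ground hit
    return cam_height_m / math.tan(math.radians(angle))
```

With a calibration like this, the plate edges found by edge detection could be converted from image rows into real distances on the work surface.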
I had been thumbing through some tech papers and ran across some options. The goal you seek may lie in video forensics. (If this were CSI, we'd have the answer in an hour. Ha!)
If this could be done (cheaply), people in this forum, like Kobratek, could use it to convert parts to dimensions in no time. Obviously, more information is needed, but maybe we are still on the right track.
I had to provide an accurate scale of a part at work, so I put a grid in the background of the picture. This provided scale and angle perspective. In that context, if you had three cameras in X, Y, and Z coordinate positions from the target, that may be a starting point for your goal. If the cameras could be adjusted to focus at certain distances from their positions, maybe this would give "layers" of perspective for target spatial definition. Sound could also be used as a depth gauge. Also, lasers (that only the camera can see) projected in different shapes may give depth perception. Consider this: if you used a laser that emitted many concentric ring projections (with the rings set at a measured distance from each other), this could help provide depth perspective, and it would be invisible to the user if infrared/ultraviolet lasers were used. Good luck!
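The background-grid trick is easy to quantify: once you know the real pitch of the grid and measure that pitch in image pixels, every pixel measurement in that plane converts directly to millimetres. A sketch (the 10 mm pitch is an assumed example, and this only holds for features lying near the grid's plane):

```python
def scale_from_grid(grid_px, grid_mm=10.0):
    """mm-per-pixel scale factor from a background grid of known pitch.

    `grid_px` is the measured spacing of grid lines in the image, in
    pixels; `grid_mm` is the real-world spacing (assumed 10 mm here)."""
    return grid_mm / grid_px

def pixels_to_mm(px, grid_px, grid_mm=10.0):
    """Convert a pixel measurement to millimetres via the grid scale."""
    return px * scale_from_grid(grid_px, grid_mm)
```

For example, if the grid lines appear 25 pixels apart, a 50-pixel feature measures 20 mm. With one such grid per camera axis, the three feeds could each be converted to real units before combining them into 3D coordinates.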
This seems plausible. The grid in the background could help set a scale of reference. I could then place a camera at some distance along each coordinate axis. Hopefully, all I would have to do then is isolate the intersection in each camera feed and use the background to generate 3D coordinates. This is still too expensive for me to try out, but I heard ROS offers some simulation capability, so I may look into that.
Thanks a lot again!!
« Last post by mklrobo on June 19, 2015, 04:55:42 AM »
Yes, it's true.
The controllers of Sprint and the owners of Alibaba (industrial trading) have co-ventured to make an emotional robot. An online report describes it: humanoid robots named Pepper are envisioned as companions for the elderly, teachers of schoolchildren, and retail or office assistants. AKIO KON/BLOOMBERG NEWS
Just saw the newscast this morning; I am sure more info is to follow.....