I have a robotic arm that just refuses to be controlled smoothly and am turning to you for help.
I am attempting to control it in real time (or as close to real time as possible) using a kinematically matched master.
The master has encoders on each joint which are read by the main computer.
The main computer communicates with the robot's nodes over an RS-485 bus at ~25 Hz.
The motor controllers on the robot run at ~1 kHz.
My initial attempt was to implement a simple PID loop on the embedded motor controllers. The setpoint is updated every 0.040 s (25 Hz). The process value comes from encoders built into the motors on the robot.
The output of this is, in general, very jerky motion. Essentially, the robot receives a new command and reaches the new position before the next command arrives. If the master is moved fast enough, the motion smooths out. I tried lowering the maximum velocity (PWM) of the robot's motors, but this is more of a band-aid solution: the arm can no longer move at high speeds, and if the master is moved slowly enough the jerkiness comes back.
I have a few things I want to try next. First, I want to apply a ramp (trapezoidal) profile to the output of the PID loop, essentially limiting the maximum change in PWM per time step. In theory the D term should do something like this, but in practice it doesn't seem to do a damn thing other than seriously destabilize the system. This may help, but I don't think it's the optimal solution.
I have done some research, and it seems that combined position/velocity control may be the ticket, but this is black magic to me and I don't understand it well enough to write a usable algorithm.
Oh, and I realize that speeding up the data rate past 25 Hz would probably solve the problem, but the other members of my team assure me that this is not possible.
Anyone think they can explain the deeper mysteries of control algorithms to me?