## 11.3. Motion Control with Velocity Inputs (Part 2 of 3)


This video introduces proportional-integral (PI) control of the position of a single-degree-of-freedom system, and feedforward plus PI feedback control, for the case where the desired position is a ramp as a function of time (constant velocity) and the control input is the velocity. The approach generalizes easily to the control of a multi-degree-of-freedom robot.

In the previous video we saw that, for the task of tracking a trajectory with a constant velocity c, a proportional controller results in a nonzero steady-state error of c over K_p. To fix this, let's augment the P controller with another term, one proportional to the time integral of the error. K_i is called the integral gain, since it multiplies the time integral of the error. We can calculate this integral numerically in the control computer. We call this new controller proportional-integral control, or PI control for short.
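A minimal sketch of how the integral term might be accumulated in a discrete-time control loop. The function names, gain values, and the simple rectangle-rule integration are illustrative assumptions, not the video's notation:

```python
# Sketch: discrete-time PI velocity command for one joint.
# theta_d and theta are the desired and actual positions; dt is the
# servo period. Rectangle-rule integration of the error is assumed.

def make_pi_controller(Kp, Ki, dt):
    """Return a function mapping (theta_d, theta) to a commanded velocity."""
    state = {"integral": 0.0}          # running time integral of the error

    def control(theta_d, theta):
        e = theta_d - theta            # position error theta_e
        state["integral"] += e * dt    # numerical integral of the error
        return Kp * e + Ki * state["integral"]

    return control

pi = make_pi_controller(Kp=10.0, Ki=25.0, dt=0.01)
v_cmd = pi(theta_d=1.0, theta=0.9)  # Kp*0.1 + Ki*(0.1*0.01) = 1.025
```

Each call updates the stored integral, so the commanded velocity depends on the history of the error, not just its current value.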

Plugging this controller into the error dynamics, we get this equation for the time evolution of the error. Because the equation contains the time integral of the error, we differentiate both sides to eliminate the integral. We now have a standard second-order homogeneous differential equation, where the damping ratio is K_p over 2 times the square root of K_i and the natural frequency is the square root of K_i. By our mass-spring-damper analogy from earlier videos, K_i plays the role of the spring and K_p plays the role of the damper. If K_i and K_p are both positive, then the error dynamics are stable and the steady-state error is zero. Note that the commanded velocity can be nonzero even when the error is zero, because the integral of past error is generally nonzero; this is what allows the controller to keep up with the constant-velocity reference.
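Written out, the derivation described in this paragraph is:

```latex
% PI control law with velocity input, tracking a ramp with \dot{\theta}_d = c:
\dot{\theta} = K_p \theta_e + K_i \int_0^t \theta_e(\tau)\, d\tau
\quad\Rightarrow\quad
\dot{\theta}_e = \dot{\theta}_d - \dot{\theta}
             = c - K_p \theta_e - K_i \int_0^t \theta_e(\tau)\, d\tau .
% Differentiating both sides eliminates the integral:
\ddot{\theta}_e + K_p \dot{\theta}_e + K_i \theta_e = 0 ,
\qquad \omega_n = \sqrt{K_i}, \qquad \zeta = \frac{K_p}{2\sqrt{K_i}} .
```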

The characteristic equation of the error dynamics is s-squared plus K_p s plus K_i equals zero, which means the roots are given by this equation. We can plot these roots in the complex plane. Here, the roots marked One correspond to overdamped error dynamics, where K_p is large relative to K_i. Here is a plot of the overdamped error response.

Increasing the gain K_i pushes the two roots toward each other along the real axis. When K_i equals K_p-squared over 4, the term in the square root is zero and both roots are located at minus K_p over 2; this is the critically damped response, indicated as Two. Note that the critically damped response is faster than the overdamped response.

If we continue to increase the gain K_i, the term in the square root becomes negative, and the two roots become complex conjugates, moving away from each other in the vertical direction. The new roots, and the new response, are marked Three. Since the real parts of the roots are unchanged, the settling time is unchanged, but we now see overshoot and oscillation in the error response. Of the three error responses, the critically damped response, marked Two, is the best, since it is fast and has no overshoot. In general, critical damping is a good goal for second-order error dynamics.
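The three cases can be checked numerically from the characteristic equation s-squared plus K_p s plus K_i equals zero. The gain values below are illustrative, chosen so that K_p-squared over 4 equals 4:

```python
# Roots of the error-dynamics characteristic equation s^2 + Kp*s + Ki = 0
# for the three damping cases discussed above. Gain values are illustrative.
import numpy as np

Kp = 4.0
for label, Ki in [("overdamped", 3.0),        # Ki < Kp^2/4: two real roots
                  ("critically damped", 4.0), # Ki = Kp^2/4: double root -Kp/2
                  ("underdamped", 8.0)]:      # Ki > Kp^2/4: complex conjugates
    roots = np.roots([1.0, Kp, Ki])
    print(f"{label:18s} Ki={Ki}: roots = {np.round(roots, 3)}")
```

For K_i = 3 the roots are real (minus 1 and minus 3); for K_i = 4 both sit at minus 2; for K_i = 8 they are the complex pair minus 2 plus-or-minus 2j, with the same real part as the critically damped case.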

The path that the two roots trace as we change the control gain K_i is called a "root locus." A root locus plots the roots as we change a single parameter, such as the gain K_i in this example. We can use the root locus to help us choose control gains. We would like to keep the roots far to the left, for fast settling, and close to the real axis, to minimize overshoot. As discussed before, though, there are limits as to how large we can choose the gains.
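The root locus for this example can be traced numerically by sweeping K_i with K_p fixed. The gain values are illustrative:

```python
# Sketch of the root locus of s^2 + Kp*s + Ki = 0 as Ki increases
# with Kp held fixed. Gain values are illustrative.
import numpy as np

Kp = 4.0
for Ki in [1.0, 2.0, 4.0, 8.0, 16.0]:
    s = np.roots([1.0, Kp, Ki])
    print(f"Ki = {Ki:5.1f}: roots = {np.round(s, 3)}")
# As Ki grows, the two real roots slide toward each other, meet at -Kp/2
# (critical damping), then split vertically into a complex-conjugate pair
# whose real part stays fixed at -Kp/2.
```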

Let's look at an example of P and PI control applied to tracking a trajectory with a constant velocity. The dashed line represents the desired position theta_d as a function of time. The actual initial position, theta at time zero, has some error, as shown by the dot. The P controller by itself cannot track the trajectory; in steady state, it always lags behind the desired position by c over K_p, as we calculated earlier. We can also see this in the error response.

On the other hand, if we add an integral term, we see the PI controller achieves zero steady-state error. Here we've chosen the PI controller to be underdamped; a better PI controller would eliminate overshoot and achieve critical damping by choosing a lower gain K_i or a larger gain K_p.
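The comparison in this example can be reproduced with a simple simulation of a velocity-controlled joint tracking the ramp theta_d of t equals c times t. The gains, ramp slope, and Euler integration below are illustrative assumptions; here the PI gains give critical damping:

```python
# Simulate P vs PI control of a velocity-controlled joint tracking the
# ramp theta_d(t) = c*t. Euler integration; all values are illustrative.
c, dt, T = 0.5, 0.001, 10.0        # ramp slope, time step, duration
Kp, Ki = 2.0, 1.0                  # zeta = Kp/(2*sqrt(Ki)) = 1: critical
steps = int(T / dt)

def simulate(use_integral):
    theta, integral = 0.2, 0.0     # nonzero initial position error
    for k in range(steps):
        theta_d = c * k * dt
        e = theta_d - theta
        integral += e * dt
        v = Kp * e + (Ki * integral if use_integral else 0.0)
        theta += v * dt            # the joint velocity is the control input
    return c * steps * dt - theta  # final tracking error

print("P  final error:", simulate(False))  # -> about c/Kp = 0.25
print("PI final error:", simulate(True))   # -> near zero
```

The P controller settles at the predicted steady-state lag of c over K_p, while the PI controller drives the error to zero.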

In summary, P control eliminates steady-state error for setpoint control, and PI control eliminates steady-state error for any trajectory with constant velocity. A PI controller cannot eliminate steady-state error for arbitrary trajectories, but a properly tuned PI controller should provide good tracking for many practical trajectories.

A PI control system can be visualized as a block diagram, as shown here. The actual position is subtracted from the reference position to get the error; the error is multiplied by the gain K_p, and the time integral of the error is multiplied by the gain K_i. These two terms are summed to produce the commanded joint velocity. The robot moves, and a sensor returns the actual position to the controller.

One problem with this control law is that the robot never moves until there is an error to force it to move. Since we know the desired trajectory, we should be able to move the robot without waiting for error to accumulate. We can augment this control law by adding a feedforward term. If the error is zero, the commanded velocity is just the desired velocity. This is our final preferred control law if the commanded controls are velocities.
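A sketch of this final control law, which adds the desired velocity as a feedforward term to the PI feedback. Names and gains are illustrative:

```python
# Sketch: feedforward plus PI feedback, with velocity as the control input.
# commanded velocity = desired velocity + Kp * error + Ki * integral(error)

def make_ff_pi_controller(Kp, Ki, dt):
    state = {"integral": 0.0}          # running time integral of the error

    def control(theta_d, dtheta_d, theta):
        e = theta_d - theta
        state["integral"] += e * dt
        return dtheta_d + Kp * e + Ki * state["integral"]

    return control

ctrl = make_ff_pi_controller(Kp=10.0, Ki=25.0, dt=0.01)
# With zero error, the command reduces to the desired velocity:
v = ctrl(theta_d=1.0, dtheta_d=0.5, theta=1.0)  # -> 0.5
```

The feedforward term lets the robot move as soon as the desired trajectory does, rather than waiting for error to accumulate.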

Until now we've been discussing a robot with a single joint, but the control law is unchanged for a multi-joint robot. Each joint has this same control law governing it. We can write the scalar equation for each joint as a single vector equation by treating theta, theta_d, and theta_e as vectors, and treating K_p and K_i each as an identity matrix times a positive scalar.
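In vector form, the same control law might look like the following, where the joint values are illustrative and the gain matrices are a positive scalar times the identity:

```python
# Vector form of the feedforward-plus-PI law for an n-joint robot:
# theta, theta_d, and the error are n-vectors; Kp = kp*I and Ki = ki*I
# act elementwise. All numeric values are illustrative.
import numpy as np

kp, ki, dt = 10.0, 25.0, 0.01
theta_d  = np.array([0.5, 1.0, -0.3])  # desired joint positions
dtheta_d = np.array([0.1, 0.0,  0.2])  # desired joint velocities
theta    = np.array([0.4, 1.0, -0.3])  # measured joint positions
integral = np.zeros(3)                 # running integral of the error

e = theta_d - theta
integral += e * dt
v_cmd = dtheta_d + kp * e + ki * integral
print(v_cmd)
```

Each joint is governed by the same scalar law; stacking them into vectors changes the notation but not the controller.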

The control law I've just described is expressed in terms of joint trajectories. But sometimes it's more convenient to express the desired motion in terms of the motion of the end-effector. This leads to task-space motion control, the topic of the next video.