given by

$$\cos\theta_2 = \frac{x^2 + y^2 - a_1^2 - a_2^2}{2a_1a_2} := D \tag{1.8}$$

Figure 1.22 Solving for the joint angles of a two-link planar arm.
We could now determine θ2 as θ2 = cos − 1(D). However, a better way to find θ2 is to notice that if cos (θ2) is given by Equation (1.8), then sin (θ2) is given as

$$\sin\theta_2 = \pm\sqrt{1 - D^2} \tag{1.9}$$
and, hence, θ2 can be found by

$$\theta_2 = \tan^{-1}\frac{\pm\sqrt{1 - D^2}}{D} \tag{1.10}$$
The advantage of this latter approach is that both the elbow-up and elbow-down solutions are recovered by choosing the negative and positive signs in Equation (1.10), respectively.
It is left as an exercise (Problem 1–17) to show that θ1 is now given as

$$\theta_1 = \tan^{-1}\frac{y}{x} - \tan^{-1}\frac{a_2\sin\theta_2}{a_1 + a_2\cos\theta_2} \tag{1.11}$$
Notice that the angle θ1 depends on θ2. This makes sense physically since we would expect to require a different value for θ1, depending on which solution is chosen for θ2.
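The solution procedure above can be sketched numerically. The function names, the `elbow` keyword, and the test target below are illustrative assumptions, not notation from the text; the sign convention follows the remark above, with the negative sign in Equation (1.10) giving the elbow-up solution.

```python
import math

def two_link_ik(x, y, a1, a2, elbow="up"):
    """Inverse kinematics sketch for a planar two-link arm (Eqs. 1.8-1.11).

    Assumes link lengths a1, a2 and a reachable target (x, y).
    """
    D = (x**2 + y**2 - a1**2 - a2**2) / (2 * a1 * a2)   # Eq. (1.8)
    s = math.sqrt(max(0.0, 1.0 - D**2))                  # |sin(theta2)|, Eq. (1.9)
    if elbow == "up":
        s = -s                                           # negative sign: elbow-up
    theta2 = math.atan2(s, D)                            # Eq. (1.10)
    theta1 = math.atan2(y, x) - math.atan2(
        a2 * math.sin(theta2), a1 + a2 * math.cos(theta2))  # Eq. (1.11)
    return theta1, theta2

def forward(t1, t2, a1, a2):
    # Forward kinematics, used to verify that both solutions reach the target
    return (a1 * math.cos(t1) + a2 * math.cos(t1 + t2),
            a1 * math.sin(t1) + a2 * math.sin(t1 + t2))
```

Both the elbow-up and elbow-down joint angles map back to the same end-effector position under the forward kinematics, as the derivation requires.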
Chapter 6: Dynamics
In Chapter 6 we develop techniques based on Lagrangian dynamics for systematically deriving the equations of motion for serial-link manipulators. Deriving the dynamic equations of motion for robots is not a simple task due to the large number of degrees of freedom and the nonlinearities present in the system. We also discuss the so-called recursive Newton–Euler method for deriving the robot equations of motion. The Newton–Euler formulation is well-suited for real-time computation for both simulation and control applications.
Chapter 7: Path Planning and Trajectory Generation
The robot control problem is typically decomposed hierarchically into three tasks: path planning, trajectory generation, and trajectory tracking. The path planning problem, considered in Chapter 7, is to determine a path in task space (or configuration space) to move the robot to a goal position while avoiding collisions with objects in its workspace. These paths encode position and orientation information without timing considerations, that is, without considering velocities and accelerations along the planned paths. The trajectory generation problem, also considered in Chapter 7, is to generate reference trajectories that determine the time history of the manipulator along a given path or between initial and final configurations. These are typically given in joint space as polynomial functions of time. We discuss the most common polynomial interpolation schemes used to generate these trajectories.
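As an illustration of the polynomial interpolation schemes mentioned above, a common choice is a cubic in time whose coefficients are fixed by the boundary conditions: the joint starts at q0 and ends at qf with zero velocity at both ends. The interface below is a sketch, not the book's notation.

```python
def cubic_trajectory(q0, qf, T):
    """Cubic joint-space trajectory q(t) = q0 + a2*t^2 + a3*t^3 over [0, T],
    with q(0) = q0, q(T) = qf, and zero velocity at both endpoints."""
    dq = qf - q0
    a2 = 3.0 * dq / T**2      # from q(T) = qf and qdot(T) = 0
    a3 = -2.0 * dq / T**3
    def q(t):
        return q0 + a2 * t**2 + a3 * t**3
    def qdot(t):
        return 2.0 * a2 * t + 3.0 * a3 * t**2
    return q, qdot
```

Higher-order polynomials (e.g., quintics) are used the same way when accelerations at the endpoints must also be specified.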
Chapter 8: Independent Joint Control
Once reference trajectories for the robot are specified, it is the task of the control system to track them. In Chapter 8 we discuss the motion control problem. We treat the twin problems of tracking and disturbance rejection, which are to determine the control inputs necessary to follow, or track, a reference trajectory, while simultaneously rejecting disturbances due to unmodeled dynamic effects such as friction and noise. We first model the actuator and drive-train dynamics and discuss the design of independent joint control algorithms.
A block diagram of a single-input/single-output (SISO) feedback control system is shown in Figure 1.23. We detail the standard approaches to robot control based on both frequency-domain and state-space techniques. We also introduce the notion of feedforward control for tracking time-varying trajectories, together with the fundamental notion of computed torque, a feedforward disturbance cancellation scheme.
Figure 1.23 Basic structure of a feedback control system. The compensator measures the error between a reference and a measured output and produces a signal to the plant that is designed to drive the error to zero despite the presence of disturbances.
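The combination of feedback and feedforward can be sketched for a single joint modeled as a pure inertia, J q̈ = u. The plant model, gains, reference trajectory, and function name below are illustrative assumptions; the feedforward term J q̈_ref is a computed-torque-style disturbance cancellation.

```python
import math

def simulate_tracking(J=2.0, Kp=100.0, Kd=20.0, dt=1e-3, T=2.0):
    """Simulate a single joint J*q'' = u tracking q_ref(t) = sin(t),
    using PD feedback plus model-based acceleration feedforward.
    Returns the largest tracking error seen during the run."""
    q, qd = 0.0, 1.0              # start on the reference: sin(0)=0, cos(0)=1
    t, max_err = 0.0, 0.0
    while t < T:
        q_ref, qd_ref, qdd_ref = math.sin(t), math.cos(t), -math.sin(t)
        # feedforward + PD feedback
        u = J * qdd_ref + Kp * (q_ref - q) + Kd * (qd_ref - qd)
        qdd = u / J               # plant: J q'' = u
        q, qd = q + qd * dt, qd + qdd * dt   # forward-Euler integration
        t += dt
        max_err = max(max_err, abs(q - math.sin(t)))
    return max_err
```

With an exact plant model, the feedforward term does the bulk of the work and the PD feedback only has to reject the small residual (here, numerical integration) errors.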
Chapter 9: Nonlinear and Multivariable Control
In Chapter 9 we discuss more advanced control techniques based on the Lagrangian dynamic equations of motion derived in Chapter 6. We introduce the notion of inverse dynamics control as a means for compensating the complex nonlinear interaction forces among the links of the manipulator. Robust and adaptive control of manipulators are also introduced using the direct method of Lyapunov and so-called passivity-based control.
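The inverse dynamics idea can be sketched on a deliberately simple one-link model, where the only nonlinearity is the gravity torque. The pendulum model, parameter values, and gains below are illustrative assumptions, not the full manipulator equations derived in Chapter 6.

```python
import math

# Single-link pendulum stand-in for the manipulator dynamics:
#   m*l^2 * q'' + m*g*l*cos(q) = u      (all values illustrative)
m, l, grav = 1.0, 0.5, 9.81

def inverse_dynamics_control(q, qd, q_ref, qd_ref, qdd_ref, Kp=64.0, Kd=16.0):
    """Inverse dynamics (computed torque) law: cancel the model's
    nonlinearity, leaving linear error dynamics e'' + Kd*e' + Kp*e = 0."""
    # outer loop: commanded acceleration from reference plus PD on the error
    v = qdd_ref + Kd * (qd_ref - qd) + Kp * (q_ref - q)
    M = m * l**2                      # inertia
    g_q = m * grav * l * math.cos(q)  # gravity torque to be cancelled
    return M * v + g_q
```

When the model used in the controller matches the plant exactly, the closed loop is linear; robust and adaptive designs address the mismatch when it does not.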
Chapter 10: Force Control
In the example robot task above, once the manipulator has reached location A, it must follow the contour S while maintaining a constant force normal to the surface. Conceivably, knowing the location of the object and the shape of the contour, one could carry out this task using position control alone. This would be quite difficult to accomplish in practice, however. Since the manipulator itself possesses high rigidity, any errors in position due to uncertainty in the exact location of the surface or tool would give rise to extremely large forces at the end effector that could damage the tool, the surface, or the robot. A better approach is to measure the forces of interaction directly and use a force control scheme to accomplish the task. In Chapter 10 we discuss force control and compliance, along with common approaches to force control, namely hybrid control and impedance control.
Chapter 11: Vision-Based Control
Cameras have become reliable and relatively inexpensive sensors in many robotic applications. Unlike joint sensors, which give information about the internal configuration of the robot, cameras can be used not only to measure the position of the robot but also to locate objects in the robot’s workspace. In Chapter 11 we discuss the use of computer vision to determine position and orientation of objects.
In some cases, we may wish to control the motion of the manipulator relative to some target as the end effector moves through free space. Here, force control cannot be used. Instead, we can use computer vision to close the control loop around the vision sensor. This is the topic of Chapter 11.