In earlier chapters, you saw how reinforcement learning (RL) can solve discrete-action problems, such as those that arise in Atari games. We will now build on this to tackle continuous, real-valued action problems. Continuous control problems are ubiquitous: for example, setting the motor torques of a robotic arm; the steering, acceleration, and braking of an autonomous car; driving a wheeled robot over terrain; and the roll, pitch, and yaw controls of a drone. For these problems, we train neural networks in an RL setting to output real-valued actions.
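To make "outputting real-valued actions" concrete, here is a minimal sketch (not from this chapter's code; it assumes PyTorch, and the class and parameter names are hypothetical) of a policy network that parameterizes a Gaussian over actions and squashes samples into a bounded range with tanh:

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Maps a state vector to a real-valued action in (-1, 1)."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, action_dim)             # Gaussian mean
        self.log_std = nn.Parameter(torch.zeros(action_dim))  # learnable spread

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        features = self.body(state)
        dist = torch.distributions.Normal(self.mean(features), self.log_std.exp())
        raw_action = dist.rsample()     # differentiable (reparameterized) sample
        return torch.tanh(raw_action)   # squash into the bounded action range

# Example: two joint torques for a robotic arm from an 8-dimensional state
policy = GaussianPolicy(state_dim=8, action_dim=2)
action = policy(torch.randn(1, 8))  # shape (1, 2), each value in (-1, 1)
```

The same head could instead emit torques, steering angles, or rotor commands; only the action dimensionality and range change.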
Many continuous control algorithms involve two neural networks: one referred to as the actor (policy-based) and the other as the critic (value-based). This family of algorithms is therefore referred to as Actor-Critic algorithms. The role of the actor is to learn a good policy, that is, a mapping from states to real-valued actions; the role of the critic is to estimate the value of the states (or state-action pairs) the actor visits, providing the learning signal that tells the actor how to improve its policy.
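The division of labor between the two networks can be sketched as follows. This is an illustrative, hypothetical skeleton assuming PyTorch, not any one algorithm's exact architecture; the training losses that couple the two networks vary by method:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Policy network: state -> real-valued action in (-1, 1)."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # bounded action output
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

class Critic(nn.Module):
    """Value network: (state, action) -> scalar estimate of Q(s, a)."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

# The critic's score of the actor's chosen action drives the policy update.
actor, critic = Actor(state_dim=8, action_dim=2), Critic(state_dim=8, action_dim=2)
state = torch.randn(1, 8)
q_value = critic(state, actor(state))  # scalar feedback for the actor
```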