In this chapter, we introduced the A3C algorithm, an on-policy algorithm that is applicable to both discrete and continuous action problems. You saw how three different loss terms (the policy loss, the value loss, and an entropy bonus) are combined into a single objective and optimized. Python's threading library is useful for running multiple worker threads, each holding its own copy of the policy network. These workers compute the policy gradients and pass them on to the master, which updates the shared neural network parameters. We applied A3C to train agents for the CartPole and LunarLander problems, and the agents learned both very well. A3C is a very robust algorithm and does not require a replay buffer, although it does use a small local buffer to collect a handful of experiences, which are then used to update the networks. Lastly, a synchronous version of the algorithm, called A2C, was also introduced.
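The following is a minimal structural sketch of the asynchronous worker/master pattern described above, using Python's threading library. The RL-specific pieces (environment rollouts and the three-term A3C loss) are stood in for by a toy quadratic objective, and names such as `N_WORKERS`, `N_STEPS`, and `worker()` are illustrative, not taken from the chapter's code:

```python
import threading
import numpy as np

N_WORKERS = 4        # number of parallel actor-learner threads
N_STEPS = 5          # size of each worker's local experience buffer
N_UPDATES = 200      # asynchronous updates performed per worker
LR = 0.1

global_params = np.array([5.0, -3.0])   # shared ("master") parameters
lock = threading.Lock()                  # guards updates to global_params


def compute_gradient(params, rng):
    """Stand-in for the gradient of the combined A3C loss
    (policy loss + value loss - entropy bonus) computed over
    N_STEPS of locally collected experience."""
    noise = rng.normal(scale=0.1, size=params.shape)
    return 2.0 * params + noise          # gradient of ||params||^2, plus noise


def worker(worker_id):
    rng = np.random.default_rng(worker_id)
    for _ in range(N_UPDATES):
        # 1. Copy the latest global parameters into the local network.
        with lock:
            local_params = global_params.copy()
        # 2. Collect a small batch of experience and compute gradients locally.
        grad = compute_gradient(local_params, rng)
        # 3. Pass the gradients to the master, which updates the shared
        #    parameters asynchronously.
        with lock:
            global_params -= LR * grad


threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("final shared parameters:", global_params)  # converges near [0, 0]
```

In the synchronous A2C variant, step 3 would instead wait for all workers to finish their batches and apply a single aggregated update, which removes the lock contention and the staleness of asynchronous gradients.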
This...