SAC
In the final section, we will test our environments with a recent state-of-the-art method called SAC, which was proposed by a group of Berkeley researchers and introduced in the paper Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor, by Tuomas Haarnoja et al., arXiv:1801.01290, published in 2018.
At the moment, it is considered one of the best methods for continuous control problems. The core idea of the method is closer to DDPG than to A2C-style policy gradients, so SAC would arguably have fit more logically in Chapter 17, Continuous Action Space. However, covering it here gives us the chance to compare it directly with PPO, which was long considered the de facto standard for continuous control problems.
The central idea of the SAC method is entropy regularization, which adds a bonus reward at every timestep that is proportional to the entropy of the policy at that timestep. In mathematical terms, the agent optimizes an objective that augments the expected return with the policy's entropy.
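As a reference, this maximum entropy objective can be written as in the standard formulation from the SAC paper (the notation here is a sketch of that formulation, with α acting as a temperature coefficient that trades off the entropy bonus against the reward):

\[
\pi^{*} = \arg\max_{\pi} \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_{\pi}} \Big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big]
\]

where \(\rho_{\pi}\) is the distribution of state-action pairs induced by the policy \(\pi\), and \(\mathcal{H}(\pi(\cdot \mid s_t)) = \mathbb{E}_{a \sim \pi}[-\log \pi(a \mid s_t)]\) is the entropy of the policy at state \(s_t\). Intuitively, a policy that remains stochastic (high entropy) is rewarded for continuing to explore, while a policy that collapses to deterministic actions too early loses this bonus.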