PPO
Historically, the PPO method came from the OpenAI team and was proposed long after TRPO, which dates from 2015. However, PPO is much simpler than TRPO, so we will start with it. The 2017 paper in which it was proposed is by John Schulman et al. and is called Proximal Policy Optimization Algorithms (arXiv:1707.06347).
The core improvement over the classic A2C method is a change to the formula used to estimate the policy gradients. Instead of using the gradient of the logarithm of the probability of the action taken, the PPO method uses a different objective: the ratio between the new and the old policy, scaled by the advantages.
In math form, the old A2C objective could be written as $\nabla_\theta J_\theta = \mathbb{E}_t\big[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, A_t\big]$. The new objective proposed by PPO is $J_\theta = \mathbb{E}_t\Big[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}\, A_t\Big]$.
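As a rough illustration (not code from the paper), the un-clipped surrogate objective can be computed in PyTorch roughly as follows; the tensor names new_logprob, old_logprob, and advantage are hypothetical and stand for the log-probabilities from the current policy, the log-probabilities stored when the actions were sampled, and the advantage estimates, respectively:

```python
import torch

def ppo_surrogate(new_logprob: torch.Tensor,
                  old_logprob: torch.Tensor,
                  advantage: torch.Tensor) -> torch.Tensor:
    # Ratio of new to old policy probabilities, computed in log space
    # for numerical stability: pi_new / pi_old = exp(log pi_new - log pi_old).
    # The old log-probabilities are detached, as no gradient flows through them.
    ratio = torch.exp(new_logprob - old_logprob.detach())
    # Surrogate objective: the ratio scaled by the advantage estimate,
    # negated and averaged so it can be minimized by a standard optimizer.
    return -(ratio * advantage).mean()
```

Note that this sketch shows only the raw ratio-times-advantage term; on its own, maximizing it suffers from the problem described next.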
The reason behind changing the objective is the same as with the cross-entropy method covered in Chapter 4, The Cross-Entropy Method: importance sampling. However, if we just start to blindly maximize this value, it may lead to a very large update to the policy...