Proximal policy optimization
In the previous section, we learned how TRPO works. TRPO keeps the policy updates in the trust region by imposing a constraint: the KL divergence between the old policy and the new policy must be less than or equal to δ. The problem with TRPO is that it is difficult to implement and computationally expensive. So, in this section, we will learn about one of the most popular state-of-the-art policy gradient algorithms, called Proximal Policy Optimization (PPO).
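As a quick refresher, TRPO's update can be stated as the constrained optimization problem below. This is a compact restatement in standard TRPO notation, where $\hat{A}_t$ denotes the advantage estimate and $\delta$ the trust region threshold; the exact symbols in the previous section may differ slightly:

$$
\max_{\theta} \; \mathbb{E}_t\!\left[ \frac{\pi_{\theta}(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)} \, \hat{A}_t \right]
\quad \text{subject to} \quad
\mathbb{E}_t\!\left[ D_{\text{KL}}\!\left( \pi_{\theta_{\text{old}}}(\cdot \mid s_t) \,\|\, \pi_{\theta}(\cdot \mid s_t) \right) \right] \le \delta
$$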
PPO improves upon the TRPO algorithm and is much simpler to implement. Like TRPO, PPO keeps the policy updates in the trust region, but unlike TRPO, it does not solve a constrained optimization problem; instead, it incorporates the trust region idea directly into the objective function. In the upcoming sections, we will learn exactly how PPO works and how it ensures that the policy updates stay in the trust region.
There are two types of PPO algorithm:

- PPO-clipped – In the PPO-clipped method, in order to ensure that the policy updates are in the trust region, PPO clips the probability ratio between the new and old policies in the objective function so that it stays within a small interval around 1 (a minimal code sketch follows this list).
- PPO-penalty – In the PPO-penalty method, the KL constraint is converted into a penalty term in the objective function, and the penalty coefficient is adapted during training so that the KL divergence stays close to a target value.
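To make the clipping idea concrete, here is a minimal sketch of the clipped surrogate loss in PyTorch. The function name, the default epsilon of 0.2, and the tensor layout are illustrative assumptions, not something taken from the original text:

```python
import torch

def ppo_clipped_loss(new_log_probs, old_log_probs, advantages, epsilon=0.2):
    # Probability ratio r(theta) = pi_new(a|s) / pi_old(a|s),
    # computed in log space for numerical stability
    ratio = torch.exp(new_log_probs - old_log_probs)

    # Unclipped surrogate objective: r(theta) * advantage
    unclipped = ratio * advantages

    # Clipped surrogate: the ratio is restricted to [1 - epsilon, 1 + epsilon],
    # which removes the incentive to move the policy far from the old one
    clipped = torch.clamp(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages

    # PPO maximizes the elementwise minimum of the two surrogates;
    # negating it gives a loss we can minimize with a standard optimizer
    return -torch.min(unclipped, clipped).mean()
```

In a typical training loop, `old_log_probs` are recorded when the trajectories are collected, while `new_log_probs` are recomputed from the current policy at every update step.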