Chapter 13 – TRPO, PPO, and ACKTR Methods
- The trust region is the region in which the actual function f(x) and its approximation remain close to each other. Within this region we can trust the approximation to be accurate, which is why updates are restricted to it.
- TRPO is a policy gradient algorithm that improves on policy gradient with baseline. It tries to make a large policy update while imposing a KL-divergence constraint so that the new policy does not deviate too much from the old policy. TRPO guarantees monotonic policy improvement, meaning the policy improves (or at least does not get worse) on every iteration.
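The two quantities in that constraint can be made concrete. The sketch below (illustrative function and variable names, not from any particular library) computes TRPO's surrogate objective, the probability ratio between new and old policies weighted by the advantage, together with the mean KL divergence, for discrete action distributions:

```python
import numpy as np

def surrogate_and_kl(old_probs, new_probs, actions, advantages):
    """Surrogate objective and mean KL divergence for discrete policies.

    old_probs / new_probs: (batch, n_actions) action probabilities under
    the old and new policy; actions: indices of the actions actually taken;
    advantages: the corresponding advantage estimates.
    """
    idx = np.arange(len(actions))
    # Probability ratio pi_new(a|s) / pi_old(a|s) for the taken actions
    ratio = new_probs[idx, actions] / old_probs[idx, actions]
    surrogate = np.mean(ratio * advantages)
    # Mean KL(pi_old || pi_new) over the batch
    kl = np.mean(np.sum(old_probs * np.log(old_probs / new_probs), axis=1))
    return surrogate, kl

old = np.array([[0.5, 0.5], [0.6, 0.4]])
new = np.array([[0.55, 0.45], [0.5, 0.5]])
s, kl = surrogate_and_kl(old, new, np.array([0, 1]), np.array([1.0, -0.5]))
```

TRPO maximizes the surrogate while keeping the KL term below a small threshold delta; if the two policies were identical, the ratio would be 1 everywhere and the KL would be zero.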
- Like gradient descent, conjugate gradient descent also tries to find the minimum of the function; however, its search directions differ from those of gradient descent, and for an N-dimensional quadratic problem it converges in at most N iterations.
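The N-iteration convergence property can be seen on a small linear system. This is a minimal sketch of the classic conjugate gradient method for solving Ax = b with a symmetric positive-definite A (equivalently, minimizing the quadratic ½xᵀAx − bᵀx), which is the subproblem TRPO solves to get its update direction:

```python
import numpy as np

def conjugate_gradient(A, b, max_iters=None, tol=1e-10):
    """Solve Ax = b for symmetric positive-definite A via conjugate gradients."""
    n = len(b)
    max_iters = max_iters or n      # at most N iterations for an N x N system
    x = np.zeros(n)
    r = b - A @ x                   # residual (negative gradient of the quadratic)
    p = r.copy()                    # initial search direction
    rs_old = r @ r
    for _ in range(max_iters):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # new direction, A-conjugate to the previous ones
        rs_old = rs_new
    return x

# 3x3 SPD system: CG recovers the exact solution within 3 iterations
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = conjugate_gradient(A, b)
```

Unlike plain gradient descent, each new direction is chosen to be A-conjugate to all previous ones, so progress made along earlier directions is never undone.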
- The update rule of TRPO is given as the constrained optimization problem: maximize over θ the surrogate objective E_t[ (π_θ(a_t|s_t) / π_θ_old(a_t|s_t)) · A_t ], subject to the trust region constraint E_t[ KL(π_θ_old(·|s_t) ‖ π_θ(·|s_t)) ] ≤ δ.
- PPO improves on TRPO by replacing the hard KL constraint with a clipped surrogate objective: the probability ratio between the new and old policies is clipped to the range [1 − ε, 1 + ε], which discourages large policy updates while needing only first-order optimization, making PPO much simpler to implement than TRPO.
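PPO's clipping rule is short enough to write out directly. A minimal sketch of the clipped surrogate objective, L = E[ min(r·A, clip(r, 1 − ε, 1 + ε)·A) ], where r is the probability ratio and A the advantage:

```python
import numpy as np

def ppo_clipped_objective(ratio, advantages, eps=0.2):
    """PPO clipped surrogate: mean of min(r * A, clip(r, 1-eps, 1+eps) * A)."""
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return np.mean(np.minimum(ratio * advantages, clipped * advantages))

ratio = np.array([1.5, 0.5])   # pi_new / pi_old for two sampled actions
adv = np.array([1.0, 1.0])     # both actions judged better than average
obj = ppo_clipped_objective(ratio, adv)
```

With ε = 0.2, the first ratio (1.5) is clipped down to 1.2, so the objective gives no extra reward for pushing the policy further in that direction; taking the minimum makes the bound pessimistic, which is what removes the need for TRPO's explicit KL constraint.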