Trust Region Policy Optimization
TRPO was proposed in 2015 by Berkeley researchers in the paper Trust Region Policy Optimization by John Schulman et al. (arXiv:1502.05477). The paper was a step toward improving the stability and consistency of stochastic policy gradient optimization and showed good results on various control tasks.
Unfortunately, both the paper and the method are quite math-heavy, so it can be hard to understand the details. The same can be said about the implementation, which uses the conjugate gradient method to efficiently solve the constrained optimization problem.
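To give a feeling for the conjugate gradient part, here is a minimal sketch (not the authors' code) of the method for approximately solving a linear system $Ax = b$ when only the matrix-vector product is available as a function. In TRPO-style implementations, that product is typically the Fisher-vector product, so the Fisher matrix never has to be formed explicitly; the function and variable names below are illustrative assumptions.

```python
import numpy as np

def conjugate_gradient(matvec, b, iters=10, eps=1e-10):
    """Approximately solve A x = b, given only A @ v via matvec()."""
    x = np.zeros_like(b)
    r = b.copy()          # residual b - A x (x starts at zero)
    p = r.copy()          # current search direction
    rs_old = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs_old / (p @ Ap + eps)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < eps:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Toy usage: solve a small symmetric positive-definite system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(lambda v: A @ v, b)
```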
As the first step, the TRPO method defines the discounted visitation frequencies of states: $\rho_\pi(s) = P(s_0 = s) + \gamma P(s_1 = s) + \gamma^2 P(s_2 = s) + \dots$. In this equation, $P(s_i = s)$ is the probability of state $s$ being met at position $i$ of the sampled trajectories. Then, TRPO defines the optimization objective as $L_\pi(\tilde{\pi}) = \eta(\pi) + \sum_s \rho_\pi(s) \sum_a \tilde{\pi}(a|s) A_\pi(s, a)$, where $\eta(\pi)$ is the expected discounted reward of the policy and $\tilde{\pi}$ defines the deterministic policy.
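To make the visitation-frequency definition concrete, the following small sketch (a hypothetical helper, not from the paper) estimates $\rho_\pi(s)$ by Monte Carlo from sampled trajectories over discrete states: each visit to a state at step $i$ contributes $\gamma^i$, and the sum is averaged over trajectories.

```python
from collections import defaultdict

def discounted_visitation(trajectories, gamma=0.99):
    """Estimate rho_pi(s) from a list of trajectories (lists of states).

    Each visit to state s at step i contributes gamma**i, matching
    rho_pi(s) = P(s0=s) + gamma*P(s1=s) + gamma^2*P(s2=s) + ...
    """
    rho = defaultdict(float)
    for traj in trajectories:
        for i, s in enumerate(traj):
            rho[s] += gamma ** i
    n = len(trajectories)
    return {s: v / n for s, v in rho.items()}

# Toy usage with integer state identifiers
trajs = [[0, 1, 2, 1], [0, 2, 2, 3]]
print(discounted_visitation(trajs, gamma=0.9))
```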
To address the...