TRPO
TRPO was proposed in 2015 by Berkeley researchers in the paper Trust Region Policy Optimization by John Schulman et al. (arXiv:1502.05477). The paper was a step towards improving the stability and consistency of stochastic policy gradient optimization, and it showed good results on various control tasks.
Unfortunately, both the paper and the method are quite math-heavy, so it can be hard to understand the details. The same could be said about the implementation, which uses the conjugate gradient method to efficiently solve the constrained optimization problem.
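To make the conjugate gradient step less abstract, the following is a minimal sketch of such a solver in PyTorch. It solves the linear system A x = b while accessing A only through a matrix-vector product callback, which is how TRPO-style implementations avoid building the Fisher information matrix explicitly. The function name conjugate_gradient, the mat_vec_prod callback, and the toy system at the bottom are illustrative assumptions, not code from the paper.

```python
import torch

def conjugate_gradient(mat_vec_prod, b, n_iters=10, tol=1e-10):
    """Solve A x = b, where A is available only as mat_vec_prod(v) = A @ v."""
    x = torch.zeros_like(b)
    r = b.clone()           # residual b - A x (x is zero initially)
    p = b.clone()           # current search direction
    rs_old = torch.dot(r, r)
    for _ in range(n_iters):
        Ap = mat_vec_prod(p)
        alpha = rs_old / torch.dot(p, Ap)
        x += alpha * p      # step along the search direction
        r -= alpha * Ap     # update the residual
        rs_new = torch.dot(r, r)
        if rs_new < tol:    # residual small enough, stop early
            break
        p = r + (rs_new / rs_old) * p   # new conjugate direction
        rs_old = rs_new
    return x

# usage on a tiny symmetric positive-definite system
A = torch.tensor([[3.0, 1.0], [1.0, 2.0]])
b = torch.tensor([1.0, 2.0])
print(conjugate_gradient(lambda v: A @ v, b))
```

In a TRPO implementation, mat_vec_prod would compute Fisher-vector products via automatic differentiation of the KL-divergence, so the full second-order matrix never needs to be stored.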
As the first step, the TRPO method defines the discounted visitation frequencies of the state:

$$\rho_\pi(s) = P(s_0 = s) + \gamma P(s_1 = s) + \gamma^2 P(s_2 = s) + \dots$$

In this equation, $P(s_i = s)$ is the sampled probability of state $s$ being met at position $i$ of the sampled trajectories. Then, TRPO defines the optimization objective as

$$L_\pi(\tilde{\pi}) = \eta(\pi) + \sum_s \rho_\pi(s) \sum_a \tilde{\pi}(a \mid s) A_\pi(s, a),$$

where $\eta(\pi)$ is the expected discounted reward of the policy and $\tilde{\pi} = \arg\max_a A_\pi(s, a)$ defines the deterministic policy.
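As a quick illustration of what $\rho_\pi(s)$ means in practice, the sketch below estimates it from sampled trajectories by Monte Carlo: each visit to a state at step $i$ contributes $\gamma^i$ divided by the number of trajectories, which on average approximates $\gamma^i P(s_i = s)$. The helper discounted_visitation_freq is hypothetical and assumes discrete, hashable states; it is only meant to ground the formula, not to be part of the TRPO method itself.

```python
import collections

def discounted_visitation_freq(trajectories, gamma=0.99):
    """Monte-Carlo estimate of rho_pi(s) = sum_i gamma^i * P(s_i = s)."""
    rho = collections.defaultdict(float)
    n = len(trajectories)
    for traj in trajectories:
        discount = 1.0
        for s in traj:
            # a visit to s at step i adds gamma^i / n; averaged over
            # trajectories this approximates gamma^i * P(s_i = s)
            rho[s] += discount / n
            discount *= gamma
    return dict(rho)

# usage on two toy trajectories with integer-labelled states
print(discounted_visitation_freq([[0, 1, 2], [0, 2, 2]], gamma=0.9))
```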
To address the issue of large policy...