TRPO
TRPO was proposed in 2015 by Berkeley researchers in the paper Trust Region Policy Optimization by Schulman et al. [Sch15]. The paper was a step towards improving the stability and consistency of stochastic policy gradient optimization, and the method has shown good results on various control tasks.
Unfortunately, both the paper and the method are quite math-heavy, so the details can be hard to understand. The same could be said about the implementation, which uses the conjugate gradient method to solve the constrained optimization problem efficiently.
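To give a flavor of that machinery, here is a minimal sketch of the conjugate gradient method for solving a linear system Ax = b using only matrix-vector products, which is how TRPO implementations avoid forming the Fisher information matrix explicitly. The function names and the NumPy setting are illustrative and not taken from the paper's code.

```python
import numpy as np

def conjugate_gradients(mat_vec_product, b, n_iters=10, residual_tol=1e-10):
    """Solve A x = b given only a function computing A @ v (illustrative sketch)."""
    b = np.asarray(b, dtype=np.float64)
    x = np.zeros_like(b)
    r = b.copy()              # residual: b - A @ x, with x = 0 initially
    p = r.copy()              # current search direction
    r_dot_r = r.dot(r)
    for _ in range(n_iters):
        Ap = mat_vec_product(p)
        alpha = r_dot_r / p.dot(Ap)
        x += alpha * p        # step along the search direction
        r -= alpha * Ap       # update the residual
        new_r_dot_r = r.dot(r)
        if new_r_dot_r < residual_tol:
            break
        p = r + (new_r_dot_r / r_dot_r) * p   # next conjugate direction
        r_dot_r = new_r_dot_r
    return x

# Example: solve a small symmetric positive-definite system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x = conjugate_gradients(lambda v: A @ v, np.array([1.0, 2.0]))
```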
As the first step, the TRPO method defines the discounted visitation frequencies of the states:

$$\rho_\pi(s) = P(s_0 = s) + \gamma P(s_1 = s) + \gamma^2 P(s_2 = s) + \cdots$$

In this equation, $P(s_i = s)$ is the sampled probability of encountering state $s$ at position $i$ of the sampled trajectories.
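As a quick illustration of this definition, the sketch below estimates $\rho_\pi(s)$ from a batch of sampled trajectories with discrete states. The function name and the assumption of hashable states are mine, not part of the original method description.

```python
from collections import defaultdict

def discounted_visitation(trajectories, gamma=0.99):
    """Estimate rho_pi(s) from sampled trajectories (each a list of states)."""
    rho = defaultdict(float)
    for states in trajectories:
        for i, s in enumerate(states):
            # a visit to state s at position i contributes gamma**i
            rho[s] += gamma ** i
    # average over trajectories to approximate the expectation
    n = len(trajectories)
    return {s: v / n for s, v in rho.items()}

# Example: two short trajectories over states labeled by strings
print(discounted_visitation([["s0", "s1", "s0"], ["s0", "s2"]], gamma=0.9))
```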
Then, TRPO defines the optimization objective as

$$L_\pi(\tilde{\pi}) = \eta(\pi) + \sum_s \rho_\pi(s) \sum_a \tilde{\pi}(a \mid s) A_\pi(s, a),$$

where

$$\eta(\pi) = \mathbb{E}_{s_0, a_0, \ldots} \left[ \sum_{t=0}^{\infty} \gamma^t r(s_t) \right]$$

is the expected discounted reward of the policy...
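In this objective, $\pi$ is the current policy, $\tilde{\pi}$ is the candidate new policy, and $A_\pi(s, a)$ is the advantage under the current policy. In practice, the sums over states and actions are not computed exactly; implementations usually estimate the policy-dependent part from samples using an importance ratio between the new and the old policy. Below is a minimal PyTorch sketch of that sample-based estimate; the tensor names (new_log_probs, old_log_probs, advantages) are illustrative.

```python
import torch

def surrogate_objective(new_log_probs, old_log_probs, advantages):
    """Sample estimate of E[ pi_new(a|s) / pi_old(a|s) * A_pi_old(s, a) ]."""
    ratio = torch.exp(new_log_probs - old_log_probs)  # importance weights
    return (ratio * advantages).mean()
```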