In this chapter, we introduced policy gradient methods, where we learned how to drive our agent with a stochastic policy using the REINFORCE algorithm. After that, we learned that part of the problem with sampling from a stochastic policy is the variance that this randomness introduces into the gradient estimates. We found that this could be corrected using a pair of networks: an actor and a critic. In this case, the actor is the policy network, and it refers back to the critic network, which estimates a value function used to judge the actor's actions. Then, we saw how PG could be improved upon by looking at how DDPG works. Finally, we looked at what is considered one of the more complex methods in DRL, TRPO, and saw how it tries to address several shortcomings of PG methods.
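To make the actor-critic pairing recapped above concrete, here is a minimal sketch in PyTorch, assuming a small discrete-action environment; the class names, layer sizes, and 4-dimensional observation are illustrative assumptions rather than the chapter's exact code:

```python
# Minimal actor-critic sketch (illustrative; sizes and names are assumptions).
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Policy network: maps a state to a distribution over actions."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Softmax(dim=-1),
        )

    def forward(self, state):
        return self.net(state)  # action probabilities (stochastic policy)

class Critic(nn.Module):
    """Value network: maps a state to a single value estimate."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state):
        return self.net(state)  # V(s), used to reduce policy gradient variance

# Usage: sample an action from the actor and score the state with the critic.
state = torch.randn(1, 4)              # e.g. a 4-dimensional observation
actor, critic = Actor(4, 2), Critic(4)
probs = actor(state)
action = torch.distributions.Categorical(probs).sample()
value = critic(state)
```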
Continuing with our look at PG methods, we will move on to explore next-generation methods such as...