Distributional policy gradients
As the last method of this chapter, we'll take a look at the very recent paper by Gabriel Barth-Maron, Matthew W. Hoffman, and others, called Distributional Policy Gradients, published in 2018. At the time of writing, this paper hadn't been uploaded to arXiv yet, as it had only been submitted for review to the ICLR 2018 conference. It is available at https://openreview.net/forum?id=SyZipzbCb.
The full name of the method is Distributed Distributional Deep Deterministic Policy Gradients, or D4PG for short. The authors proposed several modifications to the DDPG method we've just seen, aimed at better stability, convergence, and sample efficiency.
First of all, they adapted the distributional representation of the Q-value proposed by Marc G. Bellemare in the paper A Distributional Perspective on Reinforcement Learning, published in 2017. We discussed this approach in Chapter 7, DQN Extensions, when we talked about DQN improvements, so refer to that chapter for the details.
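In this representation, the critic returns a probability distribution over a fixed set of return values (atoms) instead of a single scalar, and the Q-value is recovered as the expectation of that distribution. The following is a minimal PyTorch sketch of such a critic; the class name DistributionalCritic and the hyperparameters N_ATOMS, V_MIN, and V_MAX are illustrative choices for this example, not the paper's exact code:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative hyperparameters: 51 atoms on a [-10, 10] support,
# as in the C51 setup; the exact values are problem-dependent
N_ATOMS = 51
V_MIN, V_MAX = -10.0, 10.0

class DistributionalCritic(nn.Module):
    def __init__(self, obs_size, act_size, hidden_size=400):
        super(DistributionalCritic, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_size + act_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, N_ATOMS),  # logits, one per atom
        )
        # fixed support z_i of the categorical distribution
        self.register_buffer("supports",
                             torch.linspace(V_MIN, V_MAX, N_ATOMS))

    def forward(self, obs, act):
        # raw logits of the predicted return distribution
        return self.net(torch.cat([obs, act], dim=1))

    def q_value(self, obs, act):
        # the scalar Q(s, a) is the expectation over the support:
        # Q = sum_i p_i * z_i
        probs = F.softmax(self.forward(obs, act), dim=1)
        return (probs * self.supports).sum(dim=1, keepdim=True)

During training, the predicted distribution is compared against the Bellman-projected target distribution (shifted by the reward, scaled by gamma, and projected back onto the fixed support) using the cross-entropy loss, just as in the C51 method from Chapter 7.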