Deterministic policy gradients
The next method that we will take a look at is called deterministic policy gradients, which is an actor-critic method with the very nice property of being off-policy. What follows is my very relaxed interpretation of the strict proofs. If you are interested in understanding the core of this method deeply, you can always refer to the article by David Silver and others called Deterministic Policy Gradient Algorithms, published in 2014 (http://proceedings.mlr.press/v32/silver14.pdf), and the paper by Timothy P. Lillicrap and others called Continuous Control with Deep Reinforcement Learning, published in 2015 (https://arxiv.org/abs/1509.02971).
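As an informal illustration of what "deterministic" means here, the toy sketch below contrasts a stochastic actor head, which returns the parameters of a distribution over actions, with a deterministic one, which returns the action itself. All names, shapes, and the linear "networks" are assumptions made up for this example; they come from neither of the papers above.

```python
import numpy as np

# Toy illustration (all names and shapes are assumptions for this sketch):
# a stochastic actor outputs distribution parameters, while a deterministic
# actor outputs the action directly.

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 4, 2

# Stand-in "networks": a single weight matrix each.
W_stochastic = rng.normal(size=(2 * ACTION_DIM, STATE_DIM))  # outputs mu and log_std
W_deterministic = rng.normal(size=(ACTION_DIM, STATE_DIM))   # outputs the action itself

def stochastic_actor(state):
    """Return parameters (mu, std) of a normal distribution over actions."""
    out = W_stochastic @ state
    mu, log_std = out[:ACTION_DIM], out[ACTION_DIM:]
    return mu, np.exp(log_std)

def deterministic_actor(state):
    """Return the action directly: a = mu(s), with no sampling involved."""
    return W_deterministic @ state

state = rng.normal(size=STATE_DIM)
mu, std = stochastic_actor(state)
action_sampled = rng.normal(mu, std)     # stochastic policy: sample from the distribution
action_det = deterministic_actor(state)  # deterministic policy: the output IS the action
```

Because the deterministic actor's output is the action itself, there is no sampling step between the network and the environment, which is what later makes it possible to push gradients from the critic straight through the actor.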
The simplest way to illustrate the method is through comparison with the already familiar A2C method. In A2C, the actor estimates the stochastic policy, which returns a probability distribution over discrete actions or, as we have just covered in the previous section, the parameters of a normal distribution...