Deterministic policy gradients
The next method that we'll take a look at is called deterministic policy gradients, a variation of the A2C method that has the very nice property of being off-policy. What follows is my very relaxed interpretation of the strict proofs. If you are interested in understanding the core of this method deeply, you can always refer to the article by David Silver and others called Deterministic Policy Gradient Algorithms, published in 2014, and the paper by Timothy P. Lillicrap and others called Continuous Control with Deep Reinforcement Learning, published in 2015.
The simplest way to illustrate the method is by comparison with the already familiar A2C. In A2C, the actor estimates the stochastic policy, which returns either a probability distribution over discrete actions or, as we've just seen in the previous section, the parameters of a normal distribution. In both cases, our policy was stochastic; in other words, the action taken...
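To make the contrast concrete, here is a minimal NumPy sketch of the two kinds of actor. The weights, shapes, and activation choices are made up for illustration; the point is only that the stochastic (A2C-style) actor samples from a distribution whose parameters the network outputs, while the deterministic actor maps the state directly to an action:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_actor(state, w_mu, w_sigma):
    # A2C-style continuous-action actor: the network (here stand-in linear
    # layers) outputs the mean and sigma of a normal distribution, and the
    # action is *sampled* from it.
    mu = state @ w_mu
    sigma = np.log1p(np.exp(state @ w_sigma))  # softplus keeps sigma > 0
    return rng.normal(mu, sigma)

def deterministic_actor(state, w):
    # DPG-style actor: the state is mapped directly to the action.
    # No sampling involved, so the same state always gives the same action.
    return np.tanh(state @ w)

state = rng.normal(size=4)
w = rng.normal(size=(4, 2))

a1 = deterministic_actor(state, w)
a2 = deterministic_actor(state, w)
assert np.allclose(a1, a2)  # deterministic: identical on repeated calls
```

The deterministic actor's output being a fixed function of the state is exactly what will later let us push gradients from the critic through the action, and what makes the method trainable off-policy.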