Deep deterministic policy gradients
The next method that we will take a look at is called deep deterministic policy gradients (DDPG). It is an actor-critic method, but it has the very nice property of being off-policy. What follows is a simplified interpretation of the rigorous proofs. If you are interested in understanding the core of this method deeply, you can always refer to the article by Silver et al. called Deterministic policy gradient algorithms [Sil+14], published in 2014, and the paper by Lillicrap et al. called Continuous control with deep reinforcement learning [Lil15], published in 2015.
The simplest way to illustrate the method is through comparison with the already familiar A2C method. In A2C, the actor estimates the stochastic policy, which returns either a probability distribution over discrete actions or, as we have just covered in the previous section, the parameters of a normal distribution. In both cases, our policy is stochastic, so, in other words, our...
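To make the contrast concrete, here is a minimal NumPy sketch of the two policy types. The linear "networks", dimensions, and random weights are placeholders for illustration only, not the actual architectures used by A2C or DDPG; the point is only that a stochastic actor outputs distribution parameters to sample from, while a deterministic actor outputs the action itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder linear "actor networks": random weights stand in for
# trained parameters (hypothetical, illustration only).
obs_dim, act_dim = 4, 2
W_mu = rng.normal(size=(act_dim, obs_dim))
W_sigma = rng.normal(size=(act_dim, obs_dim))

state = rng.normal(size=obs_dim)

# Stochastic policy (A2C, continuous actions): the actor outputs the
# parameters of a normal distribution, and the action is sampled from it.
mu = np.tanh(W_mu @ state)
sigma = np.exp(0.1 * (W_sigma @ state))  # exp keeps sigma positive
stochastic_action = rng.normal(mu, sigma)

# Deterministic policy (DDPG): the actor maps the state directly to
# one action; no sampling is involved.
deterministic_action = np.tanh(W_mu @ state)

# Calling the deterministic actor again on the same state yields the
# identical action; the stochastic actor would give a new sample.
```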