In this chapter, we went deeper into RL algorithms and discussed how they can be combined with function approximators so that RL can be applied to a broader variety of problems. Specifically, we described how function approximation and deep neural networks can be used in Q-learning, and the instabilities that arise from doing so. We showed that, in practice, deep neural networks cannot be combined with Q-learning without further modifications.
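To make the source of that instability concrete, the following minimal sketch shows a naive deep Q-learning update in which the same network produces both the prediction and the bootstrap target. The network architecture, hyperparameters, and function names here are illustrative assumptions, not values from the chapter:

```python
import torch
import torch.nn as nn

# Illustrative network and hyperparameters; these are assumptions,
# not values taken from the chapter.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def naive_q_update(state, action, reward, next_state, done):
    # The bootstrap target is computed with the SAME network that is
    # being updated: every gradient step also moves the target, which
    # is one source of the instability discussed in the chapter.
    with torch.no_grad():
        target = reward + gamma * (1 - done) * q_net(next_state).max()
    prediction = q_net(state)[action]
    loss = (prediction - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because each update shifts the very network that defines the target, the learning signal chases a moving objective, and training can diverge.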
The first algorithm that was able to use deep neural networks in combination with Q-learning was DQN. It integrates two key ingredients to stabilize learning and master complex tasks such as Atari 2600 games. The two ingredients are the replay buffer, which is used to store old experience, and a separate target network, which is updated less frequently than the online network. The former is employed to exploit the off-policy nature of Q-learning, allowing the agent to learn from transitions gathered by older versions of the policy while breaking the correlation between consecutive samples; the latter provides stable targets for the Q-learning updates.
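A minimal sketch of how these two ingredients fit together is shown below, assuming a deque-based uniform replay buffer, transitions stored as tensors, and a periodic hard copy of the online weights into the target network. The synchronization period, batch size, and network layout are illustrative assumptions:

```python
import copy
import random
from collections import deque

import torch
import torch.nn as nn

gamma, batch_size, sync_every = 0.99, 32, 10_000

# Online network, and a target network initialized as an exact copy.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = copy.deepcopy(q_net)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# Replay buffer: old transitions are stored and sampled uniformly,
# which reuses experience and decorrelates consecutive updates.
buffer = deque(maxlen=100_000)

def dqn_update(step):
    batch = random.sample(buffer, batch_size)
    states, actions, rewards, next_states, dones = map(torch.stack, zip(*batch))
    # Targets come from the slowly-updated target network, not q_net,
    # so the objective stays fixed between synchronizations.
    with torch.no_grad():
        targets = rewards + gamma * (1 - dones) * target_net(next_states).max(dim=1).values
    predictions = q_net(states).gather(1, actions.long().unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(predictions, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Periodically synchronize the target network with the online one.
    if step % sync_every == 0:
        target_net.load_state_dict(q_net.state_dict())
```

Freezing the target weights between synchronizations trades a small amount of target staleness for a much more stable regression problem at each step.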