DQN Extensions
Since DeepMind published its paper on the deep Q-network (DQN) model in 2015, many improvements and tweaks to the basic architecture have been proposed, significantly improving the convergence, stability, and sample efficiency of DeepMind's original DQN. In this chapter, we will take a deeper look at some of those ideas.
In October 2017, Hessel et al. from DeepMind published a paper called Rainbow: Combining Improvements in Deep Reinforcement Learning [Hes+18], which presented the six most important improvements to DQN; some had been invented back in 2015, while others were quite recent. In this paper, state-of-the-art results on the Atari games suite were reached just by combining those six methods.
Since 2017, more papers have been published and state-of-the-art results have been pushed further, but all the methods presented in the paper are still relevant and widely used in practice. For example, in 2023, Marc...