Rainbow DQN was introduced in the paper Rainbow: Combining Improvements in Deep Reinforcement Learning, published by DeepMind in October 2017, to address several shortcomings in DQN. DQN itself was introduced by the same group at DeepMind, led by David Silver, to play Atari games at better-than-human levels. As we have learned over several chapters, while the algorithm was groundbreaking, it did suffer from some shortcomings, a few of which we have already addressed with advances such as DDQN and experience replay. To understand everything that Rainbow encompasses, let's look at the main elements it combines from RL/DRL:
- DQN: This is, of course, the core algorithm, one we should have a good understanding of by now. We covered DQN in Chapter 6, Going Deep with DQN.
- Double DQN: This is not to be confused with DDQN or...