Extensions to DQN: Rainbow
The Rainbow improvements bring a significant performance boost over vanilla DQN and have become standard in most Q-learning implementations. In this section, we discuss what those improvements are, how they help, and what their relative importance is. At the end, we talk about how DQN and these extensions collectively overcome the deadly triad.
The extensions
There are six extensions to DQN included in the Rainbow algorithm: i) double Q-learning, ii) prioritized replay, iii) dueling networks, iv) multi-step learning, v) distributional RL, and vi) noisy nets. Let's start with double Q-learning.
Double Q-learning
One of the well-known issues in Q-learning is that the Q-value estimates we obtain during learning are higher than the true Q-values because of the maximization operation. This phenomenon is called maximization bias, and we run into it because we take a maximum over noisy estimates: whichever action happens to be overestimated at that moment gets picked, so the resulting target is systematically biased upward.
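To see the effect concretely, here is a minimal NumPy sketch (an illustrative example, not from the original text) with a hypothetical setup in which every action has a true value of zero and our estimates are just the true value plus noise. Taking the max over the noisy estimates still comes out well above zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 actions, all with true Q-value 0, but our
# estimates carry zero-mean Gaussian noise.
n_actions, n_trials = 10, 10_000
true_q = np.zeros(n_actions)
noisy_estimates = true_q + rng.normal(scale=1.0, size=(n_trials, n_actions))

# The max over noisy estimates averages well above the true maximum of 0:
# the gap is the maximization bias.
print("mean of max over noisy estimates:", noisy_estimates.max(axis=1).mean())
print("true max:", true_q.max())
```

Even though no action is genuinely better than the others, the max operator keeps selecting whichever estimate the noise pushed highest, so the averaged maximum sits noticeably above the true value. This is exactly the bias that double Q-learning is designed to counteract.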