Prioritized replay buffer
The next very useful idea for improving DQN training was proposed in 2015 in the paper Prioritized Experience Replay ([7] Schaul and others, 2015). This method tries to improve the sample efficiency of the replay buffer by prioritizing its samples according to the training loss.
The basic DQN used the replay buffer to break the correlation between immediate transitions in our episodes. As we discussed in Chapter 6, Deep Q-Networks, the samples we experience during an episode are highly correlated, as most of the time the environment is "smooth" and doesn't change much as a result of our actions. However, the SGD method assumes that the data we use for training has the i.i.d. (independent and identically distributed) property. To solve this problem, the classic DQN method uses a large buffer of transitions, which is sampled uniformly at random to obtain the next training batch.
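As a reminder of the baseline that prioritized replay improves upon, here is a minimal sketch of such a uniform-sampling buffer. The class and field names are illustrative assumptions, not the exact implementation from Chapter 6:

```python
import collections
import random

# Illustrative transition record; field names are an assumption.
Transition = collections.namedtuple(
    "Transition", ["state", "action", "reward", "done", "next_state"])


class ReplayBuffer:
    def __init__(self, capacity):
        # Fixed-size buffer: old transitions are dropped automatically.
        self.buffer = collections.deque(maxlen=capacity)

    def append(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform random sampling breaks the correlation between
        # consecutive transitions of an episode.
        return random.sample(self.buffer, batch_size)
```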
The authors of the paper questioned this uniform random sampling policy and proved that by assigning priorities to buffer samples according to the training loss, and sampling the buffer proportionally to those priorities, we can significantly improve the convergence and the policy quality of the DQN.
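To make the idea concrete, the following is a minimal sketch of proportional prioritization using a plain list and an O(N) sampling step (the paper uses a sum-tree structure to make sampling efficient). The class name, the `PRIO_ALPHA` constant, the `update_priorities` method, and the small additive constant are illustrative assumptions, not the book's exact code:

```python
import numpy as np

PRIO_ALPHA = 0.6  # assumed value; controls how strongly priorities skew sampling


class PrioReplayBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.priorities = np.zeros((capacity,), dtype=np.float32)
        self.pos = 0

    def append(self, transition):
        # New samples get the current maximum priority, so every
        # transition is likely to be trained on at least once.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sampling probability is proportional to priority**alpha.
        prios = self.priorities[:len(self.buffer)]
        probs = prios ** PRIO_ALPHA
        probs /= probs.sum()
        indices = np.random.choice(len(self.buffer), batch_size, p=probs)
        samples = [self.buffer[i] for i in indices]
        return samples, indices

    def update_priorities(self, indices, losses):
        # After a training step, priorities are refreshed from the
        # per-sample loss values; the small constant keeps them nonzero.
        for idx, loss in zip(indices, losses):
            self.priorities[idx] = abs(loss) + 1e-5
```

Compared to the uniform buffer above, the training loop now has to pass the sampled indices back via `update_priorities` after computing the per-sample loss, so that "surprising" transitions are replayed more often.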