N-step DQN
The first improvement that we will implement and evaluate is quite an old one. It was first introduced by Richard Sutton in the paper Learning to Predict by the Methods of Temporal Differences [Sut88]. To get the idea, let's look at the Bellman update used in Q-learning once again:

Q(s_t, a_t) \leftarrow r_t + \gamma \max_a Q(s_{t+1}, a)
This equation is recursive, which means that we can express Q(s_{t+1}, a_{t+1}) in terms of itself, which gives us this result:

Q(s_t, a_t) \leftarrow r_t + \gamma \max_a \left[ r_{a,t+1} + \gamma \max_{a'} Q(s_{t+2}, a') \right]
Here, r_{a,t+1} is the local reward at time t + 1, obtained after taking action a. However, if we assume that the action a at step t + 1 was chosen optimally, or close to optimally, we can omit the inner \max_a operation and obtain this:

Q(s_t, a_t) \leftarrow r_t + \gamma r_{t+1} + \gamma^2 \max_{a'} Q(s_{t+2}, a')
This expression can be unrolled again and again, any number of times. As you may guess, this unrolling can be easily applied to our DQN update by replacing one-step transition sampling with sequences of n steps. To understand why this unrolling will help us to speed...
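To make the idea concrete, here is a minimal sketch of how n consecutive one-step transitions can be collapsed into a single n-step transition before it is stored in the replay buffer. The function name, the tuple layout (state, action, reward, done, next_state), and the default parameters are illustrative assumptions, not part of any particular library:

```python
def make_nstep(transitions, gamma=0.99):
    """Collapse n consecutive one-step transitions into one n-step transition.

    transitions: list of (state, action, reward, done, next_state) tuples
    for n consecutive environment steps (hypothetical layout, for illustration).
    Returns (first_state, first_action, n_step_reward, done, last_state).
    """
    first_state, first_action = transitions[0][0], transitions[0][1]
    total_reward, done, last_state = 0.0, False, transitions[-1][4]
    for i, (_, _, reward, step_done, next_state) in enumerate(transitions):
        # Accumulate the discounted sum r_t + gamma*r_{t+1} + ... + gamma^{n-1}*r_{t+n-1}
        total_reward += (gamma ** i) * reward
        if step_done:
            # The episode ended inside the window, so truncate the unroll here
            done, last_state = True, next_state
            break
    return first_state, first_action, total_reward, done, last_state
```

The training target for such a transition then uses the n-step bootstrap, R + \gamma^n \max_a Q(s_{t+n}, a), instead of the one-step target.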