DQN on Pong
Before we jump into the code, a short introduction is needed. Our examples are becoming increasingly challenging and complex, which is not surprising: the complexity of the problems we are trying to tackle is also growing. The examples are kept as simple and concise as possible, but some of the code may be hard to understand at first.
Another thing to note is performance. Our previous examples, FrozenLake and CartPole, were not demanding from a resource perspective: observations were small, the NN had few parameters, and shaving extra milliseconds off the training loop wasn’t important. From now on, that’s no longer the case. A single observation from the Atari environment is about 100k values, which have to be preprocessed, rescaled, and stored in the replay buffer. One extra copy of this data array can cost you training speed, which is no longer measured in seconds and minutes but in hours, even on the fastest graphics processing unit (GPU).
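To make the “100k values” concrete, here is a minimal sketch (not the book’s exact code) of the kind of preprocessing that tames an Atari observation: a 210x160 RGB frame converted to an 84x84 grayscale one, following the classic DQN setup. The use of OpenCV (cv2) and the helper name preprocess are assumptions for illustration; any equivalent resize routine would do.

```python
import numpy as np
import cv2  # assumed here; any image-resize library works

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Shrink one 210x160x3 uint8 RGB frame to an 84x84 uint8 grayscale frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)                 # 210x160x3 -> 210x160
    small = cv2.resize(gray, (84, 84), interpolation=cv2.INTER_AREA)
    return small                                                   # stays uint8: 1 byte/pixel

raw = np.random.randint(0, 256, (210, 160, 3), dtype=np.uint8)     # stand-in for a real frame
print(raw.size)              # 100800 values -- the "~100k" per observation
print(preprocess(raw).size)  # 7056 values after preprocessing
```

Note the frames stay uint8 end to end: keeping them as bytes in the replay buffer and converting to float only when a training batch is assembled avoids a fourfold memory blowup and the extra copies the paragraph above warns about.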