Summary
In this chapter, we learned how to combine deep learning techniques with a DQN model and train it to play the Atari game Breakout. We first looked at adding convolutional layers to the agent so that it could process screenshots from the game, which helped it better understand the game environment.
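As a refresher, the sketch below shows what such a convolutional Q-network can look like. This is a minimal illustration in PyTorch, assuming grayscale 84x84 frames and the widely used three-layer Atari convolutional stack; the class name `ConvDQN` and the exact layer sizes are illustrative, not necessarily the chapter's code.

```python
import torch
import torch.nn as nn

class ConvDQN(nn.Module):
    """Convolutional Q-network for Atari-style screens (illustrative sketch)."""
    def __init__(self, in_channels: int = 4, num_actions: int = 4):
        super().__init__()
        # Convolutional layers extract spatial features from raw pixels.
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        # Fully connected head maps the features to one Q-value per action.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Example: a batch of 8 stacked-frame observations -> Q-values for 4 actions.
q_values = ConvDQN()(torch.zeros(8, 4, 84, 84))
print(q_values.shape)  # torch.Size([8, 4])
```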
We then took things a step further and added an RNN on top of the CNN's outputs. We created a sequence of images and fed it to an LSTM layer. This sequential model gave the DQN agent the ability to "visualize" the direction in which the ball was moving. This kind of model is called a Deep Recurrent Q-Network (DRQN).
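The following sketch recaps that idea: per-frame CNN features are fed through an LSTM, and the Q-values are read from the last time step. Again, this is a hedged PyTorch illustration under the same assumptions as above (single grayscale frames per step, illustrative layer sizes and names), not the chapter's exact implementation.

```python
import torch
import torch.nn as nn

class DRQN(nn.Module):
    """Deep Recurrent Q-Network: per-frame CNN features fed through an LSTM."""
    def __init__(self, num_actions: int = 4, hidden_size: int = 256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.lstm = nn.LSTM(64 * 7 * 7, hidden_size, batch_first=True)
        self.q_head = nn.Linear(hidden_size, num_actions)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, seq_len, 1, 84, 84) -- one grayscale frame per step.
        b, t = frames.shape[:2]
        feats = self.cnn(frames.reshape(b * t, *frames.shape[2:]))  # (b*t, 3136)
        out, _ = self.lstm(feats.reshape(b, t, -1))                 # (b, t, hidden)
        return self.q_head(out[:, -1])                              # Q-values from the last step

# Example: a batch of 8 sequences of 4 consecutive frames.
print(DRQN()(torch.zeros(8, 4, 1, 84, 84)).shape)  # torch.Size([8, 4])
```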
Finally, we added an attention mechanism and trained a Deep Attention Recurrent Q-Network (DARQN) model to play Breakout. This mechanism helped the model focus on the most relevant parts of previous states and improved its performance significantly. This field is still evolving: new deep learning techniques and models continue to be designed, outperforming previous generations in the process.
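To make the attention step concrete, the sketch below shows one common way such soft attention can be computed: the LSTM's hidden state scores each spatial location of the CNN feature map, and the weighted sum becomes the context vector fed to the recurrent layer. The module name, dimensions, and scoring function are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftSpatialAttention(nn.Module):
    """Soft attention over CNN feature-map locations, conditioned on the LSTM state."""
    def __init__(self, feat_dim: int = 64, hidden_size: int = 256, attn_dim: int = 128):
        super().__init__()
        self.proj_feat = nn.Linear(feat_dim, attn_dim)
        self.proj_hidden = nn.Linear(hidden_size, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, feat_map: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        # feat_map: (batch, locations, feat_dim); hidden: (batch, hidden_size)
        scores = self.score(torch.tanh(
            self.proj_feat(feat_map) + self.proj_hidden(hidden).unsqueeze(1)
        ))                                      # (batch, locations, 1)
        weights = F.softmax(scores, dim=1)      # attention weights over locations
        return (weights * feat_map).sum(dim=1)  # context vector: (batch, feat_dim)

# Example: 49 spatial locations (a 7x7 feature map) with 64 channels each.
ctx = SoftSpatialAttention()(torch.zeros(8, 49, 64), torch.zeros(8, 256))
print(ctx.shape)  # torch.Size([8, 64])
```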
In the next chapter, you will be introduced...