Summary
In this chapter, we learned about one of the most popular deep reinforcement learning algorithms, called DQN. We saw how deep neural networks are used to approximate the Q function. We also learned how to build an agent to play Atari games. Later, we looked at several advancements to the DQN, such as double DQN, which is used to avoid overestimating Q values. We then looked at prioritized experience replay, which prioritizes experiences for replay, and the dueling network architecture, which breaks the Q function computation down into two streams: a value stream and an advantage stream.
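To make these two ideas concrete, here is a minimal NumPy sketch of the double DQN target and the dueling aggregation; the toy functions online_q and target_q and the hard-coded stream outputs are illustrative placeholders, not the chapter's actual network code:

```python
import numpy as np

# Stand-ins for the online and target networks; in the chapter these
# are deep neural networks returning one Q value per action.
def online_q(state):   # online network, parameters theta
    return np.array([1.2, 0.7, 2.1])

def target_q(state):   # target network, parameters theta-minus
    return np.array([1.0, 0.9, 1.8])

reward, gamma, next_state = 1.0, 0.99, None

# Double DQN target: the online network selects the action and the
# target network evaluates it, which curbs overestimation of Q values.
best_action = np.argmax(online_q(next_state))
double_dqn_target = reward + gamma * target_q(next_state)[best_action]

# Dueling aggregation: combine the value stream V(s) with the
# advantage stream A(s, a), subtracting the mean advantage so the
# decomposition into the two streams is identifiable.
V = 1.5                           # scalar output of the value stream
A = np.array([0.3, -0.2, 0.6])    # per-action outputs of the advantage stream
Q = V + (A - A.mean())

print(double_dqn_target, Q)
```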
In the next chapter, Chapter 9, Playing Doom with Deep Recurrent Q Network, we will look at a really cool variant of DQN called DRQN, which makes use of an RNN to approximate the Q function.