Summary
In this chapter, we started with an introduction to deep learning and looked at the components of a typical deep learning workflow. Then, we learned how to build deep learning models using PyTorch.
Next, we shifted our focus to RL, where we learned about value functions and Q learning. We demonstrated how Q learning lets us build RL solutions without knowing the transition dynamics of the environment. We also investigated the main problem with tabular Q learning: the Q table becomes infeasible to store and update as the state space grows, which deep Q learning addresses by approximating the Q function with a neural network.
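As a refresher, the heart of tabular Q learning is the one-step update rule, Q(s, a) ← Q(s, a) + α(r + γ max Q(s′, a′) − Q(s, a)). The following is a minimal sketch of that update in Python; the table size, learning rate, and discount factor are illustrative assumptions, not values from the chapter:

```python
import numpy as np

n_states, n_actions = 16, 4          # illustrative sizes
Q = np.zeros((n_states, n_actions))  # the tabular Q function
alpha, gamma = 0.1, 0.99             # learning rate and discount factor

def q_update(state, action, reward, next_state, done):
    # Bootstrapped target: immediate reward plus the discounted
    # value of the best action in the next state (zero if terminal).
    target = reward if done else reward + gamma * Q[next_state].max()
    # Nudge Q(state, action) toward the target by a step of size alpha.
    Q[state, action] += alpha * (target - Q[state, action])
```

Note that the memory cost of `Q` grows with `n_states * n_actions`, which is exactly the scaling problem that motivates deep Q learning.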
Then, we looked at the issues with a vanilla DQN implementation, namely correlated training data and non-stationary targets, and saw how a target network and an experience replay mechanism overcome them. Finally, we learned how double deep Q learning helps us overcome the overestimation of Q values in a DQN.
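To make these ideas concrete, here is a minimal PyTorch sketch that combines an experience replay buffer, a target network, and a double deep Q learning target; the network architecture, hyperparameters, and tensor handling are illustrative assumptions rather than the chapter's actual implementation:

```python
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 4, 2, 0.99  # illustrative problem sizes

def make_net():
    # A small fully connected Q network: state in, one Q value per action out.
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

online_net = make_net()               # updated on every training step
target_net = make_net()               # frozen copy used to compute targets
target_net.load_state_dict(online_net.state_dict())

replay_buffer = deque(maxlen=10_000)  # stores (s, a, r, s', done) tuples
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)

def train_step(batch_size=32):
    # Sampling uniformly at random decorrelates consecutive transitions.
    batch = random.sample(replay_buffer, batch_size)
    states, actions, rewards, next_states, dones = map(
        lambda xs: torch.as_tensor(xs, dtype=torch.float32), zip(*batch))
    actions = actions.long()

    with torch.no_grad():
        # Double DQN: the online net *selects* the next action, the target
        # net *evaluates* it, which reduces overestimation of Q values.
        best_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, best_actions).squeeze(1)
        # Targets come from the frozen network, so they stay stationary
        # between synchronizations.
        targets = rewards + GAMMA * (1.0 - dones) * next_q

    q_values = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Every few hundred steps, sync the frozen copy:
#     target_net.load_state_dict(online_net.state_dict())
```

In the next chapter, you will learn how to use CNNs and RNNs...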