Deep Reinforcement Learning
Machine learning is usually classified into different paradigms, such as supervised learning, unsupervised learning, semi-supervised learning, self-supervised learning, and reinforcement learning (RL). Supervised learning requires labeled data and is currently the most widely used machine learning paradigm. However, applications based on unsupervised and semi-supervised learning, which require few or no labels, have been steadily on the rise, especially in the form of generative models. The rise of Large Language Models (LLMs) has further shown that self-supervised learning, in which the labels are implicit within the data itself, is an especially promising machine learning paradigm.
RL, on the other hand, is a branch of machine learning that is considered the closest we have come to emulating how humans learn. It remains an area of active research and development, still in its early stages but with some promising results. A prominent example is the famous AlphaGo model, built by Google’s DeepMind, which defeated the world’s best Go player.
In supervised learning, we usually feed the model atomic input-output data pairs and expect it to learn the output as a function of the input. In RL, we are not interested in learning such individual input-to-output mappings. Instead, we want to learn a strategy (or policy) that enables us to take a sequence of steps (or actions), starting from the input (state), in order to obtain the final output or achieve the final goal.
Looking at a photo and deciding whether it’s a cat or a dog is an atomic input-output learning task that can be solved through supervised learning. However, looking at a chess board and deciding the next move with the aim of winning the game requires strategy, and we need RL for such tasks.
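To make the state, action, and policy terminology concrete, the following is a minimal sketch of the agent-environment interaction loop that RL formalizes. It uses the `gymnasium` package and a random policy purely for illustration; the `CartPole-v1` environment and the random policy are assumptions made for this sketch, not the Pong setup built later in this chapter.

```python
# A minimal sketch of the agent-environment loop (illustrative only).
# Assumes the gymnasium package is installed (pip install gymnasium).
import gymnasium as gym

env = gym.make("CartPole-v1")
state, info = env.reset(seed=0)

episode_return = 0.0
done = False
while not done:
    # A policy maps the current state to an action; here it is simply random.
    action = env.action_space.sample()
    # The environment returns the next state and a reward for the action taken.
    state, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    done = terminated or truncated

print(f"Total reward collected over the episode: {episode_return}")
env.close()
```

The goal of RL algorithms such as Q-learning is to replace the random action choice in this loop with a learned policy that maximizes the total reward collected over an episode.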
In the previous chapters, we came across examples of supervised learning such as building a classifier to classify handwritten digits using the MNIST dataset. We also explored unsupervised learning while building a text generation model using an unlabeled text corpus.
In this chapter, we will uncover some of the basic concepts of RL and deep reinforcement learning (DRL). We will then focus on a specific and popular type of DRL model: the deep Q-network (DQN). Using PyTorch, we will build a DRL application in which we train a DQN model to learn how to play the game of Pong against a computer opponent (otherwise known as a bot).
By the end of this chapter, you will have all the necessary context to start working on your own DRL projects in PyTorch, as well as hands-on experience in building a DQN model for a real-life problem. The skills you gain in this chapter will also be useful for working on other RL problems.
This chapter is broken down into the following topics:
- Reviewing RL concepts
- Discussing Q-learning
- Understanding deep Q-learning
- Building a DQN model in PyTorch
All the code files for this chapter can be found at https://github.com/arj7192/MasteringPyTorchV2/tree/main/Chapter11.