The approach
A key aspect of the book is its orientation to practice. Every method is implemented for various environments, from the very trivial to the quite complex. I’ve tried to make the examples clean and easy to understand, which the expressiveness and power of PyTorch made possible. At the same time, the complexity and requirements of the examples are oriented to RL hobbyists without access to very large computational resources, such as clusters of graphics processing units (GPUs) or very powerful workstations. This, I believe, will make the fun-filled and exciting RL domain accessible to a much wider audience than just research groups or large artificial intelligence companies. That said, this is still deep RL, so access to a GPU is highly recommended: the computation speedup will make experimentation much more convenient (waiting several weeks for a single optimization to complete is not much fun). Approximately half of the examples in the book will benefit from being run on a GPU.
In addition to the traditional medium-sized environments used in RL, such as Atari games or continuous control problems, this book contains several chapters (10, 13, 14, 19, 20, and 21) with larger projects that illustrate how RL methods can be applied to more complicated environments and tasks. These examples are still not full-sized, real-life projects (each of those would occupy a separate book on its own), but larger problems showing how the RL paradigm can be applied to domains beyond the well-established benchmarks.
Another thing to note about the examples in Parts 1, 2, and 3 of the book is that I’ve tried to make them self-contained, with the source code shown in full. Sometimes this leads to repeated code (for example, the training loop is very similar in most of the methods), but I believe that giving you the freedom to jump directly into the method you want to learn is more important than avoiding a few repetitions. All the examples in the book are available on GitHub at https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On-3E/, and you’re welcome to fork them, experiment, and contribute.
Besides the source code, several chapters (15, 16, 19, and 22) are accompanied by video recordings of the trained models. All these recordings are available in the following YouTube playlist: https://youtube.com/playlist?list=PLMVwuZENsfJmjPlBuFy5u7c3uStMTJYz7.