What this book covers
Chapter 1, What Is Reinforcement Learning?, contains an introduction to RL ideas and the main formal models.
Chapter 2, OpenAI Gym API and Gymnasium, introduces the practical aspects of RL, using the open source library Gym and its descendant, Gymnasium.
Chapter 3, Deep Learning with PyTorch, gives you a quick overview of the PyTorch library.
Chapter 4, The Cross-Entropy Method, introduces one of the simplest methods in RL to give you an impression of RL methods and problems.
Chapter 5, Tabular Learning and the Bellman Equation, opens Part 2 of the book, which is devoted to the value-based family of methods.
Chapter 6, Deep Q-Networks, describes deep Q-networks (DQNs), an extension of the basic value-based methods, allowing us to solve complicated environments.
Chapter 7, Higher-Level RL Libraries, describes the library PTAN, which we will use in the book to simplify the implementations of RL methods.
Chapter 8, DQN Extensions, gives a detailed overview of modern extensions to the DQN method that improve its stability and convergence in complex environments.
Chapter 9, Ways to Speed up RL Methods, provides an overview of ways to make the execution of RL code faster.
Chapter 10, Stocks Trading Using RL, is the first practical project and focuses on applying the DQN method to stock trading.
Chapter 11, Policy Gradients, opens Part 3 of the book and introduces another family of RL methods that is based on direct policy optimization.
Chapter 12, The Actor-Critic Method: A2C and A3C, describes one of the most widely used policy-based methods in RL.
Chapter 13, The TextWorld Environment, covers the application of RL methods to interactive fiction games.
Chapter 14, Web Navigation, is another long project that applies RL to web page navigation using the MiniWoB++ environment.
Chapter 15, Continuous Action Space, opens the advanced RL part of the book and describes the specifics of environments using continuous action spaces and various methods (widely used in robotics).
Chapter 16, Trust Regions, continues the discussion of continuous action spaces, describing the trust region set of methods: PPO, TRPO, ACKTR, and SAC.
Chapter 17, Black-Box Optimization in RL, shows another set of methods that don’t use gradients in their explicit form.
Chapter 18, Advanced Exploration, covers different approaches that can be used for better exploration of the environment — a very important aspect of RL.
Chapter 19, Reinforcement Learning with Human Feedback, introduces and implements a recent approach to guiding the learning process with human feedback. This method is widely used in training large language models (LLMs). In this chapter, we'll implement an RLHF pipeline from scratch and check its efficiency.
Chapter 20, AlphaGo Zero and MuZero, describes the AlphaGo Zero method and its evolution into MuZero, and applies both these methods to the game Connect 4.
Chapter 21, RL in Discrete Optimization, describes the application of RL methods to the domain of discrete optimization, using the Rubik’s cube as an environment.
Chapter 22, Multi-Agent RL, introduces a relatively new direction of RL methods for situations with multiple agents.