- Learn with concise explanations, modern libraries, and diverse applications from games to stock trading and web navigation
- Develop deep RL models, improve their stability, and efficiently solve complex environments
- New content on RL from human feedback (RLHF), MuZero, and transformers
Start your journey into reinforcement learning (RL) and reward yourself with the third edition of Deep Reinforcement Learning Hands-On. This book takes you from the basics of RL to more advanced concepts with the help of various applications, including game playing, discrete optimization, stock trading, and web browser navigation. By walking you through landmark research papers in the field, this deep RL book will equip you with practical knowledge of RL and the theoretical foundation to understand and implement most modern RL papers.
The book retains the concise and easy-to-follow explanations of the previous editions. You'll work through practical and diverse examples, from grid environments and games to stock trading and RL agents in web environments, giving you a well-rounded understanding of RL, its capabilities, and its use cases. You'll learn about key topics, such as deep Q-networks (DQNs), policy gradient methods, continuous control problems, and highly scalable, non-gradient methods.
If you want to learn about RL through a practical approach using OpenAI Gym and PyTorch, concise explanations, and the incremental development of topics, then Deep Reinforcement Learning Hands-On, Third Edition, is your ideal companion.
This book is ideal for machine learning engineers, software engineers, and data scientists looking to learn and apply deep reinforcement learning in practice. It assumes familiarity with Python, calculus, and machine learning concepts. With practical examples and high-level overviews, it’s also suitable for experienced professionals looking to deepen their understanding of advanced deep RL methods and apply them across industries such as gaming and finance.
- Stay on the cutting edge with new content on MuZero, RL from human feedback (RLHF), and LLMs
- Evaluate RL methods, including cross-entropy, DQN, actor-critic, TRPO, PPO, DDPG, and D4PG
- Implement RL algorithms using PyTorch and modern RL libraries
- Build and train deep Q-networks to solve complex tasks in Atari environments
- Speed up RL models using algorithmic and engineering approaches
- Leverage advanced techniques like proximal policy optimization (PPO) for more stable training