Deep Reinforcement Learning with Python, Second Edition: Master classic RL, deep RL, distributional RL, inverse RL, and more with OpenAI Gym and TensorFlow
Covers a wide spectrum of basic-to-advanced RL algorithms, with a mathematical explanation of each
Learn how to implement each algorithm by following examples with line-by-line code explanations
Explore the latest RL methodologies such as DDPG, PPO, and the use of expert demonstrations
Description
With significant enhancements in the quality and quantity of algorithms in recent years, this second edition of Hands-On Reinforcement Learning with Python has been revamped into an example-rich guide to learning state-of-the-art reinforcement learning (RL) and deep RL algorithms with TensorFlow 2 and the OpenAI Gym toolkit.
In addition to exploring RL basics and foundational concepts such as the Bellman equation, Markov decision processes, and dynamic programming algorithms, this second edition dives deep into the full spectrum of value-based, policy-based, and actor-critic RL methods. It explores state-of-the-art algorithms such as DQN, TRPO, PPO, ACKTR, DDPG, TD3, and SAC in depth, demystifying the underlying math and demonstrating implementations through simple code examples.
The book has several new chapters dedicated to new RL techniques, including distributional RL, imitation learning, inverse RL, and meta RL. You will learn to leverage Stable Baselines, an improved implementation of OpenAI's Baselines library, to effortlessly implement popular RL algorithms. The book concludes with an overview of promising research directions such as meta-learning and imagination-augmented agents.
By the end, you will become skilled in effectively employing RL and deep RL in your real-world projects.
Who is this book for?
If you're a machine learning developer with little or no experience with neural networks who is interested in artificial intelligence and wants to learn reinforcement learning from scratch, this book is for you.
Basic familiarity with linear algebra, calculus, and the Python programming language is required. Some experience with TensorFlow would be a plus.
What you will learn
Understand core RL concepts including the methodologies, math, and code
Train an agent to solve Blackjack, FrozenLake, and many other problems using OpenAI Gym (see the interaction-loop sketch after this list)
Train an agent to play Ms Pac-Man using a Deep Q Network
Learn policy-based, value-based, and actor-critic methods
Master the math behind DDPG, TD3, TRPO, PPO, and many others
Explore new avenues such as distributional RL, meta RL, and inverse RL
Use Stable Baselines to train an agent to walk and play Atari games
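To give a flavour of the agent-environment workflow referenced in the list above, here is a minimal, purely illustrative sketch of the classic OpenAI Gym interaction loop with a random policy on FrozenLake. It is not an excerpt from the book; the environment ID FrozenLake-v0 and the pre-0.26 Gym API (where step() returns observation, reward, done, info) are assumptions based on the Gym versions contemporary with this edition.

```python
# Minimal sketch (not from the book): a random agent on FrozenLake-v0,
# assuming the classic (pre-0.26) Gym API.
import gym

env = gym.make('FrozenLake-v0')

for episode in range(5):
    state = env.reset()                 # start a new episode
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()            # random policy as a placeholder
        state, reward, done, info = env.step(action)  # apply the action, observe the outcome
        total_reward += reward
    print(f'Episode {episode}: return = {total_reward}')
```

In the book, the random policy above is replaced by learned policies (for example, a DQN for Ms Pac-Man or a Stable Baselines agent for Atari games), but the surrounding reset/step loop stays the same.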
This is the best book I have read so far in RL. Please get the second edition and not the first edition. This second edition is completely rewritten and includes so many advanced topics as well. I have read the popular first edition as well. I can say this second edition is completely different from the first edition. So please get this second edition rather than the first edition book. I just wanna thank the author for crafting this masterpiece of a book. I have no idea what I would have done without this book. It helped me big time at work and I can now proudly say that this book made me a pro in RL to deep RL. So to mention again, go for this second edition. My humble thanks to the author again. This book must be a revolution in the RL field.
Amazon Verified review
Mahesh, Apr 17, 2021
5
Wonderful read for a beginner like me; complex maths and concepts are clearly explained with examples. Must buy for anyone interested in jumping into Reinforcement Learning. Thanks a lot Sudharsan Ravichandiran!!!
Amazon Verified review
Amazon Customer, Jan 22, 2021
5
I own and have read pretty much all of the DRL books that were published in the past 3 years, and I can say with certainty that this book is by far the best on the subject. Amazing clarity of explanation combined with vast scope. Thank you so very much Sudharsan!
Amazon Verified review
Dhruv, Nov 16, 2020
5
Best Deep Reinforcement Learning book available in the market. It covers everything from scratch. Must buy for serious learners.
Amazon Verified review
Ganesh, Nov 06, 2023
5
I give full marks for the ease and elegance with which the topic is dealt with. I struggled so much learning from the other popular ones; however, nothing registered in my mind. This book makes it really easy. Highly recommended.
Sudharsan Ravichandiran is a data scientist and artificial intelligence enthusiast. He holds a Bachelor's in Information Technology from Anna University. His area of research focuses on practical implementations of deep learning and reinforcement learning, including natural language processing and computer vision. He is an open-source contributor and loves answering questions on Stack Overflow.
A subscription provides you with full access to view all Packt and licensed content online, including exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use towards owning content.
How can I cancel my subscription?
To cancel your subscription with us, simply go to the account page - found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there you will see the 'cancel subscription' button in the grey box containing your subscription information.
What are credits?
Credits can be earned by reading 40 sections of any title within the payment cycle - a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy DRM-free books, the same way that you would pay for a book. Your credits can be found on the subscription homepage - subscription.packtpub.com - by clicking on the 'My Library' dropdown and selecting 'Credits'.
What happens if an Early Access Course is cancelled?
Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.
Where can I send feedback about an Early Access title?
If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team.
Can I download the code files for Early Access titles?
We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.
When we publish the book, the code files will also be available to download from the Packt website.
How accurate is the publication date?
The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the delivery date will become more accurate.
How will I know when new chapters are ready?
We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.
I am a Packt subscriber, do I get Early Access?
Yes, all Early Access content is fully available through your subscription. You will need a paid subscription or an active trial in order to access all titles.
How is Early Access delivered?
Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.
How do I buy Early Access content?
Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.
What is Early Access?
Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready. We created Early Access as a means of giving you the information you need, as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.