Deep Reinforcement Learning with Python

You're reading from Deep Reinforcement Learning with Python: Master classic RL, deep RL, distributional RL, inverse RL, and more with OpenAI Gym and TensorFlow

Product type: Paperback
Published: September 2020
Publisher: Packt
ISBN-13: 9781839210686
Length: 760 pages
Edition: 2nd
Author: Sudharsan Ravichandiran
Table of Contents (22)

Preface
1. Fundamentals of Reinforcement Learning
2. A Guide to the Gym Toolkit (free chapter)
3. The Bellman Equation and Dynamic Programming
4. Monte Carlo Methods
5. Understanding Temporal Difference Learning
6. Case Study – The MAB Problem
7. Deep Learning Foundations
8. A Primer on TensorFlow
9. Deep Q Network and Its Variants
10. Policy Gradient Method
11. Actor-Critic Methods – A2C and A3C
12. Learning DDPG, TD3, and SAC
13. TRPO, PPO, and ACKTR Methods
14. Distributional Reinforcement Learning
15. Imitation Learning and Inverse RL
16. Deep Reinforcement Learning with Stable Baselines
17. Reinforcement Learning Frontiers
Other Books You May Enjoy
Index
Appendix 1 – Reinforcement Learning Algorithms
Appendix 2 – Assessments

Is the MC method applicable to all tasks?

We learned that Monte Carlo is a model-free method: it doesn't require the model dynamics of the environment (the transition and reward probabilities) to compute the value function and Q function and find the optimal policy. The Monte Carlo method computes the value function and Q function by simply taking the average return of a state and the average return of a state-action pair, respectively.
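
To make this concrete, here is a minimal sketch of every-visit Monte Carlo prediction in plain Python. The generate_episode() helper is a hypothetical stand-in for rolling out one episode under the policy being evaluated; it is not the book's code.

    from collections import defaultdict

    def generate_episode():
        # Hypothetical helper: roll out one episode under the policy being
        # evaluated and return a list of (state, reward) pairs, where the
        # reward is the one received on leaving that state. Replace this
        # with rollouts from your own environment.
        return [("s0", 0.0), ("s1", 0.0), ("s2", 1.0)]

    def mc_prediction(num_episodes=1000, gamma=1.0):
        returns_sum = defaultdict(float)   # total return observed per state
        returns_count = defaultdict(int)   # number of returns per state
        for _ in range(num_episodes):
            episode = generate_episode()
            G = 0.0
            # Sweep the episode backward, accumulating the return G,
            # and record G for every visit to each state.
            for state, reward in reversed(episode):
                G = reward + gamma * G
                returns_sum[state] += G
                returns_count[state] += 1
        # The value estimate is simply the average of the observed returns.
        return {s: returns_sum[s] / returns_count[s] for s in returns_sum}

    print(mc_prediction())

Averaging returns over (state, action) pairs instead of states in the same way gives the Q function.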

But one issue with the Monte Carlo method is that it is applicable only to episodic tasks. In the Monte Carlo method, we compute the value of a state by averaging the returns of that state, and the return is the sum of the rewards of the episode. But when there is no episode, that is, when our task is a continuous (non-episodic) task, an episode never terminates and a complete return is never available, so we cannot apply the Monte Carlo method.
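
To see why, consider the standard definition of the return (shown here in conventional RL notation, not reproduced from the book's equations):

    G_t = R_{t+1} + R_{t+2} + \dots + R_T          (episodic task: the sum ends at the terminal step T)

    G_t = R_{t+1} + R_{t+2} + R_{t+3} + \dots      (continuous task: no terminal step, the sum never ends)

Since the second sum never terminates, we never finish observing a single sample return, and there is nothing to average.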

Okay, so how do we compute the value of a state when we have a continuous task and we also don't know the model dynamics of the environment? Here is where...
