Reinforcement Learning Algorithms with Python

Learn, understand, and develop smart algorithms for addressing AI challenges

Author: Andrea Lonza
Product type: Paperback
Published: Oct 2019
Publisher: Packt
ISBN-13: 9781789131116
Length: 366 pages
Edition: 1st Edition
Table of Contents

Preface
Section 1: Algorithms and Environments
  The Landscape of Reinforcement Learning
  Implementing RL Cycle and OpenAI Gym
  Solving Problems with Dynamic Programming
Section 2: Model-Free RL Algorithms
  Q-Learning and SARSA Applications
  Deep Q-Network
  Learning Stochastic and PG Optimization
  TRPO and PPO Implementation
  DDPG and TD3 Applications
Section 3: Beyond Model-Free Algorithms and Improvements
  Model-Based RL
  Imitation Learning with the DAgger Algorithm
  Understanding Black-Box Optimization Algorithms
  Developing the ESBAS Algorithm
  Practical Implementation for Resolving RL Challenges
Assessments
Other Books You May Enjoy

Applying scalable ES to LunarLander

How well will the scalable version of evolution strategies perform in the LunarLander environment? Let's find out!

As you may recall, we already applied A2C and REINFORCE to LunarLander in Chapter 6, Learning Stochastic and PG Optimization. This task consists of landing a lander on the moon using continuous actions. We chose this environment for its medium difficulty, and so that we can compare the ES results with those obtained with A2C.
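
As a quick orientation, the snippet below inspects the environment's observation and action spaces. The specific environment ID, LunarLanderContinuous-v2, is an assumption made here to match the continuous actions mentioned above; the discrete LunarLander-v2 variant can be inspected in the same way.

import gym

# Inspect the LunarLander task used in this section.
# Assumption: the continuous-action variant, matching the description above.
env = gym.make('LunarLanderContinuous-v2')
print(env.observation_space)  # Box(8,): position, velocities, angle, and leg-contact flags
print(env.action_space)       # Box(2,): main and lateral engine throttles, each in [-1, 1]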

The hyperparameters that performed the best in this environment are as follows:

Hyperparameter                          Variable name       Value
Neural network size                     hidden_sizes        [32, 32]
Training iterations (or generations)    number_iter         200
Number of workers                       num_workers         4
Adam learning rate                      lr                  0.02
Individuals per worker                  indiv_per_worker    12
Noise standard deviation                std_noise           0.05
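
To make the role of each hyperparameter concrete, here is a minimal, single-process sketch of an ES generation using the values above: each generation perturbs the current parameters with Gaussian noise of standard deviation std_noise, evaluates the return of every perturbed individual, and moves the parameters in the direction of the noise weighted by the rank-normalized returns. This is an illustration of the technique only, not the chapter's implementation: the real code distributes the indiv_per_worker individuals across num_workers parallel workers and updates the parameters with Adam, whereas this sketch evaluates everything in one process and uses plain gradient ascent.

import numpy as np
import gym

env = gym.make('LunarLanderContinuous-v2')    # assumption: continuous-action variant
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]

hidden_sizes = [32, 32]       # neural network size
number_iter = 200             # training iterations (generations)
num_workers = 4               # parallel workers in the real implementation
lr = 0.02                     # Adam learning rate (plain SGD is used below)
indiv_per_worker = 12         # individuals evaluated by each worker
std_noise = 0.05              # standard deviation of the parameter noise
population = indiv_per_worker * num_workers   # perturbations per generation

# Flat parameter vector for a small tanh policy network.
sizes = [obs_dim] + hidden_sizes + [act_dim]
n_params = sum((i + 1) * o for i, o in zip(sizes[:-1], sizes[1:]))
theta = np.zeros(n_params)

def policy(params, obs):
    # Unpack the flat vector into layer weights and run a forward pass.
    x, idx = obs, 0
    for i, o in zip(sizes[:-1], sizes[1:]):
        W = params[idx:idx + i * o].reshape(i, o); idx += i * o
        b = params[idx:idx + o]; idx += o
        x = np.tanh(x @ W + b)
    return x                  # actions already lie in [-1, 1]

def episode_return(params):
    obs, done, ret = env.reset(), False, 0.0
    while not done:
        obs, reward, done, _ = env.step(policy(params, obs))
        ret += reward
    return ret

for generation in range(number_iter):
    noise = np.random.randn(population, n_params)
    returns = np.array([episode_return(theta + std_noise * eps) for eps in noise])
    ranks = returns.argsort().argsort() / (population - 1) - 0.5   # centered rank normalization
    grad = (ranks[:, None] * noise).mean(axis=0) / std_noise       # ES gradient estimate
    theta += lr * grad                                             # the chapter uses Adam here
    print(generation, returns.mean())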

The results are shown in the...
