Hands-On Reinforcement Learning for Games: Implementing self-learning agents in games using artificial intelligence techniques

Product type: Paperback
Published: Jan 2020
Publisher: Packt
ISBN-13: 9781839214936
Length: 432 pages
Edition: 1st Edition
Author: Micheal Lanham

Table of Contents

Preface
Section 1: Exploring the Environment
1. Understanding Rewards-Based Learning (FREE CHAPTER)
2. Dynamic Programming and the Bellman Equation
3. Monte Carlo Methods
4. Temporal Difference Learning
5. Exploring SARSA
Section 2: Exploiting the Knowledge
6. Going Deep with DQN
7. Going Deeper with DDQN
8. Policy Gradient Methods
9. Optimizing for Continuous Control
10. All about Rainbow DQN
11. Exploiting ML-Agents
12. DRL Frameworks
Section 3: Reward Yourself
13. 3D Worlds
14. From DRL to AGI
Other Books You May Enjoy

What this book covers

Chapter 1, Understanding Rewards-Based Learning, explores the basics of learning, what it is to learn, and how RL differs from other, more classic learning methods. From there, we explore how the Markov decision process works in code and how it relates to learning. This leads us to the classic multi-armed and contextual bandit problems. Finally, we will learn about Q-learning and quality-based model learning.
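As a quick taste of the quality-based learning idea covered here, the following is a minimal epsilon-greedy multi-armed bandit sketch in Python. It is illustrative only; the payout probabilities and variable names are invented for this example and are not code from the book.

```python
import random

# Hypothetical arm payout probabilities, purely for illustration.
TRUE_PROBS = [0.2, 0.5, 0.75]
EPSILON = 0.1                        # exploration rate
estimates = [0.0] * len(TRUE_PROBS)  # action-value (quality) estimates
counts = [0] * len(TRUE_PROBS)

for step in range(10_000):
    # Explore with probability EPSILON, otherwise exploit the best estimate.
    if random.random() < EPSILON:
        arm = random.randrange(len(TRUE_PROBS))
    else:
        arm = max(range(len(TRUE_PROBS)), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < TRUE_PROBS[arm] else 0.0
    counts[arm] += 1
    # Incremental mean update of the estimate for the pulled arm.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # each estimate should approach its arm's true payout probability
```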

Chapter 2, Dynamic Programming and the Bellman Equation, digs deeper into dynamic programming and explores how the Bellman equation can be intertwined with RL. Here, you will learn how the Bellman equation is used to update a policy. We then go into further detail about policy iteration and value iteration methods, using our understanding of Q-learning, by training an agent on a new grid-style environment.
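To make the Bellman backup concrete, here is a minimal value iteration sketch on a made-up five-state chain. The environment, constants, and names are assumptions for illustration; they are not the book's grid environment.

```python
import numpy as np

# A tiny, made-up 1D chain MDP: 5 states, move left/right, reward 1 for
# reaching (or staying at) the rightmost state. Continuing task, gamma < 1.
N_STATES, GAMMA, THETA = 5, 0.9, 1e-6
V = np.zeros(N_STATES)

def step(s, a):
    """Deterministic transition: a=0 moves left, a=1 moves right."""
    s2 = min(N_STATES - 1, s + 1) if a == 1 else max(0, s - 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r

while True:
    delta = 0.0
    for s in range(N_STATES):
        # Bellman optimality backup: V(s) <- max_a [ r + gamma * V(s') ]
        best = max(r + GAMMA * V[s2] for s2, r in (step(s, a) for a in (0, 1)))
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < THETA:      # stop once the values barely change
        break

print(V)
```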

Chapter 3, Monte Carlo Methods, explores model-free Monte Carlo methods and how they can be used to train agents on more classic board games.
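As an illustration of the core Monte Carlo idea, averaging sampled returns, here is a first-visit Monte Carlo sketch on a toy random walk. The environment and names are invented for this example.

```python
import random
from collections import defaultdict

# Toy random walk: states 0..4, start at 2, terminate at 0 or 4, reward 1 at 4.
GAMMA = 1.0
returns = defaultdict(list)
V = {}

def generate_episode():
    s, episode = 2, []
    while s not in (0, 4):
        s2 = s + random.choice((-1, 1))
        episode.append((s, 1.0 if s2 == 4 else 0.0))  # (state, reward received)
        s = s2
    return episode

for _ in range(5_000):
    episode = generate_episode()
    seen = set()
    for i, (s, _) in enumerate(episode):
        if s in seen:                 # first-visit: only the first occurrence counts
            continue
        seen.add(s)
        # Discounted return from step i to the end of the episode.
        G = sum(r * GAMMA ** (j - i) for j, (_, r) in enumerate(episode[i:], start=i))
        returns[s].append(G)
        V[s] = sum(returns[s]) / len(returns[s])

print(V)  # V[1] ~ 0.25, V[2] ~ 0.5, V[3] ~ 0.75 for this symmetric walk
```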

Chapter 4, Temporal Difference Learning, explores the heart of RL and how it solves the temporal credit assignment problem often discussed in academia. We apply temporal difference learning (TDL) to Q-learning and use it to solve a grid world environment (such as FrozenLake).
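The heart of TD learning is the bootstrapped update. The sketch below applies the tabular Q-learning (TD) update to a small, self-contained 4x4 grid in the spirit of FrozenLake; it is deterministic, has no gym dependency, and its layout and hyperparameters are illustrative rather than the book's code.

```python
import numpy as np

# A self-contained, deterministic 4x4 grid in the spirit of FrozenLake.
HOLES, GOAL, SIZE = {5, 7, 11, 12}, 15, 4
ALPHA, GAMMA, EPS, EPISODES = 0.1, 0.99, 0.1, 10_000
Q = np.zeros((SIZE * SIZE, 4))                 # 4 actions: left, down, right, up
MOVES = [(-1, 0), (0, 1), (1, 0), (0, -1)]     # (column delta, row delta)

def step(s, a):
    c, r = s % SIZE, s // SIZE
    dc, dr = MOVES[a]
    c, r = min(SIZE - 1, max(0, c + dc)), min(SIZE - 1, max(0, r + dr))
    s2 = r * SIZE + c
    done = s2 in HOLES or s2 == GOAL
    return s2, (1.0 if s2 == GOAL else 0.0), done

rng = np.random.default_rng(0)
for _ in range(EPISODES):
    s, done = 0, False
    while not done:
        # Act randomly while exploring or while this row is still uninformed.
        if rng.random() < EPS or not Q[s].any():
            a = int(rng.integers(4))
        else:
            a = int(Q[s].argmax())
        s2, reward, done = step(s, a)
        # TD(0) / Q-learning update: move Q(s,a) toward the bootstrapped target.
        target = reward + (0.0 if done else GAMMA * Q[s2].max())
        Q[s, a] += ALPHA * (target - Q[s, a])
        s = s2

print(Q.argmax(axis=1).reshape(SIZE, SIZE))    # greedy action index per cell
```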

Chapter 5, Exploring SARSA, goes deeper into the fundamentals of on-policy methods such as SARSA. We will explore policy-based learning through understanding the partially observable Markov decision process. Then, we'll look at how we can implement SARSA with Q-learning. This will set the stage for the more advanced policy methods, PPO and TRPO, that we will explore in later chapters.
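The key difference between SARSA and Q-learning is the bootstrap target, sketched below. The function, variable names, and defaults are illustrative, not the book's code.

```python
import numpy as np

# Contrast of the two tabular backups:
#   Q-learning (off-policy): Q[s,a] += alpha * (r + gamma * max_a' Q[s',a'] - Q[s,a])
#   SARSA      (on-policy):  Q[s,a] += alpha * (r + gamma * Q[s',a']        - Q[s,a])
# where SARSA's a' is the action the behaviour policy actually takes next.

def sarsa_update(Q, s, a, r, s2, a2, alpha=0.1, gamma=0.99, done=False):
    """One on-policy SARSA backup using the next action a2 chosen by the policy."""
    target = r + (0.0 if done else gamma * Q[s2, a2])
    Q[s, a] += alpha * (target - Q[s, a])

# Tiny usage example with a 2-state, 2-action table.
Q = np.zeros((2, 2))
sarsa_update(Q, s=0, a=1, r=1.0, s2=1, a2=0)
print(Q)
```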

Chapter 6, Going Deep with DQN, takes the Q-learning model and integrates it with deep learning to create advanced agents known as deep Q-networks (DQNs). From this, we explain how basic deep learning models work for regression or, in this case, to solve the Q equation. We will use DQNs in the CartPole environment.
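As a sketch of this regression view, the following shows a small Q-network and loss in PyTorch for CartPole's 4-dimensional state and 2 actions. The framework choice, layer sizes, and names are assumptions for illustration, not the book's implementation.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a CartPole observation to one Q-value per action."""
    def __init__(self, obs_dim=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """Regress Q(s,a) onto the bootstrapped target r + gamma * max_a' Q_target(s',a')."""
    obs, actions, rewards, next_obs, dones = batch   # actions: LongTensor of indices
    q_sa = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_obs).max(dim=1).values
        target = rewards + gamma * q_next * (1.0 - dones)
    return nn.functional.mse_loss(q_sa, target)
```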

Chapter 7, Going Deeper with DDQN, looks at how extensions to deep learning (DL) called convolutional neural networks (CNNs) can be used to observe a visual state. We will then use that knowledge to play Atari games and look at further enhancements.
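The sketch below shows a small CNN mapping stacked Atari frames to Q-values, roughly following the classic DQN architecture, plus the double-DQN target that selects the next action with the online network but evaluates it with the target network. It is illustrative PyTorch, not the book's implementation.

```python
import torch
import torch.nn as nn

class AtariQNetwork(nn.Module):
    """Q-network over 4 stacked 84x84 frames, in the style of the classic DQN CNN."""
    def __init__(self, n_actions, in_channels=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
                                  nn.Linear(512, n_actions))

    def forward(self, frames):          # frames: (batch, 4, 84, 84), scaled to [0, 1]
        return self.head(self.conv(frames))

def double_dqn_target(q_net, target_net, rewards, next_obs, dones, gamma=0.99):
    """Double DQN: select the next action online, evaluate it with the target net."""
    with torch.no_grad():
        best_actions = q_net(next_obs).argmax(dim=1, keepdim=True)
        q_next = target_net(next_obs).gather(1, best_actions).squeeze(1)
    return rewards + gamma * q_next * (1.0 - dones)
```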

Chapter 8, Policy Gradient Methods, delves into more advanced policy methods and how they integrate into deep RL agents. This is an advanced chapter as it covers higher-level calculus and probability concepts. You will get to experience the MuJoCo animation RL environment in this chapter as a reward for your hard work.
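As a minimal taste of the policy-gradient idea, here is a REINFORCE-style loss sketch in PyTorch; the network shape and names are illustrative, and the chapter builds up to more sophisticated variants.

```python
import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    """Maps an observation to a categorical distribution over discrete actions."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                 nn.Linear(64, n_actions))

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))

def reinforce_loss(policy, obs, actions, returns):
    """Maximize E[log pi(a|s) * G] by minimizing its negative; returns are the
    discounted returns collected from sampled episodes."""
    dist = policy(obs)
    log_probs = dist.log_prob(actions)
    # Normalizing returns is a common variance-reduction trick.
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    return -(log_probs * returns).mean()
```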

Chapter 9, Optimizing for Continuous Control, looks at improving the policy methods covered previously for continuous control in advanced environments. We start off by setting up and installing the MuJoCo environment. After that, we look at how recurrent networks can capture context and see how they are applied on top of PPO. Then we get back into the actor-critic method, this time looking at asynchronous actor-critic in a couple of different configurations, before finally progressing to actor-critic with experience replay.
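For reference, the clipped surrogate objective at the heart of PPO can be sketched as follows; the clipping epsilon and the example numbers are the common defaults and made-up values, not necessarily the book's settings.

```python
import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Clipped surrogate objective from PPO, negated for gradient descent."""
    ratio = torch.exp(new_log_probs - old_log_probs)             # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Take the pessimistic (elementwise minimum) objective before averaging.
    return -torch.min(unclipped, clipped).mean()

# Tiny usage example with made-up numbers.
loss = ppo_clip_loss(torch.tensor([-0.9, -1.2]), torch.tensor([-1.0, -1.0]),
                     torch.tensor([0.5, -0.3]))
print(loss)
```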

Chapter 10, All about Rainbow DQN, tells us all about Rainbow. Google DeepMind explored combining a number of RL enhancements in an algorithm called Rainbow. Rainbow is another advanced algorithm that you can explore and either borrow from or use to work with more advanced RL environments.

Chapter 11, Exploiting ML-Agents, looks at how we can either use elements from the ML-Agents toolkit in our own agents or use the toolkit to get a fully developed agent.

Chapter 12, DRL Frameworks, opens up the possibilities of playing with solo agents in various environments. We will explore various multi-agent environments as well.

Chapter 13, 3D Worlds, shows us how to use RL agents effectively to tackle a variety of challenges in 3D environments.

Chapter 14, From DRL to AGI, looks beyond DRL and into the realm of AGI, or at least where we hope we are going with AGI. We will also look at various DRL algorithms that can be applied in the real world.
