Hands-On Deep Learning for Games

You're reading from Hands-On Deep Learning for Games: Leverage the power of neural networks and reinforcement learning to build intelligent games

Product type: Paperback
Published: March 2019
Publisher: Packt
ISBN-13: 9781788994071
Length: 392 pages
Edition: 1st
Author: Micheal Lanham
Table of Contents (18 chapters)

Preface
1. Section 1: The Basics
2. Deep Learning for Games
3. Convolutional and Recurrent Networks
4. GAN for Games
5. Building a Deep Learning Gaming Chatbot
6. Section 2: Deep Reinforcement Learning
7. Introducing DRL
8. Unity ML-Agents
9. Agent and the Environment
10. Understanding PPO
11. Rewards and Reinforcement Learning
12. Imitation and Transfer Learning
13. Building Multi-Agent Environments
14. Section 3: Building Games
15. Debugging/Testing a Game with DRL
16. Obstacle Tower Challenge and Beyond
17. Other Books You May Enjoy

Understanding PPO

We have avoided going too deep into the more advanced inner workings of the proximal policy optimization (PPO) algorithm, even going so far as to avoid any policy-versus-model discussion. If you recall, PPO is the reinforcement learning (RL) method, first developed at OpenAI, that powers ML-Agents, and it is a policy-based algorithm. In this chapter, we will look at the differences between policy- and model-based RL algorithms, as well as the more advanced inner workings of the Unity implementation.

The following is a list of the main topics we will cover in this chapter:

  • Marathon reinforcement learning
  • The partially observable Markov decision process
  • Actor-Critic and continuous action spaces
  • Understanding TRPO and PPO
  • Tuning PPO with hyperparameters

The content in this chapter is at an advanced level, and assumes that you have covered several previous chapters and exercises...
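Before diving into those topics, it can help to keep the core idea behind PPO in mind: a clipped surrogate objective that keeps each policy update close (proximal) to the policy that collected the data. The following is a minimal sketch of that objective, not code from this book or from the ML-Agents toolkit; the use of PyTorch, the function name, and the tensor arguments are illustrative assumptions.

import torch

def ppo_clipped_objective(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective at the core of PPO (illustrative sketch).

    The probability ratio r = pi_new(a|s) / pi_old(a|s) is clipped to
    [1 - clip_eps, 1 + clip_eps], so a single gradient step cannot move
    the new policy too far from the policy that gathered the samples.
    """
    ratio = torch.exp(log_probs_new - log_probs_old)               # r_t(theta)
    unclipped = ratio * advantages                                 # r_t * A_t
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the elementwise minimum of the two terms; return the
    # negative mean so it can be minimized with a standard optimizer.
    return -torch.min(unclipped, clipped).mean()

In practice, PPO implementations combine this term with a value-function loss and an entropy bonus, and expose the clipping range and related settings as the hyperparameters we will tune later in this chapter.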
