Hands-On Deep Learning for Games

You're reading from Hands-On Deep Learning for Games: Leverage the power of neural networks and reinforcement learning to build intelligent games

Product type: Paperback
Published: March 2019
Publisher: Packt
ISBN-13: 9781788994071
Length: 392 pages
Edition: 1st Edition
Author: Micheal Lanham

Table of Contents (18 chapters)

Preface
1. Section 1: The Basics
2. Deep Learning for Games
3. Convolutional and Recurrent Networks
4. GAN for Games
5. Building a Deep Learning Gaming Chatbot
6. Section 2: Deep Reinforcement Learning
7. Introducing DRL
8. Unity ML-Agents
9. Agent and the Environment
10. Understanding PPO
11. Rewards and Reinforcement Learning
12. Imitation and Transfer Learning
13. Building Multi-Agent Environments
14. Section 3: Building Games
15. Debugging/Testing a Game with DRL
16. Obstacle Tower Challenge and Beyond
17. Other Books You May Enjoy

RL experiments

Reinforcement learning is advancing quickly, and the DQN model we just looked at has already been outpaced by more advanced algorithms. The many variations and refinements of RL algorithms could fill several chapters, but most of that material would be considered academic. Instead, we will look at more practical examples using the various RL models the Keras RL API provides.
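One well-known example of such a refinement is Double DQN, which changes only how the learning target is computed. A minimal sketch of the difference (the Q-values below are made-up toy numbers, purely for illustration):

```python
# Contrast the vanilla DQN target with the Double DQN refinement.
# Vanilla DQN both selects and evaluates the next action with the same
# network, which tends to overestimate values; Double DQN selects with
# the online network but evaluates with the target network.
GAMMA = 0.99
reward = 1.0

# Toy Q-values for the next state s' under the two networks
q_online = [0.2, 0.9, 0.5]   # online network: used to *select* the action
q_target = [0.3, 0.6, 0.8]   # target network: used to *evaluate* it

# Vanilla DQN target: r + gamma * max_a Q_target(s', a)
dqn_target = reward + GAMMA * max(q_target)

# Double DQN target: r + gamma * Q_target(s', argmax_a Q_online(s', a))
best = max(range(len(q_online)), key=q_online.__getitem__)
ddqn_target = reward + GAMMA * q_target[best]

print(dqn_target)   # 1 + 0.99 * 0.8 = 1.792
print(ddqn_target)  # 1 + 0.99 * 0.6 = 1.594
```

The Double DQN target is lower here because the action the online network prefers (index 1) is valued more modestly by the target network, which is exactly the overestimation bias the variant was designed to reduce.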

The first simple example we can work with is changing our previous example to work with a new gym environment. Open up Chapter_5_5.py and follow the next exercise:

  1. Change the environment name in the following code:

if __name__ == "__main__":
    env = gym.make('MountainCar-v0')

  2. In this case, we are going to use the MountainCar environment, as shown:

Example of the MountainCar environment

  3. Run the code as you normally would and see how the DQNAgent...
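If you want to see what the agent is up against, the classic mountain-car dynamics behind this environment are simple enough to sketch directly. The following is a pure-Python approximation of what gym.make('MountainCar-v0') simulates, using the standard constants for the task (force 0.001, gravity 0.0025, goal at position 0.5); the class name and the heuristic policy are illustrative, not from the book's code:

```python
import math
import random

# Minimal sketch of the classic MountainCar dynamics. An underpowered car
# must rock back and forth between two hills to build enough momentum to
# reach the flag at position 0.5.
class MountainCar:
    def __init__(self):
        self.reset()

    def reset(self):
        # Start near the valley bottom with zero velocity
        self.position = random.uniform(-0.6, -0.4)
        self.velocity = 0.0
        return (self.position, self.velocity)

    def step(self, action):  # action: 0 = push left, 1 = no push, 2 = push right
        self.velocity += (action - 1) * 0.001 - math.cos(3 * self.position) * 0.0025
        self.velocity = max(-0.07, min(0.07, self.velocity))
        self.position = max(-1.2, min(0.6, self.position + self.velocity))
        if self.position <= -1.2:  # hitting the left wall kills all momentum
            self.velocity = 0.0
        done = self.position >= 0.5
        return (self.position, self.velocity), -1.0, done

# A simple momentum-building heuristic: always push in the direction the
# car is already moving, rocking between the hills to gain energy.
env = MountainCar()
env.reset()
done, steps = False, 0
while not done and steps < 1000:
    action = 2 if env.velocity >= 0 else 0
    _, reward, done = env.step(action)
    steps += 1
print("Reached the goal in", steps, "steps" if done else "steps (goal not reached)")
```

Because full-throttle pushing alone cannot climb the right hill, any successful policy (learned or hand-written) has to exploit this rocking behavior, which is what makes MountainCar a useful test of exploration for the DQNAgent.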