Hands-On Deep Learning for Games
Exercises

As always, try to complete at least two or three of these exercises on your own, for your own benefit. While this is a hands-on book, it always helps to spend a little extra time applying your knowledge to new problems.

Complete the following exercises on your own:

  1. Go through and explore the VisualPushBlock example. This example is quite similar to the Hallway example and is a good analog to play with.
  2. Modify the Hallway example's HallwayAgent script to use more scanning angles, and thus more vector observations (see the sketch after this list for one possible starting point).
  3. Modify the Hallway example to use a combined sensor sweep and visual observation input. This will require you to modify the learning brain configuration by adding a camera, and possibly updating some hyperparameters.
  4. Modify other visual observation environments to use some form of vector observation. A good example to try this on is the VisualPushBlock...
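
One possible starting point for exercise 2 is sketched below. This is not code from the book: it assumes the ML-Agents 0.x API in use at the time of writing (RayPerception.Perceive and AddVectorObs), reuses the useVectorObs and rayPer fields already defined in the example's HallwayAgent script, and the ray angles and tag names are illustrative. Check HallwayAgent.cs in your own project for the exact fields and detectable tags.

    // Sketch: a denser ray sweep for HallwayAgent.CollectObservations.
    // Assumes the existing fields useVectorObs (bool) and rayPer (RayPerception).
    public override void CollectObservations()
    {
        if (useVectorObs)
        {
            var rayDistance = 12f;
            // More angles than the stock example, for a finer scan of the hallway.
            float[] rayAngles = { 0f, 20f, 40f, 60f, 80f, 90f, 100f, 120f, 140f, 160f, 180f };
            // Illustrative tags; use whatever detectable tags your Hallway scene defines.
            string[] detectableObjects = { "orangeGoal", "redGoal", "orangeBlock", "redBlock", "wall" };
            // Each angle contributes (detectableObjects.Length + 2) floats to the
            // observation vector, so the observation size grows with every angle added.
            AddVectorObs(rayPer.Perceive(rayDistance, rayAngles, detectableObjects, 0f, 0f));
        }
    }

If you widen the sweep this way, remember to raise the learning brain's Vector Observation Space Size in the Inspector to match the new vector length, and retrain from scratch, since a previously trained model will no longer match the observation shape.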