The Reinforcement Learning Workshop


Product type: Book
Published: Aug 2020
Publisher: Packt
ISBN-13: 9781800200456
Pages: 822
Edition: 1st
Authors (9): Alessandro Palmas, Emanuele Ghelfi, Dr. Alexandra Galina Petre, Mayur Kulkarni, Anand N.S., Quan Nguyen, Aritra Sen, Anthony So, Saikat Basak
Table of Contents (14 chapters)

Preface
1. Introduction to Reinforcement Learning
2. Markov Decision Processes and Bellman Equations
3. Deep Learning in Practice with TensorFlow 2
4. Getting Started with OpenAI and TensorFlow for Reinforcement Learning
5. Dynamic Programming
6. Monte Carlo Methods
7. Temporal Difference Learning
8. The Multi-Armed Bandit Problem
9. What Is Deep Q-Learning?
10. Playing an Atari Game with Deep Recurrent Q-Networks
11. Policy-Based Methods for Reinforcement Learning
12. Evolutionary Strategies for RL
Appendix

Building a DRQN

A DQN can benefit greatly from an RNN model that processes sequences of images. This architecture is known as a Deep Recurrent Q-Network (DRQN). Combining a GRU or LSTM with a CNN allows the reinforcement learning agent to track the movement of the ball. To build one, we simply add an LSTM (or GRU) layer between the convolutional and fully connected layers, as shown in the following figure:

Figure 10.9: DRQN architecture
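The architecture in the figure can be sketched in TensorFlow 2, the framework used throughout this book. This is a minimal illustration, not the book's exact model: the layer sizes, filter counts, and the `build_drqn` helper name are assumptions chosen for clarity. The key idea is wrapping the convolutional layers in `TimeDistributed` so the same feature extractor runs on every frame, then inserting an LSTM between the convolutional and fully connected layers:

```python
import tensorflow as tf

def build_drqn(seq_len=4, frame_shape=(84, 84, 1), n_actions=4):
    """Sketch of a DRQN: per-frame CNN -> LSTM -> dense Q-value head."""
    inputs = tf.keras.Input(shape=(seq_len, *frame_shape))
    # Apply the same convolutional feature extractor to every frame in the sequence
    x = tf.keras.layers.TimeDistributed(
        tf.keras.layers.Conv2D(32, 8, strides=4, activation="relu"))(inputs)
    x = tf.keras.layers.TimeDistributed(
        tf.keras.layers.Conv2D(64, 4, strides=2, activation="relu"))(x)
    x = tf.keras.layers.TimeDistributed(tf.keras.layers.Flatten())(x)
    # The recurrent layer sits between the convolutional and fully connected parts
    x = tf.keras.layers.LSTM(256)(x)
    # One Q-value per action (no activation, as usual for Q-learning heads)
    q_values = tf.keras.layers.Dense(n_actions)(x)
    return tf.keras.Model(inputs, q_values)
```

Swapping `tf.keras.layers.LSTM` for `tf.keras.layers.GRU` gives the GRU variant mentioned above.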

To feed the RNN model a sequence of images, we need to stack several frames together. For the Breakout game, after initializing the environment, we take the first frame and duplicate it several times to form the initial sequence. Then, after each action, we append the latest frame and discard the oldest one so that the sequence keeps a fixed length (for instance, a maximum of four frames).
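The stacking logic described above can be sketched with a small helper. `FrameStack` is a hypothetical name introduced here for illustration; it relies on the fact that appending to a full `collections.deque` automatically drops the oldest element:

```python
from collections import deque
import numpy as np

class FrameStack:
    """Keep the most recent `maxlen` frames as the RNN's input sequence."""

    def __init__(self, maxlen=4):
        self.frames = deque(maxlen=maxlen)

    def reset(self, first_frame):
        # Duplicate the first frame to fill the initial sequence
        for _ in range(self.frames.maxlen):
            self.frames.append(first_frame)
        return np.stack(self.frames)

    def step(self, new_frame):
        # Appending to a full deque discards the oldest frame automatically
        self.frames.append(new_frame)
        return np.stack(self.frames)
```

After each environment step, `step()` returns an array of shape `(maxlen, height, width, channels)`, ready to be batched and fed to the DRQN.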

...