Reinforcement Learning with TensorFlow

You're reading from Reinforcement Learning with TensorFlow: A beginner's guide to designing self-learning systems with TensorFlow and OpenAI Gym

Product type: Paperback
Published: April 2018
Publisher: Packt
ISBN-13: 9781788835725
Length: 334 pages
Edition: 1st
Author: Sayon Dutta
Table of Contents (17 chapters)

Preface
1. Deep Learning – Architectures and Frameworks
2. Training Reinforcement Learning Agents Using OpenAI Gym
3. Markov Decision Process
4. Policy Gradients
5. Q-Learning and Deep Q-Networks
6. Asynchronous Methods
7. Robo Everything – Real Strategy Gaming
8. AlphaGo – Reinforcement Learning at Its Best
9. Reinforcement Learning in Autonomous Driving
10. Financial Portfolio Management
11. Reinforcement Learning in Robotics
12. Deep Reinforcement Learning in Ad Tech
13. Reinforcement Learning in Image Processing
14. Deep Reinforcement Learning in NLP
15. Further Topics in Reinforcement Learning
16. Other Books You May Enjoy

Asynchronous one-step SARSA


The architecture of asynchronous one-step SARSA is almost identical to that of asynchronous one-step Q-learning, differing only in how the target network calculates the target state-action value of the current state. Instead of taking the maximum Q-value of the next state s' from the target network, SARSA uses an ε-greedy policy to choose the action a' for the next state s', and the Q-value of that next state-action pair, Q(s', a'; θ⁻), is used to calculate the target state-action value of the current state.
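
To make the distinction concrete, the short sketch below contrasts the two target computations. This is an illustrative sketch, not the book's code: q_next (the target network's Q-values for s'), the reward r, and the discount factor gamma are placeholder values.

import numpy as np

def epsilon_greedy(q_values, epsilon=0.1):
    # With probability epsilon pick a random action, otherwise the greedy one
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_values))
    return int(np.argmax(q_values))

# Hypothetical target-network Q-values for the next state s'
q_next = np.array([0.2, 0.7, 0.5])
r, gamma = 1.0, 0.99          # placeholder reward and discount factor

# Asynchronous one-step Q-learning target: r + gamma * max over a' of Q(s', a'; theta-)
target_q_learning = r + gamma * np.max(q_next)

# Asynchronous one-step SARSA target: r + gamma * Q(s', a'; theta-),
# where a' is chosen epsilon-greedily instead of by the max
a_prime = epsilon_greedy(q_next)
target_sarsa = r + gamma * q_next[a_prime]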

The pseudo-code for asynchronous one-step SARSA is shown below. Here, the following are the global parameters:

  • θ: the parameters (weights and biases) of the policy network
  • θ⁻: the parameters (weights and biases) of the target network
  • T: overall time step counter

// Globally shared parameters θ, θ⁻ and T
// θ is initialized arbitrarily
// T is initialized to 0

pseudo-code for each learner running in parallel in each of the threads:

Initialize thread-level time step counter t = 0
...
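
As a rough illustration of what each learner thread does, here is a minimal runnable Python sketch of asynchronous one-step SARSA. It is a sketch under stated assumptions, not the book's implementation: the ToyEnv environment, the tabular Q-tables standing in for the policy and target networks, and every hyperparameter value (GAMMA, ALPHA, EPSILON, TARGET_SYNC, T_MAX) are hypothetical stand-ins.

import numpy as np
import threading

class ToyEnv:
    """Hypothetical discrete environment so the example runs end to end."""
    def __init__(self, n_states=5, n_actions=2):
        self.n_states, self.n_actions = n_states, n_actions
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # Move forward by action + 1 states; reaching the last state ends the episode
        self.state = min(self.state + action + 1, self.n_states - 1)
        done = self.state == self.n_states - 1
        reward = 1.0 if done else 0.0
        return self.state, reward, done

# Globally shared parameters: theta (policy network), theta_minus (target
# network) and the overall time step counter T
theta = np.zeros((5, 2))        # theta is initialized arbitrarily (here: zeros)
theta_minus = theta.copy()
T = 0                           # T is initialized to 0
T_lock = threading.Lock()
GAMMA, ALPHA, EPSILON = 0.99, 0.1, 0.1   # placeholder hyperparameters
TARGET_SYNC, T_MAX = 100, 5000

def epsilon_greedy(q_row, epsilon):
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_row))
    return int(np.argmax(q_row))

def learner():
    global T, theta_minus
    env = ToyEnv()
    s = env.reset()
    a = epsilon_greedy(theta[s], EPSILON)
    t = 0                       # thread-level time step counter
    while T < T_MAX:
        s_next, r, done = env.step(a)
        if done:
            target = r          # no bootstrap from a terminal state
            a_next = None
        else:
            # SARSA: choose a' epsilon-greedily, bootstrap with Q(s', a'; theta_minus)
            a_next = epsilon_greedy(theta[s_next], EPSILON)
            target = r + GAMMA * theta_minus[s_next, a_next]
        # Move Q(s, a; theta) toward the target (a tabular stand-in for the
        # gradient step on the policy network)
        theta[s, a] += ALPHA * (target - theta[s, a])
        t += 1
        with T_lock:
            T += 1
            if T % TARGET_SYNC == 0:
                theta_minus = theta.copy()   # periodically sync the target network
        if done:
            s = env.reset()
            a = epsilon_greedy(theta[s], EPSILON)
        else:
            s, a = s_next, a_next

threads = [threading.Thread(target=learner) for _ in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()

Each thread interacts with its own environment copy while sharing θ, θ⁻ and the counter T, which is the core design choice of the asynchronous methods covered in this chapter.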