TensorFlow Reinforcement Learning Quick Start Guide

Product type: Book
Published: Mar 2019
Publisher: Packt
ISBN-13: 9781789533583
Pages: 184
Edition: 1st
Author: Kaushik Balakrishnan
Table of Contents (11 chapters)

Preface
1. Up and Running with Reinforcement Learning
2. Temporal Difference, SARSA, and Q-Learning
3. Deep Q-Network
4. Double DQN, Dueling Architectures, and Rainbow
5. Deep Deterministic Policy Gradient
6. Asynchronous Methods - A3C and A2C
7. Trust Region Policy Optimization and Proximal Policy Optimization
8. Deep RL Applied to Autonomous Driving
9. Assessment
10. Other Books You May Enjoy

On-policy versus off-policy learning

RL algorithms can be classified as on-policy or off-policy. We will now learn about both of these classes and how to decide which one a given RL algorithm belongs to.

On-policy method

On-policy methods evaluate the same policy that was used to select actions. On-policy algorithms generally do not have a replay buffer; the experience encountered is used to train the model in situ. The same policy that moved the agent from its state at time t to its state at time t+1 is also used to evaluate whether that transition was good or bad. For example, if a robot exploring the world uses its current policy to ascertain whether the actions it took in the current state were good or bad, it is running an on-policy algorithm, because the policy that selects actions is also the one being evaluated. SARSA, A3C, TRPO, and PPO are on-policy algorithms that we will be covering in this book.
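To make this concrete, here is a minimal tabular SARSA sketch on a hypothetical five-state corridor environment (the environment, constants, and function names are illustrative, not taken from the book's code). Note how the next action `a2` is drawn from the same epsilon-greedy policy that drives the agent, and that same `a2` appears in the update target; that is what makes SARSA on-policy.

```python
import numpy as np

n_states, n_actions = 5, 2   # actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.9, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def step(s, a):
    """Move left/right; episode ends at either end, reward +1 at the right end."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    done = s2 in (0, n_states - 1)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, done

def eps_greedy(s):
    """Behavior policy: random action with probability epsilon, else greedy."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

for _ in range(500):
    s = 2                        # start in the middle of the corridor
    a = eps_greedy(s)            # action chosen by the current policy
    done = False
    while not done:
        s2, r, done = step(s, a)
        a2 = eps_greedy(s2)      # next action from the SAME policy...
        # ...and that same action appears in the update target (on-policy):
        Q[s, a] += alpha * (r + gamma * Q[s2, a2] * (not done) - Q[s, a])
        s, a = s2, a2
```

After training, the learned Q-table prefers moving right from the middle states, since the only reward sits at the right end of the corridor.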

Off-policy method

Off-policy methods, on the other hand, use one policy to make action decisions and a different one to evaluate performance. For instance, many off-policy algorithms use a replay buffer to store experiences and sample data from this buffer to train the model. At each training step, a mini-batch of experience data is sampled at random and used to update the policy and value functions. Returning to the previous robot example: in an off-policy setting, the robot does not use its current policy to evaluate its performance, but rather one policy for exploring and another for evaluation. If a mini-batch of experiences is sampled from a replay buffer to train the agent, the learning is off-policy, because the current policy of the robot (which selects its immediate actions) differs from the policies that generated the sampled transitions; the policy has changed between the earlier time instant when the data was collected and the current time instant. DQN, DDQN, and DDPG are off-policy algorithms that we'll look at in later chapters of this book.
