TensorFlow Reinforcement Learning Quick Start Guide

Defining the Bellman equation

The Bellman equation, named after the great computer scientist and applied mathematician Richard E. Bellman, is an optimality condition associated with dynamic programming. It is widely used in RL to update the policy of an agent.

Let's define the following two quantities:

$$P^{a}_{s,s'} = \Pr\left(s_{t+1} = s' \mid s_t = s,\, a_t = a\right)$$

$$R^{a}_{s,s'} = \mathbb{E}\left[r_{t+1} \mid s_t = s,\, a_t = a,\, s_{t+1} = s'\right]$$

The first quantity, $P^{a}_{s,s'}$, is the probability of transitioning from state s to the new state s' under action a. The second quantity, $R^{a}_{s,s'}$, is the expected reward the agent receives when it starts in state s, takes action a, and moves to the new state s'. Note that we have assumed the MDP property, that is, the transition to the state at time t+1 depends only on the state and action at time t. Stated in these terms, the Bellman equation is a recursive relationship, given by the following equations for the value function and the action-value function respectively, where $\pi$ is the policy and $0 \leq \gamma \leq 1$ is the discount factor:

$$V^{\pi}(s) = \sum_{a} \pi(a \mid s) \sum_{s'} P^{a}_{s,s'} \left[ R^{a}_{s,s'} + \gamma V^{\pi}(s') \right]$$

$$Q^{\pi}(s,a) = \sum_{s'} P^{a}_{s,s'} \left[ R^{a}_{s,s'} + \gamma \sum_{a'} \pi(a' \mid s') Q^{\pi}(s',a') \right]$$

Note that the Bellman equations express the value function V at a state as a function of the value function at other states; similarly, the action-value function Q at a state-action pair is expressed in terms of Q at other state-action pairs.
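
To make the recursion concrete, here is a minimal sketch, not taken from the book, of iterative policy evaluation on a hypothetical two-state MDP. Because the policy is held fixed, the transition probabilities and rewards below are already averaged over actions, so P and R are indexed only by s and s'; the names P, R, gamma, and V are purely illustrative.

import numpy as np

gamma = 0.9  # discount factor

# P[s, s'] : probability of moving from state s to state s' under the fixed policy
P = np.array([[0.8, 0.2],
              [0.1, 0.9]])

# R[s, s'] : expected reward received when moving from state s to state s'
R = np.array([[ 5.0, 10.0],
              [-1.0,  2.0]])

V = np.zeros(2)  # initial value estimates for the two states

# Apply the Bellman equation as an update rule until V stops changing:
#   V(s) <- sum over s' of P[s, s'] * (R[s, s'] + gamma * V(s'))
for _ in range(1000):
    V_new = np.sum(P * (R + gamma * V), axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

print(V)  # converged state values under the fixed policy

Since the policy is fixed, the same values could also be obtained by solving the linear system V = R_pi + gamma * P * V directly; the iterative form is shown here because it mirrors the recursive structure of the Bellman equation.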
