Value, state, and optimality
You may remember the definition of the value of a state from Chapter 1. This is a very important notion, and the time has come to explore it further.
This whole part of the book is built around the value of a state and how to approximate it. We defined this value as the expected total reward (optionally discounted) that is obtainable from the state. Formally, the value of a state is given by

V(s) = 𝔼[ ∑_t γ^t r_t ],

where r_t is the local reward obtained at step t of the episode.
The total reward can be discounted with 0 < γ < 1 or left undiscounted (when γ = 1); it is up to us how to define it. The value is always calculated with respect to some policy that our agent follows. To illustrate this, let's consider a very simple environment with three states, as shown in Figure 5.1:
Figure 5.1: An example of an environment’s state transition with rewards ...
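To make the formula above concrete, here is a minimal sketch of computing the (optionally discounted) return of a single episode, i.e. ∑_t γ^t r_t. The reward sequence in the example is a made-up illustration, not the environment from Figure 5.1.

```python
from typing import List


def episode_return(rewards: List[float], gamma: float = 1.0) -> float:
    """Return sum over t of gamma^t * r_t for one episode's rewards."""
    total = 0.0
    for t, r in enumerate(rewards):
        total += (gamma ** t) * r
    return total


if __name__ == "__main__":
    # Hypothetical local rewards r_0, r_1, r_2 (not Figure 5.1's environment)
    episode_rewards = [1.0, 2.0, 3.0]
    print(episode_return(episode_rewards))              # undiscounted: 6.0
    print(episode_return(episode_rewards, gamma=0.9))   # 1 + 0.9*2 + 0.81*3 ≈ 5.23
```

Under a given policy, the value of a state can then be estimated by averaging such returns over many episodes that start from that state.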