Value, state, and optimality
You may remember our definition of the value of a state from Chapter 1, What Is Reinforcement Learning?. This is a very important notion, and the time has come to explore it further.
This whole part of the book is built around the value and how to approximate it. We defined the value as an expected total reward (optionally discounted) that is obtainable from the state. In a formal way, the value of a state is $V(s) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t}\right]$, where $r_t$ is the local reward obtained at step $t$ of the episode.
The total reward could be discounted with $\gamma \in (0, 1)$ or not (the undiscounted case corresponds to $\gamma = 1$); it's up to us how to define it. The value is always calculated in terms of some policy that our agent follows; a short code sketch a little later illustrates the return calculation. To illustrate this, let's consider a very simple environment with three states:
- The agent's initial state.
- The final state that the agent ends up in after executing the action "right" from the initial state. The reward obtained for this transition is 1. ...
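To make the definition above concrete, here is a minimal Python sketch of the (optionally discounted) return and a simple Monte Carlo-style average over sampled episodes. The function name `discounted_return` and the toy reward sequences are illustrative assumptions, not code from this chapter's example.

```python
from typing import List


def discounted_return(rewards: List[float], gamma: float = 1.0) -> float:
    """Sum of gamma^t * r_t over one episode (gamma=1.0 is the undiscounted case)."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))


# The value of a state is the expectation of this return over episodes
# generated by the agent's policy. With a few sampled episodes, we can
# approximate that expectation by a plain average (hypothetical rewards here).
episodes = [
    [0.0, 1.0],
    [0.0, 0.0, 1.0],
]
value_estimate = sum(discounted_return(ep, gamma=0.9) for ep in episodes) / len(episodes)
print("Approximate value of the start state:", value_estimate)
```

Note that the estimate depends on which policy generated the episodes: a different policy produces different reward sequences and, therefore, a different value for the very same state.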