Value, state, and optimality
You may remember our definition of the value of the state in Chapter 1, What is Reinforcement Learning?. This is a very important notion and the time has come to explore it further. This whole part of the book is built around the value and how to approximate it. We defined value as the expected total reward that is obtainable from the state. Formally, the value of the state is $V(s) = \mathbb{E}\left[\sum_{t=0}^{\infty} r_t \gamma^t\right]$, where $r_t$ is the local reward obtained at step $t$ of the episode.
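To make the formula concrete, here is a minimal Python sketch (the function name and sample rewards are my own, not from the book) that computes the discounted return $\sum_t r_t \gamma^t$ for one recorded sequence of local rewards:

```python
def discounted_return(rewards, gamma=0.99):
    """Total discounted reward for a sequence of local rewards r_0, r_1, ..."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Example: rewards 1, 2, 3 with gamma=0.9 -> 1 + 0.9*2 + 0.81*3 = 5.23
print(discounted_return([1.0, 2.0, 3.0], gamma=0.9))
```

Setting `gamma=1.0` gives the undiscounted total reward, which, as noted next, is a choice left to us.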
The total reward could be discounted or not; it's up to us how to define it. Value is always calculated with respect to some policy that our agent follows. To illustrate, let's consider a very simple environment with three states:
- The agent's initial state.
- The final state that the agent ends up in after executing the action "left" from the initial state. The reward obtained from this is 1.
- The final state that the agent ends up in after executing the action "down" from the initial state. The reward obtained from this is 2 (the value of the initial state under different policies is sketched below).
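Because every episode in this environment ends after a single action, the value of the initial state is just the expected immediate reward under the chosen policy. Here is a minimal sketch of that computation (the `REWARDS` dictionary and `state_value` function are my own illustration, not from the book):

```python
# Rewards for each action available in the initial state, from the list above.
REWARDS = {"left": 1.0, "down": 2.0}

def state_value(policy):
    """Expected reward of the initial state under a stochastic policy,
    given as a mapping action -> probability."""
    return sum(prob * REWARDS[action] for action, prob in policy.items())

print(state_value({"left": 1.0, "down": 0.0}))  # always "left" -> 1.0
print(state_value({"left": 0.0, "down": 1.0}))  # always "down" -> 2.0
print(state_value({"left": 0.5, "down": 0.5}))  # 50/50 policy  -> 1.5
```

The three printed values show how the same state has a different value under different policies, which is exactly why value is always defined with respect to a policy.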