The value of the action
To make our life slightly easier, we can define another quantity in addition to the value of the state, V(s): the value of the action, Q(s, a). Basically, it equals the total reward we can get by executing action a in state s, and it can be defined via V(s). Although Q(s, a) is a less fundamental entity than V(s), it is more convenient in practice, and it gave its name to the whole family of methods called Q-learning. In these methods, our primary objective is to get values of Q for every pair of state and action:
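One standard way to write this definition, assuming the usual MDP notation with transition probabilities p(s' | s, a), immediate reward r(s, a), and discount factor gamma (a reconstruction based on the surrounding text, not copied from it):

\[ Q(s, a) = \mathbb{E}_{s'}\big[\, r(s, a) + \gamma V(s') \,\big] = \sum_{s' \in S} p(s' \mid s, a)\,\big( r(s, a) + \gamma V(s') \big) \]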
Q, for this state s and action a, equals the expected immediate reward plus the discounted long-term reward of the destination state. We can also define V(s) via Q(s, a):
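In the same notation, this relation can be written as:

\[ V(s) = \max_{a \in A} Q(s, a) \]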
This just means that the value of a state equals the value of the best action we can execute from this state.
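As a quick illustration, here is a minimal sketch in Python with a made-up, hypothetical tabular Q-function (the states and actions are assumptions for the example, not from the text), showing that the state value is just the maximum over the action values:

# Minimal sketch: derive V(s) from a hypothetical tabular Q(s, a)
# by taking the maximum over the actions available in each state.
Q = {
    ("s0", "left"): 1.0,
    ("s0", "right"): 2.5,
    ("s1", "left"): 0.0,
    ("s1", "right"): -1.0,
}

def state_value(q_table, state):
    # V(s) = max_a Q(s, a)
    return max(value for (s, _a), value in q_table.items() if s == state)

print(state_value(Q, "s0"))   # 2.5
print(state_value(Q, "s1"))   # 0.0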
Finally, we can express Q(s,a) recursively (which will be used in Chapter 6):
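One common way to write this recursion, under the same notational assumptions as above (the expectation collapses when transitions are deterministic):

\[ Q(s, a) = \mathbb{E}_{s'}\big[\, r(s, a) + \gamma \max_{a' \in A} Q(s', a') \,\big] \]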