SARSA: on-policy TD control
State-action-reward-state-action (SARSA) is an on-policy TD control method. As in generalized policy iteration (GPI), the policy is improved iteratively; the difference is that a TD method is used for the evaluation (prediction) step. In the first step, the algorithm learns an action-value function rather than a state-value function: for an on-policy method we estimate qπ(s, a) for the current behavior policy π and for all states s and actions a, using essentially the same TD method used for learning vπ.
Now, we consider transitions from state-action pair to state-action pair, and learn the values of state-action pairs:

Q(St, At) ← Q(St, At) + α [Rt+1 + γ Q(St+1, At+1) − Q(St, At)]
This update is done after every transition from a non-terminal state St. If St+1 is terminal, then Q(St+1, At+1) is defined as zero. This rule uses every element of the quintuple of events (St, At, Rt+1, St+1, At+1) that makes up a transition from one state-action pair to the next. This quintuple gives rise to the name SARSA for the algorithm.
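As a concrete illustration, the following is a minimal tabular SARSA sketch in Python under some assumptions not stated above: an episodic environment with a Gym-style reset()/step() interface, integer state and action indices, and hypothetical parameter names (num_states, num_actions, alpha, gamma, epsilon, num_episodes). It is meant as a sketch of the update rule and the on-policy control loop, not a definitive implementation.

import numpy as np

def epsilon_greedy(Q, state, num_actions, epsilon, rng):
    # Behavior policy: random action with probability epsilon, else greedy w.r.t. Q.
    if rng.random() < epsilon:
        return int(rng.integers(num_actions))
    return int(np.argmax(Q[state]))

def sarsa(env, num_states, num_actions,
          alpha=0.1, gamma=0.99, epsilon=0.1, num_episodes=500, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((num_states, num_actions))  # action-value estimates Q(s, a)
    for _ in range(num_episodes):
        state = env.reset()                  # assumed to return an integer state index
        action = epsilon_greedy(Q, state, num_actions, epsilon, rng)
        done = False
        while not done:
            # assumed step() signature: returns (next_state, reward, done)
            next_state, reward, done = env.step(action)
            # A_{t+1} is chosen from the same behavior policy: this is what makes SARSA on-policy.
            next_action = epsilon_greedy(Q, next_state, num_actions, epsilon, rng)
            # Q(S_{t+1}, A_{t+1}) is taken as zero when S_{t+1} is terminal.
            target = reward + (0.0 if done else gamma * Q[next_state, next_action])
            # SARSA update: Q(St, At) <- Q(St, At) + alpha * [Rt+1 + gamma*Q(St+1, At+1) - Q(St, At)]
            Q[state, action] += alpha * (target - Q[state, action])
            state, action = next_state, next_action
    return Q

Because the next action is drawn from the same ε-greedy policy that is being evaluated and improved, the learned Q reflects the behavior policy itself, in contrast to off-policy methods such as Q-learning, which bootstrap from the greedy action instead.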
As in all on-policy methods, we continually estimate qπ for the behavior policy...