Q-learning is another TD algorithm, with some very useful features that distinguish it from SARSA. From TD learning, Q-learning inherits both one-step learning (the ability to learn at each step) and the ability to learn from experience without a model of the environment.
The most distinctive feature of Q-learning compared to SARSA is that it is an off-policy algorithm. As a reminder, off-policy means that the update can be made independently of whichever policy gathered the experience. As a consequence, off-policy algorithms can reuse old experiences to improve the policy. To distinguish between the policy that interacts with the environment and the one that is actually being improved, we call the former the behavior policy and the latter the target policy.
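To make the off-policy property concrete, here is a minimal sketch of a tabular Q-learning step in Python; the function names, the epsilon-greedy behavior policy, and the hyperparameter values are illustrative assumptions, not code from this chapter. The key point is that the TD target bootstraps from the greedy maximum over next actions, no matter how the behavior policy chose the action:

```python
import numpy as np

rng = np.random.default_rng(0)

def eps_greedy(Q, s, n_actions, eps=0.1):
    """Behavior policy: explore with probability eps, otherwise act greedily."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step on the transition (s, a, r, s_next).

    The target r + gamma * max_a' Q(s_next, a') follows the greedy target
    policy, independently of the behavior policy that selected `a`.
    """
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
```

Because the target is computed with a max over next actions, rather than with the action the behavior policy actually takes next (as in SARSA), the transition can come from any policy, including old experiences gathered earlier.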
Here, we'll explain the more primitive version of Q-learning.