Another MC-based approach to solving an MDP is off-policy control, which we will discuss in this recipe.
The off-policy method optimizes the target policy, π, using data generated by another policy, called the behavior policy, b. The target policy always exploits, while the behavior policy is used for exploration. That is, the target policy is greedy with respect to its current Q-function, and the behavior policy generates behavior so that the target policy has data to learn from. The behavior policy can be anything as long as every action in every state can be chosen with a non-zero probability, which guarantees that the behavior policy can explore all possibilities.
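For example, a uniformly random behavior policy satisfies this requirement, since it assigns the same non-zero probability to every action in every state. The following sketch pairs such a behavior policy with a greedy target policy derived from the current Q-function; the function names `gen_random_policy` and `gen_greedy_policy`, and the use of PyTorch tensors, are illustrative assumptions rather than code from this recipe:

```python
import torch

def gen_random_policy(n_action):
    # Behavior policy b: every action gets the same non-zero probability,
    # so every state-action pair can eventually be explored
    probs = torch.ones(n_action) / n_action
    def policy_function(state):
        return probs
    return policy_function

def gen_greedy_policy(Q):
    # Target policy pi: always exploit by taking the action with the highest Q-value
    def policy_function(state):
        return torch.argmax(Q[state]).item()
    return policy_function
```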
Since we are dealing with two different policies in the off-policy method, we can only use the common steps in episodes, that is, the steps where the action taken under the behavior policy is the same as the action the greedy target policy would take. Working backward from the end of an episode, once the behavior policy's action differs from the greedy action, that step and all earlier steps are discarded, as shown in the sketch below.
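Here is a minimal sketch of how this plays out in off-policy MC control. It assumes a discrete Gym-style environment with hashable states and the classic `reset()`/`step()` interface, and it uses the weighted importance-sampling form of the update; the inner loop walks each episode backward and stops as soon as the behavior action disagrees with the greedy action, so only the common tail steps contribute:

```python
from collections import defaultdict
import torch

def mc_control_off_policy(env, gamma, n_episode, behavior_policy):
    n_action = env.action_space.n
    Q = defaultdict(lambda: torch.zeros(n_action))   # Q-function of the target policy
    C = defaultdict(float)                           # cumulative importance weights
    for _ in range(n_episode):
        # Generate an episode by following the behavior policy b
        state = env.reset()
        trajectory = []
        done = False
        while not done:
            probs = behavior_policy(state)
            action = torch.multinomial(probs, 1).item()
            next_state, reward, done, _ = env.step(action)
            trajectory.append((state, action, reward))
            state = next_state
        # Walk the episode backward; stop once the behavior action
        # no longer matches the greedy (target) action
        G, W = 0.0, 1.0
        for state, action, reward in reversed(trajectory):
            G = gamma * G + reward
            C[(state, action)] += W
            Q[state][action] += (W / C[(state, action)]) * (G - Q[state][action])
            if action != torch.argmax(Q[state]).item():
                break
            W *= 1.0 / behavior_policy(state)[action].item()
    # The learned target policy is greedy with respect to Q
    policy = {state: torch.argmax(q).item() for state, q in Q.items()}
    return Q, policy
```

With the random behavior policy from the earlier sketch, this could be called as, for instance, `mc_control_off_policy(env, 1.0, 500000, gen_random_policy(env.action_space.n))`.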