In general, Q-learning can be applied to the same kinds of problems as SARSA, and because both belong to the same family (TD learning), they tend to perform similarly. Nevertheless, on specific problems one approach can be preferable to the other, so it's useful to know how Q-learning is implemented as well.
For this reason, here we'll implement Q-learning to solve Taxi-v2, the same environment that was used for SARSA. Be aware, though, that with just a few adaptations it can be applied to any other environment with the appropriate characteristics. Having results from both Q-learning and SARSA on the same environment gives us the opportunity to compare their performance.
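As a rough preview of where we are heading, the following is a minimal tabular Q-learning sketch for Taxi-v2 written against the classic gym API (where reset() returns a state and step() returns a four-tuple). The function names and hyperparameter values here are illustrative placeholders, not necessarily the ones used in the full implementation:

import numpy as np
import gym

def eps_greedy(Q, state, eps):
    # With probability eps take a random action, otherwise the greedy one
    if np.random.uniform(0, 1) < eps:
        return np.random.randint(Q.shape[1])
    return np.argmax(Q[state])

def Q_learning(env, lr=0.1, num_episodes=5000, eps=0.4, gamma=0.95, eps_decay=0.001):
    nS = env.observation_space.n
    nA = env.action_space.n
    Q = np.zeros((nS, nA))  # tabular action-value function

    for ep in range(num_episodes):
        state = env.reset()
        done = False
        if eps > 0.01:
            eps -= eps_decay  # linearly decay the exploration rate

        while not done:
            action = eps_greedy(Q, state, eps)
            next_state, reward, done, _ = env.step(action)
            # Q-learning update: unlike SARSA, bootstrap from the
            # greedy (max) action in the next state, not the one taken
            Q[state][action] += lr * (reward + gamma * np.max(Q[next_state]) - Q[state][action])
            state = next_state
    return Q

if __name__ == '__main__':
    env = gym.make('Taxi-v2')
    Q = Q_learning(env)

The only substantive difference from SARSA is the update line: Q-learning bootstraps from max over the next state's action values (off-policy), whereas SARSA bootstraps from the action actually selected by the behavior policy (on-policy).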
To be as consistent as possible, we kept some functions unchanged from the SARSA implementation. These are as follows:
...