In this chapter, we looked at the concept of temporal difference (TD) learning. We also learned about our first two RL algorithms: Q-learning and SARSA. We saw how to code these two algorithms in Python and use them to solve the cliff walking and grid world problems. Together, they give us a good understanding of the basics of RL and of how to transition from theory to code. Both algorithms were very popular in the 1990s and early 2000s, before deep RL gained prominence, and they still find use in the RL community today.
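As a quick recap, the minimal sketch below shows the heart of both algorithms: the one-step TD updates to a tabular Q-function. The grid size, learning rate, discount factor, and exploration rate used here are illustrative assumptions, not values taken from this chapter's examples.

```python
import numpy as np

# Illustrative settings (assumed): a 4x12 grid with 4 actions,
# plus typical values for alpha, gamma, and epsilon.
n_states, n_actions = 48, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1

def epsilon_greedy(state):
    """Explore with probability epsilon, otherwise act greedily on Q."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def q_learning_step(state, action, reward, next_state):
    """Off-policy TD update: bootstrap from the best action in the next state."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

def sarsa_step(state, action, reward, next_state, next_action):
    """On-policy TD update: bootstrap from the action actually taken next."""
    td_target = reward + gamma * Q[next_state, next_action]
    Q[state, action] += alpha * (td_target - Q[state, action])
```

The only difference between the two updates is the bootstrap target: Q-learning uses the maximum over next actions, while SARSA uses the action the current policy actually selects.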
In the next chapter, we will look at the use of deep neural networks in RL, which gives rise to deep RL. We will see a variant of Q-learning called Deep Q-Networks (DQNs) that replaces the tabular state-action value function we saw in this chapter with a neural network. Note that only problems with a small number of states and actions are tractable with tabular methods; approximating the value function with a neural network lets us scale to much larger state and action spaces.