In this chapter, we were introduced to the basic concepts of RL. We explored the relationship between an agent and its environment, and learned about the MDP setting. We covered reward functions and the use of discounted rewards, as well as the idea of value and advantage functions. In addition, we saw the Bellman equation and how it is used in RL. We also learned the difference between on-policy and off-policy RL algorithms, and examined the distinction between model-free and model-based RL algorithms. All of this lays the groundwork for us to delve deeper into RL algorithms and how we can use them to train agents for a given task.
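Of the concepts recapped above, the discounted reward is the one most easily illustrated in a few lines of code. The following is a minimal sketch (the function name and sample rewards are illustrative, not from this chapter) showing how a discounted return is accumulated backwards over a sequence of rewards, which is the recursive form the Bellman equation builds on:

```python
def discounted_return(rewards, gamma=0.99):
    """Compute G_0 = r_0 + gamma*r_1 + gamma^2*r_2 + ...

    Working backwards uses the recursion G_t = r_t + gamma * G_{t+1},
    the same structure that appears in the Bellman equation.
    """
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Three steps of reward 1.0 with gamma = 0.9:
# 1.0 + 0.9 + 0.81 = 2.71
print(discounted_return([1.0, 1.0, 1.0], gamma=0.9))
```

Note how a gamma below 1 makes rewards received sooner count for more, which is exactly why the discount factor controls how far-sighted the agent is.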
In the next chapter, we will investigate our first two RL algorithms: Q-learning and SARSA. Note that in Chapter 2, Temporal Difference, SARSA, and Q-Learning, we will code the agents in plain Python, since their learning is tabular. From Chapter 3, Deep Q-Network, onward, we will use TensorFlow to code deep RL agents, as they require neural networks.