In this chapter, we took a break from model-free algorithms and started discussing and exploring algorithms that learn from a model of the environment. We looked at the key reasons behind this change of paradigm and what inspired the development of this kind of algorithm. We then distinguished the two main cases that arise when dealing with a model: the first, in which the model is already known, and the second, in which the model has to be learned.
Moreover, we learned how a model can be used either to plan the next actions or to learn a policy. There's no fixed rule for choosing one over the other, but generally, the choice depends on the complexity of the action and observation spaces and on the inference speed required. We then investigated the advantages and disadvantages of model-based algorithms and deepened our understanding of how to learn a policy with model-based algorithms by combining...
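To make the distinction between the two uses of a model concrete, here is a minimal sketch, not taken from the chapter, on a toy one-dimensional problem. The model_step function, its linear dynamics, the random-shooting planner, and the linear policy search are all hypothetical placeholders standing in for a known or learned model, an online planner, and a policy-learning procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_step(state, action):
    """Hypothetical model of the environment: returns next state and reward."""
    next_state = state + 0.1 * action      # toy linear dynamics
    reward = -abs(next_state)              # reward for staying near 0
    return next_state, reward

# Option 1: use the model to plan the next action (random shooting).
def plan_action(state, horizon=5, n_candidates=64):
    best_action, best_return = 0.0, -np.inf
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=horizon)
        s, total = state, 0.0
        for a in actions:                   # simulate the candidate sequence in the model
            s, r = model_step(s, a)
            total += r
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action                      # execute only the first action, then replan

# Option 2: use the model to generate imagined rollouts and learn a policy
# (here, a trivial linear policy a = clip(k * s), selected by grid search).
def policy_return(k, start_state=0.8, horizon=20):
    s, total = start_state, 0.0
    for _ in range(horizon):
        s, r = model_step(s, np.clip(k * s, -1.0, 1.0))
        total += r
    return total

best_k = max(np.linspace(-5.0, 5.0, 51), key=policy_return)

print("planned first action from s=0.8:", plan_action(0.8))
print("best linear policy gain k:", best_k)
```

The sketch also hints at the trade-off mentioned above: the planning route repeats its optimization online at every step, so it pays an inference cost each time an action is needed, while the policy route amortizes that cost into training and is cheap at inference time.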