Summary
In this chapter, we covered model-based methods. We started the chapter by describing how we humans use the world models in our brains to plan our actions. Then, we introduced several derivative-free search methods that can plan an agent's actions when a model of the environment is available, and we implemented parallelized versions of two of them: the cross-entropy method (CEM) and CMA-ES. As a natural follow-up, we then looked at how a world model can be learned and then used for planning or for developing policies. This section contained some important discussions about model uncertainty and how learned models can suffer from it. At the end of the chapter, we unified the model-free and model-based approaches under the Dyna framework. To recap the core planning idea, a minimal sketch follows.
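The following is a minimal, unparallelized sketch of CEM-based planning, assuming a `model(state, action)` function that returns `(next_state, reward)`; the function and parameter names here are illustrative, not the chapter's implementation:

```python
import numpy as np

def cem_plan(model, state, horizon=10, pop_size=100, elite_frac=0.1,
             n_iters=5, action_dim=1):
    """Plan an action sequence with the cross-entropy method.

    `model(state, action)` is a stand-in for a known or learned
    world model that returns (next_state, reward).
    """
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    n_elite = max(1, int(pop_size * elite_frac))

    for _ in range(n_iters):
        # Sample a population of candidate action sequences.
        samples = np.random.normal(mean, std,
                                   size=(pop_size, horizon, action_dim))
        returns = np.empty(pop_size)
        for i, seq in enumerate(samples):
            s, total = state, 0.0
            for a in seq:  # roll the sequence out through the model
                s, r = model(s, a)
                total += r
            returns[i] = total
        # Refit the sampling distribution to the elite sequences.
        elites = samples[np.argsort(returns)[-n_elite:]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6

    return mean[0]  # execute only the first action, MPC-style
```

In practice, the rollout loop is the part we parallelized in the chapter, since each candidate sequence can be evaluated independently.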
As we conclude our discussion on model-based RL, we proceed to the next chapter for yet another exciting topic: multi-agent RL. Take a break, and we will see you soon!