Summary
In this chapter, we covered several advanced topics that are active areas of research. Distributed reinforcement learning is key to scaling RL experiments efficiently. Curiosity-driven RL makes it possible to solve hard-exploration problems through effective exploration strategies. Finally, offline RL has the potential to transform how RL is used for real-world problems by leveraging the data logs already available for many processes.
With this chapter, we conclude the algorithmic and theoretical part of our book. The remaining chapters are more applied, starting with robotics applications in the next chapter.