Case Study – The MAB Problem
In the previous chapters, we learned the fundamental concepts of reinforcement learning along with several interesting reinforcement learning algorithms. We covered a model-based method called dynamic programming and a model-free method called the Monte Carlo method, and then we learned about the temporal difference method, which combines the advantages of dynamic programming and the Monte Carlo method.
In this chapter, we will learn about one of the classic problems in reinforcement learning, called the multi-armed bandit (MAB) problem. We start the chapter by understanding the MAB problem, and then we will learn about several exploration strategies for solving it: epsilon-greedy, softmax exploration, upper confidence bound, and Thompson sampling. Following this, we will learn how MABs are useful in real-world use cases.
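To preview the flavor of these strategies, the sketch below simulates a simple epsilon-greedy agent on a three-armed bandit. The arm reward probabilities and the epsilon value here are illustrative assumptions, not values from the chapter; the full strategies are developed in the sections that follow.

```python
import random

random.seed(0)
true_probs = [0.3, 0.5, 0.8]   # hidden reward probability of each arm (assumed)
counts = [0, 0, 0]             # number of pulls per arm
values = [0.0, 0.0, 0.0]       # running mean reward estimate per arm
epsilon = 0.1                  # exploration rate (assumed)

for _ in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(3)        # explore: pick a random arm
    else:
        arm = values.index(max(values))  # exploit: pick the best arm so far
    reward = 1 if random.random() < true_probs[arm] else 0
    counts[arm] += 1
    # incremental update of the mean reward for this arm
    values[arm] += (reward - values[arm]) / counts[arm]

print("best arm:", values.index(max(values)))
print("estimated values:", values)
```

With enough pulls, the estimated value of the best arm converges toward its true reward probability, while the occasional random pulls keep the other arms' estimates from going stale.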
Moving forward, we will understand how to find the best advertisement banner...