Chapter 3: Contextual Bandits
A more advanced version of the multi-armed bandit is the contextual bandit (CB) problem, where decisions are tailored to the context in which they are made. In the previous chapter, we identified the best-performing ad in an online advertising scenario. In doing so, we did not use any information about, for instance, the user's persona, age, gender, location, or previous visits, which would have increased the likelihood of a click. Contextual bandits allow us to leverage such information, which gives them a central role in commercial personalization and recommendation applications.
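To make the idea concrete, here is a minimal sketch of a contextual bandit for ad selection. The contexts, ad names, and click-through rates are entirely hypothetical, and the exploration strategy (epsilon-greedy with a separate action-value estimate per context) is just one simple choice, not the only or best one:

```python
import random

random.seed(0)

# Hypothetical user contexts and candidate ads (illustrative only).
CONTEXTS = ["young_mobile", "adult_desktop"]
ADS = ["ad_a", "ad_b"]

# Hypothetical true click-through rates per (context, ad) pair.
# Note that the best ad differs by context.
TRUE_CTR = {
    ("young_mobile", "ad_a"): 0.10,
    ("young_mobile", "ad_b"): 0.30,
    ("adult_desktop", "ad_a"): 0.25,
    ("adult_desktop", "ad_b"): 0.05,
}

counts = {(c, a): 0 for c in CONTEXTS for a in ADS}    # pulls per pair
values = {(c, a): 0.0 for c in CONTEXTS for a in ADS}  # running mean reward

def choose_ad(context, epsilon=0.1):
    """Explore with probability epsilon; otherwise exploit the
    ad with the highest estimated value for this context."""
    if random.random() < epsilon:
        return random.choice(ADS)
    return max(ADS, key=lambda a: values[(context, a)])

def update(context, ad, reward):
    """Incremental mean update for the chosen (context, ad) pair."""
    key = (context, ad)
    counts[key] += 1
    values[key] += (reward - values[key]) / counts[key]

for _ in range(5000):
    context = random.choice(CONTEXTS)  # the context arrives with the user
    ad = choose_ad(context)
    # Simulated click: Bernoulli reward with the pair's true CTR.
    reward = 1 if random.random() < TRUE_CTR[(context, ad)] else 0
    update(context, ad, reward)

best = {c: max(ADS, key=lambda a: values[(c, a)]) for c in CONTEXTS}
print(best)
```

After enough rounds, the agent learns a different best ad for each context, which is exactly what a context-free multi-armed bandit cannot do: it would be forced to pick one ad for everyone.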
Context is similar to a state in a multi-step reinforcement learning (RL) problem, with one key difference. In a multi-step RL problem, the action an agent takes affects the states it is likely to visit in the subsequent steps. For example, while playing tic-tac-toe, an agent's action in the current state changes the board configuration (state) in a particular way, which...