Contextual Bandits
In the classical bandit problem, the reward from pulling an arm depends solely on the reward distribution associated with that arm, and our goal is to identify the optimal arm as quickly as possible and keep pulling it until the end of the process. A contextual bandit problem, on the other hand, adds a new element: the environment, or the context. As in reinforcement learning, the environment contains all of the information about the problem setting and the state of the world at any given time, as well as any other agents that may be participating in the same environment as our player.
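To make the classical setting concrete, here is a minimal sketch of a multi-armed bandit with an epsilon-greedy player. The arm probabilities, the epsilon value, and the horizon are all hypothetical choices for illustration; the key point is that the reward function takes only the arm as input.

```python
import random

random.seed(0)

ARM_PROBS = [0.2, 0.5, 0.8]  # hypothetical Bernoulli success rates; arm 2 is optimal

def pull(arm):
    """In the classical bandit, reward depends solely on the chosen arm."""
    return 1 if random.random() < ARM_PROBS[arm] else 0

# Epsilon-greedy: explore occasionally, otherwise exploit the best estimate.
counts = [0] * len(ARM_PROBS)
values = [0.0] * len(ARM_PROBS)
epsilon = 0.1

for t in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(len(ARM_PROBS))                      # explore
    else:
        arm = max(range(len(ARM_PROBS)), key=lambda a: values[a])   # exploit
    reward = pull(arm)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update

best = max(range(len(ARM_PROBS)), key=lambda a: values[a])
print("estimated best arm:", best)
```

Once the player's estimates converge, it settles on the single optimal arm and keeps pulling it, which is exactly the behavior the contextual setting generalizes away from.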
Context That Defines a Bandit Problem
In the traditional MAB problem, we only care about the potential reward each arm will return when we pull it at any given time. In contextual bandits, we are also provided with contextual information about the environment we are operating in, and depending on the setting, the reward distribution of each arm may change with that context.
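The following sketch illustrates this dependence with hypothetical numbers: the same two arms have different reward distributions under two made-up contexts, so the optimal arm is no longer fixed but context-dependent.

```python
import random

random.seed(1)

# Hypothetical success probability of each arm under each context:
REWARD_PROBS = {
    "weekday": [0.8, 0.3],  # arm 0 is best on weekdays
    "weekend": [0.2, 0.9],  # arm 1 is best on weekends
}

def pull(context, arm):
    """In a contextual bandit, reward depends on the (context, arm) pair."""
    return 1 if random.random() < REWARD_PROBS[context][arm] else 0

# Empirically estimate each arm's mean reward in each context:
empirical = {}
for context in REWARD_PROBS:
    means = [sum(pull(context, a) for _ in range(1000)) / 1000 for a in range(2)]
    empirical[context] = means
    print(context, "empirical arm means:", means)
```

Any policy that commits to a single arm here is suboptimal in one of the two contexts, which is why contextual bandit algorithms must learn a mapping from context to arm rather than a single best arm.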