Categorical DQN
The last, and the most complicated, method in our DQN improvements toolbox comes from the paper A distributional perspective on reinforcement learning, published by DeepMind in June 2017 [BDM17]. Although this paper is a few years old now, it remains highly relevant, and research in this area is still active. In 2023, the same authors published the book Distributional reinforcement learning, which describes the method in much greater detail [BDR23].
In the paper, the authors questioned a fundamental piece of Q-learning — the Q-value — and tried to replace it with a more generic probability distribution over values. Let’s try to understand the idea. Both the Q-learning and value iteration methods work with the values of actions or states represented as single numbers, which show how much total reward we can expect to accumulate from a state, or from an action taken in a state. However, is it practical to squeeze all future possible rewards into one...
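To make the contrast concrete, here is a minimal sketch (not the paper's code) of a categorical value distribution over a fixed support of "atoms," in the spirit of the approach described here. The atom count and support bounds are illustrative assumptions; the sketch shows how collapsing a bimodal distribution into a single expected value can hide what actually happens:

```python
# Illustrative hyperparameters (assumed, not taken from the paper's setup):
# a value distribution defined over N_ATOMS evenly spaced points in
# [V_MIN, V_MAX].
N_ATOMS = 51
V_MIN, V_MAX = -10.0, 10.0
DELTA = (V_MAX - V_MIN) / (N_ATOMS - 1)

# The support: the fixed set of possible return values ("atoms").
support = [V_MIN + i * DELTA for i in range(N_ATOMS)]


def expectation(probs):
    """Collapse a distribution back into a single scalar Q-value."""
    return sum(p * z for p, z in zip(probs, support))


# A bimodal situation: half the time the episode yields a return of -10,
# the other half a return of +10.
probs = [0.0] * N_ATOMS
probs[0] = 0.5     # atom at -10.0
probs[-1] = 0.5    # atom at +10.0

q_value = expectation(probs)
print(q_value)  # the scalar mean is 0.0, a return the agent never sees
```

The scalar Q-value of 0 sits exactly between the two real outcomes and corresponds to a return the agent never actually receives, which is the kind of information loss the distributional view is meant to avoid.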