Dueling DQN
This improvement to DQN was proposed in 2015, in the paper Dueling Network Architectures for Deep Reinforcement Learning [Wan+16]. The core observation of this paper is that the Q-values, Q(s,a), that our network is trying to approximate can be divided into two quantities: the value of the state, V(s), and the advantage of actions in this state, A(s,a).
You have seen the quantity V(s) before, as it was the core of the value iteration method from Chapter 5. It is just equal to the discounted expected reward achievable from this state. The advantage A(s,a) is supposed to bridge the gap from V(s) to Q(s,a), as, by definition, Q(s,a) = V(s) + A(s,a). In other words, the advantage A(s,a) is just the delta, saying how much extra reward some particular action from the state brings us. The advantage can be positive or negative and, in general, can have any magnitude. For example, at some tipping point, the choice of one action over another can cost us...
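To make the decomposition concrete, here is a minimal sketch of a dueling head in PyTorch; the class name, layer sizes, and hidden dimension are illustrative assumptions rather than the book's actual implementation. A shared feature extractor is split into separate V(s) and A(s,a) streams, which are combined back into Q(s,a); following the paper's aggregation, the mean advantage is subtracted so the two streams are identifiable:

import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    """Illustrative dueling architecture: separate value and advantage
    streams on top of a shared feature extractor (sizes are placeholders)."""
    def __init__(self, obs_size: int, n_actions: int, hidden: int = 128):
        super().__init__()
        # shared feature extractor
        self.feature = nn.Sequential(
            nn.Linear(obs_size, hidden),
            nn.ReLU(),
        )
        # state-value stream: a single scalar V(s)
        self.value = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # advantage stream: one A(s, a) value per action
        self.advantage = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.feature(x)
        value = self.value(feats)          # shape: (batch, 1)
        advantage = self.advantage(feats)  # shape: (batch, n_actions)
        # Q(s, a) = V(s) + A(s, a); subtracting the mean advantage, as the
        # paper does, pins down how Q is split between the two streams
        return value + (advantage - advantage.mean(dim=1, keepdim=True))

Note that the single V(s) output is broadcast across all actions when it is added to the advantage stream, so every Q(s,a) for a given state shares the same state-value estimate.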