In this recipe, we are going to develop another advanced type of DQN, the Dueling DQN (DDQN). In particular, we will see how the computation of the Q value is split into two parts in DDQNs.
In DDQNs, the Q value is computed as the sum of the following two functions:

Q(s, a) = V(s) + A(s, a)
Here, V(s) is the state-value function, which calculates the value of being at state s; A(s, a) is the state-dependent action advantage function, which estimates how much better it is to take action a rather than other actions at state s. By decoupling the value and advantage functions, we accommodate the fact that the agent may not need to evaluate both the value and the advantage at the same time during the learning process. In other words, an agent using DDQNs can efficiently optimize either or both functions as needed.
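The decomposition above can be sketched numerically. The helper below, `dueling_q`, is a hypothetical illustration (not part of the recipe's code): it combines a scalar state value V(s) with a vector of action advantages A(s, ·). Note that it also subtracts the mean advantage, a common identifiability fix in dueling architectures, since V and A are otherwise only determined up to a constant shift; the plain sum Q = V + A would work the same way without that line.

```python
import numpy as np

def dueling_q(V, A):
    """Combine state value V and advantages A into Q values.

    Subtracting the mean advantage makes the split identifiable:
    the advantages then average to zero, so V equals the mean Q value.
    """
    A = np.asarray(A, dtype=float)
    return V + (A - A.mean())

# Hypothetical values for one state with three possible actions
V = 2.0
A = [1.0, 3.0, -1.0]

Q = dueling_q(V, A)
print(Q)           # → [2. 4. 0.]
print(Q.mean())    # → 2.0, i.e., V(s)
```

With the mean-subtracted advantages, the best action (here, the second one) still has the highest Q value, while V(s) is recovered as the average Q value over actions.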