In this bonus recipe, we will develop the double Q-learning algorithm.
Q-learning is a powerful and popular TD control reinforcement learning algorithm. However, it may perform poorly in some cases, mainly because of its greedy component, max_a' Q(s', a'), which can overestimate action values and result in poor performance. Double Q-learning was invented to overcome this by utilizing two Q functions, which we denote Q1 and Q2. In each step, one Q function is randomly selected to be updated. If Q1 is selected, it is updated as follows:

Q1(s, a) := Q1(s, a) + α * [r + γ * Q2(s', argmax_a' Q1(s', a')) - Q1(s, a)]
If Q2 is selected, it is updated as follows:

Q2(s, a) := Q2(s, a) + α * [r + γ * Q1(s', argmax_a' Q2(s', a')) - Q2(s, a)]
This means that each Q function is updated using the other's value of the action found by greedy search over its own estimates. Decoupling action selection from action evaluation in this way reduces the overestimation of action values that a single Q function suffers from.
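The update rules above can be sketched in tabular form. The following is a minimal, self-contained illustration, not the recipe's full implementation: the `chain_step` environment (a four-state chain where moving right eventually earns a reward of 1) and all function and parameter names are hypothetical, chosen only to demonstrate the random selection between Q1 and Q2 and the cross-evaluation of the greedy action.

```python
import random
from collections import defaultdict

def chain_step(s, a):
    """Hypothetical toy environment: a 4-state chain.
    Action 1 moves right, action 0 moves left (floored at state 0).
    Reaching state 3 ends the episode with reward 1."""
    s_next = s + 1 if a == 1 else max(s - 1, 0)
    if s_next == 3:
        return s_next, 1.0, True
    return s_next, 0.0, False

def double_q_learning(env_step, n_actions, n_episodes=500,
                      alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q1 = defaultdict(float)  # first action-value estimate
    Q2 = defaultdict(float)  # second action-value estimate

    def argmax_rand(vals):
        # argmax with random tie-breaking
        best = max(vals)
        return rng.choice([i for i, v in enumerate(vals) if v == best])

    for _ in range(n_episodes):
        s, done = 0, False
        for _ in range(100):  # cap episode length
            if done:
                break
            # epsilon-greedy behavior policy over Q1 + Q2
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = argmax_rand([Q1[(s, a_)] + Q2[(s, a_)]
                                 for a_ in range(n_actions)])
            s_next, r, done = env_step(s, a)
            # randomly select one Q function; evaluate its greedy
            # action with the other Q function (the core of double Q)
            if rng.random() < 0.5:
                a_star = argmax_rand([Q1[(s_next, a_)] for a_ in range(n_actions)])
                target = r + (0.0 if done else gamma * Q2[(s_next, a_star)])
                Q1[(s, a)] += alpha * (target - Q1[(s, a)])
            else:
                a_star = argmax_rand([Q2[(s_next, a_)] for a_ in range(n_actions)])
                target = r + (0.0 if done else gamma * Q1[(s_next, a_star)])
                Q2[(s, a)] += alpha * (target - Q2[(s, a)])
            s = s_next
    return Q1, Q2

Q1, Q2 = double_q_learning(chain_step, n_actions=2)
```

After training, the combined estimate Q1 + Q2 prefers moving right in every non-terminal state, as expected for this chain. Note that acting with Q1 + Q2 is one common choice; the selected Q function alone can also be used.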