Summary
In this chapter, we walked through and implemented many of the DQN improvements that researchers have discovered since the first DQN paper was published in 2015. The list is far from complete. First, the selection of methods is based on the paper Rainbow: Combining Improvements in Deep Reinforcement Learning [1], published by DeepMind, so it is definitely biased toward DeepMind's own work. Second, RL is such an active field nowadays that new papers come out almost every day, which makes it very hard to keep up, even if we limit ourselves to a single kind of RL model such as DQN. The goal of this chapter was to give you a practical view of the different ideas that the field has developed.
In the next chapter, we will apply our DQN knowledge to a real-life scenario: stock trading.