Introduction
In the previous chapter, we learned that DQNs achieve higher performance than traditional reinforcement learning techniques. Video games are a perfect example of where DQN models excel. Training an agent to play video games can be quite difficult for traditional reinforcement learning agents because there is a huge number of possible combinations of states, actions, and Q-values to process and analyze during training.
Deep learning algorithms are renowned for handling high-dimensional tensors. To overcome this limitation, researchers combined Q-learning with deep learning models and came up with DQNs. A DQN uses a deep neural network as a function approximator of Q-values: instead of looking up a table entry for each state-action pair, the network maps a state to an estimated Q-value for every action. This technique constituted a major breakthrough in the reinforcement learning field, as it made it possible to handle much larger state and action spaces than traditional tabular models.
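To make this concrete, here is a minimal sketch of such a Q-value approximator in PyTorch. The network architecture, layer sizes, and the CartPole-like state and action dimensions are illustrative assumptions, not a prescription from the text; the key idea is simply that one forward pass returns an estimated Q-value per action.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """A small fully connected network approximating Q(s, a) for all actions at once.

    Given a state vector, the forward pass returns one Q-value per action,
    replacing the per-state row of a tabular Q-table.
    """
    def __init__(self, state_dim: int, num_actions: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),  # one output per discrete action
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# Illustrative usage: a 4-dimensional state and 2 discrete actions (assumed values).
q_net = QNetwork(state_dim=4, num_actions=2)
state = torch.randn(1, 4)               # a batch containing one observation
q_values = q_net(state)                 # shape (1, 2): estimated Q-value per action
greedy_action = q_values.argmax(dim=1)  # act greedily with respect to the estimates
```

Because the network generalizes across similar states, it can produce Q-value estimates even for states it has never seen, which is what allows DQNs to scale to state spaces far too large to enumerate in a table.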
Since then, further research has been undertaken and different types of DQN...