Recall Chapter 2, Training Reinforcement Learning Agents Using OpenAI Gym, where we implemented a basic Q-network. We saw that for real-world problems, Q-learning with a Q-table is not feasible owing to continuous state and action spaces. Moreover, a Q-table is environment-specific and does not generalize. Therefore, we need a model that maps the state information provided as input to the Q-values of the possible set of actions. This is where a neural network plays the role of a function approximator: it takes the state information as an input vector and learns to map it to Q-values for all possible actions.
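To make this concrete, here is a minimal sketch of such a Q-network as a function approximator. The framework (Keras), the state dimension of 4, the two actions, and the hidden-layer sizes are all illustrative assumptions, not part of the original text; the idea is simply a network that takes a state vector in and produces one Q-value per action:

```python
import numpy as np
import tensorflow as tf

# Hypothetical dimensions: a 4-dimensional state vector and
# 2 discrete actions (e.g., a CartPole-like environment).
state_dim, n_actions = 4, 2

# The network maps the state vector to one Q-value per action.
q_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(state_dim,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_actions),  # linear output: Q(s, a) for each a
])

state = np.random.rand(1, state_dim).astype(np.float32)  # a dummy state
q_values = q_net(state)                     # Q-values for all actions
action = int(tf.argmax(q_values, axis=1))   # greedy action selection
```

Note that, unlike a Q-table, the same network architecture works for any environment with a vector-valued state; only the input and output dimensions change.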
Let's discuss the issues with Q-learning in a gaming environment and the evolution of deep Q-networks. Consider applying Q-learning to a gaming environment: the state would be defined by the location of...