Understanding deep Q-learning
Instead of maintaining a Q-values table, a deep Q-network (DQN) uses a deep neural network (DNN) that outputs a Q-value for a given state-action pair. DQN is used with complex environments, such as video games, where there are far too many states to manage in a Q-values table. The current image frame of the video game represents the current state and is fed as input to the underlying DNN model, together with the current action.
The DNN outputs a scalar Q-value for each such input. In practice, instead of passing just the current image frame, the N neighboring image frames within a given time window are passed together as input to the model, so that the network can also pick up dynamic information, such as motion, that a single frame cannot convey.
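The following is a minimal PyTorch sketch of such a network. Note that, as a simplifying assumption for illustration, it follows the common DQN convention of taking only the stacked frames as input and outputting one Q-value per action, which amounts to evaluating every state-action pair for the current state in a single forward pass; `num_frames=4`, `num_actions=6`, and the 84x84 frame size are hypothetical values:

```python
import torch
import torch.nn as nn

class DQN(nn.Module):
    """Maps a stack of N frames (the state) to one Q-value per action."""

    def __init__(self, num_frames: int = 4, num_actions: int = 6):
        super().__init__()
        # Convolutional trunk over the stacked grayscale frames
        self.features = nn.Sequential(
            nn.Conv2d(num_frames, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        # Fully connected head; 64 * 7 * 7 assumes 84x84 input frames
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_frames, 84, 84) -> (batch, num_actions)
        return self.head(self.features(x))

# A batch of one state: 4 stacked 84x84 frames
state = torch.zeros(1, 4, 84, 84)
q_values = DQN()(state)           # shape: (1, 6), one Q-value per action
best_action = q_values.argmax(1)  # greedy action selection
```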
We are using a DNN to solve an RL problem, and this raises an inherent concern. When working with DNNs, we have always worked with independent and identically distributed (iid) data samples. However, in RL, every current output impacts the next input. For example, in the case of Q-learning, the Bellman equation itself suggests that the Q-value of the current state-action pair is defined in terms of the Q-value at the next state, so consecutive training samples drawn from a trajectory are strongly correlated rather than iid.
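To make this dependence concrete, recall the standard tabular Q-learning update implied by the Bellman equation, where α is the learning rate and γ is the discount factor:

$$
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]
$$

The target on the right-hand side reuses the current estimate Q(s_{t+1}, a'), so each update both consumes the previous step's output and shapes the next step's input.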