You will recall that Function Approximation (FA) approximates the state space using a set of features generated from the original states. Deep Q-Networks (DQNs) are closely related to FA with neural networks, but instead of relying on a set of generated features as an intermediary, they use neural networks to map states directly to action values.
In deep Q-learning, a neural network is trained to output the Q(s, a) value for each action given the input state, s. The agent's action, a, is then chosen from these output Q(s, a) values following the epsilon-greedy policy. The structure of a DQN with two hidden layers is depicted in the following diagram:
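The two pieces described here, a network with two hidden layers that maps a state to one Q-value per action, and epsilon-greedy selection over those values, can be sketched as follows. This is a minimal NumPy sketch with assumed layer sizes and randomly initialized weights, not the book's actual network definition:

```python
import numpy as np

def build_dqn(state_dim, hidden_dim, n_actions, rng):
    """Randomly initialized weights for a network with two hidden layers."""
    return {
        "W1": rng.normal(scale=0.1, size=(state_dim, hidden_dim)),
        "W2": rng.normal(scale=0.1, size=(hidden_dim, hidden_dim)),
        "W3": rng.normal(scale=0.1, size=(hidden_dim, n_actions)),
    }

def q_values(params, s):
    """Forward pass: state -> Q(s, a) for every action."""
    h1 = np.maximum(0, s @ params["W1"])   # first hidden layer (ReLU)
    h2 = np.maximum(0, h1 @ params["W2"])  # second hidden layer (ReLU)
    return h2 @ params["W3"]               # one Q-value per action

def epsilon_greedy(q, epsilon, rng):
    """Explore with probability epsilon, otherwise exploit argmax Q."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q)))
    return int(np.argmax(q))

rng = np.random.default_rng(0)
params = build_dqn(state_dim=4, hidden_dim=16, n_actions=2, rng=rng)
q = q_values(params, rng.normal(size=4))
a = epsilon_greedy(q, epsilon=0.1, rng=rng)
```

In a full DQN the weights would of course be trained, typically by minimizing the squared difference between the network's Q(s, a) and a bootstrapped target; only the forward pass and action selection are shown here.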
You will recall that Q-learning is an off-policy learning algorithm and that it updates the Q-function based on the following equation:

Q(s, a) := Q(s, a) + α * (r + γ * max_a' Q(s', a') - Q(s, a))
Here, s' is the resulting state after taking action, a, in state...
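The tabular form of this update rule is a one-liner. In this sketch, alpha is the learning rate and gamma is the discount factor (standard names, with values assumed here for illustration):

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Off-policy update: bootstrap from the greedy value max_a' Q(s', a')."""
    td_target = r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (td_target - Q[s][a])
    return Q

# Two states, two actions, all Q-values starting at zero.
Q = {0: [0.0, 0.0], 1: [0.0, 0.0]}
q_learning_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q[0][1])  # 0.1 * (1.0 + 0.9 * 0 - 0) = 0.1
```

The update is off-policy because the target uses the greedy action in s' (the max over a'), regardless of which action the epsilon-greedy behavior policy actually takes next. A DQN replaces the table Q with a network and turns this same target into a regression loss.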