6. Deep Q-Network (DQN)
Using a Q-table to implement Q-learning works well in small, discrete environments. However, when the environment has a large number of states or is continuous, as is often the case, a Q-table is neither feasible nor practical. For example, if the state is made of four continuous variables, the table would be infinite in size. Even if we attempt to discretize each of the four variables into 1,000 values, the total number of rows in the table is a staggering 1000^4 = 1e12. Even after training, the table would be sparse – most of its cells would be zero.
A solution to this problem is called DQN [2], which uses a deep neural network to approximate the Q-table, as shown in Figure 9.6.1. There are two approaches to building the Q-network:
- The input is the state-action pair, and the prediction is the Q value
- The input is the state, and the prediction is the Q value for each action
The first option is not optimal since the network would have to be called once for every possible action just to find the action with the maximum Q value. The second option is preferred: the network is called only once per state, and the Q values of all actions are produced in a single forward pass.
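As a concrete illustration of the second approach, the following is a minimal sketch of a Q-network in Keras. The state dimension, number of actions, and layer sizes are assumptions chosen for illustration only (for example, a CartPole-like environment with four state variables and two discrete actions), not values prescribed by the text.

```python
# Minimal sketch of the second approach: the network maps a state to one
# Q value per action. State size and action count are illustrative only.
import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

state_dim = 4    # number of continuous state variables (assumed)
n_actions = 2    # number of discrete actions (assumed)

inputs = Input(shape=(state_dim,), name="state")
x = Dense(256, activation="relu")(inputs)
x = Dense(256, activation="relu")(x)
q_values = Dense(n_actions, activation="linear", name="q_values")(x)

q_network = Model(inputs, q_values)
q_network.compile(loss="mse", optimizer="adam")

# A single forward pass yields Q(s, a) for every action, so the greedy
# action is simply the argmax over the output vector.
state = np.random.uniform(size=(1, state_dim)).astype("float32")
action = np.argmax(q_network.predict(state, verbose=0)[0])
```

With this design, both action selection and the max term in the Q-learning target can be computed from one network call per state, which is what makes the second option the practical choice.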