In the previous example, we saw how it is relatively simple, using a 16x4 table, to update the Q-values at each step of the learning process. Such a table is adequate for simple problems, but in real-world problems, where the state space is far larger, we need a more sophisticated mechanism to represent states and estimate their Q-values. This is the point where deep learning steps in. Neural networks are exceptionally good at extracting useful features from highly structured data.
In this final section, we'll look at how to manage a Q-function with a neural network, which takes the state as input and outputs a Q-value for each possible action.
To do that, we'll build a one-layer network that takes the state, encoded as a one-hot [1x16] vector, and maps it to a vector of length four containing the Q-values of the possible actions; the action with the highest Q-value is the move the agent learns to select.
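Before walking through the full implementation, the following is a minimal sketch of this idea, not the exact code developed here: it assumes the FrozenLake-v1 environment from Gymnasium and the TensorFlow 2 Keras API, and the hyperparameters (discount factor, exploration rate, number of episodes) are illustrative choices only.

```python
import numpy as np
import gymnasium as gym   # assumption: Gymnasium provides FrozenLake-v1
import tensorflow as tf

env = gym.make("FrozenLake-v1")  # 16 states, 4 actions

# Single dense layer, no bias: its [16x4] weight matrix holds one Q-value per
# (state, action) pair, so Q(s, :) = one_hot(s) @ W.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(
        4, use_bias=False,
        kernel_initializer=tf.keras.initializers.RandomUniform(0.0, 0.01)),
])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

def one_hot(state):
    """Encode a discrete state (0..15) as a [1x16] one-hot vector."""
    return np.identity(16, dtype=np.float32)[state:state + 1]

gamma, epsilon = 0.99, 0.1       # illustrative hyperparameters
for episode in range(2000):
    state, _ = env.reset()
    done = False
    while not done:
        q_values = model(one_hot(state)).numpy()   # shape [1, 4]
        # Epsilon-greedy action selection
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q_values))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated

        # Bellman target for the chosen action: r + gamma * max_a' Q(s', a'),
        # with the bootstrap term dropped on terminal transitions
        target = q_values.copy()
        target[0, action] = reward + gamma * (1.0 - float(terminated)) * \
            np.max(model(one_hot(next_state)).numpy())

        # One gradient step on the squared Bellman error
        with tf.GradientTape() as tape:
            loss = tf.reduce_sum(tf.square(target - model(one_hot(state))))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

        state = next_state
```

Note that the [16x4] weight matrix of the dense layer plays the same role as the Q-table from the previous example; the difference is that its entries are now learned by gradient descent on the Bellman error rather than updated directly, which is what lets the same approach scale to state spaces too large for an explicit table.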