When the state space of an environment gets too big, a Q-table is no longer a practical way to store a value for every state-action pair. A neural network can instead approximate the Q-function, so we don't need a lookup table to retrieve an exact recorded value for each state and action.
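To make the idea concrete, here is a minimal sketch of such an approximator: a small feedforward network that maps a state vector to one estimated Q-value per action, replacing the table lookup. The state size of 4, the two actions, and the layer widths are illustrative assumptions, not tied to any particular environment.

```python
# A minimal sketch (assumed sizes): a network that approximates Q(s, a)
# for a hypothetical environment with a 4-dimensional state and 2 actions.
import torch
import torch.nn as nn

q_net = nn.Sequential(
    nn.Linear(4, 64),    # state vector in
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 2),    # one estimated Q-value per action out
)

state = torch.rand(1, 4)           # a dummy state
q_values = q_net(state)            # estimated Q-values for both actions
action = q_values.argmax(dim=1)    # greedy action selection
```

Picking the action with the highest predicted value recovers the same greedy behavior a full Q-table would give, without having to store an entry for every state.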
One popular way to train a Q-network is to feed it images that represent states. For each state, the network considers the available actions and predicts which action will yield the highest value if taken from that state. Generally, the network does not look up an exact Q-value in a table; instead, it produces a probability distribution over values. We'll explore this type of network in the next chapter, after we cover the basics of building Q-networks.
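As a rough sketch of the image-based setup, the network below takes a single 84x84 grayscale frame and outputs one value estimate per action. The frame size, channel count, layer shapes, and the choice of four actions are all assumptions made for illustration, not a specification from this book.

```python
# A hedged sketch, assuming 84x84 grayscale frames and 4 discrete actions.
import torch
import torch.nn as nn

class ImageQNetwork(nn.Module):
    def __init__(self, num_actions: int = 4):
        super().__init__()
        # Convolutional layers extract features from the raw frame.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4),   # 84x84 -> 20x20
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2),  # 20x20 -> 9x9
            nn.ReLU(),
        )
        # Fully connected head maps features to one estimate per action.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256),
            nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(frame))

net = ImageQNetwork()
frame = torch.rand(1, 1, 84, 84)   # batch of one dummy frame
print(net(frame))                  # predicted value for each action
```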