We've learned how neural networks work at a basic level and how to implement a simple network using NumPy. We covered loss functions, gradient descent, and backpropagation, which together let a network update its weights and fit its internal function to a useful model of a dataset. Finally, we built our first Q-network using TensorFlow and became familiar with the framework.
In the next chapter, we'll discuss how to improve on the Q-network we built, with techniques such as experience replay and using images as input to the network. We'll build a deep Q-network using Keras running on a TensorFlow backend. You can think of Keras as a wrapper or frontend for TensorFlow: it abstracts much of TensorFlow's lower-level functionality into a simple framework for building complex deep learning architectures.
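To give a flavor of that abstraction, here is a minimal sketch of a small Q-network declared through the Keras API (assuming TensorFlow 2.x with its bundled Keras; the state size of 4 and action count of 2 are hypothetical, not from a specific environment in this book):

```python
from tensorflow import keras

# Keras describes the architecture declaratively; TensorFlow handles
# the underlying tensor operations and gradient computation.
model = keras.Sequential([
    # Hypothetical sizes: a 4-dimensional state vector as input...
    keras.layers.Dense(24, activation="relu", input_shape=(4,)),
    keras.layers.Dense(24, activation="relu"),
    # ...and one Q-value output per action (2 discrete actions here).
    keras.layers.Dense(2),
])

# One line wires up the optimizer and loss; compare this with
# building the training ops by hand in raw TensorFlow.
model.compile(optimizer="adam", loss="mse")
model.summary()
```

The equivalent raw TensorFlow code would require manually defining weight variables, the forward pass, and the training step, which is exactly the boilerplate Keras hides.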