In this section, we implement a fully connected ReLU classifier using NumPy.
Note that, in practice, we wouldn't implement a neural network at this level of detail; this is purely for demonstration, so that we can get comfortable with the matrix multiplications and the feedforward structure involved.
As mentioned in the previous section, NumPy is a widely used Python library for scientific computing, but it has no built-in support for gradients or computation graphs. Nevertheless, we can apply matrix operations to NumPy arrays to simulate a two-layer network, including both the feedforward and the backpropagation passes.
All of the code for this section can be found in the GitHub repository for this chapter; note that not all of it is reproduced here.
We begin by importing the required package, as follows:
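As a preview of where the walkthrough is headed, here is a minimal end-to-end sketch of such a two-layer ReLU classifier in plain NumPy. The layer sizes, learning rate, random data, and variable names below are illustrative assumptions, not the chapter's actual code; the point is the structure: a forward pass built from matrix multiplications, followed by hand-derived gradients applied in reverse order.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: batch of 64 samples, 100 input features,
# 50 hidden units, 10 output classes.
N, D_in, H, D_out = 64, 100, 50, 10

# Random inputs and integer class labels (stand-ins for real data).
x = rng.standard_normal((N, D_in))
y = rng.integers(0, D_out, size=N)

# Randomly initialized weights for the two layers.
w1 = rng.standard_normal((D_in, H)) * np.sqrt(2.0 / D_in)
w2 = rng.standard_normal((H, D_out)) * np.sqrt(2.0 / H)

lr = 1e-1
losses = []
for step in range(500):
    # Forward pass: linear -> ReLU -> linear -> softmax.
    h = x @ w1
    h_relu = np.maximum(h, 0.0)
    scores = h_relu @ w2
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    exp_scores = np.exp(scores)
    probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)

    # Cross-entropy loss averaged over the batch.
    loss = -np.log(probs[np.arange(N), y]).mean()
    losses.append(loss)

    # Backward pass: hand-derived gradients, layer by layer.
    dscores = probs.copy()
    dscores[np.arange(N), y] -= 1.0
    dscores /= N
    dw2 = h_relu.T @ dscores
    dh_relu = dscores @ w2.T
    dh = dh_relu * (h > 0)  # gradient through the ReLU
    dw1 = x.T @ dh

    # Plain gradient-descent update.
    w1 -= lr * dw1
    w2 -= lr * dw2
```

Every step here is an ordinary array operation, which is exactly why NumPy suffices for a small network: the backward pass simply mirrors the forward pass, with each gradient computed by transposing the corresponding matrix multiplication.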