Thus far, we have been looking at a simple example of a single perceptron and how to train it. This worked well for our small dataset, but as the number of inputs increases, so does the complexity of our networks, and the underlying math grows accordingly. The following diagram shows a multilayer perceptron, or what we commonly refer to as an ANN:
In the diagram, we see a network with one input layer, one hidden layer, and one output layer. The inputs are now shared across an input layer of neurons. The first layer of neurons processes the inputs and passes its results to the hidden layer, and so on, until they finally reach the output layer.
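The layer-by-layer flow described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the book's implementation: the layer sizes (3 inputs, 4 hidden neurons, 2 outputs), the sigmoid activation, and the random weight initialization are all assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    # Squash each value into the (0, 1) range.
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Layer sizes chosen arbitrarily for illustration:
# 3 inputs -> 4 hidden neurons -> 2 output neurons.
W_hidden = rng.normal(size=(3, 4))
b_hidden = np.zeros(4)
W_output = rng.normal(size=(4, 2))
b_output = np.zeros(2)

x = np.array([0.5, -0.2, 0.1])  # one input example

# Forward pass: each layer processes the previous layer's output.
hidden = sigmoid(x @ W_hidden + b_hidden)       # input layer -> hidden layer
output = sigmoid(hidden @ W_output + b_output)  # hidden layer -> output layer

print(output)
```

Each matrix multiplication plays the role of one layer's weighted sum, and the activation function is applied element-wise before the result is handed to the next layer, mirroring the cascade shown in the diagram.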
Multilayer networks can get quite complex, and the code for these models is often abstracted away by high-level frameworks such as Keras, PyTorch, and so on. These tools work...