Multi-layer perceptron: our first example of a network
In this chapter, we present our first example of a network with multiple dense layers. Historically, “perceptron” was the name given to a model with one single linear layer, and as a consequence, a model with multiple layers is called a multi-layer perceptron (MLP). The input and output layers are visible from the outside, while all the layers in between are not; hence the name hidden layers. In this context, a single layer computes a linear function of its inputs followed by a nonlinear “firing” rule, and the MLP is obtained by stacking such layers one after the other (the nonlinearity is essential, since a stack of purely linear layers would collapse into a single linear function):
Figure 1.3: An example of a multi-layer perceptron
In Figure 1.3, each node in the first hidden layer receives the inputs and “fires” (outputting 0 or 1) according to the value of its associated linear function. The output of the first hidden layer is then passed to the second hidden layer, where another linear function and firing rule are applied, and the results are in turn passed on to the output layer.
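To make the forward pass concrete, here is a minimal sketch in plain NumPy of such an MLP with two hidden layers, using a step activation so that each node “fires” 0 or 1 as described above. The layer sizes, random weights, and helper names (step, dense) are illustrative assumptions, not values taken from Figure 1.3.

import numpy as np

def step(z):
    # Threshold activation: a node "fires" 1 if its linear
    # combination is positive, and 0 otherwise.
    return (z > 0).astype(float)

def dense(x, W, b):
    # One layer: a linear function Wx + b followed by the firing rule.
    return step(W @ x + b)

rng = np.random.default_rng(0)

# Illustrative sizes: 4 inputs, hidden layers of 5 and 3 nodes, 1 output.
W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)
W2, b2 = rng.normal(size=(3, 5)), rng.normal(size=3)
W3, b3 = rng.normal(size=(1, 3)), rng.normal(size=1)

x = rng.normal(size=4)   # a single input vector
h1 = dense(x, W1, b1)    # first hidden layer: each node fires 0 or 1
h2 = dense(h1, W2, b2)   # second hidden layer
y = dense(h2, W3, b3)    # output layer
print(h1, h2, y)

Note that the hard 0/1 threshold is not differentiable; in practice it is replaced by smooth activations such as the sigmoid or ReLU so that the network can be trained with gradient descent.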