Exploring the foundations of neural networks using an MLP
A deep learning architecture is created when at least three perceptron layers are stacked, excluding the input layer. A perceptron is a single-layer network made up of neuron units. Each neuron holds a bias variable and acts as a node in the network graph; it interacts with the neurons of an adjacent layer through weighted connections (edges). A perceptron layer is also known as a fully connected layer or dense layer, and MLPs are also known as feedforward neural networks or fully connected neural networks.
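A single dense layer can be sketched in a few lines of NumPy. The layer sizes here (three inputs, four neurons) and the ReLU activation are illustrative assumptions, not values from the text:

```python
import numpy as np

# A dense (fully connected) layer: every input connects to every neuron
# through a weight, and each neuron adds its own bias variable.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))     # one weight per input-to-neuron connection
b = np.zeros(4)                 # one bias per neuron

x = np.array([0.5, -1.2, 3.0])  # three input values
z = x @ W + b                   # weighted sum of inputs plus bias
a = np.maximum(z, 0.0)          # ReLU activation (an assumed, common choice)
print(a.shape)                  # (4,)
```

Each of the four output values is the weighted sum of all three inputs plus that neuron's bias, which is exactly the "weights applied to the connections" behavior described above.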
Let’s refer back to the MLP figure from the previous chapter to get a better idea.
Figure 2.1 – Simple deep learning architecture, also called an MLP
The figure shows how three input features (data columns) are passed into the input layer, then propagated to the hidden layer, and finally through the output layer. Although not...
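The flow in Figure 2.1 can be sketched as a forward pass in NumPy. The hidden and output layer sizes below are assumptions for illustration; only the three input features come from the figure:

```python
import numpy as np

# Forward propagation: input layer -> hidden layer -> output layer.
rng = np.random.default_rng(42)
X = rng.normal(size=(5, 3))                      # 5 samples, 3 input features

W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)    # input -> hidden (4 neurons, assumed)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # hidden -> output (1 neuron, assumed)

hidden = np.maximum(X @ W1 + b1, 0.0)            # hidden layer with ReLU
output = hidden @ W2 + b2                        # output layer
print(output.shape)                              # (5, 1)
```

Each layer's output becomes the next layer's input, which is why such models are called feedforward networks.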