Multilayer Perceptron
When we connect artificial neurons together in a well-defined structure, we call the result a neural network. Here is the simplest neural network, with a single neuron:
Neural network with one neuron
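The single-neuron network in the figure simply computes a weighted sum of its inputs, adds a bias, and passes the result through an activation function. The following is a minimal sketch of that computation in plain NumPy; the sigmoid activation and the specific input, weight, and bias values are illustrative assumptions, not taken from the figure.

```python
import numpy as np

def sigmoid(z):
    # Squash the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # input features (illustrative values)
w = np.array([0.8,  0.1, -0.4])  # one weight per input (illustrative values)
b = 0.2                          # bias term

y = sigmoid(np.dot(w, x) + b)    # weighted sum passed through the activation
print(y)
```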
We connect the neurons so that the output of one layer becomes the input of the next layer, until the output of the last layer becomes the output of the network. Such neural networks are called feedforward neural networks (FFNNs). Because these FFNNs are built from layers of neurons connected together, they are also called multilayer perceptrons (MLPs) or deep neural networks (DNNs).
As an example, the MLP depicted in the following diagram has three features as inputs, two hidden layers of five neurons each, and one output y. Each neuron is fully connected to the neurons of the next layer. Such layers are also called dense layers or affine layers, and such models are also known as sequential models.
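As a concrete illustration, here is a minimal sketch of that architecture built with the Keras Sequential API, assuming TensorFlow/Keras as the framework and ReLU hidden activations (neither is specified by the text above):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Three input features, two dense hidden layers of five neurons each,
# and a single output y, matching the diagram described above.
model = tf.keras.Sequential([
    layers.Dense(5, activation="relu", input_shape=(3,)),  # first hidden layer
    layers.Dense(5, activation="relu"),                    # second hidden layer
    layers.Dense(1),                                       # output y
])

model.summary()  # prints the layer structure and parameter counts
```

Each Dense layer connects every neuron to every neuron in the next layer, which is exactly the fully connected structure described above.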
Let's revisit some of the example datasets that we explored earlier and build simple neural networks...