A multi-layer perceptron (MLP) contains one or more hidden layers (in addition to one input layer and one output layer). While a single-layer perceptron can learn only linear functions, an MLP can also learn non-linear functions.
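To make the "non-linear functions" claim concrete, the sketch below hand-wires a tiny MLP that computes XOR, a classic non-linear function that no single-layer perceptron can represent. The weights and the step activation are hand-picked for illustration, not learned; a real MLP would learn its weights from data.

```python
import numpy as np

def step(z):
    """Heaviside step activation: 1 if z > 0, else 0."""
    return (z > 0).astype(float)

def xor_mlp(x):
    """A 2-input MLP with one hidden layer of two nodes.

    The weights below are hand-picked (not learned), purely to show
    that adding a hidden layer lets the network represent XOR.
    """
    x = np.asarray(x, dtype=float)
    # Hidden node 1 fires for OR(x1, x2); hidden node 2 for AND(x1, x2).
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([-0.5, -1.5])
    h = step(x @ W1 + b1)
    # Output fires when OR is true but AND is not, i.e. XOR.
    W2 = np.array([[1.0], [-1.0]])
    b2 = np.array([-0.5])
    return step(h @ W2 + b2)

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(xor_mlp(inputs).ravel())  # → [0. 1. 1. 0.]
```

A single-layer perceptron draws one line through the plane, but no single line separates {(0,1), (1,0)} from {(0,0), (1,1)}; the hidden layer composes two linear boundaries (OR and AND) into a non-linear decision.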
Figure 7 shows an MLP with a single hidden layer. Note that all connections have weights associated with them, but only three weights (w0, w1, and w2) are shown in the figure.
Input Layer: The Input Layer has three nodes. The bias node has a value of 1. The other two nodes take X1 and X2 as external inputs (numerical values that depend on the input dataset). As discussed before, no computation is performed in the Input Layer, so the outputs of its nodes are 1, X1, and X2 respectively, which are fed into the Hidden Layer.
Hidden Layer: The Hidden Layer also has three nodes, with the bias node having an output of 1.
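The computation in one hidden node can be traced in a few lines: it takes the weighted sum of the bias output (1) and the input-layer outputs X1 and X2, using the weights w0, w1, and w2 from the figure, then applies an activation function. The weight and input values below are made up for illustration, and sigmoid is assumed as the activation (one common choice among several).

```python
import math

def hidden_node_output(x1, x2, w0, w1, w2):
    """Output of one hidden node: activation applied to the weighted sum
    of the bias (fixed input 1) and the two input-layer outputs."""
    z = w0 * 1 + w1 * x1 + w2 * x2   # w0 multiplies the bias node's output, 1
    return 1 / (1 + math.exp(-z))    # sigmoid activation (an assumption here)

# Hypothetical weights and inputs, just to trace the arithmetic:
print(round(hidden_node_output(x1=1.0, x2=0.0, w0=-0.5, w1=1.0, w2=0.5), 4))  # → 0.6225
```

The second hidden node performs the same computation with its own set of weights, which is why different hidden nodes can respond to different patterns in the same inputs.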