Building the network
Individual neurons can be organized into a network (see Figure 8.4), usually by joining several neurons in parallel into a layer and then stacking layers on top of each other. Such a network is known as a feed-forward NN or a multilayer perceptron (MLP). The first layer is the input layer, the last layer is the output layer, and all inner layers are known as hidden layers. If each neuron of one layer is connected to all the neurons in the next layer, the network is called a fully-connected NN.
A fully-connected feed-forward multilayer perceptron with a single type of activation function (usually sigmoid) is the traditional (canonical) type of NN. It is mostly used for classification purposes. In the following chapters, we will discuss other types of NNs, but in this chapter we will stick to the MLP:
Figure 8.4: Fully-connected feed-forward NN with five layers
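To make the structure concrete, here is a minimal sketch of such a five-layer fully-connected MLP with sigmoid activations, written in NumPy. The layer sizes, weight initialization scale, and class name are illustrative assumptions, not taken from the text:

```python
import numpy as np

def sigmoid(x):
    # Classic sigmoid activation: squashes any input into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """Illustrative fully-connected feed-forward NN (multilayer perceptron)."""

    def __init__(self, layer_sizes, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix and one bias vector per pair of adjacent layers;
        # full matrices mean every neuron connects to every neuron in the
        # next layer (fully-connected).
        self.weights = [rng.standard_normal((m, n)) * 0.1
                        for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
        self.biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(self, x):
        # Feed-forward pass: each layer's output becomes the next layer's input
        for W, b in zip(self.weights, self.biases):
            x = sigmoid(x @ W + b)
        return x

# Five layers as in Figure 8.4: one input layer (4 units), three hidden
# layers (8 units each), and one output layer (2 units) -- sizes are
# arbitrary choices for the example.
net = MLP([4, 8, 8, 8, 2])
out = net.forward(np.ones(4))
print(out.shape)  # 2 output values, each in (0, 1)
```

Because every layer uses the sigmoid, each output lands strictly between 0 and 1, which is why this canonical MLP is a natural fit for classification.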