Feed-forward networks
Feed-forward networks (FFNs), or fully connected networks, are the most basic architecture a neural network can take. We discussed perceptrons in Chapter 11, Introduction to Deep Learning. If we stack multiple layers of perceptrons (each a linear unit followed by a non-linear activation) and connect them into a network, we get what we call an FFN. The following diagram will help us understand this:
Figure 12.2 – Feed-forward network
An FFN takes a fixed-size input vector and passes it through a series of computational layers leading up to the desired output. This architecture is called feed-forward because information flows in only one direction, from input to output, with no cycles or feedback connections. It is also called a fully connected network because every unit in a layer is connected to every unit in the previous layer and every unit in the next layer.
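To make this concrete, here is a minimal sketch of an FFN forward pass in NumPy. The layer sizes (an input dimension of 4, two hidden layers of 16 and 8 units, and a single output) are illustrative assumptions, not values from the diagram:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Non-linear activation applied element-wise
    return np.maximum(z, 0.0)

# Each layer is a linear unit (weight matrix + bias vector).
# Illustrative sizes: 4 -> 16 -> 8 -> 1
W1, b1 = rng.standard_normal((4, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 8)), np.zeros(8)
W3, b3 = rng.standard_normal((8, 1)), np.zeros(1)

def ffn(x):
    # Information is fed forward through the layers, one after another
    h1 = relu(x @ W1 + b1)   # input layer -> hidden layer 1
    h2 = relu(h1 @ W2 + b2)  # hidden layer 1 -> hidden layer 2
    return h2 @ W3 + b3      # hidden layer 2 -> output layer

x = rng.standard_normal((32, 4))  # a batch of 32 fixed-size input vectors
print(ffn(x).shape)               # (32, 1)
```

Because every weight matrix connects all units in one layer to all units in the next, each matrix multiplication realizes the "fully connected" property described above.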
The first layer is called the input layer, and its size equals the dimension of the input vector. The last layer is called the output layer...