Introduction
The focus of this chapter is to provide solutions to common implementation problems for feedforward neural networks (FNNs) and other network topologies. The techniques discussed in this chapter also apply to the following chapters.
FNNs are networks in which information moves in only one direction and never cycles (in contrast to the recurrent networks we will see in Chapter 4, Recurrent Neural Networks). FNNs are mainly used for supervised learning where the data is not sequential or time-dependent, for example in general classification and regression tasks. We will start by introducing the perceptron and showing how to implement one with NumPy; a perceptron demonstrates the mechanics of a single unit, as the sketch below illustrates. Next, we will increase the complexity by adding more units and introduce single-layer and multi-layer neural networks. A high number of units, combined with a high number of layers, gives an architecture its depth, which is the origin of the term deep learning.
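As a preview of what the single-unit mechanics look like, the following is a minimal sketch of a perceptron trained with the classic perceptron learning rule. The toy dataset (logical OR), learning rate, and epoch count are illustrative assumptions, not the chapter's actual examples:

```python
import numpy as np

# Toy, linearly separable data: two inputs per sample, labels in {0, 1} (logical OR)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=2)  # one weight per input
bias = 0.0
learning_rate = 0.1  # illustrative choice

def predict(x):
    # Step activation: the unit fires (1) if the weighted sum exceeds 0
    return int(np.dot(weights, x) + bias > 0)

for epoch in range(10):
    for xi, target in zip(X, y):
        error = target - predict(xi)
        # Perceptron rule: nudge weights in the direction that reduces the error
        weights += learning_rate * error * xi
        bias += learning_rate * error

print([predict(xi) for xi in X])  # expected: [0, 1, 1, 1]
```

Because the perceptron uses a hard step activation, it can only separate classes with a straight line (or hyperplane); stacking many such units into layers, as the rest of the chapter does, is what lifts this limitation.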