Implementing a single-layer neural network
Now we can move on to neural networks. We will start with the simplest form of a neural network: a single-layer neural network. It differs from a perceptron in that the computations are performed by multiple units (neurons), hence a network. As you might expect, adding more units increases the number of problems that can be solved. The units perform their computations independently and are grouped in a layer; we call this layer the hidden layer, and the units in it the hidden units. For now, we will only consider a single hidden layer. The output layer behaves like a perceptron, but this time its input is the hidden units in the hidden layer instead of the input variables:
Figure 2.4: Single-layer neural network with two input variables, n hidden units, and a single output unit
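To make the architecture in Figure 2.4 concrete, here is a minimal NumPy sketch of a forward pass through such a network: two input variables feed n hidden units, and the hidden units feed a single output unit. The function and variable names (forward, W_hidden, and so on) are illustrative assumptions, not the exact code of this recipe:

```python
import numpy as np

def forward(X, W_hidden, b_hidden, W_out, b_out):
    # Hidden layer: each of the n hidden units computes a weighted sum
    # of the two input variables plus a bias.
    hidden = np.dot(X, W_hidden) + b_hidden   # shape: (n_samples, n_hidden)
    # Output layer: acts like a perceptron, but on the hidden units
    # instead of on the raw input variables.
    output = np.dot(hidden, W_out) + b_out    # shape: (n_samples, 1)
    return hidden, output

# Example with two input variables and n = 3 hidden units
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))        # 5 samples, 2 input variables
W_hidden = rng.normal(size=(2, 3))
b_hidden = np.zeros(3)
W_out = rng.normal(size=(3, 1))
b_out = np.zeros(1)

hidden, output = forward(X, W_hidden, b_hidden, W_out, b_out)
print(output.shape)                # (5, 1)
```

Note that this sketch omits the activation functions applied to the hidden and output units; those are discussed next.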
In our implementation of the perceptron, we used a unit step function to determine the class. In the next recipe, we will use a non-linear activation function...
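For reference, the unit step function used for the perceptron and the sigmoid, one common non-linear activation, can be sketched as follows; this is only an illustration and not necessarily the activation chosen in the next recipe:

```python
import numpy as np

def unit_step(z):
    # Hard threshold used by the perceptron: outputs 0 or 1.
    return np.where(z >= 0, 1, 0)

def sigmoid(z):
    # Smooth, non-linear activation that maps any real value to (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-5, 5, 5)
print(unit_step(z))   # [0 0 1 1 1]
print(sigmoid(z))     # smoothly increasing values between 0 and 1
```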