Inspired by its biological counterpart (represented in Figure 1.11), the artificial neuron takes several inputs (each a number), sums them together, and finally applies an activation function to obtain the output signal, which can then be passed to the following neurons in the network (the network can thus be seen as a directed graph):
The summation of the inputs is usually weighted: each input is scaled up or down according to a weight specific to that particular input. These weights are the parameters that are adjusted during the training phase of the network so that the neuron reacts to the correct features. Often, one more parameter is also trained and used in this summation process: the neuron's bias. Its value is simply added to the weighted sum as an offset.
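The forward pass just described can be sketched in a few lines of NumPy. This is a minimal illustration, not the book's own implementation; the function names (`sigmoid`, `neuron_forward`) and the sample values are assumptions, and the sigmoid is used here as one common choice of activation function:

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation: squashes any real number into the (0, 1) range.
    return 1.0 / (1.0 + np.exp(-z))

def neuron_forward(x, w, b):
    # Weighted sum of the inputs plus the bias offset, then the activation.
    z = np.dot(x, w) + b
    return sigmoid(z)

# Hypothetical example values:
x = np.array([0.5, -1.0, 2.0])   # input signals
w = np.array([0.8, 0.2, -0.5])   # one weight per input
b = 0.1                          # bias, added as an offset
y = neuron_forward(x, w, b)      # the neuron's output signal
print(y)
```

During training, it is `w` and `b` that would be adjusted; `x` comes from the data (or from the outputs of preceding neurons in the graph).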
Let's quickly formalize this process mathematically. Suppose...