A practical overview of backpropagation
Multi-layer perceptrons learn from training data through a process called backpropagation. In this section we give an intuition for how it works; more details can be found in Chapter 14, The Math Behind Deep Learning. The process can be described as a way of progressively correcting mistakes as soon as they are detected. Let’s see how this works.
Remember that each neural network layer has an associated set of weights that determine the output values for a given set of inputs. Additionally, remember that a neural network can have multiple hidden layers.
At the beginning, all the weights have some random assignment. Then the neural network is activated for each input in the training set: values are propagated forward from the input stage through the hidden stages to the output stage where a prediction is made.
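As a minimal sketch of this forward pass (the layer sizes, activation function, and NumPy usage here are illustrative assumptions, not taken from the text), the propagation of one input through a hidden layer to an output prediction might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 inputs, one hidden layer of 3 units, 2 outputs.
# At the beginning, all the weights have some random assignment.
W1 = rng.normal(size=(4, 3))
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 2))
b2 = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Propagate values forward: input stage -> hidden stage -> output stage."""
    hidden = sigmoid(x @ W1 + b1)          # hidden-layer activations
    output = sigmoid(hidden @ W2 + b2)     # output stage: the prediction
    return output

x = rng.normal(size=4)                     # one input from the training set
prediction = forward(x)
print(prediction.shape)                    # (2,)
```

With random weights the prediction is initially arbitrary; backpropagation then adjusts `W1` and `W2` to reduce the error between this prediction and the true label.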
Note that we keep Figure 1.27 below simple by representing only a few values with green dotted lines, but in reality all the values...