Forward Propagation and the Loss Function
So far, we have seen how a neuron takes an input, performs some mathematical operations on it, and produces an output, and we have learned that a neural network is a combination of multiple layers of such neurons.
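To make this concrete, here is a minimal sketch of a single neuron. It assumes a NumPy implementation and a sigmoid activation purely for illustration; neither choice is fixed by the text. The neuron computes a weighted sum of its inputs, adds a bias, and passes the result through the activation function.

import numpy as np

def sigmoid(z):
    # Squash the weighted sum into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical numbers: a neuron with three inputs
x = np.array([0.5, -1.2, 3.0])   # input vector
w = np.array([0.4, 0.1, -0.7])   # weights, one per input
b = 0.2                          # bias

output = sigmoid(np.dot(w, x) + b)   # weighted sum, then activation
print(output)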
The process of transforming the inputs of a neural network into a result is called forward propagation (or the forward pass). What we are asking the neural network to do is to make a prediction (its final output) by applying multiple neurons, layer by layer, to the input data.
The neural network relies on the weight matrices, biases, and activation function of each neuron to calculate the predicted output value, ŷ. For now, let's assume the values of the weight matrices and biases are set in advance. The activation functions are defined when you design the architecture of the neural network.
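As a rough sketch of the forward pass under the same assumptions (NumPy, sigmoid activations, and small randomly initialized weight matrices standing in for values "set in advance"), each layer's output is fed as the input to the next layer until the network produces ŷ:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_pass(x, layers):
    # Each layer is a (weight_matrix, bias_vector) pair; the output of one
    # layer becomes the input of the next.
    a = x
    for W, b in layers:
        a = sigmoid(W @ a + b)
    return a

# Hypothetical architecture: 3 inputs -> 4 hidden units -> 1 output
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(4, 3)), np.zeros(4)),   # hidden layer
    (rng.normal(size=(1, 4)), np.zeros(1)),   # output layer
]

x = np.array([0.5, -1.2, 3.0])
y_hat = forward_pass(x, layers)               # the predicted output ŷ
print(y_hat)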
As for any supervised machine learning algorithm, the goal is to produce predictions, ŷ, that are as close as possible to the true target values. To measure how close they are, we need a loss function, which quantifies the error between the prediction and the actual value.
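One common way to quantify that gap, assumed here purely for illustration (the appropriate loss depends on the task), is the mean squared error between the prediction and the target:

import numpy as np

def mse_loss(y_hat, y_true):
    # Mean squared error: average of the squared differences between
    # predictions and targets
    return np.mean((y_hat - y_true) ** 2)

# Hypothetical prediction and target for a single example
y_hat = np.array([0.73])
y_true = np.array([1.0])
print(mse_loss(y_hat, y_true))   # a smaller value means a better prediction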