The backpropagation (BPN) algorithm is one of the most studied and most widely used algorithms in neural networks. It propagates the error from the output layer back to the neurons of the hidden layers; the resulting gradients are then used to update the weights. The whole learning process can be broken into two passes: the forward pass and the backward pass.
Forward pass: The inputs are fed to the network, and the signal is propagated from the input layer via the hidden layers to the output layer. At the output layer, the error and the value of the loss function are computed.
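The forward pass can be sketched in a few lines of NumPy. This is a minimal illustration, not the book's own code: it assumes a toy network with one sigmoid hidden layer, a linear output, and a mean-squared-error loss, with all sizes and data chosen arbitrarily.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy setup (illustrative choices): 5 samples, 3 inputs, 4 hidden neurons, 1 output
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))        # input samples
y = rng.normal(size=(5, 1))        # target values
W1 = rng.normal(size=(3, 4))       # input -> hidden weights
W2 = rng.normal(size=(4, 1))       # hidden -> output weights

# Forward pass: propagate the signal layer by layer
h = sigmoid(X @ W1)                # hidden-layer activations
y_hat = h @ W2                     # output-layer activations (linear output)

# At the output layer, compute the loss (here: mean squared error)
loss = np.mean((y_hat - y) ** 2)
```

The same pattern extends to deeper networks: each additional hidden layer is one more matrix multiplication followed by a nonlinearity.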
Backward pass: The gradient of the loss function is computed first for the output-layer neurons and then for the hidden-layer neurons. These gradients are then used to update the weights.
The two passes are repeated iteratively until convergence is reached.
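Putting the two passes together gives a complete training loop. The sketch below is an assumed minimal implementation, not the text's reference code: a one-hidden-layer sigmoid network trained by vanilla gradient descent on arbitrary toy data, with the learning rate and epoch count picked for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))            # 20 toy samples, 3 features
y = rng.normal(size=(20, 1))            # toy targets
W1 = rng.normal(size=(3, 4)) * 0.5      # input -> hidden weights
W2 = rng.normal(size=(4, 1)) * 0.5      # hidden -> output weights
lr = 0.1                                # learning rate (assumed value)

losses = []
for epoch in range(200):                # iterate the two passes
    # Forward pass: input -> hidden -> output, then the loss
    h = sigmoid(X @ W1)                 # hidden activations
    y_hat = h @ W2                      # network output (linear output layer)
    losses.append(np.mean((y_hat - y) ** 2))  # MSE loss

    # Backward pass: gradient at the output layer first...
    d_out = 2 * (y_hat - y) / len(X)    # dL/d(y_hat)
    dW2 = h.T @ d_out                   # gradient w.r.t. hidden -> output weights
    # ...then the error propagated back to the hidden layer
    d_hid = (d_out @ W2.T) * h * (1 - h)  # sigmoid'(z) = h * (1 - h)
    dW1 = X.T @ d_hid                   # gradient w.r.t. input -> hidden weights

    # Use the gradients to update the weights
    W2 -= lr * dW2
    W1 -= lr * dW1
```

Note that both gradients are computed before either weight matrix is updated, so the backward pass uses the same weights as the forward pass of that iteration.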