So far, we have a clear grasp of how data propagates through our perceptron. We have also briefly seen how our model's errors can be propagated backwards. At each training iteration, we use a loss function to compute a loss value, which tells us how far our model's predictions lie from the ground truth. But what then?
Training a perceptron
Quantifying loss
Since the loss value quantifies the difference between our predicted and actual outputs, a high loss value means that our model's predictions lie far from the actual output. Conversely, a low loss value indicates that our model is closing the gap between the predicted and actual outputs.
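As a quick illustration of this idea, here is a minimal sketch that computes a loss value, assuming a mean squared error loss (only one of many possible loss functions; the text has not yet committed to a specific one). The loss shrinks as the predictions approach the targets and grows as they drift away:

import numpy as np

def mse_loss(y_true, y_pred):
    # Mean squared error: the average squared difference
    # between the ground-truth targets and the predictions.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

# Predictions close to the targets -> small loss value
print(mse_loss([1.0, 0.0, 1.0], [0.9, 0.1, 0.8]))  # 0.02

# Predictions far from the targets -> large loss value
print(mse_loss([1.0, 0.0, 1.0], [0.1, 0.9, 0.2]))  # ~0.75

The two calls use the same targets; only the quality of the predictions changes, and the loss value reflects that difference directly.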