We will now explore how we can evaluate the performance of a neural network by using a cost function. We will use it to measure how far the network's prediction is from the expected value. We are going to use the following notation and variables:
- The variable Y to represent the true value
- The variable a to represent the neuron's prediction
In terms of weights and biases, the formula is as follows:

$$z = wx + b, \qquad a = \sigma(z)$$

We pass z, which is the input (x) multiplied by the weight (w) and added to the bias (b), into the activation function σ, which gives us the neuron's prediction a.
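To make this notation concrete, here is a minimal NumPy sketch of a single neuron's forward pass. The sigmoid is assumed here as the activation function σ, and the input, weight, and bias values are arbitrary illustrative choices, not values from the text:

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation: squashes z into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values (assumptions, chosen only for the example)
x = np.array([0.5, -1.2])   # input
w = np.array([0.8, 0.3])    # weights
b = 0.1                      # bias

z = np.dot(w, x) + b         # weighted input: z = w·x + b
a = sigmoid(z)               # neuron prediction: a = σ(z)
print(a)
```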
There are many types of cost functions, but we are just going to discuss two of them:
- The quadratic cost function
- The cross-entropy function
The first cost function we are going to discuss is the quadratic cost function, which is represented by the following formula:

$$C = \frac{1}{2n}\sum_{x}\left(Y - a\right)^{2}$$
In the preceding formula, we can see that when the error is high, which means the actual value (Y) differs significantly from the predicted value (a), the value of the cost function is also high.
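As a rough illustration of this behavior, the following sketch computes the quadratic cost for two hypothetical sets of predictions; the true values, prediction arrays, and the 1/2 scaling convention are assumptions made for the example, not values from the text. The further the predictions are from the true values, the larger the cost:

```python
import numpy as np

def quadratic_cost(y, a):
    # Quadratic cost: mean of the squared differences between the
    # true values y and the predictions a (with the common 1/2 factor)
    return np.mean((y - a) ** 2) / 2.0

# Hypothetical true values and predictions, for illustration only
y = np.array([1.0, 0.0, 1.0])
a_close = np.array([0.9, 0.1, 0.8])   # predictions close to y
a_far = np.array([0.1, 0.9, 0.2])     # predictions far from y

print(quadratic_cost(y, a_close))  # small errors -> small cost
print(quadratic_cost(y, a_far))    # large errors -> large cost
```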