Now, let's implement what is known as the cross-entropy loss function. It is used during training to measure how well an NN classifies a small subset of the data points: the larger the value returned by the loss function, the less accurate the NN is at classifying the given data. We compute it by taking the negative log of the probability the NN assigns to the correct class and averaging those values over the batch, comparing the expected output (the ground truth) against the NN's actual output. For numerical stability, we will cap the loss value at 1:
import numpy as np

MAX_ENTROPY = 1  # cap on the loss value, for numerical stability

def cross_entropy(predictions=None, ground_truth=None):
    # Both arguments are required.
    if predictions is None or ground_truth is None:
        raise Exception("Error! Both predictions and ground truth must be float32 arrays")
    # Work on copies so the caller's arrays are left untouched.
    p = np.array(predictions).copy()
    y = np.array(ground_truth).copy()
    # The two arrays must line up element for element.
    if p.shape != y.shape:
        raise Exception("Error! Both predictions and ground truth must have the same shape")