So, how well did the model do? How we measure performance depends on the situation; here, let's evaluate the model by plotting a histogram of its prediction errors:
# Per-sample prediction error in MPG
error = test_predictions - test_labels
# Plot the distribution of errors
plt.hist(error, bins=25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
plt.show()
Now, let's view the output:
Fig 3.12: Count of prediction errors in the model
It looks like the model predicted reasonably well. The error distribution is not quite Gaussian (normally distributed), but that is to be expected because the number of test samples is small.
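If you want a quick numeric check to go with the plot, you can summarize the errors with their mean and standard deviation. This is just a minimal sketch that reuses the error array computed above and assumes NumPy is available:
import numpy as np

# Summarize the error distribution from the histogram above:
# a mean near zero suggests little systematic bias, and the standard
# deviation gives a rough scale of the typical error in MPG.
print(f"Mean error [MPG]: {np.mean(error):.2f}")
print(f"Std of error [MPG]: {np.std(error):.2f}")
With so few test samples, these summary statistics will themselves be noisy, so treat them as a rough sanity check rather than a definitive score.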