In the previous section, Choosing a good strategy to validate model performance, we discussed how to validate your neural network. In the following sections, we'll dive into choosing metrics for different kinds of models.
When you're building a classification model, you're looking for metrics that express how many samples were correctly classified, and how many were classified incorrectly.
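As a minimal sketch of that idea (assuming you already have the true labels and the model's predictions as plain lists of class labels; the values below are hypothetical and only for illustration), counting the correctly and incorrectly classified samples looks like this:

```python
# Hypothetical labels for illustration: y_true holds the expected classes,
# y_pred holds the classes the model predicted for the same samples.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

# Count how many predictions match the expected labels.
correct = sum(t == p for t, p in zip(y_true, y_pred))
incorrect = len(y_true) - correct

print(f"Correctly classified:   {correct}")
print(f"Incorrectly classified: {incorrect}")
print(f"Accuracy: {correct / len(y_true):.2f}")
```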
You can use a confusion matrix, a table of the predicted output versus the expected output, to learn a great deal about your model's performance. Interpreting it tends to get complicated, though, so we'll also look at a way to measure the performance of a model with a single number: the F-measure.
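As a quick sketch of what this looks like in practice (assuming scikit-learn is installed and reusing the same hypothetical labels as above), you can compute both the confusion matrix and the F-measure in a few lines:

```python
from sklearn.metrics import confusion_matrix, f1_score

# Hypothetical binary labels for illustration.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

# Rows correspond to the expected classes, columns to the predicted classes.
print(confusion_matrix(y_true, y_pred))
# [[2 1]
#  [1 4]]

# The F-measure (F1 score) condenses the model's performance into one number.
print(f1_score(y_true, y_pred))  # 0.8
```

We'll unpack what each cell of the matrix means, and how the F-measure is derived from it, in the sections that follow.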