Measuring model performance with a confusion matrix
To measure the performance of a classification model, we first generate a classification table from the predicted labels and the actual labels. We then use a confusion matrix to obtain performance measures such as precision, recall, specificity, and accuracy. In this recipe, we will demonstrate how to retrieve a confusion matrix using the caret package.
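For reference, the following is a minimal sketch of how these four measures are derived from the cells of a two-by-two confusion matrix. The counts used here match the classification table generated later in this recipe, and treating yes as the positive class is an assumption made purely for illustration:
> # Illustrative only: counts taken from the classification table shown
> # later in this recipe, with yes assumed to be the positive class
> TP = 8   # predicted yes, actually yes
> TN = 11  # predicted no,  actually no
> FP = 0   # predicted yes, actually no
> FN = 1   # predicted no,  actually yes
> precision   = TP / (TP + FP)
> recall      = TP / (TP + FN)                  # also known as sensitivity
> specificity = TN / (TN + FP)
> accuracy    = (TP + TN) / (TP + TN + FP + FN)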
Getting ready
You need to have completed the previous recipes by generating a classification model, and to have assigned the model to the variable fit.
How to do it…
Perform the following steps to generate the classification measurements:
- Predict labels using the fitted model, fit:
> pred = predict(fit, testset[, !names(testset) %in% c("buy")], type="class")
- Generate a classification table:
> table(pred, testset[,c("buy")])
pred  no yes
  no  11   1
  yes  0   8
- Lastly, generate a confusion matrix using prediction results and actual labels from the testing dataset...
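As a minimal sketch of this last step, assuming the caret package is installed and loaded, the classification table can be passed to caret's confusionMatrix function, which reports accuracy, sensitivity (recall), specificity, and related statistics:
> library(caret)
> confusionMatrix(table(pred, testset[,c("buy")]))
As a quick sanity check against the classification table above, the overall accuracy should come out to (11 + 8) / 20 = 0.95.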