Measuring prediction performance using ROCR
One obstacle to using a confusion matrix to assess a classification model is that you have to arbitrarily select a threshold to compute the matrix. A way around this is to summarize the confusion matrices produced at every threshold in a single plot, the Receiver Operating Characteristic (ROC) curve.
A ROC curve is a plot that illustrates the performance of a binary classifier by plotting the true positive rate against the false positive rate at various cutoff points. The plot is most commonly summarized by the area under the curve (AUC), a single number used to measure the performance of a classification model. In this recipe, we demonstrate how to plot a ROC curve and calculate the AUC to measure the performance of a classification model.
Getting ready
You need to have completed the previous recipe by generating a classification model and assigning the model to the variable fit.
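If you have not kept a model around, a sketch like the following would satisfy this prerequisite; the churn data from the C50 package and the svm classifier from e1071 are illustrative assumptions, not requirements of ROCR:

    # Illustrative setup only: the churn data and the svm classifier
    # are assumptions standing in for whatever model you trained earlier
    library(C50)      # provides the churn example data sets
    library(e1071)    # provides the svm classifier

    data(churn)       # loads churnTrain and churnTest

    # Train a classifier that can emit class probabilities
    # and assign it to the variable fit
    fit <- svm(churn ~ ., data = churnTrain, probability = TRUE)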
How to do it…
Perform the following steps to generate a ROC curve and calculate the AUC:
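The steps below are a minimal sketch rather than a verified listing: they assume fit is an e1071 svm trained with probability = TRUE, that churnTest is the held-out test set with the true labels in its churn column, and that "yes" is the positive class; adjust these names for your own model.

    # Install and load the ROCR package (installation is only needed once):
    install.packages("ROCR")
    library(ROCR)

    # Obtain class probabilities on the test set; attr(, "probabilities")
    # assumes an e1071 svm fitted with probability = TRUE:
    pred <- predict(fit, churnTest, probability = TRUE)
    pred.prob <- attr(pred, "probabilities")
    pred.to.roc <- pred.prob[, "yes"]   # probability of the positive class

    # Build a ROCR prediction object from the scores and the true labels:
    pred.rocr <- prediction(pred.to.roc, churnTest$churn)

    # Compute the AUC and the true positive/false positive rate curve:
    perf.rocr <- performance(pred.rocr, measure = "auc")
    perf.tpr.rocr <- performance(pred.rocr, "tpr", "fpr")

    # Plot the ROC curve with the AUC shown in the title:
    plot(perf.tpr.rocr, colorize = TRUE,
         main = paste("AUC:", round(perf.rocr@y.values[[1]], 3)))

An AUC of 0.5 corresponds to random guessing and 1.0 to a perfect classifier, so higher values on this scale indicate a better-performing model.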