Measuring precision and recall of a classifier
In addition to accuracy, there are a number of other metrics used to evaluate classifiers. Two of the most common are precision and recall. To understand these two metrics, we must first understand false positives and false negatives. A false positive happens when a classifier assigns a label to a feature set that shouldn't have gotten it. A false negative happens when a classifier fails to assign a label to a feature set that should have it. In a binary classifier, these errors happen at the same time: every misclassification is a false positive for one label and a false negative for the other.
Here's an example: the classifier classifies a movie review as pos when it should have been neg. This counts as a false positive for the pos label, and a false negative for the neg label. If the classifier had correctly guessed neg, then it would count as a true positive for the neg label, and a true negative for the pos label.
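To make that bookkeeping concrete, here is a minimal pure-Python sketch (the per_label_counts helper and the single review pair are invented for illustration) that tallies true positives, false positives, and false negatives per label from (predicted, actual) pairs:

    from collections import Counter

    def per_label_counts(pairs):
        """Tally true positives, false positives, and false negatives
        per label, given (predicted, actual) label pairs."""
        tp, fp, fn = Counter(), Counter(), Counter()
        for predicted, actual in pairs:
            if predicted == actual:
                tp[actual] += 1
            else:
                fp[predicted] += 1  # got a label it shouldn't have
                fn[actual] += 1     # missed the label it should have
        return tp, fp, fn

    # A single misclassified review: classified pos, should have been neg.
    tp, fp, fn = per_label_counts([('pos', 'neg')])
    print(fp['pos'], fn['neg'])  # 1 1 -- one error counted against both labels

Note how the single error increments both fp['pos'] and fn['neg']: that is exactly the double counting described above.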
How does this apply to precision and recall? Precision is the lack of false positives, and recall is the lack of false negatives: precision measures how many of the instances given a label actually deserved it, while recall measures how many of the instances deserving a label actually received it.
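As a sketch of how these numbers can be computed with NLTK (assuming NLTK 3's nltk.metrics module, whose precision() and recall() functions operate on sets of instance identifiers; the toy actual/predicted lists here are made up):

    import collections
    from nltk.metrics import precision, recall

    # Toy data: the true labels and what a hypothetical classifier predicted.
    actual    = ['pos', 'pos', 'neg', 'neg']
    predicted = ['pos', 'neg', 'neg', 'pos']

    refsets = collections.defaultdict(set)
    testsets = collections.defaultdict(set)
    for i, (ref, pred) in enumerate(zip(actual, predicted)):
        refsets[ref].add(i)    # instances that truly have this label
        testsets[pred].add(i)  # instances the classifier gave this label

    # precision = |ref & test| / |test|  (penalizes false positives)
    # recall    = |ref & test| / |ref|   (penalizes false negatives)
    print('pos precision:', precision(refsets['pos'], testsets['pos']))  # 0.5
    print('pos recall:', recall(refsets['pos'], testsets['pos']))        # 0.5

Each function compares the reference set (instances that truly have a label) against the test set (instances the classifier assigned it): precision divides the overlap by the size of the test set, and recall divides it by the size of the reference set.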