The precision-recall curve is used in many machine learning problems. The general idea is to visualize the model's precision and recall at each confidence threshold. With every bounding box, our model outputs a confidence: a number between 0 and 1 characterizing how confident the model is that the prediction is correct.
Because we do not want to keep the less confident predictions, we usually remove those below a certain threshold, 𝑇. For instance, if 𝑇 = 0.4, we will not consider any prediction with a confidence below this number.
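As a minimal sketch of this filtering step, here is how predictions below the threshold could be discarded (the prediction list and labels are hypothetical, just for illustration):

```python
# Hypothetical detector output: (label, confidence) pairs.
predictions = [("car", 0.92), ("person", 0.55), ("dog", 0.31), ("car", 0.78)]

T = 0.4  # confidence threshold

# Keep only predictions whose confidence is at or above the threshold.
kept = [(label, conf) for label, conf in predictions if conf >= T]
print(kept)  # [('car', 0.92), ('person', 0.55), ('car', 0.78)]
```

With T = 0.4, the `"dog"` prediction at 0.31 is dropped while the other three survive.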
Moving the threshold has an impact on precision and on recall:
- If T is close to 1: Precision will be high, but recall will be low. Because we filter out many predictions, we miss a lot of objects, so recall shrinks; because we keep only the most confident predictions, we have few false positives, so precision rises.
- If T is close to 0: Precision will be low, but recall will be high. Because we keep most predictions, we miss very few objects, so recall rises; but we also keep many false positives, so precision drops.
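The trade-off above can be made concrete with a small sketch. The toy data below is invented for illustration: each prediction carries a confidence and a correctness flag (in a real detector that flag would come from matching boxes to ground truth, e.g. via IoU), and we sweep the threshold to see precision and recall move in opposite directions:

```python
# Toy predictions: (confidence, is_true_positive). Invented for illustration.
preds = [(0.95, True), (0.85, True), (0.70, False),
         (0.60, True), (0.45, False), (0.30, True)]
n_ground_truth = 5  # total number of objects actually present in the images

def precision_recall_at(threshold, preds, n_gt):
    # Keep only predictions at or above the confidence threshold.
    kept = [tp for conf, tp in preds if conf >= threshold]
    if not kept:
        return 1.0, 0.0  # common convention: no predictions -> precision 1, recall 0
    tp = sum(kept)  # count of true positives among kept predictions
    precision = tp / len(kept)
    recall = tp / n_gt
    return precision, recall

for T in (0.9, 0.5, 0.1):
    p, r = precision_recall_at(T, preds, n_ground_truth)
    print(f"T={T:.1f}  precision={p:.2f}  recall={r:.2f}")
```

Running this prints precision 1.00 / recall 0.20 at T = 0.9, precision 0.75 / recall 0.60 at T = 0.5, and precision 0.67 / recall 0.80 at T = 0.1: exactly the pattern described, with precision falling and recall rising as the threshold drops. Plotting these (precision, recall) pairs over a fine grid of thresholds yields the precision-recall curve.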