Mean average precision
So far, we have looked at getting an output that comprises a bounding box around each object within an image and the class corresponding to the object within the bounding box. Now comes the next question: how do we quantify the accuracy of the predictions coming from our model? Mean average precision (mAP) comes to the rescue in such a scenario.
Before we try to understand mAP, let’s first understand precision, then average precision, and finally, mAP:
- Precision: Typically, we calculate precision as:

  Precision = True positives / (True positives + False positives)
A true positive refers to a predicted bounding box that has the correct class and an IoU with a ground-truth box that is greater than a certain threshold. A false positive refers to a predicted bounding box that has an incorrect class, or whose overlap with the ground truth is less than the defined threshold. Furthermore, if multiple bounding boxes are identified for the same ground-truth bounding box, only one box can count as a true positive, and the rest are counted as false positives, as the sketch below illustrates.
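To make the matching rule concrete, here is a minimal sketch of computing precision at a fixed IoU threshold. The function names (`iou`, `precision_at_iou`), the `(class, [x1, y1, x2, y2])` box format, and the greedy matching order are assumptions chosen for illustration, not a specific library's API:

```python
def iou(box_a, box_b):
    """Intersection over Union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def precision_at_iou(preds, gts, thresh=0.5):
    """preds/gts: lists of (class_label, [x1, y1, x2, y2]).

    Each ground-truth box can be matched at most once, so extra
    predictions on an already-matched ground truth become false
    positives. In practice, predictions are first sorted by
    confidence; that step is omitted here for brevity.
    """
    matched = set()   # indices of ground truths already claimed
    tp = 0
    for cls, box in preds:
        best_iou, best_gt = 0.0, None
        for i, (gt_cls, gt_box) in enumerate(gts):
            if gt_cls != cls or i in matched:
                continue
            overlap = iou(box, gt_box)
            if overlap > best_iou:
                best_iou, best_gt = overlap, i
        if best_iou >= thresh:
            tp += 1                 # correct class and enough overlap
            matched.add(best_gt)    # this ground truth is now claimed
    # every unmatched prediction is a false positive,
    # so TP / (TP + FP) reduces to TP / len(preds)
    return tp / len(preds) if preds else 0.0
```

For example, two predictions of class `"dog"` that both overlap the same ground-truth dog with IoU above the threshold would yield one true positive and one false positive, giving a precision of 0.5.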