While the precision-recall curve tells us a lot about a model's behaviour, it is often more convenient to summarize it with a single number. Average precision (AP) is the area under the precision-recall curve. Since both precision and recall range from 0 to 1, the curve lies inside the unit square, so AP is always between 0 and 1.
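The computation can be sketched as follows. This is a simplified illustration, not any benchmark's official evaluator: it assumes each detection has already been matched to ground truth (label 1 for a true positive, 0 for a false positive) and integrates precision over recall with no interpolation. The function name and inputs are hypothetical.

```python
import numpy as np

def average_precision(scores, labels):
    """Area under the precision-recall curve for a single class.

    scores: confidence score per detection.
    labels: 1 if the detection is a true positive, 0 if a false positive.
    Assumes every ground-truth object is matched by at most one detection.
    """
    order = np.argsort(scores)[::-1]            # rank detections by confidence
    tp = np.asarray(labels, dtype=float)[order]
    cum_tp = np.cumsum(tp)                      # true positives seen so far
    cum_fp = np.cumsum(1.0 - tp)                # false positives seen so far
    recall = cum_tp / tp.sum()                  # fraction of positives recovered
    precision = cum_tp / (cum_tp + cum_fp)      # fraction of detections correct
    # Sum precision weighted by the recall gained at each ranked detection
    return float(np.sum(np.diff(recall, prepend=0.0) * precision))
```

For example, four detections with labels `[1, 1, 0, 1]` ranked by confidence give an AP of about 0.917; a perfect ranking of only true positives gives exactly 1.0.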
Average precision measures the performance of a model on a single class. To get a global score, we use mean average precision (mAP): the mean of the per-class average precisions. If the dataset has 10 classes, we compute the average precision for each class and average those 10 numbers.
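In code, the aggregation step is a plain unweighted mean. The class names and AP values below are hypothetical, purely to illustrate the computation:

```python
def mean_average_precision(per_class_ap):
    """mAP is the unweighted mean of the per-class AP values."""
    return sum(per_class_ap.values()) / len(per_class_ap)

# Hypothetical per-class AP scores for a 3-class detector
aps = {"car": 0.72, "person": 0.65, "bicycle": 0.58}
mean_average_precision(aps)  # approximately 0.65
```

Note that every class contributes equally, regardless of how many objects of that class appear in the dataset.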
Mean average precision is used in at least two object detection challenges: PASCAL Visual Object Classes (usually referred to as Pascal VOC) and Common Objects in Context (usually referred to as COCO). COCO is larger and contains more classes (80, versus 20 in Pascal VOC), so the scores obtained on it are usually lower than on Pascal VOC.