Features are extracted, matched, and tracked by the FeatureMatching class, in particular by its public match method. However, before we can begin analyzing the incoming video stream, we have some homework to do. It might not be clear right away what some of these steps mean (especially the ones involving SURF and FLANN), but we will discuss them in detail in the following sections.
For now, we only have to worry about initialization:
class FeatureMatching:
    def __init__(self, train_image: str = "train.png") -> None:
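Before walking through the initialization steps, here is a rough usage sketch of the class once it is complete. It assumes that match accepts a single video frame; the exact arguments and return value of match are assumptions here and are covered later in the chapter:

    import cv2 as cv

    # Hypothetical usage sketch: feed webcam frames to the matcher.
    # Assumes the FeatureMatching class from this chapter is available
    # and that match() takes one BGR frame per call.
    matching = FeatureMatching(train_image="train.png")

    capture = cv.VideoCapture(0)
    while True:
        success, frame = capture.read()
        if not success:
            break
        result = matching.match(frame)  # analyze the incoming frame
    capture.release()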
The following steps cover the initialization process:
- The following line sets up a SURF detector, which we will use to detect and extract features from images (see the Learning feature extraction section for further details). A Hessian threshold between 300 and 500 works well; here we use 400:
self.f_extractor = cv.xfeatures2d_SURF...
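For reference, a minimal sketch of what this line might expand to, assuming import cv2 as cv and an opencv-contrib build that still ships the patented SURF algorithm in its xfeatures2d module (the exact call shown is an assumption based on the OpenCV 4 Python bindings):

    import cv2 as cv

    class FeatureMatching:
        def __init__(self, train_image: str = "train.png") -> None:
            # Set up a SURF keypoint detector/descriptor extractor with a
            # Hessian threshold of 400: only blobs whose Hessian response
            # exceeds this value are kept as keypoints.
            self.f_extractor = cv.xfeatures2d_SURF.create(hessianThreshold=400)

If your OpenCV build does not include the non-free modules, SURF will not be available; ORB (created with cv.ORB_create) is a freely available alternative that exposes a similar detect-and-compute interface.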