Application infrastructure
So far, we've learned how to detect a pattern and estimate its 3D position relative to the camera. Now it's time to put these algorithms into a real application. The goal of this section is to show how to use OpenCV to capture video from a web camera and create a visualization context for 3D rendering.
As our goal is to demonstrate the key features of marker-less AR, we will create a simple command-line application capable of detecting arbitrary pattern images in either a video sequence or still images.
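Capturing frames from a web camera takes only a few lines of OpenCV code. The following is a minimal sketch of such a capture loop using cv::VideoCapture; the device index 0, the window name, and the loop structure are illustrative assumptions rather than the application's actual source:

#include <opencv2/opencv.hpp>

int main()
{
    // Open the default web camera (device index 0 is an assumption).
    cv::VideoCapture capture(0);
    if (!capture.isOpened())
        return -1;

    cv::Mat frame;
    while (capture.read(frame))       // Grab the next frame from the camera.
    {
        // In the real application, the frame would be handed to the
        // AR pipeline for pattern detection and pose estimation here.
        cv::imshow("Camera", frame);

        if (cv::waitKey(30) >= 0)     // Exit on any key press.
            break;
    }
    return 0;
}

In the actual application, each captured frame is passed to the processing pipeline described next instead of simply being displayed.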
To hold all the image-processing logic and intermediate data, we introduce the ARPipeline class. It is the root object that holds all the subcomponents necessary for augmented reality and performs all processing routines on the input frames. The following is a UML diagram of ARPipeline and its subcomponents:
It consists of the following components (a sketch of a possible class declaration follows the list):
The camera-calibration object
An instance of the pattern-detector object
A trained pattern object
Intermediate data of...
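Based on this component list, a minimal sketch of what the ARPipeline declaration might look like is shown below. The type and member names (CameraCalibration, Pattern, PatternTrackingInfo, PatternDetector) are assumptions introduced here for illustration; only placeholder definitions are given, since the real types are developed as part of the pattern-detection code:

#include <opencv2/opencv.hpp>

// Placeholder stand-ins for the subcomponents listed above; their real
// definitions are not shown here.
struct CameraCalibration   { /* intrinsic matrix, distortion coefficients */ };
struct Pattern             { /* keypoints, descriptors, reference image size */ };
struct PatternTrackingInfo { /* homography and estimated pose of the pattern */ };
struct PatternDetector     { /* feature detector, extractor, and matcher */ };

// Root object that owns the AR subcomponents and processes input frames.
class ARPipeline
{
public:
    ARPipeline(const cv::Mat& patternImage,
               const CameraCalibration& calibration);

    // Runs pattern detection on one camera frame and reports whether
    // the trained pattern was found.
    bool processFrame(const cv::Mat& inputFrame);

private:
    CameraCalibration   m_calibration;     // The camera-calibration object
    Pattern             m_pattern;         // A trained pattern object
    PatternTrackingInfo m_patternInfo;     // Intermediate data of pattern detection
    PatternDetector     m_patternDetector; // An instance of the pattern detector
};

The pipeline object is constructed once with the pattern image and the camera calibration, and processFrame() is then called for every frame coming from the video capture loop shown earlier.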