Reconstructing a 3D scene from calibrated cameras
We saw in the previous recipe that it is possible to recover the position of a camera observing a 3D scene, provided that camera is calibrated. The approach described there took advantage of the fact that the 3D coordinates of some points visible in the scene may be known. We will now learn that if a scene is observed from more than one viewpoint, the 3D pose and structure can be reconstructed even when no prior information about the scene is available. This time, we will use correspondences between image points in the different views to infer 3D information. We will introduce a new mathematical entity, the essential matrix, which encapsulates the relation between two views taken by a calibrated camera, and we will discuss the principle of triangulation, which reconstructs 3D points from 2D images.
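To make the triangulation principle concrete before we turn to the recipe itself, here is a minimal pure-Python sketch (not the code used in this recipe): given the 3x4 projection matrix of each view, a 3D point is recovered from its two image projections by solving the linear (DLT) system in the least-squares sense. The projection matrices `P1`, `P2` and the image points are synthetic values assumed for illustration.

```python
def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation: each view contributes two equations,
    u * (P[2] . X) = P[0] . X and v * (P[2] . X) = P[1] . X, for the
    homogeneous point X = (X, Y, Z, 1). The resulting 4x3 system is
    solved via the normal equations and Cramer's rule."""
    A, b = [], []
    for P, (u, v) in ((P1, x1), (P2, x2)):
        for coeff, r in ((u, 0), (v, 1)):
            A.append([coeff * P[2][j] - P[r][j] for j in range(3)])
            b.append(P[r][3] - coeff * P[2][3])
    # Normal equations: (A^T A) X = A^T b, a 3x3 linear system
    M = [[sum(a[i] * a[j] for a in A) for j in range(3)] for i in range(3)]
    y = [sum(a[i] * bi for a, bi in zip(A, b)) for i in range(3)]
    det = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                   - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                   + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(M)
    X = []
    for c in range(3):                   # Cramer's rule, one coordinate at a time
        Mc = [row[:] for row in M]
        for i in range(3):
            Mc[i][c] = y[i]
        X.append(det(Mc) / d)
    return X

# Synthetic geometry (assumed for illustration): camera 1 at the origin,
# camera 2 translated by 1 unit along the x axis, both with unit focal length.
P1 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
P2 = [[1, 0, 0, -1], [0, 1, 0, 0], [0, 0, 1, 0]]
# The 3D point (0, 0, 5) projects to (0, 0) in view 1 and (-0.2, 0) in view 2;
# triangulation recovers approximately (0, 0, 5) from those two observations.
X = triangulate(P1, (0.0, 0.0), P2, (-0.2, 0.0))
print([round(v, 3) for v in X])
```

In the real recipe this linear solve is of course hidden behind a library call; the sketch only shows why two views of the same point suffice to pin down its 3D position.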
How to do it...
Let's again use the camera we calibrated in the first recipe of this chapter and take two pictures of some scene. We can match feature points between these...
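Before matching, it helps to see the constraint that any correct match must satisfy. For a calibrated camera pair related by rotation R and translation t, the essential matrix E = [t]x R links a point x1 in the first view to its match x2 in the second through x2^T E x1 = 0 (in normalized image coordinates). The following pure-Python sketch checks this on the same kind of synthetic two-view geometry as above; the rotation, translation, and image points are assumed values for illustration, not data from the recipe's pictures.

```python
def skew(t):
    """Cross-product (skew-symmetric) matrix [t]x, so that skew(t) @ v = t x v."""
    return [[0.0, -t[2], t[1]],
            [t[2], 0.0, -t[0]],
            [-t[1], t[0], 0.0]]

def epipolar_residual(E, x1, x2):
    """Evaluate x2^T E x1 for homogeneous normalized points (u, v, 1).
    The residual is zero exactly when the two points can be the images
    of the same 3D point under the geometry encoded in E."""
    Ex1 = [sum(E[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return sum(x2[i] * Ex1[i] for i in range(3))

# Assumed geometry: identity rotation, second camera 1 unit along x,
# so t = (-1, 0, 0) and E = [t]x R reduces to [t]x.
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [-1.0, 0.0, 0.0]
E = [[sum(skew(t)[i][k] * R[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]

# The 3D point (0, 0, 5) appears at (0, 0) in view 1 and (-0.2, 0) in view 2:
good = epipolar_residual(E, [0.0, 0.0, 1.0], [-0.2, 0.0, 1.0])  # true match
bad = epipolar_residual(E, [0.0, 0.0, 1.0], [0.3, 0.1, 1.0])    # false match
print(good, bad)  # the true match yields 0; the false one does not
```

This is the constraint that the matched feature points of the two pictures must obey, and it is what allows wrong matches to be filtered out before triangulation.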