Estimating the camera motion from a pair of images
Before we set out to actually find the motion between two cameras, let us examine the inputs and the tools we have at hand for this operation. First, we have two images of the same scene taken from (hopefully not extremely) different positions in space. This is a powerful asset, and we will make sure to use it. As for tools, we should look at the mathematical objects that impose constraints on our images, cameras, and the scene.
Two very useful mathematical objects are the fundamental matrix (denoted by F) and the essential matrix (denoted by E). They are closely related, except that the essential matrix assumes the use of calibrated cameras; this is the case for us, so we will use it. OpenCV only allows us to find the fundamental matrix, via the findFundamentalMat function; however, it is extremely simple to obtain the essential matrix from it using the calibration matrix K, as follows:
Mat_<double> E = K.t() * F * K; // essential matrix from the fundamental matrix: E = K^T * F * K
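To put this line in context, here is a minimal sketch of one plausible way to go from a set of point matches to the essential matrix. It is an illustration rather than the complete pipeline: it assumes pts1 and pts2 are already-matched image points from the two views, that K is the calibration matrix we obtained from camera calibration, and it uses findFundamentalMat with RANSAC so that outlier matches are rejected.

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Estimate E from matched points pts1/pts2 and the calibration matrix K.
// The function name and the RANSAC thresholds here are illustrative choices.
Mat_<double> essentialFromMatches(const std::vector<Point2f>& pts1,
                                  const std::vector<Point2f>& pts2,
                                  const Mat_<double>& K)
{
    // Robustly fit the fundamental matrix; status marks the RANSAC inliers.
    std::vector<uchar> status;
    Mat F = findFundamentalMat(pts1, pts2, FM_RANSAC, 3.0, 0.99, status);

    // Convert F to the essential matrix using the calibration matrix K.
    Mat_<double> E = K.t() * F * K;
    return E;
}

The inlier mask returned in status can also be kept around, since it tells us which correspondences actually agree with the estimated epipolar geometry.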