Chapter 3. Finding Objects via Feature Matching and Perspective Transforms
The goal of this chapter is to develop an app that can detect and track an object of interest in the video stream of a webcam, even if the object is viewed from different angles or distances, or under partial occlusion.
In this chapter, we will cover the following topics:
- Feature extraction
- Feature matching
- Feature tracking
In the previous chapter, you learned how to detect and track a simple object (the silhouette of a hand) in a very controlled environment. To be more specific, we instructed the user of our app to place the hand in the central region of the screen and made assumptions about the size and shape of the object (the hand). But what if we wanted to detect and track objects of arbitrary sizes, possibly viewed from a number of different angles or under partial occlusion?
For this, we will make use of feature descriptors, which are a way of capturing the important properties of our object of interest...
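To make the idea of feature descriptors concrete, here is a minimal sketch of the extract-and-match workflow using OpenCV. It assumes ORB as the detector/descriptor (the chapter itself may use a different one, such as SIFT or SURF) and uses placeholder image paths (`object.jpg`, `scene.jpg`) that you would replace with your own files:

```python
import cv2

# Load a template image of the object of interest and a scene image.
# The file names are placeholders -- substitute your own images.
img_object = cv2.imread("object.jpg", cv2.IMREAD_GRAYSCALE)
img_scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)

# ORB is a freely available detector/descriptor; the overall workflow
# (detect keypoints, compute descriptors, match them) is the same for
# other descriptors.
orb = cv2.ORB_create(nfeatures=500)

# detectAndCompute returns keypoints (locations in the image) and
# descriptors (one vector per keypoint capturing local appearance).
kp_obj, desc_obj = orb.detectAndCompute(img_object, None)
kp_scene, desc_scene = orb.detectAndCompute(img_scene, None)

# Match descriptors between the two images with a brute-force matcher.
# Hamming distance is appropriate for ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(desc_obj, desc_scene)

# Keep the best matches (smallest descriptor distance) and draw them.
matches = sorted(matches, key=lambda m: m.distance)[:30]
out = cv2.drawMatches(img_object, kp_obj, img_scene, kp_scene, matches, None)
cv2.imwrite("matches.jpg", out)
```

Because the descriptors characterize local image patches rather than the object as a whole, matches like these remain useful when the object changes scale, rotates, or is partially hidden, which is exactly the robustness we are after in this chapter.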