To build the app, we need to combine the two main features discussed previously: a saliency map and object tracking. The final app will convert each RGB frame of the video sequence into a saliency map, extract all the interesting proto-objects, and feed them to a mean-shift tracking algorithm (a sketch of this loop follows the component list below). To do this, we need the following components:
- main: This is the main function routine (in chapter6.py) that starts the application.
- saliency.py: This is a module to generate a saliency map and proto-object map from an RGB color image. It includes the following functions:
- get_saliency_map: This is a function to convert an RGB color image to a saliency map.
- get_proto_objects_map: This is a function to convert a saliency map into a binary mask containing all the proto-objects.
- plot_power_density: This is a function to display the two-dimensional power density of an RGB color image.
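
The following is a minimal sketch of how these components could fit together in the main loop. It uses the get_saliency_map and get_proto_objects_map functions described above; the video file name, the initial search window, and the single-window mean-shift setup are assumptions for illustration, not the final implementation:

```python
import cv2

from saliency import get_saliency_map, get_proto_objects_map

def main(video_file='input.avi'):
    """Sketch of the main loop: saliency -> proto-objects -> mean-shift."""
    cap = cv2.VideoCapture(video_file)

    # Initial search window (x, y, w, h) for mean-shift: hypothetical values
    track_window = (0, 0, 100, 100)
    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    while True:
        success, frame = cap.read()
        if not success:
            break

        # 1. Convert the RGB frame into a saliency map
        saliency = get_saliency_map(frame)

        # 2. Extract the proto-objects as a binary mask
        proto_objects = get_proto_objects_map(saliency)

        # 3. Use the proto-object mask as the probability image
        #    for mean-shift tracking
        _, track_window = cv2.meanShift(proto_objects, track_window, term_crit)

        # Draw the tracked window on the frame for visualization
        x, y, w, h = track_window
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow('tracking', frame)
        if cv2.waitKey(10) & 0xFF == 27:  # press Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()
```

In practice, the app will refine this loop (for example, by tracking several proto-objects at once), but the three steps shown here are the core of the pipeline.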