Detecting simple actions
Let's now see how we can enhance our application by leveraging the Kinect sensor's Natural User Interface (NUI) capabilities.
We will implement a manager that uses the skeleton data to interpret a body motion or a posture and translate it into an action such as "click". Similarly, we could define other actions such as "zoom in". Unfortunately, the Kinect for Windows SDK does not provide APIs for recognizing gestures, so we need to develop our own custom gesture recognition engine.
Gesture detection can be relatively simple or highly complex, depending on the gesture itself and on the environment (image noise, scenes with multiple users, and so on).
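As an example of the simple end of that spectrum, here is a minimal sketch of a heuristic posture detector, assuming the Kinect for Windows SDK 1.x skeleton API (the Skeleton, Joint, and JointType types from Microsoft.Kinect). It raises a "click" when the right hand is pushed forward past the right shoulder; the class name ClickGestureDetector and the 0.4 m threshold are illustrative assumptions, not the manager we build in this chapter.

```csharp
using System;
using Microsoft.Kinect;

// A minimal heuristic "click" detector. The class name and the 0.4 m push
// threshold are illustrative choices; only the Microsoft.Kinect types
// (Skeleton, Joint, JointType) come from the SDK itself.
public class ClickGestureDetector
{
    // How far (in meters, along the Z axis) the hand must be pushed in
    // front of the shoulder before the posture is treated as a click.
    private const float PushThreshold = 0.4f;

    private bool isPushed;

    public event EventHandler ClickDetected;

    // Call once per skeleton, for example from a SkeletonFrameReady
    // event handler after copying the frame's skeleton data.
    public void Update(Skeleton skeleton)
    {
        if (skeleton == null ||
            skeleton.TrackingState != SkeletonTrackingState.Tracked)
        {
            return;
        }

        Joint hand = skeleton.Joints[JointType.HandRight];
        Joint shoulder = skeleton.Joints[JointType.ShoulderRight];

        if (hand.TrackingState != JointTrackingState.Tracked ||
            shoulder.TrackingState != JointTrackingState.Tracked)
        {
            return;
        }

        // The Kinect Z axis points away from the sensor, so a hand pushed
        // toward the sensor has a smaller Z value than the shoulder.
        bool pushed = shoulder.Position.Z - hand.Position.Z > PushThreshold;

        // Fire only on the transition into the pushed posture, so holding
        // the hand forward does not generate a stream of clicks.
        if (pushed && !isPushed)
        {
            EventHandler handler = ClickDetected;
            if (handler != null)
            {
                handler(this, EventArgs.Empty);
            }
        }

        isPushed = pushed;
    }
}
```

A threshold rule like this works for coarse, static postures, but it breaks down for dynamic gestures and noisy scenes, which is exactly why the more formal approaches listed next exist.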
In the literature there are many approaches to implementing gesture recognition; the most common ones are as follows:
A neural network approach, which utilizes weighted networks (Gestures and neural networks in human-computer interaction, Beale R and Edwards A D N)
A DTW approach, which utilizes the Dynamic Time Warping algorithm, originally developed for speech recognition