Mission Accomplished
In this project, we taught Processing how to see by connecting the Kinect and installing the OpenNI framework.
We used the depth image provided by the SimpleOpenNI library and located the user in it with OpenNI's tracking capabilities. For every tracked user, the library provides a bitmap that defines which pixels belong to that user and which don't. In the last two tasks, we reused the depth image with the colored user pixels from the second task as a HUD, so the player could see what was currently being tracked.
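The core of that HUD is only a few lines. Here is a minimal sketch of the idea, assuming a SimpleOpenNI 1.96-style API; in older versions of the library, enableUser() takes a profile constant and the user map is fetched with getUsersPixels() instead of userMap().

```
import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser();                 // turn on user tracking
}

void draw() {
  kinect.update();
  image(kinect.depthImage(), 0, 0);    // grayscale depth image as background

  // One entry per depth pixel: 0 for background, otherwise the id of the
  // tracked user that the pixel belongs to.
  int[] userMap = kinect.userMap();
  loadPixels();
  for (int i = 0; i < userMap.length && i < pixels.length; i++) {
    if (userMap[i] != 0) {
      pixels[i] = color(0, 255, 0);    // highlight the pixels of any tracked user
    }
  }
  updatePixels();
}
```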
Starting with our third task, we used the skeleton tracker to access the 3D coordinates of the player's joints and used that information to draw a stick figure, limb by limb, that followed the player's movements.
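The sketch below shows one limb of that stick figure, again assuming the SimpleOpenNI 1.96-style API; older versions use a different onNewUser() signature and require pose detection before skeleton tracking can start.

```
import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser();                 // user and skeleton tracking
}

void draw() {
  kinect.update();
  image(kinect.depthImage(), 0, 0);

  int[] users = kinect.getUsers();
  for (int i = 0; i < users.length; i++) {
    if (kinect.isTrackingSkeleton(users[i])) {
      drawLimb(users[i], SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
      // ...the full stick figure repeats this for every limb...
    }
  }
}

// Ask the tracker for the 3D position of two joints, project them into
// screen coordinates, and connect them with a line.
void drawLimb(int userId, int jointA, int jointB) {
  PVector a = new PVector();
  PVector b = new PVector();
  kinect.getJointPositionSkeleton(userId, jointA, a);
  kinect.getJointPositionSkeleton(userId, jointB, b);

  PVector a2d = new PVector();
  PVector b2d = new PVector();
  kinect.convertRealWorldToProjective(a, a2d);
  kinect.convertRealWorldToProjective(b, b2d);

  stroke(255, 0, 0);
  line(a2d.x, a2d.y, b2d.x, b2d.y);
}

// Called by SimpleOpenNI when a new user is detected.
void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);
}
```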
In our final task, we refactored the drawing code for our stick figure into a class and used that class to draw multiple dancers, so our stick figure had some company while dancing.
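The shape of that refactoring is sketched below; the class and method names are illustrative rather than the chapter's exact code, and the class assumes the same sketch-level SimpleOpenNI import and kinect setup as the sketches above.

```
class StickFigure {
  SimpleOpenNI context;
  int userId;
  float xOffset;   // shift extra copies sideways so the dancers don't overlap

  StickFigure(SimpleOpenNI context, int userId, float xOffset) {
    this.context = context;
    this.userId = userId;
    this.xOffset = xOffset;
  }

  void draw() {
    if (!context.isTrackingSkeleton(userId)) return;
    drawLimb(SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
    // ...and so on for the remaining limbs...
  }

  void drawLimb(int jointA, int jointB) {
    PVector a = new PVector();
    PVector b = new PVector();
    context.getJointPositionSkeleton(userId, jointA, a);
    context.getJointPositionSkeleton(userId, jointB, b);

    PVector a2d = new PVector();
    PVector b2d = new PVector();
    context.convertRealWorldToProjective(a, a2d);
    context.convertRealWorldToProjective(b, b2d);
    line(a2d.x + xOffset, a2d.y, b2d.x + xOffset, b2d.y);
  }
}
```

With the drawing code wrapped up like this, the main sketch can create one StickFigure per tracked user and a few extra copies with different offsets to put more dancers on screen.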