Here is one way to grab skeletal data from the Kinect, based on code from the Kinect Blocks project.
Kinect Blocks is a project made by two students in the Emerging Themes class. Basically, it's a 3D content editor for a uniform grid of textured blocks, although it's more of a prototype than something ready for use in content creation. Technologies used: C/C++, the OpenNI framework, Windows, Visual Studio, OpenGL, GLEW, SOIL, and TinyXML.
Remote collaboration between two sites. 3D video is generated by one Kinect camera at each site. The software is a combination of Vrui, the Vrui collaboration infrastructure, a Kinect 3D video plug-in for the latter, and a Vrui-based viewer for Doom3 maps (the map shown here is mars_city1).
First test of merging the 3D video streams from two Kinect cameras into a single 3D reconstruction. The cameras were placed at an angle of about 90 degrees, aimed at the same spot in 3D space. The two cameras were calibrated internally using the method described in the previous video, and were calibrated externally (with respect to each other) using a flat checkerboard calibration pattern and manual measurements.
What the HMD is doing is rotating the camera angle based on the tracking information; OpenNI is linked only to the head joint among the skeleton inputs. This made it possible to build a virtual reality environment that reflects the movement of the body itself without any special equipment. This is from the second demonstration, held in Nagoya on May 18; the video clip adds a little to what was presented at the seminar. You can see the user moving their limbs and looking down slightly. Seeing the scene reflect the motion of your own body feels very strange and interesting.
We are developing the Flexible Action and Articulated Skeleton Toolkit (FAAST), which is middleware to facilitate integration of full-body control with games and VR applications. FAAST currently supports the PrimeSensor and the Microsoft Kinect using the OpenNI framework. In this video, we show how FAAST can be used to control off-the-shelf video games such as World of Warcraft. Since these games would not normally support motion sensing devices, FAAST emulates keyboard input triggered by body posture and specific gestures.
I've written a program that uses the Kinect 3D camera to take 3D snapshots which record the nearest object to the camera in each part of the image. By taking multiple 3D snapshots at different times and then merging them to show the closest object, I can create a 3D sculpture that you can walk through. This runs in real time, creating a fully interactive experience.
Ever wanted to throw fireballs from your hands? Now you can. This setup uses a combination of FAAST for Kinect input and GlovePIE for the Nunchuk and the scripting of special moves. Probably the most interesting aspect is that you can play over the internet or LAN with your friends... Give it a shot, and why not modify my scripts for some other characters?