I am now building two 125 KB DLLs (x86 and x64 versions) that send JSON-formatted bone position / rotation data back to a Python callback every frame. When the 'Stop' button is clicked, I successfully take the frame info that was set aside and generate an action for each body the device detected (up to 6). The results are currently horrific, but I am starting to get a handle on changing that.
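To make the callback side concrete, here is a minimal sketch of receiving and stashing the per-frame JSON. The payload field names ("bodies", "id", "bones", "pos", "rot") are my assumptions for illustration, not the DLL's actual schema:

```python
import json

# Frames set aside during capture, consumed when 'Stop' is clicked.
captured_frames = []

def on_frame(payload):
    """Per-frame callback handed a JSON string by the DLL.
    (Field names below are assumed, not the real schema.)"""
    frame = json.loads(payload)
    captured_frames.append(frame)

# One frame as the callback might receive it: a list of tracked
# bodies, each with per-bone position (x, y, z) and rotation
# (quaternion w, x, y, z).
sample = json.dumps({
    "bodies": [
        {
            "id": 0,
            "bones": {
                "SpineBase": {
                    "pos": [0.0, 0.9, 2.1],
                    "rot": [1.0, 0.0, 0.0, 0.0],
                }
            },
        }
    ]
})
on_frame(sample)
```

Keeping the callback this thin matters: it runs every frame, so it should only parse and append, deferring all action generation until 'Stop'.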
I have seen that I am not taking the best advantage of the multiple-body tracking. To do that, I need to separate the completion of the session from the assignment of "parts" to different meshes. Examples of bringing up lists of foreign data with something called a UIList are basically non-existent. I have put up a question on Blender Stack Exchange, but one way or another I'll resolve this.
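The first half of that separation can be sketched without any UI code: regroup the captured frames into one track per body, so mesh assignment can happen later as its own step. This assumes the same hypothetical JSON shape as above ("bodies" list with an "id" per body):

```python
from collections import defaultdict

def split_by_body(frames):
    """Regroup captured frames into per-body track lists, so that
    assigning each track to a mesh can be a separate step after the
    session ends. Field names ("bodies", "id") are assumptions about
    the JSON shape, not the DLL's actual schema."""
    tracks = defaultdict(list)
    for frame in frames:
        for body in frame.get("bodies", []):
            # The Kinect V2 tracks up to 6 bodies; each keeps a
            # stable id while it stays in view.
            tracks[body["id"]].append(body)
    return dict(tracks)

# Two frames: body 3 drops out after the first frame.
frames = [
    {"bodies": [{"id": 0, "bones": {}}, {"id": 3, "bones": {}}]},
    {"bodies": [{"id": 0, "bones": {}}]},
]
tracks = split_by_body(frames)
```

With the data regrouped this way, a UIList (or any other picker) only has to map body ids to meshes; it never touches the raw frame stream.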
Finally, to set some expectations you are going to need:
- Windows 7 / 8 / 10
- USB 3.0
- A Kinect V2 (a V1 will not work)
- A Kinect adapter for PC / Xbox One
- The Kinect runtime redistributable (will put in the GitHub repo)
The adapter splits the sensor's combo power / data cable into a USB 3.0 & power-supply / wall plug.
I do not really have a home for the Visual Studio C++ project source code yet. Mixing Python and C++ in the same repo seems forced. The DLL's output could also be consumed by anything that parses JSON, such as JavaScript.