Kinect V2 mocap coming to MH Blender add-on
Posted: Wed Jan 03, 2018 4:09 pm
Sort of a pre-announcement that I am making considerable progress toward motion capture of multiple bodies using a Kinect V2, driven directly from Blender. I have also made significant improvements in IK rigs. I have not made the Default skeleton IK capable (except for the finger rig), but the Kinect V2 skeleton is, and you can keep the facial bones. The Game rig is IK capable as before, and now IK rigs can also be reversed with a click.
I am now building two 125 KB DLLs (x86 & x64 versions) that send JSON-formatted bone positions / rotations back to a Python callback every frame. When the 'Stop' button is clicked, I am successfully taking the frame info that was set aside and generating an action for each body the device detected (up to 6). The results are currently horrific, but I am starting to get a handle on changing that.
I have realized I am not taking the best advantage of this multiple-body tracking. To do that, I need to separate the completion of the session from the assignment of "parts" to different meshes. Examples of bringing up lists of foreign data, using something called a UIList, are basically non-existent. I have put up a question on Blender Stack Exchange, but one way or another I'll resolve this.
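For the curious, the DLL-to-Python hand-off can be sketched with ctypes. This is only a sketch under my assumptions — the DLL name, the exported function, and the exact JSON shape here are placeholders, not the actual project code — but it shows the pattern: register a C callback that receives one JSON string per frame and set the frames aside for later:

```python
import ctypes
import json

# Assumed callback signature: the DLL passes one UTF-8 JSON string per frame.
FRAME_CALLBACK = ctypes.CFUNCTYPE(None, ctypes.c_char_p)

frames = []  # frame data set aside while the capture session runs

def on_frame(raw):
    # Hypothetical payload shape: a list of bodies, each mapping
    # bone name -> position / rotation.
    frames.append(json.loads(raw.decode('utf-8')))

# Keep a module-level reference so the callback is not garbage collected
# while the DLL still holds a pointer to it.
c_on_frame = FRAME_CALLBACK(on_frame)

# In the real add-on this would be handed to the DLL, e.g. (names hypothetical):
#   dll = ctypes.CDLL('KinectBridge_x64.dll')
#   dll.StartCapture(c_on_frame)

# Demonstration with a hand-made single-body frame payload:
sample = json.dumps([{"SpineBase": {"pos": [0, 0, 1.8], "rot": [1, 0, 0, 0]}}])
c_on_frame(sample.encode('utf-8'))
```

Keeping a live reference to the CFUNCTYPE object matters: if it is collected while the DLL is still calling it, Blender crashes rather than raising a Python error.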
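Since working UIList examples are so scarce, here is the minimal pattern I am converging on for listing foreign data — one row per detected body, each assignable to a mesh. All class and property names below are my own placeholders, not the add-on's actual code, and this follows the 2.7x-era registration style:

```python
import bpy

# Hypothetical item type: one entry per body the sensor detected.
class CapturedBodyItem(bpy.types.PropertyGroup):
    name = bpy.props.StringProperty(name="Body")
    target_mesh = bpy.props.StringProperty(name="Target Mesh")

class MOCAP_UL_bodies(bpy.types.UIList):
    # draw_item() is called once per entry in the bound collection.
    def draw_item(self, context, layout, data, item, icon,
                  active_data, active_propname):
        row = layout.row()
        row.label(text=item.name)
        # Let the user pick which mesh this body's capture drives.
        row.prop_search(item, "target_mesh", bpy.data, "meshes", text="")

def register():
    bpy.utils.register_class(CapturedBodyItem)
    bpy.utils.register_class(MOCAP_UL_bodies)
    bpy.types.Scene.captured_bodies = bpy.props.CollectionProperty(
        type=CapturedBodyItem)
    bpy.types.Scene.captured_bodies_index = bpy.props.IntProperty()

# A Panel's draw() would then show the list with:
#   layout.template_list("MOCAP_UL_bodies", "", context.scene,
#                        "captured_bodies", context.scene,
#                        "captured_bodies_index")
```

The key point is that the UIList itself only draws rows; the data lives in a CollectionProperty filled after the capture session completes, which is exactly the separation between session completion and mesh assignment described above.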
Finally, to set some expectations, you are going to need:
- Windows 7 / 8 / 10
- USB 3.0
- A Kinect V2 (a V1 will not work)
- A Kinect adapter for PC / XBox One
- The Kinect runtime redistributable (I will put this in the GitHub repo)
The adapter splits the sensor's combo power / data cable into a USB 3.0 & power-supply / wall plug.
I do not really have a home for the Visual Studio C++ project source code yet. Mixing Python & C++ in the same repo seems forced. This code could also be used from any language that parses JSON, such as JavaScript.