This is my first post here! First of all, thank you for the excellent tools you have provided. I am a PhD student from Greece, and since I have been using MakeHuman as my primary human model generator, I would like to contribute something back to the community in the form of my 3D pose estimation software, which can derive BVH poses from RGB images and videos.
The code is written in C++ and uses the TensorFlow C-API, as well as OpenCV for the visualization.
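To give a rough idea of what working with the TensorFlow C-API from C++ involves, here is a minimal, self-contained sketch of loading a frozen graph and opening a session. It is only an illustration (the file name "network.pb" is a placeholder), not the actual MocapNET loading code.

#include <tensorflow/c/c_api.h>
#include <cstdio>
#include <cstdlib>

// Read a frozen graph (.pb) into a TF_Buffer; error handling is trimmed for brevity.
static TF_Buffer* readGraphFile(const char* path)
{
    FILE* f = fopen(path, "rb");
    if (!f) { return nullptr; }
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    void* data = malloc(size);
    fread(data, 1, size, f);
    fclose(f);
    TF_Buffer* buffer = TF_NewBuffer();
    buffer->data = data;
    buffer->length = size;
    buffer->data_deallocator = [](void* d, size_t) { free(d); };
    return buffer;
}

int main()
{
    TF_Status* status = TF_NewStatus();
    TF_Graph*  graph  = TF_NewGraph();
    TF_Buffer* graphDef = readGraphFile("network.pb"); // placeholder file name
    if (!graphDef) { fprintf(stderr, "Could not read network.pb\n"); return 1; }

    // Import the serialized GraphDef into the graph.
    TF_ImportGraphDefOptions* importOptions = TF_NewImportGraphDefOptions();
    TF_GraphImportGraphDef(graph, graphDef, importOptions, status);
    TF_DeleteImportGraphDefOptions(importOptions);
    if (TF_GetCode(status) != TF_OK)
    {
        fprintf(stderr, "Could not import graph: %s\n", TF_Message(status));
        return 1;
    }

    // Open a session; TF_SessionRun() would then be called once per frame.
    TF_SessionOptions* sessionOptions = TF_NewSessionOptions();
    TF_Session* session = TF_NewSession(graph, sessionOptions, status);

    // Clean up.
    TF_DeleteSession(session, status);
    TF_DeleteSessionOptions(sessionOptions);
    TF_DeleteBuffer(graphDef);
    TF_DeleteGraph(graph);
    TF_DeleteStatus(status);
    return 0;
}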
The idea is to be able to convert a single-person video to a BVH file that can then be used to animate a MakeHuman model. Ideally, you could sit in front of the camera and bring a MakeHuman model to life by performing the poses yourself.
You can render the animation using the MakeHuman/MakeWalk plugins for Blender (as seen in the qualitative results in the following video).

Alternatively, I have also baked a model into the demo application as a visualization mode (the project needs to be built with the ENABLE_OPENGL CMake option, and the visualization appears if you start the demo application with --opengl).

The repository is the following:

https://github.com/FORTH-ModelBasedTracker/MocapNET
Looking forward to your comments, and I hope someone will find it useful!
Feel free to ask any questions you may have. I am using the GitHub version of MakeHuman 1.2.0 and the CMU Default Hybrid rig, but scaled to millimeters instead of centimeters.
In order to animate the model with OpenGL, I export it in .dae format, which I then load using Assimp and convert to my own vertex container file format called .tri.
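In case it helps anyone doing something similar, the Assimp part of that conversion looks roughly like the sketch below. The output layout is deliberately simplified (just raw triangle vertex positions), so treat it as an illustration rather than the exact .tri format.

#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <cstdio>

int main(int argc, char** argv)
{
    if (argc < 3) { fprintf(stderr, "usage: %s input.dae output.tri\n", argv[0]); return 1; }

    // Load the COLLADA export and make sure every face is a triangle.
    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFile(argv[1],
                                             aiProcess_Triangulate |
                                             aiProcess_JoinIdenticalVertices);
    if (!scene) { fprintf(stderr, "Assimp error: %s\n", importer.GetErrorString()); return 1; }

    FILE* out = fopen(argv[2], "wb");
    if (!out) { fprintf(stderr, "Could not open %s\n", argv[2]); return 1; }

    // Dump the triangle vertex positions of every mesh as raw floats.
    for (unsigned int m = 0; m < scene->mNumMeshes; ++m)
    {
        const aiMesh* mesh = scene->mMeshes[m];
        for (unsigned int f = 0; f < mesh->mNumFaces; ++f)
        {
            const aiFace& face = mesh->mFaces[f];           // triangulated, so 3 indices
            for (unsigned int i = 0; i < face.mNumIndices; ++i)
            {
                const aiVector3D& v = mesh->mVertices[face.mIndices[i]];
                fwrite(&v.x, sizeof(float), 3, out);        // x, y, z
            }
        }
    }
    fclose(out);
    return 0;
}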
The output quality is not perfect, and hands and facial expressions are currently not tracked; however, I am working full-time to improve it!
Ammar