There seem to be several threads going about makehuman/blender interfacing. One of these is in the scripting forum. As it is, blender is working towards 2.5 and makehuman towards the new 1.0, and both will drastically affect any interfacing, so it may be a bit early to make any moves. The following repeats a bit of what I have written in other posts...
The current alpha of MH has an integrated user interface via some C++ code which generates a Python module called mh. The C++ code calls OpenGL and cannot co-exist with blender, as blender has its own OpenGL interface. What I have started on (see the other thread) is a 'fake' mh Python module that does nothing, and in particular does not call any C++ or OpenGL. This can then operate in 'headless' mode and be called from blender (running under blender's Python interpreter, not MH's).
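To make the idea concrete, here is a minimal sketch of what such a 'fake' module might look like. Every name in it is hypothetical; the real mh module's API will differ. The point is only that nothing touches C++ or OpenGL:

[code]
# stub_mh.py -- a minimal, headless stand-in for the C++-generated mh
# module.  All names here are hypothetical illustrations; since nothing
# calls C++ or OpenGL, it can be imported from blender's own Python
# interpreter.

_pose = {}       # pose/expression channel values by name
_verts = []      # deformed mesh vertices as (x, y, z) tuples

def setPoseValue(name, value):
    # Record a pose/expression channel; a real implementation would
    # re-evaluate the MH deformation here.
    _pose[name] = value

def getVertices():
    # Hand the current mesh back to the caller (blender, in our case).
    return _verts

def redraw():
    # No display in headless mode, so there is nothing to refresh.
    pass
[/code]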
Let me add that what I had in mind was to use Blender's IPOs (or whatever they will be called in 2.5) to drive the MH pose and expression systems. This would imply a blender interface for setting the pose and expression values (lots of floats as far as I can see, so a panel with lots of sliders would work). The values could then be captured by keyframes and played back as part of the animation; blender would interpolate automatically. The vertices, faces etc. could then be pulled back from MH into blender. This approach would, I think, imply not using blender's armature and skinning system.
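A rough sketch of the slider-and-keyframe round trip, written against the 2.5-series bpy API. The channel names, the proxy object, the mh calls and the frame-change hook are all assumptions (the hook in particular comes from later bpy builds); both APIs were still moving targets when this was written:

[code]
# Sketch only: drive hypothetical MH pose/expression channels from
# animated custom properties on a proxy object in blender.
import bpy
import mh  # the headless stand-in module sketched above

# Hypothetical channel names; the real MH pose system defines its own.
POSE_CHANNELS = ["RShoulderBend", "LShoulderBend", "Smile"]

def add_pose_sliders(obj):
    # One animatable float per channel; each shows up as a slider in
    # the UI and can be keyframed, so blender interpolates for free.
    for name in POSE_CHANNELS:
        obj[name] = 0.0

def push_pose_to_mh(scene):
    obj = scene.objects.get("MHProxy")  # hypothetical proxy object
    if obj is None:
        return
    for name in POSE_CHANNELS:
        mh.setPoseValue(name, obj.get(name, 0.0))  # hypothetical mh call
    verts = mh.getVertices()  # hypothetical: pull the deformed mesh back
    # ...copy verts into the blender mesh here...

# Run on every frame change, so playback drives MH directly.
bpy.app.handlers.frame_change_pre.append(push_pose_to_mh)
[/code]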
There is the interesting question of objects held in the character's hand and the like. To handle this, locations (and orientations) that depend on part of the makehuman figure need to be reflected back into blender, so that the 'child' blender object's location and orientation copy the 'parent's.
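Again only a sketch: the mh.getJointTransform() accessor is made up, and I am assuming it hands back a 4x4 matrix for the named figure part:

[code]
# Sketch: copy a transform computed on the MH side onto a blender
# object -- say a sword held in the right hand.  getJointTransform()
# is a hypothetical accessor returning a 4x4 nested list of floats.
import bpy
import mathutils
import mh

def track_mh_part(scene):
    child = scene.objects.get("Sword")        # the held object
    if child is None:
        return
    rows = mh.getJointTransform("hand_r")     # hypothetical mh call
    # The blender 'child' copies the MH 'parent' transform wholesale.
    child.matrix_world = mathutils.Matrix(rows)

bpy.app.handlers.frame_change_post.append(track_mh_part)
[/code]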
I've written a proof of concept; the code is at viewtopic.php?f=9&t=201&p=3316#p3316