Homura wrote: Is there any tutorials on how to do that?
http://www.russian3dscanner.com/tutorials/
http://www.makehuman.org/doc/node/mhble ... arget.html
This is mostly unexplored territory, though, so you would have to do a bit of research yourself.
I'd be very interested in your results, though.

Since you say you have millions of polygons, I assume this is a sculpted mesh, like one created in ZBrush? You might consider modeling (with MakeHuman) on a lower subdivision level, and plugging it back into ZBrush after the shape is modified, to reproject/reapply the high-level details.
You could also consider whether, at the point where you apply MakeHuman, it would be enough to use normal and displacement maps. Since this is for game characters, I assume at some point you will be poly-reducing and baking maps anyway.
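If you do the baking in Blender, the setup is roughly like this. This is only a minimal sketch against the Blender 2.8+ Python API; the object names "HighPoly" and "LowPoly" are placeholders, and it assumes the low-poly mesh already has a UV map and a material whose active node is a blank image texture to bake into:

import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'                    # baking needs Cycles
scene.render.bake.use_selected_to_active = True   # bake high poly onto low poly
scene.render.bake.cage_extrusion = 0.05           # ray distance from the low-poly surface

high = bpy.data.objects["HighPoly"]   # hypothetical names
low = bpy.data.objects["LowPoly"]

# Select both, with the low-poly mesh as the active object
bpy.ops.object.select_all(action='DESELECT')
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low

# Bake the high-poly surface normals into the low-poly's active image texture
bpy.ops.object.bake(type='NORMAL')

You would normally do the same again with type='DISPLACEMENT' (or bake displacement from within ZBrush) if you also want a displacement map.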
Another interesting thing to look at might be ZBrush GoZ, which is an exchange format that appears to have been designed for problems like this, where you move your high-poly sculpt back and forth between lower-poly, game-oriented tools like Maya or MotionBuilder. I believe Blender has a plugin for it too. We have not (yet) made MakeHuman directly compatible with it, but through a tool like Blender you might be able to set up a valid pipeline. What GoZ allows you to do is export a lower-poly mesh with normal/displacement maps, modify it, and re-import it into ZBrush while re-applying the high-resolution details. Much like what I proposed a little above.
Blender has a very similar and cool feature (and I believe ZBrush does too): the Multires modifier, which allows you to subdivide a mesh multiple times and model and sculpt on it at different subdivision levels. The cool thing is that modifications on one level are synchronized to the other levels. This means that you can modify, for example, the body proportions at a low level, while the sculpted pores and details on the higher subdivision levels simply follow the shapes. Something like this would be a very valid approach to using MakeHuman in such scenarios.
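As a rough illustration of that workflow (just a sketch against the Blender 2.8+ Python API, with a hypothetical object name; normally you would set this up from the modifier panel):

import bpy

# Assume the sculpt has been imported and is named "Sculpt" (hypothetical name)
obj = bpy.data.objects["Sculpt"]
bpy.context.view_layer.objects.active = obj

# Add a Multires modifier and subdivide a few times
mod = obj.modifiers.new(name="Multires", type='MULTIRES')
for _ in range(3):
    bpy.ops.object.multires_subdivide(modifier=mod.name)

# Sculpt pores and fine detail at the highest level, then drop the viewport
# level to push broad proportion changes while the detail follows along
mod.levels = 1          # viewport: edit broad shapes here
mod.sculpt_levels = 3   # sculpt mode: fine detail lives here
mod.render_levels = 3   # render at full resolution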
A more rudimentary approach would be to re-shape a lower-poly, retopologized version of the mesh that has normal and displacement maps, and then reverse-apply the displacement map to transform the detail back into geometry and restore the high-poly version.
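Reverse-applying the displacement map could look something like this in Blender (again only a sketch; the object name, image path, subdivision level and displacement strength are all assumptions that depend on how the map was baked):

import bpy

# Assume a retopologized, UV-mapped mesh named "LowPoly" (hypothetical name)
obj = bpy.data.objects["LowPoly"]
bpy.context.view_layer.objects.active = obj

# Subdivide first so there is enough geometry for the displacement to act on
subsurf = obj.modifiers.new(name="Subsurf", type='SUBSURF')
subsurf.levels = 3

# Wrap the baked displacement map in an image texture and drive a Displace modifier
tex = bpy.data.textures.new("Displacement", type='IMAGE')
tex.image = bpy.data.images.load("//disp_map.png")   # hypothetical path, relative to the .blend

disp = obj.modifiers.new(name="Displace", type='DISPLACE')
disp.texture = tex
disp.texture_coords = 'UV'
disp.mid_level = 0.5   # map value that means "no displacement"
disp.strength = 0.1    # depends on how the map was baked

# Applying the modifiers turns the map back into real geometry
bpy.ops.object.modifier_apply(modifier=subsurf.name)
bpy.ops.object.modifier_apply(modifier=disp.name)

The normal map you would keep as a map for shading; it is the displacement map that actually becomes geometry again.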