My "Best Case" End Game for MakeHuman

Postby Eternl Knight » Fri Mar 07, 2008 6:55 am

Notes on this post wrote: This is not a post criticising the direction of the MakeHuman developers, nor is it any form of "demand" on my part. It is simply an outline of what I would like to find at the end of the MakeHuman rainbow :) My needs are not other people's needs and, as such, I know not everyone will agree with me. :)


OK, firstly - I love the underlying concept of MakeHuman. The idea of having a dedicated humanoid character creator (complete with pretty damn awesome rigging/posing results), with the most common/powerful morphing controls summarised into low-dimension controls (i.e. the "tetra-parametric" UI widgets), is incredibly useful for those of us who don't like modelling, morphing and re-rigging a common base mesh every time we need to add height, bulk, breasts, etc.

That said, I think there is an "ultimate goal" (for me) which would make MakeHuman an essential (rather than "preferred") part of every studio's pipeline. I outline the pieces below:

Alternate Topologies: Whilst the current topology is damn good, it is not the best for every situation. Especially in some subdivision-based applications, the number of triangles can be prohibitive for properly smooth surfaces. And while 11K polygons is small for "high resolution" animation & imagery, it is quite high for game meshes. I have used MakeHuman once for the creation of a morphing game character. The project never got publisher approval, but the effort required to move the crafted MakeHuman form to the low-res game form was non-trivial!

There are two ways I think this could work. One is to simply allow meshes to connect the vertices in whatever way they like (or even leave vertices out of the quad/tri mesh). This is a "simplistic" solution, but it has worked for other hacks of similar purpose I have tried before. The other is to have the alternate mesh deformed in much the same way as clothing (detailed next).

Clothing Matching Morphs & Posing: The MakeHuman mesh has had a lot of effort put into it to ensure that the morphs, joint rotations, etc. all work out smoothly. Thing is, we need to clothe these magnificent creations... which requires being able to import, fit, and rig the clothes to our generated humanoid. It is not feasible to follow the standard MakeHuman editing process for this (think of all the man-hours so far in correcting morph combinations, joint movements, etc.).

My suggestion is that we use the underlying MESH for the rigging of the clothing. There is a technique I used for moving clothing from rig to rig in Poser. It uses the triangles of a mesh as the "rig controller", with the vertices of the "deforming" (i.e. clothing) mesh weighted to each triangle based on proximity & triangle normal. As the triangles are moved, rotated & scaled, so too are the vertices of the clothing mesh. For moving from one figure to another, I would simply sculpt a common "rig" mesh to approximate the "source", sculpt the same mesh to approximate the "target", and then use the "skin deformer" code to weight & transform the vertices of the clothing mesh based on these approximations. I used approximation meshes because Poser meshes are VERY high-res for simply transferring clothing models. Cutting down the polygon count of the "rig mesh" vastly decreased the time required to calculate the weights & transformations.
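
A toy sketch of the triangle-binding idea above (hypothetical NumPy code, not from MakeHuman or my actual implementation): each clothing vertex is stored in the local frame of one base triangle, so re-posing the base triangles carries the clothing along. The proximity-and-normal weighting over several triangles that I describe is omitted for brevity; here each vertex binds to a single nearest triangle.

```python
import numpy as np

def triangle_frames(verts, tris):
    """Build a local frame (origin + edge/normal axes) for each triangle."""
    a, b, c = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    e1 = b - a                       # first edge
    e2 = c - a                       # second edge
    n = np.cross(e1, e2)             # (unnormalised) triangle normal
    # per-triangle 3x3 matrix whose columns are e1, e2, n
    return a, np.stack([e1, e2, n], axis=-1)

def bind(cloth_verts, base_verts, base_tris):
    """Bind each clothing vertex to its nearest base triangle (by centroid)
    and store its position in that triangle's local frame."""
    origins, frames = triangle_frames(base_verts, base_tris)
    centroids = base_verts[base_tris].mean(axis=1)
    binding = []
    for p in cloth_verts:
        t = int(np.argmin(np.linalg.norm(centroids - p, axis=1)))
        local = np.linalg.solve(frames[t], p - origins[t])  # local coords
        binding.append((t, local))
    return binding

def deform(binding, deformed_verts, base_tris):
    """Re-express each bound vertex in the deformed mesh's triangle frames:
    as triangles move, rotate & scale, so do the clothing vertices."""
    origins, frames = triangle_frames(deformed_verts, base_tris)
    return np.array([origins[t] + frames[t] @ local for t, local in binding])
```

Because the binding is stored relative to the triangles, only the rest and deformed vertex positions are needed - no explicit rotations or translations.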

It would rock if this (or a similar) technique were available to clothe the figure once he/she/it has finished being morphed. A secondary benefit is that it could be used to pose the clothes as well.

MakeHuman as Plugin: My final dream goal would be to have MakeHuman as a plugin to other applications. Currently you can create and (with some fiddling) pose the figure, but for best use you have to export it and import it into another application. Hence this only works if you are creating static images, as animation is simply a no-go (you either rig in the external app, losing the cool joint morph corrections, or, worse, export a series of OBJ meshes to be rendered each frame... urgh!). My ideal would be for the MakeHuman figure AND rig to be imported into another application (Blender atm, simply because its open-source nature makes this more feasible). If the bones can be used as the "input" to the MakeHuman posing engine, it would be possible to add a "real" rig with IK/FK, advanced controllers, etc., suited to the project, driving the MakeHuman engine (i.e. I pose using the "internal" rig and the MakeHuman engine morphs the mesh to match, using all its cool additions). With the clothing option above, it would then be possible to use MakeHuman in features without having only the basic "shape/form" morphs applied.

Let me know what you guys think (good or bad). The skin deformer stuff I am willing to help with as I already have (semi-optimised) code that does this.

--eK

Re: My "Best Case" End Game for MakeHuman

Postby Manuel » Sat Mar 08, 2008 6:30 pm

Eternl Knight wrote:
Clothing Matching Morphs & Posing: The MakeHuman mesh has had a lot of effort put into it to ensure that the morphs, joint rotations, etc. all work out smoothly. Thing is, we need to clothe these magnificent creations... which requires being able to import, fit, and rig the clothes to our generated humanoid. It is not feasible to follow the standard MakeHuman editing process for this (think of all the man-hours so far in correcting morph combinations, joint movements, etc.).

My suggestion is that we use the underlying MESH for the rigging of the clothing. There is a technique I used for moving clothing from rig to rig in Poser. It uses the triangles of a mesh as the "rig controller", with the vertices of the "deforming" (i.e. clothing) mesh weighted to each triangle based on proximity & triangle normal. As the triangles are moved, rotated & scaled, so too are the vertices of the clothing mesh. For moving from one figure to another, I would simply sculpt a common "rig" mesh to approximate the "source", sculpt the same mesh to approximate the "target", and then use the "skin deformer" code to weight & transform the vertices of the clothing mesh based on these approximations. I used approximation meshes because Poser meshes are VERY high-res for simply transferring clothing models. Cutting down the polygon count of the "rig mesh" vastly decreased the time required to calculate the weights & transformations.

It would rock if this (or a similar) technique were available to clothe the figure once he/she/it has finished being morphed. A secondary benefit is that it could be used to pose the clothes as well.

--eK


This is very interesting.
Initially, because each movement in MH is a combination of translation and rotation, we had thought of an algorithm like this:

1) link each vertex of the base mesh to a vertex of the clothing (using minimum distance)
2) when the base vertex is translated, translate the clothing vertex too
3) when the base vertex is rotated, rotate the clothing vertex too

Simple!
But it has a problem: linking the vertices takes a long time...
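
A minimal sketch of steps 1 and 2 of that algorithm (hypothetical NumPy code, not MH code): the brute-force search is O(N*M) in the base and clothing vertex counts, which is exactly the linking cost being complained about. Step 3's rotation would additionally rotate the stored offset.

```python
import numpy as np

def link_verts(base_verts, cloth_verts):
    """Step 1: bind each clothing vertex to the closest base vertex.
    Brute force: every cloth vertex scans every base vertex."""
    links = []
    for p in cloth_verts:
        i = int(np.argmin(np.linalg.norm(base_verts - p, axis=1)))
        links.append((i, p - base_verts[i]))  # (baseVert index, rest offset)
    return links

def follow(links, moved_base_verts):
    """Step 2: translate each clothVert with its linked baseVert.
    (Step 3, rotation, would also rotate the stored offset.)"""
    return np.array([moved_base_verts[i] + off for i, off in links])
```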

Can you explain the triangles idea in more detail?


Regards,

Manuel

Note: I'm coding the MH prototype using the half-edge data structure, so we can use a lot of adjacency queries.

Re: My "Best Case" End Game for MakeHuman

Postby Eternl Knight » Sun Mar 09, 2008 1:35 am

The triangle idea is close, but it has some benefits over linking directly to the vertices. The basics of my implementation are based on the "Surface Oriented Deformation" paper here. The extensions I made were to accelerate the binding process (using an octree structure to organise the vertices & triangles) and to use Catmull-Clark subdivision to smooth the deformation.

The disadvantage of the vertex method you describe is that one needs to know the rotations & translations involved to get it where required, AND that the deformation is only accurate near the vertices (which causes "wiggles" in the deformed mesh as you move from one vertex's influence to the next). With the triangle method, you are binding to the barycentric coordinates of the closest point on the closest triangle(s). This means you can deform the base mesh however you like, using whatever methods you like, and as long as the topology remains the same, you can transform the "rigged" mesh from just the two imported OBJ mesh files (which was important for my usage).
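
The barycentric binding can be sketched like this (hypothetical NumPy code; the real method weights several triangles and handles points whose closest point lies on an edge, whereas here the point is assumed to project inside the triangle): the stored (u, v, w) coordinates plus a signed normal offset reconstruct the point under any deformation of the triangle.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coords (u, v, w) of the projection of p onto the plane
    of triangle (a, b, c), so that u*a + v*b + w*c is that projection."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def bind_point(p, a, b, c):
    """Store barycentric coords plus a signed offset along the triangle
    normal; no explicit rotations or translations are needed."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    u, v, w = barycentric(p, a, b, c)
    h = (p - (u * a + v * b + w * c)) @ n  # signed distance along normal
    return (u, v, w, h)

def reproject(binding, a, b, c):
    """Recover the bound point from the (possibly deformed) triangle."""
    u, v, w, h = binding
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    return u * a + v * b + w * c + h * n
```

Because only vertex positions enter the computation, the two imported OBJ meshes (rest and deformed) are sufficient, exactly as described above.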

Another modification I made to the underlying method was to use Catmull-Clark subdivision on the underlying source & target deformation meshes and bind the vertices of the "rigged" mesh to this smoothed mesh. This made the resulting transformation much smoother while preserving most of the underlying "crisp" edges in the rigged mesh.

Let me know if you want more details.

--EK

Re: My "Best Case" End Game for MakeHuman

Postby Manuel » Sun Mar 09, 2008 7:45 pm

The disadvantage of the vertex method you describe is that one needs to know the rotations & translations involved to get it where required, AND that the deformation is only accurate near the vertices (which causes "wiggles" in the deformed mesh as you move from one vertex's influence to the next). With the triangle method, you are binding to the barycentric coordinates of the closest point on the closest triangle(s). This means you can deform the base mesh however you like, using whatever methods you like, and as long as the topology remains the same, you can transform the "rigged" mesh from just the two imported OBJ mesh files (which was important for my usage).


Yes... it's very close to our idea. I'm not sure about the wiggles problem (I remember some overlap problems using a barycentric method with a muscle-skin model).
So you adapt the clothes *after* the base transformation, in contrast with "our" method, where they are deformed *during* the transformation. Something to test...



Eternl Knight wrote: The triangle idea is close, but it has some benefits over linking directly to the vertices. The basics of my implementation are based on the "Surface Oriented Deformation" paper here. The extensions I made were to accelerate the binding process (using an octree structure to organise the vertices & triangles) and to use Catmull-Clark subdivision to smooth the deformation.


Yes, we thought of an octree-like system too, but we must test it with 20,000 - 30,000 polygons.
Probably it's OK in C/C++, but I'm not sure about Python... (you know the MH-proto idea, right?)
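
As a rough illustration of the spatial-index idea being discussed (a uniform hash grid here rather than an octree, purely as a hypothetical pure-Python sketch), binding drops from scanning every vertex to scanning only nearby cells:

```python
import numpy as np
from collections import defaultdict

def build_grid(points, cell):
    """Hash each point index into a uniform grid cell: a cheap stand-in
    for the octree (same idea, but with a fixed depth)."""
    grid = defaultdict(list)
    for i, p in enumerate(points):
        grid[tuple((p // cell).astype(int))].append(i)
    return grid

def nearest(grid, points, q, cell):
    """Search the query's cell and its 26 neighbours; fall back to a full
    scan if they are all empty. Note: this is approximate when the true
    nearest point lies more than one cell away from a non-empty cell."""
    cx, cy, cz = (q // cell).astype(int)
    cand = [i for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
            for i in grid.get((cx + dx, cy + dy, cz + dz), [])]
    if not cand:
        cand = list(range(len(points)))
    d = np.linalg.norm(points[np.array(cand)] - q, axis=1)
    return cand[int(np.argmin(d))]
```

With a sensible cell size, each query touches only a handful of candidates instead of all 20,000-30,000, which is what makes the binding step feasible even in Python.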


Another modification I made to the underlying method was to use Catmull-Clark subdivision on the underlying source & target deformation meshes and bind the vertices of the "rigged" mesh to this smoothed mesh. This made the resulting transformation much smoother while preserving most of the underlying "crisp" edges in the rigged mesh.


I don't fully understand this last suggestion. Can you provide some images?

Anyway thanks, a very interesting thread! I'll read the paper you posted as soon as I can.

Manuel