luacs wrote:
ThomasL wrote:
luacs wrote:
But that's just for Windows, right? I'm on a Mac; that's why I prefer an old version with the facial expressions.
MHX2 is pure Python and does not use any special libraries except NumPy, which is already deeply integrated into MH. So if MH and Blender work for you, there should not be any problems. But I cannot guarantee it, since I have only tested on Windows.
luacs wrote:
Also, why did you take out such a great feature from this software? I know you are working on a better tool, but in the meantime why not let people use that option?
Don't blame me. If it had been possible to keep facial expressions inside MH, MHX2 would probably not have been released as an external plugin.
I was not directing that question at you; I just found it strange that such a great and easy tool was cut out of the project. In the end I just want to animate my character's face (maybe put on a smile?).
Also, the last nightly build for Mac was more than a month ago; could you update that?
If you will allow me, I think I can explain the design logic for this change, as I am heavily involved in the analysis of facial expressions and their 3D animation.
The issue is interoperability: shape keys (morph targets) are notoriously proprietary. Each major 3D app (DAZ, Poser) has its own way of using them.
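To be clear about what is being standardized: the underlying math of a morph target is common to all these apps (per-vertex offsets blended into a base mesh by weights); what is proprietary is how each app encodes, names, and drives them. A minimal NumPy sketch of the common blend, using a hypothetical 4-vertex mesh and a made-up "smile" target rather than any real MakeHuman or DAZ data:

```python
import numpy as np

# Hypothetical 4-vertex base mesh at rest (real meshes have thousands of vertices).
base = np.zeros((4, 3))

# A morph target is stored as per-vertex offsets from the base mesh.
# Here, a made-up "smile" target that lifts the first two vertices.
smile = np.array([[0.0, 0.1, 0.0],
                  [0.0, 0.1, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])

def blend(base, targets, weights):
    """Standard morph-target blend: base + sum(w_i * offset_i)."""
    out = base.copy()
    for offsets, w in zip(targets, weights):
        out += w * offsets
    return out

# Dial the smile in at half strength.
half_smile = blend(base, [smile], [0.5])
print(half_smile[0])  # first vertex moved halfway up
```

Every app does essentially this blend; the incompatibility lies in the file formats and in which named targets (and combinations of them) each app expects to exist.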
What is required for an open-source program is to advance along two fronts (always a problem in strategy): (1) to create facial expressions that are biophysically (genetically?) universal; (2) to ensure those facial expressions have some sort of standardization with respect to morphing.
For example, say you want to create a simple smile. How will a designer/programmer know that their smile code is universally recognized as a smile? To achieve that, it must be a Duchenne smile (named after the 19th-century physician who first measured facial expressions), one that engages the small muscles around the eyes called the orbicularis oculi. Their contraction is commonly known as 'a twinkle in the eye'.
If these muscles are not taken into account, what you get is a 'false smile', a grimace or rictus, which of course sends the exact opposite message to the viewer.
Right now, the 1.1 mesh and rig can create the basic (agreed-upon) facial expressions without shape keys, simply by using poses. This is a superior solution because the expressions can then be uniquely tuned to a particular character using shape keys. The upshot is that I can export the 1.1 rig and mesh via Collada to a completely unrelated app such as Marvelous Designer, and my character can smile and lip-sync even while I am focused on garment creation. I don't use FBX, but I'm under the impression that the FBX export will integrate the rig poses with shape keys, and that puts MakeHuman in a very good position.