MakeHuman Workflows

Summary

More often than not, MakeHuman users will want to move their creations into another program to continue the creation process. This section provides information on how to achieve this for common target programs. The Background and Technical Considerations section provides background that is common to all workflows. It is not essential to read that introduction before going straight to the program of interest, but doing so might help you understand why certain steps work or are necessary. In many cases there is more than one way to accomplish the same thing; the method described here may not be optimal for your situation, but it should help get you started.

Background and Technical Considerations

Introduction

Many MakeHuman users will use the work they create in MakeHuman as a component of a project completed in an external, downstream application. The most common targets are 3D graphics applications such as Blender, Autodesk 3DSMax, and Autodesk Maya; game engines such as Unity or Unreal Engine 4; and virtual worlds such as Second Life. In some cases, the MakeHuman user will be interested in moving just the mesh. In other cases, the user will want to move a complete, posed, skeletonized character to the downstream application. The purpose of this document is to discuss the basic process of moving a finished MakeHuman asset to a downstream program and producing a reasonable facsimile of its MakeHuman appearance in that application. It is important to recognize that, because each application has its own way of representing 3D assets and each export format has its own set of idiosyncrasies, there will be adjustments and approximations in the asset transfer process.


Format Considerations

In an effort to provide the broadest latitude for user workflows, the MakeHuman development team is directing its major export support toward the Autodesk FBX format and the Khronos Collada format. In addition, Blender users have the option to install the .mhx2 tools, which are independently developed and maintained by Thomas Larson. Collada is a text-based, open standard, but implementation of the standard is not as consistent as might be hoped. FBX is a proprietary standard developed by Autodesk and supported through a software development kit (FBX SDK) written in C++ and a Python FBX scripting template. The FBX standard is periodically updated by Autodesk, and FBX files can be saved in either an ASCII or a binary form. Downstream programs may make assumptions about the FBX version being imported and may limit users to either binary or ASCII importing. Surprisingly, CG assets exported from various programs as Collada or FBX do not survive a round trip back into the same program without changes to some components of the scene. For MakeHuman assets, this is true of FBX in both 3DSMax and Maya, and of both FBX and Collada in Blender. In this document, we will attempt to provide some support for dealing with this type of surprise.
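
If you are unsure whether an FBX file is binary or ASCII, or which FBX version it was written with, you can check without opening it in a 3D application. The following is a minimal Python sketch, not part of MakeHuman itself; the file name "exported.fbx" is only an example. It relies on the fact that binary FBX files begin with the signature "Kaydara FBX Binary" and store the version number in the file header.

 # Minimal sketch: report whether an FBX file is binary or ASCII and, for
 # binary files, the version recorded in the header (e.g. 7400 for FBX 2014/15).
 # "exported.fbx" is an example path, not a file MakeHuman creates by default.
 import struct
 
 FBX_BINARY_MAGIC = b"Kaydara FBX Binary  "  # binary FBX files start with this signature
 
 def describe_fbx(path):
     with open(path, "rb") as f:
         header = f.read(27)
     if header.startswith(FBX_BINARY_MAGIC):
         version = struct.unpack_from("<I", header, 23)[0]  # little-endian uint32 at offset 23
         return "binary FBX, version %d" % version
     return "ASCII FBX (or not an FBX file at all)"
 
 print(describe_fbx("exported.fbx"))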

MakeHuman Shader and Asset Rendering

A “shader” is computer code that translates digital information into a properly rendered image on the computer screen. Different shaders use different methods of constructing such a display. By default in MakeHuman, the viewport lighting is handled by the built-in light-sphere shader, which takes advantage of the OpenGL support provided by today’s graphics cards. What the viewer sees on the screen depends not only on the color of assets, like skin and clothing, but also on the nature of the lights illuminating the scene and causing reflections and shadows. Most downstream programs use other shader strategies, the most common of which are Phong, Blinn, and Lambert shaders. Because the code and conceptual design of these shaders differ from those of the MakeHuman light-sphere shader, the user should be prepared to make some adjustments in the downstream program to get the final desired effect. Some MakeHuman assets also include normal maps, which the user will want to integrate into the downstream application’s rendering.
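
To give a rough sense of why the same material values can look different under different shaders, the textbook Lambert (diffuse) and Phong (specular) terms can be written as below. Here N is the surface normal, L the direction to the light, R the mirror reflection of L about N, V the direction to the viewer, k_d and k_s the diffuse and specular colors, and n the shininess exponent. These are the standard textbook forms, not MakeHuman's own code; the light-sphere shader does not evaluate per-light terms like these, which is part of why its settings do not map one-to-one onto downstream materials.

 I_{\mathrm{diffuse}} = k_d \, I_{\mathrm{light}} \, \max(0,\ \mathbf{N}\cdot\mathbf{L})
 I_{\mathrm{specular}} = k_s \, I_{\mathrm{light}} \, \max(0,\ \mathbf{R}\cdot\mathbf{V})^{\,n}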

When a MakeHuman material (for example, skin, clothing, or eyelashes) is exported, it carries with it a limited amount of fundamental information for use by the downstream program’s shaders. In an ideal world, the downstream program would import this raw information, correctly populate the variables in its shaders, and yield the same view of the human that was exported from MakeHuman. At present this does not typically happen, and the user must make some adjustments. This Wiki section is designed to help you navigate that process.
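
When the imported result looks wrong, it can help to see exactly what raw material values the exporter wrote before blaming the downstream shader. The following is a minimal Python sketch (not a MakeHuman tool) that lists the material channels stored in a Collada export; the file name "exported.dae" is only an example.

 # Minimal sketch: print the material channels stored in a Collada (.dae) export
 # so they can be compared with what the downstream shader actually shows.
 import xml.etree.ElementTree as ET
 
 def local(tag):
     # strip the XML namespace, e.g. "{...COLLADASchema}diffuse" -> "diffuse"
     return tag.rsplit("}", 1)[-1]
 
 tree = ET.parse("exported.dae")  # example path
 for effect in tree.iter():
     if local(effect.tag) != "effect":
         continue
     name = effect.get("id", "(unnamed effect)")
     for node in effect.iter():
         if local(node.tag) in ("diffuse", "specular", "transparency", "shininess"):
             value = " ".join(c.text.strip() for c in node if c.text and c.text.strip())
             print("%-30s %-14s %s" % (name, local(node.tag), value))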

Spatial Transformation: MakeHuman Coordinate System, MakeHuman Skeletons, and MakeHuman Scales

The MakeHuman viewport is oriented so that the positive end of the x-axis is to the right, the positive end of the y-axis is up, and the positive end of the z-axis points out of the screen toward the viewer. Other CG programs can use other schemes, so it is essential to know the coordinate system used by the downstream target at export time if you wish to minimize the adjustments you make after importing. The settings tab allows users to work in metric units of decimeters or imperial units of inches. Downstream programs vary in their de facto units, and again, it is generally a good idea to anticipate your needs at export time rather than trying to make adjustments during or after import. It is sometimes said that these programs have no default unit, and to a degree this is true. However, the default grid pattern, the assumptions of other humanoid-producing programs, and the assumptions of their “physics systems” will often work best with the export units specified in the table below:

TABLE HERE
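
As a concrete illustration of the axis and unit remapping involved, here is a minimal Python sketch that converts a vertex position from MakeHuman's convention (y up, decimeters) to a z-up, centimeter convention. The target convention is only an example; check the axis and unit settings of your own downstream application before relying on any particular mapping.

 # Minimal sketch: remap a point from MakeHuman's y-up, decimeter convention
 # to an assumed z-up, centimeter convention used by some downstream tools.
 DM_TO_CM = 10.0  # 1 decimeter = 10 centimeters
 
 def y_up_dm_to_z_up_cm(x, y, z):
     # Old +Y (up) becomes new +Z (up); old +Z (toward the viewer) becomes new -Y.
     return (x * DM_TO_CM, -z * DM_TO_CM, y * DM_TO_CM)
 
 # A point 1.68 m above the origin and 5 cm toward the viewer:
 print(y_up_dm_to_z_up_cm(0.0, 16.8, 0.5))  # -> (0.0, -5.0, 168.0)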

Working with Rigs and Skeletons

Internally, MakeHuman uses its default skeleton as the basis of mesh vertex weighting for creating the poses available on the pose tab. This rig includes a special set of bones that can be used for driving facial expressions, but as of the current release (MakeHuman 1.1.x) there is no extensive support for inverse kinematics, and animation must be done downstream in the production pipeline. Historically, MakeHuman has had close ties to Blender, but the ultimate development goal is to provide strong generic support for all the major tools used in the industry, and with MakeHuman 1.1.x substantial progress has been made by providing weighted skeletons suited for standard use in various downstream applications. For the most part, the name of the most appropriate rig will be clearly associated with the target downstream application, but the tag filters on the Pose/Animate | skeleton tab are intended to help choose a rig suited more generally to a specific purpose such as motion capture integration, facial animation, or gaming applications. MakeHuman is designed to provide the user with maximum flexibility at creation time, and providing skeletons that work perfectly with every conceivable variation of the sliders is a laudable but elusive goal. The user should get excellent results with the current approach but should be prepared to make some minor tweaks to skeletons and weights downstream, particularly with extreme character dimensions or extreme poses.


MakeHuman Assets with transparency

In CG art, materials for mesh objects are frequently created by shaders that use various types of image maps to provide some of the information for determining the color, transparency, and light behavior on the surface of mesh objects. These image maps are referred to as textures. MakeHuman has textured materials available for the body (including proxy topologies), eyes, eyebrows, eyelashes, tongue, teeth, and hair. During export, these textures need to be made available to the program that will be importing the MakeHuman output. Two aspects of this process need to be understood by the MakeHuman user: 1) how textures are saved; and 2) how transparency is handled.

There are two common approaches to storing texture assets upon export: they can be embedded in the exported file, or they can be saved to a folder as named images that the exported file references by location. At present, MakeHuman uses the latter approach when exporting material textures. Following the export process, MakeHuman exports will be located in ~/makehuman/v1/exports/, and the texture images will be located in the corresponding subfolder ~/makehuman/v1/exports/textures/. The tilde symbol indicates your home document directory. [For example, on Windows Vista and later, this would typically be C:/Users/%USERNAME%/Documents/makehuman/v1/exports and its textures subfolder, where %USERNAME% is replaced by your username.]
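
If materials come in gray or untextured, the first thing to rule out is a missing or moved texture file. The following is a minimal Python sketch, not part of MakeHuman, that scans an exported file for image names and checks that each one exists in the textures subfolder; the export name "mycharacter.dae" and the Linux-style path are only examples (adjust the path for Windows as described above).

 # Minimal sketch: confirm that every texture image referenced by an export
 # is present in the textures subfolder described above.
 import os
 import re
 
 export_dir = os.path.expanduser("~/makehuman/v1/exports")   # adjust per platform
 export_file = os.path.join(export_dir, "mycharacter.dae")   # example export name
 
 with open(export_file, "r", encoding="utf-8", errors="ignore") as f:
     content = f.read()
 
 # crude scan for image file names mentioned anywhere in the export
 for name in sorted(set(re.findall(r"[\w\-.]+\.(?:png|jpg|tga)", content))):
     path = os.path.join(export_dir, "textures", name)
     print("%-40s %s" % (name, "found" if os.path.isfile(path) else "MISSING"))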

Transparency is used in 3D art to let a viewer see through a mesh object to experience the color, reflections, and lighting behind that object. MakeHuman uses transparency with textures for the eyes, eyebrows, eyelashes, and hair. This allows complex real-world geometry to be represented by much simpler quasi-planar objects. For example, the hairs of the eyebrows are painted onto a flat surface with transparent spaces in between the painted hairs, rather than creating tiny mesh objects for each of the 250-1000 individual hairs. CG programs can indicate which regions of the eyebrow should be transparent by using a separate image in which black indicates transparency, white indicates opacity, and gray represents partial transparency. An alternative approach is to use an image format like .tga or .png that has a one-byte channel, called an alpha channel, for just this purpose. Again, values near zero (black) typically represent “completely transparent” and values near 255 (white) typically represent “completely opaque”. MakeHuman represents the transparent regions of its assets using the alpha channel method and the .png format for textures. Unfortunately, importers for Filmbox (FBX) and Collada (DAE) formats do not automatically extract the transparency information provided by MakeHuman, or more specifically, they do not provide it to their shaders as anticipated. This is not just a MakeHuman problem but reflects the multiple ways of handling transparency and alpha channels within the industry. Notably, exporting assets with alpha-channel transparency and re-importing them into the very same program fails with Blender, 3DSMax, and Maya for FBX and, where available, DAE formats. This forces the MakeHuman user to understand how to overcome this limitation in their favorite downstream application. Some introductory help will be provided for common applications later in this document.
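
Because MakeHuman stores transparency in the alpha channel of its .png textures, one general-purpose workaround is to extract that alpha channel into a separate black-and-white mask and feed the mask to whatever transparency or opacity slot the downstream shader provides. The following is a minimal Python sketch using the Pillow imaging library; the file names are examples only.

 # Minimal sketch: save the alpha channel of a MakeHuman texture as a separate
 # grayscale mask (white = opaque, black = transparent), plus an inverted copy
 # for shaders that expect a "transparency" map rather than an "opacity" map.
 from PIL import Image, ImageOps  # requires the Pillow library
 
 tex = Image.open("textures/eyebrow.png").convert("RGBA")  # example file name
 alpha = tex.split()[3]                                    # band 3 is the alpha channel
 alpha.save("eyebrow_opacity.png")
 ImageOps.invert(alpha).save("eyebrow_transparency.png")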

There is one additional aspect of using transparency to make a single, simple mesh represent multiple small objects like eyebrows, and that is the issue of shadows. Lighting systems in 3D worlds create a simulation of reality by making shadows appear on mesh objects in the places you would expect to see them in the real world, and with the soft edges you would expect to see in the real world. Shadows are a function of the light source, the object blocking that light, and the objects receiving the reflected and refracted light from other objects. For objects that have transparency, like MakeHuman’s eyes, eyelashes, eyebrows, and hair, shadows should only be cast by the opaque portions of the texture and not by the transparent portions. The handling of shadows differs considerably from program to program, and objects with alpha-channel transparency can need some special handling to accommodate these differences. This handling is not just about the alpha-mapped assets themselves. Sometimes it is necessary to set parameters that explicitly tell other objects, like the MakeHuman body, not to display shadows from the transparent regions of the eyebrows but to show shadows from the opaque hairs of the eyebrows. Shadows of transparently mapped assets will be addressed further in subsequent sections (where information is available).

Illustrating the Export Process for Subsequent Import

For the purposes of illustration, we will take a straightforward MakeHuman character that has been given a skeleton, a pose, a skin material, clothing, eyes, eyelashes, eyebrows, hair, a tongue, and teeth. We will then export it for import into a number of popular downstream applications. For each application, our goal will be to get the character to look similar to its original look in MakeHuman. We will pay particular attention to the head and face, because this is the area of greatest difficulty for dealing with transparency and shadows. The character has the default skeleton and has been set in the T-pose, which is a common starting pose for animation-based work. The character we will be exporting looks like this in MakeHuman:

ImpExp-001.png

Figure 1. MakeHuman character as viewed from within MakeHuman prior to export.

In subsequent sections, we will discuss the steps needed to reproduce this look after importing into a downstream program.

Moving Assets into Autodesk Maya

This subsection describes the restoration and use of MakeHuman's transparency and skeletal assets after importing into Autodesk Maya. (A short maya.cmds sketch for the transparency repair follows the topic list below.)

  • Importing and Initial Setup
  • Basic Repair of Eye Transparency
  • Repeat for other Transparent Assets
  • Raytrace Shadows
  • Re-importing "Fixed MakeHuman Assets" from Maya after saving in FBX format
  • Adopting MakeHuman Skeletons to Maya
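
For the eye-transparency repair mentioned above, the usual cause after an FBX import is that Maya connected the file texture's color output to the material but left its alpha output unconnected. The following is a minimal sketch in Maya's Python (maya.cmds); the node names "eye_file" and "eye_material" are examples, so substitute the names Maya actually created during your import (visible in the Hypershade).

 # Minimal sketch: route a file texture's alpha channel into its material's
 # transparency input so the transparent regions show up again in Maya.
 import maya.cmds as cmds
 
 def repair_transparency(file_node, material):
     src = file_node + ".outTransparency"
     dst = material + ".transparency"
     if not cmds.isConnected(src, dst):
         cmds.connectAttr(src, dst, force=True)
 
 repair_transparency("eye_file", "eye_material")  # example node names
 
 # For raytraced shadows that respect the transparent regions, the light's
 # raytrace-shadow attribute and raytracing in the render settings also need
 # to be enabled, for example:
 # cmds.setAttr("directionalLightShape1.useRayTraceShadows", 1)
 # cmds.setAttr("defaultRenderQuality.enableRaytracing", 1)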

--------Skeleton subsection still to be written--------

Moving Assets into Autodesk 3DSMax

This subsection describes the restoration and use of MakeHuman's transparency and skeletal assets after importing into Autodesk 3DSMax.

  • Import process
  • Handling of Transparency after import
  • Handling of Skeleton and Rigging after import

Moving Assets into Blender

Moving Assets into Unreal Engine 4

Moving Assets into Unity Game Engine

Moving Assets into Google Sketchup

Preparing assets for use with Second Life