Human Shape Space

If your topic doesn't fit anywhere else, put it here.

Moderator: joepal

Human Shape Space

Postby Altaica » Wed May 16, 2012 10:36 am

Ever since I played with Procrustica I've wanted to make an open source human shape space.
MakeHuman seems to lack morphometrics... From what I've been able to find out from the docs, MH just seems to use the same RVK system as Blender?

P.S. I've found a nice paper that is basically a how-to for what I'm trying to do: The space of human body shapes: reconstruction and parameterization from range scans
Altaica
 
Posts: 7
Joined: Fri May 11, 2012 7:25 pm

Re: Human Shape Space

Postby brkurt » Wed May 16, 2012 8:20 pm

Are you primarily interested in a static model, or rigging / animating that model? Having perused the paper, I'm under the impression that the scanning algorithms are mature, but rigging and animating are in their infancy. Just how effective is this technology for something as complex as ballet, or martial arts? :geek:
brkurt
 
Posts: 1100
Joined: Sun Feb 17, 2008 8:49 pm

Re: Human Shape Space

Postby duststorm » Thu May 17, 2012 11:54 am

At MH we are also investigating techniques for fitting the basemesh to full-body 3D scans.

You're right that MH works with something like a RVK (Relative Vertex Keys) concept instead of with the point cloud approach the paper proposes.
The power of MH lies in the fact that we use a very high-quality basemesh with good topology. This makes it an ideal fit for animation and for editing by artists (whether in classic 3D editors like Blender or Max, or with sculpting tools), as it is constructed entirely of quads instead of tris. No matter what variation we transform the basemesh into, it will always maintain the same topology (the same number of vertices and faces).
The deformations we apply to the basemesh are called targets in MH terminology, and are exactly like RVKs: relative offsets of individual vertices of the basemesh, multiplied by a scalar (between 0 and 1) which determines their impact (think of it as a percentage or strength of the total target transformation).
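
To make that concrete, here is a rough Python sketch of the idea (illustrative NumPy code only, not the actual MakeHuman implementation or API; the function and variable names are made up):

[code]
# Sketch: applying MakeHuman-style "targets" (relative vertex offsets) to a basemesh.
import numpy as np

def apply_targets(base_vertices, targets, weights):
    """base_vertices: (N, 3) array of basemesh vertex positions.
    targets: list of (N, 3) arrays of per-vertex offsets, one per target.
    weights: list of scalars in [0, 1] controlling each target's influence.
    Returns the morphed vertex positions; the topology is untouched."""
    morphed = np.asarray(base_vertices, dtype=float).copy()
    for offsets, w in zip(targets, weights):
        morphed += w * offsets
    return morphed

# Example: blend two hypothetical targets at 50% and 80% strength.
# morphed = apply_targets(base, [tall_offsets, muscular_offsets], [0.5, 0.8])
[/code]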

A problem with meshed pointclouds that come straight from a 3D scan is that they are practically unusable for anything but static renders, at least without serious optimization (which in most cases is done manually by artists). The raw result of such a scan is an unordered surface of triangles that are quite randomly placed. This is very difficult to edit or animate without retopoing it first.

I believe that MH has very strong potential for creating animatable characters from 3D scans. We are investigating how we can parametrize and transform a 3D scan into a target for makehuman. Additionally, we can use a repository of scans from which to deduce morphing characteristics.
My hope is that we can use this technology to make some serious progress in face modeling too.

Also, a model with the reasonable resolution of the MH basemesh (as opposed to an unmodified 3D scan) is more usable for animation and rendering in practice. It would be ideal if we could capture additional detail in the form of normal maps, texture maps and reflection maps to achieve a usable, realistic reproduction of the original scan.

This illustration shows you the difference. On the left is the topology of the MH basemesh; on the right is a typical result obtained from a 3D scan. Notice how the 3D scan results in just a bunch of triangles that have no logical ordering and are not grouped.
[Attachment: scan_mh_topo_difference.png - comparison between 3D scan data and MH face topology]



PS: The Procrustica tool (http://web.archive.org/web/201101082107 ... shape.net/) you mentioned is unfortunately not online anymore. Do you by any chance still know a place where it is available? I would very much like to experiment a bit with it.
MakeHuman™ developer
duststorm
 
Posts: 2569
Joined: Fri Jan 27, 2012 11:57 am
Location: Belgium

Re: Human Shape Space

Postby Altaica » Fri May 18, 2012 9:06 am

Blanz and Vetter model facial variation, using dense surface and color data. They use the term morphable model to describe the idea of creating a single surface representation that can be adapted to all of the example faces. Using a polygon mesh representation, each vertex's position and color may vary between examples, but its semantic identity must be the same; e.g., if a vertex is located at the tip of the nose in one face, then it should be located at the tip of the nose in all faces. Thus, the main challenge in constructing the morphable model is to reparameterize the example surfaces so that they have a consistent representation.

If you don't have a consistent representation, this type of stuff happens:
[example images of what happens without a consistent representation]



duststorm wrote:You're right that MH works with something like a RVK (Relative Vertex Keys) concept instead of with the point cloud approach the paper proposes.

Which paper is that? The one I linked uses Principal Component Analysis to blend between targets.
duststorm wrote:The power of MH lies in the fact that we use a very high quality basemesh with good topology. This makes it an ideal fit for animation and for editing by artists (both in classic 3D editors like Blender or Max, or with sculpting tools), as it is constructed entirely of quads instead of tris.
Tris are acceptable in a polygon mesh. The popular Catmull-Clark subdivision (limit) surface differentiates between poles and n-gons, since n-gons are converted to quads (and poles) after the first subdivision.


duststorm wrote:A problem with meshed pointclouds that come straight from a 3D scan is..
unicorn infestations? ;) Point clouds don't magically skin themselves.

A quick side note: I found the Point Cloud Library (PCL), "a standalone, large scale, open project for 3D point cloud processing", which claims to have an algorithm that provides a very fast triangulation of the original points.

duststorm wrote:The raw result of such a scan is an unordered surface of triangles that are quite randomly placed. This is very difficult to edit or animate without retopoing it first.

I believe that MH has very strong potential for creating animatable characters from 3D scans.

The Naked Truth: Estimating Body Shape Under Clothing

duststorm wrote:We are investigating how we can parametrize and transform a 3D scan into a target for makehuman.


A technique for fitting your basemesh, B, to a scanned example surface, T: to accomplish the match, employ an optimization framework. Each vertex v_i in the basemesh is influenced by a 4x4 affine transformation matrix T_i. These transformation matrices comprise the degrees of freedom in the optimization, i.e., twelve degrees of freedom per vertex to define an affine transformation. You wish to find a set of transformations that move all of the points in B to a deformed surface B′, such that B′ matches well with T. You evaluate the quality of the match using a set of error functions: data error, smoothness error, and marker error.
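
As a rough sketch of that deformation step (illustrative NumPy code, not the paper's implementation; the names are mine):

[code]
# Sketch: deforming basemesh B with one 4x4 affine transformation per vertex.
# These per-vertex matrices are the unknowns of the optimization.
import numpy as np

def deform(vertices, affines):
    """vertices: (N, 3) positions of B.
    affines: (N, 4, 4) array, one affine matrix T_i per vertex.
    Returns B', the deformed vertex positions."""
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])   # (N, 4)
    deformed = np.einsum('nij,nj->ni', affines, homogeneous)           # T_i applied to v_i
    return deformed[:, :3]
[/code]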

Data error
The first criterion of a good match is that the basemesh should be as close as possible to the target surface. To this end, define the data objective term as the weighted sum of squared distances between each vertex of the deformed basemesh and the closest compatible point on T, where a point is compatible if its surface normal is no more than 90° from the vertex's normal (so that front-facing surfaces will not be matched to back-facing surfaces) and the distance between them is within a threshold (Allen et al. used a threshold of 10 cm in their experiments).
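
A rough sketch of how that data term could be computed (illustrative only; the KD-tree lookup, the names and the per-vertex weights are my own assumptions, the 10 cm threshold is the paper's, and distances are assumed to be in metres):

[code]
# Sketch of the data term: squared distance from each deformed basemesh vertex
# to its closest *compatible* point on the target scan T.
# Compatible = surface normals less than 90 degrees apart and distance under a threshold.
import numpy as np
from scipy.spatial import cKDTree

def data_error(deformed_verts, deformed_normals, scan_points, scan_normals,
               weights=None, max_dist=0.10):
    """Weighted sum of squared distances over the compatible matches."""
    if weights is None:
        weights = np.ones(len(deformed_verts))
    tree = cKDTree(scan_points)
    dist, idx = tree.query(deformed_verts)                 # closest scan point per vertex
    # A positive dot product means the normals are less than 90 degrees apart.
    facing = np.einsum('ij,ij->i', deformed_normals, scan_normals[idx]) > 0.0
    close = dist < max_dist                                # e.g. the 10 cm threshold
    valid = facing & close
    return np.sum(weights[valid] * dist[valid] ** 2)
[/code]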

The smoothness error penalizes differences between the T_i transformations of adjacent vertices. The marker error penalizes the distance between known marker points on the transformed surface and the corresponding markers on T.
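
And a sketch of the remaining two terms, plus how the three are combined (again illustrative, not the paper's code):

[code]
# Sketch of the smoothness and marker terms of the fitting objective.
import numpy as np

def smoothness_error(affines, edges):
    """Penalize differences between the affine transforms of adjacent vertices.
    affines: (N, 4, 4); edges: iterable of (i, j) vertex index pairs of the basemesh."""
    total = 0.0
    for i, j in edges:
        total += np.sum((affines[i] - affines[j]) ** 2)    # squared Frobenius norm
    return total

def marker_error(deformed_verts, marker_vertex_ids, scan_marker_points):
    """Penalize distance between known landmark vertices on the deformed
    basemesh and the corresponding marker points measured on the scan T."""
    diffs = deformed_verts[marker_vertex_ids] - scan_marker_points
    return np.sum(diffs ** 2)

# The full objective is a weighted sum of the three terms, e.g.:
#   E = alpha * data_error(...) + beta * smoothness_error(...) + gamma * marker_error(...)
# with the weights alpha, beta, gamma adjusted over the course of the optimization.
[/code]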

duststorm wrote:PS: The Procrustica tool (http://web.archive.org/web/201101082107 ... shape.net/) you mentioned is unfortunately not online anymore. Do you by any chance still know a place where it is available? I would very much like to experiment a bit with it.

No. The link I gave is what's on the World Engineering Anthropometry Resource's site, and it was working a few months ago.

You could try asking Chang Shu
Altaica
 
Posts: 7
Joined: Fri May 11, 2012 7:25 pm

Re: Human Shape Space

Postby brkurt » Fri May 18, 2012 5:34 pm

And it gets even trickier, once you have decided which 3d design suite you are going to use. :o

Speaking only about Blender, I have come to learn, by way of Angela Guedette's truly superior Sintel tutorials, that Blender wants every vertex group (and mesh) to be resolvable to edge loops. When one thinks about it, the reason is simple: the armature algorithms expect an evenly distributed set of vertices to control; anything else, and shape keys come into play, and then it is tweak, tweak, tweak.

But...and this is a big but...Sintel is a cartoon / anime figure, and not a photorealistic human. The level of realism needed for the success of the Makehuman project necessitates a less cartoony mesh.

Let me give you a practical example. I needed extensive bridge work because of cracked molars, and my dentist proudly showed me his state-of-the-art 3d graphic of my teeth. The scan cost serious money, he told me, but the quality of work was far superior.

Of course, I had the nerve to ask if he could place this set of dentures in a 3d head, matched to the patient, so that they could see what their new bridge work would look like. That includes a wide-open mouth.

This hadn't happened yet, he told me, but it was a good idea, he agreed.

So, this inspiration came from my money-grubbing dentist, and he was on the money. ;)
brkurt
 
Posts: 1100
Joined: Sun Feb 17, 2008 8:49 pm

Re: Human Shape Space

Postby Altaica » Sat May 19, 2012 9:18 am

brkurt wrote:Speaking only about Blender, I have come to learn by way of Angela Guedette's truly superior Sintel tutorials,

Never heard of them before. You have a link?

brkurt wrote:But...and this is a big but...Sintel is a cartoon / anime figure, and not a photorealistic human. The level of realism needed for the success of the Makehuman project necessitates a less cartoony mesh.
For a realistic, animatable human you need soft tissue simulation, unless you're only doing body builders with no body fat, that is.
SOFA is an Open Source framework primarily targeted at real-time simulation, with an emphasis on medical simulation. Might be useful.

brkurt wrote:Let me give you a practical example. I needed extensive bridge work because of cracked molars, and my dentist proudly showed me his state-of-the-art 3d graphic of my teeth. The scan cost serious money, he told me, but the quality of work was far superior.
Of course, I had the nerve to ask if he could place this set of dentures in a 3d head, matched to the patient, so that they could see what their new bridge work would look like. That includes a wide-open mouth.

Reminds me of this thing I stumbled upon last night: Development and Implementation of a Web-Enabled 3D Consultation Tool for Breast Augmentation Surgery Based on 3D-Image Reconstruction of 2D Pictures
Altaica
 
Posts: 7
Joined: Fri May 11, 2012 7:25 pm

Re: Human Shape Space

Postby Manuel » Sun May 27, 2012 12:00 pm

Altaica wrote:Blanz and Vetter model facial variation, using dense surface and color data. They use the term morphable model to describe the idea of creating a single surface representation that can be adapted to all of the example faces. Using a polygon mesh representation, each vertex's position and color may vary between examples, but its semantic identity must be the same; e.g., if a vertex is located at the tip of the nose in one face, then it should be located at the tip of the nose in all faces. Thus, the main challenge in constructing the morphable model is to reparameterize the example surfaces so that they have a consistent representation.


Of course, we know these studies.
Anyway, we are not interested in morphing arbitrary topologies. The MH base mesh[*] has one of the best topologies in the world of 3D human characters. It's suitable for retouching by hand, for use in ZBrush etc., in games, and even in crowd simulation. It strikes a very good balance between quality and polycount, it's easy to rig, and it comes with detached eyes, teeth and tongue. Also, quads are very important from a modelling point of view.

Our current goal is to provide a set of realistic characters by fitting our own mesh to 3D scans.
A couple of years ago, one of our researchers, Alexis Mignon, coded a good fitting algorithm using SVD (see result below), but the problem (that stopped us) was retrieving the facial feature points (about 20 points) on the scans. It's not trivial.
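
Just to illustrate what those feature points are needed for: once you do have the ~20 corresponding points on both meshes, a rigid Procrustes-style alignment can be computed with SVD, something like the sketch below (an illustration only, not Alexis' actual code):

[code]
# Sketch: rigid alignment (rotation + translation) of corresponding landmark
# points via SVD (Kabsch / Procrustes analysis).
import numpy as np

def rigid_align(src_markers, dst_markers):
    """src_markers, dst_markers: (K, 3) arrays of corresponding feature points,
    e.g. ~20 facial landmarks on the basemesh and on the scan.
    Returns a rotation R and translation t mapping src onto dst."""
    src_c = src_markers.mean(axis=0)
    dst_c = dst_markers.mean(axis=0)
    H = (src_markers - src_c).T @ (dst_markers - dst_c)    # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                               # avoid a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# aligned_markers = (R @ src_markers.T).T + t
[/code]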

Would you like to help us?

// Manuel

[*]And the proxies too.
Last edited by Manuel on Thu Apr 05, 2018 5:21 pm, edited 1 time in total.
Manuel
 

Re: Human Shape Space

Postby duststorm » Sun May 27, 2012 1:04 pm

As Manuel said, in relation to the Blanz and Vetter paper: we already have a generalized model, the basemesh, which has a lot of very nice qualities that might be difficult to get from a purely automated approximation.
What I'm interested in experimenting with is how well the MH basemesh is suited to being fitted to 3D scans. If we can devise a technique to create something like custom targets for the MH mesh (maybe together with additional normal/displacement maps) that transform it into the body of the 3D scan, and later maybe even the face, I believe we would really have something that has not been done before: high-quality 3D people from 3D scans, automatically rigged, with a polycount that allows realtime rendering. Together with all the other beautiful features that MH already offers, or will have in the future.
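
To sketch what such a custom target could look like once a scan has been fitted to the basemesh topology (the plain-text offset format below is a simplified assumption, not necessarily the exact MH target format):

[code]
# Sketch: deriving a custom target from a scan that has already been fitted
# to the basemesh topology. The target is just the per-vertex offset between
# the fitted mesh and the basemesh.
import numpy as np

def write_target(base_vertices, fitted_vertices, path, eps=1e-5):
    """Write the per-vertex offsets (fitted - base) that morph the basemesh
    into the fitted scan. Near-zero offsets are skipped to keep the file small."""
    offsets = np.asarray(fitted_vertices, dtype=float) - np.asarray(base_vertices, dtype=float)
    with open(path, 'w') as f:
        for i, (dx, dy, dz) in enumerate(offsets):
            if np.linalg.norm((dx, dy, dz)) > eps:
                f.write("%d %.6f %.6f %.6f\n" % (i, dx, dy, dz))
[/code]
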
MakeHuman™ developer
duststorm
 
Posts: 2569
Joined: Fri Jan 27, 2012 11:57 am
Location: Belgium

Re: Human Shape Space

Postby Altaica » Thu May 31, 2012 6:00 am

Manuel wrote:Of course, we know these studies.
Anyway, we are not interested in morphing arbitrary topologies. The MH base mesh[*] has one of the best topologies in the world of 3D human characters.

<sarcasm>Oh, well, if you already have a method to put the range scan data into the MH base mesh topology...</sarcasm>
:roll:


Manuel wrote:Our current goal is to provide a set of realistic characters by fitting our own mesh to 3D scans.
A couple of years ago, one of our researchers, Alexis Mignon, coded a good fitting algorithm using SVD (see result below), but the problem (that stopped us) was retrieving the facial feature points (about 20 points) on the scans. It's not trivial.

Would you like to help us?

Do you have anyone actually working on registration techniques?
Last edited by Altaica on Thu May 31, 2012 1:31 pm, edited 1 time in total.
Altaica
 
Posts: 7
Joined: Fri May 11, 2012 7:25 pm

Re: Human Shape Space

Postby Manuel » Thu May 31, 2012 10:53 am

Altaica wrote:
Manuel wrote:Of course, we know these studies.
Anyway, we are not interested in morphing arbitrary topologies. The MH base mesh[*] has one of the best topologies in the world of 3D human characters.

<sarcasm>Oh, well, if you already have a method to put the range scan data into the MH base mesh topology...</sarcasm>
:roll:


Manuel wrote:Our current goal is to provide a set of realistic characters by fitting our own mesh to 3D scans.
A couple of years ago, one of our researchers, Alexis Mignon, coded a good fitting algorithm using SVD (see result below), but the problem (that stopped us) was retrieving the facial feature points (about 20 points) on the scans. It's not trivial.

Would you like to help us?

Do you have anyone actually working on registration techniques?

Uhm... sarcasm?
Maybe it's because of my English that the sentence above can sound arrogant, but that was not my intention.
We are not interested in morphing arbitrary, dense, triangulated topology, because we are focused on morphing the MH topology. That's all.
What do you mean by "registration techniques"?
Manuel
 
