Blanz and Vetter model facial variation using dense surface and color data. They use the term morphable model to describe the idea of creating a single surface representation that can be adapted to all of the example faces. Using a polygon mesh representation, each vertex's position and color may vary between examples, but its semantic identity must be the same; e.g., if a vertex is located at the tip of the nose in one face, then it should be located at the tip of the nose in all faces. Thus, the main challenge in constructing the morphable model is to reparameterize the example surfaces so that they have a consistent representation.
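A minimal sketch of what that consistent representation buys you, assuming every example face has already been reparameterized to the same N vertices in correspondence (the array names here are illustrative, not from the paper):

[code]
import numpy as np

def blend_faces(example_shapes, example_colors, weights):
    """Blend example faces into a new face.

    example_shapes: (M, N, 3) array, M faces x N xyz vertex positions
    example_colors: (M, N, 3) array, per-vertex RGB for each face
    weights:        length-M blend weights
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the result stays in the face space
    shape = np.tensordot(w, example_shapes, axes=1)  # (N, 3)
    color = np.tensordot(w, example_colors, axes=1)  # (N, 3)
    return shape, color
[/code]

Because vertex k means the same thing on every face (nose tip everywhere, etc.), a weighted average of positions and colors produces a plausible new face rather than mush.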
If you don't have a consistent representation, this type of stuff happens:


duststorm wrote:You're right that MH works with something like an RVK (Relative Vertex Keys) concept instead of with the point cloud approach the paper proposes.
Which paper is that? One that uses Principal Component Analysis to blend between targets?
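For what it's worth, here's a rough sketch of the RVK idea as I understand it: each target stores per-vertex offsets from the base mesh, and a character is built by summing weighted offsets (the names are illustrative, not MakeHuman's actual API):

[code]
import numpy as np

def apply_targets(base_verts, target_deltas, weights):
    """base_verts: (N, 3) positions; target_deltas: name -> (N, 3) offsets;
    weights: name -> blend weight."""
    result = base_verts.copy()
    for name, w in weights.items():
        result += w * target_deltas[name]  # offsets stack additively
    return result
[/code]

A PCA morphable model does the same arithmetic, except the offsets are orthogonal basis vectors learned from example meshes rather than hand-sculpted targets.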
duststorm wrote:The power of MH lies in the fact that we use a very high quality basemesh with good topology. This makes it an ideal fit for animation and for editing by artists (both in classic 3D editors like Blender or Max, or with sculpting tools), as it is constructed entirely of quads instead of tris.
Tris are acceptable in a polygon mesh. The popular Catmull-Clark subdivision (limit) surfaces make no lasting distinction between n-gons and quads: every n-gon is converted to quads (plus a pole) after the first subdivision step.
duststorm wrote:A problem with meshed pointclouds that come straight from a 3D scan is..
unicorn infestations?

Point clouds don't magically skin themselves.
A quick side note: I found this: the Point Cloud Library (PCL), a standalone, large-scale, open project for 3D point cloud processing, which claims to have an algorithm that provides a very fast triangulation of the original points.
duststorm wrote:The raw result of such a scan is an unordered surface of triangles that are quite randomly placed. This is very difficult to edit or animate without retopologizing it first.
I believe that MH has very strong potential for creating animatable characters from 3D scans.
The Naked Truth: Estimating Body Shape Under Clothing

duststorm wrote:We are investigating how we can parametrize and transform a 3D scan into a target for makehuman.
A technique for fitting your basemesh, B, to a scanned example surface, T: to accomplish the match, employ an optimization framework. Each vertex vi in the basemesh is influenced by a 4x4 affine transformation matrix Ti. These transformation matrices comprise the degrees of freedom in the optimization, i.e., twelve degrees of freedom per vertex to define an affine transformation (the bottom row of each matrix is fixed, so only the 3x4 upper portion varies). You wish to find a set of transformations that move all of the points in B to a deformed surface B′, such that B′ matches well with T. You evaluate the quality of the match using a set of error functions: data error, smoothness error, and marker error.
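For reference, Allen et al. combine the three terms into a single weighted objective, E = α·Ed + β·Es + γ·Em, which is minimized over all of the Ti; the relative weights are varied over the course of the optimization.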
Data error
The first criterion of a good match is that the basemesh should be as close as possible to the target surface. To this end, define the data objective term as the weighted sum of the squared distances between each vertex of the deformed basemesh and the closest compatible point on T. A point is compatible when its surface normal is no more than 90° from the vertex's normal (so that front-facing surfaces will not be matched to back-facing surfaces) and the distance between the two points is within a threshold (Allen et al. used a threshold of 10 cm in their experiments).
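Here is a rough sketch of that data term, assuming the scan T is given as points with normals and using nearest scan points as a stand-in for true closest points on the surface (function and variable names are mine, not from the paper):

[code]
import numpy as np
from scipy.spatial import cKDTree

def data_error(deformed_verts, vert_normals, scan_pts, scan_normals,
               weights, dist_thresh=0.10, k=8):
    """Weighted sum of squared distances to the nearest compatible scan point."""
    tree = cKDTree(scan_pts)                      # nearest-neighbour index over T
    dists, idxs = tree.query(deformed_verts, k=k)
    total = 0.0
    for i in range(len(deformed_verts)):
        for d, j in zip(np.atleast_1d(dists[i]), np.atleast_1d(idxs[i])):
            if d > dist_thresh:
                break  # neighbours are sorted; the rest are even farther
            # normals less than 90 degrees apart: match front faces only
            if np.dot(vert_normals[i], scan_normals[j]) > 0.0:
                total += weights[i] * d * d
                break  # nearest compatible point found for this vertex
    return total
[/code]

Vertices with no compatible point within the threshold simply contribute nothing, which is how holes in the scan are tolerated.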
The smoothness error penalizes differences between the Ti transformations of adjacent vertices. The marker error penalizes the distance between the marker points on the transformed surface and the corresponding markers on T.
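And matching sketches of the other two terms, with transforms stored as an (N, 4, 4) array of per-vertex affine matrices (only the top three rows vary) and edges listing pairs of adjacent vertex indices (again, the names are illustrative):

[code]
import numpy as np

def smoothness_error(transforms, edges):
    """Penalize the (squared Frobenius) difference between the
    transformations of adjacent vertices."""
    return sum(float(np.sum((transforms[i] - transforms[j]) ** 2))
               for i, j in edges)

def marker_error(deformed_verts, marker_vert_ids, scan_marker_pts):
    """Squared distances between marker vertices on the deformed basemesh
    and the corresponding marker points measured on T."""
    diffs = deformed_verts[marker_vert_ids] - scan_marker_pts
    return float(np.sum(diffs ** 2))
[/code]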
duststorm wrote:PS: The Procrustica tool (http://web.archive.org/web/201101082107 ... shape.net/) you mentioned is unfortunately not online anymore. Do you by any chance still know a place where it is available, as I would very much like to experiment a bit with it?
No. The link I gave is what's on the World Engineering Anthropometry Resource's site, and it was working a few months ago. You could try asking Chang Shu.