As a guy with limited binocular capabilities (my eyes are fairly close together), it's interesting that this study is apparently based entirely on 2D morphing rather than on 3D patterns with depth: (
http://www.cell.com/cms/attachment/2095 ... 5/mmc1.mp4) . Does this mean that binocular perception is of minor importance? My recollection is that facial recognition happens on the inferior (temporal) portions of the cortex rather than in the occipital lobe and the surrounding visual association cortex. The article seems to focus on "recognition". I wonder whether our "emotional reads" involve processing by those same neurons.
When I look at the control points, though, I'm reminded of Manuel's choices for face rig bone positions. Because MH doesn't "come with hairline", and the hairline seems an intrinsic part of the recognition system, the facial rig may require some tweaking for 3D psychology work. A hair system (or systems) remains a limitation for MH at present, but bones could move poly hair much like the 2D morphing program does. It will be interesting to see how MB Labs approaches this issue, if it ever goes there. I also wonder whether lattice-based rigging has any benefits over bone-based rigging for facial recognition and expression systems.
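For what it's worth, the kind of control-point morphing the video shows boils down to a linear blend between two sets of landmark coordinates. Here's a minimal sketch of that idea in Python; the landmark positions and the `morph` function are purely illustrative, not taken from the study or from MH's actual rig:

```python
# Minimal sketch of 2D control-point morphing: each "face" is a list of
# (x, y) landmarks, and a morph is a linear blend between two such lists.
# The coordinates below are toy values, not real facial landmarks.

def morph(points_a, points_b, t):
    """Blend two landmark lists; t=0 returns face A, t=1 returns face B."""
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(points_a, points_b)]

# Two toy faces, three landmarks each (say: left eye, right eye, chin).
face_a = [(0.0, 0.0), (2.0, 0.0), (1.0, -2.0)]
face_b = [(0.2, 0.1), (1.8, 0.1), (1.0, -2.4)]

# Each intermediate frame of a morph video is just one value of t.
halfway = morph(face_a, face_b, 0.5)
```

A bone-driven rig does essentially the same thing in 3D, with bone transforms standing in for the control points, which is why bones moving poly hair along with the face doesn't seem like a stretch.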