
Look who's talking

Posted: Tue Dec 31, 2013 10:32 am
by ThomasL
http://youtu.be/3x7cvZp3CtE

First test with facial mocap. I should have deleted the stray keyframes on the left eye, but I didn't want to spend another hour re-rendering.

The workflow essentially follows Sebastian König's Vimeo tutorial "How to use face tracking data in Blender" (http://vimeo.com/24016505), although most of it has been scripted. The scripts can be found in the sandbox of the MakeHuman SVN repository (utils/tools/sandbox).
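
For anyone curious about what the scripts automate, here is a minimal sketch of the idea, written against the Blender 2.6x Python API of the time (2.8+ links objects via scene.collection instead). It is not the actual sandbox code: the clip name "face.mov", the armature name "FaceRig", and the assumption that tracks and bones share names are all placeholders.

[code]
import bpy

clip = bpy.data.movieclips["face.mov"]   # assumed clip name
rig = bpy.data.objects["FaceRig"]        # assumed armature name
scene = bpy.context.scene

for track in clip.tracking.tracks:
    # One empty per 2D track, pinned to the footage with a
    # Follow Track constraint, as in the tutorial.
    empty = bpy.data.objects.new(track.name, None)
    scene.objects.link(empty)
    con = empty.constraints.new(type='FOLLOW_TRACK')
    con.clip = clip
    con.track = track.name

    # One bone per marker: a Copy Location constraint makes the
    # bone follow its empty.
    bone = rig.pose.bones.get(track.name)
    if bone is not None:
        cl = bone.constraints.new(type='COPY_LOCATION')
        cl.target = empty
[/code]

Run from the Text Editor with the tracked clip loaded, something like this wires every marker to its bone in one go instead of setting up each constraint by hand.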

The mocap data is sample file 6 from Facemotion (http://www.easycapstudio.com/?q=support/sample-files). A drawback of the method is that there is a one-to-one correspondence between markers and bones in the face rig. If we want to use data recorded with a different marker setup, we need a different face rig.
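
To make the drawback concrete, a quick (equally hypothetical) check with the same placeholder names as above compares marker names against bone names, assuming the rig's bones were named after the markers:

[code]
import bpy

clip = bpy.data.movieclips["face.mov"]   # assumed clip name
rig = bpy.data.objects["FaceRig"]        # assumed armature name

markers = {t.name for t in clip.tracking.tracks}
bones = {b.name for b in rig.pose.bones}

# Markers without a bone cannot drive anything; bones without a
# marker simply stay put. Any mismatch means the data was recorded
# with a different marker setup and needs a different face rig.
print("unmatched markers:", sorted(markers - bones))
print("undriven bones:", sorted(bones - markers))
[/code]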

Re: Look who's talking

Posted: Tue Dec 31, 2013 8:56 pm
by Rhynedahll
Very interesting.

Do you plan to develop this into another tool?

Re: Look who's talking

Posted: Wed Jan 01, 2014 5:36 pm
by ThomasL
There should be some kind of face rig in alpha 9, but we don't yet know what it will look like. This is just an experiment.

Re: Look who's talking

Posted: Fri Jan 03, 2014 1:18 am
by duststorm
In any case, the result of this test looks promising.
I hope we can make it work with different types of faces, and with good visual quality.