
Papagayo-NG now automated!

PostPosted: Sat Jul 03, 2021 5:20 am
by HunanbeanCollective
Papagayo-NG, the software for creating lipsync timing/phoneme data to animate your characters, now includes Allosaurus automatic speech recognition!
This speeds up the process of making accurate lipsync data roughly 20-fold, and the results are astounding. It looks better than when I do the timing manually.
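
For anyone curious, here is a rough sketch of calling the Allosaurus recognizer directly from Python, which is the piece doing the heavy lifting. The wav file name is just an example, and the timestamp option is going by the Allosaurus README, so check it against the version Papagayo-NG actually ships with:

Code:
    from allosaurus.app import read_recognizer

    # Load the pretrained universal phone recognizer
    model = read_recognizer()

    # Plain recognition returns a space-separated string of IPA phones
    print(model.recognize("dialogue.wav"))

    # With the timestamp option, each output line is "start_time duration phone",
    # which is the timing information lipsync needs
    print(model.recognize("dialogue.wav", timestamp=True))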

This branch, https://github.com/steveway/papagayo-ng, has the speech recognition.
If you use my faceshapes.mxa file from https://github.com/Hunanbean-Collective/MakeHuman-Additions
and the lipsync plugin for Blender, https://github.com/iCEE-HAM/io_import_lipSync-blender2.8,
you will have your character talking in no time!

Re: Papagayo-NG now automated!

PostPosted: Tue Oct 05, 2021 8:23 am
by FendyWhite
Such software is just perfect for animators on YouTube. But isn't it dangerous for face recognition systems?

Re: Papagayo-NG now automated!

PostPosted: Fri Oct 08, 2021 12:35 am
by HunanbeanCollective
FendyWhite wrote:But isn't it dangerous for face recognition systems?


Perhaps you are confusing this with something else. This has nothing to do with facial recognition, per se.
This procedure takes spoken words, performs speech recognition on them, and then outputs that data as timing markers that correspond to shape keys/morph targets, which make your character move their lips to match what was spoken.
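
To give an idea of what that marker data turns into on the Blender side, here is a rough, untested sketch that reads a Papagayo-style MOHO export (one "frame mouth-shape" pair per line after the header) and keys shape keys on a mesh. The object name and shape key names here are made up; the io_import_lipSync plugin linked above does this properly, this is only to show the idea:

Code:
    import bpy

    # Made-up mapping from Papagayo's Preston Blair mouth shapes to shape key
    # names; adjust to whatever your faceshapes.mxa mesh actually uses.
    SHAPES = {
        "AI": "AI", "E": "E", "O": "O", "U": "U", "etc": "etc",
        "MBP": "MBP", "FV": "FV", "L": "L", "WQ": "WQ", "rest": "rest",
    }

    obj = bpy.data.objects["Character"]            # hypothetical object name
    keys = obj.data.shape_keys.key_blocks

    def apply_moho(path):
        with open(path) as f:
            # skip the "MohoSwitch1" header line, keep "frame shape" pairs
            rows = [ln.split() for ln in f.read().splitlines()[1:] if ln.strip()]
        events = [(int(frame), shape) for frame, shape in rows]
        for i, (frame, shape) in enumerate(events):
            name = SHAPES.get(shape)
            if name is None or name not in keys:
                continue
            sk = keys[name]
            # key the shape off just before it starts, on at its frame,
            # and off again when the next mouth shape begins
            sk.value = 0.0
            sk.keyframe_insert("value", frame=frame - 1)
            sk.value = 1.0
            sk.keyframe_insert("value", frame=frame)
            if i + 1 < len(events):
                sk.value = 0.0
                sk.keyframe_insert("value", frame=events[i + 1][0])

    apply_moho("/path/to/dialogue.dat")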