VscorpianC has a nine-part video tutorial on using Papagayo to get a MakeHuman character to lip sync to spoken audio. If you just want to play around with it a bit to see whether it's something you want to make use of, there is a much quicker way to get started. A lot of her tutorial deals with creating new visemes because the ones included in the MHX2 runtime are too extreme. What I don't think she realized is that the visemes can simply be scaled in the Graph Editor. This way you can create anything from barely any movement all the way up to exaggerated yelling. You can also select and scale particular channels, so you can, for example, scale the mouth-open movements, which move the jaw, down to something appropriate for normal speaking.
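Numerically, scaling a channel in the Graph Editor amounts to pulling each keyframe value toward the rest (neutral) value by a constant factor. Here's a hypothetical sketch of that arithmetic in plain Python; the channel name and values are made up for illustration and don't come from any actual MHX2 data:

```python
# Illustration (assumed values) of what scaling a viseme channel
# in the Graph Editor does: each keyframe value is pulled toward
# the rest value by a constant factor.

def scale_channel(keyframes, factor, rest=0.0):
    """Scale (frame, value) keyframes about the rest value.

    factor < 1 damps the movement, factor > 1 exaggerates it.
    """
    return [(frame, rest + (value - rest) * factor)
            for frame, value in keyframes]

# e.g. damp a hypothetical jaw-open channel to 40% for normal speech
jaw_open = [(1, 0.0), (8, 1.0), (14, 0.3), (20, 0.0)]
print(scale_channel(jaw_open, 0.4))
```

In Blender itself you'd do the equivalent interactively: select the channels you want in the Graph Editor and scale the keyframes on the value axis.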
The Papagayo 1.2 distribution includes some samples, so if you want (near) instant gratification you can install Papagayo, open one of the samples, and export a .dat MOHO file. You can then load that with the MHX2 runtime and, voilà, you have a talking character. You will most likely want to scale some or all of the channels down in the Graph Editor unless you want really exaggerated movement. Here's an animation of an excerpt from one of the samples:
https://www.youtube.com/watch?v=DeLCKaiWYso
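The .dat file Papagayo exports is a simple text format. Assuming the usual MOHO switch layout (a `MohoSwitch1` header line followed by frame/phoneme pairs, one per line), a minimal sketch of reading it looks like this; the sample data below is invented for illustration:

```python
# A minimal sketch of parsing a Papagayo MOHO .dat export, assuming
# the file is a "MohoSwitch1" header followed by "frame phoneme" lines.

def parse_moho(lines):
    """Return a list of (frame, phoneme) tuples from MOHO .dat lines."""
    it = iter(lines)
    header = next(it).strip()
    if header != "MohoSwitch1":
        raise ValueError("not a MOHO switch file: %r" % header)
    cues = []
    for line in it:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        frame_str, phoneme = line.split(None, 1)
        cues.append((int(frame_str), phoneme))
    return cues

# Invented sample data in the assumed format
sample = """MohoSwitch1
1 rest
7 MBP
9 AI
15 rest
""".splitlines()

print(parse_moho(sample))
```

This is just to show how little there is to the format; the MHX2 runtime does this loading for you.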
Papagayo can be downloaded from:
http://www.lostmarble.com/papagayo/
As an interesting aside, I find that still renders captured mid-speech can sometimes add a little sense of movement and life to the character compared with a neutral expression.