Blender Lip-Syncing for The Movies Heads

These tools are intended to assist animators who want to use assets from The Movies, a video game by Lionhead, to make videos in Blender. Blender version 2.49b was used in development, but the Python scripts here should also be compatible with some earlier versions in the 2.4x series.

Character heads imported from "The Movies" come with face controls, or "bones," which the game presumably uses to create facial expressions and perform lip-syncing animation. These tools allow you to create your own lip-syncing animations, not for export back to the game, but for making animated videos in Blender.

Scripts:

phoneme_poses.py
tm_lipsync.py

Resource files:
jenny.lip
setup.dat
moho.png

Other Resources:
Papagayo

The system described here is based on the set of 10 phonemes used by MOHO, a 2-D animation program. The Papagayo package includes images of the mouth expressions (visemes) corresponding to these phonemes.

MOHO Visemes

You can easily learn Papagayo by reading the instructions included in the package. Papagayo converts text into phonemes and helps you synchronize those phonemes with a graphical view of the sound waveform. It outputs a file specifying which phoneme expression should be displayed at each animation keyframe.
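Papagayo's switch file is plain text. As a minimal sketch (assuming the common layout of a "MohoSwitch1" header line followed by "frame phoneme" pairs, which is presumably what tm_lipsync.py reads), it can be parsed like this:

```python
def parse_moho_switch(lines):
    """Parse MOHO switch data into a list of (frame, phoneme) tuples,
    skipping blank lines and the "MohoSwitch1" format header."""
    keys = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("MohoSwitch"):
            continue  # skip blanks and the format header
        frame, phoneme = line.split(None, 1)
        keys.append((int(frame), phoneme))
    return keys
```

For example, `parse_moho_switch(open("dialogue.dat"))` (where "dialogue.dat" is a hypothetical Papagayo export) yields the keyframe list in playback order. The syntax is deliberately kept compatible with the Python 2.x interpreter bundled with Blender 2.4x.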

The script I am providing, tm_lipsync.py, imports the file output by Papagayo and applies the appropriate settings to the face controls at the corresponding keyframes to create the desired viseme poses.

The visemes corresponding to each phoneme are specified in a file with a .lip extension. For best results, these poses should be customized for different character heads. Included with this package is a .lip file which I created for the generic_facial_skin.msh.

In order to create your own mouth poses for your character head, set up the expressions in frames 1-10 using the MOHO mouths image above as a guide. You will probably need to regroup the lower-teeth vertices to "fa_jaw" so that the jaw control moves the lower teeth as it should. Once you are satisfied with all 10 expressions, export the data to a .lip file with a unique name that identifies the head it was designed for (execute the script phoneme_poses.py).

When you run "tm_lipsync.py", a file selection dialog first asks for the .lip file to be used for the lip-syncing, then for a MOHO switch file (extension .dat) that specifies the complete lip-syncing animation.

If you do not want to create your own viseme poses from scratch, you can load the included MOHO file "setup.dat". This initializes the first 10 frames in Blender using whatever .lip file you have. You can then modify the viseme poses in Blender and export them to a new .lip file using "phoneme_poses.py".
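A file equivalent to setup.dat simply keys one phoneme per frame. As a sketch (assuming the MohoSwitch1 text format, and an illustrative frame order with "rest" last), such a file could be generated like this:

```python
def write_setup_dat(path, phonemes=("AI", "E", "O", "U", "WQ",
                                    "MBP", "L", "FV", "etc", "rest")):
    """Write a MOHO switch file keying one phoneme per frame (1-10),
    mirroring what setup.dat does for editing viseme poses."""
    f = open(path, "w")  # try/finally instead of "with" for old Python
    try:
        f.write("MohoSwitch1\n")
        for i, phoneme in enumerate(phonemes):
            f.write("%d %s\n" % (i + 1, phoneme))
    finally:
        f.close()
```

Running the resulting file through tm_lipsync.py poses frames 1-10 from your .lip file, ready for editing and re-export.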

You can import the WAV audio file for the dialogue into Blender using the sequence editor so that you can hear the speech when you play the animation.

You can download the lip-sync package here.
