Lip-Synching Using Speaker-Specific Articulation, Shape and Appearance Models

We describe here the control, shape, and appearance models built using an original photogrammetric method to capture the characteristics of speaker-specific facial articulation, anatomy, and texture. Two original contributions are put forward: the trainable trajectory formation model that p...
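As an illustration only (not the authors' implementation, which is detailed in the full text), the sketch below shows a PCA-based linear shape model of the kind commonly used to represent speaker-specific facial shapes from captured marker data. All names (frames, build_shape_model, n_components, etc.) are hypothetical.

```python
# Minimal sketch of a linear (PCA) shape model, assuming motion-capture-style
# input; the paper's actual articulation/shape/appearance models and trajectory
# formation system differ and are described in the article itself.
import numpy as np

def build_shape_model(frames: np.ndarray, n_components: int = 5):
    """Build a linear shape model.

    frames: (n_frames, 3 * n_markers) array, each row the flattened 3D
            coordinates of facial markers for one captured frame.
    Returns the mean shape, the top eigen-shapes, and per-frame parameters.
    """
    mean_shape = frames.mean(axis=0)
    centered = frames - mean_shape
    # SVD of the centered data yields the principal shape/articulation axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]        # (n_components, 3 * n_markers)
    weights = centered @ components.T     # low-dimensional control parameters
    return mean_shape, components, weights

def synthesize_shape(mean_shape, components, params):
    """Reconstruct a face shape from a small parameter vector."""
    return mean_shape + params @ components

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(200, 3 * 30))  # toy data: 200 frames, 30 markers
    mean_shape, comps, w = build_shape_model(frames)
    print(synthesize_shape(mean_shape, comps, w[0]).shape)  # (90,)
```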


Bibliographic Details
Main Authors: Gaspard Breton, Frédéric Elisei, Oxana Govokhina, Gérard Bailly
Format: Article
Language: English
Published: SpringerOpen 2009-01-01
Series: EURASIP Journal on Audio, Speech, and Music Processing
Online Access: http://dx.doi.org/10.1155/2009/769494