Example-based natural three-dimensional expression generation for virtual human faces


Bibliographic Details
Main Author: Zhu, Lijia
Format: Others
Language: en
Published: University of Ottawa (Canada), 2013
Subjects:
Online Access:http://hdl.handle.net/10393/27435
http://dx.doi.org/10.20381/ruor-12083
Description
Summary: This thesis proposes a methodology that produces natural facial expressions for 3D human face models by making use of an example databank. Firstly, guided by specified feature point motions, an approach is introduced for generating facial expressions by blending the examples in an expression databank; the optimized blending weights are obtained implicitly using a Genetic Algorithm (GA). Secondly, a consistent parameterization technique is presented to model the desired structured human face by processing raw laser-scanned face data. Based on this technique, we construct a facial motion databank consisting of consistently parameterized facial meshes. Thirdly, rather than producing novel facial expressions for a different subject from scratch, an efficient method is presented to retarget facial motions from one person to another. Finally, these techniques are combined to reconstruct 3D facial motions (surface motions) from motion capture data (feature point motions).
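The first step above, finding blending weights for databank examples so that the blended result matches specified feature point motions, can be sketched with a simple genetic algorithm. This is a minimal illustrative toy, not the thesis implementation: the example displacement vectors, the target motion, the fitness function, and all GA parameters (population size, mutation rate, averaging crossover) are assumptions made for the sketch.

```python
import random

# Hypothetical toy data: each "example expression" is a flattened
# displacement vector of the face's feature points. A new expression
# is a weighted blend (linear combination) of these examples.
EXAMPLES = [
    [1.0, 0.0, 0.0, 0.5],
    [0.0, 1.0, 0.0, 0.5],
    [0.0, 0.0, 1.0, 1.0],
]
TARGET = [0.5, 0.3, 0.2, 0.55]  # desired feature point motion (assumed)

def blend(weights):
    """Weighted sum of the example displacement vectors."""
    return [sum(w * ex[i] for w, ex in zip(weights, EXAMPLES))
            for i in range(len(TARGET))]

def fitness(weights):
    """Negative squared error between blended and target motions."""
    return -sum((b - t) ** 2 for b, t in zip(blend(weights), TARGET))

def optimize_weights(pop_size=40, generations=200, mut=0.1, seed=0):
    rng = random.Random(seed)

    def random_individual():
        w = [rng.random() for _ in EXAMPLES]
        s = sum(w)
        return [x / s for x in w]  # normalize weights to sum to 1

    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # selection: keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]            # crossover
            child = [max(0.0, x + rng.gauss(0, mut)) for x in child]  # mutation
            s = sum(child) or 1.0
            children.append([x / s for x in child])
        pop = survivors + children
    return max(pop, key=fitness)

best = optimize_weights()
```

After the run, `best` holds normalized blending weights whose blended feature point motion approximates `TARGET`; in the thesis pipeline the same optimized weights would then drive the full facial surface, not just the feature points.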