Facial Model Fitting and Expression Animation for Realistic Talking Head Application


Bibliographic Details
Main Authors: Hueng-Pei Lin, 林宏沛
Other Authors: Tai-Wen Yue
Format: Others
Language: en_US
Published: 2002
Online Access: http://ndltd.ncl.edu.tw/handle/42456364736954171130
Description
Summary: Master's thesis === Tatung University === Graduate Institute of Computer Science and Engineering === 90 === Building 3D models of human faces and animating facial expressions realistically are difficult tasks in computer graphics, particularly when done manually. To reduce this laborious work, we propose a photo-based 3D facial model fitting technique and a parametric muscle model to automate the model construction process and to animate the rich facial expressions of human beings realistically.

A parametric muscle model was adopted for facial animation. This approach inserts contractile muscles at anatomically correct positions within a facial model to mimic the skin deformation of a human face. By controlling the parameters of these muscles, typical facial expressions such as happiness, anger, sadness, and surprise can be synthesized in advance. Miscellaneous facial expressions can then be generated by linear or convex combinations of the parameters of the typical expressions.

The facial model fitting process discussed in the thesis is based on the direct linear transformation (DLT). In this approach, multiple cameras capture face images of an individual. By properly identifying the corresponding facial feature points in these 2D images, their positions in object space can be measured, i.e., computed based on the DLT. To achieve high-precision measurement of the feature points, lens distortion is taken into account when calibrating the multiple cameras.

After certain feature points located on the face silhouette are measured, a volumetric scattered-data interpolation algorithm is applied to deform a generic face mesh to fit the individual's facial geometry. The same deformation also automatically moves the original muscles attached to the generic mesh onto the fitted face mesh. This saves the animator a great deal of laborious work in building the mesh and attaching the muscles manually. To represent an individual's face realistically, texture mapping is also an important task.
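The convex combination of expression parameters described in the abstract can be sketched as follows. The muscle count, expression names, and parameter values here are illustrative placeholders, not values from the thesis:

```python
import numpy as np

# Hypothetical muscle-contraction vectors (6 muscles) for some typical
# expressions; the actual muscle layout in the thesis may differ.
expressions = {
    "neutral":   np.zeros(6),
    "happiness": np.array([0.8, 0.8, 0.1, 0.0, 0.2, 0.0]),
    "anger":     np.array([0.0, 0.1, 0.9, 0.7, 0.0, 0.3]),
    "surprise":  np.array([0.2, 0.0, 0.0, 0.1, 0.9, 0.8]),
}

def blend_expressions(weights):
    """Convex combination of typical expression parameter vectors.

    `weights` maps expression name -> non-negative weight; weights are
    normalized to sum to 1, so the result stays inside the convex hull
    of the typical expressions.
    """
    total = sum(weights.values())
    if total <= 0:
        raise ValueError("weights must have a positive sum")
    out = np.zeros(6)
    for name, w in weights.items():
        out += (w / total) * expressions[name]
    return out

# A face halfway between happiness and surprise:
params = blend_expressions({"happiness": 1.0, "surprise": 1.0})
```

Because the weights are normalized, any mixture of the presynthesized expressions yields a valid parameter vector for the muscle model.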
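The DLT-based measurement of a feature point from several calibrated views can be sketched as below. This is the standard homogeneous linear triangulation; the camera matrices here are toy examples, and the thesis additionally corrects for lens distortion before this step:

```python
import numpy as np

def triangulate_dlt(projections, points_2d):
    """Recover a 3D point from its projections in several calibrated cameras.

    projections: list of 3x4 camera projection matrices P_i
    points_2d:   list of corresponding (u, v) image observations

    Each observation contributes two rows (u*P[2] - P[0], v*P[2] - P[1])
    to the homogeneous system A X = 0; the least-squares solution is the
    right singular vector of A with the smallest singular value.
    """
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.stack(rows))
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy cameras observing the point (1, 2, 10):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])]) # shifted along x
X_true = np.array([1.0, 2.0, 10.0])
X_est = triangulate_dlt([P1, P2], [project(P1, X_true), project(P2, X_true)])
```

With more than two cameras the same system simply gains extra rows, which is how the multi-camera setup in the thesis improves precision.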
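The volumetric scattered-data interpolation that deforms the generic mesh can be sketched with radial basis functions. The thesis does not specify the kernel; the Gaussian kernel and the regularization constant below are assumptions for illustration:

```python
import numpy as np

def fit_rbf_deformation(src_pts, dst_pts):
    """Fit an interpolant mapping feature points on the generic mesh
    (src_pts, Nx3) to their measured positions on the individual's face
    (dst_pts, Nx3), using Gaussian radial basis functions (an assumption;
    the thesis leaves the kernel unspecified)."""
    sigma = np.mean(np.linalg.norm(
        src_pts[:, None] - src_pts[None, :], axis=-1)) + 1e-9

    def kernel(a, b):
        d = np.linalg.norm(a[:, None] - b[None, :], axis=-1)
        return np.exp(-(d / sigma) ** 2)

    K = kernel(src_pts, src_pts)
    # Per-axis RBF weights that reproduce the feature displacements.
    weights = np.linalg.solve(K + 1e-8 * np.eye(len(src_pts)),
                              dst_pts - src_pts)

    def deform(pts):
        # Applies to every mesh vertex -- and to muscle attachment points,
        # which is how the muscles follow the fitted mesh automatically.
        return pts + kernel(pts, src_pts) @ weights

    return deform

# Toy feature points on the generic mesh and their measured targets:
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
dst = src + np.array([[0.1, 0., 0.], [0., 0.1, 0.],
                      [0., 0., 0.1], [0.1, 0.1, 0.]])
deform = fit_rbf_deformation(src, dst)
fitted = deform(src)
```

Evaluating `deform` on all vertices of the generic mesh moves the feature points exactly onto their measured positions and carries the remaining vertices along smoothly.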
In the thesis, we also propose a simple texture mapping technique that extracts texture information from the images captured by the participating cameras. As a result, the texture map for each individual uses the same texture indices but a different texture image. The texture image, in turn, is built by blending the view-dependent texture images of the participating cameras. This greatly reduces the algorithmic complexity of recomputing the texture indices for different individuals. Experimental results are demonstrated in the thesis to confirm the feasibility and efficiency of our approach.
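A per-texel view of the blending step can be sketched as follows. The cosine weighting by how frontally each camera sees the surface is a common choice for view-dependent texturing, but it is an assumption here; the abstract only states that the texture image is blended from the cameras' view-dependent images:

```python
import numpy as np

def blend_view_textures(texel_colors, view_dirs, normal):
    """Blend one texel's color samples from several cameras.

    texel_colors: list of RGB samples, one per camera
    view_dirs:    list of directions from the surface toward each camera
    normal:       surface normal at the texel

    Weighting by the cosine between view direction and normal (an
    assumed heuristic) favors cameras that see the surface head-on;
    back-facing cameras get zero weight.
    """
    normal = normal / np.linalg.norm(normal)
    weights = []
    for d in view_dirs:
        d = d / np.linalg.norm(d)
        weights.append(max(float(np.dot(d, normal)), 0.0))
    weights = np.array(weights)
    if weights.sum() == 0.0:
        return np.mean(np.asarray(texel_colors, float), axis=0)
    weights /= weights.sum()
    return np.einsum("i,ij->j", weights, np.asarray(texel_colors, float))

# Two cameras: one head-on, one at a grazing angle.
colors = [np.array([200., 100., 50.]), np.array([180., 120., 60.])]
dirs = [np.array([0., 0., 1.]), np.array([1., 0., 0.1])]
color = blend_view_textures(colors, dirs, normal=np.array([0., 0., 1.]))
```

Because the blending happens in the texture image while the texture indices stay fixed, only this image needs to be rebuilt per individual, matching the complexity saving claimed in the abstract.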