Realistic 3D Facial Animation Using Parameter-based Deformation


Bibliographic Details
Main Authors: Yi-Chih Liu, 劉奕志
Other Authors: Sheng-De Wang
Format: Others
Language: en_US
Published: 2006
Online Access: http://ndltd.ncl.edu.tw/handle/94346593477724711265
Description
Summary: Master's thesis === National Taiwan University === Graduate Institute of Electrical Engineering === 94 === Animated face models are essential to 3D games, movies, online chat, virtual presence, and video conferencing. Some commercially available tools use 3D laser scanners to acquire facial geometry, but such scanners are expensive and not widely available. In this thesis, we present a method that uses inexpensive computers and video cameras to produce face models directly from captured images. The approach efficiently synthesizes realistic facial expressions from only two facial images applied to a 3D facial muscle model. This model simulates facial dynamics through muscle-based computation, and the facial muscle parameters can be estimated from captured image sequences. In addition, this thesis proposes a face detection algorithm. First, a YCbCr skin color model is used to detect the candidate face region of the image. Second, the facial feature points are located using the symmetry of the face and the gray-level characteristics of the eyes and mouth. From the positions of these feature points on the facial image, we measure how much the face deforms when an expression appears, and we then synthesize realistic facial animations from these measurements. To demonstrate feasibility, we implemented the system on a Windows XP PC. We identified the conditions under which high-quality animation is achievable by tuning the number of polygons that form the 3D face model and the stiffness values applied to the spring models embedded in the face model, and obtained facial expression animations of reasonable quality. We hope this method can be applied to video conferencing on mobile phones in the future.
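The first stage of the face detection algorithm described above, YCbCr skin-color thresholding, can be sketched as follows. This is a minimal illustration only: the BT.601 conversion is standard, but the chroma threshold ranges are common illustrative values from the skin-detection literature, not the exact parameters of this thesis.

```python
import numpy as np

def skin_mask(rgb):
    """Return a boolean mask of likely skin pixels.

    rgb: uint8 array of shape (H, W, 3).
    Converts RGB to YCbCr (ITU-R BT.601) and thresholds the chroma
    channels. The Cb/Cr ranges below are illustrative assumptions,
    not the thesis's exact values.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # BT.601 chroma components; luminance Y is not needed for the mask
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
```

The connected region of skin pixels would then be taken as the candidate face area, within which the eye and mouth feature points are located by symmetry and gray-level cues.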
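The abstract also mentions spring models embedded in the face mesh, whose stiffness values affect animation quality. A generic mass-spring step of the kind such a model typically uses might look like the sketch below; the thesis's actual integration scheme and muscle force model are not specified in the abstract, so this is only an assumed Hookean example with unit vertex masses.

```python
import numpy as np

def spring_step(pos, vel, edges, rest_len, stiffness, damping, dt):
    """One explicit Euler step of a mass-spring mesh (unit masses).

    pos, vel : (N, 3) vertex positions and velocities.
    edges    : (E, 2) vertex index pairs forming springs.
    rest_len : (E,) rest length of each spring.
    Illustrative sketch only; not the thesis's exact formulation.
    """
    force = np.zeros_like(pos)
    for (i, j), l0 in zip(edges, rest_len):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        # Hooke's law: force proportional to extension, along the spring
        f = stiffness * (length - l0) * d / length
        force[i] += f
        force[j] -= f
    force -= damping * vel          # simple velocity damping
    vel = vel + dt * force          # unit mass: acceleration == force
    pos = pos + dt * vel
    return pos, vel
```

Higher stiffness makes the mesh follow muscle-driven displacements more rigidly, which is consistent with the abstract's observation that stiffness tuning (together with polygon count) determines animation quality.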