Expression Detail Mapping for Realistic Facial Animation


Bibliographic Details
Main Authors: Pei-Hsuan Tu, 杜佩璇
Other Authors: Ming Ouhyoung
Format: Others
Language: zh-TW
Published: 2003
Online Access: http://ndltd.ncl.edu.tw/handle/87064300499952393661
Description
Summary: Master's thesis === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === Academic year 91 (2002) === Facial animation has been one of the most important research topics in computer graphics over the past 30 years. Generating realistic results, rapid production, and ease of use are the main goals of facial animation systems. While conventional animation systems focus on reproducing feature motions, the accompanying changes in illumination are often ignored, even though such details are very important to human visual perception. In this thesis, we propose a facial animation system that captures from video clips both geometric information and the subtle illumination changes, called expression details, and the captured data can be widely applied to different 2D face images and 3D face models. While tracking the geometric data, we record the expression details as ratio images. For 2D facial animation synthesis, these ratio images are used to generate dynamic textures. For 3D facial animation, however, textures alone are insufficient to animate expression details, since their intensities can change dramatically with the incident and reflected angles of light. To handle this, we convert all captured ratio images into a sequence of normal maps and apply them when rendering the animated 3D model. With expression detail mapping, the resulting facial animations are more lifelike and more expressive. Moreover, because the facial animation data captured from a live subject's performance can be reused on different face models, a great deal of the animator's work can be saved.
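The record itself contains no code, but the ratio-image idea the abstract describes can be sketched: a ratio image is the per-pixel quotient of an expression frame and the neutral frame, which isolates illumination change (wrinkles, creases) from the base skin texture, and multiplying it onto a different neutral face transfers that detail. The function names and toy pixel values below are hypothetical, not from the thesis.

```python
import numpy as np

def ratio_image(expression, neutral, eps=1e-3):
    """Per-pixel ratio of an expression frame to the neutral frame.

    The ratio captures the illumination change caused by the expression
    (e.g. wrinkles) independently of the subject's base skin texture.
    """
    return expression / np.maximum(neutral, eps)

def apply_expression_detail(target_neutral, ratio):
    """Transfer the captured detail onto a different neutral face image."""
    return np.clip(target_neutral * ratio, 0.0, 1.0)

# Toy 2x2 grayscale "face": one corner brightens, one darkens.
neutral = np.array([[0.5, 0.5],
                    [0.5, 0.5]])
smiling = np.array([[0.6, 0.5],
                    [0.5, 0.4]])

r = ratio_image(smiling, neutral)          # [[1.2, 1.0], [1.0, 0.8]]
target = np.full((2, 2), 0.8)              # a different, brighter face
result = apply_expression_detail(target, r)
```

Because the ratio is multiplicative, the same captured sequence scales naturally to target faces with different base brightness, which is what makes the data reusable across 2D images.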
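The abstract's argument for normal maps in the 3D case is that detail intensity depends on the angle between the surface and the light, so a baked texture cannot respond when the light or head moves. A minimal Lambertian-shading sketch illustrates this dependence; the function and the sample normals are illustrative assumptions, not the thesis's actual renderer.

```python
import numpy as np

def shade_lambertian(normals, light_dir, albedo=1.0):
    """Diffuse (Lambertian) shading driven by a per-pixel normal map.

    normals:   (H, W, 3) array of unit surface normals
    light_dir: (3,) unit vector pointing toward the light
    Re-evaluating N.L every frame lets wrinkle intensity respond to the
    light direction, which a pre-baked texture cannot do.
    """
    n_dot_l = np.clip(normals @ light_dir, 0.0, None)  # clamp back-facing to 0
    return albedo * n_dot_l

# One surface point before and after a wrinkle forms (hypothetical normals).
light = np.array([0.0, 0.0, 1.0])
flat = np.array([[[0.0, 0.0, 1.0]]])       # faces the light head-on
tilted = np.array([[[0.6, 0.0, 0.8]]])     # tilted by the wrinkle

shaded_flat = shade_lambertian(flat, light)
shaded_tilted = shade_lambertian(tilted, light)
```

The tilted normal dims the point relative to the flat one purely through the angle term, which is the effect the abstract says textures alone cannot reproduce.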