Human Facial Animation Based on Real Image Sequence


Bibliographic Details
Main Authors: Yen-Chun Yu, 顏俊育
Other Authors: John Y. Chiang
Format: Others
Language: zh-TW
Published: 2000
Online Access: http://ndltd.ncl.edu.tw/handle/62489779938599733341
id ndltd-TW-088NSYS5392041
record_format oai_dc
spelling ndltd-TW-088NSYS53920412016-07-08T04:22:57Z http://ndltd.ncl.edu.tw/handle/62489779938599733341 Human Facial Animation Based on Real Image Sequence 以實際影像序列為依據之人臉動作模擬 Yen-Chun Yu 顏俊育 Master's, National Sun Yat-sen University, Graduate Institute of Computer Science and Engineering, academic year 88. 3D animation has developed rapidly in multimedia applications such as computer games, virtual reality, and film. How to build a lifelike 3D model, especially one that reproduces facial expressions and vivid motion, is therefore a significant issue. At present, methods for constructing a 3D facial model fall into two categories: one builds the model from computer-graphics primitives such as geometric functions, polygons, or simple geometric shapes; the other measures a real face with hardware such as a laser scanning system or a three-dimensional digitizer. The principal methods for acquiring 3D facial expressions are keyframing, motion capture, and simulation. This research covers two areas: 1. Two CCD cameras simultaneously digitize the facial expressions of a real person from the left and right sides, and the captured standard images are saved. Matched feature points are then extracted from the two standard images in the spatial domain, and stereo vision is used to obtain the depth information needed to build the 3D facial model. 2. A single CCD camera continuously digitizes two facial expressions, and the coordinates of the matched feature points in the time domain are used to compute motion vectors. By combining the depth information from the spatial domain with the motion vectors from the time domain, the motion sequence of the 3D facial model is obtained. If enough digitized facial expressions are processed in this way, a database can be built.
By matching feature points between a 2D test image and a 2D standard image in the database, the standard image's depth information and motion vectors can be transferred to the test image, turning it into a 3D model that imitates the facial expressions of the standard image sequences. The feature-point matching between the test image and the standard images in the database is performed entirely by computer, eliminating unnecessary manual effort. John Y. Chiang 蔣依吾 2000 學位論文 ; thesis 74 zh-TW
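The abstract gives no formulas, but the two steps it describes, depth from a matched left/right CCD pair and motion vectors from temporally matched feature points, can be sketched under a standard rectified pinhole-camera assumption. All function names and numbers below are illustrative, not taken from the thesis:

```python
import numpy as np

def depth_from_disparity(x_left, x_right, focal_length, baseline):
    """Triangulate depth for feature points matched between a rectified
    left/right image pair, using the pinhole model Z = f * B / disparity.
    x_left/x_right are the horizontal pixel coordinates of matched points."""
    disparity = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    return focal_length * baseline / disparity

def motion_vectors(points_t0, points_t1):
    """Per-feature motion vectors between two frames from a single CCD:
    the displacement of each matched feature point over time."""
    return np.asarray(points_t1, dtype=float) - np.asarray(points_t0, dtype=float)

# Spatial domain: two matched points seen by the left and right cameras
# (focal length in pixels, baseline in meters -- illustrative values).
z = depth_from_disparity([100.0, 120.0], [90.0, 105.0],
                         focal_length=800.0, baseline=0.1)

# Time domain: the same features tracked across two frames.
mv = motion_vectors([[100, 50], [120, 60]], [[102, 48], [121, 63]])
```

Combining `z` (depth per feature) with `mv` (2D displacement per feature) gives the per-vertex motion of the 3D facial model that the abstract describes.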
collection NDLTD
language zh-TW
format Others
sources NDLTD
author2 John Y. Chiang
author_facet John Y. Chiang
Yen-Chun Yu
顏俊育
author Yen-Chun Yu
顏俊育
spellingShingle Yen-Chun Yu
顏俊育
Human Facial Animation Based on Real Image Sequence
author_sort Yen-Chun Yu
title Human Facial Animation Based on Real Image Sequence
title_short Human Facial Animation Based on Real Image Sequence
title_full Human Facial Animation Based on Real Image Sequence
title_fullStr Human Facial Animation Based on Real Image Sequence
title_full_unstemmed Human Facial Animation Based on Real Image Sequence
title_sort human facial animation based on real image sequence
publishDate 2000
url http://ndltd.ncl.edu.tw/handle/62489779938599733341
work_keys_str_mv AT yenchunyu humanfacialanimationbasedonrealimagesequence
AT yánjùnyù humanfacialanimationbasedonrealimagesequence
AT yenchunyu yǐshíjìyǐngxiàngxùlièwèiyījùzhīrénliǎndòngzuòmónǐ
AT yánjùnyù yǐshíjìyǐngxiàngxùlièwèiyījùzhīrénliǎndòngzuòmónǐ
_version_ 1718340874541203456