Personalized face animation framework for multimedia systems
Advances in multimedia-related technologies are enabling new applications such as virtual agents, video conferencing, visual effects in movies, and virtual players in computer games. Such applications are, in turn, motivating much research in digital character and face animation. This thesis addresses...
Main Author: | Arya, Ali |
---|---|
Format: | Others |
Language: | English |
Published: |
2009 |
Online Access: | http://hdl.handle.net/2429/16168 |
id |
ndltd-UBC-oai-circle.library.ubc.ca-2429-16168 |
---|---|
record_format |
oai_dc |
spelling |
ndltd-UBC-oai-circle.library.ubc.ca-2429-161682018-01-05T17:38:15Z Personalized face animation framework for multimedia systems Arya, Ali Advances in multimedia-related technologies are enabling new applications such as virtual agents, video conferencing, visual effects in movies, and virtual players in computer games. Such applications are, in turn, motivating much research in digital character and face animation. This thesis addresses an important area in this field, Personalized Face Animation, which is concerned with creating multimedia data representing the facial actions of a certain character, such as talking, expressions, and head movements. Much success has been achieved for this purpose using 3D head models (general and customized to specific individuals) and also view morphing based on 2D images. The model acquisition and computational complexity of 3D models, and large image databases for 2D methods, however, are major drawbacks. The thesis addresses these issues along with other important ones, mainly realism, authoring tools, content description, and architecture of the whole face animation system. We propose a comprehensive framework for personalized face animation, which we call ShowFace. ShowFace integrates a component-based architecture, well-defined interfaces, helper objects and tools with a simple, yet effective, approach to content generation. These are paired with a language for describing face animation events. 
ShowFace is designed to satisfy the following basic requirements of face animation systems: • Generalized decoding of short textual input into multimedia objects that minimizes the model complexity and database size • Structured content description for face activities like talking, expressions, and head movement, their temporal relation, and hierarchical grouping into meaningful stories • Streaming for continuously receiving and producing frames of multimedia data • Timeliness issues • Compatibility with existing standards and technologies • Efficiency with regard to algorithms and required data ShowFace achieves this objective by introducing: • Feature-based image transformations, along with a 2D image-based method, for creating MPEG-4-compatible and realistic facial actions. This is accomplished without the need for a complicated 3D head model or large databases of 2D images • A face modeling language, an XML-based language compatible with the MPEG-4 standard and specifically designed for face animation. It is also capable of describing spatial and temporal relations of facial actions, behavioural templates, and external event handling. • A component-based structure for the development of animation applications. This structure has a well-defined interface, independently usable components, and streaming capability • A comprehensive set of evaluation criteria for face animation systems The thesis reviews basic concepts and related work in the area of face animation. Then the ShowFace system is introduced and its contributions are thoroughly discussed. A comparative evaluation of the system features and performance is also provided. Applied Science, Faculty of Electrical and Computer Engineering, Department of Graduate 2009-12-02T21:18:23Z 2009-12-02T21:18:23Z 2004 2004-05 Text Thesis/Dissertation http://hdl.handle.net/2429/16168 eng For non-commercial purposes only, such as research, private study and education. 
Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use. 16786565 bytes application/pdf |
collection |
NDLTD |
language |
English |
format |
Others |
sources |
NDLTD |
description |
Advances in multimedia-related technologies are enabling new applications such as
virtual agents, video conferencing, visual effects in movies, and virtual players in computer games. Such applications are, in turn, motivating much research in digital character and face animation. This thesis addresses an important area in this field, Personalized Face Animation, which is concerned with creating multimedia data representing the facial actions of a certain
character, such as talking, expressions, and head movements. Much success has been
achieved for this purpose using 3D head models (general and customized to specific
individuals) and also view morphing based on 2D images. The model acquisition and
computational complexity of 3D models, and large image databases for 2D methods,
however, are major drawbacks. The thesis addresses these issues along with other important ones, mainly realism, authoring tools, content description, and architecture of the whole face animation system.
We propose a comprehensive framework for personalized face animation, which we
call ShowFace. ShowFace integrates a component-based architecture, well-defined
interfaces, helper objects and tools with a simple, yet effective, approach to content
generation. These are paired with a language for describing face animation events. ShowFace is designed to satisfy the following basic requirements of face animation systems:
• Generalized decoding of short textual input into multimedia objects that minimizes the model complexity and database size
• Structured content description for face activities like talking, expressions, and head movement, their temporal relation, and hierarchical grouping into meaningful stories
• Streaming for continuously receiving and producing frames of multimedia data
• Timeliness issues
• Compatibility with existing standards and technologies
• Efficiency with regard to algorithms and required data
ShowFace achieves this objective by introducing:
• Feature-based image transformations, along with a 2D image-based method, for creating MPEG-4-compatible and realistic facial actions. This is accomplished without the need for a complicated 3D head model or large databases of 2D images
• A face modeling language, an XML-based language compatible with the MPEG-4 standard and specifically designed for face animation. It is also capable of describing spatial and temporal relations of facial actions, behavioural templates, and external event handling.
• A component-based structure for the development of animation applications. This structure has a well-defined interface, independently usable components, and streaming capability
• A comprehensive set of evaluation criteria for face animation systems
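The XML-based face modeling language mentioned above can be made concrete with a small sketch. The element names used here (fml, story, seq, par, talk, expression, hdmv) and the timing attributes are illustrative assumptions only, since this record does not reproduce the actual FML schema; the snippet merely shows how sequential and parallel grouping of timed facial actions could be represented and parsed.

```python
import xml.etree.ElementTree as ET

# Hypothetical FML-style document: element names are assumptions,
# not the real FML vocabulary. <seq> groups actions in sequence,
# <par> groups actions that run in parallel.
FML_SKETCH = """
<fml>
  <story>
    <seq>
      <talk begin="0">Hello</talk>
      <par begin="800">
        <expression type="smile" end="2000"/>
        <hdmv type="nod" end="1500"/>
      </par>
    </seq>
  </story>
</fml>
"""

def list_actions(fml_text):
    """Flatten the timed facial actions (talking, expression, head
    movement) from an FML-style document into (tag, attributes) pairs,
    in document order."""
    root = ET.fromstring(fml_text)
    actions = []
    for elem in root.iter():
        if elem.tag in ("talk", "expression", "hdmv"):
            actions.append((elem.tag, dict(elem.attrib)))
    return actions

print(list_actions(FML_SKETCH))
```

The seq/par containers mirror the temporal grouping of facial actions that the abstract describes (actions in sequence versus in parallel), in the spirit of SMIL-style timing containers; an actual player would schedule each action from its timing attributes rather than just listing them.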
The thesis reviews basic concepts and related work in the area of face animation. Then
the ShowFace system is introduced and its contributions are thoroughly discussed. A
comparative evaluation of the system features and performance is also provided. === Applied Science, Faculty of === Electrical and Computer Engineering, Department of === Graduate |
author |
Arya, Ali |
spellingShingle |
Arya, Ali Personalized face animation framework for multimedia systems |
author_facet |
Arya, Ali |
author_sort |
Arya, Ali |
title |
Personalized face animation framework for multimedia systems |
title_short |
Personalized face animation framework for multimedia systems |
title_full |
Personalized face animation framework for multimedia systems |
title_fullStr |
Personalized face animation framework for multimedia systems |
title_full_unstemmed |
Personalized face animation framework for multimedia systems |
title_sort |
personalized face animation framework for multimedia systems |
publishDate |
2009 |
url |
http://hdl.handle.net/2429/16168 |
work_keys_str_mv |
AT aryaali personalizedfaceanimationframeworkformultimediasystems |
_version_ |
1718590126874951680 |