Transferable Speech-Driven Lips Synthesis
Master's thesis === National Taiwan University === Graduate Institute of Networking and Multimedia === Academic year 94 === Image-based videorealistic speech animation achieves significant visual realism, such that it can potentially be used for creating virtual teachers in language learning, digital characters in movies, or even users' representatives in very low bit-rate video conferencing. However, it comes at the cost of collecting a large video corpus from the specific person to be animated. This requirement hinders its use in broad applications, since a large video corpus for a specific person under a controlled recording setup may not be easily obtained. Hence, we adopt a simple method that allows us to transfer the original animation model to a novel person with only a few different lip images.
Main Author: Hong-Dien Chen (陳宏典)
Other Authors: 莊永裕
Format: Others
Language: en_US
Published: 2006
Online Access: http://ndltd.ncl.edu.tw/handle/26572252526368269372
id: ndltd-TW-094NTU05641017
record_format: oai_dc
spelling: ndltd-TW-094NTU056410172015-12-16T04:38:40Z http://ndltd.ncl.edu.tw/handle/26572252526368269372 Transferable Speech-Driven Lips Synthesis 可置換之語音驅動唇形合成方法 Hong-Dien Chen 陳宏典. Master's thesis === National Taiwan University === Graduate Institute of Networking and Multimedia === Academic year 94 === Image-based videorealistic speech animation achieves significant visual realism, such that it can potentially be used for creating virtual teachers in language learning, digital characters in movies, or even users' representatives in very low bit-rate video conferencing. However, it comes at the cost of collecting a large video corpus from the specific person to be animated. This requirement hinders its use in broad applications, since a large video corpus for a specific person under a controlled recording setup may not be easily obtained. Hence, we adopt a simple method that allows us to transfer the original animation model to a novel person with only a few different lip images. Advisor: 莊永裕. 2006. Thesis; 64 pages; en_US
collection: NDLTD
language: en_US
format: Others
sources: NDLTD
description: Master's thesis === National Taiwan University === Graduate Institute of Networking and Multimedia === Academic year 94 === Image-based videorealistic speech animation achieves significant visual realism, such that it can potentially be used for creating virtual teachers in language learning, digital characters in movies, or even users' representatives in very low bit-rate video conferencing. However, it comes at the cost of collecting a large video corpus from the specific person to be animated. This requirement hinders its use in broad applications, since a large video corpus for a specific person under a controlled recording setup may not be easily obtained. Hence, we adopt a simple method that allows us to transfer the original animation model to a novel person with only a few different lip images.
author2: 莊永裕
author_facet: 莊永裕; Hong-Dien Chen 陳宏典
author: Hong-Dien Chen 陳宏典
spellingShingle: Hong-Dien Chen 陳宏典 / Transferable Speech-Driven Lips Synthesis
author_sort: Hong-Dien Chen
title: Transferable Speech-Driven Lips Synthesis
title_short: Transferable Speech-Driven Lips Synthesis
title_full: Transferable Speech-Driven Lips Synthesis
title_fullStr: Transferable Speech-Driven Lips Synthesis
title_full_unstemmed: Transferable Speech-Driven Lips Synthesis
title_sort: transferable speech-driven lips synthesis
publishDate: 2006
url: http://ndltd.ncl.edu.tw/handle/26572252526368269372
work_keys_str_mv: AT hongdienchen transferablespeechdrivenlipssynthesis; AT chénhóngdiǎn transferablespeechdrivenlipssynthesis; AT hongdienchen kězhìhuànzhīyǔyīnqūdòngchúnxínghéchéngfāngfǎ; AT chénhóngdiǎn kězhìhuànzhīyǔyīnqūdòngchúnxínghéchéngfāngfǎ
_version_: 1718151308278497280