Semantic Retrieval of Personal Photos with User Annotations
Main Authors: | Yi-Sheng Fu 傅怡聖 |
---|---|
Other Authors: | Lin-Shan Lee |
Format: | Others |
Language: | zh-TW |
Published: | 2014 |
Online Access: | http://ndltd.ncl.edu.tw/handle/69765759301124372049 |
id | ndltd-TW-102NTU05392007 |
---|---|
record_format | oai_dc |
spelling | ndltd-TW-102NTU05392007 2016-03-09T04:24:03Z http://ndltd.ncl.edu.tw/handle/69765759301124372049 Semantic Retrieval of Personal Photos with User Annotations 基於使用者語音標註之個人相片語意檢索 Yi-Sheng Fu 傅怡聖 博士 國立臺灣大學 資訊工程學研究所 102 (abstract as in the description field below) Lin-Shan Lee 李琳山 2014 學位論文 ; thesis 94 zh-TW |
collection | NDLTD |
language | zh-TW |
format | Others |
sources | NDLTD |
description |
Ph.D. === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 102 === With the prevalence of hand-held smart devices and social networks, people tend to collect large numbers of personal photos for sharing, so efficient approaches to managing personal photos are highly desired. Semantic image retrieval has been very successful in recent years: the huge quantities of photos and annotations available over the Internet can be used to derive semantic relationships between high-level semantic terms and the photos to be retrieved. When personal photos are considered, however, the annotations are usually far too sparse to derive such semantic relationships, so these successful approaches cannot be applied directly to personal photos. In this dissertation, we adopt a new scenario and propose a new framework to tackle this problem: users annotate their photos by voice while taking pictures, and the semantic relationships between the annotations and the photos are analyzed by fusing speech and image features. A series of research works is then developed to construct a practical solution for semantic retrieval of personal photos.
In the preliminary research, we collected a set of personal photos with clean, read-speech annotations describing roughly defined categories of information. By fusing low-level image features with speech features under probabilistic latent semantic analysis (PLSA), very good retrieval results were obtained with only 10% of the photos manually annotated.
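The dissertation text here contains no code; the following is a minimal, illustrative sketch of PLSA-based fusion under the assumption that each photo is represented by a bag-of-terms count vector concatenating speech-term counts with quantized image-feature counts. All names and numbers are hypothetical and are not taken from the thesis.

```python
# Minimal PLSA sketch (NumPy). Not the thesis code: the feature extraction,
# vocabulary, and counts below are hypothetical placeholders.
import numpy as np

def plsa(X, n_topics=8, n_iter=100, seed=0, eps=1e-12):
    """Fit PLSA by EM on a (n_photos x n_terms) count matrix X.

    Returns P(z|d) (photo-topic mixtures) and P(w|z) (topic-term distributions).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_terms = X.shape
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_terms))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z | d, w), shape (docs, topics, terms)
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]
        post = joint / (joint.sum(axis=1, keepdims=True) + eps)
        # M-step: re-estimate parameters from expected counts n(d, w) * P(z | d, w)
        weighted = X[:, None, :] * post
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + eps
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + eps
    return p_z_d, p_w_z

# Fused photo-by-term matrix: speech-term counts first, visual-word counts after.
speech_counts = np.array([[2.0, 0.0, 1.0], [0.0, 3.0, 0.0]])   # illustrative only
visual_counts = np.array([[5.0, 1.0], [0.0, 4.0]])
X = np.hstack([speech_counts, visual_counts])
p_z_d, p_w_z = plsa(X, n_topics=2)
print(p_z_d)   # each photo's topic mixture; photos sharing topics match similar queries
```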
In the second-stage work, we collected a new, larger database of personal photos with fluent, free-form speech annotations as the experimental dataset, in which speech recognition errors became a much more challenging problem. We adopted cepstral normalization, acoustic model adaptation, and language model interpolation to improve recognition, and used expected term frequencies derived from recognition lattices as more robust speech features. We further used visual words as the image features, rather than the low-level image features of the preliminary research, and also tried integrating Columbia374, a set of content-based concept detectors, as additional image information. Moreover, we replaced the PLSA model with non-negative matrix factorization (NMF) to analyze the latent "topics". The experimental results showed that the NMF model outperformed the PLSA model in this task.
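As a rough illustration of this second-stage representation (again, not the thesis implementation), one can factorize a photo-by-term matrix with NMF, where the speech columns hold lattice-based expected term frequencies and the image columns hold visual-word counts; the concrete numbers and parameter choices below are assumptions.

```python
# Sketch only: NMF over a fused photo-by-term matrix. Speech columns are
# assumed to hold expected term frequencies from recognition lattices; image
# columns hold visual-word counts. All values are made up for illustration.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.preprocessing import normalize

# Rows: photos; columns: speech terms followed by visual words.
expected_tf  = np.array([[1.7, 0.2, 0.0], [0.0, 2.4, 0.6]])   # assumed lattice-based counts
visual_words = np.array([[4.0, 1.0], [0.0, 5.0]])
X = np.hstack([normalize(expected_tf, norm="l1"), normalize(visual_words, norm="l1")])

# X ~ W @ H: W holds each photo's weights on the latent "topics",
# H holds each topic's weights over the speech and image terms.
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # photo-topic weights
H = model.components_        # topic-term weights

# A text query mapped into the same term space can then be matched to photos
# through the shared topic space, e.g. by cosine similarity of the loadings.
```

Like PLSA, NMF yields non-negative, additive latent factors over the fused term space, which is why the two models are directly comparable on this task.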
Finally, we implemented a prototype system based on these results, and additionally diversified the retrieval results for better presentation. All of these results show that the proposed framework is an effective solution to the problem of semantic image retrieval for personal photos.
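The paragraph above mentions diversification of retrieval results only as a concept, without fixing an algorithm; maximal marginal relevance (MMR) is sketched here purely as one common way to trade relevance against redundancy among the returned photos, and is not claimed to be the method used in the prototype.

```python
# Illustrative MMR re-ranking sketch; not taken from the dissertation.
import numpy as np

def mmr(relevance, similarity, k=10, lam=0.7):
    """Greedy maximal-marginal-relevance re-ranking.

    relevance:  (n,) relevance score of each photo to the query
    similarity: (n, n) pairwise photo similarity (e.g., cosine of topic loadings)
    lam:        1.0 = pure relevance, 0.0 = pure diversity
    """
    selected, candidates = [], list(range(len(relevance)))
    while candidates and len(selected) < k:
        if not selected:
            best = max(candidates, key=lambda i: relevance[i])
        else:
            best = max(candidates,
                       key=lambda i: lam * relevance[i]
                                     - (1 - lam) * max(similarity[i][j] for j in selected))
        selected.append(best)
        candidates.remove(best)
    return selected

# Hypothetical scores: photos 0 and 1 are near-duplicates of each other.
rel = np.array([0.9, 0.85, 0.4, 0.3])
sim = np.array([[1.0, 0.95, 0.1, 0.2],
                [0.95, 1.0, 0.15, 0.1],
                [0.1, 0.15, 1.0, 0.3],
                [0.2, 0.1, 0.3, 1.0]])
print(mmr(rel, sim, k=3, lam=0.5))   # [0, 2, 3]: the near-duplicate photo 1 is demoted
```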
author2 | Lin-Shan Lee |
author_facet | Lin-Shan Lee Yi-Sheng Fu 傅怡聖 |
author | Yi-Sheng Fu 傅怡聖 |
spellingShingle | Yi-Sheng Fu 傅怡聖 Semantic Retrieval of Personal Photos with User Annotations |
author_sort | Yi-Sheng Fu |
title | Semantic Retrieval of Personal Photos with User Annotations |
title_short | Semantic Retrieval of Personal Photos with User Annotations |
title_full | Semantic Retrieval of Personal Photos with User Annotations |
title_fullStr | Semantic Retrieval of Personal Photos with User Annotations |
title_full_unstemmed | Semantic Retrieval of Personal Photos with User Annotations |
title_sort | semantic retrieval of personal photos with user annotations |
publishDate | 2014 |
url | http://ndltd.ncl.edu.tw/handle/69765759301124372049 |
work_keys_str_mv | AT yishengfu semanticretrievalofpersonalphotoswithuserannotations AT fùyíshèng semanticretrievalofpersonalphotoswithuserannotations AT yishengfu jīyúshǐyòngzhěyǔyīnbiāozhùzhīgèrénxiāngpiànyǔyìjiǎnsuǒ AT fùyíshèng jīyúshǐyòngzhěyǔyīnbiāozhùzhīgèrénxiāngpiànyǔyìjiǎnsuǒ |
_version_ | 1718200271540060160 |