Object-Based Image Retrieval Using Self-Organizing Map
Master's thesis === Feng Chia University === In-Service Master's Program of Information and Electrical Engineering === 100 === Content-based image retrieval (CBIR) has been widely used in many application fields. Yet, in commercial photography, ornaments and décor are often used to enhance the overall visual appeal of the product. The images in the digital archive system of the Taiwan flower anthography group are usually accompanied by background objects that have nothing to do with the target plant. This background noise decreases the precision rate of image retrieval.
Main Authors: Wen-yu Kaun 關雯尤
Other Authors: Don-Lin Yang 楊東麟
Format: Others
Language: zh-TW
Published: 2012
Online Access: http://ndltd.ncl.edu.tw/handle/05698743129249762603
id: ndltd-TW-100FCU05392017
record_format: oai_dc
spelling:
ndltd-TW-100FCU05392017 2015-10-13T20:52:01Z http://ndltd.ncl.edu.tw/handle/05698743129249762603 Object-Based Image Retrieval Using Self-Organizing Map 基於類神經網路SOM之主體物件影像檢索 Wen-yu Kaun 關雯尤. Master's thesis === Feng Chia University === In-Service Master's Program of Information and Electrical Engineering === 100. Advisor: Don-Lin Yang 楊東麟. 2012. 學位論文 (thesis); 65; zh-TW
collection: NDLTD
language: zh-TW
format: Others
sources: NDLTD
description: Master's thesis === Feng Chia University === In-Service Master's Program of Information and Electrical Engineering === 100 === Content-based image retrieval (CBIR) has been widely used in many application fields. Yet, in commercial photography, ornaments and décor are often used to enhance the overall visual appeal of the product. The images in the digital archive system of the Taiwan flower anthography group are usually accompanied by background objects that have nothing to do with the target plant. This background noise decreases the precision rate of image retrieval.
Therefore, we propose a method based on the Visual Attention Model to extract each image's area of interest as the training dataset, thus improving the retrieval precision rate. The number of digital images on social networks is growing rapidly due to the popularity of the Internet and digital cameras. To handle the large amount of image data and the associated computation cost, we choose the Self-Organizing Map (SOM) as our unsupervised learning algorithm. As an efficient artificial neural network approach, the SOM is useful for visualizing high-dimensional data in low-dimensional views. Our experiments show good retrieval performance.
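The SOM training described in the abstract can be sketched as follows. This is a generic numpy illustration of how an SOM maps high-dimensional feature vectors onto a 2-D grid, not code from the thesis; the grid size, decay schedules, and feature dimensionality are placeholder assumptions, and the thesis's actual image features and parameters are not given here.

```python
import numpy as np

def train_som(data, grid_w=8, grid_h=8, epochs=20, lr0=0.5, sigma0=None, seed=0):
    """Train a Self-Organizing Map on feature vectors.

    A minimal sketch: each grid cell holds a weight vector; for every
    input we find the best-matching unit (BMU) and pull the BMU's
    neighborhood toward the input, with decaying learning rate and
    shrinking neighborhood radius.
    """
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    if sigma0 is None:
        sigma0 = max(grid_w, grid_h) / 2.0
    # One weight vector per neuron on the 2-D grid.
    weights = rng.random((grid_h, grid_w, dim))
    # Grid coordinates, used by the Gaussian neighborhood function.
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    total_steps = epochs * n
    step = 0
    for _ in range(epochs):
        for x in data[rng.permutation(n)]:
            t = step / total_steps
            lr = lr0 * np.exp(-3.0 * t)        # decaying learning rate
            sigma = sigma0 * np.exp(-3.0 * t)  # shrinking neighborhood
            # BMU: the neuron whose weights are closest to the input.
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood around the BMU on the grid.
            g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
            # Pull neighboring weights toward the input.
            weights += lr * g[:, :, None] * (x - weights)
            step += 1
    return weights

def best_matching_unit(weights, x):
    """Return the (row, col) grid cell whose weights are closest to x."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
```

In a retrieval setting of the kind the abstract describes, each image's extracted region of interest would be reduced to a feature vector, the SOM trained on those vectors, and a query answered by mapping the query's features to a BMU and returning images assigned to the same or nearby cells.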
author2: Don-Lin Yang
author_facet: Don-Lin Yang Wen-yu Kaun 關雯尤
author: Wen-yu Kaun 關雯尤
spellingShingle: Wen-yu Kaun 關雯尤 Object-Based Image Retrieval Using Self-Organizing Map
author_sort: Wen-yu Kaun
title: Object-Based Image Retrieval Using Self-Organizing Map
title_short: Object-Based Image Retrieval Using Self-Organizing Map
title_full: Object-Based Image Retrieval Using Self-Organizing Map
title_fullStr: Object-Based Image Retrieval Using Self-Organizing Map
title_full_unstemmed: Object-Based Image Retrieval Using Self-Organizing Map
title_sort: object-based image retrieval using self-organizing map
publishDate: 2012
url: http://ndltd.ncl.edu.tw/handle/05698743129249762603
work_keys_str_mv: AT wenyukaun objectbasedimageretrievalusingselforganizingmap AT guānwényóu objectbasedimageretrievalusingselforganizingmap AT wenyukaun jīyúlèishénjīngwǎnglùsomzhīzhǔtǐwùjiànyǐngxiàngjiǎnsuǒ AT guānwényóu jīyúlèishénjīngwǎnglùsomzhīzhǔtǐwùjiànyǐngxiàngjiǎnsuǒ
_version_: 1718052399085518848