Video retrieval based on fractal orthogonal bases and temporal graph
Master's thesis, National Sun Yat-sen University, Department of Computer Science and Engineering, academic year 98
Main Authors: Min-luen Chang (張敏倫)
Other Authors: John Y. Chiang (蔣依吾)
Format: Others
Language: zh-TW
Published: 2010
Online Access: http://ndltd.ncl.edu.tw/handle/95227338635104085487
id |
ndltd-TW-098NSYS5392008 |
record_format |
oai_dc |
spelling |
ndltd-TW-098NSYS5392008 2015-10-13T18:35:38Z http://ndltd.ncl.edu.tw/handle/95227338635104085487 Video retrieval based on fractal orthogonal bases and temporal graph 以碎形正交基底和時間情境圖為基礎進行之視訊檢索 Min-luen Chang 張敏倫 Master's, National Sun Yat-sen University, Department of Computer Science and Engineering, academic year 98 John Y. Chiang 蔣依吾 2010 thesis 80 zh-TW |
collection |
NDLTD |
language |
zh-TW |
format |
Others |
sources |
NDLTD |
description |
Master's === National Sun Yat-sen University === Department of Computer Science and Engineering === 98 === In this paper, we present a structural video representation for video retrieval based on fractal orthogonal bases. The representation is built in five steps: video summarization (extracting key-frames from the video), normalized graph cuts (classifying the key-frames into clusters), temporal graph construction (linking key-frames according to their order in time), transformation of the directed graph into a string (the transformation is a one-to-one mapping), and string similarity comparison (covering both string structure and string content). With this information, the structure of the video and its complementary knowledge can be organized into a main line and branch lines, so users can not only browse the video efficiently but also focus on the parts of the structure they are interested in.
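As a rough illustration of the middle steps listed above (temporal graph construction and its conversion to a string), the following minimal Python sketch links consecutive key-frame clusters in time order and serializes the resulting edge list. All names, the toy labels, and the serialization format are assumptions made for illustration, not details taken from the thesis.

```python
# Hypothetical sketch: temporal graph from key-frame cluster labels, then a
# string serialization of its edge list. Not the thesis implementation.
from collections import OrderedDict

def temporal_graph(cluster_labels):
    """Link each key-frame's cluster to the cluster of the next key-frame in time."""
    edges = OrderedDict()                      # (src, dst) -> number of transitions
    for src, dst in zip(cluster_labels, cluster_labels[1:]):
        if src != dst:                         # staying in the same cluster adds no edge
            edges[(src, dst)] = edges.get((src, dst), 0) + 1
    return edges

def graph_to_string(cluster_labels):
    """Serialize the directed temporal graph as a string, edges in first-seen order."""
    edges = temporal_graph(cluster_labels)
    return "-".join(f"{src}>{dst}" for src, dst in edges)

# Toy key-frame cluster labels in temporal order.
labels = ["A", "A", "B", "C", "B", "D"]
print(graph_to_string(labels))                 # A>B-B>C-C>B-B>D
```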
To build the underlying system, we employ a distortion metric to extract key-frames from the video and then classify the key-frames with normalized graph cuts, so that shots are linked together according to their content. After the relation graph is constructed, it is transformed into a string that preserves the graph structure. The resulting clusters form a directed graph, and a shortest-path algorithm is proposed to find the main structure of the video. String similarity is divided into string structure and string content. For string structure, we apply edit distance to the main line and, recursively, to the branch lines. After the structural comparison selects the most similar strings, their content is compared using fractal orthogonal bases, whose property that similar indices correspond to similar images is combined with support vector clustering. The results demonstrate that our system achieves better performance and information coverage.
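For the string-structure comparison described above, a standard edit (Levenshtein) distance is the natural reading of "edit distance" on the main line; the sketch below computes it with dynamic programming and turns it into a normalized similarity score. The tokenization of the main-line string and the normalization are assumptions for illustration, not details from the thesis.

```python
# Hypothetical sketch: edit distance between two main-line strings, converted
# to a similarity score in [0, 1]. Not the thesis implementation.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance between sequences a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def structure_similarity(main_a, main_b):
    """Similarity of two main-line strings: 1 minus the normalized edit distance."""
    if not main_a and not main_b:
        return 1.0
    return 1.0 - edit_distance(main_a, main_b) / max(len(main_a), len(main_b))

print(structure_similarity("ABCBD", "ABCD"))      # 0.8
```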
|
author2 |
John Y. Chiang |
author_facet |
John Y. Chiang Min-luen Chang 張敏倫 |
author |
Min-luen Chang 張敏倫 |
spellingShingle |
Min-luen Chang 張敏倫 Video retrieval based on fractal orthogonal bases and temporal graph |
author_sort |
Min-luen Chang |
title |
Video retrieval based on fractal orthogonal bases and temporal graph |
title_short |
Video retrieval based on fractal orthogonal bases and temporal graph |
title_full |
Video retrieval based on fractal orthogonal bases and temporal graph |
title_fullStr |
Video retrieval based on fractal orthogonal bases and temporal graph |
title_full_unstemmed |
Video retrieval based on fractal orthogonal bases and temporal graph |
title_sort |
video retrieval based on fractal orthogonal bases and temporal graph |
publishDate |
2010 |
url |
http://ndltd.ncl.edu.tw/handle/95227338635104085487 |
work_keys_str_mv |
AT minluenchang videoretrievalbasedonfractalorthogonalbasesandtemporalgraph AT zhāngmǐnlún videoretrievalbasedonfractalorthogonalbasesandtemporalgraph AT minluenchang yǐsuìxíngzhèngjiāojīdǐhéshíjiānqíngjìngtúwèijīchǔjìnxíngzhīshìxùnjiǎnsuǒ AT zhāngmǐnlún yǐsuìxíngzhèngjiāojīdǐhéshíjiānqíngjìngtúwèijīchǔjìnxíngzhīshìxùnjiǎnsuǒ |
_version_ |
1718035619437871104 |