Compare the Performance of Three Unsupervised Feature Transformation in Hyperspectral Dataset Classification
Master's === National Ilan University === Master's Program, Department of Civil Engineering === 99 === Hyperspectral images provide information across hundreds of bands. Compared with traditional multispectral remote sensing imagery, they offer higher spectral resolution and richer spectral information, which aids classification and interpretation. However, as the number of bands increases...
Main Authors: | Chang Chiao-Po (張喬博) |
Other Authors: | Wu Jee-Cheng (吳至誠) |
Format: | Others |
Language: | zh-TW |
Published: | 2011 |
Online Access: | http://ndltd.ncl.edu.tw/handle/87676167318847801943 |
id | ndltd-TW-099NIU07015004 |
record_format | oai_dc |
spelling |
ndltd-TW-099NIU070150042015-10-13T19:07:21Z http://ndltd.ncl.edu.tw/handle/87676167318847801943 Compare the Performance of Three Unsupervised Feature Transformation in Hyperspectral Dataset Classification 三種非監督特徵轉換於高光譜影像分類效能之比較 Chang Chiao-Po 張喬博 Master's === National Ilan University === Master's Program, Department of Civil Engineering === 99 === Hyperspectral images provide information across hundreds of bands. Compared with traditional multispectral remote sensing imagery, they offer higher spectral resolution and richer spectral information, which aids classification and interpretation. However, as the number of bands increases, the relative shortage of training data gives rise to the Hughes phenomenon, which not only reduces classification accuracy but also degrades computational performance. At present, band selection and feature transformation are the usual approaches to dimensionality reduction. In this thesis, three unsupervised feature transformation methods for dimensionality reduction of hyperspectral images (Principal Component Analysis, Locally Linear Embedding, and Minimum Noise Fraction) were compared. First, the hyperspectral image was transformed into a lower-dimensional feature space. Second, classification maps were generated from thirty groups of training samples, drawn by random and manual selection, using two classifiers (support vector machine and minimum distance). Finally, the kappa coefficient, computed on test data, was used to evaluate the classification maps. The results showed that reducing the hyperspectral image to 15 features with Minimum Noise Fraction and classifying with the support vector machine gave the best classification accuracy, while Locally Linear Embedding showed better stability. For the SVM classifier, increasing the training samples beyond 20 percent of the ground-truth data did not improve classification accuracy. This thesis compared only three unsupervised dimensionality reduction methods; further research could turn to supervised dimensionality reduction methods. Wu Jee-Cheng 吳至誠 2011 degree thesis (學位論文) ; thesis 82 zh-TW |
collection | NDLTD |
language | zh-TW |
format | Others |
sources | NDLTD |
description |
Master's === National Ilan University === Master's Program, Department of Civil Engineering === 99 === Hyperspectral images provide information across hundreds of bands. Compared with traditional multispectral remote sensing imagery, they offer higher spectral resolution and richer spectral information, which aids classification and interpretation. However, as the number of bands increases, the relative shortage of training data gives rise to the Hughes phenomenon, which not only reduces classification accuracy but also degrades computational performance. At present, band selection and feature transformation are the usual approaches to dimensionality reduction.
In this thesis, three unsupervised feature transformation methods for dimensionality reduction of hyperspectral images (Principal Component Analysis, Locally Linear Embedding, and Minimum Noise Fraction) were compared. First, the hyperspectral image was transformed into a lower-dimensional feature space. Second, classification maps were generated from thirty groups of training samples, drawn by random and manual selection, using two classifiers (support vector machine and minimum distance). Finally, the kappa coefficient, computed on test data, was used to evaluate the classification maps. The results showed that reducing the hyperspectral image to 15 features with Minimum Noise Fraction and classifying with the support vector machine gave the best classification accuracy, while Locally Linear Embedding showed better stability.
For the SVM classifier, increasing the training samples beyond 20 percent of the ground-truth data did not improve classification accuracy. This thesis compared only three unsupervised dimensionality reduction methods; further research could turn to supervised dimensionality reduction methods.
|
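The evaluation pipeline described in the abstract (unsupervised feature transformation to a low-dimensional space, classification with an SVM and a minimum-distance classifier, and kappa-coefficient scoring on test data) can be sketched as follows. This is a minimal illustration, not the thesis's actual procedure: it assumes scikit-learn, uses synthetic data in place of a real hyperspectral scene, and uses PCA to stand in for all three transforms (MNF and LLE would slot in the same way).

```python
# Hypothetical sketch of the pipeline: 200 "bands" reduced to 15 features,
# then classified and scored with the kappa coefficient.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.neighbors import NearestCentroid  # minimum-distance-to-mean classifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Synthetic stand-in for labelled pixels: 3 classes, 200 bands, 300 pixels each.
n_classes, n_bands, n_per_class = 3, 200, 300
X = np.vstack([rng.normal(loc=c, scale=2.0, size=(n_per_class, n_bands))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Hold out test pixels (the thesis drew thirty training-sample groups;
# a single stratified split is used here for brevity).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

# Unsupervised feature transformation: project 200 bands onto 15 features.
pca = PCA(n_components=15).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

# The two classifiers compared in the study.
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("minimum distance", NearestCentroid())]:
    y_pred = clf.fit(Z_train, y_train).predict(Z_test)
    kappa = cohen_kappa_score(y_test, y_pred)
    print(f"{name}: kappa = {kappa:.3f}")
```

Swapping `PCA` for `sklearn.manifold.LocallyLinearEmbedding` (or an MNF implementation) and comparing the printed kappa values reproduces the shape of the comparison, though not the thesis's actual numbers.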
author2 | Wu Jee-Cheng |
author_facet | Wu Jee-Cheng Chang Chiao-Po 張喬博 |
author | Chang Chiao-Po 張喬博 |
spellingShingle | Chang Chiao-Po 張喬博 Compare the Performance of Three Unsupervised Feature Transformation in Hyperspectral Dataset Classification |
author_sort | Chang Chiao-Po |
title | Compare the Performance of Three Unsupervised Feature Transformation in Hyperspectral Dataset Classification |
title_short | Compare the Performance of Three Unsupervised Feature Transformation in Hyperspectral Dataset Classification |
title_full | Compare the Performance of Three Unsupervised Feature Transformation in Hyperspectral Dataset Classification |
title_fullStr | Compare the Performance of Three Unsupervised Feature Transformation in Hyperspectral Dataset Classification |
title_full_unstemmed | Compare the Performance of Three Unsupervised Feature Transformation in Hyperspectral Dataset Classification |
title_sort | compare the performance of three unsupervised feature transformation in hyperspectral dataset classification |
publishDate | 2011 |
url | http://ndltd.ncl.edu.tw/handle/87676167318847801943 |
work_keys_str_mv | AT changchiaopo comparetheperformanceofthreeunsupervisedfeaturetransformationinhyperspectraldatasetclassification AT zhāngqiáobó comparetheperformanceofthreeunsupervisedfeaturetransformationinhyperspectraldatasetclassification AT changchiaopo sānzhǒngfēijiāndūtèzhēngzhuǎnhuànyúgāoguāngpǔyǐngxiàngfēnlèixiàonéngzhībǐjiào AT zhāngqiáobó sānzhǒngfēijiāndūtèzhēngzhuǎnhuànyúgāoguāngpǔyǐngxiàngfēnlèixiàonéngzhībǐjiào |
_version_ | 1718041757861543936 |