Radar and Optical Data Fusion for Object Based Urban Land Cover Mapping

The creation and classification of segments for object-based urban land cover mapping is the key goal of this master's thesis. An algorithm based on region growing and merging was developed, implemented, and tested. The synergy effects of a fused data set of SAR and optical imagery were evaluated based on the classification results.


Bibliographic Details
Main Author: Jacob, Alexander
Format: Others
Language: English
Published: KTH, Geoinformatik och Geodesi 2011
Subjects: Data Fusion, Image Segmentation, Object based Mapping, Urban Land Cover, SAR, Radar, Multispectral, Region growing and merging
Online Access:http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-45978
id ndltd-UPSALLA1-oai-DiVA.org-kth-45978
record_format oai_dc
spelling ndltd-UPSALLA1-oai-DiVA.org-kth-45978 2013-01-08T13:50:49Z
Radar and Optical Data Fusion for Object Based Urban Land Cover Mapping (eng)
Radar och optisk datafusion för objektbaserad kartering av urbant marktäcke (swe)
Jacob, Alexander
KTH, Geoinformatik och Geodesi, 2011
Keywords: Data Fusion; Image Segmentation; Object based Mapping; Urban Land Cover; SAR; Radar; Multispectral; Region growing and merging
Abstract: given in full under the description field below.
Note: Dragon 2 Project
Student thesis; info:eu-repo/semantics/bachelorThesis; text
http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-45978
TRITA-GIT, 1653-5227 ; 11-009
application/pdf
info:eu-repo/semantics/openAccess
collection NDLTD
language English
format Others
sources NDLTD
topic Data Fusion
Image Segmentation
Object based Mapping
Urban Land Cover
SAR
Radar
Multispectral
Region growing and merging
spellingShingle Data Fusion
Image Segmentation
Object based Mapping
Urban Land Cover
SAR
Radar
Multispectral
Region growing and merging
Jacob, Alexander
Radar and Optical Data Fusion for Object Based Urban Land Cover Mapping
description The creation and classification of segments for object-based urban land cover mapping is the key goal of this master's thesis. An algorithm based on region growing and merging was developed, implemented, and tested, and the synergy effects of a fused data set of SAR and optical imagery were evaluated based on the classification results. The testing was mainly performed with data of the city of Beijing, China. The dataset consists of SAR and optical data, and the classified land cover/use maps were evaluated using standard accuracy assessment methods such as confusion matrices, kappa values, and overall accuracy. The classification used for testing consists of nine classes: low-density built-up, high-density built-up, road, park, water, golf course, forest, agricultural crop, and airport.

The development was carried out in Java, and a graphical interface for user-friendly interaction was created in parallel with the algorithm. This proved very useful during the extensive parameter testing, since parameters could easily be entered through the dialogs of the interface. The algorithm treats the image as a connected graph of pixels, in which pixels can only merge with their direct neighbours, i.e. those with which they share an edge. Three criteria can be used in the current state of the algorithm: a mean-based spectral homogeneity measure, a variance-based textural homogeneity measure, and a fragmentation test as a shape measure. The algorithm has three key parameters: the minimum and the maximum segment size, and a homogeneity threshold based on a weighted combination of the relative change caused by merging two segments. The growing and merging is divided into two phases: the first is based on mutual best partner merging, the second on the homogeneity threshold. In both phases all three criteria can be used for merging in arbitrary weighting constellations. A third step checks the fulfilment of the minimum size and can be performed before or after the other two steps.

The segments can then be labelled interactively in a supervised manner, again using the graphical user interface, to create a training sample set. This training set is used to train a support vector machine with a radial basis function kernel. The optimal settings for the required SVM training parameters are found through a cross-validation grid search, which is implemented within the program as well. The SVM algorithm is based on the LibSVM Java implementation. Once training is completed, the SVM can be used to predict the class of the whole dataset, producing a classified land-cover map that can be exported as a vector dataset.

The results show that incorporating texture features already in the segmentation is superior to using spectral information alone, especially when working with unfiltered SAR data. Incorporating the suggested shape feature, however, does not appear to be advantageous, especially considering the much longer processing time this criterion requires. From the classification results it is also evident that the fusion of SAR and optical data is beneficial for urban land cover mapping. In particular, the distinction between urban areas and agricultural crops improved greatly, and the confusion between high- and low-density built-up areas was also reduced due to the fusion. === Dragon 2 Project
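The merge logic above is described only in prose; the following Java sketch illustrates one way the weighted homogeneity criterion and the two merge rules could be expressed. It is a minimal sketch under assumed names (Segment, RegionMerging, the single-band fields and relChange are inventions for illustration), not the thesis implementation.

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Minimal sketch of the merge logic outlined in the description: segments form
 * a graph of neighbours, a merge cost combines spectral, textural and shape
 * change with user-chosen weights, phase 1 merges mutual best partners, and
 * phase 2 merges any neighbouring pair below a homogeneity threshold.
 * All names and formulas here are illustrative assumptions.
 */
class Segment {
    double mean;          // spectral mean (single band for brevity)
    double variance;      // variance used as the texture measure
    double fragmentation; // shape measure
    int size;             // number of pixels
    Set<Segment> neighbours = new HashSet<>();
}

class RegionMerging {
    double wSpectral = 1.0, wTexture = 0.0, wShape = 0.0; // arbitrary weighting constellation
    double threshold = 0.05;                              // homogeneity threshold (phase 2)

    /** Weighted relative change caused by merging a and b; lower means more homogeneous. */
    double mergeCost(Segment a, Segment b) {
        double spectral = relChange(a.mean, b.mean);
        double texture  = relChange(a.variance, b.variance);
        double shape    = relChange(a.fragmentation, b.fragmentation);
        return wSpectral * spectral + wTexture * texture + wShape * shape;
    }

    double relChange(double x, double y) {
        double denom = Math.max(Math.abs(x) + Math.abs(y), 1e-9);
        return Math.abs(x - y) / denom;
    }

    Segment bestPartner(Segment s) {
        Segment best = null;
        double bestCost = Double.MAX_VALUE;
        for (Segment n : s.neighbours) {
            double c = mergeCost(s, n);
            if (c < bestCost) { bestCost = c; best = n; }
        }
        return best;
    }

    /** Phase 1 rule: merge a and b only if each is the other's best partner. */
    boolean mutualBestPartner(Segment a, Segment b) {
        return bestPartner(a) == b && bestPartner(b) == a;
    }

    /** Phase 2 rule: merge neighbours whose weighted cost stays below the threshold. */
    boolean belowThreshold(Segment a, Segment b) {
        return mergeCost(a, b) < threshold;
    }
}
```

For the classification step, the description names a radial basis function kernel, a cross-validation grid search and the LibSVM Java implementation. The sketch below shows a minimal grid search over C and gamma using the public LibSVM API (svm_problem, svm_parameter, svm.svm_cross_validation); how the labelled segment features are loaded into the svm_problem, the parameter ranges and the fold count are assumptions, not values taken from the thesis.

```java
import libsvm.svm;
import libsvm.svm_parameter;
import libsvm.svm_problem;

/** Minimal sketch of an RBF-kernel parameter grid search with k-fold cross-validation. */
public class SvmGridSearchSketch {

    static svm_parameter rbfParam(double c, double gamma) {
        svm_parameter p = new svm_parameter();
        p.svm_type = svm_parameter.C_SVC;
        p.kernel_type = svm_parameter.RBF; // radial basis function kernel
        p.C = c;
        p.gamma = gamma;
        p.cache_size = 100;   // kernel cache in MB
        p.eps = 1e-3;         // stopping tolerance
        p.shrinking = 1;
        p.probability = 0;
        p.nr_weight = 0;      // no per-class weighting
        p.weight_label = new int[0];
        p.weight = new double[0];
        return p;
    }

    /** Returns the (C, gamma) combination with the best cross-validation accuracy. */
    static svm_parameter gridSearch(svm_problem prob, int folds) {
        svm_parameter best = null;
        double bestAccuracy = -1.0;
        for (int log2c = -5; log2c <= 15; log2c += 2) {
            for (int log2g = -15; log2g <= 3; log2g += 2) {
                svm_parameter candidate = rbfParam(Math.pow(2, log2c), Math.pow(2, log2g));
                double[] predicted = new double[prob.l];
                svm.svm_cross_validation(prob, candidate, folds, predicted);
                int correct = 0;
                for (int i = 0; i < prob.l; i++) {
                    if (predicted[i] == prob.y[i]) correct++;
                }
                double accuracy = (double) correct / prob.l;
                if (accuracy > bestAccuracy) {
                    bestAccuracy = accuracy;
                    best = candidate;
                }
            }
        }
        return best;
    }
}
```

Once the best parameters are found, svm.svm_train(prob, best) produces the svm_model, and svm.svm_predict assigns a class to each segment's feature vector to build the classified land-cover map.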
author Jacob, Alexander
author_facet Jacob, Alexander
author_sort Jacob, Alexander
title Radar and Optical Data Fusion for Object Based Urban Land Cover Mapping
title_short Radar and Optical Data Fusion for Object Based Urban Land Cover Mapping
title_full Radar and Optical Data Fusion for Object Based Urban Land Cover Mapping
title_fullStr Radar and Optical Data Fusion for Object Based Urban Land Cover Mapping
title_full_unstemmed Radar and Optical Data Fusion for Object Based Urban Land Cover Mapping
title_sort radar and optical data fusion for object based urban land cover mapping
publisher KTH, Geoinformatik och Geodesi
publishDate 2011
url http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-45978
work_keys_str_mv AT jacobalexander radarandopticaldatafusionforobjectbasedurbanlandcovermapping
AT jacobalexander radarochoptiskdatafusionforobjektbaseradkarteringavurbantmarktacke
_version_ 1716530357931606016