Very High Resolution Remote Sensing Imagery Classification Using a Fusion of Random Forest and Deep Learning Technique—Subtropical Area for Example

Bibliographic Details
Main Authors: Luofan Dong, Huaqiang Du, Fangjie Mao, Ning Han, Xuejian Li, Guomo Zhou, Di'en Zhu, Junlong Zheng, Meng Zhang, Luqi Xing, Tengyan Liu
Format: Article
Language: English
Published: IEEE 2020-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Online Access: https://ieeexplore.ieee.org/document/8935521/
Description
Summary: Recently, convolutional neural networks (CNNs) have shown excellent performance in many tasks, such as computer vision and remote sensing semantic segmentation. In particular, the ability of CNNs to learn highly representative features has drawn much attention. The random forest (RF) algorithm, on the other hand, is widely applied to variable selection, classification, and regression. Building on previous models that fused CNNs with other methods, such as conditional random fields (CRFs), support vector machines (SVMs), and RF, this article tests a method based on the fusion of an RF classifier and a CNN for very high resolution remote sensing (VHRRS) based forest mapping. The study area is located in southern China, and the main purpose was to precisely distinguish Lei bamboo forests from other subtropical forests. The main novelties of this article are as follows. First, a test was conducted to confirm whether a fusion of CNN and RF improves VHRRS information extraction. Second, variables with high importance were selected based on RF. A further test was then conducted to confirm whether learning from the selected variables gives better results.
ISSN: 2151-1535
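
The abstract outlines a two-stage workflow: rank input variables by RF importance, then learn from the selected variables only. The record gives no implementation details, so the following is a minimal sketch of that idea using scikit-learn, with synthetic arrays standing in for the spectral bands and CNN-derived feature maps; the variable count, selection cutoff, and classifier settings here are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of the two-stage idea in the abstract:
# (1) rank variables by RF importance, (2) retrain on the selected subset.
# The CNN feature extraction is mocked with random arrays (assumption:
# 32 per-pixel variables, 2000 labeled pixels, binary labels such as
# "Lei bamboo" vs. "other subtropical forest").
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 32))          # stand-in for bands + CNN features
y = rng.integers(0, 2, size=2000)        # stand-in class labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Stage 1: fit an RF and rank variables by impurity-based importance.
rf_full = RandomForestClassifier(n_estimators=200, random_state=0)
rf_full.fit(X_train, y_train)
order = np.argsort(rf_full.feature_importances_)[::-1]
top = order[:10]                         # keep top 10 (arbitrary cutoff)

# Stage 2: retrain on the selected variables only and compare accuracy.
rf_sel = RandomForestClassifier(n_estimators=200, random_state=0)
rf_sel.fit(X_train[:, top], y_train)

print("all variables:", accuracy_score(y_test, rf_full.predict(X_test)))
print("selected only:", accuracy_score(y_test, rf_sel.predict(X_test[:, top])))
```

On real imagery, the selected subset would be whichever bands and CNN feature maps RF ranks highest; with random inputs as here, both accuracies simply hover near chance.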