Multitask Multisource Deep Correlation Filter for Remote Sensing Data Fusion
With the amount of remote sensing data increasing at an extremely fast pace, machine learning-based techniques have been shown to perform well in many applications. However, most existing real-time methods are based on single-modal image data. Although a few approaches...
Main Authors: | Xu Cheng, Yuhui Zheng, Jianwei Zhang, Zhangjing Yang |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing |
Subjects: | Deep learning, multimodal, multitask, information fusion, remote sensing |
Online Access: | https://ieeexplore.ieee.org/document/9119205/ |
id |
doaj-b1490174fc354947b4d58f966a3e3f8d |
record_format |
Article |
spelling |
doaj-b1490174fc354947b4d58f966a3e3f8d (indexed 2021-06-03T23:01:33Z), eng, IEEE, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, ISSN 2151-1535, 2020-01-01, Vol. 13, pp. 3723-3734, DOI 10.1109/JSTARS.2020.3002885, article 9119205.
Title: Multitask Multisource Deep Correlation Filter for Remote Sensing Data Fusion
Authors:
Xu Cheng (https://orcid.org/0000-0003-2355-9010), Jiangsu Key Laboratory of Big Data Analysis Technology, School of Computer and Software, Engineering Research Center of Digital Forensics Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, China
Yuhui Zheng (https://orcid.org/0000-0002-4408-3800), Jiangsu Key Laboratory of Big Data Analysis Technology, School of Computer and Software, Engineering Research Center of Digital Forensics Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, China
Jianwei Zhang, School of Mathematics and Statistics, Nanjing University of Information Science and Technology, Nanjing, China
Zhangjing Yang, School of Information Engineering, Nanjing Audit University, Nanjing, China
Abstract: With the amount of remote sensing data increasing at an extremely fast pace, machine learning-based techniques have been shown to perform well in many applications. However, most existing real-time methods are based on single-modal image data. Although a few approaches use different source images to represent the object via a fusion scheme, they may not be appropriate for multimodality information processing. In addition, these methods hardly benefit from end-to-end network training due to implementation difficulty and computational cost. In this article, we propose a multitask multisource information fusion method within the deep learning and correlation filter frameworks, applied to the fields of tracking and remote sensing data processing. First, the contribution of each individual layer from each data source inside the deep network model is treated as a task; the proposed method exploits interdependencies among the different source data and tasks to learn the deep network parameters and filters jointly, improving performance. Second, we present an effective object appearance selection scheme that adaptively captures object appearance changes via a deep learning network and then integrates information from different modalities to achieve fusion; the different sources offer complementary properties that improve robustness from different aspects. Third, we further extend the proposed approach to semantic labeling in remote sensing, where the layers' sensitivity is used to verify robustness across different classes. Extensive experiments on five benchmarks show that the proposed approach performs favorably against the state of the art.
URL: https://ieeexplore.ieee.org/document/9119205/
Keywords: Deep learning, multimodal, multitask, information fusion, remote sensing |
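The core mechanism the abstract describes, per-source correlation responses combined through a fusion step, can be sketched roughly as follows. This is a minimal illustration of generic correlation-filter fusion, not the authors' actual formulation: the feature maps, learned filters, and fusion weights below are placeholder assumptions.

```python
import numpy as np

def correlation_response(features, filt):
    """Correlate a feature map with a filter in the Fourier domain
    (the standard correlation-filter trick: elementwise product with
    the conjugate spectrum, then inverse FFT)."""
    F = np.fft.fft2(features)
    H = np.fft.fft2(filt)
    return np.real(np.fft.ifft2(F * np.conj(H)))

def fuse_responses(responses, weights):
    """Weighted sum of per-source response maps."""
    out = np.zeros_like(responses[0])
    for r, w in zip(responses, weights):
        out += w * r
    return out

rng = np.random.default_rng(0)
# Two hypothetical sources (e.g. optical and infrared), 32x32 feature maps.
feats = [rng.standard_normal((32, 32)) for _ in range(2)]
filts = [rng.standard_normal((32, 32)) for _ in range(2)]
responses = [correlation_response(f, h) for f, h in zip(feats, filts)]
fused = fuse_responses(responses, weights=[0.6, 0.4])
# Peak of the fused map gives the predicted target location.
peak = np.unravel_index(np.argmax(fused), fused.shape)
```

In the paper itself the filters and network parameters are learned jointly across tasks, whereas this sketch uses fixed random stand-ins to show only the response-fusion step.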
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Xu Cheng, Yuhui Zheng, Jianwei Zhang, Zhangjing Yang |
author_sort |
Xu Cheng |
title |
Multitask Multisource Deep Correlation Filter for Remote Sensing Data Fusion |
publisher |
IEEE |
series |
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing |
issn |
2151-1535 |
publishDate |
2020-01-01 |
description |
With the amount of remote sensing data increasing at an extremely fast pace, machine learning-based techniques have been shown to perform well in many applications. However, most existing real-time methods are based on single-modal image data. Although a few approaches use different source images to represent the object via a fusion scheme, they may not be appropriate for multimodality information processing. In addition, these methods hardly benefit from end-to-end network training due to implementation difficulty and computational cost. In this article, we propose a multitask multisource information fusion method within the deep learning and correlation filter frameworks, applied to the fields of tracking and remote sensing data processing. First, the contribution of each individual layer from each data source inside the deep network model is treated as a task; the proposed method exploits interdependencies among the different source data and tasks to learn the deep network parameters and filters jointly, improving performance. Second, we present an effective object appearance selection scheme that adaptively captures object appearance changes via a deep learning network and then integrates information from different modalities to achieve fusion; the different sources offer complementary properties that improve robustness from different aspects. Third, we further extend the proposed approach to semantic labeling in remote sensing, where the layers' sensitivity is used to verify robustness across different classes. Extensive experiments on five benchmarks show that the proposed approach performs favorably against the state of the art. |
topic |
Deep learning, multimodal, multitask, information fusion, remote sensing |
url |
https://ieeexplore.ieee.org/document/9119205/ |
_version_ |
1721398816987414528 |