Multimodal Data Fusion Using Non-Sparse Multi-Kernel Learning With Regularized Label Softening


Bibliographic Details
Main Authors: Peihua Wang, Chengyu Qiu, Jiali Wang, Yulong Wang, Jiaxi Tang, Bin Huang, Jian Su, Yuanpeng Zhang
Format: Article
Language: English
Published: IEEE 2021-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Subjects:
Online Access: https://ieeexplore.ieee.org/document/9449986/
Description
Summary: In practical applications, multiple sensors are often used for data acquisition so as to obtain a multimodal description of the same object. How to fuse multimodal data effectively has become a challenging problem in many scenarios, including remote sensing. Non-sparse multi-kernel learning has been applied successfully to multimodal data fusion because it makes full use of multiple kernels. Most existing models assume that, during training, the non-sparse combination of multiple kernels approximates a strict binary label matrix as closely as possible. This assumption is so restrictive that label fitting has very little freedom. To address this issue, in this article we develop a novel non-sparse multi-kernel model for multimodal data fusion. Specifically, we introduce a label softening strategy that softens the binary label matrix, giving label fitting more freedom. In addition, we introduce a regularization term based on manifold learning to counteract the overfitting caused by label softening. Experimental results on one synthetic dataset, several UCI multimodal datasets, and one multimodal remote sensing dataset demonstrate the promising performance of the proposed model.
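The two ingredients named in the abstract can be illustrated with a minimal sketch: one base kernel per modality fused with non-sparse (all-nonzero) weights, and binary labels softened so the regression targets have extra freedom. This is not the paper's algorithm; the data, the fixed uniform kernel weights, the epsilon-dragging-style slack, and the ridge-style solver below are all illustrative assumptions (the paper learns the weights and the slack jointly, with a manifold-based regularizer).

```python
import numpy as np

# Toy two-modality data; all shapes and parameters are assumptions.
rng = np.random.default_rng(0)
X1 = rng.normal(size=(20, 3))                    # modality 1 features
X2 = rng.normal(size=(20, 5))                    # modality 2 features
Y = np.repeat(np.eye(2), 10, axis=0)             # strict binary label matrix

def rbf_kernel(X, gamma=0.5):
    # Gaussian kernel on one modality's features.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Non-sparse multi-kernel fusion: every base kernel gets a nonzero weight.
# Uniform weights here for the sketch; the paper optimizes them.
K1, K2 = rbf_kernel(X1), rbf_kernel(X2)
beta = np.array([0.5, 0.5])
K = beta[0] * K1 + beta[1] * K2

# Label softening: drag positive entries above 1 and negative entries
# below 0 by a nonnegative slack M, so targets need not be exactly 0/1.
B = np.where(Y > 0, 1.0, -1.0)                   # dragging direction
M = 0.1 * rng.random(Y.shape)                    # nonnegative slack (random here)
Y_soft = Y + B * M

# Fit a ridge-style kernel classifier to the softened targets.
lam = 1e-3
A = np.linalg.solve(K + lam * np.eye(len(K)), Y_soft)
pred = (K @ A).argmax(axis=1)
acc = (pred == Y.argmax(axis=1)).mean()
```

Softening widens the margin the fit is allowed to use: positive targets may exceed 1 and negative targets may fall below 0, which is the extra freedom the abstract refers to, and also why a regularizer is needed to keep the fit from overfitting the slack.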
ISSN: 2151-1535