Semi-Supervised Learning Based Semantic Cross-Media Retrieval


Bibliographic Details
Main Authors: Xiyuan Zheng, Wei Zhu, Zhenmei Yu, Meijia Zhang
Format: Article
Language: English
Published: IEEE 2021-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9432800/
Description
Summary: With the advent of the big-data era, information has gradually shifted from a single modality to diversified forms such as image, text, video, and audio. With the growth of multimedia data, the key problem faced by cross-media retrieval technology is how to quickly retrieve multimedia data of different modalities that share the same semantics. At present, many cross-media retrieval techniques are trained on partially annotated samples. In this way, the semantic information of the data cannot be fully utilized, and manual annotation is required, which is labor-intensive, error-prone, and subjective. Therefore, this paper proposes a Semi-Supervised learning based Semantic Cross-Media Retrieval (S3CMR) method to address these problems. The main advantage of this method is that it makes full use of the relationship between the semantic information of labeled and unlabeled samples. At the same time, it integrates a linear regression term, a correlation analysis term, and a feature selection term into a joint cross-media learning framework. These terms interact with one another and embed more semantics in the shared subspace. Furthermore, an iterative method with guaranteed convergence is proposed to solve the formulated optimization problem. Experimental results on three publicly available datasets demonstrate that the proposed method outperforms eight state-of-the-art cross-media retrieval methods.
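The general idea described in the summary can be sketched in code. The following is a minimal illustration only, not the S3CMR method itself (the paper's joint objective and iterative solver are more involved): it assumes synthetic image and text features, learns per-modality ridge-regression projections into a shared semantic (label) space — standing in for the linear regression term — and then performs image-to-text retrieval by cosine similarity in that subspace. All variable names and dimensions are hypothetical.

```python
import numpy as np

# Illustrative sketch of shared-subspace cross-modal retrieval (not the paper's solver).
rng = np.random.default_rng(0)
n, d1, d2, c = 60, 20, 30, 3            # samples, image dim, text dim, semantic classes

Y = np.eye(c)[rng.integers(0, c, n)]     # one-hot semantic labels, shape (n, c)
# Synthetic features: each modality is a noisy linear image of the semantics.
X1 = Y @ rng.normal(size=(c, d1)) + 0.1 * rng.normal(size=(n, d1))  # "image" features
X2 = Y @ rng.normal(size=(c, d2)) + 0.1 * rng.normal(size=(n, d2))  # "text" features

lam = 1e-2
# Ridge-regression projections mapping each modality into the shared label space
# (a stand-in for the linear regression term of a joint cross-media objective).
P1 = np.linalg.solve(X1.T @ X1 + lam * np.eye(d1), X1.T @ Y)
P2 = np.linalg.solve(X2.T @ X2 + lam * np.eye(d2), X2.T @ Y)

Z1, Z2 = X1 @ P1, X2 @ P2                # both modalities in the shared subspace

def cosine_sim(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

S = cosine_sim(Z1, Z2)                    # image-query vs. text-gallery similarities
top1 = S.argmax(axis=1)                   # nearest text item for each image query
acc = (Y[top1].argmax(1) == Y.argmax(1)).mean()
print(f"top-1 cross-modal label match: {acc:.2f}")
```

In a full semi-supervised formulation such as the one the paper describes, the regression term would be complemented by a correlation analysis term coupling the two projections and an l2,1-style feature selection term, with unlabeled samples contributing through the learned semantic structure.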
ISSN:2169-3536