Learning Cross-Modal Aligned Representation With Graph Embedding
The main task of cross-modal analysis is to learn a discriminative representation shared across different modalities. To obtain aligned representations, conventional approaches either construct and optimize a linear projection or train a deep multi-layer architecture, yet it is difficul...
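As a rough illustration of the linear-projection route the abstract mentions, the following toy sketch (hypothetical data and objective, not the paper's exact algorithm) learns one linear projection per modality into a shared label space, with a graph-embedding (Laplacian) regularizer that pulls same-class samples together:

```python
import numpy as np

# Toy sketch: align two modalities with per-modality linear projections,
# regularized by a label-derived graph Laplacian (assumed setup, not the
# authors' method).
rng = np.random.default_rng(0)
n, d_img, d_txt, c = 90, 20, 12, 3            # samples, feature dims, classes
labels = np.repeat(np.arange(c), n // c)
T = np.eye(c)[labels]                         # one-hot targets, shape (n, c)

# Synthetic modality features carrying the same class signal plus noise
X = T @ rng.normal(size=(c, d_img)) + 0.3 * rng.normal(size=(n, d_img))
Y = T @ rng.normal(size=(c, d_txt)) + 0.3 * rng.normal(size=(n, d_txt))

# Graph adjacency from labels; L = D - W is the graph Laplacian
W = (labels[:, None] == labels[None, :]).astype(float)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

lam = 0.1                                     # graph-embedding weight

def fit_projection(F, d):
    # Closed form of: min_P ||F P - T||_F^2 + lam * tr(P^T F^T L F P)
    # (ridge-style normal equations; small jitter keeps the solve stable)
    return np.linalg.solve(F.T @ F + lam * F.T @ L @ F + 1e-6 * np.eye(d),
                           F.T @ T)

U, V = fit_projection(X, d_img), fit_projection(Y, d_txt)
A, B = X @ U, Y @ V                           # aligned representations

# Cross-modal retrieval check: image-to-text cosine similarity
A_n = A / np.linalg.norm(A, axis=1, keepdims=True)
B_n = B / np.linalg.norm(B, axis=1, keepdims=True)
sim = A_n @ B_n.T
same_class = sim[labels[:, None] == labels[None, :]].mean()
cross_class = sim[labels[:, None] != labels[None, :]].mean()
```

On this toy data, same-class image/text pairs end up with noticeably higher cosine similarity than cross-class pairs, which is the aligned-representation property the abstract describes.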
Main Authors: | Youcai Zhang, Jiayan Cao, Xiaodong Gu |
Format: | Article |
Language: | English |
Published: | IEEE, 2018-01-01 |
Series: | IEEE Access |
Online Access: | https://ieeexplore.ieee.org/document/8543794/ |
Similar Items
- Semantic Consistency Cross-Modal Retrieval With Semi-Supervised Graph Regularization
  by: Gongwen Xu, et al.
  Published: (2020-01-01)
- Combination subspace graph learning for cross-modal retrieval
  by: Gongwen Xu, et al.
  Published: (2020-06-01)
- Deep Semantic Cross Modal Hashing Based on Graph Similarity of Modal-Specific
  by: Junzheng Li
  Published: (2021-01-01)
- Cross-Modal Retrieval via Similarity-Preserving Learning and Semantic Average Embedding
  by: Tao Zhi, et al.
  Published: (2020-01-01)
- On the Limitations of Visual-Semantic Embedding Networks for Image-to-Text Information Retrieval
  by: Yan Gong, et al.
  Published: (2021-07-01)