Improving reasoning with contrastive visual information for visual question answering


Bibliographic Details
Main Authors: Yu Long, Pengjie Tang, Hanli Wang, Jian Yu
Format: Article
Language: English
Published: Wiley 2021-09-01
Series: Electronics Letters
Online Access: https://doi.org/10.1049/ell2.12255
Description
Summary: Visual Question Answering (VQA) aims to produce a correct answer from cross-modality inputs, namely a question and visual content. In the general pipeline, information reasoning plays the key role in arriving at a reasonable answer. However, visual information is often not fully exploited by many popular models. To address this challenge, a new strategy is proposed in this work to make full use of visual information during reasoning. Specifically, visual information is divided into two subsets: (1) a question-relevant visual set and (2) a question-irrelevant visual set. Both sets are then employed during reasoning to generate the output. Experiments conducted on the benchmark VQAv2 dataset demonstrate the effectiveness of the proposed strategy. The project page can be found at https://mic.tongji.edu.cn/e6/8d/c9778a190093/page.htm.
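The summary describes partitioning visual information into question-relevant and question-irrelevant sets before reasoning, but no reference code is linked in this record. The following is a minimal PyTorch sketch of one plausible realization, assuming detector region features and a pooled question embedding as inputs; the module name ContrastiveVisualSplit, the top-k split, the pooling choices, and all parameter names are hypothetical illustrations, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class ContrastiveVisualSplit(nn.Module):
    """Hypothetical sketch: split image region features into question-relevant
    and question-irrelevant subsets via question-guided attention, then use
    both pooled summaries (plus the question) for answer prediction."""

    def __init__(self, vis_dim=2048, q_dim=1024, hidden=512,
                 num_answers=3129, top_k=10):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, hidden)
        self.q_proj = nn.Linear(q_dim, hidden)
        # Classifier sees relevant summary, irrelevant summary, and question.
        self.classifier = nn.Linear(3 * hidden, num_answers)
        self.top_k = top_k  # illustrative split criterion; the paper may differ

    def forward(self, regions, question):
        # regions: (B, N, vis_dim) region features (e.g. from Faster R-CNN),
        # question: (B, q_dim) pooled question embedding; requires N >= top_k.
        v = self.vis_proj(regions)                      # (B, N, hidden)
        q = self.q_proj(question)                       # (B, hidden)
        scores = torch.einsum('bnh,bh->bn', v, q)       # question-region affinity
        # Top-k regions form the question-relevant set; the rest are irrelevant.
        idx = scores.topk(self.top_k, dim=1).indices
        mask = torch.zeros_like(scores, dtype=torch.bool)
        mask.scatter_(1, idx, True)
        # Attention-pool the relevant set only.
        attn = scores.masked_fill(~mask, float('-inf')).softmax(dim=1)
        rel = torch.einsum('bn,bnh->bh', attn, v)       # relevant summary
        # Mean-pool the irrelevant set (one simple choice among many).
        irr_mask = (~mask).float()
        irr = (v * irr_mask.unsqueeze(-1)).sum(1) \
            / irr_mask.sum(1, keepdim=True).clamp(min=1)
        return self.classifier(torch.cat([rel, irr, q], dim=-1))


# Usage sketch with randomly generated stand-in features:
model = ContrastiveVisualSplit()
logits = model(torch.randn(2, 36, 2048), torch.randn(2, 1024))  # (2, 3129)
```

The key design point the abstract highlights is that the question-irrelevant set is not discarded: both summaries are fed to the classifier, which is what the sketch above mirrors.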
ISSN: 0013-5194, 1350-911X