
Authors:

Wang, Boyue | Ma, Yujian | Li, Xiaoyan | Gao, Junbin | Hu, Yongli | Yin, Baocai (Scholar profile: 尹宝才)

Indexed in:

Scopus; SCIE

Abstract:

The objective of visual question answering (VQA) is to adequately comprehend a question and identify the relevant content in an image that can provide an answer. Existing VQA approaches often combine visual and question features directly to create a unified cross-modality representation for answer inference. However, this kind of approach fails to bridge the semantic gap between the visual and text modalities, resulting in misaligned cross-modality semantics and an inability to match key visual content accurately. In this article, we propose the caption bridge-based cross-modality alignment and contrastive learning (CBAC) model to address this issue. The CBAC model aims to reduce the semantic gap between the modalities and consists of a caption-based cross-modality alignment module and a visual-caption (V-C) contrastive learning module. By utilizing an auxiliary caption, which shares the same modality as the question and has a closer semantic association with the visual content, we effectively reduce the semantic gap: the caption is matched separately with the question and with the visual content to generate pre-alignment features for each, which are then used in the subsequent fusion process. We also exploit the fact that V-C pairs exhibit stronger semantic connections than question-visual (Q-V) pairs, applying a contrastive learning mechanism to visual and caption pairs to further enhance the semantic alignment capabilities of the single-modality encoders. Extensive experiments on three benchmark datasets demonstrate that the proposed model outperforms previous state-of-the-art VQA models, and ablation experiments confirm the effectiveness of each module. Furthermore, we conduct a qualitative analysis by visualizing the attention matrices to assess the reasoning reliability of the proposed model.
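
The V-C contrastive learning described in the abstract follows the familiar pattern of pulling matched visual-caption pairs together in a shared embedding space while pushing mismatched pairs apart. Below is a minimal illustrative sketch of such a mechanism as a symmetric InfoNCE loss in PyTorch; it is not the authors' released code, and the function name, feature dimension, and temperature value are assumptions made purely for illustration.

import torch
import torch.nn.functional as F

def vc_contrastive_loss(visual_feats: torch.Tensor,
                        caption_feats: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of visual-caption (V-C) pairs:
    matched pairs (the diagonal) are positives; all other in-batch
    pairings act as negatives. Hypothetical helper, for illustration."""
    v = F.normalize(visual_feats, dim=-1)   # (B, D) pooled visual embeddings
    c = F.normalize(caption_feats, dim=-1)  # (B, D) pooled caption embeddings
    logits = v @ c.t() / temperature        # (B, B) cosine-similarity logits
    targets = torch.arange(v.size(0), device=v.device)  # diagonal indices
    # Average the visual-to-caption and caption-to-visual directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage with stand-in features for a batch of 8 V-C pairs:
loss = vc_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))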

Keywords:

cross-modality analysis; caption bridge; visual question answering (VQA); contrastive learning

Author Affiliations:

  • [ 1 ] [Wang, Boyue]Beijing Univ Technol, Beijing Artificial Intelligence Inst, Fac Informat Technol, Beijing Municipal Key Lab Multimedia & Intelligent, Beijing 100124, Peoples R China
  • [ 2 ] [Ma, Yujian]Beijing Univ Technol, Beijing Artificial Intelligence Inst, Fac Informat Technol, Beijing Municipal Key Lab Multimedia & Intelligent, Beijing 100124, Peoples R China
  • [ 3 ] [Li, Xiaoyan]Beijing Univ Technol, Beijing Artificial Intelligence Inst, Fac Informat Technol, Beijing Municipal Key Lab Multimedia & Intelligent, Beijing 100124, Peoples R China
  • [ 4 ] [Hu, Yongli]Beijing Univ Technol, Beijing Artificial Intelligence Inst, Fac Informat Technol, Beijing Municipal Key Lab Multimedia & Intelligent, Beijing 100124, Peoples R China
  • [ 5 ] [Yin, Baocai]Beijing Univ Technol, Beijing Artificial Intelligence Inst, Fac Informat Technol, Beijing Municipal Key Lab Multimedia & Intelligent, Beijing 100124, Peoples R China

Corresponding Author:

  • [Li, Xiaoyan]Beijing Univ Technol, Beijing Artificial Intelligence Inst, Fac Informat Technol, Beijing Municipal Key Lab Multimedia & Intelligent, Beijing 100124, Peoples R China

Source:

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS

ISSN: 2162-237X

Year: 2024

Impact Factor: 10.400 (JCR@2022)

Citation Counts:

WoS Core Collection: 3

Scopus: 2

ESI Highly Cited Papers: 0
