Abstract:
Image-text retrieval has drawn much attention in recent years, and the similarity measure between image and text plays an important role in it. Most existing works focus on learning global coarse-grained or local fine-grained features for similarity computation. However, the large domain gap between the two modalities is often neglected, which makes it difficult to match images and texts effectively. To deal with this problem, we propose to use auxiliary information to reduce the domain gap: captions are generated for the images, and a Caption-Assisted Graph Neural Network (CGNN) is designed to learn the structured relationships among images, captions, and texts. Since the captions and the texts come from the same domain, the domain gap between images and texts can be effectively reduced. With the help of caption information, our model achieves excellent performance on two cross-modal retrieval datasets, Flickr30K and MS-COCO, which demonstrates the effectiveness of our framework. © 2022 Elsevier B.V. All rights reserved.
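The record includes no code. As a rough illustration of the idea described in the abstract, refining image, generated-caption, and text embeddings by message passing over a small graph and then scoring image-text similarity, here is a minimal PyTorch sketch. The class name CaptionAssistedGNN, the 512-dimensional pre-extracted features, the fully connected three-node graph, and the single GRU-based update round are all assumptions made for illustration; the actual CGNN architecture in the paper will differ.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CaptionAssistedGNN(nn.Module):
        """Hypothetical sketch: one round of attention-weighted message
        passing over a 3-node graph (image, caption, text), followed by
        cosine similarity between the refined image and text nodes."""
        def __init__(self, dim=512):
            super().__init__()
            self.msg = nn.Linear(dim, dim)    # message transform
            self.upd = nn.GRUCell(dim, dim)   # node-state update

        def forward(self, img, cap, txt):
            # Stack embeddings into graph nodes: (batch, 3, dim).
            nodes = torch.stack([img, cap, txt], dim=1)
            # Fully connected graph: scaled dot-product attention weights.
            att = torch.softmax(
                nodes @ nodes.transpose(1, 2) / nodes.size(-1) ** 0.5, dim=-1)
            # Aggregate neighbor messages and update each node with a GRU cell.
            msgs = self.msg(att @ nodes)
            b, n, d = nodes.shape
            nodes = self.upd(msgs.reshape(b * n, d),
                             nodes.reshape(b * n, d)).reshape(b, n, d)
            # Similarity between the refined image node and text node.
            return F.cosine_similarity(nodes[:, 0], nodes[:, 2], dim=-1)

    # Toy usage with random stand-ins for pre-extracted embeddings.
    model = CaptionAssistedGNN(dim=512)
    img = torch.randn(4, 512)   # image features (e.g., from a CNN backbone)
    cap = torch.randn(4, 512)   # generated-caption features (text encoder)
    txt = torch.randn(4, 512)   # query-text features (text encoder)
    print(model(img, cap, txt).shape)  # torch.Size([4])

Because the caption node is encoded by the same text encoder as the query text, messages routed through it give the image node a same-domain intermediary, which is the intuition behind using captions to reduce the domain gap.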
Source: PATTERN RECOGNITION LETTERS
ISSN: 0167-8655
Year: 2022
Volume: 161
Page: 137-142
Impact Factor: 5.1 (JCR@2022)
5-Year Impact Factor: 5.100 (JCR@2022)
ESI Discipline: ENGINEERING
ESI HC Threshold: 49
JCR Journal Grade: 2
CAS Journal Grade: 3
SCOPUS Cited Count: 4
ESI Highly Cited Papers on the List: 0