
Authors:

Zhang, Yuqing | Zhang, Yong | Piao, Xinglin | Yuan, Peng | Hu, Yongli | Yin, Baocai

Indexed in:

EI Scopus SCIE

Abstract:

Referring image segmentation identifies object masks in images under the guidance of input natural-language expressions. Many remarkable cross-modal decoders have been devoted to this task, but such models face two key challenges. First, they usually fail to extract fine-grained boundary and gradient information from images. Second, they usually fail to explore language associations among image pixels. In this work, a Multi-scale Gradient-balanced Central Difference Convolution (MG-CDC) and a Graph convolutional network-based Language and Image Fusion (GLIF) are designed for the cross-modal encoder, called Graph-RefSeg. Specifically, in the shallow layers of the encoder, MG-CDC captures comprehensive fine-grained image features; it enhances the perception of target boundaries and provides effective guidance for deeper encoding layers. In each encoder layer, GLIF performs cross-modal fusion, exploring the correlation between every pixel and its corresponding language vectors via a graph neural network. Since the encoder achieves robust cross-modal alignment and context mining, a lightweight decoder suffices for segmentation prediction. Extensive experiments show that the proposed Graph-RefSeg outperforms state-of-the-art methods on three public datasets. Code and models will be made publicly available at .
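The MG-CDC module described above builds on central difference convolution, which augments a vanilla convolution with a term that responds to intensity differences around the kernel centre. As a rough illustrative sketch (not the authors' multi-scale, gradient-balanced implementation), a plain 2D central difference convolution can be written in NumPy; the function name and the balance parameter `theta` are assumptions for illustration:

```python
import numpy as np

def central_difference_conv2d(x, w, theta=0.7):
    """2D central difference convolution (valid padding, stride 1).

    Blends a vanilla convolution with a central-difference term:
        y(p) = sum_n w(n) * x(p+n)  -  theta * x(p) * sum_n w(n)
    so that, for theta > 0, flat regions are suppressed and
    local gradients (edges, boundaries) are emphasised.
    """
    kh, kw = w.shape
    H, W = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    y = np.zeros((out_h, out_w))
    w_sum = w.sum()
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i:i + kh, j:j + kw]          # receptive field
            center = x[i + kh // 2, j + kw // 2]   # centre pixel x(p)
            y[i, j] = (patch * w).sum() - theta * center * w_sum
    return y
```

With `theta = 0` this reduces to an ordinary convolution; with `theta = 1` the response on constant regions vanishes entirely, so only intensity differences contribute — which is why such layers help capture the fine-grained boundary information the abstract highlights.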

Keywords:

image fusion; image segmentation

Author affiliations:

  • [ 1 ] [Zhang, Yuqing]Beijing Univ Technol, Beijing Artificial Intelligence Inst, Fac Informat Technol, Beijing Key Lab Multimedia & Intelligent Software, Beijing, Peoples R China
  • [ 2 ] [Zhang, Yong]Beijing Univ Technol, Beijing Artificial Intelligence Inst, Fac Informat Technol, Beijing Key Lab Multimedia & Intelligent Software, Beijing, Peoples R China
  • [ 3 ] [Piao, Xinglin]Beijing Univ Technol, Beijing Artificial Intelligence Inst, Fac Informat Technol, Beijing Key Lab Multimedia & Intelligent Software, Beijing, Peoples R China
  • [ 4 ] [Hu, Yongli]Beijing Univ Technol, Beijing Artificial Intelligence Inst, Fac Informat Technol, Beijing Key Lab Multimedia & Intelligent Software, Beijing, Peoples R China
  • [ 5 ] [Yin, Baocai]Beijing Univ Technol, Beijing Artificial Intelligence Inst, Fac Informat Technol, Beijing Key Lab Multimedia & Intelligent Software, Beijing, Peoples R China
  • [ 6 ] [Yuan, Peng]China Elect Technol Grp Taiji Co Ltd, Beijing, Peoples R China
  • [ 7 ] [Zhang, Yong]Beijing Univ Technol, Beijing Artificial Intelligence Inst, Fac Informat Technol, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China

Corresponding author:

Zhang, Yong (张勇)

    [Zhang, Yong]Beijing Univ Technol, Beijing Artificial Intelligence Inst, Fac Informat Technol, Beijing Key Lab Multimedia & Intelligent Software, Beijing 100124, Peoples R China

Email address:

Source:

IET IMAGE PROCESSING

ISSN: 1751-9659

Year: 2024

Issue: 4

Volume: 18

Pages: 1083-1095

Impact factor: 2.300 (JCR@2022)

Citation counts:

WoS Core Collection citations:

Scopus citations: 1

ESI highly cited papers listed: 0

Wanfang citations:

Chinese-literature citations:

Views in last 30 days: 0

Affiliated department:
