
Authors:

Jia, Xibin (贾熹滨) | Liu, Yunfeng | Yang, Zhenghan | Yang, Dawei

Indexed in:

CPCI-S, SCIE, PubMed

Abstract:

Background: Deep-learning segmentation models have gradually been applied to biomedical images and now achieve state-of-the-art performance in 3D biomedical segmentation. However, most existing biomedical segmentation research considers only a single type of medical image produced by one examination method. In practical clinical radiology, multiple imaging examinations are usually required for a final diagnosis, especially for severe diseases such as cancer. We therefore investigate how to make full use of the complementary information in multi-modal images by exploring effective deep-network-based multi-modality fusion, guided by radiologists' clinical experience in image analysis.

Methods: Drawing on the diagnostic experience of human radiologists, we propose a new self-attention-aware mechanism that improves segmentation performance by paying different attention to different modalities and different symptoms. First, we propose a multi-path encoder-decoder deep network for 3D biomedical segmentation. Second, to leverage the complementary information among modalities, we introduce an attention structure called the Multi-Modality Self-Attention Aware (MMSA) convolution. The multi-modal images used in this paper are different MR scanning modalities, which are fed into the network separately. MMSA then performs a self-attention weighted fusion of the multi-modal features, adaptively adjusting the fusion weights according to the contribution of each modality and each feature (reflecting different symptoms) as learned from the labeled data.

Results: Experiments were conducted on the public competition dataset BRATS-2015. Our method achieves Dice scores of 0.8726, 0.6563, and 0.8313 for the whole tumor, the tumor core, and the enhancing tumor core, respectively; compared with a U-Net with SE blocks, these scores are higher by 0.0212, 0.031, and 0.0304.

Conclusions: We present a multi-modality self-attention-aware convolution that produces better segmentation results through an adaptive weighted fusion mechanism exploiting multiple medical image modalities. Experimental results demonstrate the effectiveness of our method and its promise for multi-modality fusion-based medical image analysis.
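The record does not include the authors' implementation of the MMSA convolution. Below is a minimal sketch in PyTorch of the fusion idea the abstract describes: per-modality 3D feature maps are squeezed by global pooling, scored by a small MLP, and fused with attention weights normalized across modalities. The module name MMSAFusion, the layer sizes, and the SE-style squeeze-and-excite design are assumptions for illustration, loosely modeled on the SE block the paper compares against, not the published method.

```python
# Hypothetical sketch of a multi-modality self-attention fusion module.
# Assumptions: SE-style gating generalized across modalities; layer sizes
# and the softmax-over-modalities design are illustrative, not from the paper.
import torch
import torch.nn as nn


class MMSAFusion(nn.Module):
    """Fuses per-modality 3D feature maps with learned attention weights."""

    def __init__(self, num_modalities: int, channels: int, reduction: int = 4):
        super().__init__()
        self.num_modalities = num_modalities
        # Squeeze: global average pool each modality's feature map to a vector.
        self.pool = nn.AdaptiveAvgPool3d(1)
        # Excite: a small MLP scores every (modality, channel) pair.
        self.fc = nn.Sequential(
            nn.Linear(num_modalities * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, num_modalities * channels),
        )

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats: list of M tensors, each (B, C, D, H, W), one per MR modality.
        b, c = feats[0].shape[:2]
        stacked = torch.stack(feats, dim=1)              # (B, M, C, D, H, W)
        squeezed = torch.cat(
            [self.pool(f).flatten(1) for f in feats], dim=1
        )                                                 # (B, M*C)
        scores = self.fc(squeezed).view(b, self.num_modalities, c)
        # Softmax across modalities: for each channel, the weights over the
        # modalities sum to 1, so fusion adaptively emphasizes whichever
        # modality contributes most to that feature.
        weights = torch.softmax(scores, dim=1)            # (B, M, C)
        weights = weights.view(b, self.num_modalities, c, 1, 1, 1)
        return (stacked * weights).sum(dim=1)             # (B, C, D, H, W)


if __name__ == "__main__":
    # Usage: fuse encoder features from four MR modalities
    # (e.g. T1, T1c, T2, FLAIR in BRATS-style data).
    fusion = MMSAFusion(num_modalities=4, channels=32)
    feats = [torch.randn(1, 32, 16, 32, 32) for _ in range(4)]
    fused = fusion(feats)
    print(fused.shape)  # torch.Size([1, 32, 16, 32, 32])
```

The softmax over the modality axis is one plausible way to realize the "adaptively adjust the fusion weights" behavior: each channel receives a convex combination of the modalities, so the network can learn, per feature, which examination is most informative.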

Keywords:

3D biomedical segmentation; Attention mechanism; Multi-modal fusion

Author affiliations:

  • [ 1 ] [Jia, Xibin]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 2 ] [Liu, Yunfeng]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 3 ] [Yang, Zhenghan]Capital Med Univ, Beijing Friendship Hosp, Dept Radiol, Beijing, Peoples R China
  • [ 4 ] [Yang, Dawei]Capital Med Univ, Beijing Friendship Hosp, Dept Radiol, Beijing, Peoples R China

Corresponding author:

  • [Yang, Zhenghan]Capital Med Univ, Beijing Friendship Hosp, Dept Radiol, Beijing, Peoples R China

Source:

BMC MEDICAL INFORMATICS AND DECISION MAKING

Year: 2020

Volume: 20

Impact Factor: 3.500 (JCR@2022)

ESI Discipline: CLINICAL MEDICINE

ESI Highly Cited Threshold: 33

JCR Quartile: Q3

Citation counts:

WoS Core Collection citations: 4

Scopus citations: 6

ESI Highly Cited Papers listed: 0
