
Author:

Bai, Yunkun | Sun, Guangmin | Li, Yu | Le Shen | Li Zhang

Indexed by:

CPCI-S; EI; Scopus

Abstract:

Deep learning based segmentation algorithms for medical images require massive training datasets with accurate annotations, which is costly since manually labeling from scratch takes much human effort. Therefore, interactive image segmentation is important and may greatly improve the efficiency and accuracy of medical image labeling. Some interactive segmentation methods (e.g. Deep Extreme Cut and Deepgrow) may improve the labeling through minimal interactive input. However, these methods only utilize the initial manual input, while existing segmentation results (such as annotations produced by nonprofessionals or by conventional segmentation algorithms) cannot be utilized. In this paper, an interactive segmentation method is proposed that makes use of both existing segmentation results and human interactive information to optimize the segmentation results progressively. In this framework, the user only needs to click on the foreground or background of the target in the medical image; the algorithm adaptively learns the correlation between them and automatically completes the segmentation of the target. The main contributions of this paper are: (1) We adjusted and applied a convolutional neural network that takes medical image data and the user's click information as input to achieve more accurate segmentation of medical images. (2) We designed an iterative training strategy so that the model can handle input with different numbers of clicks. (3) We designed an algorithm based on false-positive and false-negative regions to simulate the user's clicks, so as to provide enough training data. By applying the proposed method, users can easily extract the region of interest or modify the segmentation results with multiple clicks. The experimental results on 6 medical image segmentation tasks show that the proposed method achieves more accurate segmentation results with at most five clicks.
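The click-simulation idea in contribution (3) can be sketched as follows: compare the current prediction with the ground truth, take the false-positive and false-negative regions, and place a simulated click inside the larger error region (foreground click for a missed region, background click for an over-segmented one). This is a minimal illustrative sketch, not the paper's exact algorithm; the function name `simulate_click` and the centroid-based click-placement policy are assumptions made here for clarity.

```python
import numpy as np

def simulate_click(pred, gt):
    """Simulate one corrective user click from error regions.

    pred, gt: boolean 2-D masks (current prediction, ground truth).
    Returns (row, col, label) with label 1 for a foreground click
    (false-negative region) or 0 for a background click (false-positive
    region), or None when the prediction already matches the ground truth.
    """
    fp = pred & ~gt           # over-segmented pixels -> background click
    fn = ~pred & gt           # missed pixels -> foreground click
    # One plausible policy: click in whichever error region is larger.
    region, label = (fn, 1) if fn.sum() >= fp.sum() else (fp, 0)
    if not region.any():
        return None
    rows, cols = np.nonzero(region)
    # Place the click near the region centroid, snapped to the nearest
    # pixel that actually lies inside the error region.
    cr, cc = rows.mean(), cols.mean()
    i = np.argmin((rows - cr) ** 2 + (cols - cc) ** 2)
    return int(rows[i]), int(cols[i]), label
```

During iterative training (contribution 2), such simulated clicks would be generated repeatedly: each new click is added to the input click channels and the network is run again, which is how the method can accommodate a varying number of clicks.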

Keyword:

convolutional neural network; medical image annotation; interactive image segmentation

Author Community:

  • [ 1 ] [Bai, Yunkun]Beijing Univ Technol, Fac Informat Technol, 100 PingLeYuan, Beijing 100124, Peoples R China
  • [ 2 ] [Sun, Guangmin]Beijing Univ Technol, Fac Informat Technol, 100 PingLeYuan, Beijing 100124, Peoples R China
  • [ 3 ] [Li, Yu]Beijing Univ Technol, Fac Informat Technol, 100 PingLeYuan, Beijing 100124, Peoples R China
  • [ 4 ] [Le Shen]Tsinghua Univ, Minist Educ, Key Lab Particle & Radiat Imaging, Beijing, Peoples R China
  • [ 5 ] [Li Zhang]Tsinghua Univ, Minist Educ, Key Lab Particle & Radiat Imaging, Beijing, Peoples R China
  • [ 6 ] [Le Shen]Tsinghua Univ, Dept Engn Phys, Beijing 100084, Peoples R China
  • [ 7 ] [Li Zhang]Tsinghua Univ, Dept Engn Phys, Beijing 100084, Peoples R China

Reprint Author's Address:

  • [Le Shen]Tsinghua Univ, Minist Educ, Key Lab Particle & Radiat Imaging, Beijing, Peoples R China; [Le Shen]Tsinghua Univ, Dept Engn Phys, Beijing 100084, Peoples R China

Source :

MEDICAL IMAGING 2021: IMAGE PROCESSING

ISSN: 0277-786X

Year: 2021

Volume: 11596

Language: English

Cited Count:

WoS CC Cited Count: 0

SCOPUS Cited Count: 2

ESI Highly Cited Papers on the List: 0

30 Days PV: 1
