Indexed in:
Abstract:
Deep learning-based segmentation algorithms for medical images require massive training datasets with accurate annotations, which is costly since manual labeling from scratch takes substantial human effort. Interactive image segmentation is therefore important and may greatly improve the efficiency and accuracy of medical image labeling. Some interactive segmentation methods (e.g., Deep Extreme Cut and Deepgrow) improve labeling through minimal interactive input. However, these methods only utilize the initial manual input, and existing segmentation results (such as annotations produced by nonprofessionals or by conventional segmentation algorithms) cannot be exploited. In this paper, an interactive segmentation method is proposed that uses both existing segmentation results and human interactive information to optimize the segmentation progressively. In this framework, the user only needs to click on the foreground or background of the target in the medical image; the algorithm adaptively learns the correlation between them and automatically completes the segmentation of the target. The main contributions of this paper are: (1) We adapted and applied a convolutional neural network that takes medical image data and the user's click information as input to achieve more accurate segmentation of medical images. (2) We designed an iterative training strategy so that the model can handle a varying number of input clicks. (3) We designed an algorithm based on false-positive and false-negative regions to simulate the user's clicks, so as to provide enough training data. With the proposed method, users can easily extract the region of interest or refine the segmentation results with multiple clicks. Experimental results on 6 medical image segmentation tasks show that the proposed method achieves more accurate segmentation within at most five clicks. © 2021 SPIE.
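The click-simulation idea in contribution (3) can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's exact sampling rule (which the abstract does not specify): it compares a prediction against the ground truth, picks the dominant error type (false negatives vs. false positives), and places one corrective click near the center of the largest connected error region. The function name `simulate_click` and the centroid-based placement heuristic are assumptions.

```python
import numpy as np
from scipy import ndimage


def simulate_click(gt, pred):
    """Simulate one corrective user click from segmentation errors.

    gt, pred: binary NumPy arrays of equal shape (1 = foreground).
    Returns (row, col, label) where label is 1 for a foreground click
    (placed in a false-negative region) or 0 for a background click
    (placed in a false-positive region), or None if pred matches gt.
    """
    fn = np.logical_and(gt == 1, pred == 0)  # missed foreground
    fp = np.logical_and(gt == 0, pred == 1)  # spurious foreground

    # Click in whichever error type dominates (assumed heuristic).
    errors = fn if fn.sum() >= fp.sum() else fp
    if not errors.any():
        return None

    # Restrict to the largest connected error component.
    labeled, n = ndimage.label(errors)
    sizes = ndimage.sum(errors, labeled, range(1, n + 1))
    largest = np.argmax(sizes) + 1
    ys, xs = np.nonzero(labeled == largest)

    # Place the click at the component pixel nearest its centroid.
    cy, cx = ys.mean(), xs.mean()
    i = np.argmin((ys - cy) ** 2 + (xs - cx) ** 2)
    click_label = 1 if errors is fn else 0
    return int(ys[i]), int(xs[i]), click_label
```

During iterative training, such simulated clicks would be rendered into an extra input channel (e.g., as disks or Gaussian blobs) and concatenated with the image and the current prediction, so the network learns to correct its own errors given each new click.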
Keywords:
Corresponding author information:
Email address: