Indexed in:
Abstract:
Owing to the diversity among tumor lesions and the low contrast between lesions and their surroundings, extracting discriminative features from medical images remains challenging. To improve the representation of such complex objects, encoder-decoder architectures have been proposed for biomedical segmentation. However, most of them fuse coarse-grained and fine-grained features directly, which introduces a semantic gap. To bridge this gap and fuse features more effectively, we propose a style consistency loss that constrains semantic similarity when combining encoder and decoder features. We compare the proposed U-Net with the style consistency constraint against state-of-the-art segmentation networks, including FCN, the original U-Net, and U-Net with residual blocks. Experimental results on LiTS-2017 show that our method achieves gains of 1.7% in liver Dice and 3.11% in tumor Dice over U-Net. © Springer Nature Switzerland AG 2019.
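The abstract does not give the exact formulation of the style consistency loss. A minimal sketch is shown below, assuming a Gram-matrix style term (as in neural style transfer) applied between each encoder feature map and the decoder feature map it is fused with through a skip connection; the class name StyleConsistencyLoss, the weight lambda_style, and the Gram-based formulation are illustrative assumptions, not the authors' exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    # feat: (N, C, H, W) -> channel-wise Gram matrix (N, C, C),
    # normalized by channel and spatial size so the scale is resolution-independent.
    n, c, h, w = feat.shape
    f = feat.view(n, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

class StyleConsistencyLoss(nn.Module):
    """Hypothetical style term: penalizes the Gram-matrix discrepancy between an
    encoder feature map and the decoder feature map it is concatenated with."""
    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        return F.mse_loss(gram_matrix(enc_feat), gram_matrix(dec_feat))

# Illustrative usage: add the style term to the usual segmentation loss
# (e.g. Dice or cross-entropy), summed over the skip-connection levels.
# total_loss = seg_loss + lambda_style * sum(
#     StyleConsistencyLoss()(e, d) for e, d in zip(encoder_feats, decoder_feats))
```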
Keywords:
Corresponding author:
Email address:
Source:
ISSN: 0302-9743
Year: 2019
Volume: 11859 LNCS
Pages: 390-396
Language: English
Affiliated department: