
Multi-organ Segmentation over Partially Labeled Datasets with Multi-scale Feature Abstraction

The shortage of fully annotated datasets has been a limiting factor in developing deep learning based image segmentation algorithms, and the problem becomes more pronounced in multi-organ segmentation. In this paper, we propose a unified training strategy that enables a novel multi-scale deep neural network to be trained on multiple partially labeled datasets for multi-organ segmentation. In addition, a new network architecture for multi-scale feature abstraction is proposed to integrate pyramid input and feature analysis into a U-shape pyramid structure. To bridge the semantic gap caused by directly merging features from different scales, an equal convolutional depth mechanism is introduced. Furthermore, we employ a deep supervision mechanism to refine the outputs at different scales. To fully leverage the segmentation features from all the scales, we design an adaptive weighting layer that fuses the outputs automatically. All these mechanisms together are integrated into a Pyramid Input Pyramid Output Feature Abstraction Network (PIPO-FAN). The proposed method was evaluated on four publicly available datasets, BTCV, LiTS, KiTS and Spleen, where it achieved very promising performance. The source code of this work is publicly available at https://github.com/DIAL-RPI/PIPO-FAN so that others can reproduce the work and build their own models using the introduced mechanisms.
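To illustrate the idea of fusing predictions from multiple scales with learned weights, below is a minimal PyTorch sketch. It is not the authors' implementation (see the GitHub repository for the exact formulation); the module name `AdaptiveScaleFusion`, the use of scalar per-scale weights, and the bilinear upsampling are assumptions made for illustration only.

```python
# A minimal sketch of adaptive multi-scale fusion: per-scale segmentation
# logits are upsampled to a common resolution and combined with learned,
# softmax-normalized weights. All names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveScaleFusion(nn.Module):
    def __init__(self, num_scales: int):
        super().__init__()
        # One learnable scalar weight per scale, normalized by softmax
        self.scale_logits = nn.Parameter(torch.zeros(num_scales))

    def forward(self, outputs):
        # outputs: list of segmentation logits, one per scale,
        # each of shape (N, C, H_s, W_s)
        target_size = outputs[0].shape[2:]
        # Resize every scale's prediction to the finest resolution
        resized = [
            F.interpolate(o, size=target_size, mode="bilinear",
                          align_corners=False)
            for o in outputs
        ]
        weights = torch.softmax(self.scale_logits, dim=0)
        # Weighted sum of the per-scale predictions
        fused = sum(w * o for w, o in zip(weights, resized))
        return fused


if __name__ == "__main__":
    fusion = AdaptiveScaleFusion(num_scales=3)
    preds = [torch.randn(1, 5, s, s) for s in (256, 128, 64)]
    print(fusion(preds).shape)  # torch.Size([1, 5, 256, 256])
```

Whether the fusion weights are scalar per scale or spatially varying is a design choice; the released code linked above contains the version used in the paper.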

Reference

X. Fang and P. Yan, "Multi-organ Segmentation over Partially Labeled Datasets with Multi-scale Feature Abstraction," IEEE Transactions on Medical Imaging, Vol. 39, No. 11, pp. 3619-3629, 2020.

Bibtex

@ARTICLE{fang_tmi_multi-organ,
author={Fang, Xi and Yan, Pingkun},
journal={IEEE Transactions on Medical Imaging},
title={Multi-organ Segmentation over Partially Labeled Datasets with Multi-scale Feature Abstraction},
year={2020},
volume={39},
number={11},
pages={3619--3629},
}