A Model-Driven Stack-Based Fully Convolutional Network for Pancreas Segmentation

3 Mar 2019 · Hao Li, Jun Li, Xiaozhu Lin, Xiaohua Qian

The irregular geometry and high inter-slice variability of the human pancreas in computed tomography (CT) scans make accurate segmentation of this crucial organ a challenging task for existing data-driven deep learning methods. To address this problem, we present MDS-Net, a novel model-driven stack-based fully convolutional network with a sliding-window fusion algorithm for pancreas segmentation. The MDS-Net cost function combines a data approximation term and a prior-knowledge regularization term with a stack scheme that captures and fuses two-dimensional (2D) and local three-dimensional (3D) context information. Specifically, 3D CT scans are divided into multiple stacks of adjacent slices to capture local spatial context. To highlight the contribution of individual slices, the inter-slice relationships within each stack are also incorporated into the MDS-Net framework. To implement this model-driven method, we construct a stack-based U-Net architecture and derive its back-propagation procedure for end-to-end training. Furthermore, a sliding-window fusion algorithm is used to improve the consistency of predictions across adjacent CT slices and within each stack. Finally, extensive quantitative assessments on the NIH Pancreas-CT dataset demonstrate that MDS-Net achieves higher pancreatic segmentation accuracy and reliability than other state-of-the-art methods.
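The abstract describes the inference pipeline only qualitatively, so the snippet below is a minimal, hypothetical sketch rather than the authors' implementation: it splits a 3D CT volume into overlapping stacks of adjacent slices, runs a placeholder stack-based network (`predict_stack`) on each stack, and fuses the overlapping per-slice probability maps by simple averaging. The stack size, stride, fusion rule, and threshold are assumptions, as the paper's exact settings are not given here; likewise, the cost function is only described as a data approximation term plus a prior-knowledge regularization term, for which a generic form would be E(S) = E_data(S, X) + λ · E_prior(S).

```python
import numpy as np

def sliding_window_fusion(volume, predict_stack, stack_size=5, stride=1):
    """Fuse stack-wise pancreas predictions over a CT volume of shape (D, H, W).

    `predict_stack` is a stand-in for a trained stack-based segmentation network:
    it maps a (stack_size, H, W) stack of slices to per-slice foreground
    probabilities of the same shape. Assumes D >= stack_size.
    """
    depth = volume.shape[0]
    prob_sum = np.zeros(volume.shape, dtype=np.float32)  # accumulated probabilities
    counts = np.zeros(depth, dtype=np.float32)           # coverage count per slice

    for start in range(0, depth - stack_size + 1, stride):
        stack = volume[start:start + stack_size]          # local 3D context for this window
        probs = predict_stack(stack)                      # (stack_size, H, W) probabilities
        prob_sum[start:start + stack_size] += probs
        counts[start:start + stack_size] += 1.0

    # Average overlapping predictions so adjacent slices stay consistent,
    # then threshold to obtain a binary pancreas mask.
    fused = prob_sum / counts[:, None, None]
    return (fused > 0.5).astype(np.uint8)
```

With a stride of 1, most slices appear in several windows, so averaging the overlapping predictions smooths out the inter-slice and intra-stack inconsistencies that the sliding-window fusion step is meant to address.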


Datasets


NIH Pancreas-CT


Methods

U-Net · Fully Convolutional Network