Cylindrical Transform: 3D Semantic Segmentation of Kidneys With Limited Annotated Images

In this paper, we propose a novel technique for sampling sequential images via a cylindrical transform in a cylindrical coordinate system, applied to kidney semantic segmentation in abdominal computed tomography (CT). The images generated by the cylindrical transform augment a limited set of annotated images in three dimensions. This enables us to train contemporary classification deep convolutional neural networks (DCNNs), rather than fully convolutional networks (FCNs), for semantic segmentation. Typical semantic segmentation models process a sequential set of images (e.g., CT slices or video frames) by segmenting each image independently. In contrast, the proposed method captures not only the spatial dependency in the x-y plane but also the sequential spatial dependency along the z-axis. The results show that classification DCNNs trained on cylindrically transformed images can achieve higher segmentation performance than FCNs when only a limited number of annotated images is available.
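The core idea, resampling a CT volume along cylindrical coordinates so that each angle yields a new 2D slice, can be illustrated with a short sketch. The function below is a hypothetical reconstruction, not the authors' implementation: the function name, the choice of axis, the angle spacing, and the use of trilinear interpolation are all assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def cylindrical_slices(volume, center, num_angles=36, max_radius=None):
    """Resample a (z, y, x) volume into r-z half-plane slices taken at
    evenly spaced angles theta around a vertical axis through `center`
    (given as (cy, cx)). Each slice is a 2D image; sweeping theta yields
    many augmented views of the same annotated volume.

    Illustrative sketch only; parameter choices are assumptions."""
    z_dim, y_dim, x_dim = volume.shape
    cy, cx = center
    if max_radius is None:
        max_radius = min(y_dim, x_dim) // 2
    radii = np.arange(max_radius, dtype=float)
    zs = np.arange(z_dim, dtype=float)
    # rr varies along the radius axis, zz along the slice (z) axis.
    rr, zz = np.meshgrid(radii, zs)  # both shaped (z_dim, max_radius)
    out = []
    for theta in np.linspace(0.0, 2 * np.pi, num_angles, endpoint=False):
        # Cartesian sampling coordinates of the half-plane at this angle.
        ys = cy + rr * np.sin(theta)
        xs = cx + rr * np.cos(theta)
        # Trilinear interpolation at the (z, y, x) sample points.
        out.append(map_coordinates(volume, [zz, ys, xs], order=1))
    return np.stack(out)  # shape: (num_angles, z_dim, max_radius)
```

Each returned slice can then be fed to a 2D classification DCNN; because neighboring rows of a slice come from adjacent z positions, the sequential dependency along the z-axis is preserved inside each training image.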
