Large Scale Image Segmentation with Structured Loss based Deep Learning for Connectome Reconstruction

We present a method combining affinity prediction with region agglomeration that improves significantly upon the state of the art in neuron segmentation from electron microscopy (EM) data, in both accuracy and scalability. Our method consists of a 3D U-Net, trained to predict affinities between voxels, followed by iterative region agglomeration. We train using a structured loss based on MALIS, encouraging topologically correct segmentations obtained from affinity thresholding. We extend MALIS in two ways: first, we present a quasi-linear method to compute the loss gradient, improving over the original quadratic algorithm; second, we compute the gradient in two separate passes to avoid spurious gradient contributions in early training stages. Our predictions are accurate enough that a simple, learning-free percentile-based agglomeration outperforms more involved methods used earlier on inferior predictions. We present results on three diverse EM datasets, achieving relative improvements over previous results of 27%, 15%, and 250%. Our findings suggest that a single method can be applied to both nearly isotropic block-face EM data and anisotropic serial-section EM data. The runtime of our method scales linearly with the size of the volume and achieves a throughput of about 2.6 seconds per megavoxel, qualifying our method for the processing of very large datasets.
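The pipeline described above, affinity prediction followed by fragment extraction and percentile-based agglomeration, can be illustrated with a short sketch. The following Python code is not the authors' implementation: the affinity layout, the thresholds, the percentile value, and the function names are illustrative assumptions, and unlike the full method it scores each fragment pair only once instead of re-evaluating scores after every merge.

```python
# Minimal illustrative sketch of: affinities -> fragments -> percentile-based
# agglomeration. NOT the authors' implementation; thresholds, percentile, and
# the affinity convention below are assumptions made for this example.
import numpy as np
from scipy import ndimage


def fragments_from_affinities(affs, threshold=0.9):
    """Extract initial fragments by thresholding affinities.

    affs: float array of shape (3, D, H, W); affs[d, z, y, x] is assumed to be
    the affinity between voxel (z, y, x) and its predecessor along axis d.
    """
    # A voxel is foreground if any of its affinities exceeds the threshold.
    foreground = (affs > threshold).any(axis=0)
    fragments, _ = ndimage.label(foreground)
    return fragments


def percentile_agglomeration(fragments, affs, merge_threshold=0.5, q=75):
    """Greedily merge fragment pairs, scored by the q-th percentile of the
    affinities on their shared boundary (highest score first)."""
    # Collect boundary affinities between neighbouring fragments.
    boundary_affs = {}
    for d in range(3):
        lo = [slice(None)] * 3
        hi = [slice(None)] * 3
        lo[d], hi[d] = slice(None, -1), slice(1, None)
        u, v = fragments[tuple(lo)], fragments[tuple(hi)]
        a = affs[d][tuple(hi)]  # affinity stored at the higher-index voxel
        mask = (u != v) & (u > 0) & (v > 0)
        for ai, ui, vi in zip(a[mask], u[mask], v[mask]):
            key = (int(min(ui, vi)), int(max(ui, vi)))
            boundary_affs.setdefault(key, []).append(ai)

    # Score each adjacent fragment pair by a percentile of its boundary affinities.
    scores = {k: np.percentile(v, q) for k, v in boundary_affs.items()}

    # Union-find structure for merging fragments.
    parent = {int(l): int(l) for l in np.unique(fragments) if l > 0}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Merge pairs in order of decreasing score until the threshold is reached.
    # (The full method re-scores merged regions; omitted here for brevity.)
    for (u, v), score in sorted(scores.items(), key=lambda kv: -kv[1]):
        if score < merge_threshold:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[rv] = ru

    # Relabel the volume according to the merged components.
    segmentation = fragments.copy()
    for label in parent:
        segmentation[fragments == label] = find(label)
    return segmentation
```

Under these assumptions, a segmentation would be obtained by calling `fragments_from_affinities` on the network output and passing the result to `percentile_agglomeration`; the merge threshold controls the trade-off between split and merge errors.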

PDF | Abstract | IEEE Transactions 2018

Datasets


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Brain Image Segmentation | CREMI | U-NET MALA | VOI | 0.606 | # 1 |
| Brain Image Segmentation | CREMI | U-NET MALA | CREMI Score | 0.289 | # 1 |
| Brain Image Segmentation | FIB-25 Synaptic Sites | U-NET MALA | VOI | 2.151 | # 1 |
| Brain Image Segmentation | FIB-25 Whole Test | U-NET MALA | VOI | 1.071 | # 1 |
| Brain Image Segmentation | SegEM | U-NET MALA | IED | 4.839 | # 1 |

Methods