One weird trick for parallelizing convolutional neural networks

I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural networks.
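The abstract does not spell out the mechanism, but the approach this paper is known for combines data parallelism for the convolutional layers (each GPU processes a different slice of the minibatch with replicated weights) with model parallelism for the fully-connected layers (each GPU holds a slice of the weight matrix and sees the whole batch). The sketch below illustrates that split on CPU with PyTorch; the worker count, layer shapes, and variable names are illustrative assumptions, not details taken from the paper.

```python
# Minimal CPU sketch of hybrid data/model parallelism, under the assumptions
# stated above. Two "workers" stand in for GPUs; sizes are arbitrary.
import copy
import torch
import torch.nn as nn

NUM_WORKERS = 2
BATCH, CLASSES = 8, 10

# Data-parallel part: the convolutional "tower" is replicated per worker.
conv_master = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),      # -> 16 * 4 * 4 = 256 features
)
conv_replicas = [copy.deepcopy(conv_master) for _ in range(NUM_WORKERS)]

# Model-parallel part: the fully-connected layer is split across workers,
# each owning CLASSES // NUM_WORKERS output units over all 256 features.
fc_shards = [nn.Linear(256, CLASSES // NUM_WORKERS) for _ in range(NUM_WORKERS)]

images = torch.randn(BATCH, 3, 8, 8)
labels = torch.randint(0, CLASSES, (BATCH,))
shards = images.chunk(NUM_WORKERS)              # each worker gets a batch shard

# Forward pass: each conv replica runs on its own shard; the features are then
# gathered so every FC shard sees the whole batch (the switch from data
# parallelism to model parallelism).
features = torch.cat([conv_replicas[i](shards[i]) for i in range(NUM_WORKERS)])
logits = torch.cat([fc(features) for fc in fc_shards], dim=1)

loss = nn.functional.cross_entropy(logits, labels)
loss.backward()

# Data-parallel synchronization: each replica's gradient covers only its shard,
# so the full-batch conv gradient is their sum (the role an all-reduce plays on
# real GPUs). The FC shards need no sync: their weights are not replicated.
for params in zip(*[r.parameters() for r in conv_replicas]):
    full_grad = torch.stack([p.grad for p in params]).sum(dim=0)
    for p in params:
        p.grad = full_grad.clone()

print("loss:", loss.item())
```

The usual motivation given for this split is that convolutional layers hold few parameters but most of the computation, so replicating them and exchanging small gradients is cheap, while fully-connected layers hold most of the parameters, so sharding their weights avoids communicating them at all.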
