Search Results for author: John Tran

Found 5 papers, 3 papers with code

DSD: Dense-Sparse-Dense Training for Deep Neural Networks

2 code implementations • 15 Jul 2016 Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Enhao Gong, Shijian Tang, Erich Elsen, Peter Vajda, Manohar Paluri, John Tran, Bryan Catanzaro, William J. Dally

We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance.
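The dense-sparse-dense flow can be illustrated with a minimal numpy sketch: train dense, prune the smallest-magnitude weights and retrain under that mask, then restore the dense shape so the pruned weights re-enter training from zero. The function name and the fixed sparsity ratio below are hypothetical; this is an illustration of the idea, not the paper's implementation.

```python
import numpy as np

def dsd_cycle(w, sparsity=0.5):
    """Sketch of one dense-sparse-dense cycle (hypothetical helper).

    Sparse phase: zero out the smallest-magnitude weights; retraining
    would then update only the surviving connections under this mask.
    Dense phase: the pruned positions re-enter at zero and the full
    dense weight matrix is retrained.
    """
    k = int(w.size * sparsity)
    # Threshold such that roughly `sparsity` of the weights fall below it.
    threshold = np.sort(np.abs(w), axis=None)[k]
    mask = np.abs(w) >= threshold
    w_sparse = w * mask            # sparse phase (retrain with mask fixed)
    w_dense = w_sparse.copy()      # dense phase starts from the sparse solution
    return w_sparse, w_dense, mask
```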

Caption Generation +3

Learning both Weights and Connections for Efficient Neural Network

no code implementations • NeurIPS 2015 Song Han, Jeff Pool, John Tran, William Dally

On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9×, from 61 million to 6.7 million, without incurring accuracy loss.

Efficient Neural Network

Learning both Weights and Connections for Efficient Neural Networks

7 code implementations • NeurIPS 2015 Song Han, Jeff Pool, John Tran, William J. Dally

On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9×, from 61 million to 6.7 million, without incurring accuracy loss.
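The parameter reduction above comes from pruning low-magnitude connections. A minimal numpy sketch of magnitude-based pruning, keeping only the largest-magnitude fraction of weights, is shown below; the function name and `keep_ratio` parameter are hypothetical, and the real method also retrains the surviving connections afterward.

```python
import numpy as np

def magnitude_prune(weights, keep_ratio=0.1):
    """Keep the top `keep_ratio` fraction of weights by magnitude;
    zero out the rest (the pruned 'connections')."""
    flat = np.abs(weights).ravel()
    k = max(1, int(flat.size * keep_ratio))
    # k-th largest magnitude becomes the survival threshold.
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask
```

After pruning, the network is retrained with the mask held fixed so the remaining weights compensate for the removed connections.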

cuDNN: Efficient Primitives for Deep Learning

3 code implementations • 3 Oct 2014 Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, Evan Shelhamer

To address this problem, we have created a library similar in intent to BLAS, with optimized routines for deep learning workloads.
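One reason a BLAS-like library helps is that convolutions can be lowered to matrix multiplication, so tuned GEMM kernels do the heavy lifting. The numpy sketch below shows this classic im2col lowering for a single-channel valid convolution (cross-correlation); it illustrates the general lowering strategy, not cuDNN's actual internals, and the function name is hypothetical.

```python
import numpy as np

def conv2d_im2col(x, w):
    """Valid 2-D cross-correlation via im2col + one matrix multiply."""
    H, W = x.shape
    kh, kw = w.shape
    oh, ow = H - kh + 1, W - kw + 1
    # Gather every kh x kw input patch into one row ("im2col").
    cols = np.empty((oh * ow, kh * kw))
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    # A single GEMM replaces the nested convolution loops.
    return (cols @ w.ravel()).reshape(oh, ow)
```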

Parallel Support Vector Machines in Practice

no code implementations3 Apr 2014 Stephen Tyree, Jacob R. Gardner, Kilian Q. Weinberger, Kunal Agrawal, John Tran

In particular, we provide the first comparison of algorithms with explicit and implicit parallelization.
