Search Results for author: Maximilian Golub

Found 5 papers, 2 papers with code

Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training

no code implementations • 23 Sep 2020 • Dingqing Yang, Amin Ghasemazar, Xiaowei Ren, Maximilian Golub, Guy Lemieux, Mieszko Lis

The success of DNN pruning has led to the development of energy-efficient inference accelerators that support pruned models with sparse weight and activation tensors.

Full deep neural network training on a pruned weight budget

1 code implementation • 11 Jun 2018 • Maximilian Golub, Guy Lemieux, Mieszko Lis

We introduce a DNN training technique that learns only a fraction of the full parameter set without incurring an accuracy penalty.
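The abstract does not spell out how the weight budget is enforced, so the following is only a minimal sketch of one common way to train under a fixed parameter budget: after every gradient step, keep the `budget` largest-magnitude weights and zero the rest. The toy regression problem, the `budget` parameter, and the magnitude-based selection rule are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

# Hypothetical toy problem: fit y = X @ w_true while keeping only
# `budget` non-zero weights throughout training. This magnitude-based
# top-k rule is an assumption for illustration, not the paper's method.
rng = np.random.default_rng(0)
n_samples, n_features, budget = 256, 100, 10

w_true = np.zeros(n_features)
w_true[rng.choice(n_features, budget, replace=False)] = rng.normal(size=budget)
X = rng.normal(size=(n_samples, n_features))
y = X @ w_true

w = rng.normal(scale=0.01, size=n_features)
lr = 0.01
for step in range(500):
    grad = X.T @ (X @ w - y) / n_samples  # gradient of 0.5 * mean squared error
    w -= lr * grad
    # Enforce the weight budget: zero out all but the `budget`
    # largest-magnitude weights after every update.
    keep = np.argsort(np.abs(w))[-budget:]
    mask = np.zeros_like(w, dtype=bool)
    mask[keep] = True
    w[~mask] = 0.0

print("non-zero weights:", np.count_nonzero(w))
print("final MSE:", float(np.mean((X @ w - y) ** 2)))
```

Because only `budget` weights are ever non-zero, the trained model stores a fraction of the full parameter set, which is the setting the paper's title describes; the paper's own selection criterion may differ from this magnitude heuristic.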
