no code implementations • 23 Sep 2020 • Dingqing Yang, Amin Ghasemazar, Xiaowei Ren, Maximilian Golub, Guy Lemieux, Mieszko Lis
The success of DNN pruning has led to the development of energy-efficient inference accelerators that support pruned models with sparse weight and activation tensors.
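Accelerators that exploit pruning typically consume weights in a compressed sparse format rather than dense tensors. As a generic illustration of what "sparse weight tensors" means at the storage level (not a description of this paper's accelerator), here is a minimal sketch of converting a pruned dense matrix to CSR (compressed sparse row) form:

```python
import numpy as np

def dense_to_csr(w):
    """Convert a dense 2-D weight matrix to CSR: (values, column indices, row pointers)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in w:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        # row_ptr[i+1] marks where row i's nonzeros end in `values`
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

# Example: a pruned 3x4 weight matrix with 75% of entries zeroed out.
w = np.array([[0., 2., 0., 0.],
              [0., 0., 0., 1.],
              [3., 0., 0., 0.]])
vals, cols, ptrs = dense_to_csr(w)
```

Only the nonzero values and their coordinates are stored, which is what lets sparse accelerators skip multiplications by zero.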
1 code implementation • 11 Jun 2018 • Maximilian Golub, Guy Lemieux, Mieszko Lis
We introduce a DNN training technique that learns only a fraction of the full parameter set without incurring an accuracy penalty.
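The abstract does not spell out how the trained parameter subset is chosen, so the following is only a generic sketch of the broader idea of training a restricted parameter set: a hypothetical top-k magnitude mask that zeroes all but the k largest-magnitude weights. It is not the authors' algorithm.

```python
import numpy as np

def topk_mask(w, k):
    """Boolean mask keeping only the k largest-magnitude entries of w."""
    flat = np.abs(w).ravel()
    thresh = np.sort(flat)[-k]  # k-th largest absolute value
    return np.abs(w) >= thresh

# Toy example: restrict a 2x3 weight matrix to its 2 largest-magnitude weights.
# In a training loop, the same mask would also be applied to the gradients,
# so only the selected fraction of parameters is ever updated.
w = np.array([[0.1, -0.9, 0.05],
              [0.6,  0.02, -0.3]])
mask = topk_mask(w, 2)
w_sparse = w * mask
```

Applying the mask to both weights and gradients is one common way to train only a fraction of the full parameter set; the paper's specific selection criterion may differ.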