5 Dec 2019 • Justin Cosentino, Federico Zaiter, Dan Pei, Jun Zhu
Recent work on deep neural network pruning has shown that there exist sparse subnetworks which, despite using far fewer parameters, achieve accuracy, training time, and loss equal to or better than those of their dense counterparts.
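To make the idea of a sparse subnetwork concrete, the sketch below shows global magnitude pruning, the common baseline in this line of work: the smallest-magnitude weights are zeroed out, leaving a binary mask that defines the subnetwork. This is an illustrative sketch only, not the specific method of the paper; the function name and sparsity level are assumptions.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Illustrative sketch of magnitude-based pruning; `sparsity` is
    the fraction of weights removed (e.g. 0.75 keeps 25%).
    """
    flat = np.abs(weights).flatten()
    k = int(sparsity * flat.size)  # number of weights to prune
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned, mask = magnitude_prune(w, 0.75)
# mask marks the surviving "subnetwork"; pruned weights are exactly zero
```

In iterative pruning pipelines this step is applied repeatedly, interleaved with retraining, so the surviving subnetwork can recover accuracy after each round of weight removal.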