Feather: An Elegant Solution to Effective DNN Sparsification

3 Oct 2023 · Athanasios Glentis Georgoulakis, George Retsinas, Petros Maragos

Neural network pruning is an increasingly popular way of producing compact and efficient models suitable for resource-limited environments while preserving high performance. Although pruning can be performed through a multi-cycle training and fine-tuning process, the recent trend is to fold the sparsification process into the standard course of training. To this end, we introduce Feather, an efficient sparse training module that uses the powerful Straight-Through Estimator at its core, coupled with a new thresholding operator and a gradient scaling technique, enabling robust, out-of-the-box sparsification performance. Feather's effectiveness and adaptability are demonstrated using various architectures on the CIFAR datasets, while on ImageNet it achieves state-of-the-art Top-1 validation accuracy with the ResNet-50 architecture, surpassing existing methods, including more complex and computationally heavier ones, by a considerable margin. Code is publicly available at https://github.com/athglentis/feather .
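To make the core idea concrete, the sketch below illustrates generic Straight-Through-Estimator sparse training of the kind the abstract describes: the forward pass uses magnitude-thresholded weights, while gradients still flow back to all dense weights, with pruned weights receiving an attenuated gradient. This is a minimal NumPy illustration, not Feather's actual operator; the hard-threshold rule, the `scale` parameter, and the function names are assumptions for demonstration only.

```python
import numpy as np

def hard_threshold(w, sparsity):
    # Zero out the `sparsity` fraction of weights with smallest magnitude
    # (a generic magnitude criterion, not necessarily Feather's operator).
    k = int(round(sparsity * w.size))
    if k == 0:
        return w.copy(), np.ones_like(w, dtype=bool)
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > thresh
    return w * mask, mask

def ste_backward(grad_out, mask, scale=0.5):
    # Straight-Through Estimator: the gradient computed w.r.t. the sparse
    # weights is passed back to ALL dense weights, so pruned weights can
    # recover. Here pruned entries are attenuated by `scale` (a hypothetical
    # stand-in for a gradient-scaling rule).
    return np.where(mask, grad_out, scale * grad_out)

# Usage: a dense weight vector, its sparse forward copy, and the STE gradient.
w = np.array([0.1, -2.0, 0.5, 0.05])
w_sparse, mask = hard_threshold(w, sparsity=0.5)   # half the weights pruned
grad_dense = ste_backward(np.ones_like(w), mask)   # gradient for the update
```

The key property shown here is that pruning decisions are re-evaluated every step from the continuously updated dense weights, which is what lets sparsification happen during the standard course of training rather than in separate prune/fine-tune cycles.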

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Network Pruning | ImageNet - ResNet-50 - 90% sparsity | Feather | Top-1 Accuracy | 76.93 | #1 |
