
Neural Network Compression

26 papers with code · Methodology
Subtask of Model Compression


Greatest papers with code

Improving Neural Network Quantization without Retraining using Outlier Channel Splitting

28 Jan 2019 NervanaSystems/distiller

The majority of existing literature focuses on training quantized DNNs, while this work examines the less-studied topic of quantizing a floating-point model without (re)training.

LANGUAGE MODELLING NEURAL NETWORK COMPRESSION QUANTIZATION
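A minimal NumPy sketch of the channel-splitting idea behind this entry: duplicate the input channel that holds the outlier weight and halve both copies, so the layer output is unchanged while the weight range seen by the quantizer shrinks. The function name and the single-channel split are illustrative assumptions, not the authors' implementation (see NervanaSystems/distiller for that).

```python
# Outlier channel splitting sketch: halve and duplicate the channel with the
# largest-magnitude weight so that W @ x is preserved exactly, but max|W| drops
# before post-training quantization. Assumed helper, not the distiller API.
import numpy as np

def split_outlier_channel(W, x):
    """W: (out, in) weights, x: (in,) input. Returns an equivalent pair with
    one extra input channel and a smaller weight range."""
    # Input channel (column) containing the single largest-magnitude weight.
    j = np.unravel_index(np.argmax(np.abs(W)), W.shape)[1]

    # Halve that column and append the halved copy as a new channel.
    W_split = W.copy()
    W_split[:, j] *= 0.5
    W_split = np.concatenate([W_split, W_split[:, j:j + 1]], axis=1)

    # Duplicate the corresponding input entry so the layer output is unchanged.
    x_split = np.concatenate([x, x[j:j + 1]])
    return W_split, x_split

rng = np.random.default_rng(0)
W, x = rng.normal(size=(4, 8)), rng.normal(size=8)
W2, x2 = split_outlier_channel(W, x)
assert np.allclose(W @ x, W2 @ x2)          # functionally equivalent
print(np.abs(W).max(), np.abs(W2).max())    # weight range shrinks
```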

Forward and Backward Information Retention for Accurate Binary Neural Networks

CVPR 2020 JDAI-CV/dabnn

Our empirical study indicates that the quantization brings information loss in both forward and backward propagation, which is the bottleneck of training accurate binary neural networks.

NEURAL NETWORK COMPRESSION QUANTIZATION
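For context on the forward/backward loss this paper targets, here is a minimal PyTorch sketch of the standard binarization baseline: sign() in the forward pass and a clipped straight-through estimator in the backward pass. This is the common baseline whose information loss the paper analyzes, not IR-Net's own retention components.

```python
# Standard 1-bit weight binarization with a straight-through estimator (STE).
# Forward: sign(w) discards magnitude information; backward: gradients pass
# only where |w| <= 1, which approximates the non-differentiable sign.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)                      # forward: 1-bit weights

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Clipped identity: zero gradient outside [-1, 1].
        return grad_out * (w.abs() <= 1).to(grad_out.dtype)

w = torch.randn(16, 8, requires_grad=True)
x = torch.randn(4, 8)
y = x @ BinarizeSTE.apply(w).t()
y.sum().backward()
print(w.grad.abs().mean())
```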

Data-Free Learning of Student Networks

ICCV 2019 huawei-noah/DAFL

Learning portable neural networks is essential for computer vision, so that heavy pre-trained deep models can be deployed on edge devices such as mobile phones and micro sensors.

NEURAL NETWORK COMPRESSION


Learning Sparse Networks Using Targeted Dropout

31 May 2019 for-ai/TD

Before computing the gradients for each weight update, targeted dropout stochastically selects a set of units or weights to be dropped using a simple self-reinforcing sparsity criterion and then computes the gradients for the remaining weights.

NETWORK PRUNING NEURAL NETWORK COMPRESSION
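A minimal PyTorch sketch of the targeted-dropout step described in this entry: mark the gamma fraction of smallest-magnitude weights as drop candidates, drop each candidate with probability alpha, and compute the forward/backward pass with the surviving weights only. The gamma/alpha values and function name are illustrative; see for-ai/TD for the authors' implementation.

```python
# Targeted dropout on a single weight matrix for one update step.
import torch

def targeted_dropout(weight, gamma=0.5, alpha=0.5, training=True):
    """Return a masked copy of `weight` for this weight update."""
    if not training:
        return weight
    k = int(gamma * weight.numel())               # number of drop candidates
    if k == 0:
        return weight
    # Candidates: the k smallest-magnitude weights. The criterion is
    # self-reinforcing: repeatedly dropped weights stay small and keep
    # being targeted, driving the network toward sparsity.
    thresh = weight.abs().flatten().kthvalue(k).values
    candidates = weight.abs() <= thresh
    # Drop each candidate independently with probability alpha.
    dropped = candidates & (torch.rand_like(weight) < alpha)
    return weight * (~dropped).to(weight.dtype)

w = torch.randn(32, 64, requires_grad=True)
x = torch.randn(8, 64)
y = x @ targeted_dropout(w).t()   # gradients reach only the surviving weights
y.sum().backward()
```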

Soft Weight-Sharing for Neural Network Compression

13 Feb 2017 KarenUllrich/Tutorial_BayesianCompressionForDL

The success of deep learning in numerous application domains has created the desire to run and train deep networks on mobile devices.

NEURAL NETWORK COMPRESSION QUANTIZATION

ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression

ICCV 2017 Roll920/ThiNet

Similar experiments with ResNet-50 reveal that even for a compact network, ThiNet can still reduce more than half of the parameters and FLOPs, at the cost of roughly a 1% drop in top-5 accuracy.

NEURAL NETWORK COMPRESSION

A Closer Look at Structured Pruning for Neural Network Compression

10 Oct 2018BayesWatch/pytorch-prunes

Structured pruning is a popular method for compressing a neural network: given a large trained network, one alternates between removing channel connections and fine-tuning, reducing the overall width of the network.

NETWORK PRUNING NEURAL NETWORK COMPRESSION
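A minimal PyTorch sketch of the prune/fine-tune alternation described in this entry, using the L1 norm of each output channel as an illustrative saliency criterion and masking channels rather than physically shrinking the layer. The criterion, the 10%-per-round schedule, and the `train_one_epoch` helper mentioned in the comment are assumptions, not the paper's protocol.

```python
# Alternate between removing the lowest-saliency output channels of a conv
# layer and (in a real run) fine-tuning the remaining weights.
import torch
import torch.nn as nn

def prune_channels(conv, mask, frac=0.1):
    """Zero out the `frac` lowest-L1-norm still-active output channels."""
    with torch.no_grad():
        saliency = conv.weight.abs().sum(dim=(1, 2, 3))   # per output channel
        saliency[~mask] = float("inf")                    # skip already-pruned
        n_prune = max(1, int(frac * mask.sum().item()))
        drop = saliency.topk(n_prune, largest=False).indices
        mask[drop] = False
        conv.weight[~mask] = 0.0                          # remove the channels
    return mask

conv = nn.Conv2d(3, 64, 3, padding=1)
mask = torch.ones(64, dtype=torch.bool)
for round_ in range(5):                                   # prune ...
    mask = prune_channels(conv, mask)
    # ... then fine-tune: train_one_epoch(model, data) would go here
    # (assumed helper, not defined in this sketch).
    print(f"round {round_}: {int(mask.sum())}/{len(mask)} channels remain")
```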

ZeroQ: A Novel Zero Shot Quantization Framework

CVPR 2020 amirgholami/ZeroQ

Importantly, ZeroQ has a very low computational overhead, and it can finish the entire quantization process in less than 30s (0.5% of one epoch of ResNet50 training time on ImageNet).

NEURAL NETWORK COMPRESSION QUANTIZATION

Focused Quantization for Sparse CNNs

NeurIPS 2019 deep-fry/mayo

On ResNet-50, we achieved an 18.08x compression ratio with only a 0.24% loss in top-5 accuracy, outperforming existing compression methods.

NEURAL NETWORK COMPRESSION QUANTIZATION