Network Pruning

213 papers with code • 5 benchmarks • 5 datasets

Network Pruning is a popular approach for reducing a heavy network to a lightweight form by removing redundancy. In this approach, a complex over-parameterized network is first trained, then pruned based on some criteria, and finally fine-tuned to achieve comparable performance with fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
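
A minimal sketch of this train–prune–fine-tune pipeline, using PyTorch's built-in magnitude-pruning utilities (the model, data loader, and hyperparameters are placeholders, not part of any specific paper):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def train(model, loader, epochs, lr=1e-3):
    # Standard supervised training loop (placeholder).
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def magnitude_prune(model, amount=0.5):
    # Remove the smallest-magnitude weights in every conv/linear layer.
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)

def finalize(model):
    # Fold the pruning masks into the weights so only sparse tensors remain.
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.remove(module, "weight")

# 1) train the over-parameterized network, 2) prune, 3) fine-tune.
# model and train_loader are assumed to exist:
# train(model, train_loader, epochs=90)
# magnitude_prune(model, amount=0.5)
# train(model, train_loader, epochs=10, lr=1e-4)   # fine-tune
# finalize(model)
```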

Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch

xidongwu/autotrainonce 21 Mar 2024

Current techniques for deep neural network (DNN) pruning often involve intricate multi-step processes that require domain-specific expertise, making their widespread adoption challenging.


Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency

saintslab/pepr 14 Mar 2024

We present experiments on two benchmark datasets showing that adversarial fine-tuning of compressed models can achieve robustness performance comparable to adversarially trained models, while also improving computational efficiency.
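
A rough sketch of what adversarial fine-tuning of an already-compressed model can look like, here using a single-step FGSM attack as an illustrative choice (the paper's exact attack and training schedule may differ):

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=8 / 255):
    # Single-step FGSM perturbation (illustrative attack choice).
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def adversarial_finetune(pruned_model, loader, epochs=5, lr=1e-4):
    # Fine-tune an already-compressed model on adversarial examples.
    opt = torch.optim.SGD(pruned_model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = fgsm(pruned_model, x, y)
            opt.zero_grad()
            nn.functional.cross_entropy(pruned_model(x_adv), y).backward()
            opt.step()
```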


FALCON: FLOP-Aware Combinatorial Optimization for Neural Network Pruning

mazumder-lab/falcon 11 Mar 2024

In this paper, we propose FALCON, a novel combinatorial-optimization-based framework for network pruning that jointly takes into account model accuracy (fidelity), FLOPs, and sparsity constraints.
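
An illustrative greedy baseline for the kind of FLOP-constrained selection problem FALCON addresses; the actual method solves it with combinatorial optimization rather than the importance-per-FLOP heuristic shown here:

```python
import numpy as np

def flop_aware_selection(importance, flops, flop_budget):
    """Keep the units with the best importance-per-FLOP ratio until the
    FLOP budget is exhausted (a simple stand-in, not FALCON's solver)."""
    order = np.argsort(-importance / np.maximum(flops, 1e-12))
    keep, spent = [], 0.0
    for i in order:
        if spent + flops[i] <= flop_budget:
            keep.append(int(i))
            spent += flops[i]
    return sorted(keep)

# Example: 6 channels with hypothetical importances and per-channel FLOP costs.
imp = np.array([0.9, 0.1, 0.5, 0.7, 0.05, 0.3])
cost = np.array([2.0, 1.0, 1.0, 2.0, 0.5, 1.5])
print(flop_aware_selection(imp, cost, flop_budget=4.0))
```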


What to Do When Your Discrete Optimization Is the Size of a Neural Network?

hsilva664/discrete_nn 15 Feb 2024

Oftentimes, machine learning applications using neural networks involve solving discrete optimization problems, such as in pruning, parameter-isolation-based continual learning and training of binary networks.
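
As a concrete example of such a network-sized discrete problem, a learnable binary pruning mask trained with a straight-through estimator (an illustrative formulation, not the paper's own method):

```python
import torch
import torch.nn as nn

class STEMask(nn.Module):
    """Learnable binary mask optimized with a straight-through estimator."""
    def __init__(self, shape):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(shape))

    def forward(self, weight):
        hard = (self.logits > 0).float()        # discrete keep/drop decision
        soft = torch.sigmoid(self.logits)       # differentiable surrogate
        mask = hard + soft - soft.detach()      # straight-through gradient
        return weight * mask
```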


Less is KEN: a Universal and Simple Non-Parametric Pruning Algorithm for Large Language Models

itsmattei/ken 5 Feb 2024

This approach maintains model performance while allowing storage of only the optimized subnetwork, leading to significant memory savings.
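
A hypothetical sketch of storing and restoring only a selected subnetwork; the magnitude-based selection below is a stand-in for KEN's own non-parametric criterion:

```python
import torch

def extract_subnetwork(state_dict, keep_ratio=0.2):
    """Keep a fraction of each tensor's entries and store them sparsely."""
    sub = {}
    for name, w in state_dict.items():
        flat = w.flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        idx = flat.abs().topk(k).indices
        sub[name] = {"shape": tuple(w.shape), "idx": idx, "val": flat[idx]}
    return sub

def restore(sub, base_state_dict):
    # Overlay the stored subnetwork onto a base (e.g. pretrained) state dict.
    out = {}
    for name, w in base_state_dict.items():
        flat = w.clone().flatten()
        flat[sub[name]["idx"]] = sub[name]["val"]
        out[name] = flat.reshape(sub[name]["shape"])
    return out
```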


Fluctuation-based Adaptive Structured Pruning for Large Language Models

casia-iva-lab/flap 19 Dec 2023

Being retraining-free is important for LLM pruning methods.
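
A rough sketch of a fluctuation-style channel score (per-channel activation variance over a calibration set); FLAP's actual metric and its structured-removal procedure may differ:

```python
import torch

@torch.no_grad()
def fluctuation_scores(activations):
    """activations: (num_tokens, hidden_dim) collected from a calibration set.
    Low-variance channels are candidates for removal."""
    return activations.var(dim=0)

def channels_to_keep(activations, sparsity=0.5):
    scores = fluctuation_scores(activations)
    k = int((1 - sparsity) * scores.numel())
    return scores.topk(k).indices.sort().values
```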


Towards Higher Ranks via Adversarial Weight Pruning

huawei-noah/Efficient-Computing NeurIPS 2023

To this end, we propose a Rank-based PruninG (RPG) method to maintain the ranks of sparse weights in an adversarial manner.
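
For context, a simple proxy for the quantity RPG tries to preserve: the stable rank of a masked weight matrix (an illustrative metric, not the paper's training procedure):

```python
import torch

def stable_rank(weight, mask=None):
    """Stable rank ||W||_F^2 / ||W||_2^2 of a (possibly masked) weight matrix,
    a cheap proxy for how close the sparse matrix is to being low-rank."""
    w = weight * mask if mask is not None else weight
    sv = torch.linalg.svdvals(w)          # singular values, descending
    return (sv.pow(2).sum() / sv[0].pow(2)).item()
```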


LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS

VITA-Group/LightGaussian 28 Nov 2023

Recent advancements in real-time neural rendering using point-based techniques have paved the way for the widespread adoption of 3D representations.


Filter-Pruning of Lightweight Face Detectors Using a Geometric Median Criterion

idt-iti/lightweight-face-detector-pruning 28 Nov 2023

Face detectors are becoming a crucial component of many applications, including surveillance, that often have to run on edge devices with limited processing power and memory.
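
A compact sketch of a geometric-median-style filter criterion, in which filters whose total distance to the other filters in a layer is smallest are considered redundant (simplified relative to the paper's full pipeline):

```python
import torch

def geometric_median_scores(conv_weight):
    """conv_weight: (out_channels, in_channels, kH, kW).
    Lower score => filter is closer to the others, hence pruned first."""
    flat = conv_weight.flatten(1)        # one row per filter
    dists = torch.cdist(flat, flat)      # pairwise L2 distances
    return dists.sum(dim=1)

def filters_to_prune(conv_weight, amount=0.3):
    scores = geometric_median_scores(conv_weight)
    k = int(amount * scores.numel())
    return scores.topk(k, largest=False).indices
```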


Neural Network Pruning by Gradient Descent

3riccc/neural_pruning 21 Nov 2023

The rapid increase in the parameters of deep learning models has led to significant costs, challenging computational efficiency and model interpretability.
