Low-rank compression
8 papers with code • 0 benchmarks • 0 datasets
Libraries
Use these libraries to find Low-rank compression models and implementations

Most implemented papers
Domain-adaptive deep network compression
We show that domain transfer leads to large shifts in network activations and that it is desirable to take this into account when compressing.
Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition
We present a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio, while simultaneously achieving the desired overall compression.
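Choosing a per-layer compression ratio amounts to choosing a rank for each layer's weight matrix. As a minimal illustration (a generic heuristic, not the method of this paper), one can pick the smallest rank whose singular values retain a target fraction of each layer's squared spectral energy:

```python
import numpy as np

def rank_for_energy(W, keep=0.95):
    # Illustrative heuristic: smallest rank whose singular values
    # capture `keep` of the squared singular-value energy of W.
    s = np.linalg.svd(W, compute_uv=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(energy, keep) + 1)

rng = np.random.default_rng(1)
layers = [rng.standard_normal((64, 128)), rng.standard_normal((128, 256))]
ranks = [rank_for_energy(W) for W in layers]  # one rank per layer
```

Layers whose spectra decay quickly receive small ranks (and thus high compression), while layers with flat spectra keep most of their rank.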
Decomposable-Net: Scalable Low-Rank Compression for Neural Networks
Compressing DNNs is important for real-world applications operating on resource-constrained devices.
A flexible, extensible software framework for model compression based on the LC algorithm
We propose a software framework based on the ideas of the Learning-Compression (LC) algorithm, which allows a user to compress a neural network or other machine learning model using different compression schemes with minimal effort.
Low-Rank Compression of Neural Nets: Learning the Rank of Each Layer
Neural net compression can be achieved by approximating each layer's weight matrix by a low-rank matrix.
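The basic operation behind this family of methods can be sketched with a truncated SVD: replace an $m \times n$ weight matrix $W$ with factors $A$ ($m \times r$) and $B$ ($r \times n$), so the layer stores $(m + n)r$ parameters instead of $mn$. A minimal NumPy sketch (illustrative, not any specific paper's implementation):

```python
import numpy as np

def low_rank_approx(W, rank):
    # Truncated SVD: keep the top-`rank` singular values/vectors,
    # yielding the best rank-`rank` approximation in Frobenius norm.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]  # shape (m, rank)
    B = Vt[:rank, :]            # shape (rank, n)
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))
A, B = low_rank_approx(W, rank=32)
print(W.size, A.size + B.size)  # 131072 24576
```

At inference time the dense layer `x @ W` becomes two cheaper multiplies, `(x @ A) @ B`, cutting both storage and FLOPs when `rank` is well below `min(m, n)`.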
Model compression as constrained optimization, with application to neural nets. Part V: combining compressions
VGG nets, however, can be compressed more effectively by combining low-rank approximation with a few floating-point weights.
Compact Model Training by Low-Rank Projection with Energy Transfer
In this paper, we devise a new training method, low-rank projection with energy transfer (LRPET), that trains low-rank compressed networks from scratch and achieves competitive performance.
TT-NF: Tensor Train Neural Fields
Learning neural fields has been an active topic in deep learning research, focusing, among other issues, on finding more compact and easy-to-fit representations.