Binarization
150 papers with code • 16 benchmarks • 17 datasets
Most implemented papers
SiMaN: Sign-to-Magnitude Network Binarization
In this paper, we show that our weight binarization admits an analytical solution: high-magnitude weights are encoded as +1s and the rest as 0s.
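The sign-to-magnitude idea above can be sketched as keeping the largest-magnitude weights as 1 and zeroing the rest. This is a minimal illustration, not the paper's method: the `keep_ratio` knob is a hypothetical stand-in for the analytical split SiMaN derives.

```python
import numpy as np

def sign_to_magnitude_binarize(weights, keep_ratio=0.5):
    """Encode the highest-magnitude weights as 1 and the rest as 0.

    `keep_ratio` (fraction of weights mapped to 1) is an illustrative
    assumption; the paper determines the split analytically.
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    # Threshold at the k-th largest magnitude.
    threshold = np.partition(flat, flat.size - k)[flat.size - k]
    return (np.abs(weights) >= threshold).astype(np.float32)

w = np.array([0.9, -0.1, 0.05, -0.8])
print(sign_to_magnitude_binarize(w, keep_ratio=0.5))  # [1. 0. 0. 1.]
```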
A comprehensive review of Binary Neural Network
This article provides a complete overview of recent developments in BNNs.
BiT: Robustly Binarized Multi-distilled Transformer
Modern pre-trained transformers have rapidly advanced the state-of-the-art in machine learning, but have also grown in parameters and computational complexity, making them increasingly difficult to deploy in resource-constrained environments.
Basic Binary Convolution Unit for Binarized Image Restoration Network
In this study, we reconsider components in binary convolution, such as residual connection, BatchNorm, activation function, and structure, for IR tasks.
Binarized Spectral Compressive Imaging
Finally, our BiSRNet is derived by using the proposed techniques to binarize the base model.
PB-LLM: Partially Binarized Large Language Models
This paper explores network binarization, a radical form of quantization, compressing model weights to a single bit, specifically for Large Language Models (LLMs) compression.
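Partial binarization can be sketched as binarizing the low-magnitude fraction of a weight tensor while leaving the salient weights in full precision. The fraction, salience criterion (magnitude), and per-tensor scale below are illustrative assumptions, not PB-LLM's exact procedure.

```python
import numpy as np

def partially_binarize(weights, binarize_frac=0.75):
    """Binarize the lowest-magnitude fraction of weights to {-alpha, +alpha};
    keep the remaining (salient) weights in full precision.

    A minimal sketch of partial binarization; the fraction and the
    per-tensor scale alpha are illustrative choices.
    """
    flat_mag = np.abs(weights).ravel()
    k = int(binarize_frac * flat_mag.size)
    if k == 0:
        return weights.copy()
    # Magnitude threshold separating binarized from salient weights.
    threshold = np.partition(flat_mag, k - 1)[k - 1]
    mask = np.abs(weights) <= threshold
    alpha = np.abs(weights[mask]).mean()  # scale minimizing L2 error on the binarized set
    out = weights.copy()
    out[mask] = alpha * np.sign(weights[mask])
    return out
```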
NAF-DPM: A Nonlinear Activation-Free Diffusion Probabilistic Model for Document Enhancement
Real-world documents may suffer various forms of degradation, often resulting in lower accuracy in optical character recognition (OCR) systems.
Recurrent Neural Networks With Limited Numerical Precision
We present results from the use of different stochastic and deterministic reduced precision training methods applied to three major RNN types which are then tested on several datasets.
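One of the stochastic reduced-precision methods the excerpt refers to is stochastic rounding, which rounds up with probability equal to the fractional remainder so the quantization is unbiased in expectation. The fixed-point grid spacing below is an illustrative choice, not tied to the paper's exact setup.

```python
import numpy as np

def stochastic_round(x, num_bits=8, rng=None):
    """Round to a fixed-point grid stochastically: round up with
    probability equal to the fractional part, so E[round(x)] == x.
    The grid spacing 2**-(num_bits - 1) is an illustrative assumption.
    """
    x = np.asarray(x, dtype=float)
    rng = rng if rng is not None else np.random.default_rng()
    scale = 2.0 ** (num_bits - 1)
    scaled = x * scale
    floor = np.floor(scaled)
    frac = scaled - floor
    round_up = rng.random(x.shape) < frac  # True with probability frac
    return (floor + round_up) / scale
```

Averaged over many draws, the rounded values recover the original value, which is what keeps low-precision gradient accumulation unbiased.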
Loss-aware Binarization of Deep Networks
Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time.
Learning Convolutional Networks for Content-weighted Image Compression
Therefore, the encoder, decoder, binarizer and importance map can be jointly optimized in an end-to-end manner by using a subset of the ImageNet database.
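Training end to end through a hard binarizer requires a surrogate gradient, since the binarization step itself has zero gradient almost everywhere. A common way to do this is the straight-through estimator, sketched below; whether this paper uses exactly this form is an assumption.

```python
import numpy as np

def binarizer_forward(x):
    """Hard binarization to {0, 1} at threshold 0.5 (non-differentiable)."""
    return (x >= 0.5).astype(float)

def binarizer_backward(grad_out):
    """Straight-through estimator: treat the binarizer as the identity
    during backpropagation, passing the upstream gradient through
    unchanged so the encoder before it can still be trained.
    """
    return grad_out
```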