Search Results for author: Moran Shkolnik

Found 4 papers, 3 papers with code

Neural gradients are near-lognormal: improved quantized and sparse training

no code implementations · ICLR 2021 · Brian Chmiel, Liad Ben-Uri, Moran Shkolnik, Elad Hoffer, Ron Banner, Daniel Soudry

While training can mostly be accelerated by reducing the time needed to propagate neural gradients back throughout the model, most previous works focus on the quantization/pruning of weights and activations.

Neural Network Compression · Quantization
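
The abstract's claim drives the paper's contribution: if the log-magnitudes of backpropagated gradients are roughly normal, then quantization and sparsity thresholds can be set analytically instead of empirically. A minimal sketch of checking that property, assuming PyTorch (not the authors' code; the model, loss, and statistics are illustrative only):

```python
import torch
import torch.nn as nn

# Illustrative model and loss, not from the paper.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10))
x = torch.randn(512, 256)
model(x).pow(2).mean().backward()

# Pool all parameter gradients and inspect the distribution of log-magnitudes.
grads = torch.cat([p.grad.flatten() for p in model.parameters()])
log_mag = grads[grads != 0].abs().log()
mu, sigma = log_mag.mean().item(), log_mag.std().item()
skew = (((log_mag - mu) ** 3).mean() / sigma ** 3).item()
print(f"log|g|: mean={mu:.3f} std={sigma:.3f} skew={skew:.3f}")
# Skewness near zero means the log-magnitudes look roughly normal,
# i.e. the gradients themselves are near-lognormal.
```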

Robust Quantization: One Model to Rule Them All

1 code implementation · NeurIPS 2020 · Moran Shkolnik, Brian Chmiel, Ron Banner, Gil Shomron, Yury Nahshan, Alex Bronstein, Uri Weiser

Neural network quantization methods often involve simulating the quantization process during training, making the trained model highly dependent on the target bit-width and the precise way quantization is performed.

Quantization
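
For context, the "simulating the quantization process during training" the abstract refers to is standard quantization-aware training: a fake-quantizer discretizes values in the forward pass while a straight-through estimator keeps gradients flowing. A minimal sketch, assuming PyTorch (this is the baseline the paper hardens, not the paper's robust method):

```python
import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Uniform fake-quantization with a straight-through estimator."""
    qmax = 2 ** num_bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / qmax
    zero_point = x.min()
    q = torch.round((x - zero_point) / scale)   # discretize to an integer grid
    deq = q * scale + zero_point                # map back to float
    # Forward pass sees the quantized value; backward treats the rounding
    # as the identity, so the model trains "through" the quantizer.
    return x + (deq - x).detach()

w = torch.randn(4, 4, requires_grad=True)
fake_quantize(w, num_bits=4).sum().backward()
print(w.grad)  # all ones: the gradient passes straight through the rounding
```

Because the forward pass bakes in one specific bit-width and quantizer, the trained weights end up tuned to that exact setup, which is the fragility the paper's "one model to rule them all" approach targets.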

Thanks for Nothing: Predicting Zero-Valued Activations with Lightweight Convolutional Neural Networks

1 code implementation · ECCV 2020 · Gil Shomron, Ron Banner, Moran Shkolnik, Uri Weiser

Convolutional neural networks (CNNs) achieve state-of-the-art results on various tasks at the price of high computational demands.
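
The paper's premise is that many ReLU outputs are zero, so a lightweight predictor can flag them in advance and let the expensive computation be skipped. A minimal sketch of the idea, assuming PyTorch (a crude low-precision preview serves as an illustrative stand-in, not the paper's actual predictor):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 16, 32, 32)   # illustrative input feature map
w = torch.randn(32, 16, 3, 3)    # illustrative conv weights

full = F.relu(F.conv2d(x, w, padding=1))  # the expensive "real" computation

def crude_quant(t, levels=7):
    # Stand-in for a cheap low-precision preview pass.
    s = t.abs().max() / levels
    return torch.round(t / s) * s

# Predict each output's sign from the cheap pass; negative outputs become
# zeros after ReLU, so full-precision work there could be skipped.
pred_nonzero = F.conv2d(crude_quant(x), crude_quant(w), padding=1) > 0
actual_nonzero = full > 0
agree = (pred_nonzero == actual_nonzero).float().mean().item()
zeros = (~actual_nonzero).float().mean().item()
print(f"sign-prediction agreement: {agree:.1%}")
print(f"zero-valued activations:   {zeros:.1%}")
```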
