Training with Quantization Noise for Extreme Model Compression

15 Apr 2020 · Angela Fan, Pierre Stock, Benjamin Graham, Edouard Grave, Remi Gribonval, Herve Jegou, Armand Joulin

We tackle the problem of producing compact models, maximizing their accuracy for a given model size. A standard solution is to train networks with Quantization Aware Training, where the weights are quantized during training and the gradients approximated with the Straight-Through Estimator...
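For readers unfamiliar with the baseline being referenced, here is a minimal sketch of the Straight-Through Estimator as used in Quantization Aware Training, written in PyTorch. The class name `STEQuantize` and the symmetric uniform 8-bit quantizer are illustrative assumptions for this sketch, not the paper's implementation.

```python
import torch

class STEQuantize(torch.autograd.Function):
    """Fake-quantize a weight tensor in the forward pass; pass
    gradients straight through in the backward pass (STE)."""

    @staticmethod
    def forward(ctx, w, num_bits=8):
        # Illustrative quantizer: symmetric uniform grid over the
        # weight range (an assumption, not the paper's scheme).
        qmax = 2 ** (num_bits - 1) - 1
        scale = w.abs().max() / qmax
        return torch.round(w / scale).clamp(-qmax, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # STE: treat the non-differentiable rounding as the identity,
        # so gradients flow to the full-precision weights unchanged.
        return grad_output, None

# Usage: w_q = STEQuantize.apply(w) inside the forward pass, while
# the optimizer updates the underlying full-precision weights w.
```

The biased gradient this estimator introduces when all weights are quantized at every step is the drawback the paper's Quant-Noise approach is designed to mitigate.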
