1 Jul 2019 • Wen-Pu Cai, Wu-Jun Li
WNQ adopts weight normalization to avoid a long-tail distribution of network weights, thereby reducing the quantization error.
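A minimal sketch of the normalize-then-quantize idea, assuming a simple zero-mean, unit-variance normalization and uniform symmetric quantization (the paper's exact normalization and quantizer may differ; `wnq_sketch` and its parameters are illustrative names, not the authors' API):

```python
import numpy as np

def wnq_sketch(weights: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Quantize weights after normalization (illustrative sketch)."""
    # Normalize to zero mean and unit variance so the value range
    # handed to the quantizer is less dominated by long-tail outliers.
    mu, sigma = weights.mean(), weights.std()
    w_norm = (weights - mu) / (sigma + 1e-12)

    # Uniform symmetric quantization of the normalized weights.
    levels = 2 ** (num_bits - 1) - 1
    scale = np.abs(w_norm).max() / levels
    w_q = np.round(w_norm / scale) * scale

    # De-normalize back to the original weight scale.
    return w_q * sigma + mu
```

The point of normalizing first is that a uniform quantizer's step size is set by the largest magnitude in the tensor, so squeezing the tail shrinks the step and the per-weight error.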
Tasks: Neural Network Compression, Quantization