no code implementations • 10 Mar 2021 • Sedigh Ghamari, Koray Ozcan, Thu Dinh, Andrey Melnikov, Juan Carvajal, Jan Ernst, Sek Chai
We propose a Quantization Guided Training (QGT) method to guide DNN training towards optimized low-bit-precision targets and reach extreme compression levels below 8-bit precision.
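The core primitive behind training toward low-bit-precision targets is fake quantization: weights are rounded to a b-bit grid in the forward pass while full-precision copies are updated. A minimal sketch of a symmetric uniform quantizer (illustrative only; the names and details are not from the paper):

```python
import numpy as np

def fake_quantize(w, bits=4):
    """Symmetric uniform fake quantization: snap weights to a
    `bits`-bit grid, then map back to float so training can keep
    updating the full-precision copies."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 for signed 4-bit
    scale = np.max(np.abs(w)) / qmax
    if scale == 0.0:                        # all-zero tensor: nothing to quantize
        return w.copy()
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

w = np.array([0.9, -0.31, 0.02, 0.55])
w4 = fake_quantize(w, bits=4)               # each entry lies on the 4-bit grid
```

Going below 8-bit mainly tightens `bits`, which shrinks the grid and raises the rounding error the training procedure must be guided to absorb.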
no code implementations • 4 Nov 2020 • Thu Dinh, Andrey Melnikov, Vasilios Daskalopoulos, Sek Chai
Quantization for deep neural networks (DNNs) has enabled developers to deploy models with less memory and more efficient low-power inference.
no code implementations • 2 Mar 2020 • Thu Dinh, Bao Wang, Andrea L. Bertozzi, Stanley J. Osher
In this paper, we focus on a co-design of efficient DNN compression algorithms and sparse neural architectures for robust and accurate deep learning.
2 code implementations • 25 Jan 2019 • Thu Dinh, Jack Xin
In this paper, we study the problem of coarse gradient descent (CGD) learning of a one hidden layer convolutional neural network (CNN) with binarized activation function and sparse weights.
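The binarized activation has zero derivative almost everywhere, so plain backpropagation stalls; coarse gradient descent substitutes a nonzero surrogate derivative in the chain rule. A toy straight-through-style sketch (the surrogate choice and names here are illustrative, not the paper's exact construction):

```python
import numpy as np

def binact(z):
    """Binarized activation: sign-like map to {-1, +1}; its true
    derivative is zero almost everywhere."""
    return np.where(z >= 0, 1.0, -1.0)

def coarse_grad_binact(z):
    """Coarse surrogate derivative: pass the gradient through
    where |z| <= 1, block it elsewhere."""
    return (np.abs(z) <= 1.0).astype(float)

# One coarse-gradient step on L(w) = 0.5 * (binact(w*x) - y)^2,
# using the surrogate in place of the true (zero) derivative.
x, y = 0.5, 1.0
w = -0.4                                   # binact(w*x) = -1, so loss > 0
g = (binact(w * x) - y) * coarse_grad_binact(w * x) * x
w_new = w - 1.0 * g                        # step with learning rate 1.0
```

After one step the sign of `w_new * x` flips, so the binarized unit now outputs the target label; the surrogate made descent possible despite the flat true gradient.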
Optimization and Control 90C26, 97R40, 68T05