1 code implementation • 4 Sep 2019 • Animesh Koratana, Daniel Kang, Peter Bailis, Matei Zaharia
In this work, we introduce Learned Intermediate representation Training (LIT), a novel model compression technique that outperforms a range of recent model compression techniques by leveraging the highly repetitive structure of modern DNNs (e.g., ResNet).
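A minimal sketch of the intermediate-representation matching that LIT's description suggests, in PyTorch. The names `student_block`, `teacher_in`, and `teacher_out` are hypothetical stand-ins for one student section and the teacher activations captured at the corresponding block boundaries (e.g., via forward hooks); this is an illustration under those assumptions, not the paper's actual training code.

```python
import torch.nn.functional as F

def lit_block_loss(student_block, teacher_in, teacher_out):
    # Hypothetical sketch: feed the teacher's intermediate input into one
    # (shallower) student section and penalize deviation from the teacher's
    # intermediate output, so each student section learns to mimic the
    # corresponding repeated section of the teacher network.
    student_out = student_block(teacher_in)
    return F.mse_loss(student_out, teacher_out)
```

Matching at block boundaries rather than only at the final logits is what lets a method like this exploit the repetitive section structure of architectures such as ResNet.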
no code implementations • ICLR 2019 • Animesh Koratana, Daniel Kang, Peter Bailis, Matei Zaharia
Knowledge distillation (KD) is a popular method for reducing the computational overhead of deep network inference, in which the output of a teacher model is used to train a smaller, faster student model.
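The KD setup described here is commonly implemented as a weighted sum of a temperature-softened KL term against the teacher's outputs and a cross-entropy term against the true labels; a minimal PyTorch sketch follows, where the hyperparameters `T` and `alpha` are illustrative defaults rather than values from the paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft targets: KL divergence between temperature-softened distributions.
    # The T*T factor rescales the gradient magnitude, following Hinton et al. (2015).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```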