Search Results for author: Govinda M Kamath

Found 1 paper, 1 paper with code

Knowledge Distillation as Semiparametric Inference

1 code implementation ICLR 2021 Tri Dao, Govinda M Kamath, Vasilis Syrgkanis, Lester Mackey

A popular approach to model compression is to train an inexpensive student model to mimic the class probabilities of a highly accurate but cumbersome teacher model.
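The soft-target matching described above can be sketched as a distillation loss: the student is trained to match the teacher's temperature-softened class probabilities. This is a minimal illustrative sketch (function names, temperature value, and the `T^2` scaling convention from the standard distillation recipe are assumptions, not taken from this paper).

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened class probabilities."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student's soft predictions against the
    teacher's soft targets (hypothetical sketch of the general recipe)."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_log_probs = np.log(softmax(student_logits, temperature))
    # T^2 factor keeps gradient magnitudes comparable to a hard-label loss
    return -temperature**2 * float(np.dot(teacher_probs, student_log_probs))
```

The loss is smallest when the student's logits reproduce the teacher's class probabilities, which is exactly the mimicry objective the abstract describes.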

Knowledge Distillation · Model Compression
