no code implementations • NeurIPS 2023 • Vasilis Kontonis, Fotis Iliopoulos, Khoa Trinh, Cenk Baykal, Gaurav Menghani, Erik Vee
Knowledge distillation with unlabeled examples is a powerful training paradigm for generating compact and lightweight student models in applications where the amount of labeled data is limited but one has access to a large pool of unlabeled data.
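A minimal sketch of this paradigm, assuming a PyTorch setup (the `teacher`, `student`, and `unlabeled_loader` objects are hypothetical placeholders, and the loss shown is generic soft-label distillation, not this paper's specific method):

```python
import torch
import torch.nn.functional as F

def distill_on_unlabeled(teacher, student, unlabeled_loader, optimizer, device="cpu"):
    """Train a student on teacher soft labels over an unlabeled pool.

    Generic sketch of distillation with unlabeled examples; the paper's
    actual method may differ in how teacher labels are used or weighted.
    """
    teacher.eval()
    student.train()
    for x in unlabeled_loader:  # x: a batch of unlabeled inputs
        x = x.to(device)
        with torch.no_grad():
            # Teacher predictions serve as soft pseudo-labels.
            soft_targets = F.softmax(teacher(x), dim=-1)
        log_probs = F.log_softmax(student(x), dim=-1)
        # Cross-entropy between teacher and student distributions.
        loss = -(soft_targets * log_probs).sum(dim=-1).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```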
no code implementations • 13 Oct 2022 • Fotis Iliopoulos, Vasilis Kontonis, Cenk Baykal, Gaurav Menghani, Khoa Trinh, Erik Vee
Our method is hyper-parameter-free, data-agnostic, and simple to implement.
no code implementations • 3 Oct 2022 • Cenk Baykal, Khoa Trinh, Fotis Iliopoulos, Gaurav Menghani, Erik Vee
Distilling knowledge from a large teacher model to a lightweight one is a widely successful approach for generating compact, powerful models in the semi-supervised learning setting where a limited amount of labeled data is available.
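As a rough illustration of a combined semi-supervised objective in this setting, a PyTorch sketch (the mixing weight `alpha` is an assumed hyperparameter for illustration, not this paper's formulation):

```python
import torch
import torch.nn.functional as F

def semi_supervised_kd_loss(student_logits_lab, labels,
                            student_logits_unlab, teacher_logits_unlab,
                            alpha=0.5):
    """Combine supervised cross-entropy on the small labeled set with a
    distillation term on the unlabeled data. `alpha` is illustrative."""
    # Hard-label loss on the limited labeled batch.
    ce = F.cross_entropy(student_logits_lab, labels)
    # Teacher-student KL divergence on the unlabeled batch.
    kd = F.kl_div(
        F.log_softmax(student_logits_unlab, dim=-1),
        F.softmax(teacher_logits_unlab, dim=-1),
        reduction="batchmean",
    )
    return (1 - alpha) * ce + alpha * kd
```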
1 code implementation • 16 Jun 2021 • Gaurav Menghani
Our hope is that this survey provides the reader with a mental model of the field and the understanding needed to apply generic efficiency techniques for immediate, significant improvements, and also equips them with ideas for further research and experimentation to achieve additional gains.
no code implementations • 5 Oct 2020 • Aniruddha Laud, Gaurav Menghani, Madhava Keralapura
The ability to store and transmit human genome sequences is an important part of genomic research and industrial applications.
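To give a sense of why genome data is so compressible, a toy Python example of 2-bit base packing; this is background intuition, not this paper's algorithm:

```python
# The four bases fit in 2 bits each, a 4x saving over 1-byte ASCII.
BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_bases(seq: str) -> bytes:
    """Pack an ACGT string into 2 bits per base."""
    out = bytearray()
    acc, nbits = 0, 0
    for base in seq:
        acc = (acc << 2) | BASE_TO_BITS[base]
        nbits += 2
        if nbits == 8:              # a full byte accumulated
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:                       # flush a partial final byte
        out.append(acc << (8 - nbits))
    return bytes(out)

print(pack_bases("ACGTACGT").hex())  # '1b1b': 2 bytes for 8 bases
```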
1 code implementation • 13 Nov 2019 • Gaurav Menghani, Sujith Ravi
Knowledge distillation is a widely used technique for model compression.
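For reference, a minimal PyTorch sketch of the standard distillation loss (temperature-softened soft targets plus hard-label cross-entropy, following Hinton et al.; `T` and `alpha` are illustrative defaults, not this paper's settings):

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard knowledge-distillation loss: KL between temperature-scaled
    teacher and student distributions, plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                     # rescale gradients for temperature T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```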