Search Results for author: Gaurav Menghani

Found 6 papers, 2 papers with code

SLaM: Student-Label Mixing for Distillation with Unlabeled Examples

no code implementations · NeurIPS 2023 · Vasilis Kontonis, Fotis Iliopoulos, Khoa Trinh, Cenk Baykal, Gaurav Menghani, Erik Vee

Knowledge distillation with unlabeled examples is a powerful training paradigm for generating compact and lightweight student models in applications where the amount of labeled data is limited but one has access to a large pool of unlabeled data.

Knowledge Distillation
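To make the paradigm concrete, here is a minimal, self-contained sketch of distillation with unlabeled examples: a fixed teacher produces soft labels on an unlabeled pool, and a smaller student is trained against them. This is a generic illustration (the linear models, sizes, and training loop are invented for the example), not the SLaM method itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical setup: a fixed "teacher" linear classifier labels an
# unlabeled pool, and a "student" of the same form learns from it.
n_unlabeled, n_features, n_classes = 200, 8, 3
X = rng.normal(size=(n_unlabeled, n_features))
W_teacher = rng.normal(size=(n_features, n_classes))

# Step 1: the teacher produces soft pseudo-labels on the unlabeled pool.
soft_labels = softmax(X @ W_teacher)

# Step 2: the student minimizes cross-entropy against those soft labels
# via plain gradient descent (no ground-truth labels are used).
W_student = np.zeros((n_features, n_classes))
lr = 0.1
for _ in range(500):
    probs = softmax(X @ W_student)
    grad = X.T @ (probs - soft_labels) / n_unlabeled
    W_student -= lr * grad

# After training, the student's hard predictions largely agree with
# the teacher's on the unlabeled pool.
agreement = np.mean(
    softmax(X @ W_student).argmax(axis=1) == soft_labels.argmax(axis=1)
)
print(f"student-teacher agreement: {agreement:.2f}")
```

In practice the student is a much smaller network than the teacher, which is the point of the paradigm: the unlabeled pool lets it inherit the teacher's behavior without extra labeled data.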

Weighted Distillation with Unlabeled Examples

no code implementations · 13 Oct 2022 · Fotis Iliopoulos, Vasilis Kontonis, Cenk Baykal, Gaurav Menghani, Khoa Trinh, Erik Vee

Our method is hyper-parameter free, data-agnostic, and simple to implement.

Robust Active Distillation

no code implementations · 3 Oct 2022 · Cenk Baykal, Khoa Trinh, Fotis Iliopoulos, Gaurav Menghani, Erik Vee

Distilling knowledge from a large teacher model to a lightweight one is a widely successful approach for generating compact, powerful models in the semi-supervised learning setting where a limited amount of labeled data is available.

Active Learning · Informativeness +1

Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better

1 code implementation · 16 Jun 2021 · Gaurav Menghani

Our hope is that this survey provides the reader with the mental model and the necessary understanding of the field to apply generic efficiency techniques and immediately obtain significant improvements, and also equips them with ideas for further research and experimentation to achieve additional gains.

Information Retrieval · Natural Language Understanding +4

Genome Compression Against a Reference

no code implementations · 5 Oct 2020 · Aniruddha Laud, Gaurav Menghani, Madhava Keralapura

Being able to store and transmit human genome sequences is an important part of genomic research and industrial applications.
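The core idea of compressing a genome against a reference is to store only where the target sequence differs from a shared reference. A minimal toy sketch (substitutions only, equal-length sequences; real tools also handle insertions, deletions, and entropy-code the differences) could look like:

```python
# Toy reference-based compression: encode a sequence as the list of
# positions where it differs from a same-length reference (substitutions
# only). This illustrates the general idea, not the paper's exact scheme.
def compress(target: str, reference: str) -> list:
    assert len(target) == len(reference)
    return [(i, t) for i, (t, r) in enumerate(zip(target, reference)) if t != r]

def decompress(diffs: list, reference: str) -> str:
    seq = list(reference)
    for i, base in diffs:
        seq[i] = base
    return "".join(seq)

reference = "ACGTACGTACGT"
target    = "ACGAACGTACCT"   # two substitutions relative to the reference
diffs = compress(target, reference)
print(diffs)                 # [(3, 'A'), (10, 'C')]
assert decompress(diffs, reference) == target
```

Because two human genomes differ in only a small fraction of positions, storing the differences rather than the full sequence yields large savings.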
