no code implementations • NeurIPS 2023 • Vasilis Kontonis, Fotis Iliopoulos, Khoa Trinh, Cenk Baykal, Gaurav Menghani, Erik Vee
Knowledge distillation with unlabeled examples is a powerful training paradigm for generating compact and lightweight student models in applications where the amount of labeled data is limited but one has access to a large pool of unlabeled data.
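As a rough illustration of the paradigm (a generic distillation loop, not the paper's specific method; `teacher`, `student`, and `unlabeled_loader` are placeholder PyTorch objects):

```python
import torch
import torch.nn.functional as F

def distill_on_unlabeled(teacher, student, unlabeled_loader, optimizer, T=2.0):
    """Train `student` to match `teacher` soft labels on unlabeled data.

    Generic distillation loop; the paper's actual method adds its own
    weighting/selection of examples on top of a loop like this.
    """
    teacher.eval()
    student.train()
    for x in unlabeled_loader:          # no ground-truth labels needed
        with torch.no_grad():
            soft_targets = F.softmax(teacher(x) / T, dim=-1)
        log_probs = F.log_softmax(student(x) / T, dim=-1)
        # KL divergence between teacher and student output distributions,
        # scaled by T^2 as in standard temperature-based distillation.
        loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T**2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```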
no code implementations • 31 Jan 2023 • Cenk Baykal, Dylan J Cutler, Nishanth Dikkala, Nikhil Ghosh, Rina Panigrahy, Xin Wang
One way of introducing sparsity into deep networks is by attaching an external table of parameters that is sparsely looked up at different layers of the network.
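A hypothetical sketch of the idea, assuming a learned key/value table with top-k lookup (illustrative only; the paper's exact architecture may differ):

```python
import torch
import torch.nn as nn

class SparseTableLayer(nn.Module):
    """Layer augmented with an external parameter table that each input
    sparsely indexes: only the top-k matching rows are touched per example."""

    def __init__(self, dim, table_size=4096, k=4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(table_size, dim))
        self.values = nn.Parameter(torch.randn(table_size, dim))
        self.k = k

    def forward(self, x):                      # x: (batch, dim)
        scores = x @ self.keys.t()             # similarity to every table key
        topk = scores.topk(self.k, dim=-1)     # only k rows of the table used
        weights = torch.softmax(topk.values, dim=-1)
        gathered = self.values[topk.indices]   # (batch, k, dim)
        return x + (weights.unsqueeze(-1) * gathered).sum(dim=1)
```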
no code implementations • 13 Oct 2022 • Fotis Iliopoulos, Vasilis Kontonis, Cenk Baykal, Gaurav Menghani, Khoa Trinh, Erik Vee
Our method is hyperparameter-free, data-agnostic, and simple to implement.
no code implementations • 3 Oct 2022 • Cenk Baykal, Khoa Trinh, Fotis Iliopoulos, Gaurav Menghani, Erik Vee
Distilling knowledge from a large teacher model to a lightweight one is a widely successful approach for generating compact, powerful models in the semi-supervised learning setting where a limited amount of labeled data is available.
no code implementations • 8 Aug 2022 • Cenk Baykal, Nishanth Dikkala, Rina Panigrahy, Cyrus Rashtchian, Xin Wang
After representing LSH-based sparse networks with our model, we prove that sparse networks can match the approximation power of dense networks on Lipschitz functions.
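A toy illustration of an LSH-routed sparse predictor (assumed for exposition, not the paper's formal construction): random hyperplanes hash each input to a bucket, and only that bucket's small parameter slice is applied.

```python
import numpy as np

rng = np.random.default_rng(0)

class LSHSparseNet:
    """Hash inputs with random hyperplanes; each bucket holds its own
    local linear fit, so any single prediction touches a sparse slice
    of the total parameters."""

    def __init__(self, dim, n_planes=8):
        self.planes = rng.normal(size=(n_planes, dim))   # LSH hash functions
        self.tables = {}                                 # bucket -> (W, b)
        self.dim = dim

    def _bucket(self, x):
        bits = (self.planes @ x > 0).astype(int)
        return int("".join(map(str, bits)), 2)

    def predict(self, x):
        b = self._bucket(x)
        if b not in self.tables:                         # lazily create bucket
            self.tables[b] = (rng.normal(size=self.dim), 0.0)
        W, bias = self.tables[b]
        return W @ x + bias                              # local linear model
```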
no code implementations • 8 Feb 2022 • Cenk Baykal, Vamsi K. Potluru, Sameena Shah, Manuela M. Veloso
Most existing work focuses on the monoplex setting, where we have access to a network with only a single type of connection between entities.
1 code implementation • 6 Jun 2021 • Junteng Jia, Cenk Baykal, Vamsi K. Potluru, Austin R. Benson
With the widespread availability of complex relational data, semi-supervised node classification in graphs has become a central machine learning problem.
no code implementations • 6 Apr 2021 • Cenk Baykal, Lucas Liebenwein, Dan Feldman, Daniela Rus
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training (i.e., active learning).
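As a generic stand-in for the paper's online selection rule (which comes with formal guarantees), entropy-based uncertainty sampling picks the points the current model is least sure about:

```python
import numpy as np

def select_informative(model_probs, budget):
    """Pick the `budget` unlabeled points with highest predictive entropy.

    model_probs: (n_points, n_classes) array of predicted class probabilities.
    Returns indices of the most uncertain points to label next.
    """
    entropy = -np.sum(model_probs * np.log(model_probs + 1e-12), axis=1)
    return np.argsort(-entropy)[:budget]
```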
1 code implementation • 4 Mar 2021 • Lucas Liebenwein, Cenk Baykal, Brandon Carter, David Gifford, Daniela Rus
Neural network pruning is a popular technique used to reduce the inference costs of modern, potentially overparameterized, networks.
no code implementations • 15 Feb 2020 • Murad Tukan, Cenk Baykal, Dan Feldman, Daniela Rus
A coreset is a small, representative subset of the original data points such that models trained on the coreset are provably competitive with those trained on the original data set.
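The standard sensitivity-sampling recipe behind many coreset constructions, sketched below (the cited paper's contribution is deriving the sensitivities for its specific problem, which this generic sketch takes as given):

```python
import numpy as np

def sensitivity_coreset(points, sensitivities, m, seed=0):
    """Sample an m-point weighted coreset with probability proportional
    to each point's sensitivity (its worst-case loss contribution).

    points: (n, d) array; sensitivities: (n,) nonnegative array.
    Returns the sampled points and their unbiased reweighting factors.
    """
    rng = np.random.default_rng(seed)
    probs = sensitivities / sensitivities.sum()
    idx = rng.choice(len(points), size=m, replace=True, p=probs)
    weights = 1.0 / (m * probs[idx])   # makes the coreset loss unbiased
    return points[idx], weights
```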
2 code implementations • ICLR 2020 • Lucas Liebenwein, Cenk Baykal, Harry Lang, Dan Feldman, Daniela Rus
We present a provable, sampling-based approach for generating compact Convolutional Neural Networks (CNNs) by identifying and removing redundant filters from an over-parameterized network.
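To make the filter-removal step concrete, here is a simple L1-norm variant (a common proxy used for illustration; the paper instead samples filters via data-informed sensitivities with provable error bounds):

```python
import torch
import torch.nn as nn

def prune_filters(conv: nn.Conv2d, keep_ratio=0.5):
    """Remove low-importance filters from a conv layer, ranking filters
    by L1 norm, and return a smaller Conv2d with the kept filters."""
    importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # per-filter L1
    n_keep = max(1, int(keep_ratio * conv.out_channels))
    keep = importance.topk(n_keep).indices
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep])
    return pruned
```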
2 code implementations • 11 Oct 2019 • Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus
We introduce a pruning algorithm that provably sparsifies the parameters of a trained model in a way that approximately preserves the model's predictive accuracy.
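A simplified stand-in for sampling-based weight sparsification (magnitude-proportional sampling with unbiased reweighting; the paper uses data-dependent probabilities to obtain its accuracy guarantees):

```python
import torch

def sample_sparsify(weight, n_samples, seed=0):
    """Sparsify a weight tensor by sampling entries proportionally to
    magnitude and reweighting so the result is unbiased in expectation."""
    g = torch.Generator().manual_seed(seed)
    probs = weight.abs().flatten()
    probs = probs / probs.sum()
    idx = torch.multinomial(probs, n_samples, replacement=True, generator=g)
    sparse = torch.zeros_like(weight).flatten()
    # each sampled entry contributes value / (n_samples * prob);
    # repeated draws of the same index accumulate via scatter_add_
    sparse.scatter_add_(0, idx, weight.flatten()[idx] / (n_samples * probs[idx]))
    return sparse.view_as(weight)
```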
no code implementations • ICLR 2019 • Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus
We present an efficient coresets-based neural network compression algorithm that sparsifies the parameters of a trained fully-connected neural network in a manner that provably approximates the network's output.
no code implementations • ICLR 2018 • Cenk Baykal, Murad Tukan, Dan Feldman, Daniela Rus
Support Vector Machines (SVMs) are one of the most popular algorithms for classification and regression analysis.
no code implementations • 13 Aug 2017 • Cenk Baykal, Lucas Liebenwein, Wilko Schwarting
We present a novel coreset construction algorithm for solving classification tasks using Support Vector Machines (SVMs) in a computationally efficient manner.