1 code implementation • ICLR 2023 • Anil Kag, Durmus Alp Emre Acar, Aditya Gangrade, Venkatesh Saligrama
We propose a novel knowledge distillation (KD) method to selectively instill teacher knowledge into a student model, motivated by situations where the student's capacity is significantly smaller than the teacher's.
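A minimal PyTorch sketch of what selective distillation could look like, assuming a per-example gate that keeps the KD term only where the teacher predicts correctly; the gate rule, temperature, and mixing weight here are hypothetical illustrations, not the paper's actual selection mechanism:

```python
import torch
import torch.nn.functional as F

def selective_kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Cross-entropy plus a distillation term applied only on examples
    the teacher classifies correctly (a simple illustrative gate)."""
    ce = F.cross_entropy(student_logits, labels)
    # gate: 1 where the teacher is correct, 0 elsewhere (hypothetical rule)
    gate = (teacher_logits.argmax(dim=1) == labels).float()
    kd_per_example = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="none",
    ).sum(dim=1) * (T * T)
    kd = (gate * kd_per_example).sum() / gate.sum().clamp(min=1.0)
    return (1 - alpha) * ce + alpha * kd

# toy usage
s, t = torch.randn(32, 10), torch.randn(32, 10)
y = torch.randint(0, 10, (32,))
print(selective_kd_loss(s, t, y))
```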
no code implementations • 7 Jul 2022 • Durmus Alp Emre Acar, Venkatesh Saligrama
We propose a novel training recipe for federated learning with heterogeneous networks, where each device can have a different architecture.
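A rough sketch of the setting only, assuming (hypothetically) that the server keeps one global model per architecture family and averages updates within each family; the paper's actual recipe for jointly training across architectures is not shown here:

```python
import copy
import torch
import torch.nn as nn

# Hypothetical heterogeneous population: two architecture families.
archs = {
    "small": lambda: nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2)),
    "large": lambda: nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2)),
}
global_models = {name: build() for name, build in archs.items()}

def aggregate(states):
    """Average the state dicts of devices that share one architecture."""
    return {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}

# e.g. after local training, devices of the "small" family report back:
local_states = [copy.deepcopy(global_models["small"].state_dict()) for _ in range(3)]
global_models["small"].load_state_dict(aggregate(local_states))
```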
3 code implementations • ICLR 2021 • Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N. Whatmough, Venkatesh Saligrama
We propose a novel federated learning method for distributed training of neural network models, in which the server orchestrates cooperation among a subset of randomly chosen devices in each round.
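For orientation, a bare FedAvg-style communication round with a random device subset, in PyTorch; `dev.sample_batch()` is a hypothetical device API, and the paper's method modifies the local training objective rather than following this plain-averaging skeleton:

```python
import copy
import random
import torch

def fed_round(server_model, devices, frac=0.1, local_steps=5, lr=0.1):
    """One round: sample a random device subset, train each locally from
    the current server weights, then average the returned models."""
    chosen = random.sample(devices, max(1, int(frac * len(devices))))
    states = []
    for dev in chosen:
        model = copy.deepcopy(server_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(local_steps):
            x, y = dev.sample_batch()  # hypothetical device API
            opt.zero_grad()
            torch.nn.functional.cross_entropy(model(x), y).backward()
            opt.step()
        states.append(model.state_dict())
    new_state = {k: torch.stack([s[k] for s in states]).mean(0)
                 for k in states[0]}
    server_model.load_state_dict(new_state)
```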
no code implementations • 2 Nov 2021 • Ali Siahkamari, Durmus Alp Emre Acar, Christopher Liao, Kelly Geyer, Venkatesh Saligrama, Brian Kulis
For the task of convex Lipschitz regression, we establish that our proposed algorithm converges with iteration complexity of $O(n\sqrt{d}/\epsilon)$ for a dataset $\bm X \in \mathbb R^{n\times d}$ and $\epsilon > 0$.
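For reference, the standard QP formulation of convex Lipschitz regression that this complexity statement concerns, solved with an off-the-shelf solver (cvxpy) rather than the paper's faster algorithm; the Lipschitz bound `L` and the synthetic data are made up:

```python
import numpy as np
import cvxpy as cp

n, d, L = 50, 2, 5.0
X = np.random.randn(n, d)
y = np.sum(X**2, axis=1) + 0.1 * np.random.randn(n)

theta = cp.Variable(n)    # fitted values f(x_i)
xi = cp.Variable((n, d))  # subgradients of f at each x_i

constraints = []
for i in range(n):
    for j in range(n):
        if i != j:  # convexity: first-order under-estimator condition
            constraints.append(theta[j] >= theta[i] + xi[i] @ (X[j] - X[i]))
    constraints.append(cp.norm(xi[i], 2) <= L)  # Lipschitz bound

prob = cp.Problem(cp.Minimize(cp.sum_squares(y - theta)), constraints)
prob.solve()
print(prob.value)
```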
no code implementations • 14 Apr 2020 • Aditya Gangrade, Durmus Alp Emre Acar, Venkatesh Saligrama
We propose a new formulation for the budget learning (BL) problem via the concept of bracketings.
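A toy illustration of the bracketing idea with hypothetical bracket functions: a pair (lo, hi) sandwiching the target lets a cheap rule predict wherever the pair agrees in sign and defer to the expensive model otherwise. Nothing below is the paper's actual construction, which learns such pairs:

```python
import numpy as np

f  = lambda x: np.sign(np.sin(x))  # expensive "oracle" classifier
lo = lambda x: np.sin(x) - 0.3     # lower bracket: lo <= sin <= hi
hi = lambda x: np.sin(x) + 0.3     # upper bracket

def predict(x):
    if lo(x) > 0:   # both brackets positive -> confidently +1
        return 1.0
    if hi(x) < 0:   # both brackets negative -> confidently -1
        return -1.0
    return f(x)     # brackets straddle zero -> defer to the oracle

xs = np.linspace(0, 2 * np.pi, 9)
print([predict(x) for x in xs])
```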
2 code implementations • NeurIPS 2019 • Don Dennis, Durmus Alp Emre Acar, Vikram Mandikal, Vinu Sankar Sadasivan, Venkatesh Saligrama, Harsha Vardhan Simhadri, Prateek Jain
The second layer consumes the output of the first layer using a second RNN, thus capturing long-range dependencies.
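A compact PyTorch sketch of this two-layer pattern, assuming GRU cells and a fixed brick length (both choices illustrative; the paper's cells and hyperparameters may differ): the first RNN runs independently over short bricks of the sequence, and the second RNN consumes the per-brick summaries.

```python
import torch
import torch.nn as nn

class ShallowRNN(nn.Module):
    """First RNN over fixed-length bricks, second RNN over brick summaries."""
    def __init__(self, in_dim, hid, brick):
        super().__init__()
        self.brick = brick
        self.rnn1 = nn.GRU(in_dim, hid, batch_first=True)
        self.rnn2 = nn.GRU(hid, hid, batch_first=True)

    def forward(self, x):                  # x: (batch, T, in_dim)
        b, T, d = x.shape
        k = self.brick
        assert T % k == 0
        x = x.reshape(b * (T // k), k, d)  # one row per brick
        _, h1 = self.rnn1(x)               # last hidden state of each brick
        h1 = h1.squeeze(0).reshape(b, T // k, -1)
        _, h2 = self.rnn2(h1)              # summarize across bricks
        return h2.squeeze(0)               # (batch, hid)

out = ShallowRNN(8, 16, brick=10)(torch.randn(4, 50, 8))
print(out.shape)  # torch.Size([4, 16])
```

Because each brick is processed independently, the first layer can run incrementally as data arrives, which is what makes this structure attractive for streaming inference on constrained devices.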