1 code implementation • NeurIPS 2023 • Shankar Padmanabhan, Yasumasa Onoe, Michael J. Q. Zhang, Greg Durrett, Eunsol Choi
Then, we update the model parameters so that the distribution of the LM (the student) matches the distribution of the LM conditioned on the definition (the teacher) on the transfer set.
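The objective described above can be sketched as minimizing the KL divergence from the definition-conditioned teacher distribution to the student distribution over next tokens. The following is a minimal toy sketch of that idea, not the paper's implementation: it uses a hypothetical 4-token vocabulary, hand-picked teacher logits, and plain gradient steps on the student's logits (the gradient of KL(teacher ‖ student) with respect to the student logits is softmax(logits) − teacher_probs).

```python
import math

def softmax(logits):
    # Numerically stable softmax over a small list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    # KL(p || q); assumes all probabilities are strictly positive.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def distill_step(student_logits, teacher_probs, lr=0.5):
    # One gradient step on KL(teacher || student) w.r.t. the student logits;
    # the gradient is softmax(student_logits) - teacher_probs.
    probs = softmax(student_logits)
    return [z - lr * (p - t)
            for z, p, t in zip(student_logits, probs, teacher_probs)]

# Hypothetical numbers for illustration only.
teacher = softmax([2.0, 0.5, 0.1, -1.0])  # teacher: LM conditioned on the definition
student_logits = [0.0, 0.0, 0.0, 0.0]     # student: unconditioned LM (uniform)

before = kl(teacher, softmax(student_logits))
for _ in range(50):
    student_logits = distill_step(student_logits, teacher)
after = kl(teacher, softmax(student_logits))
assert after < before  # the student has moved toward the teacher
```

In the paper's setting the distributions come from full language models evaluated on transfer-set continuations rather than a toy vocabulary, but the update direction is the same: nudge the student toward the teacher's definition-informed predictions.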
1 code implementation • 2 May 2023 • Yasumasa Onoe, Michael J. Q. Zhang, Shankar Padmanabhan, Greg Durrett, Eunsol Choi
Pre-trained language models (LMs) are used for knowledge-intensive tasks like question answering, but their knowledge continuously becomes outdated as the world changes.
no code implementations • 30 Apr 2022 • Alex Sheng, Shankar Padmanabhan
Prior work in meta-learning and neural architecture search has led to substantial successes across various task domains, spawning myriad approaches for algorithmically optimizing the design and learning dynamics of deep learning models.
no code implementations • 17 Aug 2021 • Shankar Padmanabhan, Aidan Petratos, Allen Ting, Kristina Zhou, Dylan Hageman, Jesse R. Pisel, Michael J. Pyrcz
The placement of charging stations in areas with developing charging infrastructure is a critical component of the future success of electric vehicles (EVs).