1 code implementation • 11 Feb 2023 • Junhong Shen, Liam Li, Lucio M. Dery, Corey Staten, Mikhail Khodak, Graham Neubig, Ameet Talwalkar
Fine-tuning large-scale pretrained models has led to tremendous progress in well-studied modalities such as vision and NLP.
no code implementations • NeurIPS 2021 • Mikhail Khodak, Renbo Tu, Tian Li, Liam Li, Maria-Florina Balcan, Virginia Smith, Ameet Talwalkar
Tuning hyperparameters is a crucial but arduous part of the machine learning pipeline.
3 code implementations • NeurIPS 2021 • Nicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher Ré, Ameet Talwalkar
An important goal of AutoML is to automate away the design of neural networks on new tasks in under-explored domains.
no code implementations • 30 Jan 2021 • Maruan Al-Shedivat, Liam Li, Eric Xing, Ameet Talwalkar
Meta-learning has enabled learning statistical models that can be quickly adapted to new prediction tasks.
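As a rough illustration of that idea (not the paper's own method), the sketch below uses a Reptile-style meta-update to learn an initialization that adapts to a new regression task in a few gradient steps; the task family and all names are hypothetical assumptions.

```python
# Hedged sketch: learn an initialization that adapts quickly to new tasks.
# Reptile-style meta-update; illustrative only, not the paper's algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(1, 1)                        # meta-parameters
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5

def sample_task():
    # Hypothetical task family: regress y = a * x for a random slope a.
    a = torch.randn(1)
    x = torch.randn(32, 1)
    return x, a * x

for _ in range(200):
    x, y = sample_task()
    fast = nn.Linear(1, 1)
    fast.load_state_dict(model.state_dict())   # start from the meta-init
    opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                # quick adaptation to this task
        loss = F.mse_loss(fast(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                       # move meta-init toward adapted weights
        for p, q in zip(model.parameters(), fast.parameters()):
            p += meta_lr * (q - p)
```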
no code implementations • 1 Jan 2021 • Nicholas Carl Roberts, Mikhail Khodak, Tri Dao, Liam Li, Nina Balcan, Christopher Re, Ameet Talwalkar
An important goal of neural architecture search (NAS) is to automate away the design of neural networks on new tasks in under-explored domains, thus helping to democratize machine learning.
1 code implementation • ICLR 2021 • Liam Li, Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar
Recent state-of-the-art methods for neural architecture search (NAS) exploit gradient-based optimization by relaxing the problem into continuous optimization over architectures and shared weights, a noisy process that remains poorly understood.
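The continuous relaxation mentioned above is easiest to see in code. Below is a minimal, illustrative sketch of a DARTS-style mixed operation in PyTorch, where a softmax over architecture logits turns the discrete choice of operation into a differentiable convex combination; the class, the candidate ops, and all names are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of the continuous relaxation behind gradient-based NAS
# (DARTS-style). Class and op choices are illustrative, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One edge of a search cell: a softmax-weighted sum of candidate ops."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),  # candidate: 3x3 conv
            nn.Conv2d(channels, channels, 5, padding=2),  # candidate: 5x5 conv
            nn.Identity(),                                # candidate: skip connection
        ])
        # Architecture parameters: one logit per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        # Relax the discrete operation choice into a convex combination, so the
        # architecture logits train by gradient descent alongside shared weights.
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

edge = MixedOp(channels=8)
out = edge(torch.randn(1, 8, 32, 32))  # gradients reach both the ops and alpha
```

Because the output depends smoothly on `alpha`, operation weights and architecture parameters are optimized jointly, which is precisely the noisy process the paper studies.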
no code implementations • 25 Sep 2019 • Mikhail Khodak, Liam Li, Maria-Florina Balcan, Ameet Talwalkar
Weight-sharing—the simultaneous optimization of multiple neural networks using the same parameters—has emerged as a key component of state-of-the-art neural architecture search.
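As a concrete illustration of that definition, the hedged sketch below trains one set of shared parameters while sampling a different candidate architecture at each step; the supernet structure and all names are illustrative assumptions, not the paper's code.

```python
# Hedged sketch of weight-sharing: many candidate architectures trained
# through one set of shared parameters. Names are illustrative assumptions.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEdge(nn.Module):
    def __init__(self, dim, num_ops=3):
        super().__init__()
        # All candidate ops live in one module; every architecture that
        # selects an op trains (and reuses) that op's shared weights.
        self.ops = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_ops))

    def forward(self, x, choice):
        return torch.relu(self.ops[choice](x))

edge = SharedEdge(dim=16)
opt = torch.optim.SGD(edge.parameters(), lr=0.01)
x, y = torch.randn(8, 16), torch.randn(8, 16)

# Each step samples a different architecture, yet every gradient step updates
# the same shared parameters -- the "simultaneous optimization" defined above.
for _ in range(100):
    choice = random.randrange(3)       # sample an architecture
    loss = F.mse_loss(edge(x, choice), y)
    opt.zero_grad(); loss.backward(); opt.step()
```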
no code implementations • 12 Mar 2019 • Liam Li, Evan Sparks, Kevin Jamieson, Ameet Talwalkar
Hyperparameter tuning of multi-stage pipelines introduces a significant computational burden.
4 code implementations • 20 Feb 2019 • Liam Li, Ameet Talwalkar
Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures.
5 code implementations • ICLR 2018 • Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Moritz Hardt, Benjamin Recht, Ameet Talwalkar
Modern learning models are characterized by large hyperparameter spaces and long training times.