Search Results for author: Liam Li

Found 10 papers, 5 papers with code

Cross-Modal Fine-Tuning: Align then Refine

1 code implementation • 11 Feb 2023 • Junhong Shen, Liam Li, Lucio M. Dery, Corey Staten, Mikhail Khodak, Graham Neubig, Ameet Talwalkar

Fine-tuning large-scale pretrained models has led to tremendous progress in well-studied modalities such as vision and NLP.

AutoML

On Data Efficiency of Meta-learning

no code implementations • 30 Jan 2021 • Maruan Al-Shedivat, Liam Li, Eric Xing, Ameet Talwalkar

Meta-learning has enabled learning statistical models that can be quickly adapted to new prediction tasks.

Meta-Learning, Personalized Federated Learning

Searching for Convolutions and a More Ambitious NAS

no code implementations • 1 Jan 2021 • Nicholas Carl Roberts, Mikhail Khodak, Tri Dao, Liam Li, Nina Balcan, Christopher Re, Ameet Talwalkar

An important goal of neural architecture search (NAS) is to automate away the design of neural networks on new tasks in under-explored domains, thus helping to democratize machine learning.

Neural Architecture Search

Geometry-Aware Gradient Algorithms for Neural Architecture Search

1 code implementation • ICLR 2021 • Liam Li, Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar

Recent state-of-the-art methods for neural architecture search (NAS) exploit gradient-based optimization by relaxing the problem into continuous optimization over architectures and shared-weights, a noisy process that remains poorly understood.

Neural Architecture Search
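
The continuous relaxation mentioned in the abstract above is the core idea behind gradient-based NAS: each candidate operation is blended by a softmax over architecture parameters, so both the architecture and the shared weights can be updated by gradient descent. A minimal illustrative sketch of that relaxation in PyTorch (not the paper's specific algorithm; the candidate operation set and channel count are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One edge of a differentiable NAS cell: a softmax-weighted
    mixture over candidate operations (continuous relaxation)."""
    def __init__(self, channels):
        super().__init__()
        # Illustrative candidate operation set (an assumption).
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])
        # Architecture parameters: one logit per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# The shared operation weights and the architecture parameters `alpha`
# are then optimized jointly (typically with alternating gradient steps
# on training and validation losses).
```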

On Weight-Sharing and Bilevel Optimization in Architecture Search

no code implementations • 25 Sep 2019 • Mikhail Khodak, Liam Li, Maria-Florina Balcan, Ameet Talwalkar

Weight-sharing—the simultaneous optimization of multiple neural networks using the same parameters—has emerged as a key component of state-of-the-art neural architecture search.

Bilevel Optimization, Feature Selection +1
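
Weight-sharing, as described in the abstract above, means every candidate architecture is evaluated with a single shared set of network parameters rather than trained from scratch, and the search is commonly framed as a bilevel problem: shared weights are fit on training data in the inner level, architecture parameters on validation data in the outer level. A rough sketch of one alternating update (a hypothetical training loop, not the paper's algorithm; `supernet(x, alpha)` is an assumed interface):

```python
import torch.nn.functional as F

def bilevel_search_step(supernet, alpha, train_batch, val_batch,
                        w_opt, alpha_opt):
    # Inner level: update the shared weights on the training loss.
    x, y = train_batch
    w_opt.zero_grad()
    F.cross_entropy(supernet(x, alpha), y).backward()
    w_opt.step()

    # Outer level: update the architecture parameters on the
    # validation loss, reusing the same shared weights.
    x, y = val_batch
    alpha_opt.zero_grad()
    F.cross_entropy(supernet(x, alpha), y).backward()
    alpha_opt.step()
```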

Exploiting Reuse in Pipeline-Aware Hyperparameter Tuning

no code implementations • 12 Mar 2019 • Liam Li, Evan Sparks, Kevin Jamieson, Ameet Talwalkar

Hyperparameter tuning of multi-stage pipelines introduces a significant computational burden.
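
Much of that burden comes from re-running every pipeline stage for every configuration, even when many configurations share a prefix (for example, the same preprocessing settings). The sketch below illustrates the general idea of reusing shared prefix computations by memoizing stage outputs; it is not the paper's method, and the stage names and parameters are assumptions:

```python
# Illustrative two-stage pipeline: an expensive preprocessing stage
# followed by a cheaper model-fitting stage (names are assumptions).
def preprocess(data, n_components):
    return [row[:n_components] for row in data]   # stand-in for e.g. PCA

def fit_model(features, learning_rate):
    return {"lr": learning_rate, "n_features": len(features[0])}

_prefix_cache = {}

def run_pipeline(data, n_components, learning_rate):
    # Memoize the output of the shared prefix so configurations that
    # differ only in later stages reuse it instead of recomputing it.
    key = ("preprocess", n_components)
    if key not in _prefix_cache:
        _prefix_cache[key] = preprocess(data, n_components)
    return fit_model(_prefix_cache[key], learning_rate)

# Tuning the learning rate now runs preprocess() once per n_components.
for lr in (0.01, 0.1, 1.0):
    run_pipeline([[1, 2, 3], [4, 5, 6]], n_components=2, learning_rate=lr)
```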

Random Search and Reproducibility for Neural Architecture Search

4 code implementations • 20 Feb 2019 • Liam Li, Ameet Talwalkar

Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures.

Hyperparameter Optimization, Neural Architecture Search
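
The random search baseline studied in this line of work is simple to state: sample architectures uniformly from the search space, train and evaluate each, and keep the best by validation performance. A hedged sketch, with `train` and `validate` left as assumed user-supplied callables:

```python
import random

def random_search_nas(search_space, n_samples, train, validate, seed=0):
    """Random search baseline for NAS: sample architectures uniformly,
    evaluate each, and return the best by validation score.
    `search_space` maps each architectural decision to its choices."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_samples):
        # Sample one choice per architectural decision.
        arch = {name: rng.choice(options)
                for name, options in search_space.items()}
        model = train(arch)
        score = validate(model)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```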
