Search Results for author: Oscar Li

Found 8 papers, 7 papers with code

OmniPred: Language Models as Universal Regressors

1 code implementation 22 Feb 2024 Xingyou Song, Oscar Li, Chansoo Lee, Bangding Yang, Daiyi Peng, Sagi Perel, Yutian Chen

Over the broad landscape of experimental design, regression has been a powerful tool to accurately predict the outcome metrics of a system or model given a set of parameters, but has been traditionally restricted to methods which are only applicable to a specific task.

Experimental Design, regression
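
The idea the title points to is treating regression as text-to-number prediction: a parameter configuration is serialized into a string and a language model predicts the metric, also rendered as text. The sketch below only illustrates that serialization step under an assumed key:value and scientific-notation format; it is not OmniPred's actual representation or model.

```python
# Illustrative only: serialize (parameters, metric) pairs into text, the kind of
# (input, target) pair a language-model regressor could be trained on. The
# key:value format and metric encoding are assumptions, not the paper's scheme.

def serialize_params(params: dict) -> str:
    """Render a parameter configuration as a deterministic key:value string."""
    return ",".join(f"{k}:{params[k]}" for k in sorted(params))

def serialize_metric(y: float, sig_digits: int = 4) -> str:
    """Render the target metric as text so it can be decoded token by token."""
    return format(y, f".{sig_digits}e")  # e.g. '9.3120e-01'

if __name__ == "__main__":
    trial = {"learning_rate": 3e-4, "batch_size": 128, "optimizer": "adam"}
    accuracy = 0.9312
    x_text = serialize_params(trial)      # model input
    y_text = serialize_metric(accuracy)   # model target
    print(x_text, "->", y_text)
```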

Variance-Reduced Gradient Estimation via Noise-Reuse in Online Evolution Strategies

1 code implementation NeurIPS 2023 Oscar Li, James Harrison, Jascha Sohl-Dickstein, Virginia Smith, Luke Metz

Unrolled computation graphs are prevalent throughout machine learning but present challenges to automatic differentiation (AD) gradient estimation methods when their loss functions exhibit extreme local sensitivity, discontinuity, or blackbox characteristics.
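
For context on estimating gradients without automatic differentiation, the sketch below shows a plain antithetic evolution-strategies estimator in NumPy; it needs only loss evaluations, so it tolerates discontinuous or blackbox losses. It is a baseline illustration, not the paper's noise-reuse online variant for unrolled computation graphs, and the loss function is a stand-in.

```python
import numpy as np

def es_gradient(loss_fn, theta, sigma=0.1, num_pairs=64, rng=None):
    """Antithetic evolution-strategies estimate of d loss / d theta.

    Uses only function evaluations, so it works even when loss_fn is
    discontinuous or a black box. This is the basic estimator, not the
    paper's noise-reuse online variant.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    grad = np.zeros_like(theta)
    for _ in range(num_pairs):
        eps = rng.standard_normal(theta.shape)
        grad += (loss_fn(theta + sigma * eps) - loss_fn(theta - sigma * eps)) / (2 * sigma) * eps
    return grad / num_pairs

if __name__ == "__main__":
    # Stand-in blackbox loss with a discontinuity at theta[0] = 0.
    loss = lambda th: float(np.sum(th ** 2) + (th[0] > 0))
    theta = np.array([0.5, -1.0])
    print(es_gradient(loss, theta))
```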

Two Sides of Meta-Learning Evaluation: In vs. Out of Distribution

1 code implementation NeurIPS 2021 Amrith Setlur, Oscar Li, Virginia Smith

We categorize meta-learning evaluation into two settings: $\textit{in-distribution}$ [ID], in which the train and test tasks are sampled $\textit{iid}$ from the same underlying task distribution, and $\textit{out-of-distribution}$ [OOD], in which they are not.

Few-Shot Learning, Learning Theory +2
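
A minimal sketch of the ID/OOD distinction described above, under the common assumption that few-shot tasks are built from a pool of classes: ID test tasks are drawn from the same pool as meta-training tasks, while OOD test tasks use held-out classes. This is an illustrative construction, not the paper's benchmark protocol.

```python
import random

def split_tasks(classes, num_ood_classes=20, seed=0):
    """Split the class pool so meta-test tasks can be drawn ID or OOD.

    ID: meta-train and meta-test tasks are sampled iid from the same class pool.
    OOD: meta-test tasks use only classes never seen during meta-training.
    """
    rng = random.Random(seed)
    classes = list(classes)
    rng.shuffle(classes)
    ood_pool = classes[:num_ood_classes]
    shared_pool = classes[num_ood_classes:]
    return shared_pool, ood_pool

def sample_task(pool, n_way=5, seed=None):
    rng = random.Random(seed)
    return rng.sample(pool, n_way)  # the n_way classes defining one few-shot task

if __name__ == "__main__":
    shared, ood = split_tasks(range(100))
    id_test_task = sample_task(shared, seed=1)   # same distribution as training tasks
    ood_test_task = sample_task(ood, seed=2)     # disjoint classes -> distribution shift
    print(id_test_task, ood_test_task)
```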

Is Support Set Diversity Necessary for Meta-Learning?

no code implementations 28 Nov 2020 Amrith Setlur, Oscar Li, Virginia Smith

Meta-learning is a popular framework for learning with limited data in which an algorithm is produced by training over multiple few-shot learning tasks.

Few-Shot Learning
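
As background for the support-set question, here is a minimal sketch of standard episodic N-way K-shot sampling, which is where support-set composition (and hence its diversity) enters meta-training. The data layout and sizes are assumptions for illustration.

```python
import random

def sample_episode(data_by_class, n_way=5, k_shot=1, q_queries=15, seed=None):
    """Sample one few-shot episode: a support set and a query set.

    data_by_class maps class label -> list of examples. This is the standard
    episodic sampling recipe, shown only to locate where support-set diversity
    (which classes and examples are drawn) enters meta-training.
    """
    rng = random.Random(seed)
    classes = rng.sample(list(data_by_class), n_way)
    support, query = [], []
    for label in classes:
        examples = rng.sample(data_by_class[label], k_shot + q_queries)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query

if __name__ == "__main__":
    toy = {c: [f"img_{c}_{i}" for i in range(30)] for c in range(20)}
    support, query = sample_episode(toy, seed=0)
    print(len(support), len(query))  # 5 and 75
```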

Interpretable Image Recognition with Hierarchical Prototypes

1 code implementation 25 Jun 2019 Peter Hase, Chaofan Chen, Oscar Li, Cynthia Rudin

Hence, we may find distinct explanations for the prediction an image receives at each level of the taxonomy.

General Classification
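
The abstract's point is that a separate prototype-based explanation is available at every level of the class taxonomy. The NumPy sketch below illustrates only that structure: per-level prototype similarities producing per-level logits. Shapes and the similarity function are assumptions, not the paper's architecture or training objective.

```python
import numpy as np

def prototype_logits(feature, prototypes, weights):
    """Similarity of an image embedding to each prototype -> class logits."""
    dists = np.sum((prototypes - feature) ** 2, axis=1)
    sims = np.log((dists + 1.0) / (dists + 1e-4))  # high when close to a prototype
    return weights @ sims, sims

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feature = rng.standard_normal(64)
    # One prototype set and classifier per taxonomy level (illustrative sizes):
    levels = {
        "coarse": (rng.standard_normal((4, 64)), rng.standard_normal((2, 4))),
        "fine": (rng.standard_normal((10, 64)), rng.standard_normal((5, 10))),
    }
    for name, (protos, w) in levels.items():
        logits, sims = prototype_logits(feature, protos, w)
        # The per-level similarities `sims` supply a distinct explanation
        # at this level of the taxonomy.
        print(name, logits.shape, int(np.argmax(sims)))
```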

This Looks Like That: Deep Learning for Interpretable Image Recognition

3 code implementations NeurIPS 2019 Chaofan Chen, Oscar Li, Chaofan Tao, Alina Jade Barnett, Jonathan Su, Cynthia Rudin

In this work, we introduce a deep network architecture, the prototypical part network (ProtoPNet), that reasons in a similar way: the network dissects the image by finding prototypical parts, and combines evidence from the prototypes to make a final classification.

General Classification, Image Classification
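
A minimal PyTorch sketch of the reasoning pattern described above: compare every convolutional feature patch to learned prototype parts, keep the best match per prototype, and combine that evidence linearly into class logits. The backbone is replaced by random features, and the sizes and similarity function are illustrative choices, not the released ProtoPNet code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtoPartHead(nn.Module):
    """Prototype-part reasoning head (illustrative sketch, not the released code)."""

    def __init__(self, channels=128, num_prototypes=20, num_classes=5):
        super().__init__()
        # Each prototype is a 1x1 "part" in feature space.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, channels, 1, 1))
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, feats):                    # feats: (B, C, H, W) from a CNN backbone
        # Squared L2 distance between every spatial patch and every prototype.
        f2 = (feats ** 2).sum(dim=1, keepdim=True)                      # (B, 1, H, W)
        p2 = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
        fp = F.conv2d(feats, self.prototypes)                           # (B, P, H, W)
        dists = F.relu(f2 - 2 * fp + p2)
        sims = torch.log((dists + 1) / (dists + 1e-4))                  # high = patch resembles prototype
        evidence = F.max_pool2d(sims, kernel_size=sims.shape[-2:]).flatten(1)  # best match per prototype
        return self.classifier(evidence)                                # class logits

if __name__ == "__main__":
    feats = torch.randn(2, 128, 7, 7)            # stand-in for backbone features
    print(ProtoPartHead()(feats).shape)          # torch.Size([2, 5])
```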

Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions

5 code implementations 13 Oct 2017 Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin

This architecture contains an autoencoder and a special prototype layer, where each unit of that layer stores a weight vector that resembles an encoded training input.

General Classification
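
A compact PyTorch sketch of the architecture the abstract describes: an autoencoder whose latent code is compared by squared distance to prototype vectors living in the same latent space, with a linear layer mapping those distances to class logits. Layer sizes and the MLP encoder are assumptions for illustration, not the paper's exact model or losses.

```python
import torch
import torch.nn as nn

class PrototypeAutoencoder(nn.Module):
    """Autoencoder + prototype layer sketch (illustrative, not the paper's exact model)."""

    def __init__(self, in_dim=784, latent_dim=32, num_prototypes=15, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))
        # Each prototype is a learnable vector in the latent space; training is meant
        # to push prototypes toward encoded training inputs so they can be decoded and inspected.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, latent_dim))
        self.classifier = nn.Linear(num_prototypes, num_classes)

    def forward(self, x):
        z = self.encoder(x)                             # (B, latent_dim)
        recon = self.decoder(z)                         # reconstruction for the autoencoder loss
        dists = torch.cdist(z, self.prototypes) ** 2    # (B, num_prototypes)
        logits = self.classifier(dists)                 # classify from prototype distances
        return logits, recon, dists

if __name__ == "__main__":
    model = PrototypeAutoencoder()
    logits, recon, dists = model(torch.randn(4, 784))
    print(logits.shape, recon.shape, dists.shape)
```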
