1 code implementation • 22 Feb 2024 • Xingyou Song, Oscar Li, Chansoo Lee, Bangding Yang, Daiyi Peng, Sagi Perel, Yutian Chen
Over the broad landscape of experimental design, regression has been a powerful tool for accurately predicting the outcome metrics of a system or model given a set of parameters, but it has traditionally been restricted to methods applicable only to a specific task.
1 code implementation • NeurIPS 2023 • Oscar Li, James Harrison, Jascha Sohl-Dickstein, Virginia Smith, Luke Metz
Unrolled computation graphs are prevalent throughout machine learning but present challenges to automatic differentiation (AD) gradient estimation methods when their loss functions exhibit extreme local sensitivity, discontinuity, or blackbox characteristics.
1 code implementation • NeurIPS 2021 • Amrith Setlur, Oscar Li, Virginia Smith
We categorize meta-learning evaluation into two settings: $\textit{in-distribution}$ [ID], in which the train and test tasks are sampled $\textit{iid}$ from the same underlying task distribution, and $\textit{out-of-distribution}$ [OOD], in which they are not.
2 code implementations • ICLR 2022 • Oscar Li, Jiankai Sun, Xin Yang, Weihao Gao, Hongyi Zhang, Junyuan Xie, Virginia Smith, Chong Wang
Two-party split learning is a popular technique for learning a model across feature-partitioned data.
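A minimal sketch of the feature-partitioned setup this paper studies (party names, shapes, and the linear heads are illustrative assumptions, not the authors' method): each party holds a disjoint subset of features for the same examples, computes a local embedding, and only the embeddings are combined by the party holding the labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same 4 examples, feature-partitioned across two parties (shapes are assumptions).
x_a = rng.normal(size=(4, 3))  # party A's 3 features per example
x_b = rng.normal(size=(4, 5))  # party B's 5 features per example
w_a = rng.normal(size=(3, 2))  # party A's local model
w_b = rng.normal(size=(5, 2))  # party B's local model

# Each party computes its embedding locally and sends only that, not raw features.
h = x_a @ w_a + x_b @ w_b      # combined at the label-holding party

# The label party applies its head to produce predictions.
probs = 1 / (1 + np.exp(-(h @ rng.normal(size=2))))
print(probs.shape)
```

During training, gradients with respect to the embeddings flow back to each party, which is exactly the channel whose privacy properties work in this area examines.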
no code implementations • 28 Nov 2020 • Amrith Setlur, Oscar Li, Virginia Smith
Meta-learning is a popular framework for learning with limited data in which an algorithm is produced by training over multiple few-shot learning tasks.
1 code implementation • 25 Jun 2019 • Peter Hase, Chaofan Chen, Oscar Li, Cynthia Rudin
Hence, we may find distinct explanations for the prediction an image receives at each level of the taxonomy.
3 code implementations • NeurIPS 2019 • Chaofan Chen, Oscar Li, Chaofan Tao, Alina Jade Barnett, Jonathan Su, Cynthia Rudin
In this work, we introduce a deep network architecture -- the prototypical part network (ProtoPNet) -- that reasons in a similar way: the network dissects the image by finding prototypical parts, and combines evidence from the prototypes to make a final classification.
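The prototype-based reasoning described above can be sketched as follows (a toy NumPy version, not the authors' implementation; the similarity transform and all shapes are assumptions): each prototype activates on its closest image patch, and a linear layer combines prototype activations into class evidence.

```python
import numpy as np

def protopnet_logits(patch_features, prototypes, class_weights):
    """Toy sketch of ProtoPNet-style reasoning.

    patch_features: (num_patches, d) encoded image patches
    prototypes:     (num_prototypes, d) learned prototypical parts
    class_weights:  (num_classes, num_prototypes) evidence-combination weights
    """
    # Squared L2 distance from every patch to every prototype.
    dists = ((patch_features[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    # A prototype's activation depends on its best (closest) patch match.
    min_dists = dists.min(axis=0)                    # (num_prototypes,)
    similarities = np.log((min_dists + 1) / (min_dists + 1e-4))
    # Combine prototype evidence linearly into class logits.
    return class_weights @ similarities

rng = np.random.default_rng(0)
logits = protopnet_logits(rng.normal(size=(49, 8)),   # 7x7 grid of patches
                          rng.normal(size=(6, 8)),    # 6 prototypes
                          rng.normal(size=(3, 6)))    # 3 classes
print(logits.shape)
```

Because each logit decomposes into per-prototype contributions, the model can point at the training patches its prototypes resemble, which is the source of its interpretability.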
5 code implementations • 13 Oct 2017 • Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin
This architecture contains an autoencoder and a special prototype layer, where each unit of that layer stores a weight vector that resembles an encoded training input.
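The prototype layer described above can be sketched in a few lines (a hypothetical illustration, not the paper's code): each unit holds a weight vector living in the autoencoder's latent space, and its output is the squared distance between that vector and the encoded input.

```python
import numpy as np

def prototype_layer(z, prototype_vectors):
    """Each unit stores a weight vector in the latent space and outputs its
    squared L2 distance to the encoded input z (shapes are assumptions)."""
    return ((z[None, :] - prototype_vectors) ** 2).sum(axis=1)

z = np.array([1.0, 0.0])                         # encoder output for one input
protos = np.array([[1.0, 0.0], [0.0, 1.0]])       # two prototype units
print(prototype_layer(z, protos))  # [0. 2.]
```

Because each prototype vector is decoded back through the autoencoder, it can be visualized as an input-like example, which is what makes the distances interpretable.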