Search Results for author: Yoonho Lee

Found 22 papers, 13 papers with code

Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning

1 code implementation • 22 Feb 2024 • Johnathan Xie, Yoonho Lee, Annie S. Chen, Chelsea Finn

Self-supervised learning excels in learning representations from large amounts of unlabeled data, demonstrating success across multiple data modalities.

Property Prediction • Self-Supervised Learning

Clarify: Improving Model Robustness With Natural Language Corrections

no code implementations • 6 Feb 2024 • Yoonho Lee, Michelle S. Lam, Helena Vasconcelos, Michael S. Bernstein, Chelsea Finn

Additionally, we use Clarify to find and rectify 31 novel hard subpopulations in the ImageNet dataset, improving minority-split accuracy from 21.1% to 28.7%.

Misconceptions

AutoFT: Learning an Objective for Robust Fine-Tuning

no code implementations • 18 Jan 2024 • Caroline Choi, Yoonho Lee, Annie Chen, Allan Zhou, Aditi Raghunathan, Chelsea Finn

Given a task, AutoFT searches for a fine-tuning procedure that enhances out-of-distribution (OOD) generalization.

Confidence-Based Model Selection: When to Take Shortcuts for Subpopulation Shifts

no code implementations • 19 Jun 2023 • Annie S. Chen, Yoonho Lee, Amrith Setlur, Sergey Levine, Chelsea Finn

Effective machine learning models learn both robust features that directly determine the outcome of interest (e.g., an object with wheels is more likely to be a car) and shortcut features (e.g., an object on a road is more likely to be a car).

Model Selection
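
As a minimal illustration of the confidence-based selection idea in the abstract above, the sketch below picks, per test example, the prediction of whichever candidate model is most confident. The softmax-confidence criterion is an assumption for illustration, not necessarily the paper's exact procedure.

    import torch
    import torch.nn.functional as F

    def select_by_confidence(models, x):
        # Per-model softmax probabilities; confidences shape (num_models, batch)
        probs = [F.softmax(m(x), dim=-1) for m in models]
        confs = torch.stack([p.max(dim=-1).values for p in probs])
        best = confs.argmax(dim=0)  # most confident model per example
        preds = torch.stack([p.argmax(dim=-1) for p in probs])
        return preds.gather(0, best.unsqueeze(0)).squeeze(0)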

Conservative Prediction via Data-Driven Confidence Minimization

1 code implementation • 8 Jun 2023 • Caroline Choi, Fahim Tajwar, Yoonho Lee, Huaxiu Yao, Ananya Kumar, Chelsea Finn

Taking inspiration from this result, we present data-driven confidence minimization (DCM), which minimizes confidence on an uncertainty dataset containing examples that the model is likely to misclassify at test time.
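
A minimal sketch of the objective described above, assuming a PyTorch setup: standard cross-entropy on labeled data plus a term that pushes predictions on the uncertainty dataset toward the uniform distribution. The uniform-KL form of the confidence penalty and the weight lam are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def dcm_loss(model, x_labeled, y, x_uncertain, lam=1.0):
        ce = F.cross_entropy(model(x_labeled), y)
        log_probs = F.log_softmax(model(x_uncertain), dim=-1)
        # Matching the uniform distribution minimizes confidence
        # (equivalently, maximizes predictive entropy) on the uncertainty set.
        uniform = torch.full_like(log_probs, 1.0 / log_probs.size(-1))
        conf_term = F.kl_div(log_probs, uniform, reduction="batchmean")
        return ce + lam * conf_term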

Project and Probe: Sample-Efficient Domain Adaptation by Interpolating Orthogonal Features

no code implementations • 10 Feb 2023 • Annie S. Chen, Yoonho Lee, Amrith Setlur, Sergey Levine, Chelsea Finn

Transfer learning with a small amount of target data is an effective and common approach to adapting a pre-trained model to distribution shifts.

Domain Adaptation • Transfer Learning

DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature

2 code implementations • 26 Jan 2023 • Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D. Manning, Chelsea Finn

In this paper, we identify a property of the structure of an LLM's probability function that is useful for such detection.

Language Modelling • Text Detection
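
The detection statistic can be sketched as a curvature score: the candidate text's log-likelihood under the scoring model minus the mean log-likelihood of lightly perturbed rewrites (the paper uses mask-filling rewrites). Here log_prob and perturb are assumed helper functions; a large positive score suggests machine-generated text.

    def curvature_score(text, log_prob, perturb, n_perturbations=20):
        # log_prob(text): log-likelihood under the scoring model
        # perturb(text): a slightly rewritten version of the text
        original = log_prob(text)
        perturbed = [log_prob(perturb(text)) for _ in range(n_perturbations)]
        return original - sum(perturbed) / len(perturbed)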

Wild-Time: A Benchmark of in-the-Wild Distribution Shift over Time

1 code implementation • 25 Nov 2022 • Huaxiu Yao, Caroline Choi, Bochuan Cao, Yoonho Lee, Pang Wei Koh, Chelsea Finn

Temporal shifts -- distribution shifts arising from the passage of time -- often occur gradually and have the additional structure of timestamp metadata.

Continual Learning • Domain Generalization • +3

Surgical Fine-Tuning Improves Adaptation to Distribution Shifts

1 code implementation • 20 Oct 2022 • Yoonho Lee, Annie S. Chen, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, Chelsea Finn

A common approach to transfer learning under distribution shift is to fine-tune the last few layers of a pre-trained model, preserving learned features while also adapting to the new task.

Transfer Learning
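
In PyTorch terms, this kind of selective tuning can be sketched as freezing all parameters except a chosen block. Selecting the block by name prefix is an illustrative assumption; the paper's finding is that which block to tune depends on the type of distribution shift.

    def surgical_finetune_params(model, trainable_prefixes=("layer4", "fc")):
        # Freeze everything except parameters whose names match a chosen prefix.
        for name, param in model.named_parameters():
            param.requires_grad = name.startswith(trainable_prefixes)
        return [p for p in model.parameters() if p.requires_grad]

    # Pass the returned list to an optimizer, e.g. torch.optim.SGD(params, lr=1e-3).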

On Divergence Measures for Bayesian Pseudocoresets

1 code implementation • 12 Oct 2022 • Balhae Kim, JungWon Choi, Seanie Lee, Yoonho Lee, Jung-Woo Ha, Juho Lee

Finally, we propose a novel Bayesian pseudocoreset algorithm based on minimizing forward KL divergence.

Bayesian Inference • Image Classification
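
Schematically, with notation assumed here rather than taken from the paper, the forward-KL objective selects pseudocoreset points u so that the posterior conditioned on u matches the full-data posterior:

    \min_{u} \; D_{\mathrm{KL}}\!\left( \pi_{x}(\theta) \,\|\, \pi_{u}(\theta) \right)

where \pi_{x} is the posterior given the full dataset x and \pi_{u} is the posterior given the pseudocoreset u; the forward direction places the full-data posterior in the first argument.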

Diversify and Disambiguate: Learning From Underspecified Data

1 code implementation • 7 Feb 2022 • Yoonho Lee, Huaxiu Yao, Chelsea Finn

Many datasets are underspecified: there exist multiple equally viable solutions to a given task.

Image Classification

Diversity Matters When Learning From Ensembles

no code implementations • NeurIPS 2021 • Giung Nam, Jongmin Yoon, Yoonho Lee, Juho Lee

We propose a simple approach for reducing this gap, i.e., making the distilled performance close to that of the full ensemble.

Image Classification

Amortized Probabilistic Detection of Communities in Graphs

2 code implementations • 29 Oct 2020 • Yueqi Wang, Yoonho Lee, Pallab Basu, Juho Lee, Yee Whye Teh, Liam Paninski, Ari Pakman

While graph neural networks (GNNs) have been successful in encoding graph structures, existing GNN-based methods for community detection are limited by requiring knowledge of the number of communities in advance, in addition to lacking a proper probabilistic formulation to handle uncertainty.

Clustering • Community Detection

Neural Complexity Measures

1 code implementation • NeurIPS 2020 • Yoonho Lee, Juho Lee, Sung Ju Hwang, Eunho Yang, Seungjin Choi

While various complexity measures for deep neural networks exist, specifying an appropriate measure capable of predicting and explaining generalization in deep networks has proven challenging.

Meta-Learning • Regression

Bootstrapping Neural Processes

1 code implementation • NeurIPS 2020 • Juho Lee, Yoonho Lee, Jungtaek Kim, Eunho Yang, Sung Ju Hwang, Yee Whye Teh

While this "data-driven" way of learning stochastic processes has proven to handle various types of data, NPs still rely on the assumption that uncertainty in stochastic processes is modeled by a single latent variable, which potentially limits flexibility.

Deep Amortized Clustering

no code implementations • ICLR 2020 • Juho Lee, Yoonho Lee, Yee Whye Teh

We propose Deep Amortized Clustering (DAC), a neural architecture that learns to cluster datasets efficiently using a few forward passes.

Clustering

Discrete Infomax Codes for Supervised Representation Learning

no code implementations • 28 May 2019 • Yoonho Lee, Wonjae Kim, Wonpyo Park, Seungjin Choi

In this paper we present a model that produces Discrete InfoMax Codes (DIMCO); we learn a probabilistic encoder that yields k-way d-dimensional codes associated with input data.

Meta-Learning • Metric Learning • +2
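
A minimal sketch of an encoder with the output structure named in the abstract above, assuming a PyTorch setup: a linear head emits d independent k-way categorical distributions, and the discrete code is the per-factor argmax. The linear head and the argmax readout are illustrative assumptions.

    import torch.nn as nn

    class DiscreteCodeEncoder(nn.Module):
        def __init__(self, in_dim, d=8, k=16):
            super().__init__()
            self.d, self.k = d, k
            self.head = nn.Linear(in_dim, d * k)

        def forward(self, x):  # x: (batch, in_dim)
            logits = self.head(x).view(-1, self.d, self.k)
            probs = logits.softmax(dim=-1)   # d independent k-way distributions
            codes = probs.argmax(dim=-1)     # (batch, d) discrete code
            return codes, probs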

Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks

9 code implementations • 1 Oct 2018 • Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, Yee Whye Teh

Many machine learning tasks such as multiple instance learning, 3D shape recognition, and few-shot image classification are defined on sets of instances.

3D Shape Recognition • Few-Shot Image Classification • +1
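
The core mechanism can be sketched as attention pooling with a learned seed query, in the spirit of the Set Transformer's pooling block: because attention aggregates over set elements with a softmax, the output is invariant to their order. Dimensions and the single-seed choice below are illustrative assumptions.

    import torch
    import torch.nn as nn

    class AttentionPool(nn.Module):
        def __init__(self, dim, num_heads=4):  # dim must be divisible by num_heads
            super().__init__()
            self.seed = nn.Parameter(torch.randn(1, 1, dim))
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, x):  # x: (batch, set_size, dim)
            seed = self.seed.expand(x.size(0), -1, -1)
            pooled, _ = self.attn(seed, x, x)  # learned seed attends over the set
            return pooled.squeeze(1)           # (batch, dim), order-invariant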

Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace

1 code implementation • ICML 2018 • Yoonho Lee, Seungjin Choi

Our primary contribution is the MT-net, which enables the meta-learner to learn, on each layer's activation space, a subspace in which the task-specific learner performs gradient descent.

Few-Shot Image Classification Meta-Learning
