1 code implementation • 4 Nov 2022 • Dong Hoon Lee, Sungik Choi, Hyunwoo Kim, Sae-Young Chung
This paper proposes Mutual Information Regularized Assignment (MIRA), a pseudo-labeling algorithm for unsupervised representation learning inspired by information maximization.
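A minimal sketch of the information-maximization idea behind MIRA's regularizer, under assumed notation: soft cluster assignments should be confident per example while the batch marginal stays balanced. The function name and the batch-level marginal estimate are illustrative, not the paper's exact objective or optimization scheme.

```python
import numpy as np

def mi_regularized_objective(probs, eps=1e-12):
    """Information-maximization score on soft cluster assignments.

    probs: (N, K) per-example class probabilities.
    Returns H(mean assignment) - mean per-example entropy, which is large
    when individual assignments are confident but the marginal over the
    batch remains close to uniform.
    """
    marginal = probs.mean(axis=0)                                     # (K,)
    h_marginal = -np.sum(marginal * np.log(marginal + eps))
    h_conditional = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    return h_marginal - h_conditional

# toy usage: confident, balanced assignments score higher
rng = np.random.default_rng(0)
logits = rng.normal(size=(256, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(mi_regularized_objective(probs))
```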
2 code implementations • 8 Jul 2022 • Minguk Jang, Sae-Young Chung, Hye Won Chung
To overcome this limitation, we propose a novel test-time adaptation method, Test-time Adaptation via Self-Training with nearest neighbor information (TAST), which (1) adds trainable adaptation modules on top of the trained feature extractor; (2) defines a pseudo-label distribution for the test data using nearest neighbor information; (3) trains these modules for only a few steps at test time to match the nearest-neighbor-based pseudo-label distribution and a prototype-based class distribution for the test data; and (4) predicts the label of each test example from the average of the class distributions predicted by these modules.
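A simplified sketch of that four-step procedure, assuming precomputed embeddings and a labeled support set for prototypes; TAST itself uses several adaptation modules and averages their predictions, whereas this sketch keeps a single module for brevity (requires PyTorch ≥ 1.10 for soft-target cross entropy).

```python
import torch
import torch.nn.functional as F

def tast_style_adaptation(test_feats, support_feats, support_labels,
                          num_classes, k=5, steps=3, lr=1e-3):
    """Illustrative test-time adaptation loop in the spirit of TAST."""
    adapter = torch.nn.Linear(test_feats.shape[1], 64)   # (1) trainable module on the frozen extractor
    opt = torch.optim.Adam(adapter.parameters(), lr=lr)

    # class prototypes from the support embeddings
    prototypes = torch.stack([support_feats[support_labels == c].mean(0)
                              for c in range(num_classes)])

    # (2) nearest-neighbor pseudo-label distribution for each test example
    sims = F.normalize(test_feats, dim=1) @ F.normalize(support_feats, dim=1).T
    nn_labels = support_labels[sims.topk(k, dim=1).indices]          # (N, k)
    pseudo = F.one_hot(nn_labels, num_classes).float().mean(dim=1)   # (N, C)

    for _ in range(steps):  # (3) only a few updates at test time
        logits = -torch.cdist(adapter(test_feats), adapter(prototypes))  # prototype-based class distribution
        loss = F.cross_entropy(logits, pseudo)                           # match it to the NN pseudo-labels
        opt.zero_grad(); loss.backward(); opt.step()

    with torch.no_grad():                                                # (4) final class distribution
        logits = -torch.cdist(adapter(test_feats), adapter(prototypes))
    return logits.softmax(dim=1)
```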
no code implementations • 8 Jul 2022 • Minguk Jang, Sae-Young Chung
We propose Few-Example Clustering (FEC), a novel algorithm that uses contrastive learning to cluster a small number of examples.
1 code implementation • 22 Jun 2021 • Dong Hoon Lee, Sae-Young Chung
We propose unsupervised embedding adaptation for the downstream few-shot classification task.
1 code implementation • NeurIPS 2021 • Suyoung Lee, Sae-Young Chung
By training a policy on mixture tasks along with original training tasks, LDM allows the agent to prepare for unseen test tasks during training and prevents the agent from overfitting the training tasks.
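A minimal sketch of the task-mixing idea, under the assumption that each training task is summarized by a learned latent vector: imaginary tasks are formed by convexly mixing training-task latents, and the policy is trained on both. The latent representation and mixing weights here are illustrative stand-ins for LDM's learned latent dynamics.

```python
import numpy as np

def sample_mixture_task_latent(task_latents, rng):
    """Form an imaginary task by convexly mixing training-task latents.

    task_latents: (T, D) array, one latent vector per training task.
    The returned latent can condition the dynamics/reward model, exposing
    the policy to tasks outside the original training set.
    """
    weights = rng.dirichlet(alpha=np.ones(task_latents.shape[0]))
    return weights @ task_latents

rng = np.random.default_rng(0)
train_latents = rng.normal(size=(8, 16))   # 8 training tasks, 16-dim latents
imaginary_task = sample_mixture_task_latent(train_latents, rng)
```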
no code implementations • ICLR 2020 • Sungik Choi, Sae-Young Chung
Conventional out-of-distribution (OOD) detection schemes based on variational autoencoders or Random Network Distillation (RND) have been observed to assign lower uncertainty to OOD inputs than to the target distribution.
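For context, a generic RND-style uncertainty score (not this paper's proposed detector): the predictor is trained only on in-distribution data, and the prediction error against a frozen random target network is read as uncertainty; the observation above is that this error can end up lower on OOD inputs.

```python
import torch

# Frozen random target network and a trainable predictor; the RND-style
# "uncertainty" of an input is the prediction error between the two.
target = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16))
predictor = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16))
for p in target.parameters():
    p.requires_grad_(False)   # target stays fixed; only the predictor is trained

def rnd_uncertainty(x):
    """Per-example squared prediction error, used as an uncertainty score."""
    return ((predictor(x) - target(x)) ** 2).mean(dim=1)
```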
no code implementations • ICLR 2020 • Jisoo Lee, Sae-Young Chung
Since deep neural networks are over-parameterized, they can memorize noisy examples.
no code implementations • 3 Apr 2019 • Kyung-Su Kim, Sae-Young Chung
We consider the problem of sparse phase retrieval from Fourier transform magnitudes to recover the $k$-sparse signal vector and its support $\mathcal{T}$.
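A small sketch of the measurement model only, with illustrative dimensions: a $k$-sparse signal is drawn, and the observations are the magnitudes of its Fourier transform; recovery must infer both the support $\mathcal{T}$ and the values on it from these magnitudes alone. The recovery algorithm itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 64, 4
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)   # the support T
x0[support] = rng.normal(size=k)

magnitudes = np.abs(np.fft.fft(x0))              # observed Fourier-transform magnitudes
# phase information is lost: only `magnitudes` is available for recovery
```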
no code implementations • 1 Apr 2019 • Kyung-Su Kim, Sae-Young Chung
We consider the classical sparse regression problem of recovering a sparse signal $x_0$ given a measurement vector $y = \Phi x_0+w$.
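To make the measurement model $y = \Phi x_0 + w$ concrete, here is a toy instance recovered with a standard $\ell_1$ (Lasso) baseline; this is a generic baseline for the setup, not the estimator proposed in the paper, and the dimensions and noise level are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                       # signal length, measurements, sparsity
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = Phi @ x0 + 0.01 * rng.normal(size=m)   # y = Phi x0 + w

x_hat = Lasso(alpha=0.01, max_iter=10000).fit(Phi, y).coef_
print(np.linalg.norm(x_hat - x0))          # recovery error of the l1 baseline
```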
1 code implementation • ICLR 2018 • Su Young Lee, Sungik Choi, Sae-Young Chung
We propose Episodic Backward Update (EBU), a novel deep reinforcement learning algorithm with direct value propagation.
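A simplified backward sweep illustrating the direct-value-propagation idea: targets are computed from the end of an episode toward its start, so a delayed reward reaches early states in a single sweep. EBU's actual update works on sampled episodes of transitions and uses a diffusion coefficient inside the max over actions; the blending below is an illustrative simplification.

```python
import numpy as np

def backward_targets(rewards, next_max_q, gamma=0.99, beta=0.5):
    """Compute Q-learning targets by sweeping one episode backward.

    rewards[t]: reward at step t; next_max_q[t]: bootstrapped max-Q estimate
    for the next state. Earlier targets blend the bootstrap with the target
    already propagated from the future (diffusion coefficient `beta`).
    """
    T = len(rewards)
    targets = np.zeros(T)
    propagated = 0.0
    for t in reversed(range(T)):
        bootstrap = (1 - beta) * next_max_q[t] + beta * propagated
        targets[t] = rewards[t] + gamma * (0.0 if t == T - 1 else bootstrap)
        propagated = targets[t]
    return targets

# delayed terminal reward propagates back to the first step in one sweep
print(backward_targets(rewards=[0.0, 0.0, 0.0, 1.0],
                       next_max_q=[0.1, 0.1, 0.1, 0.0]))
```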