1 code implementation • 26 May 2023 • Daeho Um, Jiwoong Park, Seulki Park, Jin Young Choi
To overcome this limitation, we introduce a novel concept of channel-wise confidence in a node feature, which is assigned to each imputed channel feature of a node to reflect the certainty of the imputation.
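A minimal sketch of the channel-wise confidence idea (not the authors' implementation): each imputed channel of a node gets a confidence that decays with the hop distance to the nearest node where that channel is actually observed. The decay factor `alpha` and the BFS-based distance are illustrative assumptions.

```python
from collections import deque
import numpy as np

def channel_confidence(adj, observed_mask, alpha=0.5):
    """adj: {node: [neighbors]}, observed_mask: (N, C) bool array.
    Returns an (N, C) array of per-channel confidences in [0, 1]."""
    n_nodes, n_channels = observed_mask.shape
    conf = np.zeros((n_nodes, n_channels))
    for c in range(n_channels):
        # multi-source BFS from every node where channel c is observed
        dist = {v: 0 for v in range(n_nodes) if observed_mask[v, c]}
        queue = deque(dist)
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        for v in range(n_nodes):
            # observed channels get confidence 1; unreachable nodes get ~0
            conf[v, c] = alpha ** dist.get(v, n_nodes)
    return conf

# toy usage: a 4-node path graph with 2 feature channels
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
mask = np.array([[True, False], [False, False], [False, True], [False, False]])
print(channel_confidence(adj, mask))
```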
1 code implementation • 21 Apr 2023 • Seulki Park, Daeho Um, Hajung Yoon, Sanghyuk Chun, Sangdoo Yun, Jin Young Choi
In this paper, we propose a robustness benchmark for image-text matching models to assess their vulnerabilities.
2 code implementations • CVPR 2022 • Jihwan Bang, Hyunseo Koh, Seulki Park, Hwanjun Song, Jung-Woo Ha, Jonghyun Choi
A large body of continual learning (CL) methods, however, assumes data streams with clean labels, and online learning scenarios under noisy data streams remain underexplored.
1 code implementation • CVPR 2022 • Jongin Lim, Sangdoo Yun, Seulki Park, Jin Young Choi
In this paper, we propose Hypergraph-Induced Semantic Tuplet (HIST) loss for deep metric learning that leverages the multilateral semantic relations of multiple samples to multiple classes via hypergraph modeling.
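A minimal sketch of how a batch of samples can be modeled as a hypergraph whose hyperedges are classes: each sample gets a soft incidence in every class hyperedge. The class prototypes and softmax-based incidence below are illustrative assumptions, not the exact HIST formulation.

```python
import torch
import torch.nn.functional as F

def soft_incidence(embeddings, prototypes, temperature=0.1):
    """embeddings: (B, D) L2-normalized sample features,
    prototypes: (C, D) L2-normalized class prototypes.
    Returns a (B, C) soft hypergraph incidence matrix."""
    sim = embeddings @ prototypes.t()            # cosine similarities
    return F.softmax(sim / temperature, dim=1)   # each row sums to 1

# toy usage: 8 samples related to 4 class hyperedges
emb = F.normalize(torch.randn(8, 16), dim=1)
proto = F.normalize(torch.randn(4, 16), dim=1)
H = soft_incidence(emb, proto)
print(H.shape)  # torch.Size([8, 4])
```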
1 code implementation • CVPR 2022 • Seulki Park, Youngkyu Hong, Byeongho Heo, Sangdoo Yun, Jin Young Choi
The problem with class-imbalanced data is that the generalization performance of the classifier deteriorates due to the lack of data from minority classes.
Ranked #20 on Long-tail Learning on ImageNet-LT
1 code implementation • ICCV 2021 • Seulki Park, Jongin Lim, Younghan Jeon, Jin Young Choi
In this paper, we propose a balancing training method to address problems in imbalanced data learning.
Ranked #46 on Long-tail Learning on CIFAR-10-LT (ρ=10)
no code implementations • 29 Sep 2021 • Jihwan Bang, Hyunseo Koh, Seulki Park, Hwanjun Song, Jung-Woo Ha, Jonghyun Choi
Specifically, we argue for the importance of both diversity and purity of examples in the episodic memory of continual learning models.
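A minimal sketch (illustrative only, not the paper's algorithm) of scoring memory candidates by both criteria: diversity as the distance from features already stored in memory, and purity as the model's confidence that the possibly noisy label is correct. The specific scores and their weighting are assumptions for illustration.

```python
import numpy as np

def memory_scores(cand_feats, cand_probs, cand_labels, mem_feats, w=0.5):
    """cand_feats: (N, D) candidate features, cand_probs: (N, C) softmax outputs,
    cand_labels: (N,) given labels, mem_feats: (M, D) features already in memory."""
    # diversity: distance to the nearest feature already stored in memory
    dists = np.linalg.norm(cand_feats[:, None, :] - mem_feats[None], axis=-1)
    diversity = dists.min(axis=1)
    # purity: predicted probability of the given (possibly noisy) label
    purity = cand_probs[np.arange(len(cand_labels)), cand_labels]
    return w * diversity + (1 - w) * purity  # higher score = better to keep

# toy usage
rng = np.random.default_rng(0)
scores = memory_scores(rng.normal(size=(5, 8)),
                       rng.dirichlet(np.ones(3), size=5),
                       rng.integers(0, 3, size=5),
                       rng.normal(size=(10, 8)))
print(scores.round(3))
```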
no code implementations • 14 Jun 2021 • Seulki Park, Hwanjun Song, Daeho Um, Dae Ung Jo, Sangdoo Yun, Jin Young Choi
Deep neural networks can easily overfit even to noisy labels due to their high capacity, which degrades the generalization performance of the model.