1 code implementation • ICCV 2023 • Seong Min Kye, Kwanghee Choi, Hyeongmin Byun, Buru Chang
Active learning (AL) aims to select the most useful data samples from an unlabeled data pool and annotate them to expand the labeled dataset under a limited budget.
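As a concrete illustration of the budgeted selection step (a generic sketch, not this paper's specific acquisition strategy), here is a minimal entropy-based selector: the `budget` highest-entropy pool samples are chosen for annotation. All names and the acquisition function are illustrative.

```python
import numpy as np

def select_batch(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` most uncertain pool samples by predictive entropy.

    probs: (N, C) softmax outputs for the unlabeled pool.
    Returns indices of the samples to send for annotation.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-budget:]

# toy pool of 5 unlabeled samples, 3 classes
pool_probs = np.array([
    [0.98, 0.01, 0.01],   # confident -> low annotation value
    [0.34, 0.33, 0.33],   # near-uniform -> high annotation value
    [0.60, 0.30, 0.10],
    [0.50, 0.49, 0.01],
    [0.90, 0.05, 0.05],
])
print(select_batch(pool_probs, budget=2))  # [2 1]: the two highest-entropy samples
```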
1 code implementation • 29 Nov 2021 • Seong Min Kye, Kwanghee Choi, Joonyoung Yi, Buru Chang
Recent studies on learning with noisy labels have shown remarkable performance by exploiting a small clean dataset.
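One common way a small clean dataset is exploited (a generic small-loss sketch, not necessarily this paper's method) is to calibrate a loss threshold on the trusted samples and keep only noisy-pool samples below it:

```python
import numpy as np

def filter_noisy(losses_noisy: np.ndarray, losses_clean: np.ndarray,
                 percentile: float = 90.0) -> np.ndarray:
    """Return indices of noisy-pool samples to keep for training.

    The loss threshold is estimated on the trusted clean subset:
    samples whose loss exceeds what clean data normally incurs are
    treated as likely mislabeled and dropped.
    """
    threshold = np.percentile(losses_clean, percentile)
    return np.where(losses_noisy <= threshold)[0]

# toy example: clean losses are small; noisy sample 2 looks mislabeled
print(filter_noisy(np.array([0.3, 0.5, 4.2]), np.array([0.2, 0.4, 0.6])))  # [0 1]
```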
no code implementations • 1 Jan 2021 • Seong Min Kye, Hae Beom Lee, Hoirin Kim, Sung Ju Hwang
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples, or with a confidence-weighted average of all the query samples.
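A minimal sketch of such a confidence-weighted prototype update, assuming Euclidean distances and a softmax temperature `tau` (both choices are assumptions, not taken from the paper):

```python
import torch

def refine_prototypes(protos: torch.Tensor, queries: torch.Tensor,
                      tau: float = 1.0) -> torch.Tensor:
    """One transductive refinement step: each query's class-membership
    confidence is a softmax over its negative distances to the current
    prototypes, and each prototype becomes the confidence-weighted mean
    of itself and the query embeddings.

    protos:  (K, D) initial class prototypes (support-set means)
    queries: (Q, D) unlabeled query embeddings
    """
    d = torch.cdist(queries, protos)      # (Q, K) Euclidean distances
    w = torch.softmax(-d / tau, dim=1)    # (Q, K) soft class assignments
    # weighted mean; the +1 keeps the original support prototype in the mix
    return (protos + w.t() @ queries) / (1.0 + w.sum(dim=0, keepdim=True).t())

# toy usage: 3-way episode, 8-dim embeddings, 10 unlabeled queries
protos = torch.randn(3, 8)
queries = torch.randn(10, 8)
print(refine_prototypes(protos, queries).shape)  # torch.Size([3, 8])
```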
no code implementations • 7 Apr 2020 • Youngmoon Jung, Seong Min Kye, Yeunju Choi, Myunghun Jung, Hoirin Kim
In this approach, we obtain a speaker embedding vector by pooling single-scale features that are extracted from the last layer of a speaker feature extractor.
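A minimal sketch of this single-scale pooling, assuming mean-plus-standard-deviation (statistics) pooling over the frame axis; the exact pooling operator is an assumption:

```python
import torch
import torch.nn as nn

class StatsPooling(nn.Module):
    """Pool frame-level features from the extractor's last layer into a
    single utterance-level speaker embedding (mean and std per channel)."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, feat_dim)
        mu = x.mean(dim=1)
        sigma = x.std(dim=1)
        return torch.cat([mu, sigma], dim=1)   # (batch, 2 * feat_dim)

frames = torch.randn(4, 200, 256)       # (batch, frames, feat_dim)
print(StatsPooling()(frames).shape)     # torch.Size([4, 512])
```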
1 code implementation • 6 Apr 2020 • Seong Min Kye, Youngmoon Jung, Hae Beom Lee, Sung Ju Hwang, Hoirin Kim
By combining these two learning schemes, our model outperforms existing state-of-the-art speaker verification models trained with a standard supervised learning framework on short utterances (1-2 seconds) on the VoxCeleb datasets.
1 code implementation • 27 Feb 2020 • Seong Min Kye, Hae Beom Lee, Hoirin Kim, Sung Ju Hwang
To tackle this issue, we propose to meta-learn the confidence of each query sample, assigning optimal weights to unlabeled queries so that they improve the model's transductive inference performance on unseen tasks.
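A hedged sketch of the idea: a small network predicts a per-query confidence, prototypes are refined with those weights, and the query classification loss trains the confidence network end-to-end across episodes. `conf_net` and the specific weighting form are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def episode_loss(protos: torch.Tensor, q_embed: torch.Tensor,
                 q_labels: torch.Tensor, conf_net: torch.nn.Module) -> torch.Tensor:
    """One meta-training episode: conf_net maps each query's distance
    profile to a scalar confidence, prototypes are refined with the
    resulting weights, and the query loss supervises conf_net."""
    d = torch.cdist(q_embed, protos)                 # (Q, K) distances
    w = torch.sigmoid(conf_net(d))                   # (Q, 1) learned confidence
    soft = torch.softmax(-d, dim=1) * w              # (Q, K) weighted assignment
    refined = (protos + soft.t() @ q_embed) / (1.0 + soft.sum(0, keepdim=True).t())
    logits = -torch.cdist(q_embed, refined)          # classify against refined protos
    return torch.nn.functional.cross_entropy(logits, q_labels)

# toy usage: 5-way episode, 16-dim embeddings, 20 queries
K, D, Q = 5, 16, 20
conf_net = torch.nn.Linear(K, 1)    # maps a query's K distances to a scalar
loss = episode_loss(torch.randn(K, D), torch.randn(Q, D),
                    torch.randint(0, K, (Q,)), conf_net)
loss.backward()                     # gradients flow into conf_net
```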