1 code implementation • 2 Apr 2024 • Keon-Hee Park, Kyungwoo Song, Gyeong-Moon Park
In this paper, we argue that large models such as vision and language transformers pre-trained on large datasets can be excellent few-shot incremental learners.
2 code implementations • 21 Nov 2023 • Hyogun Lee, Kyungho Bae, Seong Jong Ha, Yumin Ko, Gyeong-Moon Park, Jinwoo Choi
We specifically focus on scenarios with a substantial domain gap, in contrast to existing works, which primarily deal with small domain gaps between labeled source domains and unlabeled target domains.
no code implementations • 5 Sep 2023 • Dongyeun Lee, Chaewon Kim, Sangjoon Yu, Jaejun Yoo, Gyeong-Moon Park
One of the most challenging problems in audio-driven talking head generation is achieving high-fidelity detail while ensuring precise synchronization.
1 code implementation • ICCV 2023 • Juwon Seo, Ji-Su Kang, Gyeong-Moon Park
Surprisingly, we find that our LFS-GAN even outperforms the existing few-shot GANs in the few-shot image generation task.
1 code implementation • ICCV 2023 • Jun-Yeong Moon, Keon-Hee Park, Jung Uk Kim, Gyeong-Moon Park
In addition, to alleviate the class imbalance problem, we introduce a new gradient similarity-based focal loss and adaptive feature scaling to ease overfitting to the major classes and underfitting to the minor classes.
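The entry above pairs a focal loss with adaptive feature scaling to counter class imbalance. As a point of reference, here is a minimal sketch of the standard focal loss (the base formulation, not the paper's gradient similarity-based variant; the function name and amplitudes are our own illustration):

```python
import math

def focal_loss(p, target, gamma=2.0):
    """Standard focal loss for a single binary prediction.

    p: predicted probability of the positive class.
    target: 1 for positive, 0 for negative.
    gamma: focusing parameter; gamma = 0 recovers plain cross-entropy.
    """
    p_t = p if target == 1 else 1.0 - p
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

# Easy, well-classified examples (often the major classes) are
# down-weighted by (1 - p_t)^gamma, so hard examples (often the
# minor classes) dominate the gradient.
easy = focal_loss(0.95, 1)  # confident correct prediction: tiny loss
hard = focal_loss(0.30, 1)  # misclassified example: much larger loss
print(easy < hard)  # True
```

With `gamma = 0` the modulating factor vanishes and the loss reduces to cross-entropy, which is the usual sanity check for focal-loss implementations.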
no code implementations • 4 Apr 2023 • Chaoning Zhang, Chenshuang Zhang, Chenghao Li, Yu Qiao, Sheng Zheng, Sumit Kumar Dam, Mengchun Zhang, Jung Uk Kim, Seong Tae Kim, Jinwoo Choi, Gyeong-Moon Park, Sung-Ho Bae, Lik-Hang Lee, Pan Hui, In So Kweon, Choong Seon Hong
Overall, this work is the first to survey ChatGPT with a comprehensive review of its underlying technology, applications, and challenges.
1 code implementation • CVPR 2023 • Yong Hyun Ahn, Gyeong-Moon Park, Seong Tae Kim
In this study, from the perspective of neurons in the deep layer of the model representing high-level features, we introduce a new aspect for analyzing the difference in model outputs between in-distribution data and OOD data.
Ranked #1 on Out-of-Distribution Detection on ImageNet-1k vs SUN
no code implementations • 18 Oct 2022 • Seung-Jun Moon, Chaewon Kim, Gyeong-Moon Park
In particular, we prove that L2, the loss term widely used in GAN inversion, is biased toward reconstructing mainly low-frequency features.
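The low-frequency bias of L2 can be illustrated with a toy 1-D example (our own sketch, not the paper's experiment; all amplitudes and frequencies are hypothetical). Because L2 penalizes error by energy, and fine detail typically carries little energy, dropping the high-frequency content entirely costs about as much as a small relative error on the dominant low band:

```python
import math

# Toy 1-D "image": a dominant low-frequency wave plus small
# high-frequency detail (hypothetical amplitudes, for illustration).
N = 256
signal = [1.0 * math.sin(2 * math.pi * 2 * n / N)      # low frequency
          + 0.1 * math.sin(2 * math.pi * 40 * n / N)   # fine detail
          for n in range(N)]

# Reconstruction A: drops the fine detail entirely.
recon_low = [1.0 * math.sin(2 * math.pi * 2 * n / N) for n in range(N)]

# Reconstruction B: keeps the detail but has a 10% low-band error.
recon_detail = [0.9 * math.sin(2 * math.pi * 2 * n / N)
                + 0.1 * math.sin(2 * math.pi * 40 * n / N)
                for n in range(N)]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Both errors have the same energy, so L2 scores them the same,
# even though A has lost every high-frequency feature.
print(mse(signal, recon_low))
print(mse(signal, recon_detail))
```

Under an L2 objective, an encoder gains nothing by recovering the detail band here, which is consistent with the bias the entry describes.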
no code implementations • 22 Sep 2022 • Seungjun Moon, Gyeong-Moon Park
In this paper, we point out that existing encoders try to lower the distortion not only on the region of interest, e.g., the human facial region, but also on regions of no interest, e.g., background patterns and obstacles.
1 code implementation • 19 Oct 2020 • Joonhyuk Kim, Sahng-Min Yoo, Gyeong-Moon Park, Jong-Hwan Kim
Our novel ETM framework contains Target-specific Memory (TM) for each target domain to alleviate catastrophic forgetting.