no code implementations • 16 Mar 2024 • Yeongtak Oh, Jonghyun Lee, Jooyoung Choi, Dahuin Jung, Uiwon Hwang, Sungroh Yoon
To address this, we propose a novel TTA method that leverages a latent diffusion model (LDM)-based image editing model and fine-tunes it with our newly introduced corruption modeling scheme.
1 code implementation • 12 Mar 2024 • Jonghyun Lee, Dahuin Jung, Saehyung Lee, Junsung Park, Juhyeon Shin, Uiwon Hwang, Sungroh Yoon
To mitigate this, TTA methods have used the entropy of the model's output as a confidence metric, aiming to identify samples that are less likely to cause errors (sketched below).
Ranked #1 on Test-time Adaptation on ImageNet-C
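As a rough illustration of that entropy-based confidence metric, the sketch below keeps only samples whose prediction entropy falls under a threshold; the threshold fraction and function names are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def entropy_confidence_mask(logits: torch.Tensor, frac: float = 0.4) -> torch.Tensor:
    """Flag samples whose prediction entropy is below frac * ln(num_classes).

    The 0.4 fraction is an illustrative choice, not a value from the paper.
    """
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)    # per-sample entropy
    max_entropy = torch.log(torch.tensor(float(logits.shape[1])))
    return entropy < frac * max_entropy    # True = confident enough to adapt on

# Usage: update the model (e.g., its normalization layers) only on masked samples.
mask = entropy_confidence_mask(torch.randn(8, 1000))
```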
1 code implementation • 8 Jun 2023 • Seungryong Yoo, Eunji Kim, Dahuin Jung, Jungbeom Lee, Sungroh Yoon
Visual Prompt Tuning (VPT) is an effective method for adapting pretrained Vision Transformers (ViTs) to downstream tasks (see the sketch below).
Ranked #2 on Visual Prompt Tuning on VTAB-1k (Natural&lt;7&gt;)
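A minimal sketch of the VPT mechanism, assuming the standard formulation in which learnable prompt tokens are prepended to the patch embeddings of a frozen ViT (illustrative code, not the authors' implementation):

```python
import torch
import torch.nn as nn

class PromptedInput(nn.Module):
    """Prepend learnable prompt tokens to ViT patch embeddings (VPT-style)."""

    def __init__(self, embed_dim: int = 768, num_prompts: int = 10):
        super().__init__()
        # Only the prompts (plus a task head) are trained; the backbone stays frozen.
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, embed_dim) * 0.02)

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        # patch_embeddings: (batch, num_patches, embed_dim) from the frozen ViT.
        prompts = self.prompts.expand(patch_embeddings.shape[0], -1, -1)
        return torch.cat([prompts, patch_embeddings], dim=1)  # fed to the encoder
```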
2 code implementations • 2 Jun 2023 • Eunji Kim, Dahuin Jung, Sangha Park, Siwon Kim, Sungroh Yoon
To provide a reliable interpretation against this ambiguity, we propose Probabilistic Concept Bottleneck Models (ProbCBM).
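A minimal sketch of the ProbCBM idea, under the assumption that each concept is modeled as a Gaussian whose mean and variance are predicted from image features (the layer shapes and parameterization here are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

class ProbabilisticConceptHead(nn.Module):
    """Predict concepts as distributions so concept uncertainty reaches the class."""

    def __init__(self, feat_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        self.mu = nn.Linear(feat_dim, num_concepts)      # concept means
        self.logvar = nn.Linear(feat_dim, num_concepts)  # concept log-variances
        self.classifier = nn.Linear(num_concepts, num_classes)

    def forward(self, features: torch.Tensor):
        mu, logvar = self.mu(features), self.logvar(features)
        std = torch.exp(0.5 * logvar)
        concepts = mu + std * torch.randn_like(std)      # reparameterized sample
        return self.classifier(concepts), mu, logvar     # logits + uncertainty
```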
no code implementations • 30 May 2023 • Daegyu Kim, Chaehun Shin, Jooyoung Choi, Dahuin Jung, Sungroh Yoon
Diffusion-Stego achieved a high message capacity (3.0 bpp of binary messages with 98% accuracy, and 6.0 bpp with 90% accuracy) as well as high quality (an FID score of 2.77 for 1.0 bpp on the FFHQ 64$\times$64 dataset), making the stego images challenging to distinguish from real images in the PNG format.
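For a sense of scale, the reported payloads unpack as follows on a 64$\times$64 image (simple arithmetic, not a result from the paper):

```python
pixels = 64 * 64                  # 4,096 pixels per image
print(pixels * 3.0 / 8 / 1024)    # 3.0 bpp -> 12,288 bits = 1.5 KB of message
print(pixels * 6.0 / 8 / 1024)    # 6.0 bpp -> 24,576 bits = 3.0 KB of message
```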
no code implementations • 14 Mar 2023 • Dahuin Jung, Hyungyu Lee, Sungroh Yoon
In particular, unlike existing self-supervised learning methods for tabular data, we propose a corruption method for state and action representations that is robust to diverse distortions.
1 code implementation • 17 Feb 2023 • Dahuin Jung, Dongjin Lee, Sunwon Hong, Hyemi Jang, Ho Bae, Sungroh Yoon
The aim of continual learning is to learn new tasks continuously (i.e., plasticity) without forgetting previously learned knowledge from old tasks (i.e., stability).
no code implementations • ICCV 2023 • Dahuin Jung, Dongyoon Han, Jihwan Bang, Hwanjun Song
However, we observe that the use of a prompt pool creates a domain scalability problem between pre-training and continual learning.
1 code implementation • 25 Oct 2022 • Jaehee Jang, Heonseok Ha, Dahuin Jung, Sungroh Yoon
While existing methods require the collection of auxiliary data or model weights to generate a counterpart, FedClassAvg requires clients to communicate only a couple of fully connected layers, which is highly communication-efficient.
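A minimal sketch of that communication pattern, in which the server averages only the uploaded classifier layers (names are illustrative, not from the released code):

```python
import copy
import torch

def average_classifier_weights(client_states: list) -> dict:
    """Server step: average the fully connected classifier layers uploaded by
    the clients; the feature extractors never leave the clients."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states]).mean(dim=0)
    return avg  # broadcast back to clients as the new shared classifier
```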
1 code implementation • 14 Jun 2022 • Jonghyun Lee, Dahuin Jung, Junho Yim, Sungroh Yoon
Unlike existing confidence scores, which use knowledge from only the source or the target domain, the JMDS score uses both.
no code implementations • 29 Sep 2021 • Jonghyun Lee, Dahuin Jung, Junho Yim, Sungroh Yoon
Unsupervised domain adaptation (UDA) aims to achieve high performance within the unlabeled target domain by leveraging the labeled source domain.
1 code implementation • ICLR 2022 • Uiwon Hwang, Heeseung Kim, Dahuin Jung, Hyemi Jang, Hyungyu Lee, Sungroh Yoon
Generative adversarial networks (GANs) with clustered latent spaces can perform conditional generation in a completely unsupervised manner.
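One way to picture such conditional generation: draw the latent code from a chosen cluster, so each cluster acts as an unsupervised class label. The Gaussian parameterization below is an assumption made for the sketch, not the paper's construction:

```python
import torch

def sample_from_cluster(cluster_means: torch.Tensor, cluster_id: int,
                        n: int, std: float = 0.1) -> torch.Tensor:
    """Draw n latent codes near one cluster center; passing them through the
    generator yields samples from that cluster's mode."""
    mean = cluster_means[cluster_id]
    return mean + std * torch.randn(n, mean.shape[0])

# Usage (hypothetical generator G): images = G(sample_from_cluster(means, 3, 16))
```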
no code implementations • ECCV 2020 • Dahuin Jung, Jonghyun Lee, Jihun Yi, Sungroh Yoon
We propose an interpretable Capsule Network, iCaps, for image classification.
no code implementations • 28 Feb 2019 • Dahuin Jung, Ho Bae, Hyun-Soo Choi, Sungroh Yoon
We propose a DL-based steganalysis technique that effectively removes secret images by restoring the distribution of the original images.
1 code implementation • 26 Feb 2019 • Uiwon Hwang, Dahuin Jung, Sungroh Yoon
We evaluate the classification performance (F1-score) of the proposed method under 20% missingness and confirm an improvement of up to 5% over combinations of state-of-the-art methods.
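The evaluation protocol can be sketched as masking 20% of the entries at random, imputing, and scoring a downstream classifier with F1; the mean imputer and logistic regression below are stand-ins, not the proposed method:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_missing = np.where(rng.random(X.shape) < 0.20, np.nan, X)  # 20% missingness

X_tr, X_te, y_tr, y_te = train_test_split(X_missing, y, random_state=0)
imputer = SimpleImputer(strategy="mean").fit(X_tr)           # stand-in imputer
clf = LogisticRegression().fit(imputer.transform(X_tr), y_tr)
print(f1_score(y_te, clf.predict(imputer.transform(X_te))))  # downstream F1
```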
no code implementations • 31 Jan 2019 • Ho Bae, Dahuin Jung, Sungroh Yoon
We compared our method with state-of-the-art techniques and observed that it preserves the same level of privacy as differential privacy (DP) while achieving better prediction results.
no code implementations • 30 Jan 2019 • Dahuin Jung, Ho Bae, Hyun-Soo Choi, Sungroh Yoon
The cover image with the secret message is called a stego image.
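To make the cover/stego terminology concrete, the classical least-significant-bit (LSB) scheme below hides one message bit per pixel; it only illustrates the definition and is not the method studied in this work:

```python
import numpy as np

def lsb_embed(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write one message bit into the least significant bit of each pixel."""
    stego = cover.copy().reshape(-1)
    n = len(bits)
    stego[:n] = (stego[:n] & 0xFE) | bits.astype(np.uint8)  # overwrite LSBs
    return stego.reshape(cover.shape)

cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)   # toy cover image
stego = lsb_embed(cover, np.random.randint(0, 2, 16))       # stego image
```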
no code implementations • 31 Jul 2018 • Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon
Furthermore, the privacy of the data involved in model training is threatened by attacks such as model inversion, as well as by dishonest service providers of AI applications.