no code implementations • 11 Jan 2024 • Kumara Kahatapitiya, Adil Karjauv, Davide Abati, Fatih Porikli, Yuki M. Asano, Amirhossein Habibian
Diffusion-based video editing has reached impressive quality and can transform the global style, local structure, and attributes of a given video input, following textual edit prompts.
no code implementations • CVPR 2022 • 30 Mar 2022 • Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, In So Kweon
It is widely reported that stronger I-FGSM transfers worse than simple FGSM, leading to a popular belief that transferability is at odds with the white-box attack strength.
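The two attacks being compared can be sketched on a toy differentiable model; a logistic classifier stands in for a DNN here, and the function names, weights, and step schedule are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def loss_grad(x, w, y):
    # Gradient of the binary cross-entropy loss w.r.t. the input x
    # for a linear logistic model p = sigmoid(w . x).
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    return (p - y) * w

def fgsm(x, w, y, eps):
    # FGSM: a single step of size eps along the sign of the input gradient.
    return x + eps * np.sign(loss_grad(x, w, y))

def ifgsm(x, w, y, eps, steps=10):
    # I-FGSM: smaller iterative steps, projected back into the eps-ball.
    alpha = eps / steps
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad(x_adv, w, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

Both attacks stay within the same eps-ball; the paper's question is why the iterative (stronger white-box) variant tends to transfer worse to other models.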
1 code implementation • 6 Oct 2021 • Philipp Benz, Soomin Ham, Chaoning Zhang, Adil Karjauv, In So Kweon
Thus, it is critical for the community to know whether the newly proposed ViT and MLP-Mixer are also vulnerable to adversarial attacks.
no code implementations • 7 Apr 2021 • Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon
The SOTA universal adversarial training (UAT) method optimizes a single perturbation for all training samples in the mini-batch.
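The alternating structure of universal adversarial training can be sketched on a toy problem: one shared perturbation is maximized over the mini-batch while the model minimizes the loss on the perturbed inputs. The model, data, and step sizes below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two Gaussian blobs, linear logistic classifier as the "model".
X = np.vstack([rng.normal(-1.0, 0.7, (50, 2)), rng.normal(1.0, 0.7, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
delta = np.zeros(2)           # the single universal perturbation
eps, lr, step = 0.3, 0.5, 0.05

for _ in range(300):
    p = sigmoid((X + delta) @ w)
    # Inner maximization: ascend the batch loss w.r.t. the shared delta.
    g_delta = np.mean(p - y) * w
    delta = np.clip(delta + step * np.sign(g_delta), -eps, eps)
    # Outer minimization: descend the loss on the perturbed batch.
    g_w = (X + delta).T @ (p - y) / len(X)
    w = w - lr * g_w
```

The key difference from standard adversarial training is that delta is shared across all samples in the mini-batch rather than computed per sample.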
1 code implementation • 2 Mar 2021 • Chaoning Zhang, Philipp Benz, Chenguo Lin, Adil Karjauv, Jing Wu, In So Kweon
The intriguing phenomenon of adversarial examples has attracted significant attention in machine learning, and what might be more surprising to the community is the existence of universal adversarial perturbations (UAPs), i.e., a single perturbation that fools the target DNN for most images.
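A minimal illustration of the phenomenon: a single additive vector flips the prediction of a fixed model on a sizable fraction of inputs. A linear classifier stands in for the DNN here, and the perturbation construction is a simplified assumption for illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)
# Fixed "target model": a linear binary classifier standing in for a DNN.
w = np.array([1.0, -0.5, 0.25])
X = rng.normal(0.0, 1.0, (500, 3))
clean = X @ w > 0

# One perturbation applied to every input: step against the positive
# class's margin direction, bounded by eps in each coordinate.
eps = 0.5
uap = -eps * np.sign(w)
fooled = ((X + uap) @ w > 0) != clean
fooling_rate = fooled.mean()   # fraction of inputs whose prediction flips
```

Even this crude shared perturbation flips a non-trivial fraction of predictions, which is the core observation behind UAPs.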
no code implementations • 12 Feb 2021 • Chaoning Zhang, Philipp Benz, Adil Karjauv, In So Kweon
We perform task-specific and joint analysis and reveal that (a) frequency is a key factor that influences their performance based on the proposed entropy metric for quantifying the frequency distribution; (b) their success can be attributed to a DNN being highly sensitive to high-frequency content.
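One plausible reading of "entropy metric for quantifying the frequency distribution" is the Shannon entropy of an image's normalized magnitude spectrum; the exact definition is not given in the snippet, so the function below is a hedged guess at the kind of quantity meant:

```python
import numpy as np

def frequency_entropy(img):
    # Shannon entropy (bits) of the normalized 2-D magnitude spectrum.
    # A concentrated spectrum (e.g. all energy at DC) gives low entropy;
    # broadband content gives high entropy.
    spec = np.abs(np.fft.fft2(img))
    p = spec / spec.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

Under this metric, a constant image scores near zero while white noise scores high, matching the intuition that DNN-sensitive high-frequency content spreads energy across the spectrum.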
no code implementations • ICCV 2021 • Chaoning Zhang, Philipp Benz, Adil Karjauv, In So Kweon
For a more practical universal attack, our investigation of untargeted UAP focuses on alleviating the dependence on the original training samples, from removing the need for sample labels to limiting the sample size.
1 code implementation • 30 Dec 2020 • Chaoning Zhang, Adil Karjauv, Philipp Benz, In So Kweon
Recently, deep learning has shown great success in data hiding, while the non-differentiability of JPEG makes it challenging to train a deep pipeline for improving robustness against lossy compression.
1 code implementation • NeurIPS 2020 • Chaoning Zhang, Philipp Benz, Adil Karjauv, Geng Sun, In So Kweon
This is the first work demonstrating the success of DNN-based hiding of a full image for watermarking and LFM.
no code implementations • 26 Oct 2020 • Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon
Adversarial training is the most widely used technique for improving adversarial robustness to strong white-box attacks.
no code implementations • 7 Oct 2020 • Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon
We find that simply estimating and adapting the BN statistics on a few (e.g., 32) representative samples, without retraining the model, improves the corruption robustness by a large margin on several benchmark datasets with a wide range of model architectures.
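The idea can be sketched in a few lines: re-estimate a BatchNorm layer's mean and variance from a small batch of corrupted-domain features and normalize with the new statistics, leaving all learned weights untouched. Function names and the momentum interface are illustrative assumptions, not the paper's code:

```python
import numpy as np

def adapt_bn_stats(feats, old_mean, old_var, momentum=1.0):
    # Blend the source-domain BN statistics with an estimate from the
    # adaptation batch; momentum=1.0 fully replaces them with the
    # statistics of the (e.g.) 32 corrupted samples.
    new_mean = (1 - momentum) * old_mean + momentum * feats.mean(axis=0)
    new_var = (1 - momentum) * old_var + momentum * feats.var(axis=0)
    return new_mean, new_var

def bn_normalize(x, mean, var, eps=1e-5):
    # Standard BN normalization with the (adapted) statistics.
    return (x - mean) / np.sqrt(var + eps)
```

No gradient step is involved, which is why the adaptation is cheap compared with retraining or fine-tuning.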