Search Results for author: Adil Karjauv

Found 12 papers, 4 papers with code

Object-Centric Diffusion for Efficient Video Editing

no code implementations • 11 Jan 2024 • Kumara Kahatapitiya, Adil Karjauv, Davide Abati, Fatih Porikli, Yuki M. Asano, Amirhossein Habibian

Diffusion-based video editing has reached impressive quality and can transform the global style, local structure, and attributes of given video inputs, following textual edit prompts.

Object • Video Editing

Investigating Top-$k$ White-Box and Transferable Black-box Attack

no code implementations • 30 Mar 2022 • Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, In So Kweon

It is widely reported that stronger I-FGSM transfers worse than simple FGSM, leading to a popular belief that transferability is at odds with the white-box attack strength.
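The FGSM vs. I-FGSM contrast discussed above can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's code: a linear model with a closed-form input gradient stands in for a deep network, so the two attack rules can be compared without a deep-learning framework.

```python
import numpy as np

# Toy "model": logit = w @ x, binary cross-entropy loss against label y.
# The input gradient is available in closed form, which lets us sketch
# FGSM (one large signed step) and I-FGSM (many small projected steps).

def input_grad(w, x, y):
    """d/dx of BCE(sigmoid(w @ x), y) = (sigmoid(w @ x) - y) * w."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

def fgsm(w, x, y, eps):
    """Single-step attack: move by eps along the sign of the input gradient."""
    return x + eps * np.sign(input_grad(w, x, y))

def ifgsm(w, x, y, eps, alpha, steps):
    """Iterative FGSM: small signed steps, projected back into the eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_grad(w, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # L_inf projection
    return x_adv
```

I-FGSM typically reaches a stronger white-box adversary within the same eps-ball, which is what makes its reportedly weaker transferability surprising.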

Investigating Top-k White-Box and Transferable Black-Box Attack

no code implementations • CVPR 2022 • Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, In So Kweon

It is widely reported that stronger I-FGSM transfers worse than simple FGSM, leading to a popular belief that transferability is at odds with the white-box attack strength.

Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs

1 code implementation • 6 Oct 2021 • Philipp Benz, Soomin Ham, Chaoning Zhang, Adil Karjauv, In So Kweon

Thus, it is critical for the community to know whether the newly proposed ViT and MLP-Mixer are also vulnerable to adversarial attacks.

Adversarial Attack • Adversarial Robustness

Universal Adversarial Training with Class-Wise Perturbations

no code implementations • 7 Apr 2021 • Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon

The SOTA universal adversarial training (UAT) method optimizes a single perturbation for all training samples in the mini-batch.

Adversarial Robustness
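The single-perturbation update that the abstract attributes to standard UAT can be sketched as follows. This is a minimal illustration under toy assumptions (a linear model with a closed-form input gradient), not the paper's implementation; the paper's contribution is a class-wise variant that would maintain one such delta per class.

```python
import numpy as np

# One shared perturbation delta is applied to every sample in the
# mini-batch and updated with the sign of the batch-averaged input
# gradient, then projected back into the L_inf eps-ball.
# Toy stand-in model: logit = X @ w with BCE loss.

def batch_input_grad(w, X, y):
    """Per-sample input gradients of BCE(sigmoid(X @ w), y)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return (p - y)[:, None] * w[None, :]

def uat_step(delta, w, X, y, alpha, eps):
    """One universal-perturbation update: a single delta for the whole batch."""
    g = batch_input_grad(w, X + delta, y).mean(axis=0)  # one shared gradient
    delta = delta + alpha * np.sign(g)
    return np.clip(delta, -eps, eps)  # keep delta inside the eps-ball
```

A class-wise scheme would replace the single `delta` with a dictionary keyed by class label, updating each entry only from samples of that class.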

A Survey On Universal Adversarial Attack

1 code implementation • 2 Mar 2021 • Chaoning Zhang, Philipp Benz, Chenguo Lin, Adil Karjauv, Jing Wu, In So Kweon

The intriguing phenomenon of adversarial examples has attracted significant attention in machine learning; what might be more surprising to the community is the existence of universal adversarial perturbations (UAPs), i.e., a single perturbation that fools the target DNN for most images.

Adversarial Attack

Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective

no code implementations • 12 Feb 2021 • Chaoning Zhang, Philipp Benz, Adil Karjauv, In So Kweon

We perform task-specific and joint analysis and reveal that (a) frequency is a key factor that influences their performance based on the proposed entropy metric for quantifying the frequency distribution; (b) their success can be attributed to a DNN being highly sensitive to high-frequency content.

Data-Free Universal Adversarial Perturbation and Black-Box Attack

no code implementations • ICCV 2021 • Chaoning Zhang, Philipp Benz, Adil Karjauv, In So Kweon

For a more practical universal attack, our investigation of untargeted UAP focuses on alleviating the dependence on the original training samples, from removing the need for sample labels to limiting the sample size.

Towards Robust Data Hiding Against (JPEG) Compression: A Pseudo-Differentiable Deep Learning Approach

1 code implementation • 30 Dec 2020 • Chaoning Zhang, Adil Karjauv, Philipp Benz, In So Kweon

Recently, deep learning has shown great success in data hiding, but the non-differentiability of JPEG compression makes it challenging to train a deep pipeline for improving robustness against lossy compression.

UDH: Universal Deep Hiding for Steganography, Watermarking, and Light Field Messaging

1 code implementation • NeurIPS 2020 • Chaoning Zhang, Philipp Benz, Adil Karjauv, Geng Sun, In So Kweon

This is the first work demonstrating the success of (DNN-based) hiding of a full image for watermarking and LFM.

Robustness May Be at Odds with Fairness: An Empirical Study on Class-wise Accuracy

no code implementations • 26 Oct 2020 • Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon

Adversarial training is the most widely used technique for improving adversarial robustness to strong white-box attacks.

Adversarial Robustness • Autonomous Driving +1

Revisiting Batch Normalization for Improving Corruption Robustness

no code implementations • 7 Oct 2020 • Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon

We find that simply estimating and adapting the BN statistics on a few (32, for instance) representative samples, without retraining the model, improves corruption robustness by a large margin on several benchmark datasets with a wide range of model architectures.
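The adaptation described above can be sketched in a few lines. This is a minimal numpy illustration, not the paper's code: the stored running mean/variance of a batch-norm layer are replaced with statistics estimated from a small batch of corrupted samples, while the learned affine parameters and all other weights stay untouched.

```python
import numpy as np

class BatchNorm1d:
    """Minimal inference-mode BN layer with stored running statistics."""
    def __init__(self, dim):
        self.running_mean = np.zeros(dim)
        self.running_var = np.ones(dim)
        self.gamma = np.ones(dim)   # learned scale (left untouched)
        self.beta = np.zeros(dim)   # learned shift (left untouched)

    def __call__(self, x, eps=1e-5):
        xn = (x - self.running_mean) / np.sqrt(self.running_var + eps)
        return self.gamma * xn + self.beta

def adapt_bn(bn, corrupted_batch):
    """Re-estimate BN statistics on a few corrupted samples; no retraining."""
    bn.running_mean = corrupted_batch.mean(axis=0)
    bn.running_var = corrupted_batch.var(axis=0)
```

After `adapt_bn`, activations of the corrupted data are normalized to roughly zero mean and unit variance again, which is the effect the abstract credits for the robustness gain.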
