Search Results for author: Philipp Benz

Found 23 papers, 8 papers with code

Booster-SHOT: Boosting Stacked Homography Transformations for Multiview Pedestrian Detection with Attention

no code implementations • 19 Aug 2022 • Jinwoo Hwang, Philipp Benz, Tae-hoon Kim

Improving multi-view aggregation is integral for multi-view pedestrian detection, which aims to obtain a bird's-eye-view pedestrian occupancy map from images captured through a set of calibrated cameras.

Multiview Detection • Pedestrian Detection
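
As a rough illustration of the aggregation this line of work builds on, the sketch below warps each calibrated view onto the ground plane and fuses the results with a plain mean; the homographies, sizes, and mean fusion are my assumptions, and Booster-SHOT itself stacks homographies at multiple heights and adds attention on top.

```python
# Minimal sketch of homography-based multiview aggregation (not the
# authors' code). Each camera view is projected onto a common ground
# plane and the projections are fused into a bird's-eye-view map.
import cv2
import numpy as np

def aggregate_views(views, homographies, bev_size=(480, 360)):
    """views: list of HxWxC float32 arrays (images or feature maps);
    homographies: 3x3 image-to-ground-plane matrices from calibration."""
    warped = [cv2.warpPerspective(v, H, bev_size)
              for v, H in zip(views, homographies)]
    # Plain mean fusion; Booster-SHOT replaces this with attention.
    return np.mean(warped, axis=0)
```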

Investigating Top-$k$ White-Box and Transferable Black-Box Attack

no code implementations • CVPR 2022 (arXiv 30 Mar 2022) • Chaoning Zhang, Philipp Benz, Adil Karjauv, Jae Won Cho, Kang Zhang, In So Kweon

It is widely reported that stronger I-FGSM transfers worse than simple FGSM, leading to a popular belief that transferability is at odds with the white-box attack strength.
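
For reference, the two attacks being contrasted are standard and can be sketched in a few lines of PyTorch; `model`, `x`, `y`, and the budget `eps` are placeholders, and this is the textbook formulation, not the paper's code.

```python
# Textbook FGSM (one step) vs. I-FGSM (iterative, projected) in PyTorch.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).detach()

def i_fgsm(model, x, y, eps, steps=10):
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take a small signed step, then project back into the eps-ball.
        x_adv = (x_adv + alpha * grad.sign()).clamp(x - eps, x + eps).detach()
    return x_adv
```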

Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs

1 code implementation • 6 Oct 2021 • Philipp Benz, Soomin Ham, Chaoning Zhang, Adil Karjauv, In So Kweon

Thus, it is critical for the community to know whether the newly proposed ViT and MLP-Mixer are also vulnerable to adversarial attacks.

Adversarial Attack • Adversarial Robustness

Early Stop And Adversarial Training Yield Better Surrogate Model: Very Non-Robust Features Harm Adversarial Transferability

no code implementations • 29 Sep 2021 • Chaoning Zhang, Gyusang Cho, Philipp Benz, Kang Zhang, Chenshuang Zhang, Chan-Hyun Youn, In So Kweon

The transferability of adversarial examples (AEs), known as adversarial transferability, has attracted significant attention because it can be exploited for transferable black-box attacks (TBA).

Attribute

Universal Adversarial Training with Class-Wise Perturbations

no code implementations • 7 Apr 2021 • Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon

The state-of-the-art (SOTA) universal adversarial training (UAT) method optimizes a single perturbation for all training samples in the mini-batch.

Adversarial Robustness
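
A minimal sketch of the class-wise variant described above, under my own assumptions (sign-gradient ascent, a shared epsilon ball per class); this is not the authors' implementation.

```python
# One step of class-wise universal adversarial training: maintain one
# universal perturbation per class and give each sample its class's
# perturbation, instead of a single perturbation per mini-batch.
import torch
import torch.nn.functional as F

def uat_classwise_step(model, opt, x, y, deltas, eps, step_size=0.01):
    """deltas: (num_classes, C, H, W) tensor of per-class perturbations."""
    delta = deltas[y].clone().requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)

    # Ascent on the perturbations (maximize the loss), sign-gradient style.
    grad = torch.autograd.grad(loss, delta, retain_graph=True)[0]
    with torch.no_grad():
        deltas.index_add_(0, y, step_size * grad.sign())
        deltas.clamp_(-eps, eps)

    # Descent on the model parameters (minimize the adversarial loss).
    opt.zero_grad()
    loss.backward()
    opt.step()
```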

A Survey On Universal Adversarial Attack

1 code implementation • 2 Mar 2021 • Chaoning Zhang, Philipp Benz, Chenguo Lin, Adil Karjauv, Jing Wu, In So Kweon

The intriguing phenomenon of adversarial examples has attracted significant attention in machine learning, and what might be more surprising to the community is the existence of universal adversarial perturbations (UAPs), i.e., a single perturbation that fools the target DNN for most images.

Adversarial Attack
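
The defining property of a UAP is easy to state in code: one fixed `delta` should change the model's prediction on most inputs. A minimal fooling-rate check (names are placeholders):

```python
# Standard fooling-rate evaluation for a universal adversarial
# perturbation: the fraction of inputs whose predicted label changes
# when the single fixed perturbation `delta` is added.
import torch

@torch.no_grad()
def fooling_rate(model, loader, delta):
    fooled, total = 0, 0
    for x, _ in loader:
        clean = model(x).argmax(dim=1)
        perturbed = model(x + delta).argmax(dim=1)
        fooled += (clean != perturbed).sum().item()
        total += x.size(0)
    return fooled / total
```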

Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective

no code implementations • 12 Feb 2021 • Chaoning Zhang, Philipp Benz, Adil Karjauv, In So Kweon

We perform task-specific and joint analyses and reveal that (a) frequency is a key factor influencing their performance, based on the proposed entropy metric for quantifying the frequency distribution, and (b) their success can be attributed to DNNs being highly sensitive to high-frequency content.
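
As an illustration of this style of frequency analysis (my assumption, not the paper's exact metric), the entropy of an image's normalized power spectrum measures how widely its energy is spread across frequencies:

```python
# Spectral entropy of an image or perturbation: higher values mean the
# energy is spread over many frequencies rather than concentrated.
import torch

def spectral_entropy(img):
    """img: (C, H, W) tensor."""
    power = torch.fft.fft2(img).abs() ** 2   # 2-D power spectrum per channel
    p = power / power.sum()                  # normalize to a distribution
    return -(p * torch.log(p + 1e-12)).sum()
```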

Data-Free Universal Adversarial Perturbation and Black-Box Attack

no code implementations • ICCV 2021 • Chaoning Zhang, Philipp Benz, Adil Karjauv, In So Kweon

For a more practical universal attack, our investigation of untargeted UAP focuses on alleviating the dependence on the original training samples, from removing the need for sample labels to limiting the sample size.

Towards Robust Data Hiding Against (JPEG) Compression: A Pseudo-Differentiable Deep Learning Approach

1 code implementation • 30 Dec 2020 • Chaoning Zhang, Adil Karjauv, Philipp Benz, In So Kweon

Recently, deep learning has shown great success in data hiding, but the non-differentiability of JPEG makes it challenging to train a deep pipeline for robustness against lossy compression.
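
The "pseudo-differentiable" trick can be sketched as a straight-through operation: real, lossy JPEG in the forward pass, identity gradient in the backward pass. This is my reading of the idea, with assumed (B, 3, H, W) tensors in [0, 1], not the released code.

```python
# Straight-through JPEG: non-differentiable compression forward,
# identity gradient backward.
import io
import torch
from PIL import Image
from torchvision.transforms.functional import to_pil_image, to_tensor

class PseudoJPEG(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, quality=75):
        out = []
        for img in x:  # real JPEG round-trip per image
            buf = io.BytesIO()
            to_pil_image(img.cpu().clamp(0, 1)).save(buf, "JPEG", quality=quality)
            buf.seek(0)
            out.append(to_tensor(Image.open(buf)))
        return torch.stack(out).to(x.device)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # pretend JPEG was the identity
```

In training, something like `PseudoJPEG.apply(encoded, 75)` would sit between the hiding and revealing networks, so gradients can flow past the compression step.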

UDH: Universal Deep Hiding for Steganography, Watermarking, and Light Field Messaging

1 code implementation • NeurIPS 2020 • Chaoning Zhang, Philipp Benz, Adil Karjauv, Geng Sun, In So Kweon

This is the first work to demonstrate the success of DNN-based hiding of a full image for watermarking and light field messaging (LFM).

Robustness May Be at Odds with Fairness: An Empirical Study on Class-wise Accuracy

no code implementations • 26 Oct 2020 • Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon

Adversarial training is the most widely used technique for improving adversarial robustness to strong white-box attacks.

Adversarial Robustness • Autonomous Driving • +1

Double Targeted Universal Adversarial Perturbations

1 code implementation • 7 Oct 2020 • Philipp Benz, Chaoning Zhang, Tooba Imtiaz, In So Kweon

This universal perturbation attacks one targeted source class by pushing it toward a sink class, while having a limited adversarial effect on other non-targeted source classes, so as to avoid raising suspicion.

Autonomous Driving
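
A plausible loss for this double-targeted behavior (my placeholder, not the authors' exact objective) combines a targeted term on the source class with a preservation term on everything else:

```python
# Double-targeted UAP objective sketch: push the targeted source class
# toward the sink class while keeping other classes correctly predicted.
# Assumes the batch contains both source and non-source samples.
import torch
import torch.nn.functional as F

def dt_uap_loss(model, x, y, delta, source_cls, sink_cls):
    logits = model(x + delta)
    is_src = y == source_cls
    sink = torch.full_like(y[is_src], sink_cls)
    loss_attack = F.cross_entropy(logits[is_src], sink)       # source -> sink
    loss_keep = F.cross_entropy(logits[~is_src], y[~is_src])  # others unchanged
    return loss_attack + loss_keep  # minimized w.r.t. delta only
```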

CD-UAP: Class Discriminative Universal Adversarial Perturbation

no code implementations • 7 Oct 2020 • Chaoning Zhang, Philipp Benz, Tooba Imtiaz, In So Kweon

Since the proposed attack generates a universal adversarial perturbation that is discriminative to targeted and non-targeted classes, we term it class discriminative universal adversarial perturbation (CD-UAP).

Revisiting Batch Normalization for Improving Corruption Robustness

no code implementations • 7 Oct 2020 • Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon

We find that simply estimating and adapting the BN statistics on a few (e.g., 32) representative samples, without retraining the model, improves corruption robustness by a large margin on several benchmark datasets across a wide range of model architectures.
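
The adaptation itself is a few lines: re-estimate the BatchNorm running statistics on a handful of corrupted samples without touching any weights. A minimal sketch (placeholder names, standard PyTorch BN behavior):

```python
# Re-estimate BatchNorm statistics on a small corrupted batch, with no
# gradient updates. momentum=None makes BN use a cumulative average.
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_bn(model, corrupted_batch):
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()
            m.momentum = None
    model.train()           # BN updates running stats only in train mode
    model(corrupted_batch)  # e.g., a batch of 32 corrupted samples
    model.eval()
```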

Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations

1 code implementation • CVPR 2020 • Chaoning Zhang, Philipp Benz, Tooba Imtiaz, In So Kweon

We utilize this vector representation to understand adversarial examples by disentangling the clean images and adversarial perturbations, and analyze their influence on each other.
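
One way to read this vector-representation analysis (my assumption, not the authors' code) is to feed the clean image and the bare perturbation through the model separately and compare their logits with those of the adversarial example:

```python
# Disentangle an adversarial example into its image and perturbation
# parts and measure which one the adversarial logits resemble more.
import torch
import torch.nn.functional as F

@torch.no_grad()
def mutual_influence(model, x, delta):
    z_clean = model(x)        # logits of the clean image alone
    z_pert = model(delta)     # logits of the perturbation alone
    z_adv = model(x + delta)  # logits of the adversarial example
    return (F.cosine_similarity(z_adv, z_clean, dim=1),
            F.cosine_similarity(z_adv, z_pert, dim=1))
```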

Data from Model: Extracting Data from Non-robust and Robust Models

no code implementations • 13 Jul 2020 • Philipp Benz, Chaoning Zhang, Tooba Imtiaz, In So Kweon

We repeat the process of Data to Model (DtM) and Data from Model (DfM) in sequence and explore the loss of feature mapping information by measuring the accuracy drop on the original validation dataset.
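
The DfM half of the cycle can be sketched as plain input optimization (assumed mechanics, not the authors' method): synthesize one image per class that the trained model labels confidently, then retrain on the synthesized set and measure the accuracy drop on the original validation data.

```python
# Data from Model (DfM) sketch: optimize random inputs until the model
# assigns them their target classes with low loss.
import torch
import torch.nn.functional as F

def data_from_model(model, num_classes, shape, steps=200, lr=0.1):
    x = torch.randn(num_classes, *shape, requires_grad=True)
    y = torch.arange(num_classes)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return x.detach(), y  # synthesized dataset for the next DtM round
```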

Propose-and-Attend Single Shot Detector

no code implementations • 30 Jul 2019 • Ho-Deok Jang, Sanghyun Woo, Philipp Benz, Jinsun Park, In So Kweon

We present a simple yet effective prediction module for a one-stage detector.
