Search Results for author: Zhengyu Zhao

Found 22 papers, 17 papers with code

Physical 3D Adversarial Attacks against Monocular Depth Estimation in Autonomous Driving

1 code implementation26 Mar 2024 Junhao Zheng, Chenhao Lin, Jiahao Sun, Zhengyu Zhao, Qian Li, Chao Shen

Deep learning-based monocular depth estimation (MDE), extensively applied in autonomous driving, is known to be vulnerable to adversarial attacks.

Adversarial Attack Autonomous Driving +1

Collapse-Aware Triplet Decoupling for Adversarially Robust Image Retrieval

no code implementations12 Dec 2023 Qiwei Tian, Chenhao Lin, Zhengyu Zhao, Qian Li, Chao Shen

Furthermore, CA prevents the consequential model collapse, based on a novel metric, collapseness, which is incorporated into the optimization of perturbation.

Adversarial Defense Image Retrieval +2

Prompt Backdoors in Visual Prompt Learning

no code implementations11 Oct 2023 Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang

Specifically, the VPPTaaS provider optimizes a visual prompt given downstream data, and downstream users can use this prompt together with the large pre-trained model for prediction.

Backdoor Attack

Composite Backdoor Attacks Against Large Language Models

1 code implementation11 Oct 2023 Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang

Such a Composite Backdoor Attack (CBA) is shown to be stealthier than implanting the same multiple trigger keys in only a single component.

Backdoor Attack

Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection

no code implementations3 Sep 2023 Weijie Wang, Zhengyu Zhao, Nicu Sebe, Bruno Lepri

Although effective deepfake detectors have been proposed, they are substantially vulnerable to adversarial attacks.

DeepFake Detection Face Swapping

Generative Watermarking Against Unauthorized Subject-Driven Image Synthesis

no code implementations13 Jun 2023 Yihan Ma, Zhengyu Zhao, Xinlei He, Zheng Li, Michael Backes, Yang Zhang

In particular, to help the watermark survive the subject-driven synthesis, we incorporate the synthesis process in learning GenWatermark by fine-tuning the detector with synthesized images for a specific subject.

Image Generation

Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression

1 code implementation31 Jan 2023 Zhuoran Liu, Zhengyu Zhao, Martha Larson

Perturbative availability poisons (PAPs) add small changes to images to prevent their use for model training.

Towards Good Practices in Evaluating Transfer Adversarial Attacks

1 code implementation17 Nov 2022 Zhengyu Zhao, Hanwei Zhang, Renjue Li, Ronan Sicre, Laurent Amsaleg, Michael Backes

In this work, we design good practices to address these limitations, and we present the first comprehensive evaluation of transfer attacks, covering 23 representative attacks against 9 defenses on ImageNet.

Generative Poisoning Using Random Discriminators

1 code implementation2 Nov 2022 Dirren van Vlijmen, Alex Kolmus, Zhuoran Liu, Zhengyu Zhao, Martha Larson

We introduce ShortcutGen, a new data poisoning attack that generates sample-dependent, error-minimizing perturbations by learning a generator.

Data Poisoning

Membership Inference Attacks by Exploiting Loss Trajectory

1 code implementation31 Aug 2022 Yiyong Liu, Zhengyu Zhao, Michael Backes, Yang Zhang

Machine learning models are vulnerable to membership inference attacks in which an adversary aims to predict whether or not a particular sample was contained in the target model's training dataset.

Knowledge Distillation

The Importance of Image Interpretation: Patterns of Semantic Misclassification in Real-World Adversarial Images

1 code implementation3 Jun 2022 Zhengyu Zhao, Nga Dang, Martha Larson

In this paper, we propose that adversarial images should be evaluated based on semantic mismatch, rather than label mismatch, as used in current work.

Level Up with RealAEs: Leveraging Domain Constraints in Feature Space to Strengthen Robustness of Android Malware Detection

no code implementations30 May 2022 Hamid Bostani, Zhengyu Zhao, Zhuoran Liu, Veelasha Moonsamy

Realistic attacks in the Android malware domain create Realizable Adversarial Examples (RealAEs), i. e., AEs that satisfy the domain constraints of Android malware.

Adversarial Robustness Android Malware Detection +2

Going Grayscale: The Road to Understanding and Improving Unlearnable Examples

1 code implementation25 Nov 2021 Zhuoran Liu, Zhengyu Zhao, Alex Kolmus, Tijn Berns, Twan van Laarhoven, Tom Heskes, Martha Larson

Recent work has shown that imperceptible perturbations can be applied to craft unlearnable examples (ULEs), i. e. images whose content cannot be used to improve a classifier during training.

On Success and Simplicity: A Second Look at Transferable Targeted Attacks

4 code implementations NeurIPS 2021 Zhengyu Zhao, Zhuoran Liu, Martha Larson

In particular, we, for the first time, identify that a simple logit loss can yield competitive results with the state of the arts.

Adversarial Image Color Transformations in Explicit Color Filter Space

1 code implementation12 Nov 2020 Zhengyu Zhao, Zhuoran Liu, Martha Larson

In particular, our color filter space is explicitly specified so that we are able to provide a systematic analysis of model robustness against adversarial color transformations, from both the attack and defense perspectives.

Adversarial Robustness

Profile Consistency Identification for Open-domain Dialogue Agents

1 code implementation EMNLP 2020 Haoyu Song, Yan Wang, Wei-Nan Zhang, Zhengyu Zhao, Ting Liu, Xiaojiang Liu

Maintaining a consistent attribute profile is crucial for dialogue agents to naturally converse with humans.

Attribute

Adversarial Color Enhancement: Generating Unrestricted Adversarial Images by Optimizing a Color Filter

1 code implementation3 Feb 2020 Zhengyu Zhao, Zhuoran Liu, Martha Larson

We introduce an approach that enhances images using a color filter in order to create adversarial effects, which fool neural networks into misclassification.

Image Enhancement

Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance

2 code implementations CVPR 2020 Zhengyu Zhao, Zhuoran Liu, Martha Larson

The success of image perturbations that are designed to fool image classifier is assessed in terms of both adversarial effect and visual imperceptibility.

Image Classification

Who's Afraid of Adversarial Queries? The Impact of Image Modifications on Content-based Image Retrieval

1 code implementation29 Jan 2019 Zhuoran Liu, Zhengyu Zhao, Martha Larson

An adversarial query is an image that has been modified to disrupt content-based image retrieval (CBIR) while appearing nearly untouched to the human eye.

Blocking Content-Based Image Retrieval +1

From Volcano to Toyshop: Adaptive Discriminative Region Discovery for Scene Recognition

1 code implementation23 Jul 2018 Zhengyu Zhao, Martha Larson

As deep learning approaches to scene recognition emerge, they have continued to leverage discriminative regions at multiple scales, building on practices established by conventional image classification research.

Attribute Image Classification +1

Cannot find the paper you are looking for? You can Submit a new open access paper.