Search Results for author: Chenan Wang

Found 7 papers, 2 papers with code

Detection and Recovery Against Deep Neural Network Fault Injection Attacks Based on Contrastive Learning

no code implementations • 30 Jan 2024 • Chenan Wang, Pu Zhao, Siyue Wang, Xue Lin

Deep Neural Network (DNN) models, when deployed on devices as inference engines, are susceptible to Fault Injection Attacks (FIAs), which manipulate model parameters to disrupt inference execution and cause disastrous performance degradation.

Contrastive Learning • Self-Supervised Learning
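
As a rough illustration of the fault-injection threat model described above, the following sketch flips a single bit in one weight of a PyTorch model and measures how far the corrupted logits drift from a clean reference. The model choice, targeted layer, and bit position are illustrative assumptions, not the paper's setup.

    # Hedged sketch: simulate a bit-flip fault on a model parameter and detect
    # it by comparing against a clean reference inference.
    import struct
    import torch
    import torchvision.models as models

    def flip_bit(value: float, bit: int) -> float:
        """Flip one bit in the IEEE-754 float32 encoding of `value`."""
        packed = struct.unpack("<I", struct.pack("<f", value))[0]
        return struct.unpack("<f", struct.pack("<I", packed ^ (1 << bit)))[0]

    model = models.resnet18(weights=None).eval()
    x = torch.randn(1, 3, 224, 224)
    clean_logits = model(x).detach()

    with torch.no_grad():                             # inject a fault into one weight
        w = model.fc.weight
        w[0, 0] = flip_bit(w[0, 0].item(), bit=30)    # flip a high-order exponent bit

    faulty_logits = model(x)
    drift = (faulty_logits - clean_logits).abs().max().item()
    print(f"max logit drift after the bit flip: {drift:.3f}")  # a large drift flags a fault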

Dynamic Adversarial Attacks on Autonomous Driving Systems

no code implementations • 10 Dec 2023 • Amirhosein Chahe, Chenan Wang, Abhishek Jeyapratap, Kaidi Xu, Lifeng Zhou

Moreover, our method utilizes dynamic patches displayed on a screen, allowing for adaptive changes and movement, enhancing the flexibility and performance of the attack.

Adversarial Attack • Autonomous Driving • +3
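
A minimal sketch of the dynamic-patch idea in the paper above: a small patch (standing in for content shown on a screen) is optimized against a surrogate classifier while its position shifts from frame to frame. The surrogate model, frame data, patch size, and placement schedule are all assumptions for illustration, not the authors' pipeline.

    # Hedged sketch: optimize a moving patch so that pasting it into camera
    # frames pushes a surrogate classifier away from the true labels.
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    surrogate = models.resnet18(weights=None).eval()
    for p in surrogate.parameters():
        p.requires_grad_(False)                        # only the patch is optimized

    patch = torch.rand(3, 32, 32, requires_grad=True)  # stand-in for screen content
    opt = torch.optim.Adam([patch], lr=0.01)
    frames = torch.rand(8, 3, 224, 224)                # stand-in camera frames
    labels = torch.zeros(8, dtype=torch.long)          # stand-in ground-truth class

    for step in range(50):
        attacked = frames.clone()
        for i in range(len(frames)):
            col = (10 * step + 3 * i) % (224 - 32)     # patch location changes over time
            attacked[i, :, 96:128, col:col + 32] = patch.clamp(0, 1)
        loss = -F.cross_entropy(surrogate(attacked), labels)  # maximize classification loss
        opt.zero_grad()
        loss.backward()
        opt.step()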

Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?

no code implementations • 30 Nov 2023 • Zhengyue Zhao, Jinhao Duan, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Xing Hu

Although these studies have demonstrated the ability to protect images, it is essential to consider that these methods may not be entirely applicable in real-world scenarios.

Semantic Adversarial Attacks via Diffusion Models

1 code implementation • 14 Sep 2023 • Chenan Wang, Jinhao Duan, Chaowei Xiao, Edward Kim, Matthew Stamm, Kaidi Xu

This framework has two variants: 1) the Semantic Transformation (ST) approach fine-tunes the latent space of the generated image and/or the diffusion model itself; 2) the Latent Masking (LM) approach masks the latent space with another target image and local backpropagation-based interpretation methods.

Adversarial Attack
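
A toy sketch of the latent fine-tuning idea behind the ST variant described above: the latent of a generated image is optimized so that a surrogate classifier switches to a target class while the latent stays close to its original value. The tiny decoder and classifier are hypothetical stand-ins for the diffusion model and victim model; nothing here reproduces the paper's actual pipeline.

    # Hedged sketch: fine-tune a latent so the decoded image fools a surrogate
    # classifier while remaining semantically close to the original latent.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    decoder = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Sigmoid())  # stand-in generator
    victim = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # surrogate classifier
    for p in list(decoder.parameters()) + list(victim.parameters()):
        p.requires_grad_(False)                                        # only the latent is tuned

    z0 = torch.randn(1, 64)                  # latent of the originally generated image
    z = z0.clone().requires_grad_(True)
    target = torch.tensor([3])               # class the attack tries to induce
    opt = torch.optim.Adam([z], lr=0.05)

    for _ in range(200):
        image = decoder(z).view(1, 3, 32, 32)
        attack_loss = F.cross_entropy(victim(image), target)
        semantic_loss = F.mse_loss(z, z0)    # keep the latent (semantics) near the original
        loss = attack_loss + 0.1 * semantic_loss
        opt.zero_grad()
        loss.backward()
        opt.step()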

Shifting Attention to Relevance: Towards the Uncertainty Estimation of Large Language Models

1 code implementation • 3 Jul 2023 • Jinhao Duan, Hao Cheng, Shiqi Wang, Alex Zavalny, Chenan Wang, Renjing Xu, Bhavya Kailkhura, Kaidi Xu

While Large Language Models (LLMs) have demonstrated remarkable potential in natural language generation and instruction following, a persistent challenge lies in their susceptibility to "hallucinations", which erodes trust in their outputs.

Instruction Following • Question Answering • +4

Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation

no code implementations • 2 Jun 2023 • Zhengyue Zhao, Jinhao Duan, Xing Hu, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Yunji Chen

This imperceptible protective noise makes the data almost unlearnable for diffusion models, i.e., diffusion models trained or fine-tuned on the protected data cannot generate high-quality and diverse images related to the protected training data.

Denoising • Image Generation
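
A hedged sketch of the protective-noise idea summarized above, assuming an error-minimizing recipe: a bounded perturbation is optimized so that a denoising (noise-prediction) loss on the protected images becomes trivially small, leaving little for a model trained on them to learn. The tiny convolutional "denoiser", the single noise level, and the 8/255 budget are illustrative assumptions, not the paper's method or settings.

    # Hedged sketch: optimize a bounded protective perturbation against a toy
    # denoising objective so the protected data carries little trainable signal.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    denoiser = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # stand-in for a diffusion UNet
    for p in denoiser.parameters():
        p.requires_grad_(False)

    images = torch.rand(16, 3, 32, 32)                    # data to protect
    delta = torch.zeros_like(images, requires_grad=True)  # imperceptible protective noise
    eps = 8 / 255                                         # assumed perturbation budget
    opt = torch.optim.Adam([delta], lr=1e-2)

    for _ in range(100):
        protected = (images + delta).clamp(0, 1)
        noise = torch.randn_like(protected)
        noisy = protected + 0.3 * noise                   # one fixed noise level for brevity
        loss = F.mse_loss(denoiser(noisy), noise)         # denoising / noise-prediction loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                       # keep the noise imperceptible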

Mixture of Robust Experts (MoRE): A Robust Denoising Method Towards Multiple Perturbations

no code implementations • 21 Apr 2021 • Kaidi Xu, Chenan Wang, Hao Cheng, Bhavya Kailkhura, Xue Lin, Ryan Goldhahn

To tackle the susceptibility of deep neural networks to adversarial examples, adversarial training has been proposed, which provides a notion of robustness through an inner maximization problem that presents a first-order adversary embedded within the outer minimization of the training loss.

Adversarial Robustness • Denoising
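
To make the min-max structure mentioned above concrete, here is a brief sketch of standard PGD-based adversarial training: an inner loop approximately maximizes the loss over an epsilon-bounded perturbation, and the outer step minimizes the training loss on those worst-case inputs. The toy model, data, and hyper-parameters are placeholders and do not represent the MoRE ensemble itself.

    # Hedged sketch of PGD adversarial training (inner max, outer min).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(32, 3, 32, 32)
    y = torch.randint(0, 10, (32,))
    eps, alpha, steps = 8 / 255, 2 / 255, 7

    for epoch in range(5):
        # Inner maximization: PGD searches for a worst-case perturbation in the eps-ball.
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
        # Outer minimization: update the weights on the adversarial examples.
        optimizer.zero_grad()
        F.cross_entropy(model(x + delta.detach()), y).backward()
        optimizer.step()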
