Adversarial Defense
176 papers with code • 10 benchmarks • 5 datasets
Libraries
Use these libraries to find Adversarial Defense models and implementations.

Latest papers
A Simple and Yet Fairly Effective Defense for Graph Neural Networks
Successful combinations of our NoisyGNN approach with existing defense techniques demonstrate even further improved adversarial defense results.
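The core idea behind NoisyGNN is injecting random noise into a GNN's hidden representations during training. A minimal numpy sketch of one noise-injected GCN layer, with a toy 3-node graph; the graph, features, and `sigma` are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_gcn_layer(A_hat, H, W, sigma=0.1, training=True):
    """One GCN layer with Gaussian noise injected into the hidden
    representation (the noise-injection idea, sketched)."""
    Z = A_hat @ H @ W
    if training:
        Z = Z + sigma * rng.normal(size=Z.shape)  # defense: random noise
    return np.maximum(Z, 0.0)  # ReLU

# Tiny 3-node graph with self-loops, symmetric-normalized adjacency.
A = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=float)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

H = rng.normal(size=(3, 4))   # node features (toy)
W = rng.normal(size=(4, 2))   # layer weights (toy)
out = noisy_gcn_layer(A_hat, H, W, sigma=0.1)
```

At inference time one would typically set `training=False` so predictions are deterministic.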
Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models
The CLIP model, or one of its variants, is used as a frozen vision encoder in many vision-language models (VLMs), e.g., LLaVA and OpenFlamingo.
Detection and Defense of Unlearnable Examples
Detectability of unlearnable examples with simple networks motivates us to design a novel defense method.
Robust MRI Reconstruction by Smoothed Unrolling (SMUG)
To address this problem, we propose a novel image reconstruction framework, termed Smoothed Unrolling (SMUG), which advances a deep unrolling-based MRI reconstruction model using a randomized smoothing (RS)-based robust learning approach.
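Randomized smoothing, the ingredient SMUG builds on, replaces a model's output with its average over Gaussian perturbations of the input. A toy numpy sketch, with a fixed linear map standing in for the reconstruction network (the model, `sigma`, and sample count are illustrative, not SMUG's actual configuration):

```python
import numpy as np

def smoothed_predict(model, x, sigma=0.25, n_samples=100, rng=None):
    """Randomized-smoothing prediction: average the model's output
    over Gaussian perturbations of the input."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    outputs = np.array([model(x + n) for n in noise])
    return outputs.mean(axis=0)

# Toy "model": a fixed linear map standing in for a reconstruction network.
W = np.array([[2.0, 0.0], [0.0, 3.0]])
model = lambda x: W @ x

x = np.array([1.0, -1.0])
y = smoothed_predict(model, x, sigma=0.1, n_samples=2000, rng=0)
# For a linear model the smoothed output matches the clean output in expectation.
```

The payoff is stability: small adversarial shifts of `x` change the averaged output far less than a single forward pass.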
Defense Against Adversarial Attacks using Convolutional Auto-Encoders
Deep learning models, while achieving state-of-the-art performance on many tasks, are susceptible to adversarial attacks that exploit inherent vulnerabilities in their architectures.
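An auto-encoder defense purifies inputs by reconstructing them through a model trained on clean data, discarding off-manifold perturbation components before classification. A minimal numpy sketch using a linear auto-encoder fitted by PCA as a stand-in for a convolutional one; the data and perturbation are synthetic illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean "data" living on a 2-D subspace of a 10-D space.
basis = rng.normal(size=(10, 2))
clean = rng.normal(size=(200, 2)) @ basis.T

# A linear auto-encoder fitted by PCA: encode = project onto the top-2
# principal directions, decode = map back. Stand-in for a conv AE.
U, S, Vt = np.linalg.svd(clean - clean.mean(0), full_matrices=False)
components = Vt[:2]
mean = clean.mean(0)
purify = lambda x: mean + ((x - mean) @ components.T) @ components

# An adversarial-style perturbation pushes a sample off the data manifold;
# purification removes the off-manifold component before classification.
x = clean[0]
delta = rng.normal(size=10)
delta -= components.T @ (components @ delta)  # keep only off-manifold noise
x_adv = x + delta
x_pur = purify(x_adv)
```

In the full pipeline the classifier then runs on `x_pur` rather than `x_adv`.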
Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization
Adversarial Training (AT), pivotal in fortifying the robustness of deep learning models, is extensively adopted in practical applications.
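Adversarial Training alternates an inner maximization (craft perturbations that increase the loss) with an outer minimization (update weights on those perturbed inputs). A toy numpy sketch with logistic loss, a linear model, and an FGSM inner step; the data, `eps`, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: labels in {-1, +1}, logistic loss, linear model.
X = rng.normal(size=(200, 2)) + np.array([[1.5, 0.0]])
y = np.where(rng.random(200) < 0.5, 1.0, -1.0)
X = X * y[:, None]            # symmetric classes, roughly separable
w = np.zeros(2)
eps, lr = 0.1, 0.1

def grad_loss(w, X, y):
    """Gradient of the mean logistic loss w.r.t. the weights."""
    m = y * (X @ w)                     # margins
    s = -y / (1.0 + np.exp(m))          # d loss / d logit
    return (s[:, None] * X).mean(axis=0)

for _ in range(200):
    # Inner maximization (FGSM step): perturb inputs to increase the loss.
    g_x = (-y / (1.0 + np.exp(y * (X @ w))))[:, None] * w[None, :]
    X_adv = X + eps * np.sign(g_x)
    # Outer minimization: descend on the adversarial examples.
    w -= lr * grad_loss(w, X_adv, y)
```

In deep-learning practice the FGSM step is usually replaced by several PGD iterations, at higher cost.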
Enhancing Robust Representation in Adversarial Training: Alignment and Exclusion Criteria
Deep neural networks are vulnerable to adversarial noise.
DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training
Our extensive experiments show that DeepZero achieves state-of-the-art (SOTA) accuracy on a ResNet-20 trained on CIFAR-10, approaching first-order (FO) training performance for the first time.
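Zeroth-order optimization replaces backpropagated gradients with estimates built from function evaluations only. A hedged numpy sketch of the randomized finite-difference estimator (the smoothing radius and query budget are illustrative, not DeepZero's actual scheme):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_queries=200, rng=None):
    """Randomized finite-difference zeroth-order gradient estimate:
    g ~ mean over random directions u of [(f(x + mu*u) - f(x)) / mu] * u."""
    rng = np.random.default_rng(rng)
    g = np.zeros(x.size)
    fx = f(x)
    for _ in range(n_queries):
        u = rng.normal(size=x.size)
        g += (f(x + mu * u) - fx) / mu * u
    return g / n_queries

# Sanity check on a quadratic, where the true gradient is 2 * x.
f = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, -2.0, 0.5])
g = zo_gradient(f, x, n_queries=5000, rng=0)
```

The estimate's variance grows with dimension, which is why scaling such estimators to deep-network training is the hard part DeepZero addresses.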
Language Guided Adversarial Purification
Adversarial purification using generative models demonstrates strong adversarial defense performance.
Robust Physics-based Deep MRI Reconstruction Via Diffusion Purification
In particular, we present a robustification strategy that improves the resilience of DL-based MRI reconstruction methods by utilizing pretrained diffusion models as noise purifiers.