Adversarial Defense
179 papers with code • 10 benchmarks • 5 datasets
Libraries
Use these libraries to find Adversarial Defense models and implementations.

Latest papers
Language Guided Adversarial Purification
Adversarial purification using generative models demonstrates strong adversarial defense performance.
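To make the purification idea concrete, here is a toy sketch, not the method of any paper above: an off-distribution (adversarially perturbed) input is walked back toward high probability under the data distribution by following its score function, the core mechanic behind diffusion-based purification. The score function and the Gaussian "data distribution" are placeholder assumptions; a real purifier would use a pretrained diffusion model's learned score.

```python
import numpy as np

def purify(x_adv, score_fn, steps=50, step_size=0.05, noise_scale=0.0, rng=None):
    """Toy Langevin-style purification: nudge the input toward high
    probability under the data distribution via its score function.
    (Real diffusion purifiers add noise, then run a learned reverse process.)"""
    rng = rng or np.random.default_rng(0)
    x = x_adv.copy()
    for _ in range(steps):
        x = x + step_size * score_fn(x)          # gradient of log-density
        if noise_scale > 0:
            x = x + noise_scale * rng.standard_normal(x.shape)
    return x

# Placeholder "data distribution": standard Gaussian, whose score is -x.
score = lambda x: -x
x_adv = np.full(4, 3.0)      # an input pushed far off-distribution
x_pure = purify(x_adv, score)
```

After purification the input lies much closer to the assumed data manifold; in a real defense the purified input is then fed to an unmodified classifier.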
Robust Physics-based Deep MRI Reconstruction Via Diffusion Purification
In particular, we present a robustification strategy that improves the resilience of DL-based MRI reconstruction methods by utilizing pretrained diffusion models as noise purifiers.
DAD++: Improved Data-free Test Time Adversarial Defense
With the increasing deployment of deep neural networks in safety-critical applications such as self-driving cars, medical imaging, and anomaly detection, adversarial robustness has become a crucial concern for the reliability of these networks in real-world scenarios.
DiffDefense: Defending against Adversarial Attacks via Diffusion Models
This paper presents a novel reconstruction method that leverages Diffusion Models to protect machine learning classifiers against adversarial attacks, all without requiring any modifications to the classifiers themselves.
Robustifying Point Cloud Networks by Refocusing
In this study, we develop a general mechanism to increase neural network robustness based on focus analysis.
AdvDiff: Generating Unrestricted Adversarial Examples using Diffusion Models
Unrestricted adversarial attacks present a serious threat to deep learning models and adversarial defense techniques.
Making Pre-trained Language Models both Task-solvers and Self-calibrators
In this work, we consider the practical scenario that we need to effectively utilize training samples to make PLMs both task-solvers and self-calibrators.
Erasing, Transforming, and Noising Defense Network for Occluded Person Re-Identification
Occlusion perturbation presents a significant challenge in person re-identification (re-ID), and existing methods that rely on external visual cues require additional computational resources and only consider the issue of missing information caused by occlusion.
A Closer Look at the Adversarial Robustness of Deep Equilibrium Models
Deep equilibrium models (DEQs) abandon the traditional layer-stacking paradigm and instead solve for the fixed point of a single layer.
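A minimal sketch of the DEQ forward pass under stated assumptions: the layer is a toy `tanh(W z + x)` map with small weights so that naive fixed-point iteration converges (real DEQs use faster root solvers such as Anderson acceleration or Broyden's method, and differentiate implicitly through the fixed point).

```python
import numpy as np

def deq_forward(f, x, z0, max_iter=200, tol=1e-8):
    """Solve z* = f(z*, x) by naive fixed-point iteration."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
W = 0.3 * rng.standard_normal((3, 3))   # small weights keep f contractive
x = rng.standard_normal(3)
f = lambda z, x: np.tanh(W @ z + x)     # a single implicit "layer"
z_star = deq_forward(f, x, np.zeros(3))
```

The returned `z_star` satisfies the layer equation itself, `z_star ≈ f(z_star, x)`, which is what replaces the output of a deep stack of explicit layers.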
CARSO: Blending Adversarial Training and Purification Improves Adversarial Robustness
In this work, we propose a novel adversarial defense mechanism for image classification, CARSO, blending the paradigms of adversarial training and adversarial purification in a mutually beneficial, robustness-enhancing way.