Adversarial Defense

179 papers with code • 10 benchmarks • 5 datasets

Latest papers with no code

AED-PADA: Improving Generalizability of Adversarial Example Detection via Principal Adversarial Domain Adaptation

no code yet • 19 Apr 2024

Specifically, our approach identifies the Principal Adversarial Domains (PADs), i.e., a combination of features of adversarial examples from different attacks, which together provide large coverage of the entire adversarial feature space.
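
A minimal sketch of one reading of this idea: pool feature representations of adversarial examples produced by several different attacks and extract a small set of principal directions that cover most of the pooled adversarial feature space. The function name, the use of SVD, and the random stand-in features are all illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def principal_adversarial_directions(features_per_attack, n_components=10):
    """features_per_attack: list of (n_i, d) arrays, one per attack."""
    pooled = np.concatenate(features_per_attack, axis=0)      # (N, d) joint adversarial features
    centered = pooled - pooled.mean(axis=0, keepdims=True)
    # SVD yields orthonormal directions ordered by explained variance.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)
    return vt[:n_components], explained[:n_components]

# Toy usage with random stand-ins for features from three different attacks.
rng = np.random.default_rng(0)
feats = [rng.normal(size=(200, 64)) for _ in range(3)]
directions, coverage = principal_adversarial_directions(feats, n_components=5)
print(directions.shape, coverage.sum())   # (5, 64) basis, fraction of variance covered
```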

Efficiently Adversarial Examples Generation for Visual-Language Models under Targeted Transfer Scenarios using Diffusion Models

no code yet • 16 Apr 2024

Specifically, AdvDiffVLM employs Adaptive Ensemble Gradient Estimation to modify the score during the diffusion model's reverse generation process, ensuring the adversarial examples produced contain natural adversarial semantics and thus possess enhanced transferability.
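
A hedged sketch of score modification during the reverse diffusion process, loosely in the spirit of the description above: the predicted noise is shifted by the gradient of a target objective so the denoised sample drifts toward the adversarial goal. The noise schedule, `epsilon_model`, `target_loss`, and `guidance_scale` are illustrative stand-ins, not AdvDiffVLM's actual components.

```python
import torch

def guided_reverse_step(x_t, t, epsilon_model, target_loss, alpha_bar, guidance_scale=1.0):
    x_t = x_t.detach().requires_grad_(True)
    eps = epsilon_model(x_t, t)                        # predicted noise (score up to scaling)
    loss = target_loss(x_t)                            # e.g. similarity to a target concept
    grad = torch.autograd.grad(loss, x_t)[0]
    # Shift the predicted noise so the denoised estimate moves toward the target.
    eps_guided = eps - guidance_scale * torch.sqrt(1.0 - alpha_bar[t]) * grad
    x0_pred = (x_t - torch.sqrt(1.0 - alpha_bar[t]) * eps_guided) / torch.sqrt(alpha_bar[t])
    return x0_pred.detach()

# Toy usage with dummy stand-in components.
alpha_bar = torch.linspace(0.99, 0.01, 10)
dummy_eps = lambda x, t: torch.zeros_like(x)
dummy_loss = lambda x: (x ** 2).sum()
x = torch.randn(1, 3, 8, 8)
x0 = guided_reverse_step(x, 5, dummy_eps, dummy_loss, alpha_bar, guidance_scale=0.1)
```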

Struggle with Adversarial Defense? Try Diffusion

no code yet • 12 Apr 2024

Unlike data-driven classifiers, TMDC, guided by Bayesian principles, utilizes the conditional likelihood from diffusion models to determine the class probabilities of input images, thereby insulating against the influences of data shift and the limitations of adversarial training.
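
The Bayesian reading of this is simple: if a diffusion model can score the conditional likelihood p(x | y) for each class y, the class posterior follows from Bayes' rule, p(y | x) ∝ p(x | y) p(y). A minimal numerical sketch, with made-up log-likelihood values purely for illustration:

```python
import numpy as np

def class_posterior(log_likelihoods, log_prior=None):
    log_likelihoods = np.asarray(log_likelihoods, dtype=float)
    if log_prior is None:
        log_prior = np.zeros_like(log_likelihoods)      # uniform prior over classes
    logits = log_likelihoods + log_prior
    logits -= logits.max()                              # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Highest conditional likelihood wins under a uniform prior.
print(class_posterior([-1050.2, -1047.9, -1052.6]))
```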

Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks

no code yet • 4 Apr 2024

Despite providing high-performance solutions for computer vision tasks, deep neural network (DNN) models have been shown to be extremely vulnerable to adversarial attacks.

Adversarial Attacks and Dimensionality in Text Classifiers

no code yet • 3 Apr 2024

For all of the aforementioned studies, we have run tests on multiple models with varying dimensionality and used a word-vector level adversarial attack to substantiate the findings.
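
A hedged sketch of a "word-vector level" attack as the phrase is commonly understood: perturb the continuous word embeddings (not the discrete tokens) in the gradient direction that increases the classification loss, FGSM-style. The tiny mean-pooling classifier below is a stand-in, not any model from the paper.

```python
import torch
import torch.nn.functional as F

def embedding_fgsm(embeddings, labels, classifier, epsilon=0.05):
    emb = embeddings.detach().requires_grad_(True)       # (batch, seq_len, dim)
    loss = F.cross_entropy(classifier(emb), labels)
    grad = torch.autograd.grad(loss, emb)[0]
    return (emb + epsilon * grad.sign()).detach()         # perturbed word vectors

head = torch.nn.Linear(16, 2)                             # stand-in classification head
clf = lambda e: head(e.mean(dim=1))                       # mean-pool tokens, then classify
emb = torch.randn(4, 10, 16)                              # 4 sentences, 10 tokens, dim 16
labels = torch.tensor([0, 1, 0, 1])
adv_emb = embedding_fgsm(emb, labels, clf)
print(adv_emb.shape)
```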

Defense without Forgetting: Continual Adversarial Defense with Anisotropic & Isotropic Pseudo Replay

no code yet • 2 Apr 2024

In this paper, we discuss for the first time the concept of continual adversarial defense under a sequence of attacks, and propose a lifelong defense baseline called Anisotropic & Isotropic Replay (AIR), which offers three advantages: (1) Isotropic replay ensures model consistency in the neighborhood distribution of new data, indirectly aligning the output preference between old and new tasks.
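
A minimal sketch of one way to read "consistency in the neighborhood distribution of new data": penalize divergence between the model's predictions on a sample and on an isotropically perturbed copy of it. The noise scale and the KL formulation are illustrative assumptions, not AIR itself.

```python
import torch
import torch.nn.functional as F

def neighborhood_consistency_loss(model, x, sigma=0.05):
    noisy = x + sigma * torch.randn_like(x)               # isotropic Gaussian neighbor
    p_clean = F.log_softmax(model(x), dim=-1)
    p_noisy = F.log_softmax(model(noisy), dim=-1)
    # KL(clean || noisy), averaged over the batch.
    return F.kl_div(p_noisy, p_clean, log_target=True, reduction="batchmean")

model = torch.nn.Linear(32, 10)                            # stand-in classifier
x = torch.randn(8, 32)
print(neighborhood_consistency_loss(model, x))
```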

Ensemble Adversarial Defense via Integration of Multiple Dispersed Low Curvature Models

no code yet • 25 Mar 2024

In this work, we aim to enhance ensemble diversity by reducing attack transferability.
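
A hedged sketch of measuring the quantity this abstract says the method tries to reduce, attack transferability within an ensemble: craft an adversarial example against one member and check how often it also fools another member. The FGSM attack and the tiny linear models are illustrative stand-ins.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).detach()

def transfer_rate(src, dst, x, y, eps=0.1):
    x_adv = fgsm(src, x, y, eps)                           # attack crafted on the source member
    with torch.no_grad():
        return (dst(x_adv).argmax(dim=-1) != y).float().mean().item()

members = [torch.nn.Linear(20, 3) for _ in range(2)]
x, y = torch.randn(16, 20), torch.randint(0, 3, (16,))
print(transfer_rate(members[0], members[1], x, y))         # lower means a more diverse ensemble
```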

Subspace Defense: Discarding Adversarial Perturbations by Learning a Subspace for Clean Signals

no code yet • 24 Mar 2024

We first empirically show that the features of clean signals and of adversarial perturbations are redundant and span low-dimensional linear subspaces with minimal overlap, respectively, and that classical low-dimensional subspace projection can suppress perturbation features lying outside the subspace of clean signals.
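
A minimal sketch of the subspace-projection idea: estimate a low-rank basis from clean features via SVD, then project (possibly perturbed) features onto that clean subspace to discard components outside it. The dimensions and rank are arbitrary illustrative choices.

```python
import numpy as np

def fit_clean_subspace(clean_features, k=8):
    mean = clean_features.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(clean_features - mean, full_matrices=False)
    return mean, vt[:k]                                    # basis of the clean-signal subspace

def project(features, mean, basis):
    centered = features - mean
    return centered @ basis.T @ basis + mean               # keep only the in-subspace component

rng = np.random.default_rng(0)
clean = rng.normal(size=(500, 64))
mean, basis = fit_clean_subspace(clean, k=8)
adv = clean[:10] + 0.5 * rng.normal(size=(10, 64))         # stand-in "perturbed" features
denoised = project(adv, mean, basis)
print(denoised.shape)
```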

Adversarial Defense Teacher for Cross-Domain Object Detection under Poor Visibility Conditions

no code yet • 23 Mar 2024

Existing object detectors encounter challenges in handling domain shifts between training and real-world data, particularly under poor visibility conditions like fog and night.

ADAPT to Robustify Prompt Tuning Vision Transformers

no code yet • 19 Mar 2024

The performance of deep models, including Vision Transformers, is known to be vulnerable to adversarial attacks.