Adversarial Defense
179 papers with code • 10 benchmarks • 5 datasets
Latest papers
Robust Classification via a Single Diffusion Model
Since our method does not require training on particular adversarial attacks, we demonstrate that it is more generalizable to defend against multiple unseen threats.
Decoupled Kullback-Leibler Divergence Loss
In this paper, we delve deeper into the Kullback-Leibler (KL) Divergence loss and observe that it is equivalent to the Decoupled Kullback-Leibler (DKL) Divergence loss, which consists of 1) a weighted Mean Square Error (wMSE) loss and 2) a Cross-Entropy loss incorporating soft labels.
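The DKL decomposition above is specific to the paper, but a classical identity underlies any such split of a KL loss: KL(p‖q) = H(p, q) − H(p), i.e. KL divergence is cross-entropy minus entropy. A minimal numeric check of that identity (a generic sketch, not the paper's DKL loss):

```python
import math

def entropy(p):
    """Shannon entropy H(p) in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    """Cross-entropy H(p, q) in nats; assumes q > 0 wherever p > 0."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

def kl_divergence(p, q):
    """KL(p || q) computed directly from its definition."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]   # a soft-label distribution
q = [0.5, 0.3, 0.2]   # a model's predicted distribution

# KL(p || q) == H(p, q) - H(p), up to floating-point error
assert abs(kl_divergence(p, q) - (cross_entropy(p, q) - entropy(p))) < 1e-12
```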
Mist: Towards Improved Adversarial Examples for Diffusion Models
Diffusion Models (DMs) have driven great success in AI-generated content, especially in artwork creation, yet they raise new concerns about intellectual property and copyright.
Masked Language Model Based Textual Adversarial Example Detection
To explore how to use the masked language model in adversarial detection, we propose a novel textual adversarial example detection method, namely Masked Language Model-based Detection (MLMD), which can produce clearly distinguishable signals between normal examples and adversarial examples by exploring the changes in manifolds induced by the masked language model.
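A schematic of the masking-based detection idea (every component below is a toy stub, not the paper's models): mask tokens one at a time, let a masked language model propose replacements, and score how much the classifier's output shifts; adversarial inputs tend to shift more.

```python
def mlmd_score(tokens, mlm_fill, classify):
    """Masking-based detection sketch: average change in classifier
    output when each token is masked and reconstructed by an MLM."""
    base = classify(tokens)
    shifts = []
    for i in range(len(tokens)):
        masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
        reconstructed = mlm_fill(masked, i)
        shifts.append(abs(classify(reconstructed) - base))
    return sum(shifts) / len(shifts)   # high score -> likely adversarial

# Toy stubs: the "classifier" counts "bad" tokens; the "MLM" fills with "good".
classify = lambda toks: toks.count("bad") / len(toks)
mlm_fill = lambda toks, i: [t if t != "[MASK]" else "good" for t in toks]

normal = ["a", "good", "movie"]
perturbed = ["a", "bad", "movie"]   # stand-in for an adversarially edited input
# The perturbed input's label is unstable under mask-and-reconstruct,
# so it receives a higher MLMD-style score than the normal input.
```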
Robust Mode Connectivity-Oriented Adversarial Defense: Enhancing Neural Network Robustness Against Diversified $\ell_p$ Attacks
Adversarial robustness is a key concept in measuring the ability of neural networks to defend against adversarial attacks during the inference phase.
Among Us: Adversarially Robust Collaborative Perception by Consensus
This leads to our hypothesize-and-verify framework: perception results with and without collaboration from a random subset of teammates are compared until a consensus is reached.
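The hypothesize-and-verify loop described above can be sketched generically (the `fuse` and `agree` functions here are hypothetical placeholders, not the paper's perception pipeline): fuse the ego result with a random subset of teammate messages and accept a fused hypothesis only once it agrees with the collaboration-free result.

```python
import random

def consensus_perception(ego_result, teammate_msgs, fuse, agree,
                         max_trials=10, seed=0):
    """Hypothesize-and-verify sketch: fuse ego perception with random
    subsets of teammate messages until a hypothesis agrees with the
    collaboration-free result; otherwise fall back to ego-only output."""
    rng = random.Random(seed)
    for _ in range(max_trials):
        subset = rng.sample(teammate_msgs, k=rng.randint(0, len(teammate_msgs)))
        hypothesis = fuse(ego_result, subset)
        if agree(hypothesis, ego_result):
            return hypothesis        # consensus reached
    return ego_result                # no consensus: trust ego perception

# Toy demo: "perception results" are numbers; fusion averages them,
# and agreement means the fused value stays close to the ego value.
fuse = lambda ego, msgs: (ego + sum(msgs)) / (1 + len(msgs))
agree = lambda a, b: abs(a - b) < 0.5
out = consensus_perception(1.0, [1.2, 0.9, 5.0], fuse, agree)  # 5.0 plays the attacker
```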
SMUG: Towards robust MRI reconstruction by smoothed unrolling
To address this problem, we propose a novel image reconstruction framework, termed SMOOTHED UNROLLING (SMUG), which advances a deep unrolling-based MRI reconstruction model using a randomized smoothing (RS)-based robust learning operation.
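Randomized smoothing itself is a standard construction: classify many Gaussian-noised copies of the input and take a majority vote. A stdlib-only sketch with a toy one-dimensional classifier (illustrative only, not the SMUG reconstruction model):

```python
import random
from collections import Counter

def smoothed_classify(x, base_classifier, sigma=0.5, n_samples=200, seed=0):
    """Randomized smoothing: majority vote of the base classifier
    over Gaussian perturbations of the input."""
    rng = random.Random(seed)
    votes = Counter(base_classifier(x + rng.gauss(0.0, sigma))
                    for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# Toy 1-D base classifier: label 1 on the positive side of the boundary.
base = lambda x: 1 if x > 0.0 else 0

# Most Gaussian mass around the clean input x = 0.4 stays on the
# positive side, so the noise-averaged vote recovers the clean label.
pred = smoothed_classify(0.4, base)
```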
Language-Driven Anchors for Zero-Shot Adversarial Robustness
Previous research has mainly focused on improving adversarial robustness in the fully supervised setting, leaving the challenging domain of zero-shot adversarial robustness an open question.
TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization
Robustness evaluation against adversarial examples has become increasingly important to unveil the trustworthiness of the prevailing deep models in natural language processing (NLP).
Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection
Convolutional neural networks (CNNs) define the state-of-the-art solution on many perceptual tasks.