Adversarial Defense
177 papers with code • 10 benchmarks • 5 datasets
Most implemented papers
AOGNets: Compositional Grammatical Architectures for Deep Learning
This paper presents deep compositional grammatical architectures which harness the best of two worlds: grammar models and DNNs.
Certified Defenses against Adversarial Examples
While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs.
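The fragility described here can be reproduced even on a toy linear classifier: a single gradient-sign (FGSM-style) perturbation of bounded size flips the prediction. The model, weights, and epsilon below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical 2-class linear model: logits = W @ x (toy example).
W = np.array([[1.0, -1.0], [-1.0, 1.0]])
x = np.array([0.6, 0.4])  # clean input, correctly predicted as class 0
y = 0

def logits(x):
    return W @ x

def loss_grad(x, y):
    # Gradient of cross-entropy w.r.t. the input for a linear model:
    # dL/dx = W^T (softmax(Wx) - onehot(y))
    z = logits(x)
    p = np.exp(z - z.max()); p /= p.sum()
    p[y] -= 1.0
    return W.T @ p

eps = 0.3
x_adv = x + eps * np.sign(loss_grad(x, y))  # one FGSM step

clean_pred = int(np.argmax(logits(x)))  # class 0
adv_pred = int(np.argmax(logits(x_adv)))  # flipped to class 1
```

Even though the perturbation is small (at most 0.3 per coordinate), it moves the input across the decision boundary.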
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples.
On Evaluating Adversarial Robustness
Correctly evaluating defenses against adversarial examples has proven to be extremely difficult.
Decoupled Kullback-Leibler Divergence Loss
In this paper, we delve deeper into the Kullback-Leibler (KL) Divergence loss and observe that it is equivalent to the Decoupled Kullback-Leibler (DKL) Divergence loss, which consists of 1) a weighted Mean Square Error (wMSE) loss and 2) a Cross-Entropy loss incorporating soft labels.
advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch
advertorch is a toolbox for adversarial robustness research.
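As a sketch of the kind of attack such a toolbox provides, here is a minimal L∞ projected gradient descent (PGD) loop in plain numpy on a toy linear model. The model, step size, and epsilon are illustrative assumptions; advertorch itself ships PyTorch implementations of attacks like this.

```python
import numpy as np

# Toy linear model to illustrate an L-inf PGD attack loop.
W = np.array([[1.0, -1.0], [-1.0, 1.0]])
x0 = np.array([0.6, 0.4])   # clean input, predicted class 0
y = 0
eps, step, n_iter = 0.3, 0.1, 10

def grad(x, y):
    # dL/dx for cross-entropy on a linear model: W^T (softmax(Wx) - onehot(y))
    z = W @ x
    p = np.exp(z - z.max()); p /= p.sum()
    p[y] -= 1.0
    return W.T @ p

x = x0.copy()
for _ in range(n_iter):
    x = x + step * np.sign(grad(x, y))   # gradient-ascent step on the loss
    x = x0 + np.clip(x - x0, -eps, eps)  # project back into the eps-ball

adv_pred = int(np.argmax(W @ x))  # prediction flipped to class 1
```

The projection step is what distinguishes PGD from a plain gradient-sign attack: the adversarial example always stays within the allowed perturbation budget.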
Robust Decision Trees Against Adversarial Examples
Although adversarial examples and model robustness have been extensively studied in the context of linear models and neural networks, research on the robustness of tree-based models, and on how to make them robust against adversarial examples, is still limited.
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
In this paper, we employ adversarial training to improve the performance of randomized smoothing.
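Randomized smoothing itself can be sketched simply: classify many Gaussian-noised copies of the input and return the majority vote. The base classifier, noise level, and sample count below are illustrative assumptions; the paper's contribution is to train the base classifier adversarially on such noised inputs.

```python
import numpy as np

# Randomized smoothing: majority vote over Gaussian-perturbed copies
# of the input, using a toy linear base classifier (illustrative only).
rng = np.random.default_rng(0)
W = np.array([[1.0, -1.0], [-1.0, 1.0]])

def base_predict(x):
    return int(np.argmax(W @ x))

def smoothed_predict(x, sigma=0.25, n=1000):
    votes = np.zeros(2, dtype=int)
    for _ in range(n):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        votes[base_predict(noisy)] += 1
    return int(np.argmax(votes))

x = np.array([0.6, 0.4])
pred = smoothed_predict(x)  # majority vote agrees with the clean prediction
```

The vote margin is what makes certification possible: the more consistently the base classifier answers under noise, the larger the perturbation radius for which the smoothed prediction is provably stable.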
Testing Robustness Against Unforeseen Adversaries
To home in on this discrepancy between research and reality, we introduce ImageNet-UA, a framework for evaluating model robustness against a range of unforeseen adversaries, including eighteen new non-L_p attacks.
ATHENA: A Framework based on Diverse Weak Defenses for Building Adversarial Defense
There has been extensive research on developing defense techniques against adversarial attacks; however, existing defenses are mainly designed for specific model families or application domains and therefore cannot be easily extended.