Adversarial Defense
179 papers with code • 10 benchmarks • 5 datasets
Most implemented papers
Learnable Boundary Guided Adversarial Training
Previous adversarial training methods raise model robustness at the cost of accuracy on natural data.
Safety Verification of Deep Neural Networks
Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations.
Delving into Transferable Adversarial Examples and Black-box Attacks
In this work, we conduct the first extensive study of transferability over large models and a large-scale dataset, and the first study of the transferability of targeted adversarial examples with their target labels.
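The transfer setting is easy to sketch: craft an adversarial example against a local surrogate model, then feed it to an unseen target. The FGSM-based sketch below is illustrative only; the model pair, epsilon, and input are assumptions, not the paper's experimental setup.

```python
import torch
import torchvision.models as models

# Surrogate (white-box) and target (treated as black-box) models --
# arbitrary choices for illustration, not the paper's setup.
surrogate = models.resnet18(weights="IMAGENET1K_V1").eval()
target = models.vgg16(weights="IMAGENET1K_V1").eval()

def fgsm_transfer(x, label, eps=8 / 255):
    """Craft an FGSM example on the surrogate and return it for the target."""
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(surrogate(x), label)
    loss.backward()
    # One signed-gradient step, clipped back to the valid image range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed image
label = torch.tensor([42])       # stand-in ground-truth class
x_adv = fgsm_transfer(x, label)
print("target prediction:", target(x_adv).argmax(dim=1).item())
```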
Mitigating Adversarial Effects Through Randomization
Convolutional neural networks have demonstrated high accuracy on various tasks in recent years.
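The defense named in the title perturbs the input itself at inference time. A minimal sketch of random resizing followed by random zero-padding; the 299→331 sizes follow common ImageNet setups and are an assumption here:

```python
import random
import torch
import torch.nn.functional as F

def random_resize_pad(x, out_size=331):
    """Randomly resize the image, then randomly zero-pad it to `out_size`.

    Illustrative re-implementation of randomized inference preprocessing;
    the exact sizes and sampling in the paper may differ.
    """
    _, _, h, w = x.shape
    new_size = random.randint(h, out_size)              # random resolution
    x = F.interpolate(x, size=(new_size, new_size),
                      mode="bilinear", align_corners=False)
    pad_total = out_size - new_size
    left = random.randint(0, pad_total)
    top = random.randint(0, pad_total)
    # F.pad order for 4-D input: (left, right, top, bottom)
    return F.pad(x, (left, pad_total - left, top, pad_total - top))

x = torch.rand(1, 3, 299, 299)
print(random_resize_pad(x).shape)  # torch.Size([1, 3, 331, 331])
```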
Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser
First, with HGD as a defense, the target model is more robust to both white-box and black-box adversarial attacks.
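The core of HGD is its training signal: instead of a pixel-level reconstruction loss, the denoiser is penalized for the gap between the target model's high-level features on clean and denoised inputs. A hedged sketch, where `denoiser` and `feature_extractor` are placeholders rather than the paper's architectures:

```python
import torch
import torch.nn as nn

# Placeholder denoiser; the paper uses a much larger U-net-style network.
denoiser = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))

def hgd_loss(feature_extractor, x_clean, x_adv):
    """Representation-guided loss: compare the target model's features on
    the clean image and on the denoised adversarial image."""
    x_denoised = denoiser(x_adv)
    with torch.no_grad():
        feats_clean = feature_extractor(x_clean)   # fixed reference features
    feats_denoised = feature_extractor(x_denoised)
    # L1 gap between feature maps; the paper's exact loss variant may differ.
    return (feats_clean - feats_denoised).abs().mean()
```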
Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations
Then we propose a new dataset, Icons-50, which opens research on a new kind of robustness: surface variation robustness.
Efficient Formal Safety Analysis of Neural Networks
Our approach can check different safety properties and find concrete counterexamples for networks that are 10× larger than the ones supported by existing analysis techniques.
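The paper's symbolic interval analysis is far tighter, but the basic idea of pushing an entire input region through a network can be shown with naive interval arithmetic. In this sketch the network, weights, and checked property are all made up for illustration:

```python
import numpy as np

def interval_forward(lower, upper, weights, biases):
    """Propagate the input box [lower, upper] through a ReLU network using
    naive interval arithmetic (much looser than symbolic interval analysis)."""
    n_layers = len(weights)
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        new_lower = W_pos @ lower + W_neg @ upper + b
        new_upper = W_pos @ upper + W_neg @ lower + b
        if i < n_layers - 1:  # ReLU on hidden layers only
            new_lower = np.maximum(new_lower, 0)
            new_upper = np.maximum(new_upper, 0)
        lower, upper = new_lower, new_upper
    return lower, upper

# Toy 2-layer network with made-up weights; the "safety property" checked
# here is simply that the output stays non-negative over the whole box.
weights = [np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([[1.0, 1.0]])]
biases = [np.zeros(2), np.zeros(1)]
lo, hi = interval_forward(np.array([0.0, 0.0]), np.array([0.1, 0.1]),
                          weights, biases)
print("output bounds:", lo, hi, "-> property holds:", bool(lo[0] >= 0))
```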
Feature Denoising for Improving Adversarial Robustness
This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks.
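That observation motivates inserting denoising operations between convolutional blocks. Below is a minimal residual denoising block using a 3×3 mean filter as a stand-in; the paper's strongest variant uses non-local means, so treat this simplification as an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFilterDenoiseBlock(nn.Module):
    """Residual feature-denoising block: smooth the feature map, apply a
    1x1 conv, and add the result back to the input (simplified variant)."""

    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        # 3x3 mean filter as a cheap stand-in for a non-local-means denoiser.
        denoised = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        return x + self.proj(denoised)   # residual connection keeps signal

block = MeanFilterDenoiseBlock(channels=64)
print(block(torch.rand(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```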
Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
In this paper, we propose a new threat model for adversarial attacks based on the Wasserstein distance.
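Under a Wasserstein threat model the perturbation budget lets pixel mass move between nearby locations, rather than letting each pixel change independently as in an ℓp ball. The entropy-regularized distance at the heart of the method can be approximated with a few Sinkhorn iterations; this is a generic Sinkhorn sketch, not the paper's projected attack:

```python
import numpy as np

def sinkhorn_distance(a, b, cost, reg=0.1, n_iters=200):
    """Entropy-regularized Wasserstein distance between histograms a and b
    via Sinkhorn iterations (generic version, not the paper's projection)."""
    K = np.exp(-cost / reg)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # alternating scaling updates
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]   # implied transport plan
    return float((plan * cost).sum())

# Two tiny 1-D "images" as mass distributions over 4 pixel positions.
a = np.array([0.7, 0.1, 0.1, 0.1])
b = np.array([0.1, 0.1, 0.1, 0.7])
positions = np.arange(4.0)
cost = np.abs(positions[:, None] - positions[None, :])  # ground distance
print("approx. Wasserstein distance:", sinkhorn_distance(a, b, cost))
```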
Adversarial Examples on Graph Data: Deep Insights into Attack and Defense
Based on this observation, we propose a defense approach that inspects the graph and recovers the potential adversarial perturbations.