
Adversarial Defense

57 papers with code · Adversarial


Latest papers without code

Benchmarking adversarial attacks and defenses for time-series data

30 Aug 2020

This paves the way for future research on adversarial attacks and defenses, particularly for time-series data.

ADVERSARIAL DEFENSE TIME SERIES
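The benchmark's actual attack suite isn't shown in this listing; purely as a flavor of what a gradient-based attack looks like on sequential inputs, here is a minimal FGSM sketch against a toy 1D-CNN time-series classifier (model, data, and budget are all placeholder assumptions, not the paper's setup):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy 1D-CNN over a univariate series; stands in for any time-series classifier.
    model = nn.Sequential(
        nn.Conv1d(1, 8, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
        nn.Linear(8, 2),
    )

    x = torch.randn(4, 1, 128, requires_grad=True)  # batch of 4 series, length 128
    y = torch.randint(0, 2, (4,))                   # random labels for the sketch

    loss = F.cross_entropy(model(x), y)
    loss.backward()

    eps = 0.1                                       # L-infinity perturbation budget
    x_adv = (x + eps * x.grad.sign()).detach()      # one FGSM step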

Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses

25 Aug 2020

Convolutional Neural Networks have been shown to be vulnerable to adversarial examples, which are known to lie in subspaces close to those containing natural data, yet are not naturally occurring and have low probability.

ADVERSARIAL DEFENSE
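The "low probability" part of that claim can be illustrated with a toy density model: fit a kernel density estimate to clean samples and compare the log-likelihood of clean points against nearby perturbed ones. This is a sketch of the intuition only, assuming synthetic Gaussian data, not the paper's likelihood-landscape analysis:

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    clean = rng.normal(0.0, 1.0, size=(2, 500))   # 500 clean 2-D samples (d x n)
    kde = gaussian_kde(clean)                     # simple density model of "normal" data

    probe = clean[:, :10]                         # held-in clean points
    nearby = probe + 0.5 * rng.choice([-1.0, 1.0], size=probe.shape)  # close, but off-mode

    print("clean  mean log-likelihood:", np.log(kde(probe)).mean())
    print("nearby mean log-likelihood:", np.log(kde(nearby)).mean())  # typically lower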

Cassandra: Detecting Trojaned Networks from Adversarial Perturbations

28 Jul 2020

We also propose an anomaly detection method to identify the target class in a Trojaned network.

ADVERSARIAL DEFENSE ANOMALY DETECTION
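The listing doesn't say which anomaly statistic the paper uses; a common choice in Trojan-detection work (e.g., Neural Cleanse) is a median-absolute-deviation outlier test over per-class scores, sketched here on synthetic numbers:

    import numpy as np

    # Hypothetical per-class scores (e.g., size of the adversarial perturbation
    # needed to reach each class); class 3 is the synthetic outlier.
    scores = np.array([3.1, 2.9, 3.3, 0.6, 3.0, 3.2])

    median = np.median(scores)
    mad = 1.4826 * np.median(np.abs(scores - median))  # 1.4826: Gaussian consistency constant
    anomaly_index = np.abs(scores - median) / mad

    suspected = np.where(anomaly_index > 2.0)[0]       # ~2 is a conventional cutoff
    print("suspected target class(es):", suspected)    # -> [3]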

Multitask Learning Strengthens Adversarial Robustness

ECCV 2020

Although deep networks achieve strong accuracy on a range of computer vision benchmarks, they remain vulnerable to adversarial attacks, where imperceptible input perturbations fool the network.

ADVERSARIAL DEFENSE
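One reading of the multitask claim is that a perturbation must now fool several heads through a shared backbone at once. A minimal sketch, assuming a toy two-head network and an FGSM-style step on the joint loss (none of this is the paper's architecture):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
    head_cls = nn.Linear(64, 10)   # classification head
    head_aux = nn.Linear(64, 1)    # auxiliary regression head (e.g., depth, pose)

    x = torch.randn(8, 3, 32, 32, requires_grad=True)
    y_cls = torch.randint(0, 10, (8,))
    y_aux = torch.randn(8, 1)

    feats = backbone(x)
    joint_loss = (F.cross_entropy(head_cls(feats), y_cls)
                  + F.mse_loss(head_aux(feats), y_aux))  # attack sees the sum of task losses
    joint_loss.backward()
    x_adv = (x + (8 / 255) * x.grad.sign()).detach()     # must degrade both tasks at once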

Defending against adversarial attacks on medical imaging AI system, classification or detection?

24 Jun 2020

Medical imaging AI systems, such as those for disease classification and segmentation, are increasingly inspired by and adapted from computer-vision-based AI systems.

ADVERSARIAL DEFENSE ADVERSARIAL TRAINING
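The ADVERSARIAL TRAINING tag refers to the standard min-max recipe (Madry et al.): an inner loop that maximizes the loss with PGD and an outer step that minimizes it on the perturbed batch. A generic sketch, not the paper's medical-imaging defense:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder classifier
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(16, 1, 28, 28)
    y = torch.randint(0, 10, (16,))

    eps, alpha, steps = 0.3, 0.05, 7
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):                             # inner maximization (PGD)
        F.cross_entropy(model(x + delta), y).backward()
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()

    opt.zero_grad()                                    # outer minimization step
    F.cross_entropy(model(x + delta.detach()), y).backward()
    opt.step()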

Adversarial Defense by Latent Style Transformations

17 Jun 2020

The intuition behind our approach is that the essential characteristics of a normal image are generally consistent under non-essential style transformations, e.g., slightly changing the facial expression of a human portrait.

ADVERSARIAL DEFENSE
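That intuition naturally yields a consistency test: apply non-essential transformations and check whether the prediction survives. The sketch below substitutes mild pixel-space jitter for the paper's latent style transformations, so it only mirrors the idea:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder classifier
    x = torch.rand(1, 3, 32, 32)

    def jitter(img):
        # Stand-in for a non-essential style transformation.
        return (img + 0.02 * torch.randn_like(img)).clamp(0, 1)

    base = model(x).argmax(dim=1)
    votes = torch.stack([model(jitter(x)).argmax(dim=1) for _ in range(16)])
    agreement = votes.eq(base).float().mean()
    is_suspicious = agreement.item() < 0.8   # low consistency -> possibly adversarial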

Tricking Adversarial Attacks To Fail

8 Jun 2020

Our Target Training defense tricks the minimization at the core of untargeted, gradient-based adversarial attacks: minimizing the sum of (1) the perturbation and (2) the classifier's adversarial loss.

ADVERSARIAL DEFENSE
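The minimization described is the familiar C&W-style objective; a sketch of that attack with a placeholder model, a margin-based adversarial loss, and an arbitrary trade-off constant:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder classifier
    x = torch.rand(1, 1, 28, 28)
    y = torch.tensor([3])                                        # true label

    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=0.01)
    c = 1.0                                                      # trade-off constant
    for _ in range(100):
        logits = model(x + delta)
        true_logit = logits.gather(1, y.view(-1, 1)).squeeze(1)
        other_max = logits.scatter(1, y.view(-1, 1), float("-inf")).max(dim=1).values
        adv_loss = (true_logit - other_max).clamp(min=0)         # 0 once misclassified
        loss = (delta ** 2).sum() + c * adv_loss.sum()           # (1) perturbation + (2) adv loss
        opt.zero_grad()
        loss.backward()
        opt.step()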