
Adversarial Attack with Pattern Replacement

We propose a generative model for adversarial attack. The model generates subtle but predictive patterns from the input. To perform an attack, it replaces the input's own patterns with patterns generated from examples of another class. We demonstrate the model by attacking a CNN on MNIST.
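The paper does not include code, so the following is only a minimal sketch of the replace-the-pattern idea. The `extract_pattern` function here is a hypothetical stand-in (a simple high-pass residual) for the paper's learned generative model, and `pattern_replacement_attack` illustrates swapping the input's pattern for one derived from an example of another class:

```python
import numpy as np

def extract_pattern(x, strength=0.1):
    # Hypothetical stand-in for the paper's generative model: treat the
    # high-frequency residual of the image as its "subtle but predictive
    # pattern". The real model is learned; this is only illustrative.
    blurred = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0
    return strength * (x - blurred)

def pattern_replacement_attack(x, x_other):
    # Remove the input's own pattern and graft on a pattern generated
    # from an example of another class, keeping the change subtle.
    x_adv = x - extract_pattern(x) + extract_pattern(x_other)
    return np.clip(x_adv, 0.0, 1.0)

# Toy 28x28 MNIST-sized inputs (random, for illustration only)
rng = np.random.default_rng(0)
x = rng.random((28, 28))
x_other = rng.random((28, 28))
x_adv = pattern_replacement_attack(x, x_other)
```

With `strength=0.1` each pattern term is bounded by 0.1, so the adversarial example stays within 0.2 of the original in every pixel, matching the "subtle" requirement of the attack.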
