Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks

ICCV 2019 · Aamir Mustafa, Salman Khan, Munawar Hayat, Roland Goecke, Jianbing Shen, Ling Shao

Deep neural networks are vulnerable to adversarial attacks, which can fool them by adding minuscule perturbations to the input images. The robustness of existing defenses suffers greatly under white-box attack settings, where an adversary has full knowledge about the network and can iterate several times to find strong perturbations.
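To make the white-box, iterative threat model concrete, below is a minimal sketch of a projected gradient descent (PGD) attack, the attack reported in the results table further down. This is an illustrative PyTorch implementation under assumed hyperparameters (eps, alpha, and the step count are common CIFAR-10 choices, not values taken from the paper).

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Illustrative L-infinity PGD: repeatedly ascend the loss gradient,
    projecting back into the eps-ball around the clean images."""
    images = images.clone().detach()
    # Random start inside the eps-ball, clipped to the valid pixel range.
    adv = images + torch.empty_like(images).uniform_(-eps, eps)
    adv = adv.clamp(0, 1).detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()        # ascend the loss
        adv = images + (adv - images).clamp(-eps, eps)  # project to eps-ball
        adv = adv.clamp(0, 1)                           # keep pixels in [0, 1]
    return adv.detach()
```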

Results from the Paper


TASK                  DATASET    MODEL                          METRIC     VALUE   GLOBAL RANK
Adversarial Defense   CIFAR-10   PCL (against PGD, white box)   Accuracy   46.7    #1
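The model column names PCL, the paper's prototype conformity loss, which restricts the hidden feature space so that each class occupies a region well separated from the others. As a rough, simplified sketch of that idea only (not the paper's exact objective), assuming learnable per-class prototypes and a hypothetical margin hyperparameter:

```python
import torch
import torch.nn as nn

class PrototypeConformityLoss(nn.Module):
    """Simplified prototype-conformity-style loss: pull each feature toward
    its class prototype and push other prototypes at least `margin` farther
    away. An illustration of the idea, not the paper's exact formulation."""
    def __init__(self, num_classes, feat_dim, margin=1.0):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin

    def forward(self, features, labels):
        # Squared distance from every feature to every class prototype: (B, C).
        dists = torch.cdist(features, self.prototypes) ** 2
        # Distance to the prototype of the true class: (B,).
        pos = dists.gather(1, labels.unsqueeze(1)).squeeze(1)
        # Mask out the true-class column for the repulsion term.
        mask = torch.ones_like(dists).scatter_(1, labels.unsqueeze(1), 0.0)
        # Hinge: penalize other-class prototypes closer than pos + margin.
        neg = torch.relu(self.margin + pos.unsqueeze(1) - dists) * mask
        return pos.mean() + neg.sum(1).mean()
```

In this sketch the loss would be added to the usual cross-entropy term during training, which matches the intuition of carving class-wise separated decision regions; the paper's actual objective and hyperparameters should be taken from the PDF.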
