Feature Denoising for Improving Adversarial Robustness

CVPR 2019
Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, Kaiming He

Adversarial attacks on image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, the authors develop new network architectures that increase adversarial robustness by performing feature denoising: the networks contain blocks that denoise intermediate feature maps using non-local means or other filters, and the entire networks are trained end-to-end together with adversarial training.
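The sketch below is a minimal PyTorch-style illustration (not the authors' released code) of a feature-denoising block in this spirit: a non-local means filter applied to a feature map, wrapped with a 1x1 convolution and an identity skip connection. The class name, the softmax weighting choice, and the zero initialization of the 1x1 convolution are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NonLocalDenoiseBlock(nn.Module):
    """Denoise a feature map with non-local means, then 1x1 conv + identity skip."""

    def __init__(self, channels: int, use_softmax: bool = True):
        super().__init__()
        self.use_softmax = use_softmax           # softmax vs. plain dot-product weighting
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.conv.weight)         # start as an identity mapping (assumption)
        nn.init.zeros_(self.conv.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        feat = x.view(n, c, h * w)                                # (N, C, HW)
        # Pairwise similarities between all spatial positions.
        sim = torch.bmm(feat.transpose(1, 2), feat)               # (N, HW, HW)
        if self.use_softmax:
            weights = F.softmax(sim / (c ** 0.5), dim=-1)         # scaling is an illustrative choice
        else:
            weights = sim / (h * w)                               # normalize by number of positions
        # Each output position is a weighted mean over all positions.
        denoised = torch.bmm(feat, weights.transpose(1, 2)).view(n, c, h, w)
        return x + self.conv(denoised)                            # 1x1 conv + identity skip


if __name__ == "__main__":
    block = NonLocalDenoiseBlock(channels=64)
    out = block(torch.randn(2, 64, 16, 16))
    print(out.shape)  # torch.Size([2, 64, 16, 16])
```

Such a block can be inserted after residual stages of a backbone such as ResNet-152; in this sketch the zero-initialized 1x1 convolution makes the block behave as an identity at the start of training.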

TASK                  DATASET                                        MODEL                    METRIC     VALUE   RANK
Adversarial Defense   CAAD 2018                                      Feature Denoising        Accuracy   50.6%   #1
Adversarial Defense   ImageNet                                       Feature Denoising        Accuracy   49.5%   #1
Adversarial Defense   ImageNet (targeted PGD, max perturbation=16)   ResNeXt-101 DenoiseAll   Accuracy   40.4%   #2
Adversarial Defense   ImageNet (targeted PGD, max perturbation=16)   ResNet-152               Accuracy   39.0%   #3
Adversarial Defense   ImageNet (targeted PGD, max perturbation=16)   ResNet-152 Denoise       Accuracy   42.8%   #1
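For reference, the "targeted PGD, max perturbation=16" rows refer to a targeted projected gradient descent attack under an L-infinity budget of 16. The sketch below is a minimal, generic implementation of such an attack, not the exact evaluation protocol of the benchmark; the step count, step size, and 0-255 pixel range are assumptions.

```python
import torch
import torch.nn.functional as F


def targeted_pgd(model, images, targets, eps=16.0, step_size=1.0, steps=10):
    """Targeted L-inf PGD: push `images` toward `targets` within an eps-ball."""
    x_orig = images.detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), targets)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Targeted attack: descend the loss toward the target class.
        x_adv = x_adv.detach() - step_size * grad.sign()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x_orig - eps), x_orig + eps)
        x_adv = x_adv.clamp(0.0, 255.0)
    return x_adv
```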
