Adversarial Visual Robustness by Causal Intervention

17 Jun 2021  ·  Kaihua Tang, Mingyuan Tao, Hanwang Zhang

Adversarial training is the de facto standard and most promising defense against adversarial examples. Yet, its passive nature inevitably prevents it from being immune to unknown attackers. To achieve a proactive defense, we need a more fundamental understanding of adversarial examples, beyond the popular bounded threat model. In this paper, we provide a causal viewpoint of adversarial vulnerability: the cause lies in the spurious correlations that ubiquitously exist in learning, i.e., the confounding effect, which is precisely what attackers exploit. Therefore, a fundamental solution for adversarial robustness is causal intervention. Since these visual confounders are generally imperceptible, we propose to use an instrumental variable, which achieves causal intervention without requiring the confounders to be observed. We term our robust training method Causal intervention by instrumental Variable (CiiV). It is a causal regularization that 1) augments the image with multiple retinotopic centers and 2) encourages the model to learn causal features, rather than local confounding patterns, by favoring features that respond linearly to spatial interpolations. Extensive experiments across a wide spectrum of attackers and settings on CIFAR-10, CIFAR-100, and mini-ImageNet demonstrate that CiiV is robust to adaptive attacks, including the recent AutoAttack. Moreover, as a general causal regularization, CiiV can be easily plugged into other methods to further boost their robustness.
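To make the regularization idea concrete, below is a minimal PyTorch-style sketch of the linear-response principle described in the abstract, not the authors' released implementation: the functions `retinotopic_masks` and `ciiv_regularizer`, the Gaussian-falloff mask shape, the number of centers, `sigma`, and the 0.1 loss weight are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def retinotopic_masks(h, w, num_centers=3, sigma=8.0, device="cpu"):
    """Hypothetical retinotopic sampling: Gaussian-falloff masks around random
    centers, normalized so they sum to one at every pixel (a partition of
    unity). The paper's exact sampling scheme may differ."""
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32, device=device),
        torch.arange(w, dtype=torch.float32, device=device),
        indexing="ij",
    )
    masks = []
    for _ in range(num_centers):
        cy = torch.rand(1, device=device) * h
        cx = torch.rand(1, device=device) * w
        d2 = (ys - cy) ** 2 + (xs - cx) ** 2
        masks.append(torch.exp(-d2 / (2.0 * sigma ** 2)))
    masks = torch.stack(masks)                                    # (R, H, W)
    return masks / masks.sum(dim=0, keepdim=True).clamp_min(1e-6)

def ciiv_regularizer(model, x):
    """Penalize deviation from a linear response to the retinotopic
    decomposition: the output on the full image should match the sum of
    outputs on the masked views, since the views sum back to the image."""
    b, c, h, w = x.shape
    masks = retinotopic_masks(h, w, device=x.device)              # (R, H, W)
    r = masks.shape[0]
    views = x.unsqueeze(0) * masks.view(r, 1, 1, h, w)            # (R, B, C, H, W)
    part_logits = model(views.reshape(r * b, c, h, w)).view(r, b, -1)
    full_logits = model(x)                                        # (B, num_classes)
    return F.mse_loss(part_logits.sum(dim=0), full_logits)

# Hypothetical usage: add the regularizer to the standard task loss.
# loss = F.cross_entropy(model(x), y) + 0.1 * ciiv_regularizer(model, x)
```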
