Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training

14 Mar 2018 · Derek Wang, Chaoran Li, Sheng Wen, Surya Nepal, Yang Xiang

Deep neural networks (DNNs) are known to be vulnerable to adversarial examples, which contain human-imperceptible perturbations. A series of defending methods, either proactive defence or reactive defence, have been proposed in recent years. However, most of these methods can only handle specific attacks. For example, proactive defending methods are ineffective against grey-box or white-box attacks, while reactive defending methods are challenged by low-distortion adversarial examples or transferred adversarial examples. This becomes a critical problem since a defender usually does not know the type of attack a priori. Moreover, existing two-pronged defences (e.g., MagNet), which take advantage of both proactive and reactive methods, have been reported to be broken under transfer attacks. To address this problem, this paper proposes a novel defensive framework based on collaborative multi-task training, aiming to provide defence against different types of attacks. The proposed defence first encodes training labels into label pairs and counters black-box attacks by leveraging adversarial training supervised by the encoded label pairs. The defence further constructs a detector to identify and reject high-confidence adversarial examples that bypass the black-box defence. In addition, the proposed collaborative architecture can prevent adversaries from finding valid adversarial examples even when the defence strategy is exposed. In the experiments, we evaluated our defence against four state-of-the-art attacks on the $MNIST$ and $CIFAR10$ datasets. The results showed that our defending method achieved up to $96.3\%$ classification accuracy on black-box adversarial examples and detected up to $98.7\%$ of the high-confidence adversarial examples, while decreasing classification accuracy on benign examples by only $2.1\%$ on the $CIFAR10$ dataset.
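
To make the described architecture concrete, below is a minimal PyTorch sketch of one way a collaborative multi-task classifier with label-pair supervision and a rejection-based detector could be wired together. The network layout, the `pair_encode` scheme, the loss weight `alpha`, and the detection rule based on head consistency plus a confidence `threshold` are illustrative assumptions, not the paper's actual construction.

```python
# Illustrative sketch of a collaborative multi-task classifier.
# The shared encoder, the two heads trained on encoded label pairs, and the
# disagreement-based detector are assumptions inferred from the abstract,
# not the authors' reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CollaborativeMultiTaskNet(nn.Module):
    def __init__(self, num_classes=10, pair_classes=10):
        super().__init__()
        # Shared convolutional encoder (hypothetical sizes for 32x32 RGB input).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        feat_dim = 64 * 4 * 4
        # Primary head predicts the original label.
        self.primary_head = nn.Linear(feat_dim, num_classes)
        # Auxiliary head predicts the second element of the encoded label pair.
        self.aux_head = nn.Linear(feat_dim, pair_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.primary_head(z), self.aux_head(z)

def pair_encode(labels, num_classes=10, shift=3):
    """Hypothetical label-pair encoding: map label y to (y, (y + shift) % C)."""
    return labels, (labels + shift) % num_classes

def multitask_loss(model, x, y, alpha=0.5):
    """Joint loss over both heads; `alpha` balances the two tasks (assumed)."""
    y1, y2 = pair_encode(y)
    logits1, logits2 = model(x)
    return F.cross_entropy(logits1, y1) + alpha * F.cross_entropy(logits2, y2)

@torch.no_grad()
def detect_and_predict(model, x, threshold=0.9, shift=3, num_classes=10):
    """Reject inputs whose heads disagree with the expected pair encoding or
    whose confidence falls below `threshold` (values are illustrative)."""
    logits1, logits2 = model(x)
    p1, p2 = F.softmax(logits1, dim=1), F.softmax(logits2, dim=1)
    pred1, pred2 = p1.argmax(1), p2.argmax(1)
    consistent = pred2 == (pred1 + shift) % num_classes
    confident = p1.max(1).values >= threshold
    accepted = consistent & confident
    return pred1, accepted  # accepted == False means "reject as adversarial"

if __name__ == "__main__":
    model = CollaborativeMultiTaskNet()
    x = torch.randn(8, 3, 32, 32)          # dummy CIFAR10-sized batch
    y = torch.randint(0, 10, (8,))
    loss = multitask_loss(model, x, y)
    preds, accepted = detect_and_predict(model, x)
    print(loss.item(), preds.tolist(), accepted.tolist())
```

The key design point this sketch tries to capture is that the two heads share one feature extractor, so an adversary must fool both the primary prediction and the pair-encoded auxiliary prediction consistently, which is what the detection rule checks at inference time.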
