Improving Local Effectiveness for Global Robustness Training

1 Jan 2021  ·  Jingyue Lu, M. Pawan Kumar

Despite their increasing popularity, deep neural networks are easily fooled. To alleviate this deficiency, researchers are actively developing new training strategies, which encourage models that are robust to small input perturbations. Several successful robust training methods have been proposed. However, many of them rely on strong adversaries, which can be prohibitively expensive to generate when the input dimension is high and the model structure is complicated. We adopt a new perspective on robustness and propose a novel training algorithm that allows a more effective use of adversaries. Our method improves the model robustness at each local patch and then, by combining these patches through a global term, achieves overall robustness. We demonstrate that, by maximizing the use of adversaries, we achieve high robust accuracy with weak adversaries. Specifically, our method reaches a robust accuracy level similar to that of state-of-the-art approaches trained on strong adversaries on MNIST, CIFAR-10 and CIFAR-100. As a result, the overall training time is reduced. Furthermore, when trained with strong adversaries, our method matches the current state of the art on MNIST and outperforms it on CIFAR-10 and CIFAR-100.
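To make the weak-versus-strong adversary distinction concrete, the following is a minimal NumPy sketch of adversarial training with a weak adversary: a single-step FGSM perturbation (one gradient-sign step, far cheaper than multi-step PGD attacks) applied inside a training step for logistic regression. This is an illustrative reconstruction, not the paper's algorithm; the logistic-regression model, function names, and hyperparameters are assumptions made for a self-contained example.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """Weak adversary: one FGSM step of size eps against the
    logistic loss log(1 + exp(-y * (w.x + b))), with y in {-1, +1}."""
    margin = y * (x @ w + b)
    # gradient of the logistic loss w.r.t. the input x
    grad_x = -y * w / (1.0 + np.exp(margin))
    return x + eps * np.sign(grad_x)

def adv_train_step(X, Y, w, b, eps=0.1, lr=0.1):
    """One adversarial training step: perturb each input with the
    weak adversary, then take a gradient step on the perturbed batch."""
    gw = np.zeros_like(w)
    gb = 0.0
    for x, y in zip(X, Y):
        x_adv = fgsm_perturb(x, y, w, b, eps)
        coef = -y / (1.0 + np.exp(y * (x_adv @ w + b)))
        gw += coef * x_adv
        gb += coef
    n = len(X)
    return w - lr * gw / n, b - lr * gb / n
```

A strong adversary would instead iterate the perturbation step many times with projection, multiplying the per-example cost; the abstract's point is that, used effectively, the cheap one-step adversary above can close much of that gap.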
