Adaptive versus Standard Descent Methods and Robustness Against Adversarial Examples

9 Nov 2019 · Marc Khoury

Adversarial examples are a pervasive phenomenon in machine learning, where seemingly imperceptible perturbations to the input lead to misclassifications by otherwise statistically accurate models. In this paper we study how the choice of optimization algorithm influences the robustness of the resulting classifier to adversarial examples. Specifically, we exhibit a learning problem for which the solution found by adaptive optimization algorithms has qualitatively worse robustness properties against both $L_{2}$- and $L_{\infty}$-adversaries than the solution found by non-adaptive algorithms. We then fully characterize the geometry of the loss landscape of $L_{2}$-adversarial training in least-squares linear regression. This geometry is subtle and has important consequences for optimization algorithms. Finally, we provide experimental evidence suggesting that non-adaptive methods consistently produce more robust models than adaptive methods.
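In the least-squares linear regression setting studied here, the worst-case $L_2$ loss has a well-known closed form: for a point $(x, y)$, weights $w$, and budget $\epsilon$, $\max_{\|\delta\|_2 \le \epsilon} (w^{\top}(x+\delta) - y)^2 = (|w^{\top}x - y| + \epsilon\|w\|_2)^2$, since the inner product $w^{\top}\delta$ ranges over $[-\epsilon\|w\|_2, \epsilon\|w\|_2]$. The sketch below uses this closed form to adversarially train a least-squares model with plain gradient descent (non-adaptive) and a hand-rolled Adam update (adaptive). The synthetic data, step counts, and learning rates are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of L2-adversarial training for least-squares linear
# regression, comparing non-adaptive gradient descent with Adam.
# Data and hyperparameters are hypothetical, chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)
eps = 0.1  # L2 perturbation budget

def adv_loss_grad(w, X, y, eps):
    """Worst-case L2 loss per example: (|w.x - y| + eps * ||w||_2)^2."""
    r = X @ w - y                    # residuals
    wn = np.linalg.norm(w) + 1e-12   # guard against division by zero
    m = np.abs(r) + eps * wn         # worst-case margins
    loss = np.mean(m ** 2)
    # d/dw of (|r_i| + eps||w||)^2 = 2 m_i (sign(r_i) x_i + eps w/||w||)
    grad = 2 * (X.T @ (m * np.sign(r)) / len(y)
                + eps * np.mean(m) * w / wn)
    return loss, grad

def train(adaptive, steps=2000, lr=1e-2):
    w = np.zeros(d)
    m1, m2 = np.zeros(d), np.zeros(d)  # Adam moment estimates
    for t in range(1, steps + 1):
        _, g = adv_loss_grad(w, X, y, eps)
        if adaptive:  # Adam update (beta1=0.9, beta2=0.999)
            m1 = 0.9 * m1 + 0.1 * g
            m2 = 0.999 * m2 + 0.001 * g ** 2
            mhat = m1 / (1 - 0.9 ** t)
            vhat = m2 / (1 - 0.999 ** t)
            w -= lr * mhat / (np.sqrt(vhat) + 1e-8)
        else:         # plain (non-adaptive) gradient descent
            w -= lr * g
    return w

for name, adaptive in [("GD", False), ("Adam", True)]:
    w = train(adaptive)
    loss, _ = adv_loss_grad(w, X, y, eps)
    print(f"{name}: adversarial loss {loss:.4f}, "
          f"||w||_2 {np.linalg.norm(w):.3f}")
```

On isotropic Gaussian data like this, both optimizers may reach similar minima; the separation shown in the paper arises on specially constructed problems. Treat the script as a template for the adversarial training objective rather than a reproduction of the paper's result.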
