The Limitations of Model Uncertainty in Adversarial Settings

6 Dec 2018  ·  Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes

Machine learning models are vulnerable to adversarial examples: minor perturbations of input samples deliberately crafted to cause misclassification. While adversarial examples are an obvious security threat, they also yield insights into the model under attack. We investigate adversarial examples in the context of the uncertainty measures of Bayesian neural networks (BNNs). As these measures are highly non-smooth, we use a smooth Gaussian process classifier (GPC) as a substitute. We show that both confidence and uncertainty can remain unsuspicious even when the output is wrong. Intriguingly, for most tasks we find subtle differences between the features that influence uncertainty and those that influence confidence.
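The setup sketched in the abstract — reading off a classifier's confidence and uncertainty on clean versus perturbed inputs, with a GPC standing in for the non-smooth BNN — can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes scikit-learn's `GaussianProcessClassifier`, a toy two-moons dataset in place of the paper's tasks, a random sign perturbation as a stand-in for a crafted adversarial example, and predictive entropy as one possible uncertainty proxy.

```python
# Minimal sketch (not the paper's code): probe a Gaussian process
# classifier's confidence and uncertainty on a clean vs. perturbed input.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Toy two-class data standing in for the paper's tasks.
X, y = make_moons(n_samples=200, noise=0.15, random_state=0)
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),
                                random_state=0).fit(X, y)

x = X[:1]                                   # a clean test point
rng = np.random.default_rng(0)
# Stand-in perturbation; a real attack would optimize this direction.
x_adv = x + 0.3 * np.sign(rng.standard_normal(x.shape))

for name, sample in [("clean", x), ("perturbed", x_adv)]:
    p = gpc.predict_proba(sample)[0]
    confidence = p.max()                    # predictive confidence
    entropy = -(p * np.log(p + 1e-12)).sum()  # uncertainty proxy (entropy)
    print(f"{name}: class={p.argmax()} "
          f"confidence={confidence:.3f} entropy={entropy:.3f}")
```

In the paper's setting, a crafted perturbation can leave both of these quantities in an unsuspicious range even though the predicted class is wrong; the sketch only shows where they would be read off, not how the adversarial perturbation is constructed.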
