no code implementations • 6 Dec 2018 • Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes
Machine learning models are vulnerable to adversarial examples: small, deliberately crafted perturbations to input samples that cause misclassification.
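The idea of a crafted misclassifying perturbation can be sketched with a gradient-sign step against a toy linear classifier. This is a minimal illustration in the spirit of the fast gradient sign method, not the paper's own setup; the model weights, input, and step size below are all assumed for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear classifier: predict class 1 if sigmoid(w @ x + b) > 0.5
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([1.0, 1.0])   # clean input, true label y = 1
y = 1.0

# Gradient of the logistic loss with respect to the *input* x
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# Gradient-sign step: nudge every coordinate in the direction that
# increases the loss, bounded in infinity-norm by eps
eps = 0.6
x_adv = x + eps * np.sign(grad_x)

clean_pred = sigmoid(w @ x + b) > 0.5     # correctly classified as 1
adv_pred = sigmoid(w @ x_adv + b) > 0.5   # flips to class 0
```

The perturbation stays within an eps-ball of the original input, yet the predicted class changes, which is exactly the failure mode the sentence above describes.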
no code implementations • 17 Nov 2017 • Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes
In this paper, we leverage Gaussian Processes to investigate adversarial examples in the framework of Bayesian inference.
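One reason Gaussian Processes are attractive here is that their Bayesian predictive variance grows for inputs far from the training data, giving a principled uncertainty signal that could flag suspicious inputs. The sketch below is an illustrative assumption, not the paper's method: plain GP regression with an RBF kernel and synthetic data, showing low predictive variance on a training point and near-prior variance far away.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    # Squared-exponential (RBF) kernel with unit prior variance
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))   # synthetic training inputs
noise = 1e-2                   # observation-noise variance (assumed)

# Kernel matrix over the training set, with noise on the diagonal
K = rbf(X, X) + noise * np.eye(len(X))
K_inv = np.linalg.inv(K)

def predict_var(x_star):
    # Standard GP predictive variance: k** - k*^T K^{-1} k*, with k** = 1
    k_star = rbf(x_star[None, :], X)[0]
    return 1.0 - k_star @ K_inv @ k_star

var_near = predict_var(X[0])                  # on a training point: low
var_far = predict_var(np.array([8.0, 8.0]))   # far from the data: near 1
```

An off-manifold adversarial input behaves like the far-away query: the GP reports high uncertainty there, which is the kind of Bayesian signal the paper investigates.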