Search Results for author: Erik Poll

Found 4 papers, 0 papers with code

Deep Repulsive Prototypes for Adversarial Robustness

no code implementations • 26 May 2021 • Alex Serban, Erik Poll, Joost Visser

For example, we obtained over 50% robustness on CIFAR-10 (with 92% accuracy on natural samples) and over 20% robustness on CIFAR-100 (with 71% accuracy on natural samples), all without adversarial training.

Adversarial Robustness

Learning to Learn from Mistakes: Robust Optimization for Adversarial Noise

no code implementations • 12 Aug 2020 • Alex Serban, Erik Poll, Joost Visser

Sensitivity to adversarial noise hinders deployment of machine learning algorithms in security-critical applications.
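The robust-optimization idea the title refers to is commonly framed as a min-max problem: an inner step finds a worst-case perturbation of each input, and an outer step updates the model on those perturbed inputs. A minimal sketch of that loop on a toy logistic-regression model, using an FGSM-style inner step; the model, the `fgsm` helper, and all parameters here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, b, eps):
    # Gradient of the logistic loss w.r.t. the input is (p - y) * w;
    # step each input in the sign of that gradient (worst-case direction).
    p = sigmoid(X @ w + b)
    return X + eps * np.sign((p - y)[:, None] * w)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # linearly separable toy labels

w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.1
for _ in range(200):
    X_adv = fgsm(X, y, w, b, eps)   # inner max: perturb the inputs
    p = sigmoid(X_adv @ w + b)      # outer min: fit the perturbed inputs
    w -= lr * ((p - y) @ X_adv) / len(y)
    b -= lr * (p - y).mean()

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
```

Training on the perturbed inputs rather than the clean ones is what distinguishes this loop from ordinary gradient descent; the model still recovers high clean accuracy on this toy problem.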

Adversarial Examples - A Complete Characterisation of the Phenomenon

no code implementations • 2 Oct 2018 • Alexandru Constantin Serban, Erik Poll, Joost Visser

We provide a complete characterisation of the phenomenon of adversarial examples - inputs intentionally crafted to fool machine learning models.

BIG-bench Machine Learning
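The abstract's definition of adversarial examples (inputs intentionally crafted to fool a model) can be illustrated in a few lines. A minimal sketch on a hand-built linear classifier, using a fast-gradient-sign-style perturbation; the classifier, weights, and epsilon are illustrative assumptions, not from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier: predict class 1 iff w.x + b > 0.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])  # clean input, correctly classified as class 1
y = 1.0

# Gradient of the logistic loss w.r.t. the input is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# Perturb the input a small step in the sign of the gradient.
eps = 0.2
x_adv = x + eps * np.sign(grad_x)

pred_clean = int(w @ x + b > 0)    # class 1 on the clean input
pred_adv = int(w @ x_adv + b > 0)  # the small perturbation flips the label
```

The perturbation is bounded by `eps` in each coordinate, yet it is aimed exactly along the loss gradient, which is why even a simple model's prediction flips.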
