no code implementations • 26 May 2021 • Alex Serban, Erik Poll, Joost Visser
For example, without adversarial training we obtained over 50% robustness on CIFAR-10 (92% accuracy on natural samples) and over 20% robustness on CIFAR-100 (71% accuracy on natural samples).
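Figures like these are conventionally reported as accuracy on clean inputs versus accuracy under attack. A minimal sketch of that evaluation loop, assuming a PyTorch classifier and an `attack` callable (both illustrative, not the paper's actual setup):

```python
import torch

def accuracy(model, loader, attack=None):
    """Accuracy on natural samples; pass an `attack` callable for robust accuracy."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        if attack is not None:
            # Replace each batch with its adversarial counterpart before scoring.
            x = attack(model, x, y)
        with torch.no_grad():
            correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return correct / total
```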
no code implementations • 12 Aug 2020 • Alex Serban, Erik Poll, Joost Visser
Sensitivity to adversarial noise hinders deployment of machine learning algorithms in security-critical applications.
no code implementations • 7 Aug 2020 • Alex Serban, Erik Poll, Joost Visser
Deep neural networks are at the forefront of machine learning research.
no code implementations • 2 Oct 2018 • Alexandru Constantin Serban, Erik Poll, Joost Visser
We provide a complete characterisation of the phenomenon of adversarial examples: inputs intentionally crafted to fool machine learning models.
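As a concrete illustration of how such inputs are crafted, here is a sketch of the standard Fast Gradient Sign Method (FGSM); this is one well-known attack, not the taxonomy the paper develops, and `model` and `epsilon` are placeholders:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples from a batch (x, y) via FGSM."""
    # Track gradients w.r.t. the input, not the model weights.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each pixel in the sign of the gradient to increase the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed image in the valid pixel range.
    return x_adv.clamp(0, 1).detach()
```

The resulting perturbation is imperceptibly small for typical `epsilon` values, yet often flips the model's prediction.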