Search Results for author: Jonathan Aigrain

Found 4 papers, 2 papers with code

Imperceptible Adversarial Attacks on Tabular Data

1 code implementation • 8 Nov 2019 • Vincent Ballet, Xavier Renard, Jonathan Aigrain, Thibault Laugel, Pascal Frossard, Marcin Detyniecki

The security of machine learning models is a concern, as they may face adversarial attacks crafted to obtain unwarranted advantageous decisions.

BIG-bench Machine Learning

How the Softmax Activation Hinders the Detection of Adversarial and Out-of-Distribution Examples in Neural Networks

no code implementations • 25 Sep 2019 • Jonathan Aigrain, Marcin Detyniecki

Despite performing excellently on a wide variety of tasks, modern neural networks are unable to provide a prediction with a reliable confidence estimate that would allow misclassifications to be detected.

Concept Tree: High-Level Representation of Variables for More Interpretable Surrogate Decision Trees

no code implementations • 4 Jun 2019 • Xavier Renard, Nicolas Woloszko, Jonathan Aigrain, Marcin Detyniecki

Interpretable surrogates of black-box predictors trained on high-dimensional tabular datasets can struggle to generate comprehensible explanations in the presence of correlated variables.

Detecting Adversarial Examples and Other Misclassifications in Neural Networks by Introspection

1 code implementation • 22 May 2019 • Jonathan Aigrain, Marcin Detyniecki

Despite performing excellently on a wide variety of tasks, modern neural networks are unable to provide a reliable confidence value that would allow misclassifications to be detected.
