Search Results for author: Thibault Laugel

Found 14 papers, 6 papers with code

On the Fairness ROAD: Robust Optimization for Adversarial Debiasing

1 code implementation • 27 Oct 2023 • Vincent Grari, Thibault Laugel, Tatsunori Hashimoto, Sylvain Lamprier, Marcin Detyniecki

In the field of algorithmic fairness, significant attention has been devoted to group fairness criteria such as Demographic Parity and Equalized Odds.

Attribute, Fairness
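For context, the two group fairness criteria named in this abstract have simple empirical estimators. The sketch below is only an illustration (it is not code from the paper): it estimates the Demographic Parity gap and the Equalized Odds gap from binary predictions and a binary sensitive attribute; all variable names are hypothetical.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the two groups."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

def equalized_odds_gap(y_true, y_pred, sensitive):
    """Largest group gap in true-positive and false-positive rates."""
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))
    gaps = []
    for label in (0, 1):  # label 0 gives the FPR gap, label 1 the TPR gap
        mask = y_true == label
        gaps.append(abs(y_pred[mask & (sensitive == 0)].mean()
                        - y_pred[mask & (sensitive == 1)].mean()))
    return max(gaps)

# Toy usage
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
sens   = np.array([0, 0, 1, 1, 0, 1, 1, 0])
print(demographic_parity_gap(y_pred, sens), equalized_odds_gap(y_true, y_pred, sens))
```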

Achieving Diversity in Counterfactual Explanations: a Review and Discussion

no code implementations • 10 May 2023 • Thibault Laugel, Adulam Jeyasothy, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

In the field of Explainable Artificial Intelligence (XAI), counterfactual examples explain to a user the predictions of a trained decision model by indicating the modifications to be made to the instance so as to change its associated prediction.

counterfactual, Explainable artificial intelligence, +1
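To make the notion of a counterfactual example concrete, here is a minimal, hypothetical sketch (not one of the methods reviewed in this paper): starting from an instance, greedily perturb one feature at a time until a trained classifier's prediction flips, and keep the closest modified instance found.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data and a trained decision model
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def naive_counterfactual(clf, x, step=0.05, max_steps=200):
    """Greedy search: nudge one feature at a time until the prediction flips."""
    original = clf.predict([x])[0]
    best = None
    for j in range(len(x)):
        for sign in (-1.0, 1.0):
            x_cf = x.copy()
            for _ in range(max_steps):
                x_cf[j] += sign * step
                if clf.predict([x_cf])[0] != original:
                    if best is None or np.linalg.norm(x_cf - x) < np.linalg.norm(best - x):
                        best = x_cf.copy()
                    break
    return best  # modified instance whose prediction differs from the original

x = np.array([-0.3, -0.2])
print("original prediction:", clf.predict([x])[0])
print("counterfactual:", naive_counterfactual(clf, x))
```

Diversity, the focus of the paper, would then amount to returning several such modified instances that differ from one another rather than a single one.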

When Mitigating Bias is Unfair: A Comprehensive Study on the Impact of Bias Mitigation Algorithms

1 code implementation • 14 Feb 2023 • Natasa Krco, Thibault Laugel, Jean-Michel Loubes, Marcin Detyniecki

With comparable fairness and accuracy performance, do the different bias mitigation approaches impact a similar number of individuals?

Fairness

Integrating Prior Knowledge in Post-hoc Explanations

no code implementations • 25 Apr 2022 • Adulam Jeyasothy, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

In the field of eXplainable Artificial Intelligence (XAI), post-hoc interpretability methods aim at explaining to a user the predictions of a trained decision model.

counterfactual, Counterfactual Explanation, +2

How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice

no code implementations • 9 Jul 2021 • Tom Vermeire, Thibault Laugel, Xavier Renard, David Martens, Marcin Detyniecki

Explainability is becoming an important requirement for organizations that make use of automated decision-making, due to regulatory initiatives and a shift in public awareness.

Decision Making, Explainable Artificial Intelligence (XAI)

Understanding surrogate explanations: the interplay between complexity, fidelity and coverage

no code implementations • 9 Jul 2021 • Rafael Poyiadzi, Xavier Renard, Thibault Laugel, Raul Santos-Rodriguez, Marcin Detyniecki

This paper analyses the fundamental ingredients behind surrogate explanations to provide a better understanding of their inner workings.

On the overlooked issue of defining explanation objectives for local-surrogate explainers

no code implementations • 10 Jun 2021 • Rafael Poyiadzi, Xavier Renard, Thibault Laugel, Raul Santos-Rodriguez, Marcin Detyniecki

In this work we review the similarities and differences amongst multiple methods, with a particular focus on what information they extract from the model, as this has a large impact on the output: the explanation.

Understanding Prediction Discrepancies in Machine Learning Classifiers

no code implementations • 12 Apr 2021 • Xavier Renard, Thibault Laugel, Marcin Detyniecki

This paper proposes to address this question by analyzing the prediction discrepancies in a pool of best-performing models trained on the same data.

BIG-bench Machine Learning, Fairness
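As a rough illustration of this idea (a hypothetical sketch, not the paper's actual procedure), one can train several comparably performing models on the same data and flag the instances on which their predictions disagree:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# A pool of models trained on the same data
pool = [
    LogisticRegression(max_iter=1000).fit(X, y),
    RandomForestClassifier(random_state=0).fit(X, y),
    GradientBoostingClassifier(random_state=0).fit(X, y),
]

# Stack each model's predictions and mark instances where the pool disagrees.
preds = np.stack([m.predict(X) for m in pool])
discrepant = preds.min(axis=0) != preds.max(axis=0)
print(f"{discrepant.mean():.1%} of instances receive conflicting predictions")
```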

Imperceptible Adversarial Attacks on Tabular Data

1 code implementation • 8 Nov 2019 • Vincent Ballet, Xavier Renard, Jonathan Aigrain, Thibault Laugel, Pascal Frossard, Marcin Detyniecki

The security of machine learning models is a concern, as they may face adversarial attacks crafted to obtain unwarranted advantageous decisions.

BIG-bench Machine Learning

The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations

1 code implementation • 22 Jul 2019 • Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki

Post-hoc interpretability approaches have proven to be powerful tools for generating explanations of the predictions made by a trained black-box model.

counterfactual

Issues with post-hoc counterfactual explanations: a discussion

no code implementations • 11 Jun 2019 • Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

Counterfactual post-hoc interpretability approaches have proven to be useful tools for generating explanations of the predictions of a trained black-box classifier.

counterfactual

Defining Locality for Surrogates in Post-hoc Interpretability

1 code implementation • 19 Jun 2018 • Thibault Laugel, Xavier Renard, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki

Local surrogate models, which approximate the local decision boundary of a black-box classifier, constitute one approach to generating explanations for the rationale behind an individual prediction made by the black-box.
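For readers unfamiliar with local surrogates, the sketch below illustrates the general idea only (it is not the locality definition studied in this paper): sample a neighborhood around the instance to explain, label it with the black-box, and fit a simple interpretable model whose coefficients serve as the local explanation.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Black-box model and an instance to explain
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)
x_instance = X[0]

# Sample a local neighborhood around the instance, label it with the black-box,
# then fit a simple linear surrogate whose coefficients act as the explanation.
rng = np.random.default_rng(0)
neighborhood = x_instance + rng.normal(scale=0.5, size=(1000, 2))
surrogate = LogisticRegression().fit(neighborhood, black_box.predict(neighborhood))
print("local feature weights:", surrogate.coef_[0])
```

How that neighborhood is drawn, i.e. how locality is defined, is precisely the question the paper examines.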

Inverse Classification for Comparison-based Interpretability in Machine Learning

6 code implementations • 22 Dec 2017 • Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, Marcin Detyniecki

In the context of post-hoc interpretability, this paper addresses the task of explaining the prediction of a classifier in the case where no information is available about the classifier itself or about the processed data (neither the training nor the test data).

BIG-bench Machine Learning, Classification, +1
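As a rough sketch of this query-only setting (illustrative, in the spirit of a growing-radius search rather than the paper's exact algorithm), the snippet below uses nothing but the classifier's predict function: it samples points at increasing distances from the instance and returns the closest one that receives a different prediction.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
clf = SVC().fit(X, y)

def closest_enemy(predict, x, radius_step=0.1, n_samples=500, max_radius=5.0, seed=0):
    """Query-only search: sample around x at growing radii and return the
    nearest point whose prediction differs from x's."""
    rng = np.random.default_rng(seed)
    target = predict(x.reshape(1, -1))[0]
    radius = radius_step
    while radius <= max_radius:
        directions = rng.normal(size=(n_samples, x.size))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        candidates = x + radius * rng.uniform(size=(n_samples, 1)) * directions
        enemies = candidates[predict(candidates) != target]
        if len(enemies) > 0:
            return enemies[np.argmin(np.linalg.norm(enemies - x, axis=1))]
        radius += radius_step
    return None

x = X[0]
print("explanation (closest differently-classified point):", closest_enemy(clf.predict, x))
```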
