Search Results for author: Louis Béthune

Found 9 papers, 6 papers with code

TaCo: Targeted Concept Removal in Output Embeddings for NLP via Information Theory and Explainability

1 code implementation • 11 Dec 2023 • Fanny Jourdan, Louis Béthune, Agustin Picard, Laurent Risser, Nicholas Asher

In evaluation, we show that the proposed post-hoc approach significantly reduces gender-related associations in NLP models while preserving the overall performance and functionality of the models.

Fairness
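
TaCo itself combines an SVD of the output embeddings with an information-theoretic selection of concept-carrying components; as a rough, hedged illustration of the broader post-hoc concept-removal setting only, the sketch below estimates a single concept direction from labelled embeddings and projects it out. The `embeddings` and `concept_labels` arrays are placeholders, and the mean-difference projection is a generic baseline, not TaCo's procedure.

```python
import numpy as np

def concept_direction(embeddings, concept_labels):
    """Estimate a single concept direction as the (normalized) difference of class means."""
    mean_a = embeddings[concept_labels == 1].mean(axis=0)
    mean_b = embeddings[concept_labels == 0].mean(axis=0)
    d = mean_a - mean_b
    return d / np.linalg.norm(d)

def remove_concept(embeddings, direction):
    """Project embeddings onto the orthogonal complement of the concept direction."""
    return embeddings - np.outer(embeddings @ direction, direction)

# Toy usage with random placeholders for output embeddings and binary concept labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 768))      # hypothetical output embeddings
y = rng.integers(0, 2, size=100)     # hypothetical concept labels (e.g. gender annotation)
X_clean = remove_concept(X, concept_direction(X, y))
```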

Gaussian Processes on Distributions based on Regularized Optimal Transport

no code implementations • 12 Oct 2022 • François Bachoc, Louis Béthune, Alberto Gonzalez-Sanz, Jean-Michel Loubes

We present a novel kernel over the space of probability measures based on the dual formulation of optimal regularized transport.

Gaussian Processes
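
The paper builds its kernel from the dual potentials of entropy-regularized optimal transport; as a loose illustration of the general idea of kernels on distributions via regularized transport, the sketch below computes an entropic OT cost between two empirical samples with plain Sinkhorn iterations and plugs it into an exponential kernel. The Sinkhorn loop, the bandwidth `sigma`, and the exponential form are generic choices for illustration, not the construction studied in the paper.

```python
import numpy as np

def sinkhorn_cost(x, y, reg=0.1, n_iter=200):
    """Entropy-regularized OT cost between two empirical measures with uniform weights."""
    n, m = len(x), len(y)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # pairwise squared Euclidean costs
    K = np.exp(-C / reg)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):                               # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                       # entropic transport plan
    return (P * C).sum()

def ot_kernel(x, y, sigma=1.0, reg=0.1):
    """Exponential kernel built on the regularized OT cost (illustrative choice)."""
    return np.exp(-sinkhorn_cost(x, y, reg) / sigma**2)

rng = np.random.default_rng(0)
mu_samples = rng.normal(0.0, 1.0, size=(50, 2))   # samples from a measure mu
nu_samples = rng.normal(0.5, 1.0, size=(60, 2))   # samples from a measure nu
print(ot_kernel(mu_samples, nu_samples))
```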

On the explainable properties of 1-Lipschitz Neural Networks: An Optimal Transport Perspective

no code implementations • NeurIPS 2023 • Mathieu Serrurier, Franck Mamalet, Thomas Fel, Louis Béthune, Thibaut Boissin

Input gradients have a pivotal role in a variety of applications, including adversarial attack algorithms for evaluating model robustness, explainable AI techniques for generating Saliency Maps, and counterfactual explanations. However, Saliency Maps generated by traditional neural networks are often noisy and provide limited insights.

Adversarial Attack • counterfactual +1
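
The paper's argument is that these input gradients become far more informative when the network is constrained to be 1-Lipschitz; the snippet below only shows the standard, model-agnostic way a gradient saliency map is computed, here with PyTorch and placeholder `model` and `image` objects.

```python
import torch

def gradient_saliency(model, image, target_class):
    """Saliency map = gradient of the target-class score with respect to the input pixels."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]   # forward pass, pick one logit
    score.backward()                                      # backpropagate to the input
    return image.grad.abs().max(dim=0).values             # collapse channels for display

# Usage (placeholders): any image classifier and a (C, H, W) input tensor.
# saliency = gradient_saliency(model, image, target_class=3)
```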

GAN Estimation of Lipschitz Optimal Transport Maps

no code implementations • 16 Feb 2022 • Alberto González-Sanz, Lucas de Lara, Louis Béthune, Jean-Michel Loubes

This paper introduces the first statistically consistent estimator of the optimal transport map between two probability distributions, based on neural networks.

Generative Adversarial Network
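
As a very rough sketch of the "GAN estimation" idea (a neural map pushing the source distribution toward the target while keeping the displacement cost small), the snippet below trains a map `T` against a WGAN-style critic with a quadratic transport penalty. The architectures, loss weights, and crude weight clipping are simplifications for illustration only, not the consistent estimator analyzed in the paper.

```python
import torch
import torch.nn as nn

dim = 2
T = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))      # candidate transport map
critic = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))   # separates T#mu from nu
opt_T = torch.optim.Adam(T.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def sample_source(n):   # stand-in for samples from the source measure
    return torch.randn(n, dim)

def sample_target(n):   # stand-in for samples from the target measure
    return torch.randn(n, dim) + 2.0

for step in range(1000):
    x, y = sample_source(256), sample_target(256)
    # Critic step: distinguish the pushforward T(x) from target samples y (WGAN-style).
    loss_c = critic(T(x).detach()).mean() - critic(y).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    for p in critic.parameters():
        p.data.clamp_(-0.1, 0.1)   # crude Lipschitz control of the critic via weight clipping
    # Map step: fool the critic while keeping the quadratic transport cost small.
    loss_T = -critic(T(x)).mean() + 0.1 * ((T(x) - x) ** 2).sum(dim=1).mean()
    opt_T.zero_grad(); loss_T.backward(); opt_T.step()
```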

Ranking Deep Learning Generalization using Label Variation in Latent Geometry Graphs

1 code implementation • 25 Nov 2020 • Carlos Lassance, Louis Béthune, Myriam Bontonou, Mounia Hamidouche, Vincent Gripon

Measuring the generalization performance of a Deep Neural Network (DNN) without relying on a validation set is a difficult task.
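
The title points at label variation measured on a graph built in the network's latent space; as a hedged illustration of that idea (not necessarily the paper's exact score), the sketch below builds a k-nearest-neighbour graph over latent features and uses the graph Dirichlet energy of the one-hot training labels as a smoothness proxy: the less the labels vary across latent neighbours, the better the expected generalization.

```python
import numpy as np

def label_variation_score(features, labels, k=10):
    """Dirichlet energy of one-hot labels on a k-NN graph of latent features (lower = smoother)."""
    n = len(features)
    one_hot = np.eye(labels.max() + 1)[labels]
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)   # pairwise latent distances
    np.fill_diagonal(d2, np.inf)
    energy = 0.0
    for i in range(n):
        neighbours = np.argsort(d2[i])[:k]                 # k nearest latent neighbours of sample i
        energy += ((one_hot[i] - one_hot[neighbours]) ** 2).sum()
    return energy / (n * k)

# Usage (placeholders): penultimate-layer features of the training set and their labels.
# score = label_variation_score(latent_features, train_labels)
```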

Predicting the Accuracy of a Few-Shot Classifier

1 code implementation • 8 Jul 2020 • Myriam Bontonou, Louis Béthune, Vincent Gripon

In the context of few-shot learning, one cannot measure the generalization ability of a trained classifier using validation sets, due to the small number of labeled samples.

Few-Shot Learning
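
One simple, hedged way to estimate accuracy from the support set alone is leave-one-out nearest-centroid classification on the few labelled samples, sketched below. This is an illustrative baseline for the problem the paper poses, not the similarity-graph-based measures it proposes; `support_features` and `support_labels` are placeholders.

```python
import numpy as np

def loo_nearest_centroid_accuracy(features, labels):
    """Leave-one-out nearest-centroid accuracy on the support set, as a rough proxy for test accuracy."""
    correct = 0
    for i in range(len(features)):
        keep = np.arange(len(features)) != i
        classes = np.unique(labels[keep])
        centroids = np.stack([features[keep & (labels == c)].mean(axis=0) for c in classes])
        pred = classes[np.argmin(((features[i] - centroids) ** 2).sum(-1))]
        correct += int(pred == labels[i])
    return correct / len(features)

# Usage (placeholders): backbone features of a 5-way 5-shot support set and its labels.
# proxy_acc = loo_nearest_centroid_accuracy(support_features, support_labels)
```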
