no code implementations • 13 Sep 2022 • Thomas Cordier, Victor Bouvier, Gilles Hénaff, Céline Hudelot
Machine Learning models are prone to fail when test data differ from training data, a situation often encountered in real applications and known as distribution shift.
no code implementations • 27 Jul 2022 • Victor Bouvier, Simona Maggio, Alexandre Abraham, Léo Dreyfus-Schmidt
While Uncertainty Quantification (UQ) is crucial to achieving trustworthy Machine Learning (ML), most UQ methods suffer from disparate and inconsistent evaluation protocols.
1 code implementation • 21 Jun 2022 • Simona Maggio, Victor Bouvier, Léo Dreyfus-Schmidt
ML models deployed in production often have to face unknown domain changes, fundamentally different from their training settings.
1 code implementation • 25 May 2021 • Etienne Bennequin, Victor Bouvier, Myriam Tami, Antoine Toubhans, Céline Hudelot
To classify query instances from novel classes encountered at test time, they require only a support set composed of a few labelled samples.
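The support-set setting described above is commonly handled with a nearest-prototype classifier (a standard few-shot baseline, not necessarily the method of the paper listed here). A minimal sketch, assuming embeddings are already computed as numpy arrays:

```python
import numpy as np

def prototype_classify(support_x, support_y, query_x):
    """Nearest-prototype few-shot classification: each class is summarised
    by the mean of its support embeddings, and each query is assigned to
    the class of its closest prototype (Euclidean distance).
    Illustrative baseline only, not the paper's method."""
    classes = np.unique(support_y)
    # class prototype = mean of the support embeddings for that class
    prototypes = np.stack([support_x[support_y == c].mean(axis=0)
                           for c in classes])
    # distance from every query to every prototype
    dists = np.linalg.norm(query_x[:, None, :] - prototypes[None, :, :],
                           axis=-1)
    return classes[dists.argmin(axis=1)]
```

With two well-separated classes and a couple of labelled samples each, queries near a cluster are assigned to that cluster's class.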
no code implementations • 3 Dec 2020 • Victor Bouvier, Philippe Very, Clément Chastagnol, Myriam Tami, Céline Hudelot
First, we select for annotation those target samples that are likely to improve the representations' transferability, by measuring the variation, before and after annotation, of the transferability loss gradient.
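The selection criterion above can be sketched as follows. Everything here is a hypothetical proxy: the paper measures the variation of a *transferability* loss gradient, whereas this sketch uses a plain logistic loss and the model's own prediction as the pseudo-label, purely to illustrate "pick the sample whose annotation most changes the gradient":

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def grad_logloss(w, X, y):
    # gradient of the mean logistic loss w.r.t. the weight vector w
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def select_for_annotation(w, X_lab, y_lab, X_pool, k=1):
    """Score each pool sample by how much adding it (with a pseudo-label)
    changes the loss gradient, and return the k highest-scoring indices.
    Hypothetical illustration of the selection criterion, not the
    paper's transferability loss."""
    g0 = grad_logloss(w, X_lab, y_lab)
    scores = []
    for x in X_pool:
        y_hat = float(sigmoid(x @ w) > 0.5)      # pseudo-label
        X_aug = np.vstack([X_lab, x])
        y_aug = np.append(y_lab, y_hat)
        g1 = grad_logloss(w, X_aug, y_aug)
        scores.append(np.linalg.norm(g1 - g0))   # gradient variation
    return np.argsort(scores)[::-1][:k]
```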
no code implementations • 25 Jun 2020 • Yassine Ouali, Victor Bouvier, Myriam Tami, Céline Hudelot
Learning Invariant Representations has been successfully applied to reconcile a source and a target domain for Unsupervised Domain Adaptation.
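As a toy illustration of what an invariance term penalises, one can match simple moments of the source and target representations. This is only a moment-matching proxy; the works listed here study adversarial invariance, not this exact penalty:

```python
import numpy as np

def invariance_penalty(z_source, z_target):
    """Squared distance between the mean source and mean target
    representations. A minimal moment-matching proxy for domain
    invariance, for illustration only."""
    diff = z_source.mean(axis=0) - z_target.mean(axis=0)
    return float(np.sum(diff ** 2))
```

When the two domains produce the same representation distribution the penalty is near zero; a shifted target distribution yields a large penalty.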
no code implementations • 24 Jun 2020 • Victor Bouvier, Philippe Very, Clément Chastagnol, Myriam Tami, Céline Hudelot
The emergence of Domain Invariant Representations (IR) has drastically improved the transferability of representations from a labelled source domain to a new, unlabelled target domain.
no code implementations • 25 Sep 2019 • Victor Bouvier, Céline Hudelot, Clément Chastagnol, Philippe Very, Myriam Tami
Second, we show that learning weighted representations plays a key role in relaxing the constraint of invariance while limiting the risk of compression.
no code implementations • 29 Jul 2019 • Victor Bouvier, Philippe Very, Céline Hudelot, Clément Chastagnol
Learning representations that remain invariant to a nuisance factor is of great interest in Domain Adaptation, Transfer Learning, and Fair Machine Learning.
no code implementations • 29 Jul 2019 • Victor Bouvier, Philippe Very, Céline Hudelot, Clément Chastagnol
Such an approach consists of learning a representation of the data such that the label distribution conditioned on this representation is domain-invariant.
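The conditional-invariance property above says p(y | z) should not change with the domain d. A rough empirical check, assuming a 1-D representation and binary labels/domains (all names here are illustrative, not from the paper): discretise z into bins and compare, per bin, the positive-label rate in each domain.

```python
import numpy as np

def conditional_gap(z, y, d, bins=5):
    """Per-bin gap between domain-0 and domain-1 label rates, averaged
    over bins. A value near 0 suggests p(y | z) is approximately
    domain-invariant; near 1 means the label distribution given z
    differs strongly across domains. Illustrative diagnostic only."""
    edges = np.quantile(z, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(z, edges[1:-1]), 0, bins - 1)
    gaps = []
    for b in range(bins):
        m0 = (idx == b) & (d == 0)
        m1 = (idx == b) & (d == 1)
        if m0.any() and m1.any():
            gaps.append(abs(y[m0].mean() - y[m1].mean()))
    return float(np.mean(gaps))
```

If the label depends only on z, the gap stays small; if the label leaks the domain, the gap approaches 1.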