no code implementations • 4 Apr 2024 • Mokhtar Z. Alaya, Alain Rakotomamonjy, Maxime Berar, Gilles Gasso
We particularly focus on the Gaussian smoothed sliced Wasserstein distance and prove that it converges at rate $O(n^{-1/2})$.
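As a rough illustration (a sketch, not the paper's exact estimator), a Gaussian smoothed sliced Wasserstein distance can be approximated by Monte Carlo: project both samples onto random directions, approximate the convolution with $N(0, \sigma^2)$ by adding Gaussian noise to the projected samples, and average the closed-form 1D Wasserstein distances. The function name and defaults below are illustrative only.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def gaussian_smoothed_sliced_w1(X, Y, sigma=1.0, n_projections=100, seed=0):
    """Monte Carlo sketch of a Gaussian smoothed sliced W1 distance.

    Smoothing the projected 1D measures with N(0, sigma^2) is approximated
    by adding Gaussian noise to the projected samples.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)  # uniform direction on the sphere
        u = X @ theta + sigma * rng.normal(size=len(X))  # smoothing noise
        v = Y @ theta + sigma * rng.normal(size=len(Y))
        total += wasserstein_distance(u, v)  # closed-form 1D W1
    return total / n_projections

# Two Gaussian blobs in R^5; the estimate shrinks as sigma grows.
X = np.random.default_rng(1).normal(size=(500, 5))
Y = np.random.default_rng(2).normal(loc=0.5, size=(500, 5))
print(gaussian_smoothed_sliced_w1(X, Y, sigma=1.0))
```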
no code implementations • 12 Feb 2024 • Mohamad Dhaini, Maxime Berar, Paul Honeine, Antonin Van Exem
Contrastive learning has proven highly effective for representation learning, especially in image classification tasks.
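For context, a minimal NumPy sketch of the standard InfoNCE contrastive objective (a common formulation, not necessarily the one used in this paper) is shown below: matched rows of the two embedding batches are positives, all other rows act as negatives.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Minimal InfoNCE sketch: row i of z_a and row i of z_b are two views
    of the same example (positives); other rows serve as negatives."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # cosine similarities
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))  # positives sit on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Two nearly identical views give a small loss.
print(info_nce_loss(z, z + 0.01 * rng.normal(size=(8, 16))))
```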
1 code implementation • 8 Sep 2022 • Paul Peseux, Maxime Berar, Thierry Paquet, Victor Nicollet
Categorical data are present in key areas such as healthcare and supply chains, and such data require specific treatment.
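One generic example of such treatment (a standard baseline, not this paper's contribution) is encoding categories so that gradient-based models can consume them, e.g. one-hot encoding; the toy features below are hypothetical.

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy categorical features (e.g. a supply-chain record).
X = np.array([["fr", "truck"], ["de", "rail"], ["fr", "rail"], ["es", "truck"]])
y = np.array([0, 1, 1, 0])

# One-hot encoding makes categories usable by gradient-based models;
# handle_unknown="ignore" keeps inference safe on unseen categories.
model = make_pipeline(OneHotEncoder(handle_unknown="ignore"), LogisticRegression())
model.fit(X, y)
print(model.predict(np.array([["fr", "rail"]])))
```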
no code implementations • 20 Oct 2021 • Alain Rakotomamonjy, Mokhtar Z. Alaya, Maxime Berar, Gilles Gasso
In this paper, we analyze the theoretical properties of this distance as well as those of generalized versions denoted as Gaussian smoothed sliced divergences.
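To see how the generalization works, the sliced-and-smoothed scheme sketched above can take any 1D divergence as a plug-in; the helper below is hypothetical and only meant to illustrate swapping the base divergence (here 1D Wasserstein vs. energy distance).

```python
import numpy as np
from scipy.stats import wasserstein_distance, energy_distance

def smoothed_sliced_divergence(X, Y, base_div, sigma=1.0, n_proj=100, seed=0):
    """Sketch of a Gaussian smoothed sliced divergence: the same
    projection-plus-noise scheme, with a pluggable 1D divergence
    base_div(u, v). Hypothetical helper, for illustration only."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)
        u = X @ theta + sigma * rng.normal(size=len(X))
        v = Y @ theta + sigma * rng.normal(size=len(Y))
        vals.append(base_div(u, v))
    return float(np.mean(vals))

X = np.random.default_rng(1).normal(size=(300, 4))
Y = np.random.default_rng(2).normal(loc=1.0, size=(300, 4))
print(smoothed_sliced_divergence(X, Y, wasserstein_distance))
print(smoothed_sliced_divergence(X, Y, energy_distance))
```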
no code implementations • 4 Jun 2021 • Mokhtar Z. Alaya, Gilles Gasso, Maxime Berar, Alain Rakotomamonjy
We provide a theoretical analysis of this new divergence, called the heterogeneous Wasserstein discrepancy (HWD), and show that it preserves several interesting properties, including rotation invariance.
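As a toy illustration of rotation invariance (explicitly not the paper's HWD), one can compare the 1D distributions of intra-sample pairwise distances, which rotations leave unchanged by construction:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import wasserstein_distance

def distance_profile_discrepancy(X, Y):
    """Toy rotation-invariant discrepancy (NOT the paper's HWD): compare the
    1D distributions of intra-sample pairwise distances, which are unchanged
    by any rotation of X or Y."""
    return wasserstein_distance(pdist(X), pdist(Y))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Random rotation via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
print(distance_profile_discrepancy(X, X @ Q.T))  # ~0: rotation-invariant
```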
1 code implementation • 15 Jun 2020 • Alain Rakotomamonjy, Rémi Flamary, Gilles Gasso, Mokhtar Z. Alaya, Maxime Berar, Nicolas Courty
We address the problem of unsupervised domain adaptation under the setting of generalized target shift (joint class-conditional and label shifts).
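For the label-shift component of this setting, a standard baseline (black-box shift estimation, Lipton et al. 2018; not necessarily this paper's method) recovers class-ratio importance weights by solving a linear system built from a source classifier's confusion matrix:

```python
import numpy as np

def estimate_label_shift(conf_mat, target_pred_dist):
    """BBSE sketch: solve C w = q for class-ratio weights w, where
    C[i, j] = P_src(pred=i, y=j) and q[i] = P_tgt(pred=i).
    A standard baseline, not this paper's full method (which also
    handles class-conditional shift)."""
    w = np.linalg.solve(conf_mat, target_pred_dist)
    return np.clip(w, 0.0, None)  # ratios must be nonnegative

# Source: balanced classes, classifier with 90% per-class accuracy.
C = np.array([[0.45, 0.05],
              [0.05, 0.45]])
# Target: class 1 is three times more frequent than class 0.
q = np.array([0.25 * 0.9 + 0.75 * 0.1, 0.25 * 0.1 + 0.75 * 0.9])
print(estimate_label_shift(C, q))  # [0.5, 1.5]: per-class importance weights
```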
no code implementations • NeurIPS 2019 • Abraham Traore, Maxime Berar, Alain Rakotomamonjy
This paper introduces a new approach for the scalable Tucker decomposition problem.
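For reference, a standard (non-scalable) Tucker decomposition is readily available in TensorLy; the sketch below shows the factorization the paper targets, not the paper's own algorithm.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Tucker factors a tensor X into a small core G and one factor matrix per
# mode: X ≈ G x_1 A x_2 B x_3 C (mode-n products).
X = tl.tensor(np.random.default_rng(0).normal(size=(20, 30, 40)))
core, factors = tucker(X, rank=[5, 5, 5])

# Reconstruct and check the relative fit.
X_hat = tl.tucker_to_tensor((core, factors))
print(core.shape, [f.shape for f in factors])
print(tl.norm(X - X_hat) / tl.norm(X))
```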
no code implementations • 1 Mar 2018 • Alain Rakotomamonjy, Abraham Traoré, Maxime Berar, Rémi Flamary, Nicolas Courty
This paper presents a distance-based discriminative framework for learning with probability distributions.
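A minimal sketch of the distance-based idea (generic setup, not the paper's exact framework): treat each example as an empirical distribution, build a pairwise Wasserstein distance matrix, and feed it to any distance-based classifier such as k-NN with a precomputed metric.

```python
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.neighbors import KNeighborsClassifier

# Each "example" is an empirical distribution (a bag of 1D samples).
rng = np.random.default_rng(0)
bags = [rng.normal(loc=0, size=50) for _ in range(20)] + \
       [rng.normal(loc=2, size=50) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)

# Pairwise W1 distance matrix between the distributions.
n = len(bags)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = wasserstein_distance(bags[i], bags[j])

# Plug the distances into any distance-based classifier, e.g. k-NN.
clf = KNeighborsClassifier(n_neighbors=3, metric="precomputed")
clf.fit(D, labels)
print(clf.predict(D[:2]))  # predictions for the first two bags
```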