no code implementations • 3 Feb 2024 • Hugues van Assel, Cédric Vincent-Cuaz, Nicolas Courty, Rémi Flamary, Pascal Frossard, Titouan Vayer
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
1 code implementation • 8 Nov 2023 • Titouan Vayer, Etienne Lasalle, Rémi Gribonval, Paulo Gonçalves
We consider the problem of learning a graph modeling the statistical relations of the $d$ variables from a dataset with $n$ samples $X \in \mathbb{R}^{n \times d}$.
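One standard way to learn such a graph is via a sparse estimate of the precision (inverse covariance) matrix, whose non-zero off-diagonal entries define the edges. A minimal sketch with scikit-learn's `GraphicalLasso` (a generic estimator chosen here for illustration, not necessarily the method of the paper):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))          # n = 200 samples, d = 5 variables

model = GraphicalLasso(alpha=0.2).fit(X)   # alpha controls sparsity
P = model.precision_                       # estimated precision (inverse covariance)

# Non-zero off-diagonal entries of P define the edges of the learned graph.
edges = [(i, j) for i in range(5) for j in range(i + 1, 5) if abs(P[i, j]) > 1e-8]
```

Larger `alpha` zeroes out more off-diagonal entries, yielding a sparser graph.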
no code implementations • 5 Oct 2023 • Hugues van Assel, Cédric Vincent-Cuaz, Titouan Vayer, Rémi Flamary, Nicolas Courty
We present a versatile adaptation of existing dimensionality reduction (DR) objectives, enabling the simultaneous reduction of both sample and feature sizes.
no code implementations • 4 Oct 2023 • Hugues van Assel, Titouan Vayer, Rémi Flamary, Nicolas Courty

Regularising the primal formulation of optimal transport (OT) with a strictly convex term improves the numerical behaviour of the problem but yields a denser transport plan.
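The best-known instance of such regularisation is the entropic one, solvable with Sinkhorn iterations. A minimal NumPy sketch (a standard textbook example, not the specific adaptive scheme studied in the paper):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    """Approximate entropic-OT plan between histograms a and b with cost matrix C."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                 # alternate scalings to match the
        u = a / (K @ v)                   # target marginals b and a
    return u[:, None] * K * v[None, :]

a = np.array([0.5, 0.5])
b = np.array([0.3, 0.7])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
P = sinkhorn(a, b, C)                     # plan with row sums a, column sums b
```

Shrinking `eps` makes the plan sparser and closer to the unregularised optimum, at the price of slower and less stable iterations — the density/complexity trade-off mentioned above.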
no code implementations • 5 Jul 2023 • Can Pouliquen, Paulo Gonçalves, Mathurin Massias, Titouan Vayer
We provide a framework and algorithm for tuning the hyperparameters of the Graphical Lasso via a bilevel optimization problem solved with a first-order method.
1 code implementation • 9 Mar 2023 • Antoine Collas, Titouan Vayer, Rémi Flamary, Arnaud Breloy
Dimension reduction (DR) methods provide systematic approaches for analyzing high-dimensional data.
1 code implementation • 21 Oct 2022 • Alexandre Hippert-Ferrer, Florent Bouchard, Ammar Mian, Titouan Vayer, Arnaud Breloy
Graphical models and factor analysis are well-established tools in multivariate statistics.
1 code implementation • 31 May 2022 • Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, Nicolas Courty
Current Graph Neural Network (GNN) architectures generally rely on two important components: node feature embedding through message passing, and aggregation with a specialized form of pooling.
Ranked #1 on Graph Classification on NCI1
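The two components named above can be sketched in a few lines of NumPy: one message-passing layer (GCN-style normalised-adjacency propagation) followed by mean pooling into a graph-level embedding. This is generic GNN machinery for illustration, not the pooling operator proposed in the paper:

```python
import numpy as np

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)   # 3-node graph
H = np.eye(3)                                                  # one-hot node features
W = np.full((3, 2), 0.5)                                       # toy layer weights

A_hat = A + np.eye(3)                              # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))  # symmetric normalisation
H1 = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # message passing + ReLU
g = H1.mean(axis=0)                                # mean pooling -> graph embedding
```

The graph-level vector `g` is what a downstream classifier would consume for graph classification.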
no code implementations • 1 Dec 2021 • Titouan Vayer, Rémi Gribonval
Based on the relations between the MMD and the Wasserstein distances, we provide guarantees for compressive statistical learning by introducing and studying the concept of Wasserstein regularity of the learning task: the property that some task-specific metric between probability distributions can be bounded by a Wasserstein distance.
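For one-dimensional empirical measures, the Wasserstein distance appearing in such bounds is cheap to compute. A small illustration with SciPy (chosen here for illustration; the paper works with general distributions):

```python
import numpy as np
from scipy.stats import wasserstein_distance

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.5, 1.5, 2.5])      # the same samples shifted by 0.5
d = wasserstein_distance(x, y)     # 1-Wasserstein distance: here d = 0.5
```

Wasserstein regularity of a task then says that the task-specific discrepancy between two data distributions is controlled by quantities like `d`.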
1 code implementation • 6 Oct 2021 • Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, Nicolas Courty
To this end, the Gromov-Wasserstein (GW) distance, based on Optimal Transport (OT), has proven to be successful in handling the specific nature of the associated objects.
no code implementations • ICLR 2022 • Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, Nicolas Courty
To this end, the Gromov-Wasserstein (GW) distance, based on Optimal Transport (OT), has proven to be successful in handling the specific nature of the associated objects.
1 code implementation • 29 Apr 2021 • Sibylle Marcotte, Amélie Barbe, Rémi Gribonval, Titouan Vayer, Marc Sebban, Pierre Borgnat, Paulo Gonçalves
Diffusing a graph signal at multiple scales requires computing the action of the exponential of several multiples of the Laplacian matrix.
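A brute-force version of this multiscale diffusion simply forms `exp(-t L) x` with a dense matrix exponential for each scale `t`; the paper studies cheaper polynomial approximations, so the sketch below is only the baseline being accelerated:

```python
import numpy as np
from scipy.linalg import expm

# Path graph on 4 nodes: combinatorial Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A

x = np.array([1.0, 0.0, 0.0, 0.0])        # unit mass on the first node
scales = [0.1, 1.0, 10.0]
diffused = [expm(-t * L) @ x for t in scales]
```

Each diffused signal keeps total mass 1 and flattens toward the uniform vector as `t` grows, since `exp(-t L) 1 = 1` and the null space of `L` is spanned by the constant vector.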
1 code implementation • 12 Feb 2021 • Cédric Vincent-Cuaz, Titouan Vayer, Rémi Flamary, Marco Corneli, Nicolas Courty
Dictionary learning is a key tool for representation learning, which explains the data as a linear combination of a few basic elements.
Ranked #1 on Graph Classification on BZR
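Classical (vector-space) dictionary learning, which the paper extends to graphs via Gromov-Wasserstein, can be sketched with scikit-learn; this linear example does not cover the graph setting itself:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))                 # 50 signals of dimension 8

dl = DictionaryLearning(n_components=4, alpha=1.0, max_iter=100, random_state=0)
codes = dl.fit_transform(X)                      # sparse coefficients, shape (50, 4)
atoms = dl.components_                           # dictionary atoms, shape (4, 8)
```

Each signal is approximated as `codes[i] @ atoms`: a linear combination of a few atoms, with sparsity encouraged by the l1 penalty weight `alpha`.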
no code implementations • 9 Nov 2020 • Titouan Vayer
Optimal Transport is a theory that makes it possible to define geometrical notions of distance between probability distributions and to find correspondences, or relationships, between sets of points.
1 code implementation • NeurIPS 2020 • Ievgen Redko, Titouan Vayer, Rémi Flamary, Nicolas Courty
Optimal transport (OT) is a powerful geometric and probabilistic tool for finding correspondences and measuring similarity between two distributions.
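Between two small discrete distributions, the correspondences OT finds can be computed exactly by solving a linear program; a sketch with SciPy's generic `linprog` solver (a baseline formulation, not the method of the paper):

```python
import numpy as np
from scipy.optimize import linprog

a = np.array([0.4, 0.6])                  # source weights
b = np.array([0.5, 0.5])                  # target weights
C = np.array([[0.0, 2.0],
              [2.0, 0.0]])                # pairwise ground costs

# Variables: the flattened transport plan P (2x2), minimising <C, P>
# subject to row sums a and column sums b.
A_eq = np.zeros((4, 4))
for i in range(2):
    A_eq[i, i * 2:(i + 1) * 2] = 1.0      # row-sum constraints
for j in range(2):
    A_eq[2 + j, j::2] = 1.0               # column-sum constraints
res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]), bounds=(0, None))
P = res.x.reshape(2, 2)                   # optimal transport plan
cost = res.fun                            # OT cost (here 0.2)
```

The optimal plan `P` is the correspondence: entry `P[i, j]` says how much mass of source point `i` is matched to target point `j`.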
1 code implementation • 10 Feb 2020 • Titouan Vayer, Romain Tavenard, Laetitia Chapel, Nicolas Courty, Rémi Flamary, Yann Soullard
Multivariate time series are ubiquitous objects in signal processing.
1 code implementation • NeurIPS 2019 • Titouan Vayer, Rémi Flamary, Romain Tavenard, Laetitia Chapel, Nicolas Courty
Recently used in various machine learning contexts, the Gromov-Wasserstein distance (GW) allows for comparing distributions whose supports do not necessarily lie in the same metric space.
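Concretely, GW compares pairwise distances across the two spaces rather than points directly. The sketch below only evaluates the (squared-loss) GW objective for a given coupling `T`; solving for the optimal `T` is the hard part and is omitted:

```python
import numpy as np

def gw_cost(C1, C2, T):
    """sum over i,j,k,l of (C1[i,k] - C2[j,l])**2 * T[i,j] * T[k,l]."""
    diff = C1[:, None, :, None] - C2[None, :, None, :]
    return float(np.einsum('ijkl,ij,kl->', diff ** 2, T, T))

C = np.array([[0.0, 1.0], [1.0, 0.0]])    # intra-domain distance matrix
T_id = np.eye(2) / 2                      # identity coupling of uniform weights
# Coupling a space with itself via the identity gives zero GW cost;
# scaling one space's distances makes the cost strictly positive.
```

Because only intra-domain distance matrices `C1`, `C2` enter the objective, the two supports never need to live in the same metric space.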
1 code implementation • 7 Nov 2018 • Titouan Vayer, Laetitia Chapel, Rémi Flamary, Romain Tavenard, Nicolas Courty
Optimal transport theory has recently found many applications in machine learning thanks to its ability to compare various machine learning objects viewed as distributions.
2 code implementations • 23 May 2018 • Titouan Vayer, Laetitia Chapel, Rémi Flamary, Romain Tavenard, Nicolas Courty
This work considers the problem of computing distances between structured objects such as undirected graphs, seen as probability distributions in a specific metric space.
Ranked #3 on Graph Classification on NCI1