Search Results for author: Titouan Vayer

Found 19 papers, 12 papers with code

Compressive Recovery of Sparse Precision Matrices

1 code implementation · 8 Nov 2023 · Titouan Vayer, Etienne Lasalle, Rémi Gribonval, Paulo Gonçalves

We consider the problem of learning a graph modeling the statistical relations among the $d$ variables of a dataset with $n$ samples $X \in \mathbb{R}^{n \times d}$.
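The paper's compressive recovery scheme is not reproduced here, but the standard (non-compressive) problem it builds on — estimating a sparse precision matrix, whose non-zeros give the conditional-independence graph — can be sketched with scikit-learn's `GraphicalLasso`; the chain-graph ground truth below is an illustrative assumption.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
d, n = 5, 2000
# Ground-truth precision matrix: a chain graph (tridiagonal, positive definite).
Theta = np.eye(d) + 0.4 * (np.eye(d, k=1) + np.eye(d, k=-1))
X = rng.multivariate_normal(np.zeros(d), np.linalg.inv(Theta), size=n)

# Graphical lasso: l1-penalised maximum-likelihood precision estimation.
model = GraphicalLasso(alpha=0.05).fit(X)
precision = model.precision_          # estimated sparse precision, shape (d, d)
edges = np.abs(precision) > 1e-6      # recovered conditional-independence graph
```

The compressive variant studied in the paper works from a low-dimensional sketch of $X$ rather than the full empirical covariance.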


Interpolating between Clustering and Dimensionality Reduction with Gromov-Wasserstein

no code implementations · 5 Oct 2023 · Hugues van Assel, Cédric Vincent-Cuaz, Titouan Vayer, Rémi Flamary, Nicolas Courty

We present a versatile adaptation of existing dimensionality reduction (DR) objectives, enabling the simultaneous reduction of both sample and feature sizes.

Clustering · Dimensionality Reduction

Optimal Transport with Adaptive Regularisation

no code implementations · 4 Oct 2023 · Hugues van Assel, Titouan Vayer, Rémi Flamary, Nicolas Courty

Regularising the primal formulation of optimal transport (OT) with a strictly convex term leads to enhanced numerical complexity and a denser transport plan.
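The adaptive regularisation proposed in the paper is not reproduced here; as background, a minimal numpy sketch of the classical uniformly-regularised case — entropic OT solved by Sinkhorn iterations, whose plan is dense for every entry — which is the behaviour the adaptive scheme sets out to control:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Entropic OT: min_P <P, C> + eps * KL(P || a b^T), via Sinkhorn iterations."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # scale columns to match marginal b
        u = a / (K @ v)                  # scale rows to match marginal a
    return u[:, None] * K * v[None, :]   # transport plan P

rng = np.random.default_rng(0)
x, y = rng.normal(size=(4, 1)), rng.normal(size=(6, 1))
C = (x - y.T) ** 2                       # squared-distance cost matrix
a, b = np.full(4, 1 / 4), np.full(6, 1 / 6)
P = sinkhorn(a, b, C)                    # rows sum to a, columns to b
```

With a single scalar `eps`, every sample receives the same amount of smoothing; the paper replaces this with sample-adaptive regularisation.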

Domain Adaptation

Implicit Differentiation for Hyperparameter Tuning the Weighted Graphical Lasso

no code implementations · 5 Jul 2023 · Can Pouliquen, Paulo Gonçalves, Mathurin Massias, Titouan Vayer

We provide a framework and algorithm for tuning the hyperparameters of the Graphical Lasso via a bilevel optimization problem solved with a first-order method.
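The bilevel first-order method of the paper is not sketched here; for contrast, the standard baseline it aims to improve on — cross-validated selection of a single scalar penalty with scikit-learn's `GraphicalLassoCV` — looks like this (the data below is an illustrative assumption):

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(4), np.eye(4), size=500)

# Grid search over 4 candidate penalties with 3-fold cross-validation.
# Bilevel / implicit-differentiation approaches replace this grid search,
# and can handle per-edge weights rather than one scalar alpha.
model = GraphicalLassoCV(alphas=4, cv=3).fit(X)
best_alpha = model.alpha_             # selected regularisation strength
```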

Bilevel Optimization

Entropic Wasserstein Component Analysis

1 code implementation · 9 Mar 2023 · Antoine Collas, Titouan Vayer, Rémi Flamary, Arnaud Breloy

Dimension reduction (DR) methods provide systematic approaches for analyzing high-dimensional data.

Dimensionality Reduction

Template based Graph Neural Network with Optimal Transport Distances

1 code implementation · 31 May 2022 · Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, Nicolas Courty

Current Graph Neural Networks (GNN) architectures generally rely on two important components: node features embedding through message passing, and aggregation with a specialized form of pooling.
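A minimal numpy sketch of those two components — one round of neighbourhood message passing followed by mean pooling. The OT-based template pooling the paper proposes in place of this simple pooling is not reproduced; all shapes below are illustrative assumptions.

```python
import numpy as np

def gnn_layer(A, H, W):
    """One message-passing step: average neighbour features, then linear map + ReLU."""
    deg = A.sum(axis=1, keepdims=True)
    A_norm = A / np.maximum(deg, 1)       # row-normalised adjacency
    return np.maximum(A_norm @ H @ W, 0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # 3-node graph
H = rng.normal(size=(3, 4))               # node features
W = rng.normal(size=(4, 8))               # learnable weights

H1 = gnn_layer(A, H, W)                   # updated node embeddings, shape (3, 8)
g = H1.mean(axis=0)                       # mean pooling -> graph-level embedding
```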

Graph Classification · Graph Matching

Controlling Wasserstein Distances by Kernel Norms with Application to Compressive Statistical Learning

no code implementations · 1 Dec 2021 · Titouan Vayer, Rémi Gribonval

Based on the relations between the MMD and the Wasserstein distances, we provide guarantees for compressive statistical learning by introducing and studying the concept of Wasserstein regularity of the learning task, i.e. the case where some task-specific metric between probability distributions can be bounded by a Wasserstein distance.
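For reference, the MMD side of that comparison is easy to compute directly; a minimal numpy implementation of the (biased) Gaussian-kernel MMD² between two samples (the kernel and bandwidth are illustrative choices, not the paper's):

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    """Biased MMD^2 estimate with Gaussian kernel k(x,y) = exp(-||x-y||^2 / (2 sigma^2))."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(2.0, 1.0, size=(200, 2))
print(mmd2(X, X))   # exactly 0 for identical samples (biased estimator)
print(mmd2(X, Y))   # strictly positive for the shifted sample
```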

Semi-relaxed Gromov-Wasserstein divergence with applications on graphs

1 code implementation · 6 Oct 2021 · Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, Nicolas Courty

To this end, the Gromov-Wasserstein (GW) distance, based on Optimal Transport (OT), has proven to be successful in handling the specific nature of the associated objects.

Dictionary Learning

Semi-relaxed Gromov-Wasserstein divergence and applications on graphs

no code implementations · ICLR 2022 · Cédric Vincent-Cuaz, Rémi Flamary, Marco Corneli, Titouan Vayer, Nicolas Courty

To this end, the Gromov-Wasserstein (GW) distance, based on Optimal Transport (OT), has proven to be successful in handling the specific nature of the associated objects.

Dictionary Learning

Fast Multiscale Diffusion on Graphs

1 code implementation · 29 Apr 2021 · Sibylle Marcotte, Amélie Barbe, Rémi Gribonval, Titouan Vayer, Marc Sebban, Pierre Borgnat, Paulo Gonçalves

Diffusing a graph signal at multiple scales requires computing the action of the exponential of several multiples of the Laplacian matrix.
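The paper accelerates this with Chebyshev polynomial approximations (not shown here); as a baseline, SciPy's `expm_multiply` already evaluates the action $\exp(-\tau L)\,x$ at several scales $\tau$ without ever forming the dense matrix exponential:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import expm_multiply

# Path graph on 5 nodes: Laplacian L = D - W.
W = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = csr_matrix(np.diag(W.sum(axis=1)) - W)

x = np.zeros(5)
x[0] = 1.0                                # unit of heat on the first node

# Action of exp(-tau * L) on x at 4 diffusion scales tau in [0.1, 2.0].
diffused = expm_multiply(-L, x, start=0.1, stop=2.0, num=4, endpoint=True)
# diffused[k] is the signal at scale tau_k; since L @ 1 = 0, total heat is conserved.
```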

Online Graph Dictionary Learning

1 code implementation · 12 Feb 2021 · Cédric Vincent-Cuaz, Titouan Vayer, Rémi Flamary, Marco Corneli, Nicolas Courty

Dictionary learning is a key tool for representation learning that explains the data as a linear combination of a few basic elements.
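The classical vector-valued version of that idea can be sketched with scikit-learn's `DictionaryLearning` (the paper's online variant operates on graphs via Gromov-Wasserstein and is not reproduced here):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))             # 100 samples, 8 features

# Learn 5 atoms so each sample is a sparse linear combination of them.
dico = DictionaryLearning(n_components=5, alpha=1.0, max_iter=50, random_state=0)
codes = dico.fit_transform(X)             # sparse codes, shape (100, 5)
atoms = dico.components_                  # dictionary atoms, shape (5, 8)
```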

Dictionary Learning · Graph Classification · +2

A contribution to Optimal Transport on incomparable spaces

no code implementations · 9 Nov 2020 · Titouan Vayer

Optimal Transport is a theory that allows one to define geometric notions of distance between probability distributions and to find correspondences between sets of points.

BIG-bench Machine Learning · Domain Adaptation

CO-Optimal Transport

1 code implementation · NeurIPS 2020 · Ievgen Redko, Titouan Vayer, Rémi Flamary, Nicolas Courty

Optimal transport (OT) is a powerful geometric and probabilistic tool for finding correspondences and measuring similarity between two distributions.

Clustering · Data Summarization · +1

Sliced Gromov-Wasserstein

1 code implementation · NeurIPS 2019 · Titouan Vayer, Rémi Flamary, Romain Tavenard, Laetitia Chapel, Nicolas Courty

Recently used in various machine learning contexts, the Gromov-Wasserstein distance (GW) allows for comparing distributions whose supports do not necessarily lie in the same metric space.
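The slicing principle the paper adapts to GW can be illustrated with the plain sliced Wasserstein distance: project both point clouds onto random 1D directions, where OT between equal-size samples reduces to sorting. This is a sketch of the underlying idea only — Sliced GW relies on a different, GW-specific 1D solver not reproduced here.

```python
import numpy as np

def sliced_wasserstein2(X, Y, n_proj=100, seed=0):
    """Average squared 1D Wasserstein-2 distance over random projections.

    Assumes X and Y have the same number of points, so 1D OT is just
    matching sorted projections.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)       # random direction on the sphere
        px, py = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean((px - py) ** 2)     # 1D OT cost via sorting
    return total / n_proj

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
print(sliced_wasserstein2(X, X))             # 0 for identical point clouds
```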

Fused Gromov-Wasserstein distance for structured objects: theoretical foundations and mathematical properties

1 code implementation · 7 Nov 2018 · Titouan Vayer, Laetitia Chapel, Rémi Flamary, Romain Tavenard, Nicolas Courty

Optimal transport theory has recently found many applications in machine learning thanks to its capacity for comparing various machine learning objects considered as distributions.

BIG-bench Machine Learning

Optimal Transport for structured data with application on graphs

2 code implementations · 23 May 2018 · Titouan Vayer, Laetitia Chapel, Rémi Flamary, Romain Tavenard, Nicolas Courty

This work considers the problem of computing distances between structured objects such as undirected graphs, seen as probability distributions in a specific metric space.

Clustering · Graph Classification · +2
