no code implementations • 30 Apr 2024 • Jérôme Bolte, Tam Le, Éric Moulines, Edouard Pauwels
Motivated by the widespread use of approximate derivatives in machine learning and optimization, we study inexact subgradient methods with non-vanishing additive errors and step sizes.
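As a hedged illustration of this setting (a minimal sketch, not the paper's algorithm; `subgradient_oracle` and `noise_level` are hypothetical names), the following Python snippet runs a subgradient method whose oracle is corrupted by a bounded, non-vanishing additive error and whose step size stays constant:

import numpy as np

def inexact_subgradient_method(x0, subgradient_oracle, step_size=1e-2,
                               noise_level=1e-1, n_iters=1000, seed=0):
    # Subgradient steps with a non-vanishing additive error and a
    # constant step size (illustrative sketch only).
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = subgradient_oracle(x)
        e = rng.uniform(-noise_level, noise_level, size=x.shape)  # bounded error
        x = x - step_size * (g + e)
    return x

# Example: minimize f(x) = ||x||_1; sign(x) is a valid subgradient.
x_final = inexact_subgradient_method(np.ones(5), np.sign)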
no code implementations • 19 Feb 2024 • Tam Le, Jérôme Malick
Distributionally robust optimization has emerged as an attractive way to train robust machine learning models, capturing data uncertainty and distribution shifts.
1 code implementation • 7 Feb 2024 • Tam Le, Truyen Nguyen, Kenji Fukumizu
In connection with the OW, we show that one only needs to solve a univariate optimization problem to compute the GST, unlike the complex two-level optimization problem in the OW.
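As a hedged sketch only: assuming the GST computation reduces to minimizing a one-dimensional function over an interval (the concrete objective comes from the paper), a standard bounded scalar solver suffices; `phi` below is a hypothetical placeholder objective.

from scipy.optimize import minimize_scalar

# Hypothetical univariate objective standing in for the GST reduction;
# the true objective is defined by the paper's formulation.
phi = lambda t: (t - 0.3) ** 2 + 0.1 * abs(t)

res = minimize_scalar(phi, bounds=(0.0, 1.0), method="bounded")
gst_value = res.fun  # optimal value of the univariate problem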
1 code implementation • 29 Jan 2024 • Khai Nguyen, Shujian Zhang, Tam Le, Nhat Ho
From the RPD, we derive the random-path slicing distribution (RPSD) and two variants of sliced Wasserstein, i.e., the Random-Path Projection Sliced Wasserstein (RPSW) and the Importance Weighted Random-Path Projection Sliced Wasserstein (IWRPSW).
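For context, here is a minimal NumPy sketch of the vanilla sliced Wasserstein distance between two equal-size empirical measures, with projection directions drawn uniformly from the unit sphere; RPSW and IWRPSW replace this uniform slicing distribution with the random-path construction.

import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, seed=0):
    # Vanilla sliced Wasserstein-p between two equal-size point clouds.
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_projections, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    # 1D optimal transport = matching sorted projections; average over slices.
    proj_X = np.sort(X @ theta.T, axis=0)
    proj_Y = np.sort(Y @ theta.T, axis=0)
    return np.mean(np.abs(proj_X - proj_Y) ** p) ** (1.0 / p)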
no code implementations • 2 Nov 2023 • Thong Pham, Shohei Shimizu, Hideitsu Hino, Tam Le
We consider the problem of estimating the counterfactual joint distribution of multiple quantities of interest (e.g., outcomes) in a multivariate causal model extended from the classical difference-in-differences design.
1 code implementation • 20 Oct 2023 • Tam Le, Truyen Nguyen, Kenji Fukumizu
It is known that such an OT problem (i.e., the tree-Wasserstein (TW) distance) admits a closed-form expression, but it depends fundamentally on the underlying tree structure over the supports of the input measures.
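The closed form in question is standard: for measures $\mu, \nu$ on a tree, $\mathrm{TW}(\mu,\nu) = \sum_{e} w_e \, |\mu(\Gamma_e) - \nu(\Gamma_e)|$, where $\Gamma_e$ is the set of nodes below edge $e$. A minimal sketch (the parent-pointer encoding is an assumption for illustration):

def tree_wasserstein(parent, weight, mu, nu):
    # parent[v]: parent of node v (root 0 has parent -1); weight[v]: weight
    # of the edge (v, parent[v]); nodes are ordered so parents precede
    # children. Accumulates subtree mass differences bottom-up.
    diff = [m - n for m, n in zip(mu, nu)]
    total = 0.0
    for v in range(len(parent) - 1, 0, -1):
        total += weight[v] * abs(diff[v])
        diff[parent[v]] += diff[v]
    return total

# Chain 0-1-2 with unit edge weights: moving unit mass from node 2 to
# node 0 costs 2.
print(tree_wasserstein([-1, 0, 1], [0.0, 1.0, 1.0], [0, 0, 1], [1, 0, 0]))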
no code implementations • 29 Aug 2023 • Anh-Khoa Nguyen Vu, Thanh-Toan Do, Vinh-Tiep Nguyen, Tam Le, Minh-Triet Tran, Tam V. Nguyen
Our overarching goal is to train a generator that captures the data variations of the base dataset.
1 code implementation • 24 Feb 2023 • Tam Le, Truyen Nguyen, Kenji Fukumizu
We show that the proposed unbalanced Sobolev transport (UST) admits a closed-form formula for fast computation, and it is also negative definite.
no code implementations • 31 Jan 2023 • Xinru Hua, Truyen Nguyen, Tam Le, Jose Blanchet, Viet Anh Nguyen
The scarcity of labeled data is a long-standing challenge for many machine learning tasks.
1 code implementation • 22 Feb 2022 • Tam Le, Truyen Nguyen, Dinh Phung, Viet Anh Nguyen
In this work, we consider probability measures supported on a graph metric space and propose a novel Sobolev transport metric.
no code implementations • NeurIPS 2021 • Tam Le, Truyen Nguyen, Makoto Yamada, Jose Blanchet, Viet Anh Nguyen
In this paper, we propose a novel and coherent scheme for kernel-reweighted regression by reparametrizing the sample weights using a doubly non-negative matrix.
no code implementations • 29 Sep 2021 • Truyen Nguyen, Xinru Hua, Tam Le, Jose Blanchet, Viet Anh Nguyen
The scarcity of labeled data is a long-standing challenge for cross-domain machine learning tasks.
no code implementations • NeurIPS 2021 • Jérôme Bolte, Tam Le, Edouard Pauwels, Antonio Silveti-Falls
In view of training increasingly complex learning architectures, we establish a nonsmooth implicit function theorem with an operational calculus.
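For intuition, recall the smooth implicit function theorem: if $F(x(\theta), \theta) = 0$ and $\partial_x F$ is invertible, differentiation gives

$$\frac{dx}{d\theta}(\theta) = -\big(\partial_x F(x(\theta),\theta)\big)^{-1}\,\partial_\theta F(x(\theta),\theta),$$

the formula that implicit-differentiation pipelines backpropagate through; the paper's contribution, not reproduced here, is a nonsmooth counterpart equipped with an operational calculus suitable for such training.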
1 code implementation • ICCV 2021 • Trung Nguyen, Quang-Hieu Pham, Tam Le, Tung Pham, Nhat Ho, Binh-Son Hua
From this study, we propose to use the sliced Wasserstein distance and its variants for learning representations of 3D point clouds.
no code implementations • 24 Jan 2021 • Tam Le, Truyen Nguyen
In this work, we consider an \textit{entropy partial transport} (EPT) problem for nonnegative measures with different total masses supported on a tree.
no code implementations • 1 Jan 2021 • ZiHao Wang, Xu Zhao, Tam Le, Hao Wu, Yong Zhang, Makoto Yamada
In this work, we consider OT over tree metrics, which is more general than the sliced Wasserstein distance and includes it as a special case, and we propose a fast $O(n)$ algorithm for computing the optimal Wasserstein-1 transport plan between two distributions supported on a tree.
1 code implementation • 13 Jun 2020 • Vu Nguyen, Tam Le, Makoto Yamada, Michael A. Osborne
Building upon tree-Wasserstein (TW), which is a negative definite variant of OT, we develop a novel discrepancy for neural architectures and demonstrate it within a Gaussian process surrogate model for sequential NAS settings.
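A standard fact behind such constructions (stated here for context, not as the paper's exact recipe): if a discrepancy $d$ is negative definite, then $k = \exp(-\lambda d)$ is a positive definite kernel by Schoenberg's theorem, so it can serve as a GP covariance. A sketch with a hypothetical `tw_discrepancy` callable:

import numpy as np

def tw_kernel_matrix(archs, tw_discrepancy, lam=1.0):
    # Schoenberg: negative definite d  =>  exp(-lam * d) is positive
    # definite, hence usable as a Gaussian process covariance.
    n = len(archs)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = np.exp(-lam * tw_discrepancy(archs[i], archs[j]))
    return K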
no code implementations • 10 Oct 2019 • Tam Le, Viet Huynh, Nhat Ho, Dinh Phung, Makoto Yamada
We study in this paper a variant of the Wasserstein barycenter problem, which we refer to as the tree-Wasserstein barycenter, by leveraging a specific class of ground metrics, namely tree metrics, for the Wasserstein distance.
1 code implementation • 10 Oct 2019 • Tam Le, Nhat Ho, Makoto Yamada
By leveraging a tree structure, we propose to align \textit{flows} from a root to each support, instead of the pairwise tree metrics between supports (i.e., flows from one support to another) used in GW.
1 code implementation • 5 Sep 2019 • Yanbin Liu, Makoto Yamada, Yao-Hung Hubert Tsai, Tam Le, Ruslan Salakhutdinov, Yi Yang
To estimate the mutual information from data, a common practice is preparing a set of paired samples $\{(\mathbf{x}_i,\mathbf{y}_i)\}_{i=1}^n \stackrel{\mathrm{i.i.d.}}{\sim} p(\mathbf{x},\mathbf{y})$.
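For reference, the quantity being estimated is the standard mutual information

$$I(X;Y) = \iint p(\mathbf{x},\mathbf{y}) \log \frac{p(\mathbf{x},\mathbf{y})}{p(\mathbf{x})\,p(\mathbf{y})}\, d\mathbf{x}\, d\mathbf{y},$$

which such paired samples approximate empirically.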
no code implementations • 26 Feb 2019 • Tatsuya Shiraishi, Tam Le, Hisashi Kashima, Makoto Yamada
In this paper, we propose topological Bayesian optimization, which can efficiently find an optimal solution from structured data using \emph{topological information}.
2 code implementations • NeurIPS 2019 • Tam Le, Makoto Yamada, Kenji Fukumizu, Marco Cuturi
Optimal transport (OT) theory defines a powerful set of tools to compare probability distributions.
1 code implementation • 12 Oct 2018 • Eugene Ndiaye, Tam Le, Olivier Fercoq, Joseph Salmon, Ichiro Takeuchi
Popular machine learning estimators involve regularization parameters that can be challenging to tune, and standard strategies rely on grid search for this task.
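As a baseline illustration of the grid-search practice this work improves on (scikit-learn names, standard API; not the paper's method):

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

X, y = np.random.randn(100, 20), np.random.randn(100)
# Standard practice: exhaustive cross-validation over a parameter grid.
grid = {"alpha": np.logspace(-4, 1, 20)}
search = GridSearchCV(Lasso(max_iter=10000), grid, cv=5).fit(X, y)
best_alpha = search.best_params_["alpha"]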
1 code implementation • NeurIPS 2018 • Tam Le, Makoto Yamada
To deal with this, an emerging approach is to use kernel methods, where an appropriate geometry for persistence diagrams (PDs) is an important factor in measuring the similarity of PDs.