no code implementations • 6 May 2024 • Sharath Raghvendra, Pouyan Shirzadian, Kaiyi Zhang
We show that (1) $k$-RPW satisfies the metric properties, (2) $k$-RPW is robust to small outlier mass while retaining the sensitivity of $2$-Wasserstein distance to minor geometric differences, and (3) when $k$ is a constant, $k$-RPW distance between empirical distributions on $n$ samples in $\mathbb{R}^2$ converges to the true distance at a rate of $n^{-1/3}$, which is faster than the convergence rate of $n^{-1/4}$ for the $2$-Wasserstein distance.
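The outlier sensitivity that $k$-RPW is designed to relax can be seen in a minimal sketch. In one dimension, the $2$-Wasserstein distance between uniform empirical distributions on equal-size samples has a closed form (sort both samples and pair them), and a single displaced sample dominates the distance. The helper name `w2_1d` below is illustrative, not from the paper, and this is the plain $2$-Wasserstein distance, not $k$-RPW itself:

```python
import math

def w2_1d(a, b):
    """2-Wasserstein distance between uniform empirical distributions
    on the line: sort both samples and pair them index-by-index
    (this pairing is exact in 1D)."""
    a, b = sorted(a), sorted(b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

a = [0.0, 1.0, 2.0, 3.0]
b = [0.1, 1.1, 2.1, 3.1]
print(w2_1d(a, b))           # small: every point moves by 0.1

b_outlier = [0.1, 1.1, 2.1, 100.0]
print(w2_1d(a, b_outlier))   # one outlier now dominates the distance
```

The second call blows up because $2$-Wasserstein charges the full squared displacement of the outlier; a robust partial variant like $k$-RPW would instead allow a small amount of mass to be discarded.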
1 code implementation • 7 Mar 2022 • Nathaniel Lahn, Sharath Raghvendra, Kaiyi Zhang
Interestingly, unlike the Sinkhorn algorithm, our method also readily provides a compact transport plan and a solution to an approximate version of the dual formulation of the OT problem, both of which have numerous applications in Machine Learning.
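For context, the Sinkhorn baseline mentioned here solves an entropy-regularized OT problem by alternating row and column scalings of a kernel matrix. The sketch below is a pure-Python toy version for tiny cost matrices with uniform marginals, not the paper's method and not a production implementation (real code would work in log-space to avoid underflow):

```python
import math

def sinkhorn(C, eps=0.05, iters=500):
    """Entropy-regularized OT via Sinkhorn iterations on a small
    cost matrix C with uniform marginals (1/n rows, 1/m columns).
    Returns the transport plan P[i][j] = u[i] * K[i][j] * v[j]."""
    n, m = len(C), len(C[0])
    K = [[math.exp(-c / eps) for c in row] for row in C]  # Gibbs kernel
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        # alternately rescale to match the row and column marginals
        u = [(1.0 / n) / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [(1.0 / m) / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

C = [[0.0, 1.0], [1.0, 0.0]]
P = sinkhorn(C)
# mass concentrates on the zero-cost diagonal, approx 0.5 per entry
```

Note that the plan `P` is dense ($n \times m$ entries), which is exactly the contrast the abstract draws with a compact transport plan.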
no code implementations • NeurIPS 2021 • Nathaniel Lahn, Sharath Raghvendra, Jiacheng Ye
In this paper, we present a simplification of a recent algorithm (Lahn and Raghvendra, JoCG 2021) for the maximum cardinality matching problem and describe how a maximum cardinality matching in a $\delta$-disc graph can be computed in time asymptotically faster than $O(n^{3/2})$ for any moderately dense point set.
no code implementations • 15 Jul 2020 • Nathaniel Lahn, Sharath Raghvendra
For discrete distributions, the problem of computing this distance can be expressed in terms of finding a minimum-cost perfect matching on a complete bipartite graph given by two multisets of points $A, B \subset \mathbb{R}^2$, with $|A|=|B|=n$, where the ground distance between any two points is the squared Euclidean distance between them.
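The matching formulation in this sentence can be written out directly. The brute-force sketch below enumerates all perfect matchings between two tiny point sets with the squared-Euclidean ground distance; its optimal cost is $n$ times the squared $2$-Wasserstein distance between the corresponding uniform empirical distributions. This is only a definition-level illustration (factorial time), not the paper's algorithm:

```python
import itertools

def min_cost_perfect_matching(A, B):
    """Minimum-cost perfect matching between equal-size multisets
    A, B in R^2 under the squared-Euclidean ground distance, by
    brute force over all permutations (feasible only for tiny n)."""
    return min(
        sum((ax - bx) ** 2 + (ay - by) ** 2
            for (ax, ay), (bx, by) in zip(A, perm))
        for perm in itertools.permutations(B)
    )

A = [(0, 0), (1, 0), (0, 1)]
B = [(0, 0), (1, 1), (1, 0)]
print(min_cost_perfect_matching(A, B))  # → 1
```

Here the optimal matching pairs the two coincident points at zero cost and pays squared distance 1 for the remaining pair; every other permutation costs more.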
2 code implementations • NeurIPS 2019 • Nathaniel Lahn, Deepika Mulchandani, Sharath Raghvendra
We also provide empirical results suggesting that our algorithm is competitive in execution time with a sequential implementation of the Sinkhorn algorithm.
no code implementations • 8 Dec 2014 • Vikram Nathan, Sharath Raghvendra
A widely-used tool for binary classification is the Support Vector Machine (SVM), a supervised learning technique that finds the "maximum margin" linear separator between the two classes.
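A minimal way to see the maximum-margin idea in code is to train a linear classifier by subgradient descent on the regularized hinge loss, a standard approximation of the hard-margin SVM. The sketch below is generic and hypothetical, not the formulation studied in the paper:

```python
def linear_svm(points, labels, lam=0.01, epochs=200, lr=0.1):
    """Tiny linear SVM in R^2 trained by subgradient descent on the
    L2-regularized hinge loss max(0, 1 - y * (w.x + b))."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            if y * (w[0] * x1 + w[1] * x2 + b) < 1:
                # point is inside the margin: hinge + regularizer step
                w[0] += lr * (y * x1 - lam * w[0])
                w[1] += lr * (y * x2 - lam * w[1])
                b += lr * y
            else:
                # point satisfies the margin: regularizer step only
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

X = [(0.0, 0.0), (0.0, 1.0), (3.0, 0.0), (3.0, 1.0)]
y = [-1, -1, 1, 1]
w, b = linear_svm(X, y)
# on this separable toy set, all four points end up correctly classified
```

The regularization term is what pushes the learned separator toward the maximum-margin one rather than an arbitrary separating hyperplane.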