Search Results for author: Ainesh Bakshi

Found 18 papers, 0 papers with code

A quasi-polynomial time algorithm for Multi-Dimensional Scaling via LP hierarchies

no code implementations29 Nov 2023 Ainesh Bakshi, Vincent Cohen-Addad, Samuel B. Hopkins, Rajesh Jayaram, Silvio Lattanzi

Multi-dimensional Scaling (MDS) is a family of methods for embedding an $n$-point metric into low-dimensional Euclidean space.

Data Visualization
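
The paper's algorithm goes through LP hierarchies; purely as a point of reference, the classical (Torgerson) MDS heuristic embeds via an eigendecomposition of the double-centered squared-distance matrix. A minimal numpy sketch of that baseline, not the paper's method:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: embed n points into R^dim from their
    distance matrix D via the double-centered squared-distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n         # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                 # Gram matrix of the embedding
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]             # top-dim eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

X = np.random.randn(5, 10)                      # 5 points in R^10
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
Y = classical_mds(D, dim=2)                     # their 2-D embedding
```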

Learning quantum Hamiltonians at any temperature in polynomial time

no code implementations3 Oct 2023 Ainesh Bakshi, Allen Liu, Ankur Moitra, Ewin Tang

Anshu, Arunachalam, Kuwahara, and Soleimanifar (arXiv:2004.07266) gave an algorithm to learn a Hamiltonian on $n$ qubits to precision $\epsilon$ with only polynomially many copies of the Gibbs state, but which takes exponential time.
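
The learner's input is copies of the Gibbs state $\rho = e^{-\beta H} / \text{Tr}(e^{-\beta H})$. A minimal numpy sketch that just constructs this state for a toy two-qubit Hamiltonian (an illustration of the object, not of the learning algorithm):

```python
import numpy as np

Z = np.diag([1.0, -1.0])           # Pauli Z
H = np.kron(Z, Z)                  # toy two-qubit Hamiltonian H = Z (x) Z

def gibbs_state(H, beta):
    """Thermal state rho = exp(-beta * H) / Tr exp(-beta * H),
    computed via the eigendecomposition of the Hermitian H."""
    w, V = np.linalg.eigh(H)
    p = np.exp(-beta * w)
    p /= p.sum()
    return (V * p) @ V.conj().T

rho = gibbs_state(H, beta=1.0)     # the state the learner receives copies of
```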

Tensor Decompositions Meet Control Theory: Learning General Mixtures of Linear Dynamical Systems

no code implementations13 Jul 2023 Ainesh Bakshi, Allen Liu, Ankur Moitra, Morris Yau

In this work we give a new approach to learning mixtures of linear dynamical systems that is based on tensor decompositions.

Tensor Decomposition, Time Series
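
For concreteness, the data here are trajectories, each generated by one of several unknown linear dynamical systems $x_{t+1} = A_j x_t + \text{noise}$. A hypothetical numpy sketch of sampling from such a mixture (the input to the learner, not the tensor-decomposition algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, n_traj = 3, 50, 100

# two stable component systems (orthogonal matrices scaled below 1)
As = [0.9 * np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(2)]

def sample_trajectory(A, T, noise=0.1):
    x = rng.standard_normal(A.shape[0])
    traj = [x]
    for _ in range(T - 1):
        x = A @ x + noise * rng.standard_normal(A.shape[0])
        traj.append(x)
    return np.array(traj)

# each trajectory is drawn from a randomly chosen component system
data = [sample_trajectory(As[rng.integers(2)], T) for _ in range(n_traj)]
```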

A Near-Linear Time Algorithm for the Chamfer Distance

no code implementations6 Jul 2023 Ainesh Bakshi, Piotr Indyk, Rajesh Jayaram, Sandeep Silwal, Erik Waingarten

For any two point sets $A, B \subset \mathbb{R}^d$ of size up to $n$, the Chamfer distance from $A$ to $B$ is defined as $\text{CH}(A, B)=\sum_{a \in A} \min_{b \in B} d_X(a, b)$, where $d_X$ is the underlying distance measure (e.g., the Euclidean or Manhattan distance).
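
Computing this naively takes $O(|A| \cdot |B| \cdot d)$ time; the paper's contribution is a near-linear time approximation. A brute-force numpy reference with Euclidean $d_X$:

```python
import numpy as np

def chamfer(A, B):
    """Naive O(|A| * |B| * d) Chamfer distance CH(A, B) with Euclidean d_X:
    for each a in A, add its distance to the nearest b in B."""
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return dists.min(axis=1).sum()

A = np.random.randn(100, 3)
B = np.random.randn(120, 3)
print(chamfer(A, B))   # note CH is asymmetric: chamfer(A, B) != chamfer(B, A)
```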

Krylov Methods are (nearly) Optimal for Low-Rank Approximation

no code implementations6 Apr 2023 Ainesh Bakshi, Shyam Narayanan

In particular, for Spectral LRA, we show that any algorithm requires $\Omega\left(\log(n)/\varepsilon^{1/2}\right)$ matrix-vector products, exactly matching the upper bound obtained by Krylov methods [MM15, BCW22].

Open-Ended Question Answering
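
The matching upper bound is achieved by block Krylov methods, which spend their matrix-vector products building the subspace spanned by $G, AG, A^2G, \dots$ for a random starting block $G$. A simplified sketch of that textbook primitive for a symmetric matrix (not the paper's lower-bound construction):

```python
import numpy as np

def block_krylov_lra(A, k, q, rng=np.random.default_rng(0)):
    """Rank-k approximation of a symmetric matrix A via Rayleigh-Ritz on the
    block Krylov subspace [G, AG, ..., A^q G]; each of the q rounds costs
    k matrix-vector products."""
    n = A.shape[0]
    blocks = [rng.standard_normal((n, k))]
    for _ in range(q):
        blocks.append(A @ blocks[-1])
    K, _ = np.linalg.qr(np.hstack(blocks))      # orthonormal Krylov basis
    w, V = np.linalg.eigh(K.T @ A @ K)          # project A onto the subspace
    top = V[:, np.argsort(np.abs(w))[::-1][:k]]
    Q = K @ top                                 # approximate top-k eigenvectors
    return Q, Q.T @ A @ Q                       # A ~ Q (Q^T A Q) Q^T

B = np.random.randn(200, 200)
A = (B + B.T) / 2                               # random symmetric test matrix
Q, core = block_krylov_lra(A, k=5, q=8)
```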

A New Approach to Learning Linear Dynamical Systems

no code implementations23 Jan 2023 Ainesh Bakshi, Allen Liu, Ankur Moitra, Morris Yau

Linear dynamical systems are the foundational statistical model upon which control theory is built.

Sub-quadratic Algorithms for Kernel Matrices via Kernel Density Estimation

no code implementations1 Dec 2022 Ainesh Bakshi, Piotr Indyk, Praneeth Kacham, Sandeep Silwal, Samson Zhou

We build on the recent Kernel Density Estimation framework, which (after preprocessing in time subquadratic in $n$) can return estimates of row/column sums of the kernel matrix.

Density Estimation
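
The framework's basic primitive is estimating row/column sums of the kernel matrix without ever forming it. A naive Monte Carlo stand-in for that primitive (uniform sampling rather than the KDE data structures the paper builds on):

```python
import numpy as np

def estimate_row_sum(X, i, s, bandwidth=1.0, rng=np.random.default_rng(0)):
    """Unbiased Monte Carlo estimate of the i-th row sum of the Gaussian
    kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / bandwidth),
    from s uniformly sampled columns."""
    n = X.shape[0]
    idx = rng.integers(n, size=s)
    vals = np.exp(-np.sum((X[i] - X[idx]) ** 2, axis=1) / bandwidth)
    return n * vals.mean()

X = np.random.randn(10_000, 5)
print(estimate_row_sum(X, i=0, s=200))  # estimate without forming the 10^8-entry K
```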

Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products

no code implementations10 Feb 2022 Ainesh Bakshi, Kenneth L. Clarkson, David P. Woodruff

For the special cases of $p=2$ (Frobenius norm) and $p = \infty$ (Spectral norm), Musco and Musco (NeurIPS 2015) obtained an algorithm based on Krylov methods that uses $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products, improving on the naïve $\tilde{O}(k/\epsilon)$ dependence obtainable by the power method, where $\tilde{O}$ suppresses $\text{poly}(\log(dk/\epsilon))$ factors.
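
The naive $\tilde{O}(k/\epsilon)$ baseline referenced here is (block) power iteration. A minimal sketch for $k=1$ and a symmetric matrix, purely to make the contrast concrete:

```python
import numpy as np

def power_method(A, iters):
    """Power iteration on a symmetric matrix A: roughly 1/eps matrix-vector
    products for an eps-accurate top eigenpair (the naive baseline)."""
    v = np.random.randn(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v, v @ A @ v          # approximate top eigenvector / eigenvalue
```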

Learning a Latent Simplex in Input-Sparsity Time

no code implementations17 May 2021 Ainesh Bakshi, Chiranjib Bhattacharyya, Ravi Kannan, David P. Woodruff, Samson Zhou

We consider the problem of learning a latent $k$-vertex simplex $K\subset\mathbb{R}^d$, given access to $A\in\mathbb{R}^{d\times n}$, which can be viewed as a data matrix with $n$ points that are obtained by randomly perturbing latent points in the simplex $K$ (potentially beyond $K$).

Topic Models
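
In this model, each column of $A$ is a convex combination of the $k$ latent vertices plus a perturbation. A hypothetical numpy sketch of generating such an instance (illustrating the input, not the algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, k = 50, 1000, 5

V = rng.standard_normal((d, k))                  # latent k-vertex simplex in R^d
W = rng.dirichlet(np.ones(k), size=n).T          # convex weights, one column per point
A = V @ W + 0.05 * rng.standard_normal((d, n))   # perturbed data (may leave K)
```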

Learning a Latent Simplex in Input Sparsity Time

no code implementations ICLR 2021 Ainesh Bakshi, Chiranjib Bhattacharyya, Ravi Kannan, David Woodruff, Samson Zhou

Bhattacharyya and Kannan (SODA 2020) give an algorithm for learning such a $k$-vertex latent simplex in time roughly $O(k\cdot\text{nnz}(\mathbf{A}))$, where $\text{nnz}(\mathbf{A})$ is the number of non-zeros in $\mathbf{A}$.

Clustering, Topic Models

Robustly Learning Mixtures of $k$ Arbitrary Gaussians

no code implementations3 Dec 2020 Ainesh Bakshi, Ilias Diakonikolas, He Jia, Daniel M. Kane, Pravesh K. Kothari, Santosh S. Vempala

We give a polynomial-time algorithm for the problem of robustly estimating a mixture of $k$ arbitrary Gaussians in $\mathbb{R}^d$, for any fixed $k$, in the presence of a constant fraction of arbitrary corruptions.

Clustering, Tensor Decomposition
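
In the corruption model, an adversary may replace an $\epsilon$ fraction of the sample with arbitrary points. A hypothetical numpy sketch of generating such an $\epsilon$-corrupted sample, with a deliberately crude stand-in for the adversary:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, eps = 10, 5000, 0.05

# two well-separated spherical components (the paper allows arbitrary
# covariances; spherical is just for brevity)
means = np.stack([np.zeros(d), 4.0 * np.ones(d)])
comp = rng.integers(2, size=n)
X = means[comp] + rng.standard_normal((n, d))

# crude stand-in for the adversary: overwrite an eps fraction arbitrarily
bad = rng.choice(n, size=int(eps * n), replace=False)
X[bad] = 100.0
```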

Robust Linear Regression: Optimal Rates in Polynomial Time

no code implementations29 Jun 2020 Ainesh Bakshi, Adarsh Prasad

We obtain robust and computationally efficient estimators for learning several linear models that achieve statistically optimal convergence rate under minimal distributional assumptions.

Open-Ended Question Answering, regression

Outlier-Robust Clustering of Non-Spherical Mixtures

no code implementations6 May 2020 Ainesh Bakshi, Pravesh Kothari

Concretely, our algorithm takes as input an $\epsilon$-corrupted sample from a $k$-GMM and, with high probability, in $d^{\text{poly}(k/\eta)}$ time outputs an approximate clustering that misclassifies at most a $k^{O(k)}(\epsilon+\eta)$ fraction of the points whenever every pair of mixture components is separated by $1-\exp(-\text{poly}(k/\eta)^k)$ in total variation (TV) distance.

Clustering

List-Decodable Subspace Recovery: Dimension Independent Error in Polynomial Time

no code implementations12 Feb 2020 Ainesh Bakshi, Pravesh K. Kothari

As a result, in addition to Gaussians, our algorithm applies to the uniform distribution on the hypercube and $q$-ary cubes and arbitrary product distributions with subgaussian marginals.

Robust and Sample Optimal Algorithms for PSD Low-Rank Approximation

no code implementations9 Dec 2019 Ainesh Bakshi, Nadiia Chepurko, David P. Woodruff

Our main result is to resolve this question by obtaining an optimal algorithm that queries $O(nk/\epsilon)$ entries of $A$ and outputs a relative-error low-rank approximation in $O(n(k/\epsilon)^{\omega-1})$ time.
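
Reading $O(nk/\epsilon)$ entries rather than all $n^2$ is what makes this sublinear; the classical entry-access approach for PSD matrices is Nyström-style column sampling. A simplified Nyström sketch, not the paper's algorithm (which attains the stated optimal query and time bounds):

```python
import numpy as np

def nystrom(A, c, rng=np.random.default_rng(0)):
    """Nystrom approximation of a PSD matrix: sample c columns C of A and
    return C W^+ C^T, reading only n*c entries of A."""
    n = A.shape[0]
    S = rng.choice(n, size=c, replace=False)
    C = A[:, S]
    W = C[S, :]                                  # c x c intersection block
    return C @ np.linalg.pinv(W) @ C.T

B = np.random.randn(300, 40)
A = B @ B.T                                      # PSD test matrix
A_hat = nystrom(A, c=60)
```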

Learning Two Layer Rectified Neural Networks in Polynomial Time

no code implementations5 Nov 2018 Ainesh Bakshi, Rajesh Jayaram, David P. Woodruff

Given $n$ samples as a matrix $\mathbf{X} \in \mathbb{R}^{d \times n}$ and the (possibly noisy) labels $\mathbf{U}^* f(\mathbf{V}^* \mathbf{X}) + \mathbf{E}$ of the network on these samples, where $\mathbf{E}$ is a noise matrix, our goal is to recover the weight matrices $\mathbf{U}^*$ and $\mathbf{V}^*$.

Vocal Bursts Valence Prediction
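
A hypothetical numpy sketch of the observation model from the abstract, taking $f$ to be the ReLU (the dimensions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, k, n = 20, 15, 10, 1000            # input dim, output dim, width, samples

X = rng.standard_normal((d, n))          # n samples as columns
V_star = rng.standard_normal((k, d))     # first-layer weights
U_star = rng.standard_normal((m, k))     # second-layer weights
E = 0.01 * rng.standard_normal((m, n))   # noise matrix

Y = U_star @ np.maximum(V_star @ X, 0) + E   # observed labels U* f(V* X) + E
```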

Robust Communication-Optimal Distributed Clustering Algorithms

no code implementations2 Mar 2017 Pranjal Awasthi, Ainesh Bakshi, Maria-Florina Balcan, Colin White, David Woodruff

In this work, we study the $k$-median and $k$-means clustering problems when the data is distributed across many servers and can contain outliers.

Clustering
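
A standard communication-efficient pattern in this setting has each server send a small weighted summary (e.g., local centers) and a coordinator cluster the summaries; the paper's protocols additionally handle outliers. A crude sketch of the two-round pattern only, not the paper's outlier-robust algorithm:

```python
import numpy as np

def kmeans(X, k, iters=20, rng=np.random.default_rng(0)):
    """Plain Lloyd's iterations (no outlier handling)."""
    C = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                C[j] = X[labels == j].mean(axis=0)
    return C

rng = np.random.default_rng(0)
# round 1: each of three servers summarizes its local shard by k centers
shards = [rng.standard_normal((500, 2)) + off for off in (0.0, 5.0, 10.0)]
local_centers = [kmeans(S, k=3) for S in shards]

# round 2: the coordinator clusters the union of the local summaries
global_centers = kmeans(np.vstack(local_centers), k=3)
```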

A Novel Feature Selection and Extraction Technique for Classification

no code implementations26 Dec 2014 Kratarth Goel, Raunaq Vohra, Ainesh Bakshi

This paper presents a versatile technique for feature selection and extraction: Class Dependent Features (CDFs).

Classification, feature selection, +3
