no code implementations • 22 Feb 2024 • Aparna Gupte, Neekon Vafa, Vinod Vaikuntanathan
Furthermore, for well-conditioned (essentially) isotropic Gaussian design matrices, where Lasso is known to perform well in the identifiable regime, we show that outputting any good solution is hard in the unidentifiable regime, where many solutions exist, assuming the worst-case hardness of standard, well-studied lattice problems.
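To make the two regimes concrete, here is a minimal sketch (not the paper's construction) that plants a $k$-sparse signal under an isotropic Gaussian design and runs scikit-learn's `Lasso`. The dimensions, sparsity, and regularization strength `alpha` are illustrative choices: with many rows relative to $k \log N$ the problem is identifiable and Lasso typically recovers the planted signal, while with few rows many sparse solutions fit equally well.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, k = 200, 5                                  # ambient dimension and sparsity

def lasso_recovery(M, alpha=0.05):
    """Draw an isotropic Gaussian design, plant a k-sparse signal, fit Lasso."""
    A = rng.normal(size=(M, N)) / np.sqrt(M)   # well-conditioned isotropic design
    x = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    x[support] = rng.normal(size=k)
    b = A @ x
    x_hat = Lasso(alpha=alpha, max_iter=50_000).fit(A, b).coef_
    return np.linalg.norm(x_hat - x) / np.linalg.norm(x)

# Identifiable regime: M is large relative to k log N.
print("relative error, M=150:", lasso_recovery(150))
# Unidentifiable regime: M << N, so many k-sparse solutions fit the data.
print("relative error, M=10: ", lasso_recovery(10))
```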
no code implementations • 12 Jun 2022 • Tomer Galanti, Zachary S. Siegel, Aparna Gupte, Tomaso Poggio
We study the bias of Stochastic Gradient Descent (SGD) toward learning low-rank weight matrices when training deep neural networks.
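As a hypothetical illustration of how such a low-rank bias can be probed empirically, the sketch below trains a small MLP with SGD plus weight decay on synthetic data and tracks the effective rank of the first weight matrix. The architecture, data, and hyperparameters are assumptions made for illustration, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, width = 20, 256

# Small MLP trained with plain SGD; weight decay chosen for illustration only.
model = nn.Sequential(nn.Linear(d, width), nn.ReLU(), nn.Linear(width, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=1e-3)

X = torch.randn(512, d)                        # synthetic regression data
y = torch.randn(512, 1)

def effective_rank(W, tol=0.01):
    """Count singular values above tol times the largest singular value."""
    s = torch.linalg.svdvals(W.detach())
    return int((s > tol * s[0]).sum())

W = model[0].weight
print("effective rank before training:", effective_rank(W))
for step in range(2000):
    opt.zero_grad()
    loss = ((model(X) - y) ** 2).mean()
    loss.backward()
    opt.step()
print("effective rank after training: ", effective_rank(W))
```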
no code implementations • 6 Apr 2022 • Aparna Gupte, Neekon Vafa, Vinod Vaikuntanathan
Under the (conservative) polynomial hardness of LWE, we show hardness of density estimation for mixtures of $n^{\epsilon}$ Gaussians for any constant $\epsilon > 0$. This improves on Bruna, Regev, Song and Tang (STOC 2021), who show hardness for mixtures of at least $\sqrt{n}$ Gaussians under polynomial (quantum) hardness assumptions.
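For context, density estimation asks for an evaluator of a density close to the unknown mixture. The sketch below shows the (easy) low-dimensional version of the task using scikit-learn's `GaussianMixture`; all sizes and parameters are illustrative, and the hardness result concerns high-dimensional mixtures, where no analogous efficient algorithm is expected.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Sample from an unknown mixture of 3 unit-covariance Gaussians in 2 dimensions.
means = rng.normal(scale=5.0, size=(3, 2))
samples = means[rng.integers(3, size=2000)] + rng.normal(size=(2000, 2))

# Density estimation: fit a mixture and use it as a density evaluator.
gmm = GaussianMixture(n_components=3, random_state=0).fit(samples)
held_out = means[rng.integers(3, size=500)] + rng.normal(size=(500, 2))
print("average held-out log-density:", gmm.score(held_out))
```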
no code implementations • 6 Jun 2021 • Aparna Gupte, Vinod Vaikuntanathan
Sparse linear regression is the well-studied inference problem where one is given a design matrix $\mathbf{A} \in \mathbb{R}^{M\times N}$ and a response vector $\mathbf{b} \in \mathbb{R}^M$, and the goal is to find a solution $\mathbf{x} \in \mathbb{R}^{N}$ which is $k$-sparse (that is, it has at most $k$ non-zero coordinates) and minimizes the prediction error $\|\mathbf{A} \mathbf{x} - \mathbf{b}\|_2$.
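The brute-force baseline for this problem enumerates all $\binom{N}{k}$ candidate supports and solves a least-squares problem on each; the sketch below (with toy dimensions chosen for illustration) makes the combinatorial cost that motivates the hardness question concrete.

```python
import itertools
import numpy as np

def best_k_sparse(A, b, k):
    """Exact k-sparse regression by enumerating all (N choose k) supports."""
    M, N = A.shape
    best_err, best_x = np.inf, None
    for support in itertools.combinations(range(N), k):
        cols = A[:, list(support)]
        coef, *_ = np.linalg.lstsq(cols, b, rcond=None)
        err = np.linalg.norm(cols @ coef - b)
        if err < best_err:
            best_err = err
            best_x = np.zeros(N)
            best_x[list(support)] = coef
    return best_x, best_err

rng = np.random.default_rng(0)
A = rng.normal(size=(15, 12))
b = rng.normal(size=15)
x, err = best_k_sparse(A, b, k=2)   # feasible only for tiny N and k
print("support:", np.flatnonzero(x), "prediction error:", err)
```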