Search Results for author: Shyam Narayanan

Found 18 papers, 1 paper with code

Better and Simpler Lower Bounds for Differentially Private Statistical Estimation

no code implementations10 Oct 2023 Shyam Narayanan

First, we prove that for any $\alpha \le O(1)$, estimating the covariance of a Gaussian up to spectral error $\alpha$ requires $\tilde{\Omega}\left(\frac{d^{3/2}}{\alpha \varepsilon} + \frac{d}{\alpha^2}\right)$ samples, which is tight up to logarithmic factors.

SPAIC: A Sub-μW/Channel, 16-Channel General-Purpose Event-Based Analog Front-End with Dual-Mode Encoders

no code implementations31 Aug 2023 Shyam Narayanan, Matteo Cartiglia, Arianna Rubino, Charles Lego, Charlotte Frenkel, Giacomo Indiveri

Low-power event-based analog front-ends (AFE) are a crucial component required to build efficient end-to-end neuromorphic processing systems for edge computing.

Edge Computing

A faster and simpler algorithm for learning shallow networks

no code implementations24 Jul 2023 Sitan Chen, Shyam Narayanan

We revisit the well-studied problem of learning a linear combination of $k$ ReLU activations given labeled examples drawn from the standard $d$-dimensional Gaussian measure.

Data Structures for Density Estimation

1 code implementation20 Jun 2023 Anders Aamand, Alexandr Andoni, Justin Y. Chen, Piotr Indyk, Shyam Narayanan, Sandeep Silwal

We study statistical/computational tradeoffs for the following density estimation problem: given $k$ distributions $v_1, \ldots, v_k$ over a discrete domain of size $n$, and sampling access to a distribution $p$, identify $v_i$ that is "close" to $p$.

Density Estimation
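The problem statement above has a natural brute-force baseline: estimate $p$ empirically from its samples, then return the candidate distribution closest in total-variation distance. The sketch below illustrates exactly that baseline (it is not the paper's data structure, and all names are illustrative).

```python
import random
from collections import Counter

def closest_distribution(distributions, sample_p):
    """Brute-force baseline for the density estimation problem:
    build the empirical distribution of p from the sample, then
    return the index of the candidate v_i with the smallest
    total-variation distance to it."""
    m = len(sample_p)
    emp = Counter(sample_p)
    best_i, best_tv = None, float("inf")
    for i, v in enumerate(distributions):
        # TV distance between candidate v and the empirical distribution of p
        support = set(v) | set(emp)
        tv = 0.5 * sum(abs(v.get(x, 0.0) - emp[x] / m) for x in support)
        if tv < best_tv:
            best_i, best_tv = i, tv
    return best_i

# Toy usage: p is sampled to match the second candidate
v1 = {0: 0.9, 1: 0.1}
v2 = {0: 0.1, 1: 0.9}
random.seed(0)
sample = [0 if random.random() < 0.1 else 1 for _ in range(1000)]
print(closest_distribution([v1, v2], sample))  # → 1
```

The statistical/computational tradeoff studied in the paper concerns doing better than this baseline, which reads every candidate's full description.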

Learned Interpolation for Better Streaming Quantile Approximation with Worst-Case Guarantees

no code implementations15 Apr 2023 Nicholas Schiefer, Justin Y. Chen, Piotr Indyk, Shyam Narayanan, Sandeep Silwal, Tal Wagner

An $\varepsilon$-approximate quantile sketch over a stream of $n$ inputs approximates the rank of any query point $q$ (that is, the number of input points less than $q$) up to an additive error of $\varepsilon n$, generally with some probability of at least $1 - 1/\mathrm{poly}(n)$, while consuming $o(n)$ space.
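The rank definition above can be made concrete. The toy snippet below (not a sketching algorithm; it stores all points, which a real sketch must avoid by using $o(n)$ space) computes an exact rank and checks whether a given estimate satisfies the $\varepsilon n$ additive-error guarantee.

```python
def rank(points, q):
    """Exact rank of a query q: the number of input points less than q."""
    return sum(1 for x in points if x < q)

def check_eps_approx(points, q, estimate, eps):
    """An eps-approximate quantile sketch must return an estimate
    within eps * n of the true rank."""
    n = len(points)
    return abs(estimate - rank(points, q)) <= eps * n

stream = [5, 1, 9, 3, 7, 2, 8, 4, 6, 0]
print(rank(stream, 5))                      # → 5 (the points 0..4)
print(check_eps_approx(stream, 5, 6, 0.1))  # → True, since |6 - 5| <= 0.1 * 10
```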

Krylov Methods are (nearly) Optimal for Low-Rank Approximation

no code implementations6 Apr 2023 Ainesh Bakshi, Shyam Narayanan

In particular, for Spectral LRA, we show that any algorithm requires $\Omega\left(\log(n)/\varepsilon^{1/2}\right)$ matrix-vector products, exactly matching the upper bound obtained by Krylov methods [MM15, BCW22].

Query lower bounds for log-concave sampling

no code implementations5 Apr 2023 Sinho Chewi, Jaume de Dios Pont, Jerry Li, Chen Lu, Shyam Narayanan

Log-concave sampling has witnessed remarkable algorithmic advances in recent years, but the corresponding problem of proving lower bounds for this task has remained elusive, with lower bounds previously known only in dimension one.

Robustness Implies Privacy in Statistical Estimation

no code implementations9 Dec 2022 Samuel B. Hopkins, Gautam Kamath, Mahbod Majid, Shyam Narayanan

We study the relationship between adversarial robustness and differential privacy in high-dimensional algorithmic statistics.

Adversarial Robustness

Exponentially Improving the Complexity of Simulating the Weisfeiler-Lehman Test with Graph Neural Networks

no code implementations6 Nov 2022 Anders Aamand, Justin Y. Chen, Piotr Indyk, Shyam Narayanan, Ronitt Rubinfeld, Nicholas Schiefer, Sandeep Silwal, Tal Wagner

However, those simulations involve neural networks for the 'combine' function of size polynomial or even exponential in the number of graph nodes $n$, as well as feature vectors of length linear in $n$.

Improved Approximations for Euclidean $k$-means and $k$-median, via Nested Quasi-Independent Sets

no code implementations11 Apr 2022 Vincent Cohen-Addad, Hossein Esfandiari, Vahab Mirrokni, Shyam Narayanan

Motivated by data analysis and machine learning applications, we consider the popular high-dimensional Euclidean $k$-median and $k$-means problems.

Triangle and Four Cycle Counting with Predictions in Graph Streams

no code implementations ICLR 2022 Justin Y. Chen, Talya Eden, Piotr Indyk, Honghao Lin, Shyam Narayanan, Ronitt Rubinfeld, Sandeep Silwal, Tal Wagner, David P. Woodruff, Michael Zhang

We propose data-driven one-pass streaming algorithms for estimating the number of triangles and four cycles, two fundamental problems in graph analytics that are widely studied in the graph data stream literature.
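For context on what the streaming estimators approximate, here is a naive exact triangle counter. It stores the whole graph and enumerates all vertex triples, which is precisely the cost that one-pass, small-space streaming algorithms avoid.

```python
from itertools import combinations

def count_triangles(edges):
    """Exact (non-streaming) triangle count: build the full adjacency
    structure, then test every triple of vertices."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for u, v, w in combinations(adj, 3):
        if v in adj[u] and w in adj[u] and w in adj[v]:
            count += 1
    return count

# The complete graph K4 contains 4 triangles
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(count_triangles(edges))  # → 4
```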

Private High-Dimensional Hypothesis Testing

no code implementations3 Mar 2022 Shyam Narayanan

Our results improve over the previous best work of Canonne et al. [CanonneKMUZ20] for both computationally efficient and inefficient algorithms, and even our computationally efficient algorithm matches the optimal non-private sample complexity of $O\left(\frac{\sqrt{d}}{\alpha^2}\right)$ in many standard parameter settings.

Tight and Robust Private Mean Estimation with Few Users

no code implementations22 Oct 2021 Hossein Esfandiari, Vahab Mirrokni, Shyam Narayanan

In particular, we provide a nearly optimal trade-off between the number of users and the number of samples per user required for private mean estimation, even when the number of users is as low as $O(\frac{1}{\varepsilon}\log\frac{1}{\delta})$.

Almost Tight Approximation Algorithms for Explainable Clustering

no code implementations1 Jul 2021 Hossein Esfandiari, Vahab Mirrokni, Shyam Narayanan

Next, we study the $k$-means problem in this context and provide an $O(k \log k)$-approximation algorithm for explainable $k$-means, improving over the $O(k^2)$ bound of Dasgupta et al. and the $O(d k \log k)$ bound of [laber2021explainable].

Clustering

Learning-based Support Estimation in Sublinear Time

no code implementations ICLR 2021 Talya Eden, Piotr Indyk, Shyam Narayanan, Ronitt Rubinfeld, Sandeep Silwal, Tal Wagner

We consider the problem of estimating the number of distinct elements in a large data set (or, equivalently, the support size of the distribution induced by the data set) from a random sample of its elements.
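The estimation problem above has a simple naive baseline: count the distinct values seen in the random sample. Since the sample is drawn from the data set, this always lower-bounds the true support size, and it is the kind of underestimate that learned, sublinear-time estimators aim to correct. The sketch below is illustrative, not the paper's method.

```python
import random

def distinct_elements(data):
    """Exact number of distinct elements (reads the whole data set)."""
    return len(set(data))

def naive_support_estimate(sample):
    """Naive sublinear baseline: distinct values in a random sample.
    This systematically underestimates the true support size."""
    return len(set(sample))

random.seed(1)
data = [random.randrange(1000) for _ in range(5000)]
sample = random.sample(data, 200)
# The sample-based count never exceeds the true distinct count
print(distinct_elements(data) >= naive_support_estimate(sample))  # → True
```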

Optimal terminal dimensionality reduction in Euclidean space

no code implementations22 Oct 2018 Shyam Narayanan, Jelani Nelson

We show that a strictly stronger version of this statement holds, answering one of the main open questions of [MMMR18]: "$\forall y\in X$" in the above statement may be replaced with "$\forall y\in\mathbb R^d$", so that $f$ not only preserves distances within $X$, but also distances to $X$ from the rest of space.

Dimensionality Reduction
