Search Results for author: Jelani Nelson

Found 20 papers, 1 paper with code

Private Vector Mean Estimation in the Shuffle Model: Optimal Rates Require Many Messages

no code implementations • 16 Apr 2024 • Hilal Asi, Vitaly Feldman, Jelani Nelson, Huy L. Nguyen, Kunal Talwar, Samson Zhou

We study the problem of private vector mean estimation in the shuffle model of privacy where $n$ users each have a unit vector $v^{(i)} \in\mathbb{R}^d$.

Lower Bounds for Differential Privacy Under Continual Observation and Online Threshold Queries

no code implementations • 28 Feb 2024 • Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, Uri Stemmer

One of the most basic problems for studying the "price of privacy over time" is the so called private counter problem, introduced by Dwork et al. (2010) and Chan et al. (2010).

Hot PATE: Private Aggregation of Distributions for Diverse Tasks

no code implementations • 4 Dec 2023 • Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, Uri Stemmer

Until now, PATE has primarily been explored with classification-like tasks, where each example possesses a ground-truth label, and knowledge is transferred to the student by labeling public examples.

Privacy Preserving

Tricking the Hashing Trick: A Tight Lower Bound on the Robustness of CountSketch to Adaptive Inputs

no code implementations • 3 Jul 2022 • Edith Cohen, Jelani Nelson, Tamás Sarlós, Uri Stemmer

When inputs are adaptive, however, an adversarial input can be constructed after $O(\ell)$ queries against the classic estimator, while the best known robust estimator supports only $\tilde{O}(\ell^2)$ queries.

Dimensionality Reduction

Estimation of Entropy in Constant Space with Improved Sample Complexity

no code implementations • 19 May 2022 • Maryam Aliakbarpour, Andrew McGregor, Jelani Nelson, Erik Waingarten

Recent work of Acharya et al. (NeurIPS 2019) showed how to estimate the entropy of a distribution $\mathcal D$ over an alphabet of size $k$ up to $\pm\epsilon$ additive error by streaming over $(k/\epsilon^3) \cdot \text{polylog}(1/\epsilon)$ i.i.d. samples.
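For contrast with the constant-space streaming estimators discussed above, the naive plug-in estimator stores full symbol counts and therefore uses $\Theta(k)$ space. A minimal sketch (the function name is illustrative):

```python
import math
from collections import Counter

# Naive plug-in entropy estimator: keep exact counts (Theta(k) space)
# and evaluate the empirical entropy of the sample.
def plugin_entropy(samples):
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform distribution over 4 symbols has entropy 2 bits.
print(plugin_entropy(["a", "b", "c", "d"] * 1000))  # 2.0
```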

Uniform Approximations for Randomized Hadamard Transforms with Applications

no code implementations • 3 Mar 2022 • Yeshwanth Cherapanamjeri, Jelani Nelson

We use our inequality to then derive improved guarantees for two applications in the high-dimensional regime: 1) kernel approximation and 2) distance estimation.

Dimensionality Reduction

Private Frequency Estimation via Projective Geometry

1 code implementation • 1 Mar 2022 • Vitaly Feldman, Jelani Nelson, Huy Lê Nguyen, Kunal Talwar

In many parameter settings used in practice, this is a significant improvement over the $O(n+k^2)$ computation cost achieved by the recent PI-RAPPOR algorithm (Feldman and Talwar, 2021).
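As background for the problem setting, the simplest baseline for private frequency estimation is k-ary randomized response. A minimal sketch of that baseline, not the projective-geometry mechanism of the paper, with illustrative parameter choices:

```python
import numpy as np

# k-ary randomized response: each user reports their true value in
# {0, ..., k-1} with probability p and a uniformly random value otherwise;
# the server debiases the empirical counts. (p would be derived from a
# privacy budget; 0.75 here is purely illustrative.)
rng = np.random.default_rng(3)
k, n, p = 10, 100_000, 0.75

true = rng.integers(0, k, size=n)
keep = rng.random(n) < p
reports = np.where(keep, true, rng.integers(0, k, size=n))

raw = np.bincount(reports, minlength=k) / n
est = (raw - (1 - p) / k) / p   # unbias: E[raw_j] = p * f_j + (1 - p)/k
print(np.round(est, 3))         # each entry close to the true 1/k = 0.1
```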

On the Robustness of CountSketch to Adaptive Inputs

no code implementations • 28 Feb 2022 • Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, Moshe Shechner, Uri Stemmer

CountSketch is a popular dimensionality reduction technique that maps vectors to a lower dimension using randomized linear measurements.

Dimensionality Reduction
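The randomized linear measurements behind CountSketch can be sketched in a few lines. A minimal single-table version (the estimator and names are illustrative; the classic scheme takes a median over several independent tables):

```python
import numpy as np

# CountSketch: hash each coordinate i to a bucket h(i) with a random
# sign s(i), so the sketch is a linear map C x with one +/-1 per column.
rng = np.random.default_rng(0)
n, m = 10_000, 256
h = rng.integers(0, m, size=n)       # bucket for each coordinate
s = rng.choice([-1.0, 1.0], size=n)  # random sign for each coordinate

def countsketch(x):
    sketch = np.zeros(m)
    np.add.at(sketch, h, s * x)      # accumulate s[i] * x[i] into bucket h[i]
    return sketch

def estimate_coordinate(sketch, i):
    # Unbiased estimate of x_i from one table; colliding coordinates add noise.
    return s[i] * sketch[h[i]]

x = np.zeros(n)
x[42] = 5.0                          # a single heavy coordinate
sk = countsketch(x)
print(estimate_coordinate(sk, 42))   # 5.0 (exact here: no colliding mass)
```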

Terminal Embeddings in Sublinear Time

no code implementations • 17 Oct 2021 • Yeshwanth Cherapanamjeri, Jelani Nelson

When $X, Y$ are both Euclidean metrics with $Y$ being $m$-dimensional, Narayanan and Nelson (2019), following work of Mahabadi, Makarychev, Makarychev, and Razenshteyn (2018), showed that distortion $1+\epsilon$ is achievable via such a terminal embedding with $m = O(\epsilon^{-2}\log n)$ for $n := |T|$.

LEMMA

On Adaptive Distance Estimation

no code implementations • NeurIPS 2020 • Yeshwanth Cherapanamjeri, Jelani Nelson

Our memory consumption is $\tilde O((n+d)d/\epsilon^2)$, slightly more than the $O(nd)$ required to store $X$ in memory explicitly, but with the benefit that our time to answer queries is only $\tilde O(\epsilon^{-2}(n + d))$, much faster than the naive $\Theta(nd)$ time of a linear scan when $n$ and $d$ are very large.

Data Structures and Algorithms

Optimal terminal dimensionality reduction in Euclidean space

no code implementations • 22 Oct 2018 • Shyam Narayanan, Jelani Nelson

We show that a strictly stronger version of this statement holds, answering one of the main open questions of [MMMR18]: "$\forall y\in X$" in the above statement may be replaced with "$\forall y\in\mathbb R^d$", so that $f$ not only preserves distances within $X$, but also distances to $X$ from the rest of space.

Dimensionality Reduction LEMMA

Heavy hitters via cluster-preserving clustering

no code implementations • 5 Apr 2016 • Kasper Green Larsen, Jelani Nelson, Huy L. Nguyen, Mikkel Thorup

Our main innovation is an efficient reduction from the heavy hitters to a clustering problem in which each heavy hitter is encoded as some form of noisy spectral cluster in a much bigger graph, and the goal is to identify every cluster.

Clustering
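For readers new to the heavy hitters problem itself, the classic deterministic baseline is the Misra-Gries summary. A minimal sketch of that baseline (not the sketch-based, clustering-driven algorithm of the paper):

```python
# Misra-Gries summary: with k counters, every item occurring more than
# n/(k+1) times in a stream of length n survives in the summary.
def misra_gries(stream, k):
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            # Decrement all counters; drop any that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = ["a"] * 50 + ["b"] * 30 + list("cdefghij")
print(sorted(misra_gries(stream, 2)))  # ['a', 'b']
```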

An improved analysis of the ER-SpUD dictionary learning algorithm

no code implementations • 18 Feb 2016 • Jarosław Błasiok, Jelani Nelson

Then, given some small number $p$ of samples, i.e., columns of $Y$, the goal is to learn the dictionary $A$ up to small error, as well as $X$.

Dictionary Learning

Optimal approximate matrix product in terms of stable rank

no code implementations • 8 Jul 2015 • Michael B. Cohen, Jelani Nelson, David P. Woodruff

We prove, using the subspace embedding guarantee in a black box way, that one can achieve the spectral norm guarantee for approximate matrix multiplication with a dimensionality-reducing map having $m = O(\tilde{r}/\varepsilon^2)$ rows.

Clustering Dimensionality Reduction +1
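The sketched-product setup can be illustrated numerically: compare $A^\top B$ against $(SA)^\top(SB)$ for a dimensionality-reducing map $S$ with $m$ rows. A minimal sketch, using a dense Gaussian map for simplicity (an assumption; the paper's black-box guarantee holds for any subspace embedding, with $m$ driven by the stable rank):

```python
import numpy as np

# Approximate matrix product via sketching: (SA)^T (SB) approximates A^T B.
rng = np.random.default_rng(2)
n, d, m = 2000, 20, 1000
A = rng.standard_normal((n, d))
B = rng.standard_normal((n, d))

S = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sketch, E[S^T S] = I
exact = A.T @ B
approx = (S @ A).T @ (S @ B)

# Spectral-norm error relative to ||A|| ||B||, the guarantee discussed above.
err = np.linalg.norm(approx - exact, 2) / (np.linalg.norm(A, 2) * np.linalg.norm(B, 2))
print(err)
```

Computing the sketched product costs one pass to form $SA$ and $SB$ plus a small $m \times d$ multiply, rather than the full $n \times d$ product.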

Toward a unified theory of sparse dimensionality reduction in Euclidean space

no code implementations • 11 Nov 2013 • Jean Bourgain, Sjoerd Dirksen, Jelani Nelson

Let $\Phi\in\mathbb{R}^{m\times n}$ be a sparse Johnson-Lindenstrauss transform [KN14] with $s$ non-zeroes per column.

Dimensionality Reduction LEMMA
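A sparse JL transform of the kind described above is easy to instantiate: each column of $\Phi \in \mathbb{R}^{m \times n}$ gets exactly $s$ nonzero entries of magnitude $1/\sqrt{s}$ with random signs. A minimal sketch with illustrative parameter choices:

```python
import numpy as np

# Sparse Johnson-Lindenstrauss transform: s nonzeros per column,
# each equal to +/- 1/sqrt(s), placed in random rows.
rng = np.random.default_rng(1)
n, m, s = 1000, 128, 8

Phi = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, size=s, replace=False)   # s distinct rows for column j
    Phi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

x = rng.standard_normal(n)
print(np.linalg.norm(Phi @ x) / np.linalg.norm(x))  # close to 1 in expectation
```

Applying $\Phi$ to a vector touches only $s$ entries per coordinate, which is the point of sparsity: embedding time scales with $s$ per nonzero of $x$ rather than with $m$.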

Fast Moment Estimation in Data Streams in Optimal Space

no code implementations • 23 Jul 2010 • Daniel M. Kane, Jelani Nelson, Ely Porat, David P. Woodruff

We give a space-optimal algorithm with update time $O(\log^2(1/\epsilon)\log\log(1/\epsilon))$ for $(1+\epsilon)$-approximating the $p$th frequency moment, $0 < p < 2$, of a length-$n$ vector updated in a data stream.

Data Structures and Algorithms
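To fix the quantity being approximated: the $p$th frequency moment is $F_p = \sum_i |f_i|^p$ over item frequencies $f_i$. A minimal exact computation for reference (the streaming algorithm above approximates this without storing the counts):

```python
from collections import Counter

# Exact p-th frequency moment F_p = sum_i |f_i|^p, via full counts.
def frequency_moment(stream, p):
    return sum(abs(f) ** p for f in Counter(stream).values())

stream = [1, 1, 2, 3, 3, 3]
print(frequency_moment(stream, 1.0))  # 6.0 (F_1 is the stream length)
print(frequency_moment(stream, 2.0))  # 14.0 (2^2 + 1^2 + 3^2)
```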
