no code implementations • 16 Apr 2024 • Hilal Asi, Vitaly Feldman, Jelani Nelson, Huy L. Nguyen, Kunal Talwar, Samson Zhou
We study the problem of private vector mean estimation in the shuffle model of privacy where $n$ users each have a unit vector $v^{(i)} \in\mathbb{R}^d$.
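For intuition, here is a minimal sketch of the shuffle-model pipeline this setting assumes: each user applies a local randomizer, a trusted shuffler uniformly permutes the messages to hide their origin, and the server only sees the shuffled multiset. The Gaussian randomizer and the noise scale `sigma` below are illustrative placeholders, not the paper's mechanism.

```python
import numpy as np

def local_randomizer(v, sigma):
    """Each user perturbs their unit vector locally (illustrative Gaussian noise)."""
    return v + np.random.normal(0.0, sigma, size=v.shape)

def shuffle(messages):
    """The trusted shuffler uniformly permutes messages, hiding their origin."""
    perm = np.random.permutation(len(messages))
    return [messages[i] for i in perm]

def analyzer(messages):
    """The server sees only the shuffled multiset and averages it."""
    return np.mean(messages, axis=0)

# n users, each holding a unit vector in R^d
n, d, sigma = 1000, 16, 0.5
vectors = [v / np.linalg.norm(v) for v in np.random.randn(n, d)]
estimate = analyzer(shuffle([local_randomizer(v, sigma) for v in vectors]))
```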
no code implementations • 22 Mar 2024 • Aadirupa Saha, Hilal Asi
We consider the well-studied dueling bandit problem, where a learner aims to identify near-optimal actions using pairwise comparisons, under the constraint of differential privacy.
no code implementations • 7 Nov 2023 • Hilal Asi, Daogao Liu
We study differentially private stochastic convex optimization (DP-SCO) under user-level privacy, where each user may hold multiple data items.
no code implementations • 27 Feb 2023 • Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar
We also develop an adaptive algorithm for the small-loss setting with regret $O(L^\star \log d + \varepsilon^{-1} \log^{1.5} d)$ where $L^\star$ is the total loss of the best expert.
no code implementations • 3 Feb 2023 • Hilal Asi, Jonathan Ullman, Lydia Zakynthinou
Thus, we conclude that for any low-dimensional task, the optimal error rate for $\varepsilon$-differentially private estimators is essentially the same as the optimal error rate for estimators that are robust to adversarially corrupting $1/\varepsilon$ training samples.
no code implementations • 31 Oct 2022 • Hilal Asi, Karan Chadha, Gary Cheng, John Duchi
In non-private stochastic convex optimization, stochastic gradient methods converge much faster on interpolation problems -- problems where there exists a solution that simultaneously minimizes all of the sample losses -- than on non-interpolating ones; we show that generally similar improvements are impossible in the private setting.
no code implementations • 24 Oct 2022 • Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar
Our lower bounds also show a separation between pure and approximate differential privacy for adaptive adversaries where the latter is necessary to achieve the non-private $O(\sqrt{T})$ regret.
no code implementations • 24 Jun 2022 • Chen Cheng, Hilal Asi, John Duchi
The construction of most supervised learning datasets revolves around collecting multiple labels for each instance, then aggregating the labels to form a type of "gold standard."
no code implementations • 5 May 2022 • Hilal Asi, Vitaly Feldman, Kunal Talwar
We show that PrivUnit (Bhowmick et al. 2018) with optimized parameters achieves the optimal variance among a large family of locally private randomizers.
no code implementations • NeurIPS 2021 • Hilal Asi, Daniel Levy, John Duchi
We develop algorithms for private stochastic convex optimization that adapt to the hardness of the specific function we wish to optimize.
no code implementations • 25 Jun 2021 • Hilal Asi, John Duchi, Alireza Fallah, Omid Javidbakht, Kunal Talwar
We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm.
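A minimal sketch of a private adaptive-stepsize update in the spirit of this paper, combining per-example gradient clipping and Gaussian noise with an AdaGrad-style accumulator; the clipping norm, noise multiplier, and `grad_fn` interface are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def dp_adagrad(grad_fn, x0, data, clip=1.0, sigma=2.0, lr=0.1, eps=1e-8):
    """Illustrative DP variant of AdaGrad: clip each sample gradient, add
    Gaussian noise, and scale coordinates by accumulated squared noisy gradients.
    Privacy of the adaptive stepsizes follows from post-processing, since the
    accumulator is built only from already-privatized gradients."""
    x, accum = x0.astype(float), np.zeros_like(x0, dtype=float)
    for batch in data:
        grads = np.stack([grad_fn(x, s) for s in batch])
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip)   # per-example clipping
        noisy = grads.sum(axis=0) + np.random.normal(0.0, sigma * clip, x.shape)
        g = noisy / len(batch)
        accum += g ** 2                                  # AdaGrad accumulator
        x -= lr * g / (np.sqrt(accum) + eps)
    return x
```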
no code implementations • NeurIPS 2021 • Hilal Asi, Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford
We develop a new primitive for stochastic optimization: a low-bias, low-cost estimator of the minimizer $x_\star$ of any Lipschitz strongly-convex function.
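A rough sketch of the multilevel Monte Carlo style of debiasing behind such estimators, under heavy assumptions: `sgd_solver(T)` is a hypothetical routine returning an approximate minimizer after $T$ steps, the runs are shown uncoupled for brevity (in practice the fine and coarse runs share randomness to control variance), and the truncation at `max_level` is for illustration only.

```python
import numpy as np

def low_bias_minimizer_estimate(sgd_solver, max_level=10):
    """Illustrative multilevel Monte Carlo debiasing: run a base solver to a
    randomly chosen accuracy level and reweight the correction term, so the
    estimator's bias is driven far below its expected computational cost."""
    # sample a level J with P(J = j) = 2^{-j}
    J = min(int(np.random.geometric(p=0.5)), max_level)
    base = sgd_solver(1)
    fine = sgd_solver(2 ** J)          # ideally coupled with the coarse run
    coarse = sgd_solver(2 ** (J - 1))
    # telescoping estimate of the limit of sgd_solver(T) as T grows
    return base + (fine - coarse) * 2 ** J
```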
no code implementations • 2 Mar 2021 • Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar
Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous in machine learning applications such as LASSO but remains poorly understood when learning with differential privacy.
no code implementations • NeurIPS 2020 • Hilal Asi, John C. Duchi
We study and provide instance-optimal algorithms in differential privacy by extending and approximating the inverse sensitivity mechanism.
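A minimal sketch of the (unapproximated) inverse sensitivity mechanism for one concrete target, the median, assuming a finite output grid: score each candidate output by the minimum number of data points that must change for it to become the true value, then sample via the exponential mechanism. The path-length computation below is a rough illustration, not the paper's general construction.

```python
import numpy as np

def inverse_sensitivity_median(x, grid, epsilon):
    """Illustrative inverse-sensitivity mechanism for the median."""
    x = np.sort(np.asarray(x))
    n = len(x)

    def path_length(t):
        # rough count of points that must move across t for the median to become t
        k = np.searchsorted(x, t)
        return abs(k - n // 2)

    scores = np.array([path_length(t) for t in grid])
    weights = np.exp(-epsilon * scores / 2.0)   # exponential mechanism
    return np.random.choice(grid, p=weights / weights.sum())
```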
no code implementations • NeurIPS 2020 • Hilal Asi, Karan Chadha, Gary Cheng, John C. Duchi
In contrast to standard stochastic gradient methods, these methods may have linear speedup in the minibatch setting even for non-smooth functions.
no code implementations • 16 May 2020 • Hilal Asi, John C. Duchi
We develop two notions of instance optimality in differential privacy, inspired by classical statistical theory: one defines a local minimax risk, and the other considers unbiased mechanisms in analogy with the Cramér-Rao bound. We show that the local modulus of continuity of the estimand of interest completely determines both quantities.
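Concretely, the local modulus of continuity at a dataset $x$ is the quantity $\omega_f(x; k) = \sup\{\, |f(x') - f(x)| : d_{\mathrm{ham}}(x, x') \le k \,\}$, where $d_{\mathrm{ham}}$ counts differing samples; the instance-optimal error of $\varepsilon$-differentially private estimators then scales as $\omega_f(x; 1/\varepsilon)$. (The notation here follows standard conventions in this literature rather than the paper's exact statement.)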
no code implementations • 5 Dec 2019 • Hilal Asi, John Duchi, Omid Javidbakht
Differential Privacy (DP) provides strong guarantees on the risk of compromising a user's data in statistical learning applications, though these strong protections make learning challenging and may be too stringent for some use cases.
1 code implementation • 20 Mar 2019 • Hilal Asi, John C. Duchi
Standard stochastic optimization methods are brittle, sensitive to stepsize choices and other algorithmic parameters, and they exhibit instability outside of well-behaved families of objectives.
no code implementations • 12 Oct 2018 • Hilal Asi, John C. Duchi
We develop model-based methods for solving stochastic convex optimization problems, introducing the approximate-proximal point, or aProx, family, which includes stochastic subgradient, proximal point, and bundle methods.
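A minimal sketch of one member of the aProx family, the truncated-model update, assuming nonnegative sample losses; `loss_fn` and `grad_fn` are assumed interfaces for the sample loss and a subgradient.

```python
import numpy as np

def aprox_truncated(loss_fn, grad_fn, x0, samples, stepsizes):
    """Illustrative truncated-model update from the aProx family: cap the
    SGD step length so the linear model of the (nonnegative) sample loss
    is never driven below zero."""
    x = x0.astype(float)
    for s, alpha in zip(samples, stepsizes):
        g = grad_fn(x, s)
        gnorm2 = float(np.dot(g, g))
        if gnorm2 == 0.0:
            continue
        # min of the nominal stepsize and the step that drives the
        # linear model of the loss exactly to zero
        step = min(alpha, loss_fn(x, s) / gnorm2)
        x = x - step * g
    return x
```

With an untruncated linear model this reduces to the stochastic subgradient method, which is why the family contains it as a special case; the truncation is one source of the robustness to stepsize choice discussed above.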