Search Results for author: Hilal Asi

Found 19 papers, 1 paper with code

Private Vector Mean Estimation in the Shuffle Model: Optimal Rates Require Many Messages

no code implementations 16 Apr 2024 Hilal Asi, Vitaly Feldman, Jelani Nelson, Huy L. Nguyen, Kunal Talwar, Samson Zhou

We study the problem of private vector mean estimation in the shuffle model of privacy where $n$ users each have a unit vector $v^{(i)} \in\mathbb{R}^d$.

DP-Dueling: Learning from Preference Feedback without Compromising User Privacy

no code implementations 22 Mar 2024 Aadirupa Saha, Hilal Asi

We consider the well-studied dueling bandit problem, where a learner aims to identify near-optimal actions using pairwise comparisons, under the constraint of differential privacy.

Active Learning

User-level Differentially Private Stochastic Convex Optimization: Efficient Algorithms with Optimal Rates

no code implementations 7 Nov 2023 Hilal Asi, Daogao Liu

We study differentially private stochastic convex optimization (DP-SCO) under user-level privacy, where each user may hold multiple data items.

Near-Optimal Algorithms for Private Online Optimization in the Realizable Regime

no code implementations 27 Feb 2023 Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar

We also develop an adaptive algorithm for the small-loss setting with regret $O(L^\star \log d + \varepsilon^{-1} \log^{1.5} d)$ where $L^\star$ is the total loss of the best expert.

From Robustness to Privacy and Back

no code implementations 3 Feb 2023 Hilal Asi, Jonathan Ullman, Lydia Zakynthinou

Thus, we conclude that for any low-dimensional task, the optimal error rate for $\varepsilon$-differentially private estimators is essentially the same as the optimal error rate for estimators that are robust to adversarially corrupting $1/\varepsilon$ training samples.

Private optimization in the interpolation regime: faster rates and hardness results

no code implementations 31 Oct 2022 Hilal Asi, Karan Chadha, Gary Cheng, John Duchi

In non-private stochastic convex optimization, stochastic gradient methods converge much faster on interpolation problems -- problems where there exists a solution that simultaneously minimizes all of the sample losses -- than on non-interpolating ones; we show that generally similar improvements are impossible in the private setting.

Private Online Prediction from Experts: Separations and Faster Rates

no code implementations 24 Oct 2022 Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar

Our lower bounds also show a separation between pure and approximate differential privacy for adaptive adversaries where the latter is necessary to achieve the non-private $O(\sqrt{T})$ regret.

How many labelers do you have? A closer look at gold-standard labels

no code implementations 24 Jun 2022 Chen Cheng, Hilal Asi, John Duchi

The construction of most supervised learning datasets revolves around collecting multiple labels for each instance, then aggregating the labels to form a type of "gold-standard".
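Below is a minimal illustration (not from the paper) of the aggregation step this abstract refers to: majority voting is the most common way a "gold-standard" label is formed from several labelers' votes. The function name and example data are purely illustrative.

from collections import Counter

def majority_vote(labels_per_item):
    """Aggregate several labelers' votes per instance into one 'gold-standard' label."""
    # Ties are broken by whichever label Counter encounters first among the most common.
    return [Counter(votes).most_common(1)[0][0] for votes in labels_per_item]

# Three instances, each annotated by three labelers.
raw_labels = [["cat", "cat", "dog"], ["dog", "dog", "dog"], ["cat", "dog", "dog"]]
print(majority_vote(raw_labels))  # -> ['cat', 'dog', 'dog']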

Optimal Algorithms for Mean Estimation under Local Differential Privacy

no code implementations 5 May 2022 Hilal Asi, Vitaly Feldman, Kunal Talwar

We show that PrivUnit (Bhowmick et al. 2018) with optimized parameters achieves the optimal variance among a large family of locally private randomizers.

Adapting to Function Difficulty and Growth Conditions in Private Optimization

no code implementations NeurIPS 2021 Hilal Asi, Daniel Levy, John Duchi

We develop algorithms for private stochastic convex optimization that adapt to the hardness of the specific function we wish to optimize.

Private Adaptive Gradient Methods for Convex Optimization

no code implementations 25 Jun 2021 Hilal Asi, John Duchi, Alireza Fallah, Omid Javidbakht, Kunal Talwar

We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm.
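As a rough sketch of the generic pattern this abstract describes (clip per-example gradients, add Gaussian noise, feed the noisy gradient into an AdaGrad-style stepsize), here is an illustrative update step. The function name, noise calibration, and hyperparameters are assumptions; this is not the paper's exact algorithm.

import numpy as np

def dp_adagrad_step(w, per_example_grads, accum, clip_norm=1.0,
                    noise_multiplier=1.0, lr=0.1, eps=1e-8, rng=None):
    """One illustrative differentially private update with an AdaGrad-style stepsize."""
    rng = np.random.default_rng() if rng is None else rng
    # Clip each per-example gradient to bound its contribution (sensitivity).
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    # Average and perturb with Gaussian noise scaled to the clipping norm.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noisy_grad = np.mean(clipped, axis=0) + rng.normal(0.0, sigma, size=w.shape)
    # AdaGrad-style coordinate-wise stepsize built from the noisy gradients.
    accum = accum + noisy_grad ** 2
    w_new = w - lr * noisy_grad / (np.sqrt(accum) + eps)
    return w_new, accum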

Stochastic Bias-Reduced Gradient Methods

no code implementations NeurIPS 2021 Hilal Asi, Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford

We develop a new primitive for stochastic optimization: a low-bias, low-cost estimator of the minimizer $x_\star$ of any Lipschitz strongly-convex function.

Stochastic Optimization

Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$ Geometry

no code implementations 2 Mar 2021 Hilal Asi, Vitaly Feldman, Tomer Koren, Kunal Talwar

Stochastic convex optimization over an $\ell_1$-bounded domain is ubiquitous in machine learning applications such as LASSO but remains poorly understood when learning with differential privacy.

Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms

no code implementations NeurIPS 2020 Hilal Asi, John C. Duchi

We study and provide instance-optimal algorithms in differential privacy by extending and approximating the inverse sensitivity mechanism.
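For reference, the (unapproximated) inverse sensitivity mechanism that this paper extends can be stated roughly as follows; the notation is chosen here for illustration and may differ from the paper's. Given an estimand $f$ and dataset $x$, the mechanism samples an output $t$ with density proportional to
\[
  \pi_x(t) \;\propto\; \exp\!\Bigl(-\tfrac{\varepsilon}{2}\,\mathrm{len}_f(x; t)\Bigr),
  \qquad
  \mathrm{len}_f(x; t) \;=\; \min\bigl\{\, d_{\mathrm{ham}}(x, x') \,:\, f(x') = t \,\bigr\},
\]
which is $\varepsilon$-differentially private because $\mathrm{len}_f(\cdot\,; t)$ changes by at most one between neighboring datasets.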

Minibatch Stochastic Approximate Proximal Point Methods

no code implementations NeurIPS 2020 Hilal Asi, Karan Chadha, Gary Cheng, John C. Duchi

In contrast to standard stochastic gradient methods, these methods may have linear speedup in the minibatch setting even for non-smooth functions.

Near Instance-Optimality in Differential Privacy

no code implementations 16 May 2020 Hilal Asi, John C. Duchi

We develop two notions of instance optimality in differential privacy, inspired by classical statistical theory: one by defining a local minimax risk, and the other by considering unbiased mechanisms and analogizing the Cramér-Rao bound. We show that the local modulus of continuity of the estimand of interest completely determines these quantities (a formalization of this modulus is sketched after this entry).

Regression
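For the instance-optimality paper above, one common formalization of the local modulus of continuity of an estimand $f$ at a dataset $x$ (given here as background; details may differ from the paper's definition) is
\[
  \omega_f(x; k) \;=\; \sup\bigl\{\, \|f(x') - f(x)\| \;:\; d_{\mathrm{ham}}(x, x') \le k \,\bigr\},
\]
the largest change in the estimand achievable by modifying at most $k$ entries of $x$.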

Element Level Differential Privacy: The Right Granularity of Privacy

no code implementations 5 Dec 2019 Hilal Asi, John Duchi, Omid Javidbakht

Differential Privacy (DP) provides strong guarantees on the risk of compromising a user's data in statistical learning applications, though these strong protections make learning challenging and may be too stringent for some use cases.

The importance of better models in stochastic optimization

1 code implementation 20 Mar 2019 Hilal Asi, John C. Duchi

Standard stochastic optimization methods are brittle, sensitive to stepsize choices and other algorithmic parameters, and they exhibit instability outside of well-behaved families of objectives.

Stochastic Optimization

Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity

no code implementations 12 Oct 2018 Hilal Asi, John C. Duchi

We develop model-based methods for solving stochastic convex optimization problems, introducing the approximate-proximal point, or aProx, family, which includes stochastic subgradient, proximal point, and bundle methods.
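As one concrete member of this family, here is a sketch of the truncated-linear-model update under the assumption of nonnegative losses; the function name and conventions are illustrative, not the paper's code. The step caps the plain SGD stepsize by the Polyak-type ratio loss / ||grad||^2.

import numpy as np

def truncated_aprox_step(x, loss_value, grad, stepsize):
    """One truncated-model (aProx-style) step for a nonnegative loss.

    Solves min_y max(loss_value + grad @ (y - x), 0) + ||y - x||^2 / (2 * stepsize),
    which caps the effective stepsize and reduces sensitivity to its choice.
    """
    grad_sq = float(np.dot(grad, grad))
    if grad_sq == 0.0:
        return x  # Zero (sub)gradient: the model is already minimized at x.
    effective_step = min(stepsize, loss_value / grad_sq)
    return x - effective_step * grad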
