no code implementations • 16 Apr 2024 • Badih Ghazi, Cristóbal Guzmán, Pritish Kamath, Ravi Kumar, Pasin Manurangsi
Motivated by applications of large embedding models, we study differentially private (DP) optimization problems under sparsity of individual gradients.
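To make the setting concrete, here is a minimal sketch of a standard DP-SGD step with per-example clipping and Gaussian noise, run on sparse per-example gradients. This is a generic baseline for intuition only, not the algorithm from this paper; `dp_sgd_step` and its parameters (`clip_norm`, `noise_mult`) are hypothetical names.

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, rng, clip_norm=1.0, noise_mult=1.0, lr=0.1):
    """One DP-SGD update on weights w from an (n, d) array of per-example grads."""
    n, d = per_example_grads.shape
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Clip each per-example gradient to l2 norm <= clip_norm.
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    avg = clipped.mean(axis=0)
    # Gaussian noise calibrated to the l2 sensitivity (clip_norm / n) of the average.
    noise = rng.normal(0.0, noise_mult * clip_norm / n, size=d)
    return w - lr * (avg + noise)

# Toy usage: 4 examples in 10 dimensions, each gradient supported on 2 coordinates.
rng = np.random.default_rng(0)
grads = np.zeros((4, 10))
for i in range(4):
    support = rng.choice(10, size=2, replace=False)
    grads[i, support] = rng.normal(size=2)
w = dp_sgd_step(np.zeros(10), grads, rng)
print(w)
```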
no code implementations • 24 Mar 2024 • Jelena Diakonikolas, Cristóbal Guzmán
The resulting class of objective functions encapsulates those traditionally studied in optimization, which are defined based on either Lipschitz continuity of the objective or Hölder/Lipschitz continuity of its gradient.
no code implementations • 6 Mar 2024 • Enayat Ullah, Michael Menart, Raef Bassily, Cristóbal Guzmán, Raman Arora
We also study public-data-assisted DP (PA-DP) supervised learning with unlabeled public samples.
no code implementations • 5 Mar 2024 • Tomás González, Cristóbal Guzmán, Courtney Paquette
For convex-concave and first-order-smooth stochastic objectives, our algorithms attain a rate of $\sqrt{\log(d)/n} + (\log(d)^{3/2}/[n\varepsilon])^{1/3}$, where $d$ is the dimension of the problem and $n$ the dataset size.
no code implementations • 22 Nov 2023 • Michael Menart, Enayat Ullah, Raman Arora, Raef Bassily, Cristóbal Guzmán
We further show that, without assuming the KL condition, the same gradient descent algorithm can achieve fast convergence to a stationary point when the gradient stays sufficiently large during the run of the algorithm.
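For intuition, the statement above reflects the standard descent-lemma calculation: on an $L$-smooth objective, each gradient step decreases the value by roughly $(\eta/2)\Vert\nabla f\Vert^2$, so progress is fast while the gradient stays large. Below is a minimal sketch of generic gradient descent with a stationarity stopping rule, shown only to illustrate that mechanism; it is not the algorithm analyzed in the paper.

```python
import numpy as np

def gd_until_stationary(grad, x0, L=4.0, tol=1e-6, max_iter=10_000):
    """Gradient descent with the standard 1/L step size, stopping when
    the gradient norm falls below tol (an approximate stationary point)."""
    x, eta = np.asarray(x0, dtype=float), 1.0 / L
    for t in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            return x, t
        x = x - eta * g
    return x, max_iter

# Toy usage on a smooth nonconvex function f(x) = sum(x_i^2 + cos(x_i)),
# whose gradient is 2x - sin(x) and whose only stationary point is 0.
grad = lambda x: 2 * x - np.sin(x)
x, iters = gd_until_stationary(grad, np.array([3.0, -2.0]))
print(x, iters)
```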
no code implementations • 30 Jun 2023 • Clément Lezane, Cristóbal Guzmán, Alexandre d'Aspremont
For the $L$-smooth case with a feasible set bounded by $D$, we derive a convergence rate of $O\big(L^2 D^2/T^{5/2} + (D_0^2+\sigma^2)/\sqrt{T}\big)$, where $D_0$ is the starting distance to an optimal solution, and $\sigma^2$ is the stochastic oracle variance.
no code implementations • 24 Feb 2023 • Raef Bassily, Cristóbal Guzmán, Michael Menart
We show that convex-concave Lipschitz stochastic saddle point problems (also known as stochastic minimax optimization) can be solved under the constraint of $(\epsilon,\delta)$-differential privacy with strong (primal-dual) gap rate of $\tilde O\big(\frac{1}{\sqrt{n}} + \frac{\sqrt{d}}{n\epsilon}\big)$, where $n$ is the dataset size and $d$ is the dimension of the problem.
no code implementations • 3 Nov 2022 • Alexandre d'Aspremont, Cristóbal Guzmán, Clément Lezane
Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization in the stochastic setting.
no code implementations • 2 Jun 2022 • Raman Arora, Raef Bassily, Tomás González, Cristóbal Guzmán, Michael Menart, Enayat Ullah
We provide a new efficient algorithm that finds an $\tilde{O}\big(\big[\frac{\sqrt{d}}{n\varepsilon}\big]^{2/3}\big)$-stationary point in the finite-sum setting, where $n$ is the number of samples.
no code implementations • 6 May 2022 • Raman Arora, Raef Bassily, Cristóbal Guzmán, Michael Menart, Enayat Ullah
For this case, we close the gap in the existing work and show that the optimal rate is (up to log factors) $\Theta\left(\frac{\Vert w^*\Vert}{\sqrt{n}} + \min\left\{\frac{\Vert w^*\Vert}{\sqrt{n\epsilon}},\frac{\sqrt{\text{rank}}\Vert w^*\Vert}{n\epsilon}\right\}\right)$, where $\text{rank}$ is the rank of the design matrix.
1 code implementation • 17 Mar 2022 • Xufeng Cai, Chaobing Song, Cristóbal Guzmán, Jelena Diakonikolas
We study stochastic monotone inclusion problems, which widely appear in machine learning applications, including robust regression and adversarial learning.
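To make the problem class concrete, below is a minimal sketch of the stochastic extragradient method, a standard baseline for monotone inclusion and variational inequality problems. It is shown for intuition only and is not the method developed in this paper; `F_hat` is a hypothetical stochastic oracle for a monotone operator $F$.

```python
import numpy as np

def stochastic_extragradient(z0, F_hat, rng, steps=1000, lr=0.05):
    """Two stochastic oracle calls per iteration: an exploration step,
    then an update using the operator evaluated at the midpoint."""
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        z_half = z - lr * F_hat(z, rng)
        z = z - lr * F_hat(z_half, rng)
    return z

# Toy usage: the bilinear saddle point min_x max_y x*y has the monotone
# optimality operator F(x, y) = (y, -x) with unique solution (0, 0);
# the iterates contract toward a noise-dominated neighborhood of it.
def F_hat(z, rng, noise=0.1):
    x, y = z
    return np.array([y, -x]) + noise * rng.normal(size=2)

print(stochastic_extragradient(np.array([1.0, 1.0]), F_hat, np.random.default_rng(0)))
```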
no code implementations • 15 Feb 2022 • Sarah Sachs, Hédi Hadiji, Tim van Erven, Cristóbal Guzmán
In the fully i.i.d. case, our bounds match the rates one would expect from results in stochastic acceleration, and in the fully adversarial case they gracefully deteriorate to match the minimax regret.
no code implementations • NeurIPS 2021 • Raef Bassily, Cristóbal Guzmán, Michael Menart
For the $\ell_1$-case with smooth losses and a polyhedral constraint, we provide the first nearly dimension-independent rate, $\tilde O\big(\frac{\log^{2/3} d}{(n\varepsilon)^{1/3}}\big)$, achieved in linear time.
no code implementations • NeurIPS 2021 • Cristóbal Guzmán, Nishant A. Mehta, Ali Mortazavi
Much of the work in online learning focuses on the study of sublinear upper bounds on the regret.
no code implementations • 7 Apr 2021 • Digvijay Boob, Cristóbal Guzmán
We show that a stochastic approximation variant of these algorithms attains risk bounds that vanish as a function of the dataset size, with respect to the strong gap function; and that a sampling-with-replacement variant achieves optimal risk bounds with respect to a weak gap function.
no code implementations • 29 Mar 2021 • Siqi Zhang, Junchi Yang, Cristóbal Guzmán, Negar Kiyavash, Niao He
In the averaged smooth finite-sum setting, our proposed algorithm improves over previous algorithms by providing a nearly tight dependence on the condition number.
no code implementations • 1 Mar 2021 • Raef Bassily, Cristóbal Guzmán, Anupama Nandi
For $2 < p \leq \infty$, we show that existing linear-time constructions for the Euclidean setup attain a nearly optimal excess risk in the low-dimensional regime.
no code implementations • 26 Jan 2021 • Jelena Diakonikolas, Cristóbal Guzmán
We introduce a new algorithmic framework for complementary composite minimization, where the objective function decouples into a (weakly) smooth and a uniformly convex term.
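As an illustrative formalization of that template (the symbols $L$, $\kappa$, $\mu$, $q$ here are my own, not necessarily the paper's notation): the objective splits into a term $f$ with a Hölder-continuous gradient (weak smoothness) and a term $\psi$ that is uniformly convex, stated in its standard subgradient form,

$$\min_{x \in \mathbb{R}^d} \; f(x) + \psi(x), \qquad \|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|^{\kappa - 1}, \;\; 1 < \kappa \le 2,$$
$$\psi(y) \ge \psi(x) + \langle g, y - x\rangle + \tfrac{\mu}{q}\|y - x\|^{q} \quad \forall\, g \in \partial\psi(x), \;\; q \ge 2.$$

Taking $\kappa = 2$ and $q = 2$ recovers the familiar smooth-plus-strongly-convex composite setting.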
no code implementations • NeurIPS 2020 • Raef Bassily, Vitaly Feldman, Cristóbal Guzmán, Kunal Talwar
Our work is the first to address uniform stability of SGD on nonsmooth convex losses.
no code implementations • 5 Nov 2018 • Jelena Diakonikolas, Cristóbal Guzmán
We study the question of whether parallelization in the exploration of the feasible set can be used to speed up convex optimization, in the local oracle model of computation.