Search Results for author: Cristóbal Guzmán

Found 20 papers, 1 paper with code

Differentially Private Optimization with Sparse Gradients

no code implementations • 16 Apr 2024 • Badih Ghazi, Cristóbal Guzmán, Pritish Kamath, Ravi Kumar, Pasin Manurangsi

Motivated by applications of large embedding models, we study differentially private (DP) optimization problems under sparsity of individual gradients.
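As a point of reference for the setting described above, the sketch below shows a generic DP-SGD-style update (per-example gradient clipping plus Gaussian noise). This is only the standard differentially private optimization template, not the paper's sparsity-aware algorithm; the function name and parameters are illustrative.

```python
# Generic DP-SGD-style step: per-example clipping + Gaussian noise.
# Illustrative template only; NOT the paper's algorithm, which exploits
# sparsity of the individual (per-example) gradients.
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """params: (d,) array; per_example_grads: (n, d) array, typically sparse."""
    n, d = per_example_grads.shape
    # Clip each individual gradient so its contribution has bounded sensitivity.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Gaussian noise calibrated to the clipping norm.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=d)
    noisy_mean_grad = (clipped.sum(axis=0) + noise) / n
    return params - lr * noisy_mean_grad
```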

Optimization on a Finer Scale: Bounded Local Subgradient Variation Perspective

no code implementations • 24 Mar 2024 • Jelena Diakonikolas, Cristóbal Guzmán

The resulting class of objective functions encapsulates the classes of objective functions traditionally studied in optimization, which are defined based on either Lipschitz continuity of the objective or Hölder/Lipschitz continuity of its gradient.

Mirror Descent Algorithms with Nearly Dimension-Independent Rates for Differentially-Private Stochastic Saddle-Point Problems

no code implementations • 5 Mar 2024 • Tomás González, Cristóbal Guzmán, Courtney Paquette

For convex-concave and first-order-smooth stochastic objectives, our algorithms attain a rate of $\sqrt{\log(d)/n} + (\log(d)^{3/2}/[n\varepsilon])^{1/3}$, where $d$ is the dimension of the problem and $n$ the dataset size.

LEMMA
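For context on where a $\log(d)$ dependence like the one quoted above typically comes from, the sketch below shows a single (non-private) entropic mirror descent step on the probability simplex. It is illustrative only and is not the paper's differentially private saddle-point algorithm.

```python
# One entropic mirror descent (multiplicative weights) step on the simplex:
#   x+ = argmin_u <grad, u> + (1/step_size) * KL(u || x).
# Illustrative, non-private template only.
import numpy as np

def entropic_md_step(x, grad, step_size):
    logits = np.log(x) - step_size * grad
    logits -= logits.max()       # numerical stability
    w = np.exp(logits)
    return w / w.sum()           # renormalize back onto the simplex
```

The entropy mirror map is the standard mechanism that replaces the dimension factor $d$ of Euclidean methods with $\log(d)$ in $\ell_1$-type geometries.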

Differentially Private Non-Convex Optimization under the KL Condition with Optimal Rates

no code implementations • 22 Nov 2023 • Michael Menart, Enayat Ullah, Raman Arora, Raef Bassily, Cristóbal Guzmán

We further show that, without assuming the KL condition, the same gradient descent algorithm can achieve fast convergence to a stationary point when the gradient stays sufficiently large during the run of the algorithm.

An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue Optimization Problems

no code implementations • 30 Jun 2023 • Clément Lezane, Cristóbal Guzmán, Alexandre d'Aspremont

For the $L$-smooth case with a feasible set bounded by $D$, we derive a convergence rate of $O(L^2 D^2/(T^2\sqrt{T}) + (D_0^2+\sigma^2)/\sqrt{T})$, where $D_0$ is the starting distance to an optimal solution, and $\sigma^2$ is the stochastic oracle variance.

Differentially Private Algorithms for the Stochastic Saddle Point Problem with Optimal Rates for the Strong Gap

no code implementations • 24 Feb 2023 • Raef Bassily, Cristóbal Guzmán, Michael Menart

We show that convex-concave Lipschitz stochastic saddle point problems (also known as stochastic minimax optimization) can be solved under the constraint of $(\epsilon,\delta)$-differential privacy with strong (primal-dual) gap rate of $\tilde O\big(\frac{1}{\sqrt{n}} + \frac{\sqrt{d}}{n\epsilon}\big)$, where $n$ is the dataset size and $d$ is the dimension of the problem.

Stochastic Optimization

Optimal Algorithms for Stochastic Complementary Composite Minimization

no code implementations • 3 Nov 2022 • Alexandre d'Aspremont, Cristóbal Guzmán, Clément Lezane

Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization in the stochastic setting.

Faster Rates of Convergence to Stationary Points in Differentially Private Optimization

no code implementations • 2 Jun 2022 • Raman Arora, Raef Bassily, Tomás González, Cristóbal Guzmán, Michael Menart, Enayat Ullah

We provide a new efficient algorithm that finds an $\tilde{O}\big(\big[\frac{\sqrt{d}}{n\varepsilon}\big]^{2/3}\big)$-stationary point in the finite-sum setting, where $n$ is the number of samples.

Stochastic Optimization

Differentially Private Generalized Linear Models Revisited

no code implementations • 6 May 2022 • Raman Arora, Raef Bassily, Cristóbal Guzmán, Michael Menart, Enayat Ullah

For this case, we close the gap in the existing work and show that the optimal rate is (up to log factors) $\Theta\left(\frac{\Vert w^*\Vert}{\sqrt{n}} + \min\left\{\frac{\Vert w^*\Vert}{\sqrt{n\epsilon}},\frac{\sqrt{\text{rank}}\Vert w^*\Vert}{n\epsilon}\right\}\right)$, where $\text{rank}$ is the rank of the design matrix.

Model Selection

Stochastic Halpern Iteration with Variance Reduction for Stochastic Monotone Inclusions

1 code implementation • 17 Mar 2022 • Xufeng Cai, Chaobing Song, Cristóbal Guzmán, Jelena Diakonikolas

We study stochastic monotone inclusion problems, which widely appear in machine learning applications, including robust regression and adversarial learning.
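For reference, the sketch below shows the basic deterministic Halpern iteration for a monotone operator $F$, which is the template that the paper's stochastic, variance-reduced method builds on. The toy operator and step size are illustrative choices, not the paper's setup.

```python
# Deterministic Halpern iteration x_{k+1} = lam*x_0 + (1-lam)*(x_k - eta*F(x_k)),
# with the classical anchoring weights lam = 1/(k+2). Illustrative template only;
# the paper studies a stochastic, variance-reduced version.
import numpy as np

def halpern_iteration(F, x0, step_size, num_iters):
    x = x0.copy()
    for k in range(num_iters):
        lam = 1.0 / (k + 2)
        x = lam * x0 + (1.0 - lam) * (x - step_size * F(x))
    return x

# Toy monotone operator: F(x) = A x, where A has positive semidefinite symmetric part.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
sol = halpern_iteration(lambda x: A @ x, x0=np.ones(2), step_size=0.3, num_iters=200)
```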

Between Stochastic and Adversarial Online Convex Optimization: Improved Regret Bounds via Smoothness

no code implementations • 15 Feb 2022 • Sarah Sachs, Hédi Hadiji, Tim van Erven, Cristóbal Guzmán

In the fully i.i.d. case, our bounds match the rates one would expect from results in stochastic acceleration, and in the fully adversarial case they gracefully deteriorate to match the minimax regret.

Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings

no code implementations • NeurIPS 2021 • Raef Bassily, Cristóbal Guzmán, Michael Menart

For the $\ell_1$-case with smooth losses and polyhedral constraint, we provide the first nearly dimension-independent rate, $\tilde O\big(\frac{\log^{2/3} d}{(n\varepsilon)^{1/3}}\big)$, in linear time.

Stochastic Optimization

Best-Case Lower Bounds in Online Learning

no code implementations • NeurIPS 2021 • Cristóbal Guzmán, Nishant A. Mehta, Ali Mortazavi

Much of the work in online learning focuses on the study of sublinear upper bounds on the regret.

Fairness

Optimal Algorithms for Differentially Private Stochastic Monotone Variational Inequalities and Saddle-Point Problems

no code implementations • 7 Apr 2021 • Digvijay Boob, Cristóbal Guzmán

We show that a stochastic approximation variant of these algorithms attains risk bounds vanishing as a function of the dataset size, with respect to the strong gap function; and a sampling with replacement variant achieves optimal risk bounds with respect to a weak gap function.
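As background on the stochastic-approximation schemes mentioned above, the sketch below shows one (non-private) stochastic extragradient step for a monotone variational inequality with operator $F$. It is a generic template with simulated noisy feedback, not the authors' differentially private algorithm.

```python
# One stochastic extragradient step for a monotone VI with operator F:
#   z_half = z - eta * F(z)   (extrapolation), then  z+ = z - eta * F(z_half).
# Stochastic feedback is simulated with additive Gaussian noise.
# Generic, non-private template only.
import numpy as np

def extragradient_step(z, F, step_size, rng, noise_std=0.0):
    g_lead = F(z) + rng.normal(0.0, noise_std, size=z.shape)
    z_half = z - step_size * g_lead
    g_half = F(z_half) + rng.normal(0.0, noise_std, size=z.shape)
    return z - step_size * g_half
```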

The Complexity of Nonconvex-Strongly-Concave Minimax Optimization

no code implementations • 29 Mar 2021 • Siqi Zhang, Junchi Yang, Cristóbal Guzmán, Negar Kiyavash, Niao He

In the averaged smooth finite-sum setting, our proposed algorithm improves over previous algorithms by providing a nearly-tight dependence on the condition number.

Non-Euclidean Differentially Private Stochastic Convex Optimization: Optimal Rates in Linear Time

no code implementations • 1 Mar 2021 • Raef Bassily, Cristóbal Guzmán, Anupama Nandi

For $2 < p \leq \infty$, we show that existing linear-time constructions for the Euclidean setup attain a nearly optimal excess risk in the low-dimensional regime.

Complementary Composite Minimization, Small Gradients in General Norms, and Applications

no code implementations • 26 Jan 2021 • Jelena Diakonikolas, Cristóbal Guzmán

We introduce a new algorithmic framework for complementary composite minimization, where the objective function decouples into a (weakly) smooth and a uniformly convex term.

regression
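For orientation on the composite structure described above, the sketch below shows a single proximal-gradient step on an objective $f(x) + g(x)$, with $g(x) = \frac{\mu}{2}\|x\|^2$ chosen so the prox has a closed form. This is only the generic composite template, not the paper's framework for (weakly) smooth $f$ and uniformly convex $g$.

```python
# One proximal-gradient step for f(x) + g(x) with g(x) = (mu/2)*||x||^2:
#   x+ = argmin_u <grad_f, u> + ||u - x||^2 / (2*step_size) + (mu/2)*||u||^2
#      = (x - step_size * grad_f) / (1 + step_size * mu).
# Generic composite-minimization template only.
import numpy as np

def prox_gradient_step(x, grad_f, step_size, mu):
    return (x - step_size * grad_f) / (1.0 + step_size * mu)

# Toy usage with illustrative values.
x_next = prox_gradient_step(np.array([1.0, -2.0]), np.array([0.5, 0.5]), step_size=0.1, mu=2.0)
```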

Lower Bounds for Parallel and Randomized Convex Optimization

no code implementations • 5 Nov 2018 • Jelena Diakonikolas, Cristóbal Guzmán

We study the question of whether parallelization in the exploration of the feasible set can be used to speed up convex optimization, in the local oracle model of computation.
