no code implementations • 7 Feb 2024 • Ahmet Alacaoglu, Donghwan Kim, Stephen J. Wright
With a simple argument, we obtain optimal or best-known complexity guarantees with cohypomonotonicity or weak MVI conditions for $\rho < \frac{1}{L}$.
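For context, the two conditions referenced here have standard definitions (with $F$ the problem operator, $u^*$ a solution, and $\rho \ge 0$); monotonicity is the special case $\rho = 0$:

```latex
% rho-cohypomonotonicity (global condition on F):
\langle F(u) - F(v),\, u - v \rangle \ge -\rho\, \lVert F(u) - F(v) \rVert^2
  \quad \text{for all } u, v.

% Weak Minty variational inequality (weak MVI) at a solution u^*:
\langle F(u),\, u - u^* \rangle \ge -\rho\, \lVert F(u) \rVert^2
  \quad \text{for all } u.
```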
no code implementations • 1 Nov 2023 • Ahmet Alacaoglu, Stephen J. Wright
To find a point that satisfies $\varepsilon$-approximate first-order conditions, we require $\widetilde{O}(\varepsilon^{-3})$ complexity in the first case, $\widetilde{O}(\varepsilon^{-4})$ in the second case, and $\widetilde{O}(\varepsilon^{-5})$ in the third case.
no code implementations • 4 Oct 2023 • Xufeng Cai, Ahmet Alacaoglu, Jelena Diakonikolas
Our main contributions are variants of the classical Halpern iteration that employ variance reduction to obtain improved complexity guarantees in settings where the $n$ component operators in the finite sum are "on average" either cocoercive or Lipschitz continuous and monotone, with parameter $L$.
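For orientation, a minimal NumPy sketch of the classical (deterministic) Halpern iteration that these variants build on; the variance-reduction machinery is omitted, and the names here (`halpern`, `T`) are illustrative, not the paper's:

```python
import numpy as np

def halpern(T, x0, num_iters=1000):
    """Classical Halpern iteration: x_{k+1} = beta_k * x0 + (1 - beta_k) * T(x_k),
    with the standard anchoring weights beta_k = 1 / (k + 2).
    T is assumed nonexpansive (e.g., T = I - F for a cocoercive operator F)."""
    x = x0.copy()
    for k in range(num_iters):
        beta = 1.0 / (k + 2)
        x = beta * x0 + (1.0 - beta) * T(x)  # anchor toward the starting point x0
    return x
```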
no code implementations • 28 Dec 2022 • Ahmet Alacaoglu, Axel Böhm, Yura Malitsky
We improve the understanding of the golden ratio algorithm, which solves monotone variational inequalities (VIs) and convex-concave min-max problems via its distinctive feature of adapting the step sizes to the local Lipschitz constants.
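For orientation, a minimal sketch of the fixed-step golden ratio algorithm (GRAAL); the adaptive step-size rule that is the subject of the paper is omitted, and the names (`graal`, `proj`, `lam`) are ours:

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def graal(F, proj, x0, lam, num_iters=1000):
    """Fixed-step golden ratio algorithm for the VI: find x* with
    <F(x*), x - x*> >= 0 for all x in C, where `proj` projects onto C.
    The 1/phi convex combination below is the algorithm's signature;
    convergence holds for lam in (0, phi / (2L)] with F L-Lipschitz."""
    x, x_bar = x0.copy(), x0.copy()
    for _ in range(num_iters):
        x_bar = ((PHI - 1) * x + x_bar) / PHI  # averaged anchor sequence
        x = proj(x_bar - lam * F(x))           # forward step from the anchor
    return x
```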
no code implementations • 29 Mar 2022 • Ahmet Alacaoglu, Hanbaek Lyu
As an application, we obtain the first online nonnegative matrix factorization algorithms for dependent data, based on stochastic projected gradient methods with adaptive step sizes and an optimal rate of convergence.
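A rough sketch of the underlying idea, assuming squared loss and one dictionary update per streamed sample; the step sizes below are plain Lipschitz/1-over-t choices rather than the adaptive rule the paper analyzes, and all names are ours:

```python
import numpy as np

def online_nmf(X_stream, d, r, inner_iters=20, seed=0):
    """Online NMF via stochastic projected gradient: for each arriving sample x,
    fit a nonnegative code h by projected gradient on ||x - W h||^2, then take
    one projected gradient step on the dictionary W."""
    rng = np.random.default_rng(seed)
    W = np.abs(rng.standard_normal((d, r)))
    for t, x in enumerate(X_stream, start=1):
        h = np.abs(rng.standard_normal(r))
        step = 1.0 / (np.linalg.norm(W, 2) ** 2 + 1e-12)  # 1/L for the h-subproblem
        for _ in range(inner_iters):
            grad_h = W.T @ (W @ h - x)
            h = np.maximum(h - step * grad_h, 0.0)        # project onto R_+^r
        grad_W = np.outer(W @ h - x, h)
        W = np.maximum(W - (1.0 / t) * grad_W, 0.0)       # project onto R_+^{d x r}
    return W
```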
no code implementations • 19 Jan 2022 • Ahmet Alacaoglu, Volkan Cevher, Stephen J. Wright
We prove complexity bounds for the primal-dual algorithm with random extrapolation and coordinate descent (PURE-CD), which has been shown to obtain good practical performance for solving convex-concave min-max problems with bilinear coupling.
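PURE-CD randomizes a deterministic primal-dual baseline. Below is a minimal sketch of that baseline (primal-dual hybrid gradient for problems with bilinear coupling); the prox callables and their signatures are assumptions of this sketch, not the paper's interface:

```python
import numpy as np

def pdhg(K, prox_f, prox_gconj, x0, y0, tau, sigma, num_iters=1000):
    """Primal-dual hybrid gradient for min_x max_y f(x) + <Kx, y> - g*(y).
    Requires tau * sigma * ||K||^2 <= 1. prox_f(v, tau) and prox_gconj(v, sigma)
    are the proximal operators of f and g*. PURE-CD replaces the full updates
    with random extrapolation and coordinate updates; this deterministic loop
    is only for orientation."""
    x, y = x0.copy(), y0.copy()
    for _ in range(num_iters):
        x_new = prox_f(x - tau * (K.T @ y), tau)
        y = prox_gconj(y + sigma * (K @ (2 * x_new - x)), sigma)  # extrapolated primal
        x = x_new
    return x, y
```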
no code implementations • NeurIPS 2021 • Ahmet Alacaoglu, Yura Malitsky, Volkan Cevher
We analyze the adaptive first-order algorithm AMSGrad for solving constrained stochastic optimization problems with a weakly convex objective.
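A minimal sketch of projected AMSGrad in this constrained setting, assuming a stochastic gradient oracle `grad` and a Euclidean projection `proj` (both names ours); the hyperparameters are common defaults, not the paper's:

```python
import numpy as np

def amsgrad(grad, proj, x0, alpha=1e-3, beta1=0.9, beta2=0.999,
            eps=1e-8, num_iters=10000):
    """Projected AMSGrad: exponential moving averages of the gradient and its
    square, plus the max-stabilized second moment v_hat that distinguishes
    AMSGrad from Adam; `proj` enforces the constraint set."""
    x = x0.copy()
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    v_hat = np.zeros_like(x)
    for _ in range(num_iters):
        g = grad(x)                      # stochastic gradient oracle
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        v_hat = np.maximum(v_hat, v)     # AMSGrad's key modification vs. Adam
        x = proj(x - alpha * m / (np.sqrt(v_hat) + eps))
    return x
```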
no code implementations • 29 Sep 2021 • Ahmet Alacaoglu, Luca Viano, Niao He, Volkan Cevher
Our sample complexities also match the best-known results for global convergence of policy gradient and two-timescale actor-critic algorithms in the single-agent setting.
1 code implementation • 16 Feb 2021 • Ahmet Alacaoglu, Yura Malitsky
We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions.
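As a rough illustration of how variance reduction enters such methods, here is a generic SVRG-style estimator plugged into the extragradient template; this is a stand-in sketch under our own naming, not the paper's exact algorithm or step-size rule:

```python
import numpy as np

def svrg_extragradient(Fs, F_full, proj, x0, eta,
                       epoch_len=100, num_epochs=50, seed=0):
    """Variance-reduced extragradient sketch for a finite-sum monotone operator
    F = (1/n) sum_i F_i. The estimator F_i(z) - F_i(w) + F(w) is unbiased, and
    its variance shrinks as the iterate approaches the snapshot w."""
    rng = np.random.default_rng(seed)
    n = len(Fs)
    x = x0.copy()
    for _ in range(num_epochs):
        w, Fw = x.copy(), F_full(x)          # snapshot and its full operator value
        for _ in range(epoch_len):
            i = rng.integers(n)
            g = Fs[i](x) - Fs[i](w) + Fw     # variance-reduced estimate at x
            x_half = proj(x - eta * g)       # extrapolation step
            j = rng.integers(n)
            g_half = Fs[j](x_half) - Fs[j](w) + Fw
            x = proj(x - eta * g_half)       # update step
    return x
```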
no code implementations • ICML 2020 • Ahmet Alacaoglu, Olivier Fercoq, Volkan Cevher
We introduce a randomly extrapolated primal-dual coordinate descent method that adapts to the sparsity of the data matrix and to favorable structures of the objective function.
no code implementations • ICML 2020 • Maria-Luiza Vladarean, Ahmet Alacaoglu, Ya-Ping Hsieh, Volkan Cevher
We propose two novel conditional gradient-based methods for solving structured stochastic convex optimization problems with a large number of linear constraints.
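For orientation, the deterministic conditional gradient (Frank-Wolfe) core that such methods build on, with a linear minimization oracle in place of projections; the stochastic and constraint-handling machinery of the paper is omitted, and the names are ours:

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, num_iters=1000):
    """Plain conditional gradient: each step calls a linear minimization oracle
    over the feasible set C instead of computing a projection."""
    x = x0.copy()
    for k in range(num_iters):
        s = lmo(grad(x))                 # argmin_{s in C} <grad f(x), s>
        gamma = 2.0 / (k + 2)            # standard open-loop step size
        x = (1 - gamma) * x + gamma * s  # move toward the oracle's vertex
    return x

def simplex_lmo(g):
    """Example LMO for the probability simplex: the minimizer is the standard
    basis vector at the smallest gradient coordinate."""
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s
```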
no code implementations • ICML 2020 • Ahmet Alacaoglu, Yura Malitsky, Panayotis Mertikopoulos, Volkan Cevher
In this paper, we focus on a theory-practice gap for Adam and its variants (AMSGrad, AdamNC, etc.).
no code implementations • NeurIPS 2017 • Ahmet Alacaoglu, Quoc Tran-Dinh, Olivier Fercoq, Volkan Cevher
We propose a new randomized coordinate descent method for a convex optimization template with broad applications.
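A minimal sketch of the basic randomized coordinate descent step for a smooth convex objective, assuming access to coordinate-wise gradients `grad_i` and coordinate-wise Lipschitz constants `L_coords` (names ours); the paper's template additionally covers composite and constrained settings:

```python
import numpy as np

def random_coordinate_descent(grad_i, x0, L_coords, num_iters=10000, seed=0):
    """Randomized coordinate descent: sample a coordinate i uniformly and take
    a gradient step along it with step size 1 / L_i, where L_i is the
    coordinate-wise Lipschitz constant of the gradient."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    n = x.size
    for _ in range(num_iters):
        i = rng.integers(n)
        x[i] -= grad_i(x, i) / L_coords[i]  # update only coordinate i
    return x
```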