Search Results for author: Ahmet Alacaoglu

Found 14 papers, 1 paper with code

Extending the Reach of First-Order Algorithms for Nonconvex Min-Max Problems with Cohypomonotonicity

no code implementations • 7 Feb 2024 • Ahmet Alacaoglu, Donghwan Kim, Stephen J. Wright

With a simple argument, we obtain optimal or best-known complexity guarantees with cohypomonotonicity or weak MVI conditions for $\rho < \frac{1}{L}$.
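
For readers new to these assumptions, the two conditions are commonly stated as follows in this literature; the recap below uses standard notation ($F$ the problem operator, $u^*$ a solution) rather than the paper's exact statement.

```latex
% rho-cohypomonotonicity: monotonicity may fail by a term controlled by rho >= 0.
% Weak Minty variational inequality (weak MVI): required only at a solution u^*.
\begin{align*}
  \langle F(u) - F(v),\, u - v \rangle &\ge -\rho \,\| F(u) - F(v) \|^2
    && \text{for all } u, v \quad \text{(cohypomonotone)}, \\
  \langle F(u),\, u - u^* \rangle &\ge -\rho \,\| F(u) \|^2
    && \text{for all } u \quad \text{(weak MVI)}.
\end{align*}
```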

Complexity of Single Loop Algorithms for Nonlinear Programming with Stochastic Objective and Constraints

no code implementations • 1 Nov 2023 • Ahmet Alacaoglu, Stephen J. Wright

To find a point that satisfies $\varepsilon$-approximate first-order conditions, we require $\widetilde{O}(\varepsilon^{-3})$ complexity in the first case, $\widetilde{O}(\varepsilon^{-4})$ in the second case, and $\widetilde{O}(\varepsilon^{-5})$ in the third case.

Variance Reduced Halpern Iteration for Finite-Sum Monotone Inclusions

no code implementations • 4 Oct 2023 • Xufeng Cai, Ahmet Alacaoglu, Jelena Diakonikolas

Our main contributions are variants of the classical Halpern iteration that employ variance reduction to obtain improved complexity guarantees in settings where the $n$ component operators in the finite sum are "on average" either cocoercive, or Lipschitz continuous and monotone, with parameter $L$.

Adversarial Robustness
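
For orientation, here is a minimal sketch of the classical deterministic Halpern iteration that these variants build on, assuming a single cocoercive operator; the function names, the step size eta, and the toy quadratic are illustrative assumptions, and the paper's variance-reduced finite-sum variants are more involved.

```python
import numpy as np

def halpern(F, x0, eta, num_iters):
    """Classical Halpern iteration x_{k+1} = lam_k * x0 + (1 - lam_k) * T(x_k),
    applied to the forward step T(x) = x - eta * F(x).

    Anchoring toward the start point x0 with weights lam_k = 1/(k+2) is what
    yields the O(1/k) rate on the residual ||F(x_k)|| for cocoercive F.
    """
    x = x0.copy()
    for k in range(num_iters):
        lam = 1.0 / (k + 2)
        Tx = x - eta * F(x)             # forward step with the operator
        x = lam * x0 + (1 - lam) * Tx   # anchor back toward x0
    return x

# Toy usage: F is the gradient of a smooth convex quadratic, hence cocoercive.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
F = lambda x: A @ x
x = halpern(F, x0=np.ones(2), eta=0.5, num_iters=1000)
print(np.linalg.norm(F(x)))  # residual should be small
```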

Beyond the Golden Ratio for Variational Inequality Algorithms

no code implementations • 28 Dec 2022 • Ahmet Alacaoglu, Axel Böhm, Yura Malitsky

We improve the understanding of the golden ratio algorithm, which solves monotone variational inequalities (VI) and convex-concave min-max problems, and whose distinctive feature is adapting the step sizes to the local Lipschitz constants.
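
A minimal sketch of the fixed-step golden ratio algorithm (GRAAL, due to Malitsky) may help fix ideas; the adaptive step-size rule that the sentence above refers to is omitted, and the toy operator and constants are illustrative assumptions.

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # the golden ratio

def graal(F, x0, lam, num_iters):
    """Fixed-step golden ratio algorithm (GRAAL) for a monotone operator F.

    Maintains a golden-ratio running average x_bar alongside the iterate x;
    for monotone L-Lipschitz F, convergence needs lam <= PHI / (2 * L).
    The adaptive variant replaces the global constant with local estimates.
    """
    x, x_bar = x0.copy(), x0.copy()
    for _ in range(num_iters):
        x_bar = ((PHI - 1) * x + x_bar) / PHI  # golden-ratio averaging
        x = x_bar - lam * F(x)                 # forward step anchored at x_bar
    return x

# Toy usage: a monotone but rotation-dominated operator, where plain
# gradient-style iterations with this step size diverge but GRAAL converges.
M = np.array([[0.1, 1.0], [-1.0, 0.1]])  # skew part plus slight strong monotonicity
F = lambda x: M @ x
L = np.linalg.norm(M, 2)
x = graal(F, x0=np.ones(2), lam=PHI / (2 * L), num_iters=2000)
print(np.linalg.norm(F(x)))  # should be near zero
```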

Convergence of First-Order Methods for Constrained Nonconvex Optimization with Dependent Data

no code implementations • 29 Mar 2022 • Ahmet Alacaoglu, Hanbaek Lyu

As an application, we obtain the first online nonnegative matrix factorization algorithms for dependent data based on stochastic projected gradient methods with adaptive step sizes and optimal rate of convergence.
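
To illustrate the projected-gradient primitive underlying such algorithms, here is a hedged sketch of one stochastic projected gradient step for online NMF; the factorization loss, fixed step size, and i.i.d.-style data stream below are illustrative assumptions, not the paper's adaptive-step-size algorithm for dependent (e.g., Markovian) data.

```python
import numpy as np

def nmf_projected_gradient_step(X_t, W, H, step):
    """One stochastic projected gradient step for online NMF on a sample
    X_t ~ W @ H under the loss 0.5 * ||W H - X_t||_F^2. The projection onto
    the nonnegative orthant is a simple clipping at zero.
    """
    R = W @ H - X_t                          # residual on the current sample
    grad_W = R @ H.T                         # gradient of the loss in W
    grad_H = W.T @ R                         # gradient of the loss in H
    W = np.maximum(W - step * grad_W, 0.0)   # projected step, keeps W >= 0
    H = np.maximum(H - step * grad_H, 0.0)   # projected step, keeps H >= 0
    return W, H

# Toy usage on a stream of noisy rank-2 samples.
rng = np.random.default_rng(0)
W_true, H_true = rng.random((10, 2)), rng.random((2, 8))
W, H = rng.random((10, 2)), rng.random((2, 8))
for t in range(2000):
    X_t = W_true @ H_true + 0.01 * rng.standard_normal((10, 8))
    W, H = nmf_projected_gradient_step(X_t, W, H, step=0.01)
print(np.linalg.norm(W @ H - W_true @ H_true))
```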

On the Complexity of a Practical Primal-Dual Coordinate Method

no code implementations • 19 Jan 2022 • Ahmet Alacaoglu, Volkan Cevher, Stephen J. Wright

We prove complexity bounds for the primal-dual algorithm with random extrapolation and coordinate descent (PURE-CD), which has been shown to obtain good practical performance for solving convex-concave min-max problems with bilinear coupling.
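
For context, a minimal sketch of the deterministic primal-dual hybrid gradient (Chambolle-Pock) template that PURE-CD randomizes; the random coordinate selection and selective extrapolation that define PURE-CD are omitted here, and the toy problem and parameter names are illustrative assumptions.

```python
import numpy as np

def pdhg(A, prox_f, prox_gstar, x0, y0, tau, sigma, num_iters):
    """Primal-dual hybrid gradient for the bilinear min-max problem
    min_x max_y f(x) + <Ax, y> - g*(y).

    PURE-CD randomizes this template: it updates a random dual coordinate
    and extrapolates only the coordinates affected by that update. The
    deterministic template needs tau * sigma * ||A||^2 <= 1.
    """
    x, y = x0.copy(), y0.copy()
    for _ in range(num_iters):
        x_new = prox_f(x - tau * (A.T @ y), tau)
        y = prox_gstar(y + sigma * (A @ (2 * x_new - x)), sigma)  # extrapolated
        x = x_new
    return x, y

# Toy usage: min_x 0.5 * ||x||^2 subject to Ax = b, written as the min-max
# problem with f(x) = 0.5 * ||x||^2 and g*(y) = <b, y>.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 6))
b = A @ rng.standard_normal(6)
prox_f = lambda v, t: v / (1 + t)    # prox of 0.5 * ||x||^2
prox_gstar = lambda v, s: v - s * b  # prox of the linear term <b, y>
step = 0.9 / np.linalg.norm(A, 2)
x, y = pdhg(A, prox_f, prox_gstar, np.zeros(6), np.zeros(4), step, step, 5000)
print(np.linalg.norm(A @ x - b))     # constraint residual, should be small
```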

Convergence of adaptive algorithms for constrained weakly convex optimization

no code implementations • NeurIPS 2021 • Ahmet Alacaoglu, Yura Malitsky, Volkan Cevher

We analyze the adaptive first-order algorithm AMSGrad for solving a constrained stochastic optimization problem with a weakly convex objective.

Stochastic Optimization
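
As a reference point, a hedged sketch of the AMSGrad update with a Euclidean projection enforcing the constraints; the projection operator, step sizes, and toy problem are illustrative assumptions, and the paper's weakly convex analysis and step-size conditions are not reproduced here.

```python
import numpy as np

def projected_amsgrad(grad_fn, project, x0, steps, alpha=1e-2,
                      beta1=0.9, beta2=0.999, eps=1e-8):
    """AMSGrad with a projection onto the constraint set after each update.

    AMSGrad differs from Adam by taking a running *maximum* of the second
    moment estimate (v_hat), which keeps the per-coordinate effective step
    size non-increasing. `project` enforces the constraints, e.g. a
    Euclidean projection onto a box or a ball.
    """
    x = x0.copy()
    m = np.zeros_like(x)       # first moment (momentum)
    v = np.zeros_like(x)       # second moment
    v_hat = np.zeros_like(x)   # running max of second moment
    for _ in range(steps):
        g = grad_fn(x)         # stochastic (sub)gradient at x
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        v_hat = np.maximum(v_hat, v)
        x = project(x - alpha * m / (np.sqrt(v_hat) + eps))
    return x

# Toy usage: minimize a noisy quadratic over the unit ball.
rng = np.random.default_rng(1)
grad_fn = lambda x: 2 * (x - np.array([2.0, 0.0])) + 0.1 * rng.standard_normal(2)
project = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto unit ball
x = projected_amsgrad(grad_fn, project, x0=np.zeros(2), steps=3000)
print(x)  # expected near the boundary point [1, 0]
```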

Sample-efficient actor-critic algorithms with an etiquette for zero-sum Markov games

no code implementations • 29 Sep 2021 • Ahmet Alacaoglu, Luca Viano, Niao He, Volkan Cevher

Our sample complexities also match the best-known results for global convergence of policy gradient and two-time-scale actor-critic algorithms in the single-agent setting.

Policy Gradient Methods

Stochastic Variance Reduction for Variational Inequality Methods

1 code implementation • 16 Feb 2021 • Ahmet Alacaoglu, Yura Malitsky

We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions.
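
A hedged sketch of the SVRG-style variance-reduced operator estimate that finite-sum methods of this kind are typically built on; the helper names are assumptions, and the paper's algorithms combine such an estimator with extrapolation steps that are omitted here.

```python
import numpy as np

def svrg_estimator(F_components, x, w, F_w, i):
    """SVRG-style variance-reduced estimate of the finite-sum operator
    F(x) = (1/n) * sum_i F_i(x), built from a snapshot point w at which
    the full operator value F_w = F(w) was computed once.

    The estimate F_i(x) - F_i(w) + F_w is unbiased over the random index i,
    and its variance shrinks as x and w approach each other.
    """
    return F_components[i](x) - F_components[i](w) + F_w

# Toy check of unbiasedness on a finite sum of linear operators.
rng = np.random.default_rng(2)
mats = [rng.standard_normal((3, 3)) for _ in range(5)]
F_components = [lambda x, A=A: A @ x for A in mats]
full_F = lambda x: sum(Fi(x) for Fi in F_components) / len(mats)

x, w = rng.standard_normal(3), rng.standard_normal(3)
F_w = full_F(w)
est = np.mean([svrg_estimator(F_components, x, w, F_w, i)
               for i in range(len(mats))], axis=0)
print(np.allclose(est, full_F(x)))  # True: the estimator is unbiased
```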

Random extrapolation for primal-dual coordinate descent

no code implementations • ICML 2020 • Ahmet Alacaoglu, Olivier Fercoq, Volkan Cevher

We introduce a randomly extrapolated primal-dual coordinate descent method that adapts to the sparsity of the data matrix and to the favorable structures of the objective function.

Conditional gradient methods for stochastically constrained convex minimization

no code implementations • ICML 2020 • Maria-Luiza Vladarean, Ahmet Alacaoglu, Ya-Ping Hsieh, Volkan Cevher

We propose two novel conditional gradient-based methods for solving structured stochastic convex optimization problems with a large number of linear constraints.
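
For background, a minimal sketch of the basic conditional gradient (Frank-Wolfe) template these methods extend; the linear minimization oracle and toy problem below are illustrative assumptions, and the paper's handling of a large number of stochastic linear constraints is omitted.

```python
import numpy as np

def frank_wolfe(grad_fn, lmo, x0, num_iters):
    """Basic conditional gradient (Frank-Wolfe) template.

    Each step calls a linear minimization oracle (LMO) over the feasible
    set instead of a projection, then moves toward the oracle's answer
    with the classical step size 2 / (k + 2).
    """
    x = x0.copy()
    for k in range(num_iters):
        s = lmo(grad_fn(x))        # argmin_{s in C} <grad, s>
        gamma = 2.0 / (k + 2)
        x = x + gamma * (s - x)    # convex combination keeps x feasible
    return x

# Toy usage: minimize ||x - b||^2 over the l1 ball, whose LMO returns a
# signed vertex of the ball.
def l1_ball_lmo(g, radius=1.0):
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -radius * np.sign(g[i])
    return s

b = np.array([0.8, 0.3, -0.2])
x = frank_wolfe(lambda x: 2 * (x - b), l1_ball_lmo, np.zeros(3), 500)
print(x)  # expected near the l1-ball projection [0.7, 0.2, -0.1]
```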

Convergence of adaptive algorithms for weakly convex constrained optimization

no code implementations • 11 Jun 2020 • Ahmet Alacaoglu, Yura Malitsky, Volkan Cevher

We analyze the adaptive first-order algorithm AMSGrad for solving a constrained stochastic optimization problem with a weakly convex objective.

Stochastic Optimization

Smooth Primal-Dual Coordinate Descent Algorithms for Nonsmooth Convex Optimization

no code implementations • NeurIPS 2017 • Ahmet Alacaoglu, Quoc Tran-Dinh, Olivier Fercoq, Volkan Cevher

We propose a new randomized coordinate descent method for a convex optimization template with broad applications.
