1 code implementation • 26 Jan 2024 • Haoyuan Cai, Sulaiman A. Alghunaim, Ali H. Sayed
The optimistic gradient method is a useful tool for solving minimax optimization problems.
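As a point of reference, here is a minimal sketch of the optimistic gradient update on a toy bilinear saddle problem; the objective $f(x, y) = xy$, step size, and iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

# Optimistic gradient update for min_x max_y f(x, y), illustrated on the
# toy bilinear problem f(x, y) = x * y (an assumption, not from the paper).
def field(x, y):
    # Combined descent direction: descend in x, ascend in y (sign flipped).
    return np.array([y, -x])

eta = 0.1                          # illustrative step size
z = np.array([1.0, 1.0])           # z = (x, y)
g_prev = field(*z)
for _ in range(200):
    g = field(*z)
    z = z - eta * (2.0 * g - g_prev)  # optimistic step: extrapolate with the last gradient
    g_prev = g
print(z)                           # approaches the saddle point (0, 0)
```

Plain gradient descent-ascent spirals away from the saddle point on this problem; the extrapolation term $2g_t - g_{t-1}$ is what restores convergence.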
no code implementations • 12 Oct 2023 • Luyao Guo, Sulaiman A. Alghunaim, Kun Yuan, Laurent Condat, Jinde Cao
We demonstrate that the leading communication complexity of ProxSkip is $\mathcal{O}\left(\frac{p\sigma^2}{n\epsilon^2}\right)$ for the non-convex and convex settings and $\mathcal{O}\left(\frac{p\sigma^2}{n\epsilon}\right)$ for the strongly convex setting, where $n$ denotes the number of nodes, $p$ the probability of communication, $\sigma^2$ the level of stochastic noise, and $\epsilon$ the desired accuracy level.
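The scaling above can be read off with some purely illustrative arithmetic; the constants hidden by the $\mathcal{O}(\cdot)$ notation are ignored and the sample numbers below are made up.

```python
# Illustrative arithmetic for the leading communication-complexity terms;
# O(.) constants are ignored and the sample parameter values are made up.
def proxskip_comm(p, sigma2, n, eps, strongly_convex=False):
    return p * sigma2 / (n * eps) if strongly_convex else p * sigma2 / (n * eps**2)

# Halving the communication probability p halves the leading term, and
# doubling the number of nodes n does the same (linear speedup in n):
print(proxskip_comm(p=0.50, sigma2=1.0, n=10, eps=1e-2))  # 500.0
print(proxskip_comm(p=0.25, sigma2=1.0, n=20, eps=1e-2))  # 125.0
```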
no code implementations • 10 Oct 2022 • Edward Duc Hien Nguyen, Sulaiman A. Alghunaim, Kun Yuan, César A. Uribe
We study the decentralized optimization problem in which a network of $n$ agents seeks to minimize, in a distributed manner, the average of a set of heterogeneous non-convex cost functions.
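For intuition, a minimal sketch of decentralized gradient descent over a ring of agents, assuming simple quadratic local costs and an illustrative step size (neither taken from the paper):

```python
import numpy as np

# Decentralized gradient descent on a ring of n agents; the quadratic local
# costs 0.5 * ||x_i - t_i||^2 and the step size are illustrative assumptions.
n, d, eta = 5, 3, 0.05
rng = np.random.default_rng(0)
targets = rng.normal(size=(n, d))            # heterogeneous local minimizers
W = np.zeros((n, n))                         # doubly stochastic ring weights
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

x = np.zeros((n, d))                         # one local copy per agent
for _ in range(500):
    grads = x - targets                      # gradient of each local quadratic
    x = W @ x - eta * grads                  # combine with neighbors, then adapt
print(x.mean(axis=0), targets.mean(axis=0))  # local copies approach the average minimizer
```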
no code implementations • 17 May 2021 • Kun Yuan, Sulaiman A. Alghunaim, Xinmeng Huang
For smooth objective functions, the transient stage (the number of iterations the algorithm must run before reaching the linear-speedup stage) of D-SGD is on the order of $\Omega(n/(1-\beta)^2)$ and $\Omega(n^3/(1-\beta)^4)$ for strongly and generally convex cost functions, respectively, where $1-\beta \in (0, 1)$ is a topology-dependent quantity that approaches $0$ for a large and sparse network.
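The bounds can be tabulated with some illustrative arithmetic (constants hidden by $\Omega(\cdot)$ ignored), showing how a sparser topology, i.e., $\beta$ closer to $1$, lengthens the transient stage:

```python
# Illustrative scaling of the transient-stage lower bounds (constants ignored):
# strongly convex: n / (1 - beta)^2;  generally convex: n^3 / (1 - beta)^4.
def transient(n, beta, strongly_convex=True):
    gap = 1 - beta
    return n / gap**2 if strongly_convex else n**3 / gap**4

# A sparser topology (beta closer to 1) sharply lengthens the transient stage:
for beta in (0.5, 0.9, 0.99):
    print(beta,
          transient(n=50, beta=beta),
          transient(n=50, beta=beta, strongly_convex=False))
```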
no code implementations • 15 Jun 2020 • Sulaiman A. Alghunaim, Ming Yan, Ali H. Sayed
This work studies multi-agent sharing optimization problems in which the objective function is the sum of smooth local functions plus a convex (possibly non-smooth) function that couples all agents.
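For intuition, a minimal sketch of one proximal-gradient step on a sharing-type objective $\sum_k f_k(x_k) + g(\sum_k x_k)$, assuming the illustrative choices $f_k(x_k) = \tfrac{1}{2}\|x_k\|^2$ and $g = \|\cdot\|_1$ (not taken from the paper):

```python
import numpy as np

# One proximal-gradient step for sum_k f_k(x_k) + g(sum_k x_k); the choices
# f_k(x_k) = 0.5 * ||x_k||^2 and g = ||.||_1 are illustrative assumptions.
def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)  # prox of tau * ||.||_1

eta = 0.1
x = np.array([[1.0, -2.0], [0.5, 3.0]])   # two agents, one row per agent
z = x - eta * x                            # local gradient steps (grad f_k = x_k)
K, s = z.shape[0], z.sum(axis=0)           # number of agents, coupled sum
# prox of g composed with the sum: shift every agent by the same correction
z += (soft_threshold(s, K * eta) - s) / K
print(z)
```

The key property used here is that the prox of $g(\sum_k x_k)$ over the stacked variables reduces to a one-dimensional (per-entry) prox on the sum $s$, whose correction is then split evenly across the $K$ agents.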
no code implementations • 26 Mar 2019 • Kun Yuan, Sulaiman A. Alghunaim, Bicheng Ying, Ali H. Sayed
It is still unknown whether, when, and why these bias-correction methods can outperform their traditional counterparts (such as consensus and diffusion) under noisy gradients and constant step-sizes.
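For concreteness, a minimal sketch contrasting plain diffusion (adapt-then-combine) with a bias-corrected recursion in the style of Exact Diffusion; the quadratic costs, ring weights, and step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Diffusion (adapt-then-combine) vs. a bias-corrected recursion in the style
# of Exact Diffusion; costs, weights, and step size are illustrative.
n, d, eta, T = 5, 2, 0.05, 2000
rng = np.random.default_rng(1)
targets = rng.normal(size=(n, d))          # heterogeneous local minimizers
W = np.zeros((n, n))                       # doubly stochastic ring weights
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25
Wbar = (W + np.eye(n)) / 2                 # combination matrix for the corrected method

x_diff = np.zeros((n, d))                  # plain diffusion iterate
x_ed = np.zeros((n, d))                    # bias-corrected iterate
psi_prev = np.zeros((n, d))
for t in range(T):
    x_diff = W @ (x_diff - eta * (x_diff - targets))  # adapt, then combine
    psi = x_ed - eta * (x_ed - targets)               # adapt
    # correct with the previous adaptation step, then combine
    x_ed = Wbar @ (psi + x_ed - psi_prev) if t else Wbar @ psi
    psi_prev = psi

avg = targets.mean(axis=0)
print(np.linalg.norm(x_diff - avg), np.linalg.norm(x_ed - avg))
```

With heterogeneous costs and a constant step size, the plain diffusion iterates settle at a biased fixed point, while the corrected recursion drives the deterministic bias to zero, which is exactly the contrast the question above concerns.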
no code implementations • 23 Dec 2017 • Sulaiman A. Alghunaim, Ali H. Sayed
In this formulation, each agent is influenced by only a subset of the entries of a global parameter vector or model, and is subject to convex constraints that are known only locally.
Optimization and Control
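A schematic statement of such a coupled formulation in LaTeX; the symbols $J_k$, $\mathcal{I}_k$, and $\mathcal{W}_k$ are illustrative notation rather than the paper's own.

```latex
% Schematic coupled formulation (notation illustrative): agent k only sees
% and constrains the sub-vector w_{I_k} of the global model w.
\[
\min_{w}\;\sum_{k=1}^{N} J_k\!\left(w_{\mathcal{I}_k}\right)
\quad\text{subject to}\quad
w_{\mathcal{I}_k}\in\mathcal{W}_k,\qquad k=1,\dots,N,
\]
```

where $\mathcal{I}_k$ indexes the entries of $w$ that influence agent $k$ and $\mathcal{W}_k$ is agent $k$'s locally known convex constraint set.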