Search Results for author: Sulaiman A. Alghunaim

Found 7 papers, 1 paper with code

Diffusion Stochastic Optimization for Min-Max Problems

1 code implementation • 26 Jan 2024 • Haoyuan Cai, Sulaiman A. Alghunaim, Ali H. Sayed

The optimistic gradient method is useful in addressing minimax optimization problems.

Stochastic Optimization
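The optimistic gradient method mentioned above has a simple deterministic form. As a minimal illustrative sketch (not code from the paper; the bilinear objective, step size, and iteration count are assumptions for demonstration), here is optimistic gradient descent-ascent (OGDA) on $f(x, y) = xy$, whose saddle point is $(0, 0)$ and on which plain gradient descent-ascent fails to converge:

```python
# Optimistic gradient descent-ascent (OGDA) on f(x, y) = x * y.
# Plain gradient descent-ascent cycles/diverges on this bilinear problem;
# the "optimistic" extrapolation 2*grad_t - grad_{t-1} stabilizes it.
def ogda(x0, y0, eta=0.1, iters=5000):
    x, y = x0, y0
    gx_prev, gy_prev = y0, x0              # grad_x f = y, grad_y f = x
    for _ in range(iters):
        gx, gy = y, x                      # current gradients
        x = x - eta * (2 * gx - gx_prev)   # descend in x
        y = y + eta * (2 * gy - gy_prev)   # ascend in y
        gx_prev, gy_prev = gx, gy
    return x, y

x, y = ogda(1.0, 1.0)
# (x, y) approaches the saddle point (0, 0)
```

The paper studies the stochastic, diffusion (networked) version of this idea; the sketch only shows the single-agent deterministic update it builds on.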

Revisiting Decentralized ProxSkip: Achieving Linear Speedup

no code implementations • 12 Oct 2023 • Luyao Guo, Sulaiman A. Alghunaim, Kun Yuan, Laurent Condat, Jinde Cao

We demonstrate that the leading communication complexity of ProxSkip is $\mathcal{O}\left(\frac{p\sigma^2}{n\epsilon^2}\right)$ for non-convex and convex settings, and $\mathcal{O}\left(\frac{p\sigma^2}{n\epsilon}\right)$ for the strongly convex setting, where $n$ represents the number of nodes, $p$ denotes the probability of communication, $\sigma^2$ signifies the level of stochastic noise, and $\epsilon$ denotes the desired accuracy level.

Distributed Optimization Federated Learning

On the Performance of Gradient Tracking with Local Updates

no code implementations • 10 Oct 2022 • Edward Duc Hien Nguyen, Sulaiman A. Alghunaim, Kun Yuan, César A. Uribe

We study the decentralized optimization problem in which a network of $n$ agents seeks to minimize, in a distributed manner, the average of a set of heterogeneous non-convex cost functions.

Federated Learning
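Gradient tracking, the method this paper analyzes, augments decentralized gradient descent with an auxiliary variable that tracks the network-average gradient. Below is a minimal sketch (not the paper's implementation; the 4-agent ring, quadratic local costs, and step size are illustrative assumptions) showing that every agent reaches the exact global minimizer despite heterogeneous local data:

```python
# Decentralized gradient tracking on a 4-agent ring (illustrative sketch).
# Local costs f_i(x) = 0.5 * (x - b[i])**2; the minimizer of the average
# cost is mean(b) = 2.5. W is a doubly stochastic mixing matrix, and g[i]
# tracks the network-average gradient at agent i.
n = 4
b = [1.0, 2.0, 3.0, 4.0]                      # heterogeneous local data
W = [[0.0] * n for _ in range(n)]
for i in range(n):                            # ring: self 0.5, neighbors 0.25
    W[i][i] = 0.5
    W[i][(i - 1) % n] = W[i][(i + 1) % n] = 0.25

def grad(i, x):                               # local gradient of f_i
    return x - b[i]

alpha = 0.05                                  # illustrative step size
x = [0.0] * n
g = [grad(i, x[i]) for i in range(n)]         # initialize tracker

for _ in range(1000):
    x_new = [sum(W[i][j] * x[j] for j in range(n)) - alpha * g[i]
             for i in range(n)]
    g = [sum(W[i][j] * g[j] for j in range(n))
         + grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)]
    x = x_new

# every agent converges to the global minimizer mean(b) = 2.5
```

The tracker update preserves the invariant that the average of the `g[i]` equals the average of the local gradients, which is what removes the heterogeneity-induced bias of plain decentralized gradient descent; the paper's contribution concerns the local-updates variant, which this sketch does not include.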

Removing Data Heterogeneity Influence Enhances Network Topology Dependence of Decentralized SGD

no code implementations • 17 May 2021 • Kun Yuan, Sulaiman A. Alghunaim, Xinmeng Huang

For smooth objective functions, the transient stage (which measures the number of iterations the algorithm has to experience before achieving the linear speedup stage) of D-SGD is on the order of ${\Omega}(n/(1-\beta)^2)$ and $\Omega(n^3/(1-\beta)^4)$ for strongly and generally convex cost functions, respectively, where $1-\beta \in (0, 1)$ is a topology-dependent quantity that approaches $0$ for a large and sparse network.

Stochastic Optimization

A Multi-Agent Primal-Dual Strategy for Composite Optimization over Distributed Features

no code implementations • 15 Jun 2020 • Sulaiman A. Alghunaim, Ming Yan, Ali H. Sayed

This work studies multi-agent sharing optimization problems with the objective function being the sum of smooth local functions plus a convex (possibly non-smooth) function coupling all agents.

Regression

On the Influence of Bias-Correction on Distributed Stochastic Optimization

no code implementations • 26 Mar 2019 • Kun Yuan, Sulaiman A. Alghunaim, Bicheng Ying, Ali H. Sayed

It is still unknown {\em whether}, {\em when} and {\em why} these bias-correction methods can outperform their traditional counterparts (such as consensus and diffusion) with noisy gradient and constant step-sizes.

Stochastic Optimization
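The consensus and diffusion strategies named in this abstract differ only in the order of the combine and adapt steps. As a hedged sketch (the ring network, quadratic costs, and constant step size are illustrative assumptions, and gradients are kept deterministic for clarity), here are the two bias-uncorrected updates side by side:

```python
# Consensus vs. adapt-then-combine (ATC) diffusion on a 4-agent ring.
# Local costs f_i(x) = 0.5 * (x - b[i])**2, global minimizer mean(b) = 2.5.
# With a constant step size mu, both settle near (not exactly at) the
# minimizer; bias-corrected methods remove this residual bias.
n = 4
b = [1.0, 2.0, 3.0, 4.0]
W = [[0.0] * n for _ in range(n)]
for i in range(n):                            # doubly stochastic ring weights
    W[i][i] = 0.5
    W[i][(i - 1) % n] = W[i][(i + 1) % n] = 0.25

mu = 0.05
xc = [0.0] * n                                # consensus iterates
xd = [0.0] * n                                # diffusion iterates
for _ in range(1000):
    # consensus: combine neighbors, then step along the local gradient
    xc = [sum(W[i][j] * xc[j] for j in range(n)) - mu * (xc[i] - b[i])
          for i in range(n)]
    # diffusion (ATC): local gradient step first, then combine
    psi = [xd[i] - mu * (xd[i] - b[i]) for i in range(n)]
    xd = [sum(W[i][j] * psi[j] for j in range(n)) for i in range(n)]

# both methods hover near mean(b) = 2.5 with a small steady-state bias
```

With stochastic gradients, the question the paper addresses is when bias-corrected methods (e.g., gradient tracking or exact diffusion) actually beat these two classical strategies; the sketch only shows the baseline updates being compared.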

Distributed Coupled Multi-Agent Stochastic Optimization

no code implementations • 23 Dec 2017 • Sulaiman A. Alghunaim, Ali H. Sayed

In this formulation, each agent is influenced by only a subset of the entries of a global parameter vector or model, and is subject to convex constraints that are only known locally.

Optimization and Control
