Distributed Optimization

77 papers with code • 0 benchmarks • 0 datasets

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
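
As a concrete illustration of this setting, the sketch below splits a least-squares objective across a few simulated workers: each worker computes a gradient on its own data shard and a central server averages the gradients to update the shared model. The data sizes, worker count, and step size are invented for illustration and do not come from any paper listed on this page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem, split across n_workers data shards
# (all sizes here are illustrative placeholders).
n_workers, n_per_worker, dim = 4, 250, 10
A = [rng.normal(size=(n_per_worker, dim)) for _ in range(n_workers)]
x_true = rng.normal(size=dim)
b = [a @ x_true + 0.01 * rng.normal(size=n_per_worker) for a in A]

def local_gradient(x, a, y):
    """Gradient of (1/2m)||a x - y||^2 computed on one worker's shard."""
    return a.T @ (a @ x - y) / len(y)

x = np.zeros(dim)          # global model held by the server
step = 0.1
for _ in range(200):
    # Each worker sends only its local gradient; the server averages and updates.
    grads = [local_gradient(x, A[i], b[i]) for i in range(n_workers)]
    x -= step * np.mean(grads, axis=0)
```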

Libraries

Use these libraries to find Distributed Optimization models and implementations

FairSync: Ensuring Amortized Group Exposure in Distributed Recommendation Retrieval

xuchen0427/fairsync 16 Feb 2024

Specifically, FairSync resolves the issue by moving it to the dual space, where a central node aggregates historical fairness data into a vector and distributes it to all servers.
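
The sketch below is only a loose, hypothetical illustration of the aggregate-and-broadcast pattern described above: each retrieval server reports its local per-group exposure counts, a central node sums them into one vector, derives a penalty vector, and broadcasts it back. The penalty formula, group counts, and variable names are invented for illustration and are not FairSync's actual dual update.

```python
import numpy as np

# Conceptual sketch only: all quantities below are hypothetical.
n_servers, n_groups = 3, 5

# Each server tracks how much exposure each provider group has received locally.
local_exposure = [np.random.default_rng(s).integers(0, 20, n_groups)
                  for s in range(n_servers)]

# Central node: aggregate historical exposure into a single vector ...
global_exposure = np.sum(local_exposure, axis=0)

# ... turn it into dual-style penalties (under-exposed groups get a boost) ...
target = global_exposure.mean()
dual_vector = (target - global_exposure) / (target + 1e-9)

# ... and broadcast the same vector to every retrieval server, where it can be
# added to item scores at retrieval time.
broadcast = [dual_vector.copy() for _ in range(n_servers)]
```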

Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers

sisl/distributed_admm_sampler 29 Jan 2024

Many machine learning applications require operating on a spatially distributed dataset.

Asynchronous Local-SGD Training for Language Modeling

google-deepmind/asyncdiloco 17 Jan 2024

Local stochastic gradient descent (Local-SGD), also referred to as federated averaging, is an approach to distributed optimization where each device performs more than one SGD update per communication.
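
A minimal sketch of this pattern, on a toy quadratic objective per device: each device copies the global model, runs several local SGD steps, and the server then averages the local models (federated averaging). Device count, step counts, and the noisy-gradient objective are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objectives f_i(x) = 0.5*||x - c_i||^2, one per device (illustrative only).
n_devices, dim, local_steps, rounds, lr = 8, 5, 4, 50, 0.1
centers = rng.normal(size=(n_devices, dim))

global_x = np.zeros(dim)
for _ in range(rounds):
    local_models = []
    for i in range(n_devices):
        x = global_x.copy()
        # Each device performs several SGD updates before communicating.
        for _ in range(local_steps):
            grad = x - centers[i] + 0.01 * rng.normal(size=dim)  # noisy local gradient
            x -= lr * grad
        local_models.append(x)
    # Federated averaging: the new global model is the mean of the local models.
    global_x = np.mean(local_models, axis=0)
```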

Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates

nikosimus/cc-for-br-learning 15 Oct 2023

Byzantine robustness is an essential feature of algorithms for certain distributed optimization problems, typically encountered in collaborative/federated learning.
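
To illustrate why robust aggregation matters, the sketch below contrasts a plain mean of worker gradients with a coordinate-wise median when a few workers send adversarial values. This is a generic textbook defence shown under made-up numbers, not the compressed Byzantine-robust algorithm proposed in the paper above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Honest workers send gradients near the true value; Byzantine workers send garbage.
dim, n_honest, n_byzantine = 6, 9, 2
honest = rng.normal(loc=1.0, scale=0.1, size=(n_honest, dim))
byzantine = np.full((n_byzantine, dim), -100.0)
all_grads = np.vstack([honest, byzantine])

naive = all_grads.mean(axis=0)         # badly corrupted by the two attackers
robust = np.median(all_grads, axis=0)  # stays close to the honest gradients
```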

Differentially Private Distributed Estimation and Learning

papachristoumarios/dp-distributed-estimation 28 Jun 2023

We show that the noise that minimizes the convergence time to the best estimates is the Laplace noise, with parameters corresponding to each agent's sensitivity to their signal and network characteristics.
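
A minimal sketch of the underlying mechanism, assuming the standard Laplace mechanism: each agent perturbs the value it shares with zero-mean Laplace noise whose scale is sensitivity divided by the privacy parameter. The sensitivity and epsilon values below are placeholders; the paper additionally ties the scale to each agent's signal sensitivity and the network characteristics.

```python
import numpy as np

rng = np.random.default_rng(3)

def privatize(value, sensitivity, epsilon):
    """Add zero-mean Laplace noise with scale sensitivity/epsilon."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

local_estimate = 4.2   # an agent's local estimate (illustrative)
shared = privatize(local_estimate, sensitivity=1.0, epsilon=0.5)
```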

Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized Language Model Finetuning Using Shared Randomness

ezelikman/justonebyte 16 Jun 2023

Language model training in distributed settings is limited by the communication cost of gradient exchanges.
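
The sketch below illustrates the shared-randomness idea in a simplified form: if the worker and the server seed the same pseudorandom generator, the worker can transmit only a coarsely quantized scalar coefficient and the server can reconstruct an approximate update locally. The quantization scheme, seeds, and protocol details here are invented for illustration and are not the paper's exact method.

```python
import numpy as np

dim, seed, round_id = 1000, 1234, 7

def shared_direction(seed, round_id, dim):
    """Both sides regenerate the same random direction from the shared seed."""
    rng = np.random.default_rng([seed, round_id])
    return rng.normal(size=dim)

# Worker side: project the true gradient onto the shared direction and
# quantize the coefficient very coarsely (this scalar is all that is sent).
true_grad = np.random.default_rng(0).normal(size=dim)
d = shared_direction(seed, round_id, dim)
coeff = float(true_grad @ d) / dim
message = np.int8(np.clip(round(coeff * 100), -127, 127))   # ~1 byte on the wire

# Server side: regenerate the direction and form an approximate update.
approx_grad = (float(message) / 100.0) * shared_direction(seed, round_id, dim)
```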

Error Feedback Shines when Features are Rare

burlachenkok/ef21_with_rare_features 24 May 2023

To illustrate our main result, we show that in order to find a random vector $\hat{x}$ such that $\lVert {\nabla f(\hat{x})} \rVert^2 \leq \varepsilon$ in expectation, ${\color{green}\sf GD}$ with the ${\color{green}\sf Top1}$ sparsifier and ${\color{green}\sf EF}$ requires ${\cal O} \left(\left( L+{\color{blue}r} \sqrt{ \frac{{\color{red}c}}{n} \min \left( \frac{{\color{red}c}}{n} \max_i L_i^2, \frac{1}{n}\sum_{i=1}^n L_i^2 \right) }\right) \frac{1}{\varepsilon} \right)$ bits to be communicated by each worker to the server only, where $L$ is the smoothness constant of $f$, $L_i$ is the smoothness constant of $f_i$, ${\color{red}c}$ is the maximal number of clients owning any feature ($1\leq {\color{red}c} \leq n$), and ${\color{blue}r}$ is the maximal number of features owned by any client ($1\leq {\color{blue}r} \leq d$).
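
As a minimal sketch of the ingredients named above, the code below runs the classical error-feedback (EF) mechanism with a Top-1 sparsifier on a single worker and a simple quadratic: only one coordinate of the (scaled) gradient is "communicated" per step, and the dropped mass is stored in an error buffer for later rounds. EF21 and the multi-worker rates discussed in the paper differ in the details; the objective and step size here are illustrative.

```python
import numpy as np

def top1(v):
    """Top-1 sparsifier: keep only the largest-magnitude coordinate."""
    out = np.zeros_like(v)
    k = np.argmax(np.abs(v))
    out[k] = v[k]
    return out

dim, lr, steps = 10, 0.1, 200
target = np.arange(1.0, dim + 1.0)    # minimizer of the toy objective
x = np.zeros(dim)
error = np.zeros(dim)                 # memory of what compression dropped

for _ in range(steps):
    grad = x - target                        # gradient of 0.5*||x - target||^2
    compressed = top1(error + lr * grad)     # send only one coordinate
    error = error + lr * grad - compressed   # keep the rest for later rounds
    x -= compressed
```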

On the Convergence of Decentralized Federated Learning Under Imperfect Information Sharing

vishnupandi/FedNDL3 19 Mar 2023

The first algorithm, Federated Noisy Decentralized Learning (FedNDL1), comes from the literature: noise is added to the parameters to simulate the presence of noisy communication channels.
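
A rough sketch of this noisy-parameter-sharing setting: nodes on a ring receive their neighbours' parameters corrupted by channel noise, gossip-average them, and then take a local gradient step. The ring topology, noise level, and toy objective are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Node i minimizes 0.5*||x - targets[i]||^2 (illustrative objective).
n_nodes, dim, lr, noise_std = 6, 4, 0.1, 0.05
targets = rng.normal(size=(n_nodes, dim))
x = np.zeros((n_nodes, dim))

for _ in range(100):
    received = x + noise_std * rng.normal(size=x.shape)   # noisy channel
    for i in range(n_nodes):
        left, right = (i - 1) % n_nodes, (i + 1) % n_nodes
        avg = (x[i] + received[left] + received[right]) / 3.0   # gossip average
        x[i] = avg - lr * (avg - targets[i])                    # local gradient step
```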

Byzantine-Robust Loopless Stochastic Variance-Reduced Gradient

nikosimus/br-lsvrg 8 Mar 2023

Distributed optimization with open collaboration is a popular field, since it gives small groups, companies, universities, and individuals the opportunity to jointly solve huge-scale problems.

TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation

adap/flower 20 Feb 2023

In federated learning, a large number of users collaborate to learn a global model.
