Distributed Optimization
77 papers with code • 0 benchmarks • 0 datasets
The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by exploiting the combined computational power of those machines.
Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
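The definition above can be made concrete with a minimal sketch of one synchronous round of distributed gradient descent: each machine holds a shard of the data and computes a local gradient, and a coordinator averages them into a single global update. The shard values and step size here are illustrative assumptions.

```python
# Minimal sketch: one synchronous round of distributed gradient descent.
# Each "machine" holds a data shard; a coordinator averages the local
# gradients and applies one update to the shared model.

def local_gradient(w, shard):
    # Gradient of the average least-squares loss 0.5*(w*x - y)^2 on this shard
    g = 0.0
    for x, y in shard:
        g += (w * x - y) * x
    return g / len(shard)

def distributed_step(w, shards, lr=0.1):
    grads = [local_gradient(w, s) for s in shards]  # run in parallel in practice
    avg_g = sum(grads) / len(grads)                 # coordinator aggregates
    return w - lr * avg_g                           # single global update

# Toy data split over two machines; both shards satisfy y = 2*x
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(200):
    w = distributed_step(w, shards)   # converges toward w = 2
```

In real systems the `local_gradient` calls run concurrently on separate machines, and the aggregation step is where communication cost arises, which motivates most of the papers listed below.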
Benchmarks
These leaderboards are used to track progress in Distributed Optimization.
Libraries
Use these libraries to find Distributed Optimization models and implementations.
Latest papers with no code
Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction
These methods replace the outer loop with probabilistic gradient computation triggered by a coin flip in each iteration, ensuring simpler proofs, efficient hyperparameter selection, and sharp convergence guarantees.
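The coin-flip mechanism described above can be sketched in the style of loopless SVRG (an assumed simplification, not the paper's Riemannian method): instead of a fixed-length inner loop, each iteration refreshes the full-gradient snapshot with probability `p`.

```python
import random

# Sketch of a "loopless" variance-reduced method (L-SVRG style, Euclidean
# case for simplicity): a coin flip with probability p replaces the outer
# loop that would otherwise refresh the full-gradient snapshot.
def loopless_svrg(grad_i, n, w0, lr=0.1, p=0.1, steps=500, seed=0):
    rng = random.Random(seed)
    w, snap = w0, w0
    full = sum(grad_i(snap, i) for i in range(n)) / n
    for _ in range(steps):
        i = rng.randrange(n)
        # Variance-reduced gradient estimate: unbiased, shrinking variance
        g = grad_i(w, i) - grad_i(snap, i) + full
        w -= lr * g
        if rng.random() < p:          # coin flip: refresh the snapshot
            snap = w
            full = sum(grad_i(snap, i) for i in range(n)) / n
    return w

# Toy problem: minimize the average of 0.5*(w - x_i)^2, optimum = mean(x)
data = [1.0, 2.0, 3.0]
w = loopless_svrg(lambda w, i: w - data[i], len(data), 0.0)
```

The probabilistic refresh keeps the expected cost per iteration constant, which is what makes the single-loop analysis and hyperparameter choice simpler than the nested-loop variant.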
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression
In distributed optimization and learning, and even more so in the modern framework of federated learning, communication, which is slow and costly, is critical.
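One standard way to cut communication, local training, can be sketched in a FedAvg-style form (an assumption for illustration, not LoCoDL's actual algorithm): each client runs several local gradient steps between communication rounds, and the server averages the resulting models.

```python
# Sketch of local training for communication efficiency (FedAvg-style):
# clients take several local steps per round, so communication happens
# once per round instead of once per gradient step.
def local_train(w, grad, lr=0.1, local_steps=5):
    for _ in range(local_steps):
        w -= lr * grad(w)
    return w

def fedavg_round(w, client_grads, lr=0.1, local_steps=5):
    models = [local_train(w, g, lr, local_steps) for g in client_grads]
    return sum(models) / len(models)   # one communication per round

# Two clients with quadratic losses 0.5*(w-1)^2 and 0.5*(w-3)^2
clients = [lambda w: w - 1.0, lambda w: w - 3.0]
w = 0.0
for _ in range(50):
    w = fedavg_round(w, clients)       # approaches the global optimum w = 2
```

Methods like the paper's typically combine such local steps with compression of the exchanged models or gradients to reduce the per-round cost further.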
MUSIC: Accelerated Convergence for Distributed Optimization With Inexact and Exact Methods
Gradient-type distributed optimization methods have blossomed into one of the most important tools for solving a minimization learning task over a networked agent system.
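A representative gradient-type method over a networked agent system is decentralized gradient descent (DGD), sketched below under assumed toy losses: each agent averages with its neighbors through a doubly stochastic mixing matrix, then takes a local gradient step.

```python
# Sketch of decentralized gradient descent (DGD) over a network of agents:
# mix with neighbors via weights W, then step along the local gradient.
def dgd_step(xs, grads, W, lr):
    n = len(xs)
    mixed = [sum(W[i][j] * xs[j] for j in range(n)) for i in range(n)]
    return [mixed[i] - lr * grads[i](xs[i]) for i in range(n)]

# Fully connected network of 3 agents; W is doubly stochastic (assumed)
W = [[0.5, 0.25, 0.25],
     [0.25, 0.5, 0.25],
     [0.25, 0.25, 0.5]]
targets = [0.0, 3.0, 6.0]                  # agent i minimizes 0.5*(x - t_i)^2
grads = [lambda x, t=t: x - t for t in targets]
xs = [0.0, 0.0, 0.0]
for _ in range(500):
    xs = dgd_step(xs, grads, W, lr=0.05)
# With a constant step size, agents settle within O(lr) of the
# global minimizer (the mean of the targets, 3.0), not exactly on it.
```

That residual consensus error under a constant step size is exactly the "inexact" regime that accelerated methods like the one above aim to improve.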
Privacy-Preserving Distributed Optimization and Learning
We first discuss cryptography, differential privacy, and other techniques that can be used for privacy preservation and indicate their pros and cons for privacy protection in distributed optimization and learning.
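The differential-privacy ingredient mentioned above can be illustrated with a minimal gradient-perturbation sketch (clip-then-add-Gaussian-noise; the clipping bound and noise scale are assumed hyperparameters, and a full DP accounting is omitted):

```python
import random

# Sketch of differentially private gradient sharing: clip the gradient to
# bound its sensitivity, then add Gaussian noise before sending it.
def privatize(grad, clip=1.0, sigma=0.5, rng=None):
    rng = rng or random.Random(0)
    scale = min(1.0, clip / max(abs(grad), 1e-12))  # clip magnitude to <= clip
    return grad * scale + rng.gauss(0.0, sigma)     # add calibrated noise
```

The trade-off the survey discusses is visible even here: larger `sigma` strengthens privacy but degrades optimization accuracy, whereas cryptographic approaches avoid that accuracy loss at higher computation and communication cost.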
Distributed Momentum Methods Under Biased Gradient Estimations
In this work, we establish non-asymptotic convergence bounds on distributed momentum methods under biased gradient estimation on both general non-convex and $\mu$-PL non-convex problems.
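A minimal sketch of the distributed momentum (heavy-ball style) update being analyzed, with assumed toy worker losses: the server averages the workers' gradient estimates, which may be biased in practice (e.g. due to compression or clipping), into a momentum buffer.

```python
# Sketch of a distributed momentum step: average worker gradient estimates,
# accumulate them in a momentum buffer, and update the model.
def momentum_step(w, m, worker_grads, lr=0.05, beta=0.9):
    g = sum(worker_grads) / len(worker_grads)  # aggregate worker estimates
    m = beta * m + g                           # momentum buffer
    return w - lr * m, m

w, m = 5.0, 0.0
for _ in range(300):
    grads = [w - 1.0, w - 3.0]                 # two workers, targets 1 and 3
    w, m = momentum_step(w, m, grads)          # converges toward w = 2
```

The paper's question is what happens when the `worker_grads` entries are systematically biased rather than unbiased noise; the bounds quantify how much bias momentum can tolerate.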
TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data
In this paper, we propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
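The two named ingredients can be sketched directly (a simplified illustration, not the paper's full mechanism): a ternary compressor maps each gradient coordinate to {-1, 0, +1}, and the server aggregates by coordinate-wise majority vote.

```python
# Sketch of a ternary compressor plus majority-vote aggregation.
def ternarize(grad, threshold=0.1):
    # Each coordinate becomes -1, 0, or +1 -- about 1.6 bits per coordinate
    return [0 if abs(g) < threshold else (1 if g > 0 else -1) for g in grad]

def majority_vote(compressed):
    # Server takes the sign of the coordinate-wise sum of worker messages
    n_dims = len(compressed[0])
    totals = [sum(c[d] for c in compressed) for d in range(n_dims)]
    return [0 if t == 0 else (1 if t > 0 else -1) for t in totals]

votes = majority_vote([ternarize([0.5, -0.3, 0.01]),
                       ternarize([0.2, -0.8, -0.4]),
                       ternarize([-0.6, -0.05, 0.3])])
# A single corrupted worker cannot flip a coordinate where the others agree,
# which is the intuition behind the Byzantine resilience claim.
```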
A Survey of Resilient Coordination for Cyber-Physical Systems Against Malicious Attacks
Furthermore, miscellaneous resilient coordination problems are discussed in this survey.
Improving the Worst-Case Bidirectional Communication Complexity for Nonconvex Distributed Optimization under Function Similarity
We introduce M3, a method combining MARINA-P with uplink compression and a momentum step, achieving bidirectional compression with provable improvements in total communication complexity as the number of workers increases.
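The uplink-compression ingredient can be illustrated with a generic top-k sparsifier (an assumed stand-in, not the specific MARINA-P or M3 operator): each worker uploads only its k largest-magnitude gradient coordinates.

```python
# Sketch of a generic uplink compressor: top-k sparsification sends only
# the k largest-magnitude coordinates as a sparse index -> value message.
def top_k(grad, k):
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return {i: grad[i] for i in idx}

msg = top_k([0.1, -2.0, 0.3, 1.5], k=2)   # keeps coordinates 1 and 3
```

Bidirectional schemes apply such compressors on both the worker-to-server (uplink) and server-to-worker (downlink) channels, which is where the total communication-complexity accounting in the paper comes from.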
Survey of Distributed Algorithms for Resource Allocation over Multi-Agent Systems
This survey paper provides a comprehensive analysis of distributed algorithms for addressing the distributed resource allocation (DRA) problem over multi-agent systems.
Optimal Data Splitting in Distributed Optimization for Machine Learning
A large amount of research has recently been directed at solving this problem.