Distributed Optimization
77 papers with code • 0 benchmarks • 0 datasets
The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points distributed across many machines, by leveraging the combined computational power of those machines.
Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
Benchmarks
These leaderboards are used to track progress in Distributed Optimization
Libraries
Use these libraries to find Distributed Optimization models and implementations
Latest papers with no code
Estimation Network Design framework for efficient distributed optimization
Distributed decision problems feature a group of agents that can only communicate over a peer-to-peer network, without a central memory.
Rate Analysis of Coupled Distributed Stochastic Approximation for Misspecified Optimization
To address this optimization problem, we propose a coupled distributed stochastic approximation algorithm in which every agent updates its current belief about the unknown parameter and its decision variable via stochastic approximation, and then averages the beliefs and decision variables of its neighbors over the network via a consensus protocol.
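The update pattern described above, local stochastic-approximation step followed by consensus averaging with neighbors, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the problem (agents jointly estimating a scalar parameter and minimizing a quadratic that depends on it), the ring network, and the Metropolis-style weights are all assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n agents estimate a scalar parameter theta* from
# noisy samples and minimize f(x; theta) = 0.5 * (x - theta)^2.
n, T, theta_true = 4, 2000, 3.0

# Doubly stochastic mixing matrix for a ring network.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

theta = np.zeros(n)   # each agent's belief about the unknown parameter
x = np.zeros(n)       # each agent's decision variable

for t in range(1, T + 1):
    step = 1.0 / t
    samples = theta_true + rng.normal(0.0, 1.0, size=n)  # noisy observations
    # Stochastic-approximation update, then consensus averaging over neighbors.
    theta = W @ (theta + step * (samples - theta))
    x = W @ (x - step * (x - theta))

print(theta.round(2), x.round(2))  # both should end up near theta_true = 3.0
```

With diminishing step sizes, the beliefs and decisions of all agents converge toward the true parameter while only neighbor-to-neighbor communication is used.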
Distributed Fractional Bayesian Learning for Adaptive Optimization
This paper considers a distributed adaptive optimization problem in which each agent has access only to its local cost function with a common unknown parameter, and the agents aim to collaboratively estimate the true parameter and find the optimal solution over a connected network.
Federated Optimization with Doubly Regularized Drift Correction
Federated learning is a distributed optimization paradigm that allows training machine learning models across decentralized devices while keeping the data localized.
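The federated paradigm described above (train across devices, keep data local) is commonly illustrated by FedAvg-style rounds: clients run local SGD on their own shards and a server averages the resulting models. The sketch below shows that baseline pattern, not the paper's doubly regularized drift-correction method; the linear-regression task and all constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal FedAvg-style round structure: clients fit a shared linear
# model w on local data; raw data never leaves a client.
d, n_clients, rounds, local_steps, lr = 3, 5, 50, 5, 0.1
w_true = np.array([1.0, -2.0, 0.5])

# Each client holds its own (X, y) shard.
shards = []
for _ in range(n_clients):
    X = rng.normal(size=(40, d))
    y = X @ w_true + rng.normal(0.0, 0.01, size=40)
    shards.append((X, y))

w = np.zeros(d)  # global model kept by the server
for _ in range(rounds):
    local_models = []
    for X, y in shards:
        w_i = w.copy()
        for _ in range(local_steps):              # local gradient steps
            grad = X.T @ (X @ w_i - y) / len(y)
            w_i -= lr * grad
        local_models.append(w_i)
    w = np.mean(local_models, axis=0)             # server averages models

print(np.round(w, 2))  # close to w_true
```

Methods like drift correction address the bias that local steps introduce when client data distributions differ; with the IID shards assumed here, plain averaging already converges.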
Analysis of Distributed Optimization Algorithms on a Real Processing-In-Memory System
Processor-centric architectures (e.g., CPU, GPU) commonly used for modern ML training workloads are limited by the data movement bottleneck, i.e., the cost of repeatedly accessing the training dataset.
Generalized Gradient Descent is a Hypergraph Functor
In this paper, we show that generalized gradient descent with respect to a given CRDC induces a hypergraph functor from a hypergraph category of optimization problems to a hypergraph category of dynamical systems.
Distributed Maximum Consensus over Noisy Links
We introduce a distributed algorithm, termed noise-robust distributed maximum consensus (RD-MC), for estimating the maximum value within a multi-agent network in the presence of noisy communication links.
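The underlying max-consensus iteration is simple: each node repeatedly replaces its estimate with the maximum over itself and its neighbors, and on a connected graph every node reaches the network-wide maximum within a number of rounds equal to the graph diameter. The sketch below shows this plain noiseless version (the ring topology and values are assumptions); robustness to noisy links is the contribution of the RD-MC method described above, not what this sketch implements.

```python
import numpy as np

# Plain max-consensus over a 5-node ring with noiseless links.
values = np.array([3.0, 7.0, 1.0, 5.0, 2.0])
neighbors = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}

est = values.copy()
for _ in range(len(values)):  # >= graph diameter rounds suffice
    est = np.array([max(est[i], *(est[j] for j in neighbors[i]))
                    for i in range(len(values))])

print(est)  # every node holds 7.0, the network-wide maximum
```

With additive noise on each link, naively taking maxima accumulates positive noise over iterations, which is why a noise-robust estimator is needed in the noisy setting.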
Network-Aware Value Stacking of Community Battery via Asynchronous Distributed Optimization
Community battery systems have been widely deployed to provide services to the grid.
Quantization Avoids Saddle Points in Distributed Optimization
More specifically, we propose a stochastic quantization scheme and prove that it can effectively escape saddle points and ensure convergence to a second-order stationary point in distributed nonconvex optimization.
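A standard building block for such schemes is unbiased stochastic rounding: a value is rounded up to the next grid point with probability equal to its fractional position, so the quantizer is unbiased and injects zero-mean rounding noise. The sketch below shows this generic quantizer as an illustration; it is an assumption-level stand-in, not the paper's exact scheme, and `stochastic_quantize` and `delta` are hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(2)

def stochastic_quantize(x, delta=0.1):
    """Unbiased stochastic rounding to a grid of spacing delta.

    Rounds up with probability equal to the fractional position,
    so E[Q(x)] = x; the zero-mean rounding noise is the kind of
    perturbation that can help iterates escape saddle points.
    """
    scaled = np.asarray(x) / delta
    low = np.floor(scaled)
    p_up = scaled - low                       # fractional part in [0, 1)
    up = rng.random(np.shape(scaled)) < p_up  # round up w.p. p_up
    return (low + up) * delta

x = np.full(100_000, 0.3333)
q = stochastic_quantize(x)
print(q.mean())  # close to 0.3333, confirming unbiasedness
```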
Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction
These methods replace the outer loop with probabilistic gradient computation triggered by a coin flip in each iteration, ensuring simpler proofs, efficient hyperparameter selection, and sharp convergence guarantees.
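The coin-flip mechanism described above is the loopless-SVRG idea: instead of a fixed-length outer loop, each iteration refreshes the full-gradient snapshot with some small probability. The sketch below shows this in the plain Euclidean setting, whereas the paper works on Riemannian manifolds; the least-squares problem and all constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Loopless SVRG on a least-squares problem: a coin flip with
# probability p replaces the outer loop of standard SVRG.
n, d = 200, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(w, i):                 # gradient of one least-squares term
    return A[i] * (A[i] @ w - b[i])

def full_grad(w):
    return A.T @ (A @ w - b) / n

w = np.zeros(d)
snapshot, snap_grad = w.copy(), full_grad(w)
lr, p = 0.01, 1.0 / n

for _ in range(20_000):
    i = rng.integers(n)
    # Variance-reduced gradient estimate (unbiased for full_grad(w)).
    g = grad_i(w, i) - grad_i(snapshot, i) + snap_grad
    w -= lr * g
    if rng.random() < p:          # coin flip triggers a snapshot refresh
        snapshot, snap_grad = w.copy(), full_grad(w)

w_star = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(w - w_star))  # small residual
```

The expected cost per iteration matches SVRG (a full gradient every ~1/p steps on average), but the single-loop structure simplifies both the analysis and hyperparameter choice, which is the property the paper carries over to the Riemannian setting.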