Distributed Optimization

77 papers with code • 0 benchmarks • 0 datasets

The goal of Distributed Optimization is to optimize an objective defined over millions or billions of data points that are distributed across many machines, by exploiting the combined computational power of those machines.

Source: Analysis of Distributed Stochastic Dual Coordinate Ascent
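
For intuition, the sketch below (illustrative only, not from the cited source) shows the basic pattern: the data is split into shards, each machine computes a gradient on its own shard, and the averaged gradients drive a shared model update. The least-squares objective and the `local_gradient` helper are assumptions made for the example.

```python
# Minimal sketch of data-parallel gradient averaging (illustrative only).
# Each "machine" holds a shard of the data; a shared model is updated
# with the average of the local gradients.
import numpy as np

def local_gradient(w, X, y):
    # Gradient of the least-squares loss 0.5 * ||X w - y||^2 / n on one shard.
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + 0.1 * rng.normal(size=10_000)

shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))  # 4 "machines"
w = np.zeros(5)
for _ in range(200):
    grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]  # computed in parallel in practice
    w -= 0.1 * np.mean(grads, axis=0)                         # aggregate and step
print(w)  # close to the true coefficients
```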


Latest papers with no code

Estimation Network Design framework for efficient distributed optimization

no code yet • 23 Apr 2024

Distributed decision problems feature a group of agents that can only communicate over a peer-to-peer network, without any central memory.
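
As a toy illustration of what peer-to-peer operation without central memory looks like, the following sketch runs consensus averaging on a ring of agents; the ring topology and mixing weights are assumptions made for the example, not the Estimation Network Design framework itself.

```python
# Minimal sketch of peer-to-peer consensus averaging on a ring: each agent
# keeps a local value and repeatedly mixes it with its two neighbours, so all
# agents converge to the network-wide average without any central memory.
import numpy as np

n_agents = 8
x = np.arange(n_agents, dtype=float)      # each agent's private initial value
W = np.zeros((n_agents, n_agents))        # doubly stochastic mixing matrix
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

for _ in range(200):
    x = W @ x                             # one round of neighbour-to-neighbour exchange

print(x)           # every entry is close to the network-wide average
print(x.mean())    # 3.5, preserved at every round
```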

Rate Analysis of Coupled Distributed Stochastic Approximation for Misspecified Optimization

no code yet • 21 Apr 2024

To address this optimization problem, we propose a coupled distributed stochastic approximation algorithm in which every agent updates its current beliefs about the unknown parameter and its decision variable via a stochastic approximation method, and then averages the beliefs and decision variables of its neighbors over the network using a consensus protocol.
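
The following toy sketch mimics that structure under simplifying assumptions (scalar quadratic local costs, a ring network, invented offsets `b`); it is not the paper's algorithm, but it shows the alternation between local stochastic-approximation updates and consensus averaging.

```python
# Minimal sketch of a coupled distributed stochastic-approximation loop: each
# agent refines its belief about an unknown parameter theta from noisy
# observations, takes a stochastic-gradient step on its local cost evaluated
# at that belief, then mixes both quantities with its neighbours via consensus.
import numpy as np

rng = np.random.default_rng(1)
n = 6
theta_true = 2.0
b = rng.normal(size=n)                   # local costs f_i(x) = 0.5 * (x - theta - b_i)^2
W = np.zeros((n, n))                     # doubly stochastic ring mixing matrix
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25

theta = np.zeros(n)                      # per-agent parameter beliefs
x = np.zeros(n)                          # per-agent decision variables
for t in range(1, 3001):
    a = 1.0 / t                                      # diminishing step size
    obs = theta_true + 0.5 * rng.normal(size=n)      # noisy local observations of theta
    theta += a * (obs - theta)                       # stochastic-approximation belief update
    x -= a * (x - theta - b)                         # SG step on local cost at current belief
    theta, x = W @ theta, W @ x                      # consensus averaging with neighbours

print(x)  # all agents end up near theta_true + b.mean(), the network-wide optimum
```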

Distributed Fractional Bayesian Learning for Adaptive Optimization

no code yet • 17 Apr 2024

This paper considers a distributed adaptive optimization problem in which all agents have access only to their local cost functions, which share a common unknown parameter, and aim to collaboratively estimate the true parameter and find the optimal solution over a connected network.

Federated Optimization with Doubly Regularized Drift Correction

no code yet • 12 Apr 2024

Federated learning is a distributed optimization paradigm that allows training machine learning models across decentralized devices while keeping the data localized.
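For background, here is a minimal sketch of plain federated averaging (local SGD with periodic server-side model averaging); the quadratic client losses and all constants are illustrative, and the paper's doubly regularized drift correction is not modelled.

```python
# Minimal sketch of federated averaging: clients run local SGD on private data,
# the server averages the returned models. Drift correction is not included.
import numpy as np

rng = np.random.default_rng(2)
n_clients, dim = 5, 3
targets = rng.normal(size=(n_clients, dim))   # client i's local loss: 0.5 * ||w - targets[i]||^2

w_global = np.zeros(dim)
for _ in range(50):                            # communication rounds
    client_models = []
    for i in range(n_clients):
        w = w_global.copy()
        for _ in range(10):                    # local SGD steps; data never leaves the client
            grad = w - targets[i] + 0.1 * rng.normal(size=dim)
            w -= 0.1 * grad
        client_models.append(w)
    w_global = np.mean(client_models, axis=0)  # server averages the returned models

print(w_global)            # close to targets.mean(axis=0), the global optimum
print(targets.mean(axis=0))
```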

Analysis of Distributed Optimization Algorithms on a Real Processing-In-Memory System

no code yet • 10 Apr 2024

Processor-centric architectures (e.g., CPU, GPU) commonly used for modern ML training workloads are limited by the data movement bottleneck, i.e., by the cost of repeatedly accessing the training dataset.

Generalized Gradient Descent is a Hypergraph Functor

no code yet • 28 Mar 2024

In this paper, we show that generalized gradient descent with respect to a given CRDC induces a hypergraph functor from a hypergraph category of optimization problems to a hypergraph category of dynamical systems.

Distributed Maximum Consensus over Noisy Links

no code yet • 27 Mar 2024

We introduce a distributed algorithm, termed noise-robust distributed maximum consensus (RD-MC), for estimating the maximum value within a multi-agent network in the presence of noisy communication links.
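
As a baseline for comparison, the sketch below runs plain max-consensus over noise-free links on a ring; the topology is an assumption made for the example. With additive link noise, this simple rule systematically overestimates the maximum over time, which is the failure mode that robust variants such as RD-MC target.

```python
# Minimal sketch of plain max-consensus on a ring (noise-free links): every
# node repeatedly replaces its estimate with the largest value seen in its
# neighbourhood, so the network-wide maximum spreads to all nodes.
import numpy as np

rng = np.random.default_rng(3)
n = 10
values = rng.normal(size=n)          # each node's private measurement
est = values.copy()

for _ in range(n):                   # at most n rounds suffice on a ring of n nodes
    neighbours = np.stack([np.roll(est, 1), est, np.roll(est, -1)])
    est = neighbours.max(axis=0)     # keep the max over self and both neighbours

print(est)             # every node now holds the global maximum
print(values.max())
```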

Network-Aware Value Stacking of Community Battery via Asynchronous Distributed Optimization

no code yet • 20 Mar 2024

Community battery systems have been widely deployed to provide services to the grid.

Quantization Avoids Saddle Points in Distributed Optimization

no code yet • 15 Mar 2024

More specifically, we propose a stochastic quantization scheme and prove that it can effectively escape saddle points and ensure convergence to a second-order stationary point in distributed nonconvex optimization.
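
As a toy illustration of what a stochastic quantization scheme is, the sketch below implements unbiased randomized rounding to a fixed grid; the grid spacing and test vector are assumptions, and the paper's specific scheme and its saddle-point analysis are not reproduced here.

```python
# Minimal sketch of an unbiased stochastic (randomized-rounding) quantizer of
# the kind used when agents exchange low-precision values. The injected
# rounding randomness is the kind of perturbation the paper exploits to
# escape saddle points.
import numpy as np

rng = np.random.default_rng(4)

def stochastic_quantize(v, step=0.25):
    # Round each entry to the grid {k * step}, up or down at random, with
    # probabilities chosen so the quantizer is unbiased: E[q(v)] = v.
    low = np.floor(v / step) * step
    p_up = (v - low) / step
    return low + step * (rng.random(v.shape) < p_up)

v = np.array([0.11, -0.37, 0.62])
samples = np.stack([stochastic_quantize(v) for _ in range(20_000)])
print(samples.mean(axis=0))       # close to v: no systematic bias
print(np.round(v / 0.25) * 0.25)  # nearest-grid rounding of the same vector, for comparison
```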

Streamlining in the Riemannian Realm: Efficient Riemannian Optimization with Loopless Variance Reduction

no code yet • 11 Mar 2024

These methods replace the outer loop with probabilistic gradient computation triggered by a coin flip in each iteration, ensuring simpler proofs, efficient hyperparameter selection, and sharp convergence guarantees.
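
A minimal Euclidean sketch of the loopless idea (in the spirit of L-SVRG) is shown below: a coin flip with small probability replaces the outer loop, refreshing the full-gradient reference point at random iterations. The least-squares objective and all constants are assumptions; the Riemannian retraction and transport machinery from the paper is omitted.

```python
# Minimal Euclidean sketch of loopless variance reduction: a coin flip with
# probability p replaces SVRG's outer loop, refreshing the full-gradient
# reference point at random iterations.
import numpy as np

rng = np.random.default_rng(5)
n, dim = 500, 10
A = rng.normal(size=(n, dim))
b = A @ rng.normal(size=dim)

def grad_i(w, i):                     # gradient of the i-th least-squares term
    return A[i] * (A[i] @ w - b[i])

def full_grad(w):
    return A.T @ (A @ w - b) / n

w = np.zeros(dim)
w_ref, g_ref = w.copy(), full_grad(w)          # reference point and its full gradient
lr, p = 0.01, 1.0 / n
for _ in range(20_000):
    i = rng.integers(n)
    g = grad_i(w, i) - grad_i(w_ref, i) + g_ref    # variance-reduced stochastic gradient
    w -= lr * g
    if rng.random() < p:                            # loopless: coin flip instead of an outer loop
        w_ref, g_ref = w.copy(), full_grad(w)

print(np.linalg.norm(full_grad(w)))   # near zero: w is close to the least-squares solution
```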