Search Results for author: Anton Rodomanov

Found 3 papers, 0 papers with code

Federated Optimization with Doubly Regularized Drift Correction

no code implementations • 12 Apr 2024 • Xiaowen Jiang, Anton Rodomanov, Sebastian U. Stich

Federated learning is a distributed optimization paradigm that allows training machine learning models across decentralized devices while keeping the data localized.

Distributed Optimization · Federated Learning
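To make the paradigm described above concrete, here is a minimal sketch of plain federated averaging, i.e. the baseline setting the abstract describes, not the paper's doubly regularized drift-correction method; the quadratic losses, client shards, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, steps=10):
    """A few SGD steps on one client's private least-squares data."""
    for _ in range(steps):
        i = rng.integers(len(y))
        grad = (X[i] @ w - y[i]) * X[i]  # grad of 0.5*(x_i @ w - y_i)**2
        w = w - lr * grad
    return w

# Hypothetical decentralized data: each client keeps its own (X, y) shard.
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(4)]

w = np.zeros(5)
for _ in range(50):                       # communication rounds
    # Each client trains locally; only model parameters leave the device.
    local_models = [local_sgd(w.copy(), X, y) for X, y in clients]
    w = np.mean(local_models, axis=0)     # server averages client models
```

Local steps on heterogeneous data cause the "client drift" that drift-correction schemes such as the one in this paper are designed to control.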

Non-Convex Stochastic Composite Optimization with Polyak Momentum

no code implementations • 5 Mar 2024 • Yuan Gao, Anton Rodomanov, Sebastian U. Stich

In this paper, we focus on the stochastic proximal gradient method with Polyak momentum.
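As a rough illustration of the method named in the abstract, here is a minimal sketch of a stochastic proximal gradient step with Polyak (heavy-ball) momentum on an L1-regularized least-squares problem; the problem data, step size, and momentum parameter are assumptions, and the paper's exact update rule and analysis may differ (e.g. momentum could instead be applied to the gradient estimator).

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 50))
b = rng.normal(size=200)
lam, lr, beta = 0.1, 1e-3, 0.9           # illustrative hyperparameters

x = np.zeros(50)
x_prev = x.copy()
for _ in range(1000):
    i = rng.integers(200)                # sample one data point
    g = (A[i] @ x - b[i]) * A[i]         # stochastic gradient
    # Heavy-ball step: gradient step plus momentum, then the prox.
    y = x - lr * g + beta * (x - x_prev)
    x_prev, x = x, soft_threshold(y, lr * lam)
```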

Polynomial Preconditioning for Gradient Methods

no code implementations • 30 Jan 2023 • Nikita Doikov, Anton Rodomanov

We study first-order methods with preconditioning for solving structured nonlinear convex optimization problems.
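To illustrate the general idea (a sketch only; the paper's specific polynomial construction and guarantees are not reproduced here), one can precondition gradient descent on a quadratic with a low-degree polynomial in the Hessian that approximates its inverse, for instance a truncated Neumann series.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(50, 50))
A = M @ M.T + 0.1 * np.eye(50)           # ill-conditioned PSD Hessian
b = rng.normal(size=50)                  # minimize f(x) = 0.5 x'Ax - b'x

L = np.linalg.eigvalsh(A).max()          # smoothness constant

def poly_precond(g, degree=3):
    """Apply P(A) @ g, where P is a truncated Neumann series for A^{-1}:
    A^{-1} = (1/L) * sum_k (I - A/L)^k, valid since 0 < A/L <= I."""
    term = g / L
    out = term.copy()
    for _ in range(degree):
        term = term - (A @ term) / L     # multiply by (I - A/L)
        out = out + term
    return out

x = np.zeros(50)
for _ in range(200):
    grad = A @ x - b
    x = x - poly_precond(grad)           # preconditioned gradient step
```

Applying the polynomial costs only a few extra matrix-vector products per step, while the effective condition number seen by the gradient method improves with the polynomial degree.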
