Search Results for author: Sarit Khirirat

Found 9 papers, 1 paper with code

Compressed Federated Reinforcement Learning with a Generative Model

no code implementations • 26 Mar 2024 • Ali Beikmohammadi, Sarit Khirirat, Sindri Magnússon

Addressing this challenge, federated reinforcement learning (FedRL) has emerged, wherein agents collaboratively learn a single policy by aggregating local estimations.
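The aggregation idea can be pictured with a minimal sketch, assuming a toy tabular setting with made-up dimensions, dynamics, and hyperparameters; each agent runs a few local Q-learning updates and a server averages the resulting tables. This is an illustration of federated averaging of local estimates, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_agents = 5, 2, 3

def local_q_learning(Q, steps=100, alpha=0.1, gamma=0.9, eps=0.2):
    """A few tabular Q-learning updates on a toy random-transition MDP."""
    s = rng.integers(n_states)
    for _ in range(steps):
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next, r = rng.integers(n_states), rng.random()      # made-up dynamics and reward
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
    return Q

# FedRL-style loop: agents learn locally, a server averages their local estimates.
Q_global = np.zeros((n_states, n_actions))
for _ in range(20):
    local_tables = [local_q_learning(Q_global.copy()) for _ in range(n_agents)]
    Q_global = np.mean(local_tables, axis=0)                  # aggregate local estimations
```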

Q-Learning • Reinforcement Learning

Distributed Momentum Methods Under Biased Gradient Estimations

no code implementations • 29 Feb 2024 • Ali Beikmohammadi, Sarit Khirirat, Sindri Magnússon

In this work, we establish non-asymptotic convergence bounds on distributed momentum methods under biased gradient estimation on both general non-convex and $\mu$-PL non-convex problems.
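A hedged sketch of the setting (not the paper's analysis or method): heavy-ball momentum on a toy quadratic where each worker's gradient estimate is made biased by top-k sparsification; all dimensions and step sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_workers, k = 10, 4, 3
x_star = rng.normal(size=d)            # minimizer of the toy quadratic 0.5 * ||x - x_star||^2
x, m = np.zeros(d), np.zeros(d)

def biased_grad(x):
    """A worker's stochastic gradient, made biased by keeping only its k largest entries."""
    g = x - x_star + 0.01 * rng.normal(size=d)
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

beta, lr = 0.9, 0.1
for _ in range(200):
    g_avg = np.mean([biased_grad(x) for _ in range(n_workers)], axis=0)
    m = beta * m + g_avg               # momentum buffer
    x = x - lr * m
```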

Distributed Optimization • Meta-Learning

On the Convergence of Federated Learning Algorithms without Data Similarity

1 code implementation • 29 Feb 2024 • Ali Beikmohammadi, Sarit Khirirat, Sindri Magnússon

In this paper, we present a novel and unified framework for analyzing the convergence of federated learning algorithms without the need for data similarity conditions.
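For context, a minimal FedAvg-style loop with deliberately heterogeneous clients (no similarity between their objectives) is sketched below; it only illustrates the class of algorithms such an analysis targets, and all problem sizes and step sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_clients, local_steps, lr = 5, 4, 10, 0.01

# Heterogeneous clients: each holds its own least-squares problem, with no similarity assumed.
A = [rng.normal(size=(20, d)) for _ in range(n_clients)]
b = [A_i @ rng.normal(size=d) for A_i in A]

def local_sgd(x, A_i, b_i):
    """Plain local gradient steps on one client's objective."""
    for _ in range(local_steps):
        x = x - lr * A_i.T @ (A_i @ x - b_i) / len(b_i)
    return x

x = np.zeros(d)
for _ in range(50):                                    # communication rounds
    updates = [local_sgd(x.copy(), A_i, b_i) for A_i, b_i in zip(A, b)]
    x = np.mean(updates, axis=0)                       # server averages client iterates
```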

Federated Learning

Clip21: Error Feedback for Gradient Clipping

no code implementations • 30 May 2023 • Sarit Khirirat, Eduard Gorbunov, Samuel Horváth, Rustem Islamov, Fakhri Karray, Peter Richtárik

Motivated by the increasing popularity and importance of large-scale training under differential privacy (DP) constraints, we study distributed gradient methods with gradient clipping, i.e., clipping applied to the gradients computed from local information at the nodes.
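As a rough picture of the baseline being studied (plain per-node clipping, not the Clip21 error-feedback correction itself), each node clips the gradient it computes from local information before the server averages; every constant below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_nodes, tau, lr = 10, 5, 1.0, 0.1
x_star = rng.normal(size=d)
x = np.zeros(d)

def clip(g, tau):
    """Rescale g so that its Euclidean norm is at most tau."""
    return g * min(1.0, tau / (np.linalg.norm(g) + 1e-12))

for _ in range(300):
    # Each node clips the gradient computed from its own local information.
    local_grads = [clip(x - x_star + 0.1 * rng.normal(size=d), tau) for _ in range(n_nodes)]
    x = x - lr * np.mean(local_grads, axis=0)
```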

Balancing Privacy and Performance for Private Federated Learning Algorithms

no code implementations • 11 Apr 2023 • Xiangjian Hou, Sarit Khirirat, Mohammad Yaqub, Samuel Horvath

Our findings reveal a direct correlation between the optimal number of local steps, the number of communication rounds, and a set of variables, e.g., the DP privacy budget and other problem parameters, specifically in the context of strongly convex optimization.
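A generic DP-style federated loop (not the paper's algorithm) makes the local-steps versus communication-rounds trade-off concrete: clipped, Gaussian-noised client updates are averaged each round, and the number of local steps is swept; the noise scale, clipping radius, and toy problem are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
d, n_clients, rounds, lr = 5, 4, 30, 0.05
x_star = rng.normal(size=d)

def private_fedavg(local_steps, clip=1.0, noise_std=0.5):
    """Clipped, noised client updates averaged over a fixed number of rounds."""
    x = np.zeros(d)
    for _ in range(rounds):
        updates = []
        for _ in range(n_clients):
            y = x.copy()
            for _ in range(local_steps):                              # local SGD steps
                y = y - lr * (y - x_star + 0.1 * rng.normal(size=d))
            delta = y - x
            delta = delta * min(1.0, clip / (np.linalg.norm(delta) + 1e-12))
            updates.append(delta + noise_std * rng.normal(size=d))    # Gaussian DP-style noise
        x = x + np.mean(updates, axis=0)
    return np.linalg.norm(x - x_star)

for H in (1, 5, 20):           # more local work per round vs. more noise-affected rounds
    print(H, private_fedavg(H))
```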

Federated Learning

A flexible framework for communication-efficient machine learning: from HPC to IoT

no code implementations • 13 Mar 2020 • Sarit Khirirat, Sindri Magnússon, Arda Aytekin, Mikael Johansson

With the increasing scale of machine learning tasks, it has become essential to reduce the communication between computing nodes.

BIG-bench Machine Learning

Compressed Gradient Methods with Hessian-Aided Error Compensation

no code implementations • 23 Sep 2019 • Sarit Khirirat, Sindri Magnússon, Mikael Johansson

Several gradient compression techniques have been proposed to reduce the communication load at the price of a loss in solution accuracy.
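Standard error feedback (error compensation) can be sketched as follows; this is the classical variant, not the Hessian-aided compensation the paper proposes, and the compressor, dimensions, and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
d, k, lr = 10, 2, 0.1
x_star = rng.normal(size=d)
x, e = np.zeros(d), np.zeros(d)        # e accumulates the compression error

def top_k(v):
    """Lossy compressor: keep only the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

for _ in range(300):
    g = x - x_star                     # exact gradient of 0.5 * ||x - x_star||^2
    c = top_k(g + e)                   # compress the error-compensated gradient
    e = (g + e) - c                    # carry the residual into the next step
    x = x - lr * c
```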

The Convergence of Sparsified Gradient Methods

no code implementations • NeurIPS 2018 • Dan Alistarh, Torsten Hoefler, Mikael Johansson, Sarit Khirirat, Nikola Konstantinov, Cédric Renggli

Distributed training of massive machine learning models, in particular deep neural networks, via Stochastic Gradient Descent (SGD) is becoming commonplace.

Quantization

Distributed learning with compressed gradients

no code implementations • 18 Jun 2018 • Sarit Khirirat, Hamid Reza Feyzmahdavian, Mikael Johansson

Asynchronous computation and gradient compression have emerged as two key techniques for achieving scalability in distributed optimization for large-scale machine learning.
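A minimal example of the kind of gradient compressor involved, assuming a scaled-sign compressor applied by each worker on a toy quadratic; this is illustrative only, not the specific compression operators analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
d, n_workers, lr = 10, 4, 0.05
x_star = rng.normal(size=d)
x = np.zeros(d)

def sign_compress(g):
    """Scaled sign compression: roughly one bit per coordinate plus one scalar."""
    return (np.linalg.norm(g, 1) / g.size) * np.sign(g)

for _ in range(400):
    grads = [sign_compress(x - x_star + 0.1 * rng.normal(size=d)) for _ in range(n_workers)]
    x = x - lr * np.mean(grads, axis=0)
```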

BIG-bench Machine Learning • Distributed Optimization
