no code implementations • 11 Apr 2024 • Yunxiang Li, Rui Yuan, Chen Fan, Mark Schmidt, Samuel Horváth, Robert M. Gower, Martin Takáč
Policy gradient is a widely utilized and foundational algorithm in the field of reinforcement learning (RL).
no code implementations • 27 Mar 2024 • Yunxiang Li, Nicolas Mauricio Cuadrado, Samuel Horváth, Martin Takáč
The smart grid domain requires bolstering the capabilities of existing energy management systems. Federated Learning (FL) aligns with this goal: it can train models on heterogeneous datasets while preserving data privacy, making it well suited to smart grid applications, which often involve disparate data distributions and interdependencies among features that limit the suitability of linear models.
no code implementations • 8 Feb 2024 • Mohammed Aljahdali, Ahmed M. Abdelmoniem, Marco Canini, Samuel Horváth
In Federated Learning (FL), forgetting, or the loss of knowledge across rounds, hampers algorithm convergence, particularly in the presence of severe data heterogeneity among clients.
no code implementations • 7 Feb 2024 • Nazarii Tupitsa, Samuel Horváth, Martin Takáč, Eduard Gorbunov
In Federated Learning (FL), the distributed nature and heterogeneity of client data present both opportunities and challenges.
no code implementations • 18 Dec 2023 • Nikita Kotelevskii, Samuel Horváth, Karthik Nandakumar, Martin Takáč, Maxim Panov
This paper presents a new approach to federated learning that allows selecting a model from global and personalized ones that would perform better for a particular input point.
no code implementations • 23 Nov 2023 • Grigory Malinovsky, Peter Richtárik, Samuel Horváth, Eduard Gorbunov
Distributed learning has emerged as a leading paradigm for training large machine learning models.
no code implementations • 3 Oct 2023 • Eduard Gorbunov, Abdurakhmon Sadiev, Marina Danilova, Samuel Horváth, Gauthier Gidel, Pavel Dvurechensky, Alexander Gasnikov, Peter Richtárik
High-probability analysis of stochastic first-order optimization methods under mild assumptions on the noise has been gaining a lot of attention in recent years.
no code implementations • 30 May 2023 • Sarit Khirirat, Eduard Gorbunov, Samuel Horváth, Rustem Islamov, Fakhri Karray, Peter Richtárik
Motivated by the increasing popularity and importance of large-scale training under differential privacy (DP) constraints, we study distributed gradient methods with gradient clipping, i.e., clipping applied to the gradients computed from local information at the nodes.
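The clipping the abstract refers to can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the names `clip` and `clipped_aggregate` are hypothetical, and the aggregation shown is a plain average of per-node clipped gradients.

```python
import numpy as np

def clip(g, c):
    """Norm clipping: return g unchanged if ||g|| <= c, else rescale to norm c."""
    norm = np.linalg.norm(g)
    return g if norm <= c else g * (c / norm)

def clipped_aggregate(local_grads, c):
    """Server-side average of per-node clipped local gradients."""
    return np.mean([clip(g, c) for g in local_grads], axis=0)
```

Clipping each node's gradient before aggregation bounds any single node's contribution, which is what makes it a natural building block for DP noise addition.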
no code implementations • 29 May 2023 • Jihao Xin, Marco Canini, Peter Richtárik, Samuel Horváth
To obtain theoretical guarantees, we generalize the notion of standard unbiased compression operators to incorporate Global-QSGD.
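For context, the standard unbiased compression operators that this entry generalizes can be sketched as QSGD-style stochastic quantization. The sketch below is illustrative only (one norm level, function name `rand_round` is hypothetical); the key property is that the output equals the input in expectation.

```python
import numpy as np

def rand_round(x, rng=None):
    """Unbiased stochastic quantization (QSGD-style, single norm level).
    Each coordinate is rounded to 0 or sign(x_i) * ||x|| with probability
    |x_i| / ||x||, so E[output] = x."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return np.zeros_like(x)
    p = np.abs(x) / norm                       # per-coordinate round-up probability
    xi = (rng.random(x.shape) < p).astype(x.dtype)
    return norm * np.sign(x) * xi
```

Unbiasedness follows coordinate-wise: E[norm * sign(x_i) * xi_i] = norm * sign(x_i) * |x_i| / norm = x_i.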
no code implementations • 29 May 2023 • Konstantin Mishchenko, Rustem Islamov, Eduard Gorbunov, Samuel Horváth
We present a partially personalized formulation of Federated Learning (FL) that strikes a balance between the flexibility of personalization and cooperativeness of global training.
no code implementations • 7 Feb 2023 • Grigory Malinovsky, Samuel Horváth, Konstantin Burlachenko, Peter Richtárik
Under this scheme, each client joins the learning process every $R$ communication rounds, which we refer to as a meta epoch.
no code implementations • 2 Feb 2023 • Abdurakhmon Sadiev, Marina Danilova, Eduard Gorbunov, Samuel Horváth, Gauthier Gidel, Pavel Dvurechensky, Alexander Gasnikov, Peter Richtárik
During recent years the interest of optimization and machine learning communities in high-probability convergence of stochastic optimization methods has been growing.
no code implementations • 7 Dec 2022 • Abdulla Jasem Almansoori, Samuel Horváth, Martin Takáč
Federated learning has become a popular machine learning paradigm with many potential real-life applications, including recommendation systems, the Internet of Things (IoT), healthcare, and self-driving cars.
no code implementations • 10 Aug 2022 • Samuel Horváth, Konstantin Mishchenko, Peter Richtárik
In this work, we propose new adaptive step size strategies that improve several stochastic gradient methods.
no code implementations • 1 Jul 2022 • Samuel Horváth
Federated learning (FL) is an emerging machine learning paradigm involving multiple clients, e.g., mobile phone devices, with an incentive to collaborate in solving a machine learning problem coordinated by a central server.
1 code implementation • 1 Jun 2022 • Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel
However, many fruitful directions, such as the usage of variance reduction for achieving robustness and communication compression for reducing communication costs, remain weakly explored in the field.
no code implementations • 27 Apr 2022 • Samuel Horváth, Maziar Sanjabi, Lin Xiao, Peter Richtárik, Michael Rabbat
The practice of applying several local updates before aggregation across clients has been empirically shown to be a successful approach to overcoming the communication bottleneck in Federated Learning (FL).
2 code implementations • 7 Feb 2022 • Konstantin Burlachenko, Samuel Horváth, Peter Richtárik
Our system supports abstractions that provide researchers with a sufficient level of flexibility to experiment with existing and novel approaches to advance the state-of-the-art.
no code implementations • 22 Nov 2021 • Elnur Gasanov, Ahmed Khaled, Samuel Horváth, Peter Richtárik
A persistent problem in federated learning is that it is not clear what the optimization objective should be: the standard average risk minimization of supervised learning is inadequate in handling several major constraints specific to federated learning, such as communication adaptivity and personalization control.
no code implementations • 25 Feb 2021 • Samuel Horváth, Aaron Klein, Peter Richtárik, Cédric Archambeau
Bayesian optimization (BO) is a sample efficient approach to automatically tune the hyperparameters of machine learning models.
no code implementations • NeurIPS 2021 • Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik
Our first contribution is establishing the first lower bounds for this formulation, for both the communication complexity and the local oracle complexity.
1 code implementation • ICLR 2021 • Samuel Horváth, Peter Richtárik
EF remains the only known technique that can deal with the error induced by contractive compressors which are not unbiased, such as Top-$K$.
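The mechanism named here can be sketched briefly: Top-$K$ keeps only the $k$ largest-magnitude coordinates (a biased, contractive compressor), and error feedback (EF) carries the discarded residual into the next round. This is a generic one-step illustration under assumed names (`top_k`, `ef_step`), not the paper's algorithm.

```python
import numpy as np

def top_k(x, k):
    """Contractive Top-K compressor: zero out all but the k largest-magnitude entries."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def ef_step(grad, error, k, lr):
    """One error-feedback step: compress the error-corrected gradient,
    return the model update and the new residual to carry forward."""
    corrected = grad + error
    compressed = top_k(corrected, k)
    new_error = corrected - compressed     # what the compressor dropped
    return -lr * compressed, new_error
```

Because the residual `new_error` is re-injected in the next step, no gradient information is permanently lost despite the compressor's bias.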
no code implementations • 27 Feb 2020 • Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, Mher Safaryan
In the last few years, various communication compression techniques have emerged as an indispensable tool helping to alleviate the communication bottleneck in distributed learning.
1 code implementation • 13 Feb 2020 • Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan
Adaptivity is an important yet under-studied property in modern optimization theory.
no code implementations • 25 Sep 2019 • Sélim Chraibi, Adil Salim, Samuel Horváth, Filip Hanzely, Peter Richtárik
Preconditioning a minimization algorithm improves its convergence and can lead to a minimizer in one iteration in some extreme cases.