no code implementations • 1 May 2024 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot
It has been argued that the seemingly weaker threat model, in which only the workers' local datasets are poisoned, is more reasonable.
no code implementations • 20 Feb 2024 • Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Geovani Rizk, Sasha Voitovych
The natural approach to robustify FL against adversarial clients is to replace the simple averaging operation at the server in the standard $\mathsf{FedAvg}$ algorithm by a \emph{robust averaging rule}.
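As an illustration of what such a robust averaging rule can look like (this is a generic sketch, not necessarily the specific rule studied in the paper), the coordinate-wise trimmed mean discards the most extreme client values in each coordinate before averaging, which bounds the influence of up to `f` adversarial clients:

```python
import numpy as np

def coordinate_wise_trimmed_mean(updates, f):
    """Robustly aggregate client updates: in each coordinate, drop the f
    largest and f smallest values, then average what remains.

    updates: array-like of shape (n_clients, dim)
    f: number of adversarial clients to tolerate (requires n_clients > 2f)
    """
    updates = np.asarray(updates, dtype=float)
    sorted_updates = np.sort(updates, axis=0)       # sort each coordinate independently
    trimmed = sorted_updates[f:len(updates) - f]    # discard the extremes
    return trimmed.mean(axis=0)
```

With honest updates near 1.0 and one client sending 100.0, plain averaging is dragged far off, while the trimmed mean with `f = 1` stays close to the honest values.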
1 code implementation • NeurIPS 2023 • Martijn de Vos, Sadegh Farhadkhani, Rachid Guerraoui, Anne-Marie Kermarrec, Rafael Pires, Rishi Sharma
We present Epidemic Learning (EL), a simple yet powerful decentralized learning (DL) algorithm that leverages changing communication topologies to achieve faster model convergence compared to conventional DL approaches.
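The core mechanism — a communication topology that changes every round — can be sketched as a gossip step in which each node pushes its model to a fresh random sample of peers and every node averages what it receives. This is a hypothetical simplification (scalar models, no local SGD steps), not the paper's exact protocol:

```python
import random

def el_round(models, s, rng):
    """One simplified Epidemic Learning round: every node sends its model to
    s peers drawn uniformly at random (a new topology each round), then each
    node averages its own model with the models it received.
    Local training steps, which EL interleaves with gossip, are omitted.
    """
    n = len(models)
    inbox = {i: [models[i]] for i in range(n)}      # each node keeps its own model
    for i in range(n):
        peers = rng.sample([k for k in range(n) if k != i], s)
        for j in peers:                              # push model i to s random peers
            inbox[j].append(models[i])
    return [sum(received) / len(received) for received in inbox.values()]
```

Because the peer sample is redrawn every round, information spreads epidemically rather than along a fixed graph.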
no code implementations • 3 Feb 2023 • Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan
Byzantine machine learning (ML) aims to ensure the resilience of distributed learning algorithms to misbehaving (or Byzantine) machines.
no code implementations • 30 Sep 2022 • El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, Sébastien Rouault, John Stephan
Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase impressive performance.
1 code implementation • 22 Sep 2022 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê Nguyên Hoang, Rafael Pinot, John Stephan
We present MoNNA, a new algorithm that (a) is provably robust under standard assumptions and (b) has a gradient computation overhead that is linear in the fraction of faulty machines, which is conjectured to be tight.
no code implementations • 24 May 2022 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan
We present \emph{RESAM (RESilient Averaging of Momentums)}, a unified framework that makes it simple to establish optimal Byzantine resilience, relying only on standard machine learning assumptions.
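The idea of resilient averaging of momentums can be sketched as follows (an illustrative outline under assumed details, not the paper's exact pseudocode): each worker maintains a Polyak momentum of its stochastic gradients, and the server applies a robust averaging rule to the momentums rather than to the raw gradients. The `robust_agg` callback below (e.g. a coordinate-wise median) stands in for whichever resilient rule is plugged into the framework:

```python
import numpy as np

def resam_step(params, momentums, grads, robust_agg, beta=0.9, lr=0.1):
    """One illustrative distributed step: workers update local momentums,
    the server robustly averages the momentums and takes a gradient step.

    params:    current model parameters (np.ndarray)
    momentums: list of per-worker momentum vectors
    grads:     list of per-worker stochastic gradients for this step
    robust_agg: robust averaging rule applied to the stacked momentums
    """
    new_momentums = [beta * m + (1 - beta) * g for m, g in zip(momentums, grads)]
    direction = robust_agg(np.stack(new_momentums))   # e.g. coordinate-wise median
    return params - lr * direction, new_momentums
```

Averaging momentums rather than raw gradients smooths out per-step stochastic noise, which is what makes the robust rule's job tractable under standard assumptions.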
1 code implementation • 17 Feb 2022 • Sadegh Farhadkhani, Rachid Guerraoui, Lê-Nguyên Hoang, Oscar Villemaud
More specifically, we prove that every gradient attack can be reduced to data poisoning, in any personalized federated learning system with PAC guarantees (which we show are both desirable and realistic).
1 code implementation • 4 Jun 2021 • Sadegh Farhadkhani, Rachid Guerraoui, Lê-Nguyên Hoang
We prove in this paper that, perhaps surprisingly, the incentive to misreport data is not inevitable.
no code implementations • NeurIPS 2021 • El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Arsany Guirguis, Lê Nguyên Hoang, Sébastien Rouault
We study Byzantine collaborative learning, where $n$ nodes seek to collectively learn from each other's local data.