Search Results for author: Sadegh Farhadkhani

Found 10 papers, 4 papers with code

On the Relevance of Byzantine Robust Optimization Against Data Poisoning

no code implementations · 1 May 2024 · Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot

It has been argued that the seemingly weaker threat model where only workers' local datasets get poisoned is more reasonable.

Autonomous Driving · Data Poisoning

Tackling Byzantine Clients in Federated Learning

no code implementations · 20 Feb 2024 · Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Geovani Rizk, Sasha Voitovych

The natural approach to robustify FL against adversarial clients is to replace the simple averaging operation at the server in the standard $\mathsf{FedAvg}$ algorithm by a \emph{robust averaging rule}.
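To illustrate the idea in that snippet, here is a minimal sketch of replacing FedAvg's plain mean with one possible robust averaging rule, the coordinate-wise median (our choice for illustration; the paper analyzes a family of such rules, not this one specifically):

```python
import numpy as np

def fedavg_aggregate(updates):
    # Standard FedAvg server step: plain coordinate-wise mean of client updates.
    return np.mean(updates, axis=0)

def robust_aggregate(updates):
    # One possible robust averaging rule: the coordinate-wise median,
    # which a minority of Byzantine clients cannot drag arbitrarily far.
    return np.median(updates, axis=0)

# Toy example: 4 honest clients near the true update, 1 Byzantine outlier.
updates = np.array([
    [1.0, 1.0],
    [1.1, 0.9],
    [0.9, 1.1],
    [1.0, 1.0],
    [100.0, -100.0],  # Byzantine client
])
print(fedavg_aggregate(updates))  # pulled far off by the single outlier
print(robust_aggregate(updates))  # stays close to the honest updates
```

A single adversarial client can move the mean arbitrarily far, while the median remains within the range of the honest values.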

Federated Learning · Image Classification

Epidemic Learning: Boosting Decentralized Learning with Randomized Communication

1 code implementation · NeurIPS 2023 · Martijn de Vos, Sadegh Farhadkhani, Rachid Guerraoui, Anne-Marie Kermarrec, Rafael Pires, Rishi Sharma

We present Epidemic Learning (EL), a simple yet powerful decentralized learning (DL) algorithm that leverages changing communication topologies to achieve faster model convergence compared to conventional DL approaches.
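The changing-topology idea can be sketched as a gossip round in which every node sends its model to a fresh random sample of peers, so the communication graph differs every round (an illustrative sketch under our own simplifications, not the paper's exact EL variants):

```python
import random

def el_round(models, s, rng):
    # One round of EL-style randomized communication: every node sends its
    # current model to s peers sampled uniformly at random, then each node
    # averages whatever it received together with its own model. The set of
    # edges is redrawn from scratch every round.
    n = len(models)
    inbox = [[m] for m in models]  # each node keeps its own model
    for i in range(n):
        for j in rng.sample([k for k in range(n) if k != i], s):
            inbox[j].append(models[i])
    return [sum(msgs) / len(msgs) for msgs in inbox]

rng = random.Random(0)
models = [0.0, 1.0, 2.0, 3.0]  # scalar "models" for illustration
for _ in range(30):
    models = el_round(models, s=2, rng=rng)
# the nodes contract toward a common value near the initial average
```

Each round is a convex combination of the previous values, so the spread between nodes shrinks as the random topologies mix information across the network.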

Fixing by Mixing: A Recipe for Optimal Byzantine ML under Heterogeneity

no code implementations · 3 Feb 2023 · Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

Byzantine machine learning (ML) aims to ensure the resilience of distributed learning algorithms to misbehaving (or Byzantine) machines.

On the Impossible Safety of Large AI Models

no code implementations · 30 Sep 2022 · El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, Sébastien Rouault, John Stephan

Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase impressive performance.

Privacy Preserving

Robust Collaborative Learning with Linear Gradient Overhead

1 code implementation · 22 Sep 2022 · Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê Nguyên Hoang, Rafael Pinot, John Stephan

We present MoNNA, a new algorithm that (a) is provably robust under standard assumptions and (b) has a gradient computation overhead that is linear in the fraction of faulty machines, which is conjectured to be tight.

Image Classification

Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums

no code implementations · 24 May 2022 · Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

We present \emph{RESAM (RESilient Averaging of Momentums)}, a unified framework that makes it simple to establish optimal Byzantine resilience, relying only on standard machine learning assumptions.
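The core pattern behind resilient averaging of momentums can be sketched as follows: each worker maintains a momentum of its own stochastic gradients, and the server robustly aggregates the momentum vectors rather than raw gradients (a sketch under our own assumptions; the aggregation rule and constants here are illustrative, not the paper's):

```python
import numpy as np

def resilient_momentum_step(momenta, grads, beta=0.9):
    # RESAM-style pattern: update each worker's Polyak momentum of its
    # stochastic gradients, then aggregate the momentum vectors with a
    # robust rule (here, the coordinate-wise median as an example).
    momenta = [beta * m + (1 - beta) * g for m, g in zip(momenta, grads)]
    aggregate = np.median(np.stack(momenta), axis=0)
    return momenta, aggregate

# Toy run: 3 honest workers and 1 Byzantine worker sending huge gradients.
momenta = [np.zeros(2) for _ in range(4)]
for _ in range(5):
    grads = [np.array([1.0, -1.0])] * 3 + [np.array([1e6, 1e6])]
    momenta, step = resilient_momentum_step(momenta, grads)
# the aggregated step tracks the honest direction despite the attacker
```

Momentum smooths the per-worker noise before aggregation, which is what lets a robust rule separate honest workers from Byzantine ones under only standard assumptions.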

BIG-bench Machine Learning · Distributed Optimization

An Equivalence Between Data Poisoning and Byzantine Gradient Attacks

1 code implementation · 17 Feb 2022 · Sadegh Farhadkhani, Rachid Guerraoui, Lê-Nguyên Hoang, Oscar Villemaud

More specifically, we prove that every gradient attack can be reduced to data poisoning, in any personalized federated learning system with PAC guarantees (which we show are both desirable and realistic).

Data Poisoning · Personalized Federated Learning

Strategyproof Learning: Building Trustworthy User-Generated Datasets

1 code implementation · 4 Jun 2021 · Sadegh Farhadkhani, Rachid Guerraoui, Lê-Nguyên Hoang

We prove in this paper that, perhaps surprisingly, incentivized data misreporting is not inevitable.

Fairness
