Search Results for author: John Stephan

Found 9 papers, 2 papers with code

Robustness, Efficiency, or Privacy: Pick Two in Machine Learning

no code implementations · 22 Dec 2023 · Youssef Allouah, Rachid Guerraoui, John Stephan

The success of machine learning (ML) applications relies on vast datasets and distributed architectures which, as they grow, present major challenges.

Computational Efficiency · Data Poisoning

SABLE: Secure And Byzantine robust LEarning

no code implementations · 11 Sep 2023 · Antoine Choffrut, Rachid Guerraoui, Rafael Pinot, Renaud Sirdey, John Stephan, Martin Zuber

SABLE leverages HTS, a novel and efficient homomorphic operator implementing the prominent coordinate-wise trimmed mean robust aggregator.

Image Classification · Privacy Preserving
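The coordinate-wise trimmed mean mentioned in the SABLE abstract is a standard robust aggregator. A minimal plaintext sketch (ignoring the homomorphic-encryption layer that SABLE adds; the function name and the trim parameter `f` are illustrative, not from the paper):

```python
import numpy as np

def coordinate_wise_trimmed_mean(gradients, f):
    """At each coordinate, drop the f largest and f smallest values
    across workers, then average the remaining values.
    `gradients`: (n_workers, dim) array; `f`: values trimmed per end."""
    g = np.sort(np.asarray(gradients, dtype=float), axis=0)  # sort each coordinate independently
    trimmed = g[f : g.shape[0] - f]                          # discard the f extremes on each side
    return trimmed.mean(axis=0)
```

With five workers of which one sends an outlier gradient, trimming one value per end recovers the honest average, which is what makes the operator attractive for Byzantine-robust aggregation.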

On the Privacy-Robustness-Utility Trilemma in Distributed Learning

no code implementations · 9 Feb 2023 · Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

The latter amortizes the dependence on the dimension in the error (caused by adversarial workers and DP), while being agnostic to the statistical properties of the data.

Fixing by Mixing: A Recipe for Optimal Byzantine ML under Heterogeneity

no code implementations · 3 Feb 2023 · Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

Byzantine machine learning (ML) aims to ensure the resilience of distributed learning algorithms to misbehaving (or Byzantine) machines.

On the Impossible Safety of Large AI Models

no code implementations · 30 Sep 2022 · El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, Sébastien Rouault, John Stephan

Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase some impressive performance.

Privacy Preserving

Robust Collaborative Learning with Linear Gradient Overhead

1 code implementation · 22 Sep 2022 · Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê Nguyên Hoang, Rafael Pinot, John Stephan

We present MoNNA, a new algorithm that (a) is provably robust under standard assumptions and (b) has a gradient computation overhead that is linear in the fraction of faulty machines, which is conjectured to be tight.

Image Classification

Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums

no code implementations · 24 May 2022 · Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

We present RESAM (RESilient Averaging of Momentums), a unified framework that makes it simple to establish optimal Byzantine resilience, relying only on standard machine learning assumptions.

BIG-bench Machine Learning · Distributed Optimization
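The RESAM idea of averaging worker momentums through a robust aggregator can be sketched as follows. This is a schematic illustration under assumed details: the momentum form, the `beta` value, and the choice of coordinate-wise median as the aggregator are stand-ins, since RESAM is a framework over many such aggregators:

```python
import numpy as np

def update_momentum(m_prev, grad, beta=0.9):
    # Each worker maintains an exponential moving average of its gradients
    # (standard momentum form; beta=0.9 is an illustrative choice).
    return beta * np.asarray(m_prev) + (1.0 - beta) * np.asarray(grad)

def resilient_average(momentums):
    # The server aggregates worker momentums with a robust aggregator.
    # Coordinate-wise median is one example of such an aggregator.
    return np.median(np.asarray(momentums, dtype=float), axis=0)
```

Aggregating momentums rather than raw gradients reduces the variance the aggregator must tolerate, which is the intuition behind establishing resilience from standard assumptions alone.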

Differential Privacy and Byzantine Resilience in SGD: Do They Add Up?

1 code implementation · 16 Feb 2021 · Rachid Guerraoui, Nirupam Gupta, Rafaël Pinot, Sébastien Rouault, John Stephan

This paper addresses the problem of combining Byzantine resilience with privacy in machine learning (ML).
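The combination the paper studies, per-worker differential privacy on top of a Byzantine-robust aggregation step, can be sketched with a standard Gaussian-mechanism-style update. This is a generic illustration, not the paper's algorithm; the `clip` and `sigma` values are arbitrary placeholders:

```python
import numpy as np

def dp_noisy_gradient(grad, clip=1.0, sigma=0.5, rng=None):
    """Clip the gradient to norm `clip`, then add Gaussian noise scaled
    by `sigma * clip` (the usual Gaussian-mechanism shape). The noisy
    gradients would then be fed to a robust aggregator at the server."""
    rng = rng if rng is not None else np.random.default_rng(0)
    g = np.asarray(grad, dtype=float)
    norm = np.linalg.norm(g)
    g = g * min(1.0, clip / max(norm, 1e-12))   # norm clipping
    return g + rng.normal(0.0, sigma * clip, size=g.shape)
```

The tension the paper's title points at is visible here: the injected noise inflates the dispersion among honest workers' gradients, which is exactly what robust aggregators rely on being small.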
