no code implementations • 30 Sep 2022 • El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, Sébastien Rouault, John Stephan
Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase impressive performance.
no code implementations • 29 May 2021 • Lê-Nguyên Hoang, Louis Faucon, Aidan Jungo, Sergei Volodin, Dalia Papuc, Orfeas Liossatos, Ben Crulis, Mariame Tighanimine, Isabela Constantin, Anastasiia Kucherenko, Alexandre Maurer, Felix Grimberg, Vlad Nitu, Chris Vossen, Sébastien Rouault, El-Mahdi El-Mhamdi
We outline the structure of the Tournesol database, the key features of the Tournesol platform and the main hurdles that must be overcome to make it a successful project.
1 code implementation • 16 Feb 2021 • Rachid Guerraoui, Nirupam Gupta, Rafaël Pinot, Sébastien Rouault, John Stephan
This paper addresses the problem of combining Byzantine resilience with privacy in machine learning (ML).
no code implementations • ICLR 2021 • El Mahdi El Mhamdi, Rachid Guerraoui, Sébastien Rouault
We propose a practical method which, despite increasing the variance, reduces the variance-norm ratio, mitigating the identified weakness.
1 code implementation • 12 Oct 2020 • Rachid Guerraoui, Arsany Guirguis, Jérémy Max Plassmann, Anton Alexandre Ragot, Sébastien Rouault
We present Garfield, a library to transparently make machine learning (ML) applications, initially built with popular (but fragile) frameworks, e.g., TensorFlow and PyTorch, Byzantine-resilient.
no code implementations • NeurIPS 2021 • El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Arsany Guirguis, Lê Nguyên Hoang, Sébastien Rouault
We study Byzantine collaborative learning, where $n$ nodes seek to collectively learn from each other's local data.
1 code implementation • 28 Feb 2020 • El-Mahdi El-Mhamdi, Rachid Guerraoui, Sébastien Rouault
Momentum is a variant of gradient descent that has been proposed for its convergence benefits.
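The momentum variant mentioned above can be illustrated with a minimal sketch of the classical heavy-ball update; the function name, hyperparameter values, and toy objective are illustrative, not the paper's exact formulation.

```python
import numpy as np

def momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
    """One heavy-ball momentum step: the velocity accumulates a
    geometrically decayed sum of past gradients, and the parameters
    move along the velocity rather than the raw gradient."""
    velocity = beta * velocity + grad
    w = w - lr * velocity
    return w, velocity

# Toy usage: minimize f(w) = w^2, whose gradient is 2w.
w, v = 5.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, 2.0 * w, v)
```

With `beta = 0`, this reduces to plain gradient descent; larger `beta` values dampen oscillations along high-curvature directions at the cost of some inertia.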
no code implementations • 5 May 2019 • El-Mahdi El-Mhamdi, Rachid Guerraoui, Sébastien Rouault
Given $n$ workers, $f$ of which are arbitrarily malicious (Byzantine) and $m=n-f$ are not, we prove that multi-Bulyan can ensure a strong form of Byzantine resilience, as well as an ${\frac{m}{n}}$ slowdown, compared to averaging, the fastest (but non-Byzantine-resilient) rule for distributed machine learning.
no code implementations • 5 May 2019 • El-Mahdi El-Mhamdi, Rachid Guerraoui, Arsany Guirguis, Lê Nguyên Hoang, Sébastien Rouault
The third, Minimum-Diameter Averaging (MDA), is a statistically-robust gradient aggregation rule whose goal is to tolerate Byzantine workers.
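The MDA rule described above can be sketched as follows: average the subset of $n-f$ gradients whose diameter (largest pairwise distance) is smallest, so that up to $f$ Byzantine outliers are discarded. The function name and the brute-force subset search are illustrative assumptions, not the authors' implementation.

```python
import itertools
import numpy as np

def minimum_diameter_averaging(gradients, f):
    """Average the (n - f)-subset of gradients with the smallest
    diameter, i.e., the smallest maximum pairwise Euclidean distance.
    Assumes n - f >= 2. Exhaustive search: exponential in n, so this
    sketch is only practical for small worker counts."""
    n = len(gradients)
    best_subset, best_diameter = None, float("inf")
    for subset in itertools.combinations(range(n), n - f):
        diameter = max(
            np.linalg.norm(gradients[i] - gradients[j])
            for i in subset for j in subset if i < j
        )
        if diameter < best_diameter:
            best_subset, best_diameter = subset, diameter
    return np.mean([gradients[i] for i in best_subset], axis=0)

# Toy usage: three honest gradients near (1, 1) and one Byzantine outlier.
grads = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
         np.array([0.9, 1.1]), np.array([100.0, 100.0])]
aggregated = minimum_diameter_averaging(grads, f=1)
```

Because the outlier inflates the diameter of every subset containing it, the selected subset is the three honest gradients, and their average is returned.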
1 code implementation • ICML 2018 • El Mahdi El Mhamdi, Rachid Guerraoui, Sébastien Rouault
Based on this leeway, we build a simple attack, and experimentally show its effectiveness, ranging from strong to total, on CIFAR-10 and MNIST.