Search Results for author: El-Mahdi El-Mhamdi

Found 7 papers, 2 papers with code

On the Impossible Safety of Large AI Models

no code implementations • 30 Sep 2022 • El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, Sébastien Rouault, John Stephan

Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase some impressive performance.

Privacy Preserving

Distributed Momentum for Byzantine-resilient Learning

1 code implementation • 28 Feb 2020 • El-Mahdi El-Mhamdi, Rachid Guerraoui, Sébastien Rouault

Momentum is a variant of gradient descent that has been proposed for its benefits on convergence.
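
For context on the excerpt above, here is a minimal NumPy sketch of the classical (heavy-ball) momentum update it refers to; the parameter names lr and beta are illustrative choices, not taken from the paper, and the sketch does not cover the paper's actual subject, namely how momentum interacts with Byzantine-resilient aggregation.

    import numpy as np

    def momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
        """One classical (heavy-ball) momentum update."""
        velocity = beta * velocity + grad   # exponential average of past gradients
        return w - lr * velocity, velocity  # descend along the smoothed direction

    # Toy usage on f(w) = ||w||^2 / 2, whose gradient at w is simply w.
    w, velocity = np.ones(3), np.zeros(3)
    for _ in range(100):
        w, velocity = momentum_step(w, grad=w, velocity=velocity)
    print(w)  # drifts toward the origin as the iterations proceed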

Genuinely Distributed Byzantine Machine Learning

no code implementations • 5 May 2019 • El-Mahdi El-Mhamdi, Rachid Guerraoui, Arsany Guirguis, Lê Nguyên Hoang, Sébastien Rouault

The third, Minimum-Diameter Averaging (MDA), is a statistically-robust gradient aggregation rule whose goal is to tolerate Byzantine workers.

BIG-bench Machine Learning
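
The Minimum-Diameter Averaging rule mentioned in the excerpt above can be illustrated with a brute-force sketch: among the n submitted gradients, at most f of which are Byzantine, pick the subset of size n - f whose diameter (largest pairwise distance) is smallest, and return its average. This reading of the rule is inferred from the excerpt; the exhaustive enumeration below is exponential in n and purely didactic, and the function and variable names are mine rather than the authors'.

    import itertools
    import numpy as np

    def minimum_diameter_average(gradients, f):
        """Average the subset of n - f gradients with the smallest diameter.
        Assumes n - f >= 2; exponential-time enumeration, for illustration only."""
        n = len(gradients)
        best_subset, best_diameter = None, float("inf")
        for subset in itertools.combinations(range(n), n - f):
            # Diameter = largest pairwise distance inside the candidate subset.
            diameter = max(np.linalg.norm(gradients[i] - gradients[j])
                           for i, j in itertools.combinations(subset, 2))
            if diameter < best_diameter:
                best_subset, best_diameter = subset, diameter
        return np.mean([gradients[i] for i in best_subset], axis=0)

    # Example: 5 honest-looking gradients plus 2 outliers submitted by Byzantine workers.
    rng = np.random.default_rng(0)
    gradients = [rng.normal(1.0, 0.1, size=4) for _ in range(5)] + [np.full(4, 100.0)] * 2
    print(minimum_diameter_average(gradients, f=2))  # close to the honest mean, ~1.0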

Fast and Robust Distributed Learning in High Dimension

no code implementations • 5 May 2019 • El-Mahdi El-Mhamdi, Rachid Guerraoui, Sébastien Rouault

Given $n$ workers, $f$ of which are arbitrarily malicious (Byzantine) and $m = n - f$ of which are not, we prove that multi-Bulyan can ensure a strong form of Byzantine resilience, as well as an $\frac{m}{n}$ slowdown compared to averaging, the fastest (but non-Byzantine-resilient) rule for distributed machine learning.

BIG-bench Machine Learning • Vocal Bursts Intensity Prediction
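
To make the $\frac{m}{n}$ slowdown in the excerpt above concrete, with purely illustrative numbers not taken from the paper: with $n = 10$ workers of which $f = 2$ are Byzantine, $m = n - f = 8$, so the stated slowdown relative to plain averaging is $\frac{m}{n} = \frac{8}{10} = 0.8$, i.e. running at 80% of the throughput of averaging is the price paid for tolerating the 2 Byzantine workers.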

The Probabilistic Fault Tolerance of Neural Networks in the Continuous Limit

3 code implementations • ICLR 2020 • El-Mahdi El-Mhamdi, Rachid Guerraoui, Andrei Kucharavy, Sergei Volodin

We study fault tolerance of neural networks subject to small random neuron/weight crash failures in a probabilistic setting.
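
A minimal simulation of the setting the excerpt above describes: random crash failures of weights, here modeled as each weight being zeroed independently with a small probability p, applied to a toy network while measuring the output deviation. The network, the crash model parameters, and all names are illustrative assumptions, not the paper's code or its continuous-limit analysis.

    import numpy as np

    rng = np.random.default_rng(0)

    def forward(x, weights):
        """Toy fully connected network with ReLU hidden activations."""
        h = x
        for W in weights[:-1]:
            h = np.maximum(0.0, W @ h)
        return weights[-1] @ h

    def crash(weights, p, rng):
        """Weight crash failures: each weight is independently zeroed with probability p."""
        return [W * (rng.random(W.shape) >= p) for W in weights]

    # Measure how much the output deviates, on average, under a 1% crash probability.
    weights = [rng.standard_normal((64, 32)), rng.standard_normal((64, 64)),
               rng.standard_normal((10, 64))]
    x = rng.standard_normal(32)
    clean_output = forward(x, weights)
    deviations = [np.linalg.norm(forward(x, crash(weights, 0.01, rng)) - clean_output)
                  for _ in range(100)]
    print(np.mean(deviations))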
