no code implementations • ICML 2020 • Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Yann Chevaleyre, Jamal Atif
We demonstrate the non-existence of a Nash equilibrium in our game when the classifier and the adversary are both deterministic, hence giving a negative answer to the above question in the deterministic regime.
no code implementations • 1 May 2024 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot
It has been argued that the seemingly weaker threat model where only workers' local datasets get poisoned is more reasonable.
no code implementations • 20 Feb 2024 • Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Geovani Rizk, Sasha Voitovych
The natural approach to robustify FL against adversarial clients is to replace the simple averaging operation at the server in the standard $\mathsf{FedAvg}$ algorithm by a \emph{robust averaging rule}.
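As an illustration of such a drop-in replacement, the sketch below swaps the server-side mean for the coordinate-wise median, one standard robust averaging rule. This is only a sketch under assumed names (`robust_fedavg_step` is hypothetical); the paper studies robust averaging rules in general, not this particular one.

```python
import numpy as np

def robust_fedavg_step(server_model, client_updates, robust=True):
    """One server aggregation step. With robust=False this is the plain
    FedAvg mean; with robust=True the mean is replaced by a robust
    averaging rule (coordinate-wise median here), so a minority of
    adversarial clients cannot drag the aggregate arbitrarily far."""
    U = np.stack(client_updates)
    aggregate = np.median(U, axis=0) if robust else U.mean(axis=0)
    return server_model + aggregate

# One adversarial client sends a huge update; the median ignores it.
updates = [np.array([0.10, -0.20]),
           np.array([0.12, -0.18]),
           np.array([1e6, 1e6])]   # adversarial client
print(robust_fedavg_step(np.zeros(2), updates))  # median keeps the honest values
```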
no code implementations • 11 Sep 2023 • Antoine Choffrut, Rachid Guerraoui, Rafael Pinot, Renaud Sirdey, John Stephan, Martin Zuber
SABLE leverages HTS, a novel and efficient homomorphic operator implementing the prominent coordinate-wise trimmed mean robust aggregator.
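Setting the homomorphic layer aside, the coordinate-wise trimmed mean rule that HTS implements can be sketched in plaintext as follows (function name and trimming parameter `f` are illustrative; HTS itself evaluates this operator over encrypted inputs):

```python
import numpy as np

def coordinate_trimmed_mean(gradients, f):
    """Coordinate-wise trimmed mean: at every coordinate, discard the f
    largest and f smallest submitted values, then average the rest."""
    X = np.sort(np.stack(gradients), axis=0)  # sort each coordinate independently
    n = X.shape[0]
    assert n > 2 * f, "need more gradients than trimmed entries"
    return X[f:n - f].mean(axis=0)

# One Byzantine worker submits an extreme gradient; trimming removes it.
grads = [np.array([1.0, 2.0]), np.array([1.1, 1.9]),
         np.array([0.9, 2.1]), np.array([100.0, -100.0])]  # last is Byzantine
print(coordinate_trimmed_mean(grads, f=1))  # averages the surviving middle values
```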
no code implementations • 9 Feb 2023 • Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan
The latter amortizes the dependence on the dimension in the error (caused by adversarial workers and DP), while being agnostic to the statistical properties of the data.
no code implementations • 3 Feb 2023 • Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan
Byzantine machine learning (ML) aims to ensure the resilience of distributed learning algorithms to misbehaving (or Byzantine) machines.
no code implementations • 30 Sep 2022 • El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, Sébastien Rouault, John Stephan
Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase impressive performance.
1 code implementation • 22 Sep 2022 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê Nguyên Hoang, Rafael Pinot, John Stephan
We present MoNNA, a new algorithm that (a) is provably robust under standard assumptions and (b) has a gradient computation overhead that is linear in the fraction of faulty machines, which is conjectured to be tight.
no code implementations • 3 Jun 2022 • Raphael Ettedgui, Alexandre Araujo, Rafael Pinot, Yann Chevaleyre, Jamal Atif
We first show that these certificates use too little information about the classifier, and are in particular blind to the local curvature of the decision boundary.
no code implementations • 24 May 2022 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan
We present \emph{RESAM (RESilient Averaging of Momentums)}, a unified framework that makes it simple to establish optimal Byzantine resilience, relying only on standard machine learning assumptions.
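A minimal sketch of the resilient-averaging-of-momentums idea: workers send momentums of their gradients rather than raw gradients, and the server combines them with a robust aggregator. The momentum coefficient and the coordinate-wise median are stand-in assumptions here (the framework is agnostic to the aggregation rule), and the function names are hypothetical.

```python
import numpy as np

def worker_momentum(m, grad, beta=0.9):
    """Worker side: maintain an exponential moving average (momentum)
    of local gradients instead of sending raw gradients."""
    return beta * m + (1 - beta) * grad

def resilient_average(momentums):
    """Server side: combine the workers' momentums with a robust
    aggregation rule (coordinate-wise median in this sketch)."""
    return np.median(np.stack(momentums), axis=0)

# Two honest workers agree; one faulty worker sends garbage.
ms = [np.array([0.5, 0.5]), np.array([0.5, 0.5]), np.array([9e9, -9e9])]
step = resilient_average(ms)  # unaffected by the faulty momentum
```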
no code implementations • 20 May 2022 • Laurent Meunier, Raphaël Ettedgui, Rafael Pinot, Yann Chevaleyre, Jamal Atif
In this paper, we expose some pathological behaviors specific to the adversarial problem, and show that no convex surrogate loss can be consistent or calibrated in this context.
no code implementations • 8 Oct 2021 • Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Sebastien Rouault, John Stephan
Privacy and Byzantine resilience (BR) are two crucial requirements of modern-day distributed machine learning.
no code implementations • 22 Feb 2021 • Rafael Pinot, Laurent Meunier, Florian Yger, Cédric Gouy-Pailler, Yann Chevaleyre, Jamal Atif
This paper investigates the theory of robustness against adversarial attacks.
no code implementations • 13 Feb 2021 • Laurent Meunier, Meyer Scetbon, Rafael Pinot, Jamal Atif, Yann Chevaleyre
This paper tackles the problem of adversarial examples from a game theoretic point of view.
no code implementations • 4 Dec 2020 • Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Negrevergne
It has been empirically observed that defense mechanisms designed to protect neural networks against $\ell_\infty$ adversarial examples offer poor performance against $\ell_2$ adversarial examples and vice versa.
no code implementations • 16 Jun 2020 • Arnaud Grivet Sébert, Rafael Pinot, Martin Zuber, Cédric Gouy-Pailler, Renaud Sirdey
Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances the state of the art in private deep learning against a wider range of threats, in particular going beyond the honest-but-curious server assumption.
1 code implementation • 26 Feb 2020 • Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Yann Chevaleyre, Jamal Atif
We demonstrate the non-existence of a Nash equilibrium in our game when the classifier and the adversary are both deterministic, hence giving a negative answer to the above question in the deterministic regime.
no code implementations • 19 Jun 2019 • Rafael Pinot, Florian Yger, Cédric Gouy-Pailler, Jamal Atif
This short note highlights some links between two lines of research within the emerging topic of trustworthy machine learning: differential privacy and robustness to adversarial examples.
no code implementations • 25 Mar 2019 • Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Negrevergne
This paper tackles the problem of defending a neural network against adversarial attacks crafted with different norms (in particular $\ell_\infty$ and $\ell_2$ bounded adversarial examples).
1 code implementation • NeurIPS 2019 • Rafael Pinot, Laurent Meunier, Alexandre Araujo, Hisashi Kashima, Florian Yger, Cédric Gouy-Pailler, Jamal Atif
This paper investigates the theory of robustness against adversarial attacks.
no code implementations • 10 Mar 2018 • Rafael Pinot, Anne Morvan, Florian Yger, Cédric Gouy-Pailler, Jamal Atif
In this paper, we present the first differentially private clustering method for arbitrary-shaped node clusters in a graph.
no code implementations • 19 Jan 2018 • Rafael Pinot
It provides a simple way of producing the topology of a private almost-minimum spanning tree which outperforms, in most cases, the state-of-the-art "Laplace mechanism" in terms of weight-approximation error.
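For context, the "Laplace mechanism" baseline mentioned here can be sketched as output perturbation on the edge weights followed by a standard MST computation. The noise calibration below (scale |E|/epsilon for unit-sensitivity weights) is one simple choice for illustration, not the paper's method.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))

def laplace_mechanism_mst(n, edges, epsilon):
    """Baseline 'Laplace mechanism' for a private spanning tree:
    perturb every edge weight with Laplace noise, then run Kruskal's
    algorithm on the noisy weights to obtain the tree topology."""
    noisy = [(w + laplace_noise(len(edges) / epsilon), u, v)
             for u, v, w in edges]
    parent = list(range(n))
    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for _, u, v in sorted(noisy):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

random.seed(0)
tree = laplace_mechanism_mst(4, [(0, 1, 1.0), (1, 2, 1.0),
                                 (2, 3, 1.0), (0, 3, 10.0)], epsilon=100.0)
```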