1 code implementation • 22 Sep 2022 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê Nguyên Hoang, Rafael Pinot, John Stephan
We present MoNNA, a new algorithm that (a) is provably robust under standard assumptions and (b) has a gradient-computation overhead that is linear in the fraction of faulty machines, an overhead conjectured to be tight.
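As its name suggests, MoNNA relies on a nearest-neighbor mixing step. Below is a minimal, hedged sketch of such a step, assuming each honest node keeps the vectors closest to its own and averages them (the exact MoNNA update, including its use of momentum and its mixing weights, follows the paper, not this sketch):

```python
import numpy as np

def nearest_neighbor_average(own, received, f):
    """Keep the len(received) - f vectors closest to `own` in Euclidean
    distance, then average them together with `own`.

    Illustrative sketch of nearest-neighbor mixing under the assumption
    that at most f of the received vectors come from faulty machines;
    not the authors' reference implementation.
    """
    received = np.asarray(received, dtype=float)
    own = np.asarray(own, dtype=float)
    dists = np.linalg.norm(received - own, axis=1)
    keep = np.argsort(dists)[: len(received) - f]  # drop the f farthest vectors
    return np.mean(np.vstack([own, received[keep]]), axis=0)
```

Discarding the f farthest vectors caps the influence any single faulty machine can exert on a node's local average.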
no code implementations • 10 Aug 2020 • Lê Nguyên Hoang
If these additional data are not expected to sufficiently reduce the predictor's uncertainty about the player's decision, then the player's epistemic system will counterfactually prefer to 2-Box.
no code implementations • NeurIPS 2021 • El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Arsany Guirguis, Lê Nguyên Hoang, Sébastien Rouault
We study Byzantine collaborative learning, where $n$ nodes seek to collectively learn from each other's local data.
no code implementations • 5 May 2019 • El-Mahdi El-Mhamdi, Rachid Guerraoui, Arsany Guirguis, Lê Nguyên Hoang, Sébastien Rouault
The third, Minimum-Diameter Averaging (MDA), is a statistically-robust gradient aggregation rule whose goal is to tolerate Byzantine workers.
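MDA's idea can be sketched as follows: among the $n$ submitted gradients, select the subset of $n - f$ gradients with the smallest diameter (largest pairwise distance within the subset) and average it. This is an illustrative brute-force sketch, not the authors' reference implementation:

```python
from itertools import combinations
import numpy as np

def minimum_diameter_averaging(gradients, f):
    """Average the subset of n - f gradients with the smallest diameter.

    The diameter of a subset is its largest pairwise Euclidean distance;
    selecting the tightest subset discards up to f outlying (possibly
    Byzantine) gradients before averaging. Exhaustive search over
    subsets, so exponential in f: a sketch for small n only.
    """
    n = len(gradients)
    gradients = [np.asarray(g, dtype=float) for g in gradients]
    best_subset, best_diameter = None, float("inf")
    for subset in combinations(range(n), n - f):
        diameter = max(
            np.linalg.norm(gradients[i] - gradients[j])
            for i, j in combinations(subset, 2)
        )
        if diameter < best_diameter:
            best_diameter, best_subset = diameter, subset
    return np.mean([gradients[i] for i in best_subset], axis=0)
```

With honest gradients clustered together and one arbitrarily large Byzantine gradient, the outlier inflates the diameter of every subset containing it, so MDA averages only the honest cluster.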
no code implementations • 4 Sep 2018 • Lê Nguyên Hoang
This paper discusses the robust alignment problem, that is, the problem of aligning the goals of algorithms with human preferences.
no code implementations • 7 Jun 2018 • El Mahdi El Mhamdi, Rachid Guerraoui, Lê Nguyên Hoang, Alexandre Maurer
We first solve the problem analytically in the case of two populations, with a uniform bonus-malus on the zones where each population is a majority.
no code implementations • 31 Jan 2018 • Lê Nguyên Hoang, Rachid Guerraoui
Deep learning relies on a very specific kind of neural network: one that superposes several neural layers.
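The superposition of layers can be made concrete with a minimal sketch: each layer is an affine map followed by a nonlinearity, and a deep network composes several such layers (the two-layer network and ReLU activation below are illustrative choices, not specifics from the paper):

```python
import numpy as np

def relu(x):
    """Elementwise rectified linear unit, a common layer nonlinearity."""
    return np.maximum(0.0, x)

def two_layer_network(x, W1, b1, W2, b2):
    """Superpose two neural layers: the output of the first affine map
    plus nonlinearity is fed as input to the second affine map."""
    h = relu(W1 @ np.asarray(x, dtype=float) + b1)  # first (hidden) layer
    return W2 @ h + b2                               # second (output) layer
```

Deeper networks simply repeat this pattern, feeding each layer's output into the next.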