no code implementations • 20 May 2022 • Laurent Meunier, Raphaël Ettedgui, Rafael Pinot, Yann Chevaleyre, Jamal Atif
In this paper, we expose some pathological behaviors specific to the adversarial problem, and show that no convex surrogate loss can be consistent or calibrated in this context.
1 code implementation • 28 Oct 2021 • Meyer Scetbon, Laurent Meunier, Yaniv Romano
We propose a new conditional dependence measure and a statistical test for conditional independence.
no code implementations • 25 Oct 2021 • Laurent Meunier, Blaise Delattre, Alexandre Araujo, Alexandre Allauzen
The Lipschitz constant of neural networks has been established as a key quantity to enforce the robustness to adversarial examples.
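A standard (naive) upper bound on this quantity, not the paper's contribution, multiplies the spectral norms of the layers, since ReLU activations are 1-Lipschitz; a minimal sketch with random weight matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two random dense layers; ReLU is 1-Lipschitz, so the product of the
# layers' spectral norms upper-bounds the network's Lipschitz constant.
W1 = rng.normal(size=(64, 32))
W2 = rng.normal(size=(10, 64))

# ord=2 on a matrix gives the largest singular value (spectral norm).
lip_bound = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)
print(lip_bound)
```

By submultiplicativity, this bound also dominates the spectral norm of the composed linear map, which is why it is cheap but often loose.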
no code implementations • 10 Aug 2021 • Laurent Meunier, Iskander Legheraba, Yann Chevaleyre, Olivier Teytaud
Averaging the $\mu$ best individuals among the $\lambda$ evaluations is known to provide a better estimate of the optimum of a function than simply picking the best one.
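The effect can be illustrated on a toy sphere function (a sketch of the general idea, not the paper's algorithm): in moderate dimension, the average of the $\mu$ best of $\lambda$ Gaussian samples lands closer to the optimum than the single best sample.

```python
import math
import random

def sphere(x):
    # Sphere function with optimum at the origin.
    return sum(xi * xi for xi in x)

def norm(x):
    return math.sqrt(sphere(x))

random.seed(0)
d, lam, mu = 20, 200, 20

# Draw lambda candidate points around the optimum and rank them by fitness.
points = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(lam)]
ranked = sorted(points, key=sphere)

best_single = ranked[0]                                          # best individual
mu_average = [sum(p[i] for p in ranked[:mu]) / mu for i in range(d)]  # mu-best mean

print(norm(best_single))  # distance of the best point to the optimum
print(norm(mu_average))   # distance of the averaged point: typically smaller
```

Averaging cancels independent per-coordinate noise across the selected points, which is why the gain grows with the dimension.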
no code implementations • ICML Workshop AML 2021 • Alessandro Cappelli, Julien Launay, Laurent Meunier, Ruben Ohana, Iacopo Poli
Robustness to adversarial attacks is typically obtained through expensive adversarial training with Projected Gradient Descent.
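For reference, the inner PGD attack used in such training is standard; a generic $\ell_\infty$-ball sketch on a toy differentiable loss (not this paper's defense, whose point is precisely to avoid this cost):

```python
import numpy as np

rng = np.random.default_rng(0)

def pgd_linf(x, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent under an l_inf ball of radius eps."""
    x0 = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))  # gradient ascent step
        x_adv = np.clip(x_adv, x0 - eps, x0 + eps)       # project onto the ball
    return x_adv

# Toy loss: squared distance to a target point; its gradient is 2 * (x - target).
target = np.zeros(3)
grad = lambda x: 2 * (x - target)

x = rng.normal(size=3)
x_adv = pgd_linf(x, grad)
print(np.max(np.abs(x_adv - x)))  # perturbation stays within eps
```

Adversarial training repeats this attack for every batch at every epoch, which is what makes it expensive.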
no code implementations • 22 Feb 2021 • Rafael Pinot, Laurent Meunier, Florian Yger, Cédric Gouy-Pailler, Yann Chevaleyre, Jamal Atif
This paper investigates the theory of robustness against adversarial attacks.
no code implementations • 13 Feb 2021 • Laurent Meunier, Meyer Scetbon, Rafael Pinot, Jamal Atif, Yann Chevaleyre
This paper tackles the problem of adversarial examples from a game theoretic point of view.
1 code implementation • 6 Jan 2021 • Alessandro Cappelli, Ruben Ohana, Julien Launay, Laurent Meunier, Iacopo Poli, Florent Krzakala
In the white-box setting, our defense works by obfuscating the parameters of the random projection.
no code implementations • 4 Dec 2020 • Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Negrevergne
It has been empirically observed that defense mechanisms designed to protect neural networks against $\ell_\infty$ adversarial examples offer poor performance against $\ell_2$ adversarial examples and vice versa.
no code implementations • 8 Oct 2020 • Laurent Meunier, Herilalaina Rakotoarison, Pak Kan Wong, Baptiste Roziere, Jeremy Rapin, Olivier Teytaud, Antoine Moreau, Carola Doerr
We demonstrate the advantages of such a broad collection by deriving from it Automated Black Box Optimizer (ABBO), a general-purpose algorithm selection wizard.
no code implementations • 12 Jun 2020 • Meyer Scetbon, Laurent Meunier, Jamal Atif, Marco Cuturi
When there is only one agent, we recover the Optimal Transport problem.
no code implementations • 24 Apr 2020 • Laurent Meunier, Yann Chevaleyre, Jeremy Rapin, Clément W. Royer, Olivier Teytaud
With our choice of selection rate, we obtain a provable regret of order $O(\lambda^{-1})$, to be compared with the $O(\lambda^{-2/d})$ obtained in the case $\mu=1$.
no code implementations • 24 Apr 2020 • Laurent Meunier, Carola Doerr, Jeremy Rapin, Olivier Teytaud
Design of experiments, random search, the initialization of population-based methods, and sampling within an epoch of an evolutionary algorithm all rely on a sample drawn from some probability distribution to approximate the location of an optimum.
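A minimal sketch of why the sampling distribution matters, under the (assumed, illustrative) premise that the optimum lies near the middle of the domain $[0,1]^d$: a distribution shrunk toward the center places its best sample much closer to such an optimum than plain uniform sampling. The shrinkage factor below is arbitrary, not a value from the paper.

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

random.seed(0)
d, budget = 20, 100
optimum = [0.5] * d  # assumption: optimum near the middle of [0,1]^d

def best_sample(draw):
    # Distance to the optimum of the closest point among `budget` draws.
    return min(dist(draw(), optimum) for _ in range(budget))

uniform = lambda: [random.random() for _ in range(d)]
# Hypothetical rescaled sampling: shrink each coordinate toward 0.5.
shrunk = lambda: [0.5 + 0.25 * (random.random() - 0.5) for _ in range(d)]

u = best_sample(uniform)
s = best_sample(shrunk)
print(u, s)  # the shrunk distribution gets closer to the assumed optimum
```

In high dimension, uniform samples concentrate near the boundary of the cube, so any prior that pulls mass toward the likely optimum region pays off quickly.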
no code implementations • NeurIPS 2020 • Evrard Garcelon, Baptiste Roziere, Laurent Meunier, Jean Tarbouriech, Olivier Teytaud, Alessandro Lazaric, Matteo Pirotta
In many of these domains, malicious agents may have incentives to attack the bandit algorithm to induce it to perform a desired behavior.
no code implementations • 5 Oct 2019 • Laurent Meunier, Jamal Atif, Olivier Teytaud
In the targeted setting, with a limited budget of $100{,}000$ queries, we reach a $100\%$ success rate using $6{,}662$ queries on average, i.e. $800$ fewer queries than the current state of the art.
no code implementations • 25 Mar 2019 • Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Negrevergne
This paper tackles the problem of defending a neural network against adversarial attacks crafted with different norms (in particular $\ell_\infty$ and $\ell_2$ bounded adversarial examples).
1 code implementation • NeurIPS 2019 • Rafael Pinot, Laurent Meunier, Alexandre Araujo, Hisashi Kashima, Florian Yger, Cédric Gouy-Pailler, Jamal Atif
This paper investigates the theory of robustness against adversarial attacks.