1 code implementation • 15 Feb 2024 • Chawin Sitawarin, Norman Mu, David Wagner, Alexandre Araujo
In this work, we introduce the Proxy-Guided Attack on LLMs (PAL), the first optimization-based attack on LLMs in a black-box query-only setting.
no code implementations • 25 Jan 2024 • Patricia Pauli, Aaron Havens, Alexandre Araujo, Siddharth Garg, Farshad Khorrami, Frank Allgöwer, Bin Hu
However, a direct application of LipSDP to the resulting residual ReLU networks is conservative and even fails to recover the well-known fact that the MaxMin activation is 1-Lipschitz.
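That MaxMin fact can be checked numerically. The sketch below implements MaxMin (GroupSort with group size 2) and verifies on random inputs that it never expands ℓ2 distances; it is an illustrative check of the 1-Lipschitz property, not part of the LipSDP machinery:

```python
import numpy as np

def maxmin(x):
    """MaxMin (GroupSort-2): sort each consecutive pair of coordinates.

    Maps each pair (x1, x2) to (max(x1, x2), min(x1, x2)). Pointwise, this
    only permutes the two values, which is why it is 1-Lipschitz in l2.
    """
    pairs = x.reshape(-1, 2)
    return np.concatenate([pairs.max(axis=1), pairs.min(axis=1)])

# Random check that MaxMin is a contraction in the l2 norm.
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=8), rng.normal(size=8)
    assert np.linalg.norm(maxmin(x) - maxmin(y)) <= np.linalg.norm(x - y) + 1e-12
```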
1 code implementation • 29 Nov 2023 • Alexandre Araujo, Jean Ponce, Julien Mairal
Focus stacking is widely used in micro, macro, and landscape photography to reconstruct all-in-focus images from multiple frames obtained with focus bracketing, that is, with shallow depth of field and different focus planes.
1 code implementation • 27 Oct 2023 • Sara Ghazanfari, Alexandre Araujo, Prashanth Krishnamurthy, Farshad Khorrami, Siddharth Garg
On the other hand, as perceptual metrics rely on neural networks, there is a growing concern regarding their resilience, given the established vulnerability of neural networks to adversarial attacks.
no code implementations • 5 Oct 2023 • Othmane Laousy, Alexandre Araujo, Guillaume Chassagnon, Nikos Paragios, Marie-Pierre Revel, Maria Vakalopoulou
In this paper, we present for the first time a certified segmentation baseline for medical imaging based on randomized smoothing and diffusion models.
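As an illustrative sketch of the randomized-smoothing idea behind such a baseline (omitting the diffusion-model denoiser, and with a hypothetical `base_segmenter` callable), a smoothed segmentation takes a per-pixel majority vote over Gaussian-perturbed copies of the input:

```python
import numpy as np

def smoothed_segmentation(base_segmenter, image, sigma=0.25, n_samples=100, seed=0):
    """Per-pixel majority vote over Gaussian perturbations of the input.

    `base_segmenter` maps an (H, W) image to integer class labels (H, W).
    This is a simplified stand-in for a denoise-then-segment pipeline; a
    certified radius would then follow from the per-pixel vote margins.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    votes = np.zeros((n_samples, h, w), dtype=int)
    for i in range(n_samples):
        noisy = image + sigma * rng.normal(size=image.shape)
        votes[i] = base_segmenter(noisy)
    # Count votes per class and return the majority class at each pixel.
    n_classes = votes.max() + 1
    counts = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)

# Toy base segmenter: threshold the intensity at 0.5.
seg = lambda img: (img > 0.5).astype(int)
image = np.full((4, 4), 0.9)
result = smoothed_segmentation(seg, image)  # pixels far from the threshold stay stable
```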
no code implementations • 28 Sep 2023 • Blaise Delattre, Alexandre Araujo, Quentin Barthélemy, Alexandre Allauzen
The certified radius in this context is a crucial indicator of the robustness of models.
1 code implementation • 27 Jul 2023 • Sara Ghazanfari, Siddharth Garg, Prashanth Krishnamurthy, Farshad Khorrami, Alexandre Araujo
In this paper, we propose the Robust Learned Perceptual Image Patch Similarity (R-LPIPS) metric, a new metric that leverages adversarially trained deep features.
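The sketch below shows the general LPIPS-style recipe that R-LPIPS builds on: an ℓ2 gap between unit-normalized deep features, averaged over spatial positions and accumulated across layers. The feature maps are assumed given; in R-LPIPS they would come from adversarially trained features, and this is an illustration rather than the released implementation:

```python
import numpy as np

def lpips_style_distance(feats_x, feats_y):
    """LPIPS-style perceptual distance between two images.

    `feats_x` / `feats_y` are lists of (C, H, W) feature maps extracted from
    the two images by a fixed backbone. Each spatial feature vector is
    normalized along channels before the squared l2 gap is averaged.
    """
    total = 0.0
    for fx, fy in zip(feats_x, feats_y):
        nx = fx / (np.linalg.norm(fx, axis=0, keepdims=True) + 1e-10)
        ny = fy / (np.linalg.norm(fy, axis=0, keepdims=True) + 1e-10)
        total += ((nx - ny) ** 2).sum(axis=0).mean()
    return total

# Identical features give distance exactly 0.
rng = np.random.default_rng(0)
f = [rng.normal(size=(8, 4, 4))]
g = [rng.normal(size=(8, 4, 4))]
```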
no code implementations • 16 Jun 2023 • Othmane Laousy, Alexandre Araujo, Guillaume Chassagnon, Marie-Pierre Revel, Siddharth Garg, Farshad Khorrami, Maria Vakalopoulou
The robustness of image segmentation has been an important research topic in the past few years as segmentation models have reached production-level accuracy.
1 code implementation • NeurIPS 2023 • Haotian Xue, Alexandre Araujo, Bin Hu, Yongxin Chen
Neural networks are known to be susceptible to adversarial samples: small variations of natural examples crafted to deliberately mislead the models.
1 code implementation • 25 May 2023 • Blaise Delattre, Quentin Barthélemy, Alexandre Araujo, Alexandre Allauzen
Because the Lipschitz constant strongly affects the training stability, generalization, and robustness of neural networks, estimating this value accurately has become a genuine scientific challenge.
1 code implementation • ICLR 2023 • Alexandre Araujo, Aaron Havens, Blaise Delattre, Alexandre Allauzen, Bin Hu
Important research efforts have focused on the design and training of neural networks with a controlled Lipschitz constant.
Ranked #1 on Provable Adversarial Defense on CIFAR-100
no code implementations • 3 Jun 2022 • Raphael Ettedgui, Alexandre Araujo, Rafael Pinot, Yann Chevaleyre, Jamal Atif
We first show that these certificates use too little information about the classifier, and are in particular blind to the local curvature of the decision boundary.
no code implementations • 25 Oct 2021 • Laurent Meunier, Blaise Delattre, Alexandre Araujo, Alexandre Allauzen
The Lipschitz constant of neural networks has been established as a key quantity to enforce the robustness to adversarial examples.
no code implementations • 2 Sep 2021 • Alexandre Araujo
This thesis focuses on the problem of training neural networks which are not only accurate but also compact, easy to train, reliable and robust to adversarial examples.
no code implementations • 4 Dec 2020 • Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Negrevergne
It has been empirically observed that defense mechanisms designed to protect neural networks against $\ell_\infty$ adversarial examples offer poor performance against $\ell_2$ adversarial examples and vice versa.
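One simple way to expose a model to both threat models during training is to randomly draw, for each example, either ℓ∞-bounded uniform noise or ℓ2-bounded Gaussian noise. The sketch below illustrates that mixing idea with hypothetical budgets; it is not the specific scheme analyzed in the paper:

```python
import numpy as np

def mixed_noise_augment(x, eps_inf=0.03, eps_2=0.5, p_inf=0.5, rng=None):
    """Perturb each example with noise from one of two threat models.

    With probability `p_inf`, add uniform noise inside the l_inf ball of
    radius `eps_inf`; otherwise add a random vector inside the l2 ball of
    radius `eps_2`. Budgets and mixing probability are illustrative.
    """
    rng = rng or np.random.default_rng()
    out = x.copy()
    for i in range(len(x)):
        if rng.random() < p_inf:
            # l_inf ball: independent uniform noise per coordinate.
            out[i] += rng.uniform(-eps_inf, eps_inf, size=x[i].shape)
        else:
            # l2 ball: random direction, random radius up to eps_2.
            g = rng.normal(size=x[i].shape)
            out[i] += eps_2 * rng.random() * g / np.linalg.norm(g)
    return out
```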
2 code implementations • 15 Jun 2020 • Alexandre Araujo, Benjamin Negrevergne, Yann Chevaleyre, Jamal Atif
This paper tackles the problem of Lipschitz regularization of Convolutional Neural Networks.
no code implementations • ICLR 2019 • Alexandre Araujo, Benjamin Negrevergne, Yann Chevaleyre, Jamal Atif
Recent results from linear algebra showing that any matrix can be decomposed into a product of diagonal and circulant matrices have led to the design of compact deep neural network architectures that perform well in practice.
no code implementations • 25 Mar 2019 • Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Negrevergne
This paper tackles the problem of defending a neural network against adversarial attacks crafted with different norms (in particular $\ell_\infty$ and $\ell_2$ bounded adversarial examples).
1 code implementation • NeurIPS 2019 • Rafael Pinot, Laurent Meunier, Alexandre Araujo, Hisashi Kashima, Florian Yger, Cédric Gouy-Pailler, Jamal Atif
This paper investigates the theory of robustness against adversarial attacks.
no code implementations • 29 Jan 2019 • Alexandre Araujo, Benjamin Negrevergne, Yann Chevaleyre, Jamal Atif
In this paper, we study deep diagonal circulant neural networks, that is deep neural networks in which weight matrices are the product of diagonal and circulant ones.
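The appeal of this parametrization is compactness and speed: a circulant matrix is determined by its first column, and its matrix-vector product costs O(n log n) via the FFT instead of O(n²). A minimal sketch of one diagonal–circulant block (illustrative, not the paper's training setup):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column `c` by `x`.

    By the convolution theorem, C @ x is the circular convolution of c and x,
    computed here in O(n log n) with the FFT.
    """
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def diagonal_circulant_block(d, c, x):
    """One D·C block of a diagonal-circulant network.

    Stores 2n parameters (diagonal `d` and first column `c`) instead of the
    n^2 parameters of a dense weight matrix.
    """
    return d * circulant_matvec(c, x)
```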
1 code implementation • 2 Oct 2018 • Alexandre Araujo, Benjamin Negrevergne, Yann Chevaleyre, Jamal Atif
In real-world scenarios, model accuracy is hardly the only factor to consider.