Search Results for author: Yann Chevaleyre

Found 22 papers, 3 papers with code

Randomization matters. How to defend against strong adversarial attacks

no code implementations ICML 2020 Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Yann Chevaleyre, Jamal Atif

We demonstrate the non-existence of a Nash equilibrium in our game when the classifier and the adversary are both deterministic, hence giving a negative answer to the above question in the deterministic regime.
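
The snippet leaves the game implicit; a plausible formalization (notation assumed here, not taken from the paper) is the zero-sum game in which the defender picks a classifier $h$, the adversary picks a perturbation map $\phi$ with $\|\phi(x,y)\| \le \epsilon$, and a Nash equilibrium $(h^{\star},\phi^{\star})$ would have to satisfy the usual saddle-point condition:

```latex
% Assumed notation: R_adv(h, \phi) is the adversarial risk of classifier h
% against a perturbation map \phi constrained by \|\phi(x, y)\| \le \epsilon.
\[
R_{\mathrm{adv}}(h,\phi) \;=\;
\mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\ell\big(h(x+\phi(x,y)),\,y\big)\big],
\qquad
R_{\mathrm{adv}}(h^{\star},\phi) \;\le\; R_{\mathrm{adv}}(h^{\star},\phi^{\star}) \;\le\; R_{\mathrm{adv}}(h,\phi^{\star})
\quad \forall\, h,\phi .
\]
```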

Exploring Precision and Recall to assess the quality and diversity of LLMs

no code implementations 16 Feb 2024 Florian Le Bronnec, Alexandre Verine, Benjamin Negrevergne, Yann Chevaleyre, Alexandre Allauzen

This paper introduces a novel evaluation framework for Large Language Models (LLMs) such as Llama-2 and Mistral, focusing on the adaptation of Precision and Recall metrics from image generation to text generation.

Image Generation · Text Generation
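
The snippet does not say how the two metrics are estimated from samples; a common sample-based estimator (k-nearest-neighbour support estimation over embedding vectors, sketched below on synthetic data; the function names and the choice of k are illustrative, not the paper's procedure) works as follows.

```python
import numpy as np

def knn_radii(points, k=3):
    """Distance from each point to its k-th nearest neighbour within the same set."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]  # column 0 is the zero distance to itself

def coverage(queries, support, radii):
    """Fraction of query points falling inside at least one support ball."""
    d = np.linalg.norm(queries[:, None, :] - support[None, :, :], axis=-1)
    return float(np.mean((d <= radii[None, :]).any(axis=1)))

def precision_recall(real_emb, gen_emb, k=3):
    # Precision: generated samples that land on the estimated real manifold.
    precision = coverage(gen_emb, real_emb, knn_radii(real_emb, k))
    # Recall: real samples that land on the estimated generated manifold.
    recall = coverage(real_emb, gen_emb, knn_radii(gen_emb, k))
    return precision, recall

# Toy usage with random vectors standing in for text embeddings.
rng = np.random.default_rng(0)
real = rng.normal(size=(200, 16))
fake = rng.normal(loc=0.5, size=(200, 16))
print(precision_recall(real, fake))
```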

Optimal Budgeted Rejection Sampling for Generative Models

no code implementations 1 Nov 2023 Alexandre Verine, Muni Sreenivas Pydi, Benjamin Negrevergne, Yann Chevaleyre

Rejection sampling methods have recently been proposed to improve the performance of discriminator-based generative models.

Image Generation
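
As a rough illustration of the idea (not the paper's optimal budgeted scheme), discriminator-based rejection sampling accepts a generated sample with a probability derived from the clipped density-ratio estimate D/(1-D); the `cap` parameter below is a made-up stand-in for a budget on generator calls.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_sample(generator, discriminator, n_samples, cap=10.0):
    """Generic discriminator rejection sampling (illustrative only).
    The density ratio p_data / p_model is estimated as D / (1 - D) and clipped
    at `cap`, which crudely bounds the expected number of generator calls
    per accepted sample."""
    accepted = []
    while len(accepted) < n_samples:
        x = generator()                    # draw one candidate from the model
        d = discriminator(x)               # discriminator score D(x) in (0, 1)
        ratio = min(d / (1.0 - d), cap)    # clipped density-ratio estimate
        if rng.uniform() < ratio / cap:    # accept with probability ratio / cap
            accepted.append(x)
    return np.array(accepted)

# Toy usage: a generator that oversamples negative values and a hand-crafted
# sigmoid "discriminator" that prefers samples near the positive mode.
generator = lambda: rng.normal(-1.0, 1.0)
discriminator = lambda x: 1.0 / (1.0 + np.exp(-x))
samples = rejection_sample(generator, discriminator, n_samples=100)
print(samples.mean())   # shifted rightwards compared with the raw generator
```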

Adversarial attacks for mixtures of classifiers

no code implementations 20 Jul 2023 Lucas Gnecco Heredia, Benjamin Negrevergne, Yann Chevaleyre

However, it has been shown that existing attacks are not well suited to this kind of classifier.

Training Normalizing Flows with the Precision-Recall Divergence

no code implementations 1 Feb 2023 Alexandre Verine, Benjamin Negrevergne, Muni Sreenivas Pydi, Yann Chevaleyre

Generative models can exhibit distinct failure modes, such as mode dropping and low-quality samples, which cannot be captured by a single scalar metric.

Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis

no code implementations 3 Jun 2022 Raphael Ettedgui, Alexandre Araujo, Rafael Pinot, Yann Chevaleyre, Jamal Atif

We first show that these certificates use too little information about the classifier, and are in particular blind to the local curvature of the decision boundary.
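
For context, the certificates in question are of the standard Gaussian randomized-smoothing type; the certified radius below is quoted from that literature (Cohen et al.), not from this paper, and indeed depends only on the noise level and two class probabilities.

```latex
% Standard Gaussian randomized-smoothing certificate, quoted for context:
% the smoothed classifier g and its certified radius R depend only on sigma
% and two class-probability bounds -- the "too little information" referred to.
\[
g(x) \;=\; \arg\max_{c}\ \Pr_{\delta\sim\mathcal{N}(0,\sigma^{2}I)}\!\big[f(x+\delta)=c\big],
\qquad
R \;=\; \frac{\sigma}{2}\Big(\Phi^{-1}(\underline{p_{A}}) - \Phi^{-1}(\overline{p_{B}})\Big).
\]
```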

An $\alpha$-No-Regret Algorithm For Graphical Bilinear Bandits

no code implementations 1 Jun 2022 Geovani Rizk, Igor Colin, Albert Thomas, Rida Laraki, Yann Chevaleyre

We propose the first regret-based approach to the Graphical Bilinear Bandits problem, where $n$ agents in a graph play a stochastic bilinear bandit game with each of their neighbors.
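
A plausible formalization of the per-edge reward in this setting (notation borrowed from the graphical bilinear bandit literature, not quoted from the paper):

```latex
% Assumed notation: at round t each node i picks an arm x_i^{(t)} in a finite
% set, and every edge (i, j) of the graph returns a noisy bilinear reward.
\[
y_{i,j}^{(t)} \;=\; \big(x_i^{(t)}\big)^{\!\top} \mathbf{M}^{\star}\, x_j^{(t)} \;+\; \eta_{i,j}^{(t)},
\qquad
\text{global reward at round } t:\;\; \sum_{(i,j)\in E} y_{i,j}^{(t)} .
\]
```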

Towards Consistency in Adversarial Classification

no code implementations 20 May 2022 Laurent Meunier, Raphaël Ettedgui, Rafael Pinot, Yann Chevaleyre, Jamal Atif

In this paper, we expose some pathological behaviors specific to the adversarial problem, and show that no convex surrogate loss can be consistent or calibrated in this context.

Classification

Asymptotic convergence rates for averaging strategies

no code implementations 10 Aug 2021 Laurent Meunier, Iskander Legheraba, Yann Chevaleyre, Olivier Teytaud

Averaging the $\mu$ best individuals among the $\lambda$ evaluations is known to provide a better estimate of the optimum of a function than just picking the best one.
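
A minimal sketch of this averaging strategy on a toy sphere objective (the objective, sampling distribution, and parameter values are illustrative only, not the paper's setup):

```python
import numpy as np

def average_best(objective, center, sigma, lam, mu, rng=None):
    """One step of 'average the mu best of lambda evaluations': sample lam
    points around `center`, keep the mu best, and return their mean."""
    rng = rng or np.random.default_rng()
    candidates = center + sigma * rng.normal(size=(lam, center.size))
    values = np.array([objective(x) for x in candidates])
    best = candidates[np.argsort(values)[:mu]]   # mu lowest objective values
    return best.mean(axis=0)                     # averaged estimate of the optimum

# Toy usage on the sphere function f(x) = ||x||^2 (optimum at 0).
sphere = lambda x: float(np.dot(x, x))
estimate = average_best(sphere, center=np.ones(10), sigma=0.5, lam=1000, mu=100)
print(sphere(estimate))   # typically smaller than keeping only the single best
```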

On the expressivity of bi-Lipschitz normalizing flows

no code implementations ICML Workshop INNF 2021 Alexandre Verine, Benjamin Negrevergne, Fabrice Rossi, Yann Chevaleyre

An invertible function is bi-Lipschitz if both the function and its inverse have bounded Lipschitz constants.
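
Spelled out, the definition used in this snippet is:

```latex
% Bi-Lipschitz: an invertible f whose forward and inverse Lipschitz constants
% are both bounded, i.e. there exist constants 0 < m \le M < \infty such that
\[
m\,\lVert x - y\rVert \;\le\; \lVert f(x) - f(y)\rVert \;\le\; M\,\lVert x - y\rVert
\qquad \text{for all } x, y,
\]
% equivalently, \mathrm{Lip}(f) \le M and \mathrm{Lip}(f^{-1}) \le 1/m.
```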

Best Arm Identification in Graphical Bilinear Bandits

no code implementations 14 Dec 2020 Geovani Rizk, Albert Thomas, Igor Colin, Rida Laraki, Yann Chevaleyre

We study the best arm identification problem in which the learner wants to find the graph allocation maximizing the sum of the bilinear rewards.

On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory

2 code implementations 15 Jun 2020 Alexandre Araujo, Benjamin Negrevergne, Yann Chevaleyre, Jamal Atif

This paper tackles the problem of Lipschitz regularization of Convolutional Neural Networks.
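
The abstract does not reproduce the bound itself; for the single-channel, stride-1 circular case, the Toeplitz/circulant connection can be illustrated directly: the layer's spectral norm equals the largest modulus of the 2-D DFT of the zero-padded kernel. This is only that special case, not the paper's multi-channel bound.

```python
import numpy as np

def circular_conv_spectral_norm(kernel, input_size):
    """Exact spectral norm of a single-channel, stride-1 *circular* convolution.
    The operator is doubly circulant, so its eigenvalues are the 2-D DFT values
    of the zero-padded kernel; the spectral norm is the largest modulus."""
    padded = np.zeros((input_size, input_size))
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    return float(np.abs(np.fft.fft2(padded)).max())

# Toy usage: a 3x3 averaging kernel acting on 32x32 inputs has spectral norm 1.
kernel = np.full((3, 3), 1.0 / 9.0)
print(circular_conv_spectral_norm(kernel, input_size=32))
```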

On averaging the best samples in evolutionary computation

no code implementations 24 Apr 2020 Laurent Meunier, Yann Chevaleyre, Jeremy Rapin, Clément W. Royer, Olivier Teytaud

With our choice of selection rate, we get a provable regret of order $O(\lambda^{-1})$ which has to be compared with $O(\lambda^{-2/d})$ in the case where $\mu=1$.

Randomization matters. How to defend against strong adversarial attacks

1 code implementation 26 Feb 2020 Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Yann Chevaleyre, Jamal Atif

We demonstrate the non-existence of a Nash equilibrium in our game when the classifier and the Adversary are both deterministic, hence giving a negative answer to the above question in the deterministic regime.

The Expressive Power of Deep Neural Networks with Circulant Matrices

no code implementations ICLR 2019 Alexandre Araujo, Benjamin Negrevergne, Yann Chevaleyre, Jamal Atif

Recent results from linear algebra stating that any matrix can be decomposed into products of diagonal and circulant matrices have led to the design of compact deep neural network architectures that perform well in practice.

General Classification · Video Classification
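
The linear-algebra result alluded to here is, roughly, the circulant-diagonal factorization theorem of Huhtanen and Perämäki, quoted for context (the precise factor count should be checked against that reference):

```latex
% Every complex square matrix factors into alternating diagonal and circulant
% matrices, with on the order of 2n - 1 factors.
\[
\forall A \in \mathbb{C}^{n\times n}:\quad
A \;=\; B_1 B_2 \cdots B_{2n-1},
\qquad B_i \text{ alternately diagonal and circulant.}
\]
```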

Understanding and Training Deep Diagonal Circulant Neural Networks

no code implementations 29 Jan 2019 Alexandre Araujo, Benjamin Negrevergne, Yann Chevaleyre, Jamal Atif

In this paper, we study deep diagonal circulant neural networks, that is, deep neural networks in which the weight matrices are products of diagonal and circulant matrices.

Video Classification
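
A minimal sketch of one such layer, using the FFT identity for circulant matrix-vector products (the activation, depth, and initialization below are placeholders, not the paper's training setup):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix whose first column is `c` by the vector `x`
    in O(n log n), via the diagonalization C = F^* diag(F c) F."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def diagonal_circulant_layer(x, d, c, activation=np.tanh):
    """One layer y = phi(D C x) with D = diag(d) and C circulant,
    i.e. only 2n parameters instead of n^2."""
    return activation(d * circulant_matvec(c, x))

# Toy usage: stack a few layers on a random input vector.
rng = np.random.default_rng(0)
n, depth = 64, 4
x = rng.normal(size=n)
for _ in range(depth):
    d = rng.normal(size=n) / np.sqrt(n)   # placeholder initialization
    c = rng.normal(size=n) / np.sqrt(n)
    x = diagonal_circulant_layer(x, d, c)
print(x.shape)
```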

Disease Classification in Metagenomics with 2D Embeddings and Deep Learning

no code implementations 23 Jun 2018 Thanh Hai Nguyen, Edi Prifti, Yann Chevaleyre, Nataliya Sokolovska, Jean-Daniel Zucker

Generally, when the sample size ($N$) is much larger than the number of features ($d$), deep learning (DL) often outperforms other machine learning (ML) techniques, notably through the use of Convolutional Neural Networks (CNNs).

Classification · General Classification

Deep Learning for Metagenomic Data: using 2D Embeddings and Convolutional Neural Networks

no code implementations 1 Dec 2017 Thanh Hai Nguyen, Yann Chevaleyre, Edi Prifti, Nataliya Sokolovska, Jean-Daniel Zucker

However, in many bioinformatics ML tasks, we encounter the opposite situation, where $d$ is greater than $N$. In these situations, applying DL techniques (such as feed-forward networks) would lead to severe overfitting.
