1 code implementation • 2 Oct 2023 • Kamalika Chaudhuri, David Lopez-Paz
To build robust, fair, and safe AI systems, we would like our classifiers to say "I don't know" when facing test examples that are difficult or fall outside of the training classes. The ubiquitous strategy to predict under uncertainty is the simplistic reject-or-classify rule: abstain from prediction if epistemic uncertainty is high, classify otherwise. Unfortunately, this recipe does not allow different sources of uncertainty to communicate with each other, produces miscalibrated predictions, and does not allow us to correct for misspecifications in our uncertainty estimates.
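The reject-or-classify rule criticized above can be sketched in a few lines; the uncertainty score and threshold below are hypothetical placeholders, not the paper's proposal:

```python
# Minimal sketch of the reject-or-classify rule: abstain when an externally
# supplied uncertainty score exceeds a threshold, otherwise return the most
# probable class. The uncertainty input and tau are illustrative placeholders.

def reject_or_classify(probs, uncertainty, tau=0.5):
    """probs: list of class probabilities; uncertainty: scalar in [0, 1]."""
    if uncertainty > tau:
        return "abstain"
    return max(range(len(probs)), key=lambda k: probs[k])

print(reject_or_classify([0.1, 0.8, 0.1], uncertainty=0.2))  # -> 1
print(reject_or_classify([0.4, 0.3, 0.3], uncertainty=0.9))  # -> abstain
```

Note how the two uncertainty sources never interact: the class probabilities are ignored entirely once the abstention branch fires, which is exactly the communication failure the abstract points out.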
no code implementations • 28 Sep 2023 • Mohammad Pezeshki, Diane Bouchacourt, Mark Ibrahim, Nicolas Ballas, Pascal Vincent, David Lopez-Paz
Successful out-of-distribution generalization requires environment annotations.
no code implementations • 18 Sep 2023 • Sharut Gupta, Stefanie Jegelka, David Lopez-Paz, Kartik Ahuja
Two lines of work are taking center stage in AI research.
1 code implementation • 26 May 2023 • Kartik Ahuja, David Lopez-Paz
In-context learning, a capability that enables a model to learn from input examples on the fly without necessitating weight updates, is a defining characteristic of large language models.
1 code implementation • 20 Dec 2022 • Alexandre Ramé, Kartik Ahuja, Jianyu Zhang, Matthieu Cord, Léon Bottou, David Lopez-Paz
In this paper, we thus propose model ratatouille, a new strategy to recycle the multiple fine-tunings of the same foundation model on diverse auxiliary tasks.
Ranked #14 on Domain Generalization on PACS
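The recycling step at the core of ratatouille-style methods is parameter averaging across several fine-tunings of the same initialization; a minimal sketch, with flat lists of floats standing in for weight tensors:

```python
# Illustrative core of weight recycling by parameter averaging, as in model
# soups / ratatouille-style methods: given several models fine-tuned from the
# same foundation model, average their parameters coordinate-wise. Real models
# store tensors per layer; flat parameter lists stand in for them here.

def average_weights(models):
    """models: list of equal-length parameter lists -> their coordinate mean."""
    n = len(models)
    return [sum(ws) / n for ws in zip(*models)]

finetuned = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0], [2.0, 2.0, 2.0]]
print(average_weights(finetuned))  # [2.0, 2.0, 2.0]
```

Averaging in weight space only makes sense because the fine-tunings share an initialization and remain linearly connected; averaging unrelated networks this way would not.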
no code implementations • 3 Nov 2022 • Badr Youbi Idrissi, Diane Bouchacourt, Randall Balestriero, Ivan Evtimov, Caner Hazirbas, Nicolas Ballas, Pascal Vincent, Michal Drozdzal, David Lopez-Paz, Mark Ibrahim
Equipped with ImageNet-X, we investigate 2,200 current recognition models and study the types of mistakes as a function of the model's (1) architecture, e.g., transformer vs. convolutional, (2) learning paradigm, e.g., supervised vs. self-supervised, and (3) training procedures, e.g., data augmentation.
no code implementations • 20 Jul 2022 • David Lopez-Paz, Diane Bouchacourt, Levent Sagun, Nicolas Usunier
By highlighting connections to the literature in domain generalization, we propose to measure fairness as the ability of the system to generalize under multiple stress tests -- distributions of examples with social relevance.
no code implementations • 23 May 2022 • Kamalika Chaudhuri, Kartik Ahuja, Martin Arjovsky, David Lopez-Paz
When facing data with imbalanced classes or groups, practitioners follow an intriguing strategy to achieve the best results.
2 code implementations • 24 Mar 2022 • Jianyu Zhang, David Lopez-Paz, Léon Bottou
On the one hand, a rich representation provides a good initialization for the optimizer.
no code implementations • NeurIPS 2021 • Ramakrishna Vedantam, David Lopez-Paz, David J. Schwab
Recent work demonstrates that deep neural networks trained using Empirical Risk Minimization (ERM) can generalize under distribution shift, outperforming specialized training algorithms for domain generalization.
1 code implementation • 27 Oct 2021 • Badr Youbi Idrissi, Martin Arjovsky, Mohammad Pezeshki, David Lopez-Paz
We study the problem of learning classifiers that perform well across (known or unknown) groups of data.
Ranked #1 on Out-of-Distribution Generalization on UrbanCars
no code implementations • 29 Sep 2021 • Mohamed Ishmael Belghazi, David Lopez-Paz
Adding new datasets, algorithms, measures, or metrics is a matter of a few lines of code, in the hope that UIMNET becomes a stepping stone towards realistic, rigorous, and reproducible research in uncertainty estimation.
1 code implementation • 13 Jul 2021 • Mohamed Ishmael Belghazi, David Lopez-Paz
Adding new datasets, algorithms, measures, or metrics is a matter of a few lines of code, in the hope that UIMNET becomes a stepping stone towards realistic, rigorous, and reproducible research in uncertainty estimation.
2 code implementations • 22 Feb 2021 • Benjamin Aubin, Agnieszka Słowik, Martin Arjovsky, Leon Bottou, David Lopez-Paz
There is an increasing interest in algorithms to learn invariant correlations across training environments.
9 code implementations • ICLR 2021 • Ishaan Gulrajani, David Lopez-Paz
As a first step, we realize that model selection is non-trivial for domain generalization tasks.
Ranked #42 on Domain Generalization on PACS
1 code implementation • ICLR 2020 • Jonathan Gordon, David Lopez-Paz, Marco Baroni, Diane Bouchacourt
Humans understand novel sentences by composing meanings and roles of core language components.
no code implementations • 19 Feb 2020 • Arslan Chaudhry, Albert Gordo, Puneet K. Dokania, Philip Torr, David Lopez-Paz
In continual learning, the learner faces a stream of data whose distribution changes over time.
no code implementations • 25 Sep 2019 • Jean-Remi King, Francois Charton, Maxime Oquab, David Lopez-Paz
Identifying causes from observations can be particularly challenging when i) potential factors are difficult to manipulate individually and ii) observations are complex and multi-dimensional.
14 code implementations • 5 Jul 2019 • Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, David Lopez-Paz
We introduce Invariant Risk Minimization (IRM), a learning paradigm to estimate invariant correlations across multiple training distributions.
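The practical IRMv1 variant from the paper scores each environment by the squared gradient of its risk with respect to a dummy scalar classifier fixed at 1. A one-dimensional sketch with squared loss, where the gradient has a closed form (a real implementation would use autograd over a neural featurizer):

```python
# Sketch of the IRMv1 penalty in one dimension: score each environment by the
# squared gradient of its risk with respect to a dummy scalar multiplier w,
# evaluated at w = 1. The featurizer phi is a hypothetical plain function.

def irm_penalty(phi, data):
    """data: list of (x, y) pairs from one environment."""
    # d/dw mean((w*phi(x) - y)^2) at w = 1  ==  mean(2*(phi(x) - y)*phi(x))
    grad = sum(2.0 * (phi(x) - y) * phi(x) for x, y in data) / len(data)
    return grad * grad

phi = lambda x: 2.0 * x                  # hypothetical featurizer
env = [(1.0, 2.0), (2.0, 4.0)]           # phi is already optimal here...
print(irm_penalty(phi, env))             # ... so the penalty is 0.0
print(irm_penalty(phi, [(1.0, 0.0)]))    # mismatched environment: 64.0
```

Summing this penalty over training environments and adding it to the pooled empirical risk encourages a representation whose optimal classifier is simultaneously optimal in every environment.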
no code implementations • ICLR 2019 • Carl-Johann Simon-Gabriel, Yann Ollivier, Léon Bottou, Bernhard Schölkopf, David Lopez-Paz
Over the past four years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions.
4 code implementations • 9 Mar 2019 • Vikas Verma, Kenji Kawaguchi, Alex Lamb, Juho Kannala, Arno Solin, Yoshua Bengio, David Lopez-Paz
We introduce Interpolation Consistency Training (ICT), a simple and computationally efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm.
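ICT encourages the prediction at an interpolation of unlabeled points to match the interpolation of the predictions at those points. A scalar sketch of that consistency term, with a plain mixing coefficient standing in for the Beta-distributed draw used in training:

```python
# Sketch of the Interpolation Consistency Training target: the model's output
# at a mixed unlabeled input should agree with the mix of its (teacher)
# outputs. Shown for a scalar model f; lam is a fixed mixing coefficient here,
# whereas training draws it at random per batch.

def ict_consistency(f, u1, u2, lam):
    mixed_input = lam * u1 + (1.0 - lam) * u2
    mixed_pred = lam * f(u1) + (1.0 - lam) * f(u2)
    return (f(mixed_input) - mixed_pred) ** 2

linear = lambda x: 3.0 * x + 1.0   # a linear model is trivially consistent
print(ict_consistency(linear, 0.0, 2.0, 0.25))   # 0.0
square = lambda x: x * x           # a nonlinear model incurs a penalty
print(ict_consistency(square, 0.0, 2.0, 0.5))    # 1.0
```

Minimizing this term pushes the decision function toward low-density regions between unlabeled points, which is the classic cluster assumption of semi-supervised learning.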
1 code implementation • NeurIPS 2019 • Mohamed Ishmael Belghazi, Maxime Oquab, Yann Lecun, David Lopez-Paz
We introduce the Neural Conditioner (NC), a self-supervised machine able to learn about all the conditional distributions of a random vector $X$.
1 code implementation • NeurIPS 2019 • Natasa Tagasovska, David Lopez-Paz
To estimate epistemic uncertainty, we propose Orthonormal Certificates (OCs), a collection of diverse non-constant functions that map all training samples to zero.
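A toy two-dimensional sketch of the idea: a single linear certificate is trained by gradient descent to map training points (which lie on a line) to zero while keeping unit norm, and its response then scores how far a test point falls off the training manifold. All hyperparameters are illustrative choices, not the paper's.

```python
# Toy 2-D sketch of an orthonormal certificate: a linear map c trained to send
# the training data (which lies on the line x2 = 2*x1) to zero while keeping
# unit norm. |c . x| then serves as an epistemic-uncertainty score: near zero
# on the training manifold, clearly positive off it.

train = [(t, 2.0 * t) for t in [-1.0, -0.5, 0.5, 1.0]]

c = [1.0, 0.0]
for _ in range(5000):
    # gradient of mean((c . x)^2) + (||c||^2 - 1)^2
    g = [0.0, 0.0]
    for x in train:
        dot = c[0] * x[0] + c[1] * x[1]
        g[0] += 2.0 * dot * x[0] / len(train)
        g[1] += 2.0 * dot * x[1] / len(train)
    norm_err = c[0] ** 2 + c[1] ** 2 - 1.0
    g[0] += 4.0 * norm_err * c[0]
    g[1] += 4.0 * norm_err * c[1]
    c = [c[0] - 0.02 * g[0], c[1] - 0.02 * g[1]]

score = lambda x: abs(c[0] * x[0] + c[1] * x[1])
print(score((0.5, 1.0)))    # on the training line: near 0
print(score((1.0, -1.0)))   # off the line: clearly positive
```

The paper trains a whole collection of such certificates on top of a deep feature space; the orthonormality constraint is what prevents them from all collapsing to the same trivial direction.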
12 code implementations • ICLR 2019 • Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, Aaron Courville, David Lopez-Paz, Yoshua Bengio
Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples.
Ranked #18 on Image Classification on OmniBenchmark
1 code implementation • 13 Mar 2018 • Diviyan Kalainathan, Olivier Goudet, Isabelle Guyon, David Lopez-Paz, Michèle Sebag
A new causal discovery method, Structural Agnostic Modeling (SAM), is presented in this paper.
1 code implementation • ICLR 2019 • Carl-Johann Simon-Gabriel, Yann Ollivier, Léon Bottou, Bernhard Schölkopf, David Lopez-Paz
Over the past few years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions.
no code implementations • 21 Dec 2017 • Leon Bottou, Martin Arjovsky, David Lopez-Paz, Maxime Oquab
Learning algorithms for implicit generative models can optimize a variety of criteria that measure how the data distribution differs from the implicit model distribution, including the Wasserstein distance, the Energy distance, and the Maximum Mean Discrepancy criterion.
1 code implementation • ICLR 2018 • Olivier Goudet, Diviyan Kalainathan, Philippe Caillou, Isabelle Guyon, David Lopez-Paz, Michèle Sebag
We present Causal Generative Neural Networks (CGNNs) to learn functional causal models from observational data.
71 code implementations • ICLR 2018 • Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz
We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
Ranked #1 on Out-of-Distribution Generalization on ImageNet-W
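The mixup recipe itself fits in a few lines: convexly combine pairs of inputs and their one-hot labels with a coefficient drawn from a Beta distribution. A sketch using the standard library's `random.betavariate`, with flat lists standing in for image tensors:

```python
import random

# Sketch of a mixup pair constructor: convexly combine two examples and their
# one-hot labels with lam ~ Beta(alpha, alpha). Flat lists stand in for the
# input tensors used in practice.

def mixup_pair(x1, y1, x2, y2, alpha=0.2):
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1.0 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1.0 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

random.seed(0)
x, y, lam = mixup_pair([0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0])
print(lam, x, y)  # mixed input and soft label
```

Training on these virtual examples regularizes the network toward linear behavior between training points, which is the mechanism behind the robustness effects described above.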
2 code implementations • 15 Sep 2017 • Olivier Goudet, Diviyan Kalainathan, Philippe Caillou, Isabelle Guyon, David Lopez-Paz, Michèle Sebag
We introduce a new approach to functional causal modeling from observational data, called Causal Generative Neural Networks (CGNN).
6 code implementations • ICML 2018 • Piotr Bojanowski, Armand Joulin, David Lopez-Paz, Arthur Szlam
Generative Adversarial Networks (GANs) have achieved remarkable results in the task of generating realistic natural images.
5 code implementations • NeurIPS 2017 • David Lopez-Paz, Marc'Aurelio Ranzato
One major obstacle towards AI is the poor ability of models to solve new problems more quickly and without forgetting previously acquired knowledge.
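Episodic-memory methods like the one in this paper constrain each update so that the loss on stored past examples does not increase. For a single violated memory constraint the projection has a closed form; a sketch of that case (GEM itself solves a small quadratic program when several memories conflict):

```python
# Sketch of the gradient projection at the heart of episodic-memory continual
# learning: if the proposed gradient g would increase the loss on a stored
# memory (negative inner product with the memory gradient g_mem), project g
# onto the half-space where the memory loss does not increase. This is the
# closed-form single-constraint case only.

def project(g, g_mem):
    dot = sum(a * b for a, b in zip(g, g_mem))
    if dot >= 0.0:                       # no interference: keep g as-is
        return list(g)
    scale = dot / sum(b * b for b in g_mem)
    return [a - scale * b for a, b in zip(g, g_mem)]

print(project([1.0, 0.0], [0.0, 1.0]))   # orthogonal: unchanged
print(project([1.0, -1.0], [0.0, 1.0]))  # conflicting: component removed
```

The projected gradient is the closest vector to the original update that still satisfies the memory constraint, so learning on the new task proceeds with minimal interference.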
no code implementations • 23 Feb 2017 • Mateo Rojas-Carulla, Marco Baroni, David Lopez-Paz
In this paper, we develop a framework to estimate the cause-effect relation between two static entities $x$ and $y$: for instance, an art masterpiece $x$ and its fraudulent copy $y$.
no code implementations • 26 Nov 2016 • Z. Berkay Celik, David Lopez-Paz, Patrick McDaniel
In this paper, we present privacy distillation, a mechanism which allows patients to control the type and amount of information they wish to disclose to the healthcare providers for use in statistical models.
1 code implementation • 20 Oct 2016 • David Lopez-Paz, Maxime Oquab
The goal of this paper is to establish the properties, performance, and uses of C2ST.
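A classifier two-sample test (C2ST) labels samples from the two distributions 0 and 1 and asks how well a classifier separates them: near-chance accuracy is evidence the distributions are equal. A sketch using a leave-one-out 1-nearest-neighbor classifier as a stand-in for the learned classifier:

```python
import random

# Sketch of a classifier two-sample test (C2ST): label samples from P with 0
# and from Q with 1, then measure how well a classifier tells them apart.
# A leave-one-out 1-nearest-neighbor rule stands in for the trained
# classifier; accuracy near 0.5 suggests P = Q, accuracy near 1 suggests not.

def c2st_accuracy(p_samples, q_samples):
    data = [(x, 0) for x in p_samples] + [(x, 1) for x in q_samples]
    correct = 0
    for i, (x, label) in enumerate(data):
        nearest = min(
            (d for j, d in enumerate(data) if j != i),
            key=lambda d: abs(d[0] - x),
        )
        correct += nearest[1] == label
    return correct / len(data)

random.seed(0)
p = [random.gauss(0.0, 1.0) for _ in range(50)]
q_far = [random.gauss(10.0, 1.0) for _ in range(50)]   # clearly different
q_same = [random.gauss(0.0, 1.0) for _ in range(50)]   # same distribution
print(c2st_accuracy(p, q_far))    # close to 1.0
print(c2st_accuracy(p, q_same))   # close to 0.5
```

In practice any classifier (the paper studies neural networks, among others) can play this role, and the accuracy on a held-out split yields a proper test statistic with a known null distribution.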
no code implementations • 12 Jul 2016 • David Lopez-Paz
Second, we build on this framework to interpret the problem of causal inference as the task of distribution classification, yielding a family of novel causal inference algorithms.
2 code implementations • CVPR 2017 • David Lopez-Paz, Robert Nishihara, Soumith Chintala, Bernhard Schölkopf, Léon Bottou
Our experiments demonstrate the existence of a relation between the direction of causality and the difference between objects and their contexts, and by the same token, the existence of observable signals that reveal the causal dispositions of objects.
no code implementations • 9 Feb 2016 • Ilya Tolstikhin, David Lopez-Paz
Transductive learning considers a training set of $m$ labeled samples and a test set of $u$ unlabeled samples, with the goal of best labeling that particular test set.
1 code implementation • 11 Nov 2015 • David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, Vladimir Vapnik
Distillation (Hinton et al., 2015) and privileged information (Vapnik & Izmailov, 2015) are two techniques that enable machines to learn from other machines.
no code implementations • 12 Aug 2015 • Robert Nishihara, David Lopez-Paz, Léon Bottou
This work is naturally framed in the extreme bandit setting, which deals with sequentially choosing which distribution from a collection to sample in order to minimize (maximize) the single best cost (reward).
1 code implementation • 9 Feb 2015 • David Lopez-Paz, Krikamol Muandet, Bernhard Schölkopf, Ilya Tolstikhin
We pose causal inference as the problem of learning to classify probability distributions.
no code implementations • 16 Sep 2014 • Daniel Hernández-Lobato, Pablo Morales-Mombiela, David Lopez-Paz, Alberto Suárez
The problem of non-linear causal inference is addressed by performing an embedding in an expanded feature space, in which the relation between causes and effects can be assumed to be linear.
no code implementations • 15 Sep 2014 • David Lopez-Paz, Krikamol Muandet, Benjamin Recht
We are interested in learning causal relationships between pairs of random variables, purely from observational data.
no code implementations • 1 Feb 2014 • David Lopez-Paz, Suvrit Sra, Alex Smola, Zoubin Ghahramani, Bernhard Schölkopf
Although nonlinear variants of PCA and CCA have been proposed, these are computationally prohibitive at large scale.
no code implementations • NeurIPS 2013 • David Lopez-Paz, Philipp Hennig, Bernhard Schölkopf
We introduce the Randomized Dependence Coefficient (RDC), a measure of non-linear dependence between random variables of arbitrary dimension based on the Hirschfeld-Gebelein-Rényi Maximum Correlation Coefficient.
no code implementations • NeurIPS 2012 • David Lopez-Paz, Jose M. Hernández-Lobato, Bernhard Schölkopf
A new framework based on the theory of copulas is proposed to address semi-supervised domain adaptation problems.