Search Results for author: Diane Bouchacourt

Found 31 papers, 13 papers with code

Embracing Diversity: Interpretable Zero-shot classification beyond one vector per class

no code implementations 25 Apr 2024 Mazda Moayeri, Michael Rabbat, Mark Ibrahim, Diane Bouchacourt

We propose a method to encode and account for diversity within a class using inferred attributes, while remaining in the zero-shot setting and requiring no retraining.
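
As a rough illustration of the idea in this abstract, one can score an image against several attribute-conditioned embeddings per class instead of a single class vector. The sketch below assumes precomputed, L2-normalized CLIP-style embeddings; the names and the aggregation rule are hypothetical, not the paper's method.

```python
import numpy as np

def classify_multi_vector(image_emb, class_attr_embs, reduce="mean"):
    """Score an image against each class's set of attribute embeddings.

    image_emb:       (d,) L2-normalized image embedding.
    class_attr_embs: dict of class name -> (k, d) array of L2-normalized
                     text embeddings, one per inferred attribute.
    """
    scores = {}
    for name, attrs in class_attr_embs.items():
        sims = attrs @ image_emb  # cosine similarity to each attribute
        scores[name] = sims.mean() if reduce == "mean" else sims.max()
    return max(scores, key=scores.get), scores

# Toy usage with random stand-in embeddings.
rng = np.random.default_rng(0)
normalize = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
img = normalize(rng.normal(size=8))
classes = {"cat": normalize(rng.normal(size=(3, 8))),
           "dog": normalize(rng.normal(size=(2, 8)))}
print(classify_multi_vector(img, classes))
```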

PUG: Photorealistic and Semantically Controllable Synthetic Data for Representation Learning

1 code implementation NeurIPS 2023 Florian Bordes, Shashank Shekhar, Mark Ibrahim, Diane Bouchacourt, Pascal Vincent, Ari S. Morcos

Synthetic image datasets offer unmatched advantages for designing and evaluating deep neural networks: they make it possible to (i) render as many data samples as needed, (ii) precisely control each scene and yield granular ground-truth labels (and captions), and (iii) precisely control distribution shifts between training and testing to isolate variables of interest for sound experimentation.

Representation Learning
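
Point (iii) can be made concrete with a toy example: given per-sample factor annotations of the kind a synthetic dataset provides, a train/test split can isolate a shift in a single factor. The column names below are invented for illustration, not PUG's actual schema.

```python
import pandas as pd

# Hypothetical factor annotations for four rendered images.
meta = pd.DataFrame({
    "image":      ["img0.png", "img1.png", "img2.png", "img3.png"],
    "object":     ["chair", "chair", "lamp", "lamp"],
    "background": ["forest", "desert", "forest", "desert"],
})

# Train only on forest backgrounds and test on desert ones, so the
# train/test shift is confined to one known factor of variation.
train = meta[meta["background"] == "forest"]
test = meta[meta["background"] == "desert"]
print(len(train), len(test))
```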

Does Progress On Object Recognition Benchmarks Improve Real-World Generalization?

no code implementations 24 Jul 2023 Megan Richards, Polina Kirichenko, Diane Bouchacourt, Mark Ibrahim

Second, we study model generalization across geographies by measuring disparities in performance across regions, a more fine-grained measure of real-world generalization.

Object Recognition

Pinpointing Why Object Recognition Performance Degrades Across Income Levels and Geographies

1 code implementation 11 Apr 2023 Laura Gustafson, Megan Richards, Melissa Hall, Caner Hazirbas, Diane Bouchacourt, Mark Ibrahim

As an example, we show that mitigating a model's vulnerability to texture can improve performance at lower income levels.

Object Recognition

ImageNet-X: Understanding Model Mistakes with Factor of Variation Annotations

no code implementations 3 Nov 2022 Badr Youbi Idrissi, Diane Bouchacourt, Randall Balestriero, Ivan Evtimov, Caner Hazirbas, Nicolas Ballas, Pascal Vincent, Michal Drozdzal, David Lopez-Paz, Mark Ibrahim

Equipped with ImageNet-X, we investigate 2,200 current recognition models and study the types of mistakes as a function of a model's (1) architecture, e.g., transformer vs. convolutional, (2) learning paradigm, e.g., supervised vs. self-supervised, and (3) training procedures, e.g., data augmentation.

Data Augmentation
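
The kind of analysis such annotations enable can be sketched as a simple group-by over factor labels. The toy data and column names below are hypothetical, not ImageNet-X's actual schema.

```python
import pandas as pd

# One row per (image, model) prediction, tagged with the annotated
# factor of variation that dominates the image.
preds = pd.DataFrame({
    "factor":  ["pose", "pose", "background", "background", "occlusion"],
    "correct": [1, 0, 1, 1, 0],
})

# Error rate per factor: which factors of variation drive mistakes?
error_by_factor = 1.0 - preds.groupby("factor")["correct"].mean()
print(error_by_factor.sort_values(ascending=False))
```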

The Robustness Limits of SoTA Vision Models to Natural Variation

no code implementations 24 Oct 2022 Mark Ibrahim, Quentin Garrido, Ari Morcos, Diane Bouchacourt

We study not only how robust recent state-of-the-art models are, but also the extent to which models can generalize to variation in factors when those factors are present during training.

Robust Self-Supervised Learning with Lie Groups

no code implementations 24 Oct 2022 Mark Ibrahim, Diane Bouchacourt, Ari Morcos

Our approach applies the formalism of Lie groups to capture continuous transformations, improving models' robustness to distributional shifts.

Self-Supervised Learning
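
For background, the Lie-group formalism refers to continuous families of transformations obtained by exponentiating a generator matrix. The minimal 2-D rotation example below is generic textbook math, not the paper's model.

```python
import numpy as np
from scipy.linalg import expm

# Generator of 2-D rotations (a basis element of the Lie algebra so(2)).
G = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def rotate(x, theta):
    """Apply the group element exp(theta * G), i.e. rotation by theta."""
    return expm(theta * G) @ x

x = np.array([1.0, 0.0])
print(rotate(x, np.pi / 2))  # ~[0, 1]: a quarter turn
```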

Disentanglement of Correlated Factors via Hausdorff Factorized Support

1 code implementation 13 Oct 2022 Karsten Roth, Mark Ibrahim, Zeynep Akata, Pascal Vincent, Diane Bouchacourt

We show that the use of HFS consistently facilitates disentanglement and recovery of ground-truth factors across a variety of correlation settings and benchmarks, even under severe training correlations and correlation shifts, with relative improvements of over $60\%$ against existing disentanglement methods in some settings.

Disentanglement
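
A rough sketch of the quantity HFS is built around: a Hausdorff distance between the observed joint support of a pair of latent dimensions and the product of their marginal supports. The actual method differs in detail; this only illustrates what a factorized support means.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def pairwise_support_gap(z_i, z_j):
    """Distance from the factorized support to the observed joint support.

    Zero iff every combination of observed marginal values also occurs
    jointly, i.e. the joint support factorizes for this pair.
    """
    joint = np.stack([z_i, z_j], axis=1)
    grid_i, grid_j = np.meshgrid(z_i, z_j, indexing="ij")
    product = np.stack([grid_i.ravel(), grid_j.ravel()], axis=1)
    return directed_hausdorff(product, joint)[0]

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 2))
print(pairwise_support_gap(z[:, 0], z[:, 1]))
```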

Measuring and signing fairness as performance under multiple stakeholder distributions

no code implementations 20 Jul 2022 David Lopez-Paz, Diane Bouchacourt, Levent Sagun, Nicolas Usunier

By highlighting connections to the literature in domain generalization, we propose to measure fairness as the ability of the system to generalize under multiple stress tests -- distributions of examples with social relevance.

Domain Generalization, Fairness
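
A minimal sketch of this evaluation protocol, assuming each stress test is expressed as a reweighting of a labeled test set; the weighting scheme below is illustrative, not the paper's.

```python
import numpy as np

def worst_case_accuracy(correct, stress_weights):
    """correct: (n,) 0/1 array; stress_weights: list of (n,) weight arrays."""
    accs = [np.average(correct, weights=w) for w in stress_weights]
    return min(accs), accs

correct = np.array([1, 1, 0, 1, 0, 1])
uniform = np.ones(6)
group_a = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # stress test: group A only
group_b = 1.0 - group_a                              # stress test: group B only
print(worst_case_accuracy(correct, [uniform, group_a, group_b]))
```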

Grounding inductive biases in natural images: invariance stems from variations in data

1 code implementation NeurIPS 2021 Diane Bouchacourt, Mark Ibrahim, Ari S. Morcos

While prior work has focused on synthetic data, we attempt here to characterize the factors of variation in a real dataset, ImageNet, and study the invariance of both standard residual networks and the recently proposed vision transformer with respect to changes in these factors.

Data Augmentation, Translation

Addressing the Topological Defects of Disentanglement via Distributed Operators

1 code implementation 10 Feb 2021 Diane Bouchacourt, Mark Ibrahim, Stéphane Deny

A core challenge in Machine Learning is to learn to disentangle natural factors of variation in data (e.g., object shape vs. pose).

Disentanglement

Addressing the Topological Defects of Disentanglement

no code implementations 1 Jan 2021 Diane Bouchacourt, Mark Ibrahim, Stephane Deny

A core challenge in Machine Learning is to disentangle natural factors of variation in data (e.g., object shape vs. pose).

Disentanglement

Think before you act: A simple baseline for compositional generalization

1 code implementation 29 Sep 2020 Christina Heinze-Deml, Diane Bouchacourt

Contrary to humans, who can recombine familiar expressions to create novel ones, modern neural networks struggle to do so.

Compositionality and Generalization in Emergent Languages

1 code implementation ACL 2020 Rahma Chaabouni, Eugene Kharitonov, Diane Bouchacourt, Emmanuel Dupoux, Marco Baroni

Third, while compositionality is not necessary for generalization, it provides an advantage in terms of language transmission: The more compositional a language is, the more easily it will be picked up by new learners, even when the latter differ in architecture from the original agents.

Disentanglement
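
Compositionality in this literature is commonly quantified with topographic similarity: the correlation between pairwise distances in meaning space and in message space. A minimal sketch (the paper also considers other measures):

```python
from itertools import combinations
from scipy.stats import spearmanr

def topographic_similarity(meanings, messages, dist):
    """Spearman correlation of pairwise meaning vs. message distances."""
    pairs = list(combinations(range(len(meanings)), 2))
    d_meaning = [dist(meanings[i], meanings[j]) for i, j in pairs]
    d_message = [dist(messages[i], messages[j]) for i, j in pairs]
    return spearmanr(d_meaning, d_message).correlation

hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
meanings = ["aa", "ab", "ba", "bb"]
messages = ["xx", "xy", "yx", "yy"]  # a perfectly compositional code
print(topographic_similarity(meanings, messages, hamming))  # 1.0
```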

A Benchmark for Systematic Generalization in Grounded Language Understanding

4 code implementations NeurIPS 2020 Laura Ruis, Jacob Andreas, Marco Baroni, Diane Bouchacourt, Brenden M. Lake

In this paper, we introduce a new benchmark, gSCAN, for evaluating compositional generalization in situated language understanding.

Systematic Generalization

Focus on What's Informative and Ignore What's not: Communication Strategies in a Referential Game

no code implementations 5 Nov 2019 Roberto Dessì, Diane Bouchacourt, Davide Crepaldi, Marco Baroni

Research in multi-agent cooperation has shown that artificial agents are able to learn to play a simple referential game while developing a shared lexicon.

EDUCE: Explaining model Decision through Unsupervised Concepts Extraction

no code implementations 25 Sep 2019 Diane Bouchacourt, Ludovic Denoyer

Therefore, we propose a new self-interpretable model that performs output prediction and simultaneously provides an explanation in terms of the presence of particular concepts in the input.

Sentiment Analysis, Text Classification +1

Mastering emergent language: learning to guide in simulated navigation

no code implementations 14 Aug 2019 Mathijs Mul, Diane Bouchacourt, Elia Bruni

A typical setup to achieve this uses a scripted teacher that guides a virtual agent with language instructions.

Navigate

EGG: a toolkit for research on Emergence of lanGuage in Games

no code implementations IJCNLP 2019 Eugene Kharitonov, Rahma Chaabouni, Diane Bouchacourt, Marco Baroni

There is renewed interest in simulating language emergence among deep neural agents that communicate to jointly solve a task, spurred by the practical aim to develop language-enabled interactive AIs, as well as by theoretical questions about the evolution of human language.

Entropy Minimization In Emergent Languages

1 code implementation ICML 2020 Eugene Kharitonov, Rahma Chaabouni, Diane Bouchacourt, Marco Baroni

There is growing interest in studying the languages that emerge when neural agents are jointly trained to solve tasks requiring communication through a discrete channel.

Representation Learning

Miss Tools and Mr Fruit: Emergent communication in agents learning about object affordances

1 code implementation ACL 2019 Diane Bouchacourt, Marco Baroni

Recent research studies communication emergence in communities of deep network agents assigned a joint task, hoping to gain insights on human language evolution.

EDUCE: Explaining model Decisions through Unsupervised Concepts Extraction

no code implementations 28 May 2019 Diane Bouchacourt, Ludovic Denoyer

Therefore, we propose a new self-interpretable model that performs output prediction and simultaneously provides an explanation in terms of the presence of particular concepts in the input.

Sentiment Analysis, Text Classification +1

How agents see things: On visual representations in an emergent language game

no code implementations EMNLP 2018 Diane Bouchacourt, Marco Baroni

There is growing interest in the language developed by agents interacting in emergent-communication settings.

Multi-Level Variational Autoencoder: Learning Disentangled Representations from Grouped Observations

2 code implementations 24 May 2017 Diane Bouchacourt, Ryota Tomioka, Sebastian Nowozin

We would like to learn a representation of the data which decomposes an observation into factors of variation which we can independently control.

Disentanglement
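
Schematically, the grouping idea pairs a content latent shared across a group of observations with a style latent private to each observation. The sketch below substitutes plain averaging for the model's product-of-Gaussians evidence accumulation, purely to keep the illustration short.

```python
import numpy as np

def group_content(encodings, group_ids):
    """Replace each item's content code with its group's shared code."""
    encodings = np.asarray(encodings, dtype=float)
    group_ids = np.asarray(group_ids)
    shared = np.empty_like(encodings)
    for g in np.unique(group_ids):
        mask = group_ids == g
        shared[mask] = encodings[mask].mean(axis=0)  # one code per group
    return shared

codes = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]
print(group_content(codes, group_ids=[0, 0, 1]))  # first two rows coincide
```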

DISCO Nets: DISsimilarity COefficient Networks

no code implementations NeurIPS 2016 Diane Bouchacourt, Pawan K. Mudigonda, Sebastian Nowozin

We present a new type of probabilistic model which we call DISsimilarity COefficient Networks (DISCO Nets).

DISCO Nets: DISsimilarity COefficient Networks

no code implementations 8 Jun 2016 Diane Bouchacourt, M. Pawan Kumar, Sebastian Nowozin

We present a new type of probabilistic model which we call DISsimilarity COefficient Networks (DISCO Nets).
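
The dissimilarity-coefficient objective behind both listings of this paper can be sketched as follows: candidate outputs sampled from the network are scored against the ground truth, minus a diversity term between the samples themselves. This is a simplified rendering with a generic distance `delta`, not a drop-in reimplementation.

```python
import numpy as np

def disco_loss(samples, y_true, delta, gamma=1.0):
    """samples: K >= 2 model outputs for one input; y_true: the target."""
    K = len(samples)
    fit = np.mean([delta(y, y_true) for y in samples])
    diversity = np.mean([delta(samples[i], samples[j])
                         for i in range(K) for j in range(K) if i != j])
    return fit - 0.5 * gamma * diversity

l2 = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
print(disco_loss([[0.9, 0.1], [1.1, -0.1]], [1.0, 0.0], l2))
```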

Entropy-Based Latent Structured Output Prediction

no code implementations ICCV 2015 Diane Bouchacourt, Sebastian Nowozin, M. Pawan Kumar

To this end, we propose a novel prediction criterion that includes as special cases all previous prediction criteria that have been used in the literature.

Structured Prediction
