no code implementations • 23 May 2022 • Kamalika Chaudhuri, Kartik Ahuja, Martin Arjovsky, David Lopez-Paz
When facing data with imbalanced classes or groups, practitioners follow an intriguing strategy to achieve the best results.
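The strategy in question is throwing data away: subsampling the larger classes or groups until all are equally represented. A minimal NumPy sketch of group-balanced subsampling, with illustrative array names rather than the paper's code:

```python
import numpy as np

def subsample_to_smallest_group(X, y, groups, seed=0):
    """Downsample every group to the size of the smallest one."""
    rng = np.random.default_rng(seed)
    unique = np.unique(groups)
    n_min = min(np.sum(groups == g) for g in unique)
    keep = np.concatenate([
        rng.choice(np.flatnonzero(groups == g), size=n_min, replace=False)
        for g in unique
    ])
    return X[keep], y[keep], groups[keep]
```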
1 code implementation • 27 Oct 2021 • Badr Youbi Idrissi, Martin Arjovsky, Mohammad Pezeshki, David Lopez-Paz
We study the problem of learning classifiers that perform well across (known or unknown) groups of data.
Ranked #1 on Out-of-Distribution Generalization on UrbanCars
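Performance "across groups" is usually summarized as worst-group accuracy, the metric behind leaderboards like the one above. A minimal sketch (array names are illustrative):

```python
import numpy as np

def worst_group_accuracy(y_true, y_pred, groups):
    """Accuracy of the worst-performing group."""
    return min(np.mean(y_pred[groups == g] == y_true[groups == g])
               for g in np.unique(groups))
```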
1 code implementation • 3 Mar 2021 • Martin Arjovsky
A central topic in the thesis is the strong link between discovering the causal structure of the data, finding features that are reliable predictors regardless of their context, and out-of-distribution generalization.
BIG-bench Machine Learning • Out-of-Distribution Generalization
2 code implementations • 22 Feb 2021 • Benjamin Aubin, Agnieszka Słowik, Martin Arjovsky, Leon Bottou, David Lopez-Paz
There is an increasing interest in algorithms to learn invariant correlations across training environments.
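To make "invariant correlations across training environments" concrete, here is a toy two-environment construction in the spirit of such benchmarks; the exact design is a simplification of my own, not one of the paper's unit tests. The invariant feature relates to the label the same way everywhere, while the spurious feature's correlation flips sign across environments:

```python
import numpy as np

def make_env(n, spurious_corr, rng):
    y = rng.integers(0, 2, n) * 2 - 1                 # labels in {-1, +1}
    x_inv = y + rng.normal(0.0, 1.0, n)               # invariant across environments
    flip = rng.random(n) > (1 + spurious_corr) / 2    # P(sign match) = (1 + corr) / 2
    x_spu = np.where(flip, -y, y) + rng.normal(0.0, 0.1, n)
    return np.stack([x_inv, x_spu], axis=1), y

rng = np.random.default_rng(0)
envs = [make_env(1000, c, rng) for c in (0.9, -0.9)]  # spurious correlation flips
```

A classifier that latches onto the low-noise spurious feature does well in the first environment and fails in the second; only the invariant feature transfers.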
no code implementations • NeurIPS 2020 • Sarah Jane Hong, Martin Arjovsky, Darryl Barnhart, Ian Thompson
We formalize and attack the problem of generating new images from old ones that are as diverse as possible, allowing them to change without restriction only in certain parts of the image while remaining globally consistent.
1 code implementation • ICLR 2020 • Zhengdao Chen, Jianyu Zhang, Martin Arjovsky, Léon Bottou
We propose Symplectic Recurrent Neural Networks (SRNNs) as learning algorithms that capture the dynamics of physical systems from observed trajectories.
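The "symplectic" part refers to the integrator used to unroll the learned dynamics. A minimal sketch of the leapfrog scheme for a separable Hamiltonian H(q, p) = T(p) + V(q); the harmonic-oscillator example and function names are illustrative:

```python
def leapfrog_step(q, p, dVdq, dTdp, dt):
    """One leapfrog step: half momentum kick, full position drift, half kick."""
    p = p - 0.5 * dt * dVdq(q)
    q = q + dt * dTdp(p)
    p = p - 0.5 * dt * dVdq(q)
    return q, p

# Harmonic oscillator, H = p^2/2 + q^2/2: energy stays nearly constant.
q, p = 1.0, 0.0
for _ in range(100):
    q, p = leapfrog_step(q, p, dVdq=lambda q: q, dTdp=lambda p: p, dt=0.1)
```

In the SRNN setting, the derivatives of V and T come from neural networks trained on observed trajectories rather than from closed forms.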
14 code implementations • 5 Jul 2019 • Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, David Lopez-Paz
We introduce Invariant Risk Minimization (IRM), a learning paradigm to estimate invariant correlations across multiple training distributions.
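The practical objective, IRMv1, augments the empirical risk with a penalty: the squared gradient of each environment's risk with respect to a frozen dummy scale w = 1.0 multiplying the classifier output. A compact PyTorch sketch (the binary-classification setup and variable names are assumptions):

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits, y):
    """Squared gradient of the risk w.r.t. a dummy scale w = 1.0."""
    w = torch.tensor(1.0, requires_grad=True)
    risk = F.binary_cross_entropy_with_logits(logits * w, y)
    (grad,) = torch.autograd.grad(risk, w, create_graph=True)
    return grad ** 2

def irm_objective(model, envs, lam=100.0):
    """Mean risk plus the IRMv1 penalty, averaged over environments."""
    risks, penalties = [], []
    for x, y in envs:                       # one (x, y) batch per environment
        logits = model(x).squeeze(-1)       # y: float targets in {0, 1}
        risks.append(F.binary_cross_entropy_with_logits(logits, y))
        penalties.append(irmv1_penalty(logits, y))
    return torch.stack(risks).mean() + lam * torch.stack(penalties).mean()
```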
no code implementations • 21 Dec 2017 • Leon Bottou, Martin Arjovsky, David Lopez-Paz, Maxime Oquab
Learning algorithms for implicit generative models can optimize a variety of criteria that measure how the data distribution differs from the implicit model distribution, including the Wasserstein distance, the Energy distance, and the Maximum Mean Discrepancy criterion.
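Of the three criteria named, the Maximum Mean Discrepancy is the most direct to compute from samples. A sketch of the biased MMD² estimator with a Gaussian kernel (the bandwidth choice is an assumption):

```python
import torch

def mmd2_rbf(x, y, bandwidth=1.0):
    """Biased estimate of MMD^2 between sample sets x and y (rows are points)."""
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```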
111 code implementations • NeurIPS 2017 • Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron Courville
Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability.
Ranked #3 on Image Generation on CAT 256x256
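The paper's remedy for that instability is a gradient penalty: the critic's gradient norm is pushed toward 1 at points interpolated between real and generated samples. A PyTorch sketch of that term (shapes and names are illustrative):

```python
import torch

def gradient_penalty(critic, real, fake):
    """WGAN-GP term: (||grad_x critic(x_hat)||_2 - 1)^2 on random interpolates."""
    eps = torch.rand(real.size(0), *[1] * (real.dim() - 1), device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    (grads,) = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```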
120 code implementations • 26 Jan 2017 • Martin Arjovsky, Soumith Chintala, Léon Bottou
We introduce a new algorithm named WGAN, an alternative to traditional GAN training.
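In WGAN the discriminator becomes a critic: it maximizes the gap between its scores on real and generated samples, and its weights are clipped to a small box to keep it approximately Lipschitz. A minimal sketch of one critic update (the clip value 0.01 follows the paper; the rest is illustrative):

```python
import torch

def critic_step(critic, optimizer, real, fake, clip=0.01):
    """Minimize E[f(fake)] - E[f(real)], then clip weights into [-clip, clip]."""
    optimizer.zero_grad()
    loss = critic(fake).mean() - critic(real).mean()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-clip, clip)
```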
no code implementations • 17 Jan 2017 • Martin Arjovsky, Léon Bottou
The goal of this paper is not to introduce a single algorithm or method, but to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks.
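One of the paper's observations is that when the data and model distributions concentrate on low-dimensional supports with little overlap, the discriminator can become perfect and its gradients uninformative; smoothing both distributions by adding noise to the discriminator's inputs is a remedy the analysis supports. A one-line sketch (the noise scale is an assumption):

```python
import torch

def noisy(x, sigma=0.1):
    """Gaussian instance noise applied to both real and generated samples
    before the discriminator, so that the two distributions overlap."""
    return x + sigma * torch.randn_like(x)
```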
9 code implementations • 2 Jun 2016 • Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, Aaron Courville
We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process.
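In ALI the adversarial game is played over joint pairs: the discriminator tries to tell (x, encoder(x)) apart from (decoder(z), z). A skeletal PyTorch sketch of the two losses (module and variable names are illustrative, not the paper's code):

```python
import torch
import torch.nn.functional as F

def ali_losses(encoder, decoder, discriminator, x, z):
    """D separates (x, E(x)) from (G(z), z); E and G jointly try to fool it."""
    bce = F.binary_cross_entropy_with_logits
    d_enc = discriminator(x, encoder(x))    # "real" pair: data plus inferred code
    d_gen = discriminator(decoder(z), z)    # "fake" pair: sample plus its code
    d_loss = bce(d_enc, torch.ones_like(d_enc)) + bce(d_gen, torch.zeros_like(d_gen))
    g_loss = bce(d_enc, torch.zeros_like(d_enc)) + bce(d_gen, torch.ones_like(d_gen))
    return d_loss, g_loss
```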
2 code implementations • 20 Nov 2015 • Martin Arjovsky, Amar Shah, Yoshua Bengio
When the eigenvalues of the hidden-to-hidden weight matrix deviate from absolute value 1, optimization becomes difficult due to the well-studied problem of vanishing and exploding gradients, especially when trying to learn long-term dependencies.
Ranked #26 on Sequential Image Classification on Sequential MNIST
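The eigenvalue condition is easy to see numerically: driving a signal through the same recurrent matrix many times scales it roughly by the spectral radius raised to the number of steps. A small NumPy demonstration using an orthogonal matrix, whose eigenvalues all have absolute value exactly 1 (sizes and scales are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))   # orthogonal: |eigenvalues| = 1
for scale in (0.9, 1.0, 1.1):
    v = np.ones(64)
    for _ in range(200):                          # 200 recurrent steps
        v = (scale * Q) @ v
    print(scale, np.linalg.norm(v))               # vanishes / stays put / explodes
```

Parameterizing the recurrent matrix to be unitary, as the paper proposes, pins every eigenvalue to the unit circle by construction.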
no code implementations • 30 May 2015 • Martin Arjovsky
Nonconvex optimization problems, such as those arising in the training of deep neural networks, suffer from a phenomenon called saddle point proliferation.
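The canonical saddle is f(x, y) = x² - y²: its gradient vanishes at the origin, which is neither a minimum nor a maximum. A tiny demonstration of gradient descent stalling near it when initialized close to the attracting direction (step size and starting point are illustrative):

```python
import numpy as np

grad = lambda p: np.array([2 * p[0], -2 * p[1]])  # gradient of x^2 - y^2

p = np.array([1.0, 1e-8])             # almost exactly on the attracting axis
for t in range(120):
    p = p - 0.1 * grad(p)
    if t % 30 == 0:
        print(t, p)                   # x collapses fast; y needs ~100 steps to escape
```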