1 code implementation • 15 Feb 2022 • Aleksandr Beznosikov, Eduard Gorbunov, Hugo Berard, Nicolas Loizou
Although variants of the new methods are known for solving minimization problems, they were never considered or analyzed for solving min-max problems and VIPs.
1 code implementation • 16 Nov 2021 • Eduard Gorbunov, Hugo Berard, Gauthier Gidel, Nicolas Loizou
The Stochastic Extragradient (SEG) method is one of the most popular algorithms for solving min-max optimization and variational inequality problems (VIPs) appearing in various machine learning tasks.
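The extragradient update underlying SEG can be sketched on a toy problem; the bilinear game, step size, and iteration count below are illustrative choices, not the paper's setting, and the sketch is deterministic (no stochastic oracle).

```python
import numpy as np

# Illustrative sketch: extragradient on the bilinear game f(x, y) = x * y,
# whose game operator is F(x, y) = (grad_x f, -grad_y f) = (y, -x).
def F(z):
    x, y = z
    return np.array([y, -x])

def extragradient(z0, step=0.1, iters=200):
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        z_half = z - step * F(z)       # extrapolation (look-ahead) step
        z = z - step * F(z_half)       # update using the look-ahead gradient
    return z

z = extragradient([1.0, 1.0])  # spirals inward toward the equilibrium (0, 0)
```

On this bilinear game, plain gradient descent-ascent diverges while the extrapolation step lets the iterates contract toward the equilibrium, which is why extragradient-type methods are a natural fit for min-max problems.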
no code implementations • 29 Sep 2021 • Chiara Regniez, Gauthier Gidel, Hugo Berard
We show a formal connection between our formulation and optimal transport by relaxing adversarial training (AT) into a distributionally robust optimization (DRO) problem with an $\infty$-Wasserstein constraint.
1 code implementation • NeurIPS 2021 • Nicolas Loizou, Hugo Berard, Gauthier Gidel, Ioannis Mitliagkas, Simon Lacoste-Julien
Two of the most prominent algorithms for solving unconstrained smooth games are the classical stochastic gradient descent-ascent (SGDA) and the recently introduced stochastic consensus optimization (SCO) [Mescheder et al., 2017].
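The contrast between the two methods can be seen on a toy bilinear game; this is a deterministic, illustrative sketch (problem, step sizes, and the consensus weight are assumed values, not taken from the paper).

```python
import numpy as np

# Illustrative sketch: on the bilinear game f(x, y) = x * y the game
# operator is F(x, y) = (y, -x), and ||F(z)||^2 = ||z||^2.
def F(z):
    x, y = z
    return np.array([y, -x])

def gda_step(z, step=0.1):
    # plain gradient descent-ascent: follow the game operator directly
    return z - step * F(z)

def consensus_step(z, step=0.1, gamma=0.5):
    # consensus optimization adds a gradient penalty term
    # grad(0.5 * ||F(z)||^2), which for this bilinear game is simply z
    return z - step * (F(z) + gamma * z)

z_gda = np.array([1.0, 1.0])
z_co = np.array([1.0, 1.0])
for _ in range(200):
    z_gda = gda_step(z_gda)
    z_co = consensus_step(z_co)
# z_gda spirals outward (GDA diverges here), while z_co converges
# toward the equilibrium (0, 0).
```

The gradient penalty damps the rotational part of the dynamics that makes plain descent-ascent cycle or diverge on adversarial (min-max) objectives.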
1 code implementation • ICLR 2022 • Andjela Mladenovic, Avishek Joey Bose, Hugo Berard, William L. Hamilton, Simon Lacoste-Julien, Pascal Vincent, Gauthier Gidel
Adversarial attacks expose important vulnerabilities of deep learning models, yet little attention has been paid to settings where data arrives as a stream.
no code implementations • ICML 2020 • Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, Ioannis Mitliagkas
The success of adversarial formulations in machine learning has brought renewed motivation for smooth games.
1 code implementation • NeurIPS 2020 • Avishek Joey Bose, Gauthier Gidel, Hugo Berard, Andre Cianflone, Pascal Vincent, Simon Lacoste-Julien, William L. Hamilton
We introduce Adversarial Example Games (AEG), a framework that models the crafting of adversarial examples as a min-max game between a generator of attacks and a classifier.
1 code implementation • ICLR 2020 • Hugo Berard, Gauthier Gidel, Amjad Almahairi, Pascal Vincent, Simon Lacoste-Julien
Generative adversarial networks have been very successful in generative modeling; however, they remain relatively challenging to train compared to standard deep neural networks.
1 code implementation • ICLR 2019 • Gauthier Gidel, Hugo Berard, Gaëtan Vignoud, Pascal Vincent, Simon Lacoste-Julien
Generative adversarial networks (GANs) form a generative modeling approach known for producing appealing samples, but they are notably difficult to train.
no code implementations • ICLR 2018 • Gabriel Huang, Hugo Berard, Ahmed Touati, Gauthier Gidel, Pascal Vincent, Simon Lacoste-Julien
Parametric adversarial divergences, which are a generalization of the losses used to train generative adversarial networks (GANs), have often been described as being approximations of their nonparametric counterparts, such as the Jensen-Shannon divergence, which can be derived under the so-called optimal discriminator assumption.