1 code implementation • 25 Nov 2022 • Shuyu Dong, Kento Uemura, Akito Fujii, Shuang Chang, Yusuke Koyanagi, Koji Maruhashi, Michèle Sebag
In the context of linear structural equation models (SEMs), this paper focuses on learning causal structures from the inverse covariance matrix.
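In a linear SEM X = BᵀX + E with independent noise, the precision matrix Θ = Σ⁻¹ factors as (I − B) Ω⁻¹ (I − B)ᵀ, so its off-diagonal support identifies the moral graph of the underlying DAG (though not the edge directions). A minimal sketch of this starting point, assuming Gaussian data and plain numpy; it is not the paper's algorithm:

```python
import numpy as np

def precision_support(X, eps=0.1):
    """Estimate the inverse covariance of data X (n samples x d variables)
    and threshold it: nonzero off-diagonal entries are candidate
    (undirected) edges of the moral graph of the underlying DAG."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / n
    theta = np.linalg.inv(cov + 1e-6 * np.eye(d))  # small ridge for stability
    support = np.abs(theta) > eps
    np.fill_diagonal(support, False)
    return theta, support

# Toy linear SEM: x0 -> x1 -> x2.
rng = np.random.default_rng(0)
n = 5000
x0 = rng.normal(size=n)
x1 = 2.0 * x0 + rng.normal(size=n)
x2 = -1.5 * x1 + rng.normal(size=n)
_, support = precision_support(np.column_stack([x0, x1, x2]))
print(support)  # expect edges (0,1) and (1,2); (0,2) absent
```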
1 code implementation • 10 Apr 2022 • Shuyu Dong, Michèle Sebag
Learning directed acyclic graphs (DAGs) has long been known as a critical challenge at the core of probabilistic and causal modeling.
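A common way to make this challenge amenable to continuous optimization is the differentiable acyclicity function of NOTEARS (Zheng et al., 2018), which is zero exactly when a weighted adjacency matrix encodes a DAG. The sketch below shows that characterization; it is a standard building block of this literature, not necessarily the formulation used in this paper:

```python
import numpy as np
from scipy.linalg import expm

def notears_acyclicity(W):
    """h(W) = tr(exp(W * W)) - d (Zheng et al., 2018): equals 0 iff the
    weighted adjacency matrix W encodes a DAG; positive values count
    weighted cycles, so h can serve as a differentiable constraint."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

cycle = np.array([[0., 1.], [1., 0.]])   # 2-cycle: 0 <-> 1
chain = np.array([[0., 1.], [0., 0.]])   # 0 -> 1, acyclic
print(notears_acyclicity(cycle))  # > 0
print(notears_acyclicity(chain))  # ~ 0
```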
no code implementations • 5 Nov 2021 • Mikhail Evchenko, Joaquin Vanschoren, Holger H. Hoos, Marc Schoenauer, Michèle Sebag
Machine learning, already at the core of increasingly many systems and applications, is set to become even more ubiquitous with the rapid rise of wearable devices and the Internet of Things.
no code implementations • 24 Jun 2020 • Gwendoline de Bie, Herilalaina Rakotoarison, Gabriel Peyré, Michèle Sebag
On both tasks, Dida learns meta-features supporting the characterization of a (labelled) dataset.
no code implementations • 4 Mar 2020 • Victor Berger, Michèle Sebag
We claim that a source of severe failures for Variational Auto-Encoders is the choice of the distribution class used for the observation model. A first theoretical and experimental contribution of the paper is to establish that even in the large sample limit with arbitrarily powerful neural architectures and latent space, the VAE fails if the sharpness of the distribution class does not match the scale of the data. Our second claim is that the distribution sharpness must preferably be learned by the VAE (as opposed to being fixed and optimized offline): autonomously adjusting this sharpness allows the VAE to dynamically control the trade-off between the optimization of the reconstruction loss and the latent compression.
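For a Gaussian observation model, the "sharpness" of the distribution class is the decoder variance; making it a learnable parameter is what lets the VAE arbitrate between reconstruction and compression. A minimal PyTorch sketch of a Gaussian reconstruction loss with one learnable log-sigma, as an illustration of the idea rather than the paper's exact model:

```python
import math
import torch
import torch.nn as nn

class GaussianDecoderNLL(nn.Module):
    """Reconstruction term for a Gaussian observation model N(x | x_hat, sigma^2 I)
    with a single learnable log-sigma (the "sharpness"). Small sigma sharpens
    the likelihood and upweights reconstruction; large sigma relaxes it in
    favour of the KL / compression term of the ELBO."""
    def __init__(self):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.zeros(()))

    def forward(self, x_hat, x):
        inv_var = torch.exp(-2.0 * self.log_sigma)
        nll = 0.5 * (inv_var * (x - x_hat) ** 2
                     + 2.0 * self.log_sigma + math.log(2.0 * math.pi))
        return nll.sum(dim=-1).mean()  # average over the batch
```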
no code implementations • 22 Jan 2020 • Victor Berger, Michèle Sebag
This paper focuses on their control.
2 code implementations • 1 Jun 2019 • Herilalaina Rakotoarison, Marc Schoenauer, Michèle Sebag
The AutoML task consists of selecting the proper algorithm in a machine learning portfolio, and its hyperparameter values, in order to deliver the best performance on the dataset at hand.
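The paper addresses this with Monte-Carlo Tree Search over the joint algorithm/hyperparameter space; as a deliberately naive point of comparison, the sketch below runs random search over a tiny two-algorithm scikit-learn portfolio, just to make the search space concrete:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Portfolio: (algorithm, hyperparameter sampler) pairs.
portfolio = [
    (RandomForestClassifier, lambda: {"n_estimators": int(rng.integers(10, 200))}),
    (SVC, lambda: {"C": float(10 ** rng.uniform(-2, 2))}),
]

X, y = load_breast_cancer(return_X_y=True)
best = (None, -np.inf)
for _ in range(20):                         # random search over the joint space
    algo, sample = portfolio[rng.integers(len(portfolio))]
    params = sample()
    score = cross_val_score(algo(**params), X, y, cv=3).mean()
    if score > best[1]:
        best = ((algo.__name__, params), score)
print(best)  # best (algorithm, hyperparameters) found and its CV score
```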
no code implementations • Springer Cham 2019 • Isabelle Guyon, Lisheng Sun-Hosoya, Marc Boullé, Hugo Jair Escalante, Sergio Escalera, Zhengying Liu, Damir Jajetic, Bisakha Ray, Mehreen Saeed, Michèle Sebag, Alexander Statnikov, WeiWei Tu, Evelyne Viegas
The solutions of the winners are systematically benchmarked over all datasets of all rounds and compared with canonical machine learning algorithms available in scikit-learn.
Ranked #1 on AutoML on Chalearn-AutoML-1
no code implementations • ICLR 2019 • Guillaume DOQUET, Michèle Sebag
The paper, interested in unsupervised feature selection, aims to retain the features best accounting for the local patterns in the data.
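A classic baseline that makes "retaining the features best accounting for local patterns" operational is the Laplacian Score (He et al., 2005), which ranks features by how smoothly they vary over a k-nearest-neighbor graph of the data. The sketch below implements that baseline; it is not the paper's own method:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_score(X, k=5):
    """Laplacian Score: features that vary smoothly over the k-NN graph
    of the data (i.e. preserve its local patterns) get a LOWER score,
    so features are kept in increasing score order."""
    S = kneighbors_graph(X, k, mode="connectivity", include_self=False)
    S = 0.5 * (S + S.T).toarray()             # symmetrized adjacency
    D = np.diag(S.sum(axis=1))                # degree matrix
    L = D - S                                 # graph Laplacian
    d = np.diag(D)
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        f = f - (f @ d) / d.sum()             # remove the D-weighted mean
        scores.append((f @ L @ f) / max(f @ D @ f, 1e-12))
    return np.array(scores)
```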
no code implementations • 3 Jul 2018 • Victor Berger, Michèle Sebag
Generative Adversarial Networks (Goodfellow et al., 2014), a major breakthrough in the field of generative modeling, learn a discriminator to estimate some distance between the target and the candidate distributions.
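In its original binary-classification form, the discriminator is trained to separate real from generated samples, and its optimal value yields (a shift of) the Jensen-Shannon divergence between the two distributions. A minimal PyTorch sketch of that loss:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy form of the GAN discriminator objective
    (Goodfellow et al., 2014): push D(x) -> 1 on real samples and
    D(G(z)) -> 0 on generated ones. d_real / d_fake are raw logits."""
    real = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
    fake = F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    return real + fake
```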
1 code implementation • 13 Mar 2018 • Diviyan Kalainathan, Olivier Goudet, Isabelle Guyon, David Lopez-Paz, Michèle Sebag
A new causal discovery method, Structural Agnostic Modeling (SAM), is presented in this paper.
1 code implementation • ICLR 2018 • Olivier Goudet, Diviyan Kalainathan, Philippe Caillou, Isabelle Guyon, David Lopez-Paz, Michèle Sebag
We present Causal Generative Neural Networks (CGNNs) to learn functional causal models from observational data.
2 code implementations • 15 Sep 2017 • Olivier Goudet, Diviyan Kalainathan, Philippe Caillou, Isabelle Guyon, David Lopez-Paz, Michèle Sebag
We introduce a new approach to functional causal modeling from observational data, called Causal Generative Neural Networks (CGNN).
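A functional causal model writes each variable as a function of its causes plus an independent noise; CGNN instantiates each such function as a small neural network and scores candidate graphs by comparing generated and observed samples (with an MMD criterion in the paper). A sketch of the generative half for a fixed candidate DAG, assuming a hypothetical `parents` dictionary given in topological order:

```python
import torch
import torch.nn as nn

class FCMGenerator(nn.Module):
    """Sketch of a CGNN-style functional causal model X_i = f_i(Pa(X_i), E_i)
    for a fixed candidate DAG: each f_i is a one-hidden-layer network fed
    with its parents' values and a fresh noise variable. Training would
    compare generated samples to data (the paper uses an MMD criterion)."""
    def __init__(self, parents, hidden=20):
        super().__init__()
        self.parents = parents  # dict node -> list of parents, topological order
        self.nets = nn.ModuleDict({
            str(i): nn.Sequential(nn.Linear(len(pa) + 1, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))
            for i, pa in parents.items()
        })

    def forward(self, n):
        samples = {}
        for i, pa in self.parents.items():   # relies on topological order
            noise = torch.randn(n, 1)
            inputs = torch.cat([samples[j] for j in pa] + [noise], dim=1)
            samples[i] = self.nets[str(i)](inputs)
        return torch.cat([samples[i] for i in self.parents], dim=1)

gen = FCMGenerator({0: [], 1: [0], 2: [1]})  # candidate chain 0 -> 1 -> 2
fake = gen(256)                              # 256 generated joint samples
```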
no code implementations • 5 Sep 2017 • Alice Schoenauer-Sebag, Marc Schoenauer, Michèle Sebag
When applied to training deep neural networks, stochastic gradient descent (SGD) often incurs steady progression phases, interrupted by catastrophic episodes in which loss and gradient norm explode.
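One simple safeguard in the spirit of this observation is to monitor the loss and, on an explosion, roll back to the last snapshot and reduce the learning rate. The sketch below does that with hypothetical `loss_fn(model, batch)` and `state` conventions; it illustrates the failure mode, not the paper's actual detection-and-repair mechanism:

```python
import copy

def guarded_sgd_step(model, optimizer, loss_fn, batch, state, blowup=10.0):
    """One SGD step with a crude catastrophe guard: if the loss exceeds
    `blowup` times its running value, restore the last snapshot and halve
    the learning rate instead of stepping. state = {"ref": None, "snapshot": None}
    on the first call."""
    loss = loss_fn(model, batch)
    if state["ref"] is not None and loss.item() > blowup * state["ref"]:
        model.load_state_dict(state["snapshot"])   # roll back the explosion
        for g in optimizer.param_groups:
            g["lr"] *= 0.5                         # cool down
        return state
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    state["ref"] = (loss.item() if state["ref"] is None
                    else 0.9 * state["ref"] + 0.1 * loss.item())
    state["snapshot"] = copy.deepcopy(model.state_dict())
    return state
```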
no code implementations • 29 Sep 2016 • Yoann Isaac, Quentin Barthélemy, Cédric Gouy-Pailler, Michèle Sebag, Jamal Atif
This paper addresses the structurally-constrained sparse decomposition of multi-dimensional signals onto overcomplete families of vectors, called dictionaries.
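In sparse-coding notation, with signal y, overcomplete dictionary D and coefficients x, a structurally-constrained decomposition typically augments the Lasso objective with a regularizer enforcing the structure, for instance through an analysis operator P. This generic form is given for orientation only; the paper's exact regularizer and its extension to multi-dimensional signals may differ:

```latex
\min_{x}\; \tfrac{1}{2}\,\lVert y - D x \rVert_2^2
  \;+\; \lambda \lVert x \rVert_1
  \;+\; \mu \,\lVert P x \rVert_1
```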
no code implementations • 10 Jun 2014 • Ilya Loshchilov, Marc Schoenauer, Michèle Sebag, Nikolaus Hansen
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is widely accepted as a robust derivative-free continuous optimization algorithm for non-linear and non-convex optimization problems.
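For readers who want to try it, the reference pycma implementation exposes CMA-ES through an ask/tell loop (assuming `pip install cma`):

```python
import cma  # pycma, the reference Python implementation

def sphere(x):
    return sum(xi ** 2 for xi in x)

# Start at x0 = [1, ..., 1] in 5 dimensions with initial step-size 0.5.
es = cma.CMAEvolutionStrategy(5 * [1.0], 0.5, {"verbose": -9})
while not es.stop():
    candidates = es.ask()                                 # sample offspring
    es.tell(candidates, [sphere(x) for x in candidates])  # rank and adapt
print(es.result.xbest)  # near the optimum [0, ..., 0]
```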
no code implementations • 6 Jan 2014 • Nicolas Galichet, Michèle Sebag, Olivier Teytaud
Motivated by applications in energy management, this paper presents the Multi-Armed Risk-Aware Bandit (MARAB) algorithm.
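Risk-aware bandits typically replace the empirical mean reward with a risk measure such as the Conditional Value-at-Risk (CVaR), i.e. the mean of the worst α-fraction of observed rewards. The sketch below shows a greedy CVaR-based arm choice; MARAB's actual index also includes an exploration term, so this illustrates the risk measure, not the algorithm:

```python
import numpy as np

def empirical_cvar(rewards, alpha=0.2):
    """CVaR_alpha: mean of the worst alpha-fraction of observed rewards,
    a risk-aware alternative to the empirical mean."""
    r = np.sort(np.asarray(rewards))
    k = max(1, int(np.ceil(alpha * len(r))))
    return r[:k].mean()

def select_arm(history, alpha=0.2):
    """Greedy CVaR-based choice over arms with at least one pull each.
    history[i] is the list of rewards observed so far on arm i."""
    return max(range(len(history)),
               key=lambda i: empirical_cvar(history[i], alpha))
```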
no code implementations • 12 Aug 2013 • Ilya Loshchilov, Marc Schoenauer, Michèle Sebag
This weakness is commonly addressed through surrogate optimization, learning an estimate of the objective function, a.k.a. a surrogate model.
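A generic surrogate-assisted loop fits a cheap regression model on the points evaluated so far and uses its predictions to pre-screen candidates, spending true evaluations only on the most promising ones. The sketch below is such a generic loop built on a scikit-learn regressor; the paper's surrogates for CMA-ES are rank-based rather than value-based:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def surrogate_prescreen(f, archive_X, archive_y, candidates, n_eval=3):
    """Generic surrogate-assisted step: fit a regressor on evaluated points,
    rank new candidates by predicted objective value (minimization), and
    spend true evaluations of f only on the n_eval most promising ones."""
    model = RandomForestRegressor(n_estimators=100).fit(archive_X, archive_y)
    preds = model.predict(candidates)
    for i in np.argsort(preds)[:n_eval]:
        archive_X.append(list(candidates[i]))
        archive_y.append(f(candidates[i]))
    return archive_X, archive_y
```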
no code implementations • 10 Apr 2013 • François-Michel De Rainville, Michèle Sebag, Christian Gagné, Marc Schoenauer, Denis Laurendeau
At each iteration, the dynamic multi-armed bandit makes a decision on which species to evolve for a generation, using the history of progress made by the different species to guide the decisions.
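A standard way to turn a "history of progress" into such decisions is a UCB index: the mean past improvement of each species plus an exploration bonus that shrinks with the number of times it was selected. A generic sketch, not necessarily the paper's exact bandit:

```python
import math

def ucb_species(progress, t, c=1.4):
    """UCB-style choice of which species to evolve next. progress[i] is the
    list of fitness improvements obtained by species i so far; t is the
    total number of decisions made. Unplayed species are tried first."""
    for i, h in enumerate(progress):
        if not h:
            return i                          # initialization round
    def index(i):
        h = progress[i]
        return sum(h) / len(h) + c * math.sqrt(math.log(t) / len(h))
    return max(range(len(progress)), key=index)
```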
no code implementations • 21 Mar 2013 • Yoann Isaac, Quentin Barthélemy, Jamal Atif, Cédric Gouy-Pailler, Michèle Sebag
An extensive empirical evaluation shows how the proposed approach compares to the state of the art depending on the signal features.
no code implementations • 5 Aug 2012 • Riad Akrour, Marc Schoenauer, Michèle Sebag
This paper focuses on reinforcement learning (RL) with limited prior knowledge.
1 code implementation • 11 Apr 2012 • Ilya Loshchilov, Marc Schoenauer, Michèle Sebag
The resulting algorithm, saACM-ES, adjusts online the lifelength of the current surrogate model (the number of CMA-ES generations before learning a new surrogate) and the surrogate hyper-parameters.
no code implementations • International Conference on Machine Learning 2010 • Romaric Gaudel, Michèle Sebag
This paper formalizes Feature Selection as a Reinforcement Learning problem, leading to a provably optimal though intractable selection policy.
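Concretely, the state is the subset of features selected so far, an action adds one unused feature or stops, and the reward is the generalization performance of the final subset; the optimal policy is well-defined, but the state space has 2^d states, hence the intractability. A sketch of this state/action structure with a hypothetical `feature_mdp_actions` helper:

```python
def feature_mdp_actions(state, n_features):
    """RL formulation of feature selection: a state is the (frozen) set of
    features chosen so far; actions either add an unused feature or stop.
    The reward (e.g. cross-validated accuracy of the subset) is obtained
    at 'stop', which is what makes the exact policy intractable over the
    2^n_features possible states."""
    actions = [("add", f) for f in range(n_features) if f not in state]
    actions.append(("stop", None))
    return actions

print(feature_mdp_actions(frozenset({0, 2}), 4))
# [('add', 1), ('add', 3), ('stop', None)]
```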