no code implementations • 15 Feb 2024 • Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger
We argue that there is often a critical mismatch between what one wishes to explain (e.g., the output of a classifier) and what current methods such as SHAP actually explain (e.g., the scalar probability of a class).
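To make the mismatch concrete, here is a minimal sketch (assuming the shap library and a scikit-learn classifier; the model and data are illustrative, not from the paper):

```python
# Illustrative sketch: SHAP attributions are computed for a scalar
# output, e.g. one class probability, while the object one may want
# explained is the classifier's actual decision (the argmax).
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# What SHAP explains here: the scalar probability of class 1 ...
explainer = shap.Explainer(lambda Z: model.predict_proba(Z)[:, 1], X)
attributions = explainer(X[:5])

# ... whereas what one may wish explained is the decision itself.
decisions = model.predict(X[:5])
```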
1 code implementation • 5 May 2023 • David Salinas, Jacek Golebiowski, Aaron Klein, Matthias Seeger, Cedric Archambeau
Many state-of-the-art hyperparameter optimization (HPO) algorithms rely on model-based optimizers that learn surrogate models of the target function to guide the search.
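As a generic, hedged sketch of what such a surrogate-based step looks like (expected improvement over a Gaussian process; not this paper's specific method):

```python
# Fit a surrogate to observed (hyperparameter, loss) pairs, then pick
# the next candidate by expected improvement. Illustrative only.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(8, 1))                       # configs tried so far
y = np.sin(6 * X[:, 0]) + 0.1 * rng.standard_normal(8)   # observed losses

surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)

cand = rng.uniform(0, 1, size=(1000, 1))     # random candidate configs
mu, sigma = surrogate.predict(cand, return_std=True)
best = y.min()
z = (best - mu) / np.maximum(sigma, 1e-9)
ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
x_next = cand[np.argmax(ei)]                 # next configuration to evaluate
```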
1 code implementation • 8 Feb 2023 • Gianluca Detommaso, Alberto Gasparin, Michele Donini, Matthias Seeger, Andrew Gordon Wilson, Cedric Archambeau
We present Fortuna, an open-source library for uncertainty quantification in deep learning.
no code implementations • 5 Nov 2021 • Riccardo Grazzi, Valentin Flunkert, David Salinas, Tim Januschowski, Matthias Seeger, Cedric Archambeau
While classical time series forecasting considers individual time series in isolation, recent advances based on deep learning have shown that jointly learning from a large pool of related time series can boost forecasting accuracy.
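A minimal sketch of the contrast (a linear global model on synthetic series; the advances the paper refers to use deep networks):

```python
# Local vs. global forecasting: one autoregressive model per series,
# versus a single model fit on windows pooled from all related series.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
series = [np.cumsum(rng.standard_normal(200)) for _ in range(50)]

def windows(ts, p=10):
    # Turn one series into (lagged window, next value) training pairs.
    X = np.stack([ts[i:i + p] for i in range(len(ts) - p)])
    return X, ts[p:]

# Local: one model per series.
local_models = [Ridge().fit(*windows(ts)) for ts in series]

# Global: pool training windows across every related series.
Xs, ys = zip(*(windows(ts) for ts in series))
global_model = Ridge().fit(np.vstack(Xs), np.concatenate(ys))
```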
1 code implementation • 10 Jun 2021 • Eric Hans Lee, David Eriksson, Valerio Perrone, Matthias Seeger
Bayesian optimization (BO) is a popular method for optimizing expensive-to-evaluate black-box functions.
1 code implementation • 16 Apr 2021 • Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cedric Archambeau
Across an extensive range of real-world HPO problems and baselines, we show that our termination criterion achieves a better trade-off between the test performance and optimization time.
1 code implementation • 17 Feb 2021 • Louis C. Tiao, Aaron Klein, Matthias Seeger, Edwin V. Bonilla, Cedric Archambeau, Fabio Ramos
Bayesian optimization (BO) is among the most effective and widely used black-box optimization methods.
no code implementations • 15 Dec 2020 • Valerio Perrone, Huibin Shen, Aida Zolic, Iaroslav Shcherbatyi, Amr Ahmed, Tanya Bansal, Michele Donini, Fela Winkelmolen, Rodolphe Jenatton, Jean Baptiste Faddoul, Barbara Pogorzelska, Miroslav Miladinovic, Krishnaram Kenthapadi, Matthias Seeger, Cédric Archambeau
To democratize access to machine learning systems, it is essential to automate hyperparameter tuning.
no code implementations • 15 Dec 2020 • Piali Das, Valerio Perrone, Nikita Ivkin, Tanya Bansal, Zohar Karnin, Huibin Shen, Iaroslav Shcherbatyi, Yotam Elor, Wilton Wu, Aida Zolic, Thibaut Lienart, Alex Tang, Amr Ahmed, Jean Baptiste Faddoul, Rodolphe Jenatton, Fela Winkelmolen, Philip Gautier, Leo Dirac, Andre Perunicic, Miroslav Miladinovic, Giovanni Zappella, Cédric Archambeau, Matthias Seeger, Bhaskar Dutt, Laurence Rouesnel
AutoML systems provide a black-box solution to machine learning problems by selecting the right way of processing features, choosing an algorithm and tuning the hyperparameters of the entire pipeline.
3 code implementations • 24 Mar 2020 • Aaron Klein, Louis C. Tiao, Thibaut Lienart, Cedric Archambeau, Matthias Seeger
We introduce a model-based asynchronous multi-fidelity method for hyperparameter and neural architecture search that combines the strengths of asynchronous Hyperband and Gaussian process-based Bayesian optimization.
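For intuition, a hedged sketch of synchronous successive halving, the multi-fidelity backbone that Hyperband builds on (the paper's method is asynchronous and proposes configurations from a Gaussian process surrogate rather than uniformly at random):

```python
# Evaluate many configurations at a small budget, keep the top
# fraction, and re-evaluate the survivors at a larger budget.
import numpy as np

rng = np.random.default_rng(0)

def evaluate(config, budget):
    # Stand-in for training with `budget` resources; lower is better,
    # and noise shrinks as the budget grows.
    return (config - 0.3) ** 2 + rng.normal(scale=1.0 / budget)

configs = list(rng.uniform(0, 1, size=27))
budget, eta = 1, 3
while len(configs) > 1:
    scores = [evaluate(c, budget) for c in configs]
    keep = max(1, len(configs) // eta)        # keep the top 1/eta fraction
    configs = [configs[i] for i in np.argsort(scores)[:keep]]
    budget *= eta                             # promote survivors
best_config = configs[0]
```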
no code implementations • 22 Mar 2020 • Eric Hans Lee, Valerio Perrone, Cedric Archambeau, Matthias Seeger
Bayesian optimization (BO) is a class of global optimization algorithms, suitable for minimizing an expensive objective function in as few function evaluations as possible.
no code implementations • ICML 2020 • Cuong V. Nguyen, Tal Hassner, Matthias Seeger, Cedric Archambeau
We introduce a new measure to evaluate the transferability of representations learned by classifiers.
Ranked #4 on Transferability on classification benchmark
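A hedged sketch of a transferability score in this spirit: compose a pre-trained source classifier's soft predictions on labelled target data with an empirical conditional distribution, and score the average log-likelihood (details here are illustrative):

```python
import numpy as np

def transferability(source_probs, target_labels):
    """source_probs: (n, Z) source-class probabilities on target inputs;
    target_labels: (n,) integer target labels in {0, ..., Y-1}."""
    n, _ = source_probs.shape
    Y = target_labels.max() + 1
    # Empirical joint distribution over (target label, source class).
    joint = np.zeros((Y, source_probs.shape[1]))
    for p, label in zip(source_probs, target_labels):
        joint[label] += p / n
    cond = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)  # P(y | z)
    # Average log-likelihood of target labels under the composed model.
    probs = source_probs @ cond.T                              # (n, Y)
    return np.mean(np.log(probs[np.arange(n), target_labels] + 1e-12))
```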
no code implementations • 15 Oct 2019 • Valerio Perrone, Iaroslav Shcherbatyi, Rodolphe Jenatton, Cedric Archambeau, Matthias Seeger
We propose constrained Max-value Entropy Search (cMES), a novel information-theoretic acquisition function implementing this formulation.
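cMES itself is more involved; as a generic, hedged illustration of constraint-aware acquisition functions, the simpler constrained expected improvement weights EI by the modelled probability of feasibility:

```python
# Not cMES: a sketch of constrained EI, where a second GP models the
# constraint and EI is weighted by the probability of feasibility.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(12, 1))
y = np.sin(6 * X[:, 0])                  # objective observations
c = np.cos(4 * X[:, 0])                  # constraint values; feasible if <= 0

f_model = GaussianProcessRegressor(normalize_y=True).fit(X, y)
c_model = GaussianProcessRegressor(normalize_y=True).fit(X, c)

cand = rng.uniform(0, 1, size=(500, 1))
mu, sd = f_model.predict(cand, return_std=True)
mu_c, sd_c = c_model.predict(cand, return_std=True)

feasible = c <= 0
best = y[feasible].min() if feasible.any() else y.min()
z = (best - mu) / np.maximum(sd, 1e-9)
ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)
p_feas = norm.cdf(-mu_c / np.maximum(sd_c, 1e-9))   # P(constraint <= 0)
x_next = cand[np.argmax(ei * p_feas)]               # constrained EI
```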
no code implementations • NeurIPS 2019 • Valerio Perrone, Huibin Shen, Matthias Seeger, Cedric Archambeau, Rodolphe Jenatton
Despite its simplicity, we show that our approach considerably boosts BO by reducing the size of the search space, thus accelerating a variety of black-box optimization problems.
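A hedged sketch of one simple instance of this idea (a bounding box around the best configurations found on related tasks; the numbers are made up):

```python
import numpy as np

# Best configurations found on related tasks (rows: tasks; columns:
# e.g. dropout rate, learning rate). Illustrative values only.
best_configs = np.array([[0.12, 3e-4],
                         [0.25, 1e-3],
                         [0.18, 6e-4]])

low = best_configs.min(axis=0)    # tightened lower bounds per dimension
high = best_configs.max(axis=0)   # tightened upper bounds per dimension
# BO on the new task then searches [low, high] instead of the full space.
```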
no code implementations • 8 Dec 2017 • Valerio Perrone, Rodolphe Jenatton, Matthias Seeger, Cedric Archambeau
Bayesian optimization (BO) is a model-based approach for gradient-free black-box function optimization.
no code implementations • 24 Oct 2017 • Matthias Seeger, Asmus Hetzel, Zhenwen Dai, Eric Meissner, Neil D. Lawrence
Development systems for deep learning (DL), such as Theano, Torch, TensorFlow, or MXNet, are easy-to-use tools for creating complex neural network models.
no code implementations • 22 Sep 2017 • Matthias Seeger, Syama Rangapuram, Yuyang Wang, David Salinas, Jan Gasthaus, Tim Januschowski, Valentin Flunkert
We present a scalable and robust Bayesian inference method for linear state space models.
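For reference, a hedged sketch of the forward (Kalman) filter for the linear state space model x_t = A x_{t-1} + w_t, y_t = H x_t + v_t, with w_t ~ N(0, Q) and v_t ~ N(0, R); the paper builds a scalable, robust Bayesian treatment on top of models of this form:

```python
import numpy as np

def kalman_filter(ys, A, H, Q, R, m0, P0):
    m, P = m0, P0
    means = []
    for y in ys:
        m = A @ m                        # predict state mean
        P = A @ P @ A.T + Q              # predict state covariance
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        m = m + K @ (y - H @ m)          # update with observation y
        P = P - K @ S @ K.T
        means.append(m)
    return np.array(means)

# Example: a noisy 1-D random walk.
A, H, Q, R = np.eye(1), np.eye(1), 0.1 * np.eye(1), 0.1 * np.eye(1)
ys = np.cumsum(np.random.default_rng(0).standard_normal((50, 1)), axis=0)
filtered = kalman_filter(ys, A, H, Q, R, np.zeros(1), np.eye(1))
```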
no code implementations • ICML 2017 • Rodolphe Jenatton, Cedric Archambeau, Javier González, Matthias Seeger
The benefit of leveraging this structure is twofold: we explore the search space more efficiently and posterior inference scales more favorably with the number of observations than Gaussian process-based approaches published in the literature.
no code implementations • 5 Jun 2013 • Mohammad Emtiyaz Khan, Aleksandr Y. Aravkin, Michael P. Friedlander, Matthias Seeger
Latent Gaussian models (LGMs) are widely used in statistics and machine learning.
2 code implementations • 21 Dec 2009 • Niranjan Srinivas, Andreas Krause, Sham M. Kakade, Matthias Seeger
Many applications require optimizing an unknown, noisy function that is expensive to evaluate.
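A hedged sketch of the GP-UCB selection rule (an upper confidence bound on a Gaussian process posterior), the kind of algorithm analysed in this line of work; the surrogate and data here are illustrative:

```python
# Pick the point maximizing posterior mean plus a scaled posterior
# standard deviation, trading off exploitation against exploration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(6, 1))
y = np.cos(5 * X[:, 0]) + 0.1 * rng.standard_normal(6)

gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
cand = np.linspace(0, 1, 500).reshape(-1, 1)
mu, sd = gp.predict(cand, return_std=True)

beta = 2.0                            # exploration weight; grows with t
x_next = cand[np.argmax(mu + np.sqrt(beta) * sd)]   # upper confidence bound
```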
no code implementations • NeurIPS 2009 • Matthias Seeger
We show how to sequentially optimize magnetic resonance imaging measurement designs over stacks of neighbouring image slices, by performing convex variational inference on a large-scale non-Gaussian linear dynamical system, tracking dominating directions of posterior covariance without imposing any factorization constraints.
no code implementations • NeurIPS 2008 • Duy Nguyen-Tuong, Jan R. Peters, Matthias Seeger
Inspired by local learning, we propose a method to speed up standard Gaussian Process regression (GPR) with local GP models (LGP).
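A hedged sketch of the local-GP idea (k-means clustering here stands in for the paper's online, kernel-distance-based assignment of points to local models):

```python
# Partition the data, fit one small GP per region, and route each
# query to its nearest local model instead of one large O(n^3) GP.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(600)

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)
models = [GaussianProcessRegressor(normalize_y=True)
          .fit(X[km.labels_ == j], y[km.labels_ == j]) for j in range(6)]

def predict(x):
    j = km.predict(x.reshape(1, -1))[0]   # route query to nearest region
    return models[j].predict(x.reshape(1, -1))[0]
```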
no code implementations • NeurIPS 2008 • Hannes Nickisch, Rolf Pohmann, Bernhard Schölkopf, Matthias Seeger
We propose a novel scalable variational inference algorithm, and show how powerful methods of numerical mathematics can be modified to compute primitives in our framework.