Search Results for author: Matthias Seeger

Found 23 papers, 7 papers with code

Explaining Probabilistic Models with Distributional Values

no code implementations • 15 Feb 2024 • Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger

We argue that there is often a critical mismatch between what one wishes to explain (e.g. the output of a classifier) and what current methods such as SHAP explain (e.g. the scalar probability of a class).

Optimizing Hyperparameters with Conformal Quantile Regression

1 code implementation • 5 May 2023 • David Salinas, Jacek Golebiowski, Aaron Klein, Matthias Seeger, Cedric Archambeau

Many state-of-the-art hyperparameter optimization (HPO) algorithms rely on model-based optimizers that learn surrogate models of the target function to guide the search.

Gaussian Processes · Hyperparameter Optimization +1
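
As a rough illustration of the surrogate-model loop described above, the sketch below fits a quantile regressor (scikit-learn's GradientBoostingRegressor with a quantile loss) as the surrogate and greedily evaluates the most optimistic candidate. The objective, search range, and quantile level are made-up stand-ins; the paper's method additionally conformalizes the quantile predictions, which this sketch omits.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def objective(x):
    # Hypothetical expensive black box: validation loss as a function of
    # a single (scalar) hyperparameter.
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
candidates = np.linspace(-3, 3, 200)

# Warm-start with a few random evaluations.
X = list(rng.uniform(-3, 3, size=5))
y = [objective(x) for x in X]

for _ in range(20):
    # Fit a lower-quantile surrogate; low predicted quantiles act as an
    # optimistic, exploration-friendly score for each candidate.
    surrogate = GradientBoostingRegressor(loss="quantile", alpha=0.1)
    surrogate.fit(np.array(X).reshape(-1, 1), y)
    scores = surrogate.predict(candidates.reshape(-1, 1))
    x_next = float(candidates[int(np.argmin(scores))])
    X.append(x_next)
    y.append(objective(x_next))

print("best hyperparameter:", X[int(np.argmin(y))], "loss:", min(y))
```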

Meta-Forecasting by combining Global Deep Representations with Local Adaptation

no code implementations • 5 Nov 2021 • Riccardo Grazzi, Valentin Flunkert, David Salinas, Tim Januschowski, Matthias Seeger, Cedric Archambeau

While classical time series forecasting considers individual time series in isolation, recent advances based on deep learning have shown that jointly learning from a large pool of related time series can boost forecasting accuracy.

Meta-Learning · Time Series +1

Automatic Termination for Hyperparameter Optimization

1 code implementation • 16 Apr 2021 • Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cedric Archambeau

Across an extensive range of real-world HPO problems and baselines, we show that our termination criterion achieves a better trade-off between the test performance and optimization time.

Bayesian Optimization · Hyperparameter Optimization

Model-based Asynchronous Hyperparameter and Neural Architecture Search

3 code implementations • 24 Mar 2020 • Aaron Klein, Louis C. Tiao, Thibaut Lienart, Cedric Archambeau, Matthias Seeger

We introduce a model-based asynchronous multi-fidelity method for hyperparameter and neural architecture search that combines the strengths of asynchronous Hyperband and Gaussian process-based Bayesian optimization.

Bayesian Optimization · Hyperparameter Optimization +1
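
To give a flavour of the two ingredients named above, the sketch below combines an ASHA-style asynchronous promotion rule with a Gaussian process surrogate that proposes new configurations. The rung levels, toy objective, promotion logic, and greedy low-fidelity proposal are simplifying assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rungs = [1, 3, 9]                  # resource levels (e.g. training epochs)
eta = 3                            # promote the top 1/eta of each rung
results = {r: [] for r in rungs}   # (config, score) pairs observed per rung

def train(config, resource):
    # Hypothetical cheap stand-in for training a model for `resource` epochs.
    return (config - 0.3) ** 2 + 1.0 / resource + 0.01 * np.random.randn()

def propose_config():
    # Model-based proposal: fit a GP to the low-resource observations and
    # pick the candidate with the lowest posterior mean (greedy, for brevity).
    observed = results[rungs[0]]
    candidates = np.random.uniform(0, 1, size=64)
    if len(observed) < 3:
        return float(candidates[0])        # fall back to random search
    X = np.array([c for c, _ in observed]).reshape(-1, 1)
    y = np.array([s for _, s in observed])
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    return float(candidates[np.argmin(gp.predict(candidates.reshape(-1, 1)))])

def next_job():
    # Asynchronous scheduling: whenever a worker frees up, promote a config
    # that sits in the top 1/eta of its rung, otherwise start a new one.
    for lower, upper in zip(rungs[:-1], rungs[1:]):
        done = sorted(results[lower], key=lambda cs: cs[1])
        promoted = {c for c, _ in results[upper]}
        for config, _ in done[: len(done) // eta]:
            if config not in promoted:
                return config, upper
    return propose_config(), rungs[0]

for _ in range(30):                # simulate one free worker at a time
    config, resource = next_job()
    results[resource].append((config, train(config, resource)))

print("best at full resource:", min(results[rungs[-1]], key=lambda cs: cs[1], default=None))
```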

Cost-aware Bayesian Optimization

no code implementations • 22 Mar 2020 • Eric Hans Lee, Valerio Perrone, Cedric Archambeau, Matthias Seeger

Bayesian optimization (BO) is a class of global optimization algorithms, suitable for minimizing an expensive objective function in as few function evaluations as possible.

Bayesian Optimization
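
A common way to make BO cost-aware is to trade off a candidate's expected improvement against a model of its evaluation cost. The sketch below uses the simple expected-improvement-per-unit-cost heuristic with a made-up objective and a known cost function; this is one standard heuristic, not necessarily the method proposed in the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    return np.sin(3 * x) + x ** 2        # hypothetical expensive objective

def cost(x):
    return 1.0 + 4.0 * x ** 2            # hypothetical evaluation cost

rng = np.random.default_rng(0)
X = list(rng.uniform(-2, 2, size=4))
y = [objective(x) for x in X]
candidates = np.linspace(-2, 2, 200)

for _ in range(15):
    gp = GaussianProcessRegressor(normalize_y=True)
    gp.fit(np.array(X).reshape(-1, 1), y)
    mu, sigma = gp.predict(candidates.reshape(-1, 1), return_std=True)
    best = min(y)
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)                # expected improvement
    x_next = float(candidates[int(np.argmax(ei / cost(candidates)))])   # EI per unit cost
    X.append(x_next)
    y.append(objective(x_next))

print("best x found:", X[int(np.argmin(y))])
```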

Constrained Bayesian Optimization with Max-Value Entropy Search

no code implementations • 15 Oct 2019 • Valerio Perrone, Iaroslav Shcherbatyi, Rodolphe Jenatton, Cedric Archambeau, Matthias Seeger

We propose constrained Max-value Entropy Search (cMES), a novel information-theoretic acquisition function implementing this formulation.

Bayesian Optimization · Hyperparameter Optimization
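
The "formulation" mentioned in the snippet is hyperparameter optimization under black-box constraints; stated roughly (the notation below is mine, not the paper's):

$$\min_{x \in \mathcal{X}} f(x) \quad \text{subject to} \quad c(x) \le \delta,$$

where both the objective f (e.g. validation error) and the constraint c (e.g. memory consumption or prediction latency) are expensive black boxes observed with noise, and cMES selects the next configuration to be maximally informative about the constrained minimum.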

Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning

no code implementations • NeurIPS 2019 • Valerio Perrone, Huibin Shen, Matthias Seeger, Cedric Archambeau, Rodolphe Jenatton

We show that, despite its simplicity, our approach considerably boosts BO by reducing the size of the search space, thus accelerating the optimization of a variety of black-box problems.

Bayesian Optimization · Hyperparameter Optimization +1
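
One concrete way to learn such a search space, in the transfer-learning spirit described above, is to place a bounding box around the best configurations found on previously solved tasks. The hyperparameters, values, and slack margin in this sketch are illustrative assumptions, not data or constants from the paper.

```python
import numpy as np

# Hypothetical best hyperparameters (log learning rate, dropout) found on
# five previously solved, related tasks.
past_optima = np.array([
    [-2.1, 0.10],
    [-2.6, 0.20],
    [-1.9, 0.15],
    [-2.4, 0.05],
    [-2.2, 0.25],
])

margin = 0.1  # relative slack around the observed optima (assumption)
span = past_optima.max(axis=0) - past_optima.min(axis=0)
lower = past_optima.min(axis=0) - margin * span
upper = past_optima.max(axis=0) + margin * span

print("learned search space:")
for name, lo, hi in zip(["log_lr", "dropout"], lower, upper):
    print(f"  {name}: [{lo:.2f}, {hi:.2f}]")

# Any standard BO/HPO routine can then be run on this much smaller box
# instead of the original, hand-specified search space.
```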

Auto-Differentiating Linear Algebra

no code implementations • 24 Oct 2017 • Matthias Seeger, Asmus Hetzel, Zhenwen Dai, Eric Meissner, Neil D. Lawrence

Development systems for deep learning (DL), such as Theano, Torch, TensorFlow, or MXNet, are easy-to-use tools for creating complex neural network models.

Active Learning · Bayesian Optimization +1
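
The paper extends such frameworks with differentiable linear algebra operators (e.g. Cholesky factorization and triangular solves). As an analogous sketch, using PyTorch rather than the MXNet operators the paper describes, one can fit a Gaussian process length scale by differentiating the log marginal likelihood straight through a Cholesky factorization:

```python
import torch

# Toy data and a squared-exponential kernel with a learnable log length scale.
X = torch.linspace(-1, 1, 20).unsqueeze(1)
y = torch.sin(3 * X).squeeze()
log_lengthscale = torch.zeros(1, requires_grad=True)

def neg_log_marginal_likelihood():
    ls = torch.exp(log_lengthscale)
    d = (X - X.T) / ls
    K = torch.exp(-0.5 * d ** 2) + 1e-3 * torch.eye(20)
    L = torch.linalg.cholesky(K)                      # differentiable Cholesky
    alpha = torch.cholesky_solve(y.unsqueeze(1), L)   # solves K alpha = y
    # 0.5 * y^T K^{-1} y + 0.5 * log|K|   (up to an additive constant)
    return 0.5 * (y.unsqueeze(1) * alpha).sum() + torch.log(torch.diagonal(L)).sum()

opt = torch.optim.Adam([log_lengthscale], lr=0.05)
for _ in range(100):
    opt.zero_grad()
    loss = neg_log_marginal_likelihood()
    loss.backward()                                   # gradients flow through the factorization
    opt.step()

print("learned length scale:", torch.exp(log_lengthscale).item())
```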

Bayesian Optimization with Tree-structured Dependencies

no code implementations • ICML 2017 • Rodolphe Jenatton, Cedric Archambeau, Javier González, Matthias Seeger

The benefit of leveraging this structure is twofold: we explore the search space more efficiently and posterior inference scales more favorably with the number of observations than Gaussian Process-based approaches published in the literature.

Bayesian Optimization · Binary Classification +1
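
The tree-structured dependencies in question are conditional hyperparameters: a parameter only exists when its parent takes a particular value. A minimal illustration of such a space (the optimizer choice and branch-specific parameters are illustrative, not taken from the paper):

```python
import random

# A conditional search space: which hyperparameters are active depends on
# the choice made at the parent node of the tree.
def sample_config(rng=random):
    config = {"optimizer": rng.choice(["sgd", "adam"])}
    if config["optimizer"] == "sgd":
        # momentum only exists on the SGD branch
        config["momentum"] = rng.uniform(0.0, 0.99)
    else:
        # the Adam branch has its own parameters
        config["beta1"] = rng.uniform(0.8, 0.999)
        config["beta2"] = rng.uniform(0.9, 0.9999)
    config["learning_rate"] = 10 ** rng.uniform(-4, -1)  # shared by both branches
    return config

print(sample_config())
```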

Speeding up Magnetic Resonance Image Acquisition by Bayesian Multi-Slice Adaptive Compressed Sensing

no code implementations • NeurIPS 2009 • Matthias Seeger

We show how to sequentially optimize magnetic resonance imaging measurement designs over stacks of neighbouring image slices, by performing convex variational inference on a large scale non-Gaussian linear dynamical system, tracking dominating directions of posterior covariance without imposing any factorization constraints.

Variational Inference

Local Gaussian Process Regression for Real Time Online Model Learning

no code implementations • NeurIPS 2008 • Duy Nguyen-Tuong, Jan R. Peters, Matthias Seeger

Inspired by local learning, we propose a method to speed up standard Gaussian Process regression (GPR) with local GP models (LGP).

GPR · regression
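
A rough sketch of the local-GP idea: partition the training data, fit one small GP per region, and answer queries with a nearby local model. K-means partitioning and single-nearest-model routing are simplifying stand-ins here rather than the paper's exact procedure, and the data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(600)

# Partition the data and fit one small GP per local region.
n_local = 6
km = KMeans(n_clusters=n_local, n_init=10, random_state=0).fit(X)
local_gps = [
    GaussianProcessRegressor(normalize_y=True).fit(X[km.labels_ == k], y[km.labels_ == k])
    for k in range(n_local)
]

def predict(x_query):
    # Route each query point to the GP whose cluster centre is closest.
    x_query = np.atleast_2d(x_query)
    nearest = km.predict(x_query)
    return np.array([local_gps[k].predict(x_query[[i]])[0] for i, k in enumerate(nearest)])

print(predict([[0.5], [-2.0]]))
```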

Bayesian Experimental Design of Magnetic Resonance Imaging Sequences

no code implementations • NeurIPS 2008 • Hannes Nickisch, Rolf Pohmann, Bernhard Schölkopf, Matthias Seeger

We propose a novel scalable variational inference algorithm, and show how powerful methods of numerical mathematics can be modified to compute primitives in our framework.

Bayesian Inference · Experimental Design +1
