Search Results for author: Cedric Archambeau

Found 25 papers, 6 papers with code

A Negative Result on Gradient Matching for Selective Backprop

no code implementations • 8 Dec 2023 Lukas Balles, Cedric Archambeau, Giovanni Zappella

With increasing scale in model and dataset size, the training of deep neural networks becomes a massive computational burden.

Optimizing Hyperparameters with Conformal Quantile Regression

1 code implementation • 5 May 2023 David Salinas, Jacek Golebiowski, Aaron Klein, Matthias Seeger, Cedric Archambeau

Many state-of-the-art hyperparameter optimization (HPO) algorithms rely on model-based optimizers that learn surrogate models of the target function to guide the search.

Gaussian Processes Hyperparameter Optimization +1
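A hedged sketch of the technique named in the title, conformalized quantile regression (CQR) for a surrogate model: two quantile regressors form a predictive band, and a held-out calibration set widens it to guarantee coverage. The data, the gradient-boosted models, and the miscoverage level below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(400, 1))
y = np.sin(3 * X[:, 0]) + 0.3 * rng.standard_normal(400)

# Proper training set for the quantile models, held-out calibration set for CQR.
X_train, y_train, X_cal, y_cal = X[:300], y[:300], X[300:], y[300:]

alpha = 0.1  # target miscoverage level
q_lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_train, y_train)
q_hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_train, y_train)

# Conformity scores: how far calibration targets fall outside the predicted band.
scores = np.maximum(q_lo.predict(X_cal) - y_cal, y_cal - q_hi.predict(X_cal))
level = min(1.0, np.ceil((1 - alpha) * (len(y_cal) + 1)) / len(y_cal))
q = np.quantile(scores, level)

# Calibrated predictive interval for new points (e.g., candidate configurations).
X_new = np.linspace(-2, 2, 5).reshape(-1, 1)
lower, upper = q_lo.predict(X_new) - q, q_hi.predict(X_new) + q
```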

Renate: A Library for Real-World Continual Learning

1 code implementation • 24 Apr 2023 Martin Wistuba, Martin Ferianc, Lukas Balles, Cedric Archambeau, Giovanni Zappella

We discuss requirements for the use of continual learning algorithms in practice, from which we derive design principles for Renate.

Continual Learning

Private Synthetic Data for Multitask Learning and Marginal Queries

no code implementations • 15 Sep 2022 Giuseppe Vietri, Cedric Archambeau, Sergul Aydore, William Brown, Michael Kearns, Aaron Roth, Ankit Siva, Shuai Tang, Zhiwei Steven Wu

A key innovation in our algorithm is the ability to directly handle numerical features, in contrast to a number of related prior approaches which require numerical features to be first converted into high-cardinality categorical features via a binning strategy.
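For context, a minimal illustration of the binning step that such prior approaches require, turning a numerical feature into a high-cardinality categorical code; the data and bin count are made up for the example.

```python
import numpy as np

# A numerical feature (e.g., income) that marginal-query methods would discretize.
incomes = np.array([23_500.0, 48_200.0, 51_900.0, 120_000.0])
bin_edges = np.linspace(0, 150_000, num=64)     # 63 bins -> high-cardinality categories
income_codes = np.digitize(incomes, bin_edges)  # categorical bin index per record
```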

Uncertainty Calibration in Bayesian Neural Networks via Distance-Aware Priors

no code implementations • 17 Jul 2022 Gianluca Detommaso, Alberto Gasparin, Andrew Wilson, Cedric Archambeau

As we move away from the data, the predictive uncertainty should increase, since a great variety of explanations are consistent with the little available information.

regression
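A toy sketch of that intuition: let the prior scale grow with the distance to the nearest training point. The scaling rule below is an illustrative assumption, not the distance-aware prior construction from the paper.

```python
import numpy as np

X_train = np.array([[0.0], [0.5], [1.0]])

def prior_scale(x, base=1.0, rate=2.0):
    # Distance to the nearest training point controls the prior standard deviation.
    d = np.min(np.linalg.norm(X_train - x, axis=1))
    return base * (1.0 + rate * d)

print(prior_scale(np.array([0.4])))  # close to the data -> small prior scale
print(prior_scale(np.array([3.0])))  # far from the data -> large prior scale
```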

Continual Learning with Transformers for Image Classification

no code implementations • 28 Jun 2022 Beyza Ermis, Giovanni Zappella, Martin Wistuba, Aditya Rawal, Cedric Archambeau

This phenomenon is known as catastrophic forgetting, and it is often difficult to prevent due to practical constraints, such as the amount of data that can be stored or the limited computational resources that can be used.

Continual Learning Image Classification +2

Memory Efficient Continual Learning with Transformers

no code implementations • 9 Mar 2022 Beyza Ermis, Giovanni Zappella, Martin Wistuba, Aditya Rawal, Cedric Archambeau

Moreover, applications increasingly rely on large pre-trained neural networks, such as pre-trained Transformers, since practitioners may not have sufficiently large amounts of resources or data to train the model from scratch.

Continual Learning text-classification +1

Meta-Forecasting by combining Global Deep Representations with Local Adaptation

no code implementations • 5 Nov 2021 Riccardo Grazzi, Valentin Flunkert, David Salinas, Tim Januschowski, Matthias Seeger, Cedric Archambeau

While classical time series forecasting considers individual time series in isolation, recent advances based on deep learning showed that jointly learning from a large pool of related time series can boost the forecasting accuracy.

Meta-Learning Time Series +1

A multi-objective perspective on jointly tuning hardware and hyperparameters

no code implementations • 10 Jun 2021 David Salinas, Valerio Perrone, Olivier Cruchant, Cedric Archambeau

In three benchmarks where hardware is selected in addition to hyperparameters, we obtain runtime and cost reductions of at least 5.8x and 8.8x, respectively.

AutoML Transfer Learning

Dynamic Pruning of a Neural Network via Gradient Signal-to-Noise Ratio

no code implementations • ICML Workshop AutoML 2021 Julien Niklas Siems, Aaron Klein, Cedric Archambeau, Maren Mahsereci

Dynamic sparsity pruning undoes this limitation and allows the structure of the sparse neural network to be adapted during training.
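A minimal sketch of one way to score weights by gradient signal-to-noise ratio (mean gradient over minibatches divided by its standard deviation) and keep only the top fraction; the keep-fraction rule is illustrative, not the paper's exact schedule.

```python
import torch

def gradient_snr(grads):
    """grads: list of gradient tensors for one parameter, one per minibatch."""
    g = torch.stack(grads)                 # [num_batches, *param_shape]
    mean, std = g.mean(dim=0), g.std(dim=0)
    return mean.abs() / (std + 1e-12)      # high SNR = consistent gradient signal

def prune_mask(snr, keep_fraction=0.2):
    k = max(1, int(keep_fraction * snr.numel()))
    threshold = torch.topk(snr.flatten(), k).values.min()
    return (snr >= threshold).float()      # 1 = keep weight, 0 = prune
```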

Automatic Termination for Hyperparameter Optimization

1 code implementation • 16 Apr 2021 Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger, Cedric Archambeau

Across an extensive range of real-world HPO problems and baselines, we show that our termination criterion achieves a better trade-off between the test performance and optimization time.

Bayesian Optimization Hyperparameter Optimization

Model-based Asynchronous Hyperparameter and Neural Architecture Search

3 code implementations • 24 Mar 2020 Aaron Klein, Louis C. Tiao, Thibaut Lienart, Cedric Archambeau, Matthias Seeger

We introduce a model-based asynchronous multi-fidelity method for hyperparameter and neural architecture search that combines the strengths of asynchronous Hyperband and Gaussian process-based Bayesian optimization.

Bayesian Optimization Hyperparameter Optimization +1
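For intuition, a sketch of the asynchronous, rung-based promotion rule used by this family of methods (ASHA-style): a trial advances to the next resource level when it ranks in the best 1/eta of the scores recorded at its current rung. The model-based part of the paper additionally fits a Gaussian process to choose which new configurations to start; that is not shown here.

```python
def promote(rung_scores, new_score, eta=3):
    """Promote a trial if its score is in the best 1/eta at this rung (lower is better).

    rung_scores: scores already recorded at this rung by other trials.
    """
    scores = sorted(rung_scores + [new_score])
    k = len(scores) // eta          # how many trials may advance from this rung
    return k > 0 and new_score <= scores[k - 1]
```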

Cost-aware Bayesian Optimization

no code implementations • 22 Mar 2020 Eric Hans Lee, Valerio Perrone, Cedric Archambeau, Matthias Seeger

Bayesian optimization (BO) is a class of global optimization algorithms, suitable for minimizing an expensive objective function in as few function evaluations as possible.

Bayesian Optimization
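A minimal sketch of one common cost-aware heuristic, expected improvement per unit of predicted cost, with separate surrogate models for the objective and the evaluation cost; the toy data and this specific acquisition are assumptions for illustration, not necessarily the exact rule studied in the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

# Toy observations: validation error (to minimize) and measured runtime per evaluation.
X = np.array([[0.1], [0.4], [0.7], [0.9]])
y = np.array([0.80, 0.55, 0.60, 0.75])
c = np.array([30.0, 65.0, 120.0, 250.0])

gp_objective = GaussianProcessRegressor(normalize_y=True).fit(X, y)
gp_cost = GaussianProcessRegressor(normalize_y=True).fit(X, np.log(c))  # model cost in log space

def ei_per_cost(X_cand, best_y):
    mu, sigma = gp_objective.predict(X_cand, return_std=True)
    z = (best_y - mu) / np.maximum(sigma, 1e-9)
    ei = (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # EI for minimization
    cost = np.exp(gp_cost.predict(X_cand))                   # predicted runtime
    return ei / np.maximum(cost, 1e-9)

X_cand = np.linspace(0, 1, 101).reshape(-1, 1)
x_next = X_cand[np.argmax(ei_per_cost(X_cand, y.min()))]     # next point to evaluate
```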

Constrained Bayesian Optimization with Max-Value Entropy Search

no code implementations • 15 Oct 2019 Valerio Perrone, Iaroslav Shcherbatyi, Rodolphe Jenatton, Cedric Archambeau, Matthias Seeger

We propose constrained Max-value Entropy Search (cMES), a novel information-theoretic acquisition function implementing this formulation.

Bayesian Optimization Hyperparameter Optimization

Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning

no code implementations • NeurIPS 2019 Valerio Perrone, Huibin Shen, Matthias Seeger, Cedric Archambeau, Rodolphe Jenatton

Despite its simplicity, we show that our approach considerably boosts BO by reducing the size of the search space, thus accelerating the optimization of a variety of black-box optimization problems.

Bayesian Optimization Hyperparameter Optimization +1
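A simplified sketch of learning a smaller search space from transfer data: keep only the bounding box spanned by the best configuration found on each previous task. The numbers are made up, and the paper's construction of low-volume search spaces is more refined than this.

```python
import numpy as np

# Rows = best hyperparameters found on each previous task,
# columns = (learning rate, dropout, hidden units).
best_per_task = np.array([
    [1e-3, 0.80, 128],
    [3e-4, 0.85, 256],
    [1e-3, 0.90, 192],
])
lower, upper = best_per_task.min(axis=0), best_per_task.max(axis=0)
# New BO search space: the box [lower, upper] instead of the original wide ranges.
```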

Scalable Hyperparameter Transfer Learning

no code implementations • NeurIPS 2018 Valerio Perrone, Rodolphe Jenatton, Matthias W. Seeger, Cedric Archambeau

Bayesian optimization (BO) is a model-based approach for gradient-free black-box function optimization, such as hyperparameter optimization.

Bayesian Optimization Hyperparameter Optimization +2

Bayesian Optimization with Tree-structured Dependencies

no code implementations • ICML 2017 Rodolphe Jenatton, Cedric Archambeau, Javier González, Matthias Seeger

The benefit of leveraging this structure is twofold: we explore the search space more efficiently and posterior inference scales more favorably with the number of observations than Gaussian Process-based approaches published in the literature.

Bayesian Optimization Binary Classification +1
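For illustration, a tree-structured (conditional) search space in which the active hyperparameters depend on earlier choices, so the objective only depends on the parameters along one root-to-leaf path; the dictionary below is a made-up example, not a space from the paper.

```python
# Which hyperparameters exist depends on the branch chosen at the root.
search_space = {
    "optimizer": {
        "sgd":  {"learning_rate": (1e-4, 1e-1), "momentum": (0.0, 0.99)},
        "adam": {"learning_rate": (1e-5, 1e-2), "beta2": (0.9, 0.999)},
    }
}

def active_parameters(optimizer_choice):
    # Only the subtree under the chosen branch is sampled and modelled.
    return search_space["optimizer"][optimizer_choice]
```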

Online optimization and regret guarantees for non-additive long-term constraints

no code implementations • 17 Feb 2016 Rodolphe Jenatton, Jim Huang, Dominik Csiba, Cedric Archambeau

We consider online optimization in the 1-lookahead setting, where the objective does not decompose additively over the rounds of the online game.

Incremental Variational Inference for Latent Dirichlet Allocation

no code implementations • 17 Jul 2015 Cedric Archambeau, Beyza Ermis

We introduce incremental variational inference and apply it to latent Dirichlet allocation (LDA).

Variational Inference
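As a related-in-spirit illustration only: scikit-learn's LatentDirichletAllocation updates its variational parameters batch by batch via partial_fit (online variational Bayes). The incremental scheme introduced in the paper is a different algorithm; this just shows the batch-wise update pattern for LDA.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "topic models learn word co-occurrence",
    "variational inference optimizes a lower bound",
    "latent dirichlet allocation is a topic model",
]
X = CountVectorizer().fit_transform(docs)   # fixed vocabulary across batches

lda = LatentDirichletAllocation(n_components=2, random_state=0)
for start in range(0, X.shape[0], 2):       # feed documents in mini-batches
    lda.partial_fit(X[start:start + 2])
```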

Overlapping Trace Norms in Multi-View Learning

no code implementations • 24 Apr 2014 Behrouz Behmardi, Cedric Archambeau, Guillaume Bouchard

Multi-view learning leverages correlations between different sources of data to make predictions in one view based on observations in another view.

Imputation Multi-View Learning
