Search Results for author: Cédric Archambeau

Found 20 papers, 3 papers with code

Explaining Probabilistic Models with Distributional Values

no code implementations • 15 Feb 2024 • Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger

We argue that there is often a critical mismatch between what one wishes to explain (e.g., the output of a classifier) and what current methods such as SHAP explain (e.g., the scalar probability of a class).

PASHA: Efficient HPO and NAS with Progressive Resource Allocation

2 code implementations • 14 Jul 2022 • Ondrej Bohdal, Lukas Balles, Martin Wistuba, Beyza Ermis, Cédric Archambeau, Giovanni Zappella

Hyperparameter optimization (HPO) and neural architecture search (NAS) are methods of choice to obtain the best-in-class machine learning models, but in practice they can be costly to run.

Tasks: BIG-bench Machine Learning, Hyperparameter Optimization (+1 more)

Gradient-Matching Coresets for Rehearsal-Based Continual Learning

no code implementations • 28 Mar 2022 • Lukas Balles, Giovanni Zappella, Cédric Archambeau

Most widely-used CL methods rely on a rehearsal memory of data points to be reused while training on new data.

Tasks: Continual Learning, Management

More Than Words: Towards Better Quality Interpretations of Text Classifiers

no code implementations • 23 Dec 2021 • Muhammad Bilal Zafar, Philipp Schmidt, Michele Donini, Cédric Archambeau, Felix Biessmann, Sanjiv Ranjan Das, Krishnaram Kenthapadi

The large size and complex decision mechanisms of state-of-the-art text classifiers make it difficult for humans to understand their predictions, leading to a potential lack of trust by the users.

Tasks: Feature Importance, Sentence

Gradient-matching coresets for continual learning

no code implementations • 9 Dec 2021 • Lukas Balles, Giovanni Zappella, Cédric Archambeau

We devise a coreset selection method based on the idea of gradient matching: The gradients induced by the coreset should match, as closely as possible, those induced by the original training dataset.

Tasks: Continual Learning
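The gradient-matching idea in the snippet above can be sketched in a few lines: given per-example gradients, greedily pick the examples whose mean gradient stays as close as possible to the full-dataset mean gradient. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name `gradient_matching_coreset` and the greedy inner-product scoring rule are choices made for the sketch.

```python
import numpy as np

def gradient_matching_coreset(G, k):
    """Greedily pick k rows of G (per-example gradients, shape n x d)
    so that the coreset's mean gradient tracks the full mean gradient."""
    target = G.mean(axis=0)            # gradient induced by the full dataset
    selected = []
    residual = target.copy()           # part of the target the coreset misses
    for _ in range(k):
        scores = G @ residual          # alignment of each example with the residual
        scores[selected] = -np.inf     # never re-select an example
        selected.append(int(np.argmax(scores)))
        residual = target - G[selected].mean(axis=0)
    return selected
```

In practice one would recompute per-example gradients as the model trains and could weight the selected points; the fixed gradient matrix here keeps the sketch self-contained.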

Multi-objective Asynchronous Successive Halving

2 code implementations • 23 Jun 2021 • Robin Schmucker, Michele Donini, Muhammad Bilal Zafar, David Salinas, Cédric Archambeau

Hyperparameter optimization (HPO) is increasingly used to automatically tune the predictive performance (e.g., accuracy) of machine learning models.

Tasks: Fairness, Hyperparameter Optimization (+3 more)
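For context on the title above: the single-objective, synchronous successive-halving routine that asynchronous variants build on can be sketched as follows. This is a generic textbook sketch, not the paper's multi-objective asynchronous algorithm; the function name, the `eta` reduction factor, and the toy `evaluate(config, budget)` signature are assumptions made for illustration.

```python
def successive_halving(configs, evaluate, min_budget=1, eta=3):
    """Synchronous successive halving: evaluate all configs on a small
    budget, keep the best 1/eta fraction, and repeat with eta-times
    the budget until one config remains."""
    budget = min_budget
    while len(configs) > 1:
        scores = {c: evaluate(c, budget) for c in configs}
        k = max(1, len(configs) // eta)
        configs = sorted(configs, key=scores.get)[:k]  # lower loss is better
        budget *= eta
    return configs[0]
```

The multi-objective and asynchronous extensions change the selection rule (which configs count as "best") and remove the synchronization barrier between rounds, but the halving schedule is the same.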

On the Lack of Robust Interpretability of Neural Text Classifiers

no code implementations • Findings (ACL) 2021 • Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Das, Krishnaram Kenthapadi

With the ever-increasing complexity of neural language models, practitioners have turned to methods for understanding the predictions of these models.

Hyperparameter Transfer Learning with Adaptive Complexity

no code implementations • 25 Feb 2021 • Samuel Horváth, Aaron Klein, Peter Richtárik, Cédric Archambeau

Bayesian optimization (BO) is a sample efficient approach to automatically tune the hyperparameters of machine learning models.

Tasks: Bayesian Optimization, Decision Making (+1 more)

Fair Bayesian Optimization

no code implementations • 9 Jun 2020 • Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, Cédric Archambeau

Moreover, our method can be used in synergy with such specialized fairness techniques to tune their hyperparameters.

Tasks: Bayesian Optimization, Fairness

An interpretable latent variable model for attribute applicability in the Amazon catalogue

no code implementations • 30 Nov 2017 • Tammo Rukat, Dustin Lange, Cédric Archambeau

Learning attribute applicability of products in the Amazon catalog (e.g., predicting that a shoe should have a value for size, but not for battery-type) at scale is a challenge.

Tasks: Attribute

Adaptive Algorithms for Online Convex Optimization with Long-term Constraints

no code implementations • 23 Dec 2015 • Rodolphe Jenatton, Jim Huang, Cédric Archambeau

We present an adaptive online gradient descent algorithm to solve online convex optimization problems with long-term constraints, which are constraints that need to be satisfied when accumulated over a finite number of rounds T, but can be violated in intermediate rounds.
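The long-term-constraint setting described above can be illustrated with a standard primal-dual gradient scheme: a dual variable accumulates constraint violation and steers the primal iterates back toward feasibility over the horizon. This is a generic textbook sketch under the assumption that the constraint is encoded as g(x) <= 0, not the paper's adaptive algorithm; the function name and fixed step size `eta` are choices made for the sketch.

```python
def primal_dual_ogd(grad_f, g, grad_g, x0, T, eta=0.1):
    """Gradient descent on the Lagrangian f(x) + lam * g(x).
    The constraint g(x) <= 0 may be violated in individual rounds,
    but the dual variable lam grows with accumulated violation and
    pushes the iterates to satisfy it over the T rounds."""
    x, lam = float(x0), 0.0
    xs = []
    for _ in range(T):
        x = x - eta * (grad_f(x) + lam * grad_g(x))  # primal descent step
        lam = max(0.0, lam + eta * g(x))             # dual ascent on violation
        xs.append(x)
    return xs
```

For example, minimizing f(x) = x^2 under the constraint x >= 1 (i.e., g(x) = 1 - x) starts at the unconstrained minimum x = 0, violates the constraint early on, and is driven toward x = 1 as lam grows.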

Online Inference for Relation Extraction with a Reduced Feature Set

no code implementations • 18 Apr 2015 • Maxim Rabinovich, Cédric Archambeau

Access to web-scale corpora is gradually bringing robust automatic knowledge base creation and extension within reach.

Tasks: Relation, Relation Extraction (+1 more)

Sparse probabilistic projections

no code implementations • NeurIPS 2008 • Cédric Archambeau, Francis R. Bach

We present a generative model for performing sparse probabilistic projections, which includes sparse principal component analysis and sparse canonical correlation analysis as special cases.

Variational Inference for Diffusion Processes

no code implementations • NeurIPS 2007 • Cédric Archambeau, Manfred Opper, Yuan Shen, Dan Cornford, John S. Shawe-Taylor

Diffusion processes are a family of continuous-time continuous-state stochastic processes that are in general only partially observed.

Tasks: Variational Inference
