no code implementations • 15 Feb 2024 • Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger
We argue that often there is a critical mismatch between what one wishes to explain (e.g. the output of a classifier) and what current methods such as SHAP explain (e.g. the scalar probability of a class).
1 code implementation • 23 Oct 2023 • Pola Schwöbel, Jacek Golebiowski, Michele Donini, Cédric Archambeau, Danish Pruthi
Large language models (LLMs) encode vast amounts of world knowledge.
2 code implementations • 14 Jul 2022 • Ondrej Bohdal, Lukas Balles, Martin Wistuba, Beyza Ermis, Cédric Archambeau, Giovanni Zappella
Hyperparameter optimization (HPO) and neural architecture search (NAS) are methods of choice to obtain the best-in-class machine learning models, but in practice they can be costly to run.
no code implementations • 28 Mar 2022 • Lukas Balles, Giovanni Zappella, Cédric Archambeau
Most widely-used CL methods rely on a rehearsal memory of data points to be reused while training on new data.
no code implementations • 23 Dec 2021 • Muhammad Bilal Zafar, Philipp Schmidt, Michele Donini, Cédric Archambeau, Felix Biessmann, Sanjiv Ranjan Das, Krishnaram Kenthapadi
The large size and complex decision mechanisms of state-of-the-art text classifiers make it difficult for humans to understand their predictions, leading to a potential lack of trust among users.
no code implementations • 9 Dec 2021 • Lukas Balles, Giovanni Zappella, Cédric Archambeau
We devise a coreset selection method based on the idea of gradient matching: The gradients induced by the coreset should match, as closely as possible, those induced by the original training dataset.
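The gradient-matching idea above can be sketched in a few lines: pick the subset whose mean gradient stays closest to the full-data gradient. This is a minimal illustrative toy (greedy selection on a linear model with per-example squared-loss gradients), not the paper's actual algorithm; all variable names and the greedy strategy are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data and a linear model (illustrative only).
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)
w = np.zeros(5)  # current model parameters

# Per-example gradient of 0.5 * (x . w - y)^2 with respect to w.
grads = (X @ w - y)[:, None] * X       # shape (200, 5)
full_grad = grads.mean(axis=0)         # gradient on the full dataset

# Greedy selection: at each step, add the point that brings the
# coreset's mean gradient closest to the full-data gradient.
coreset = []
for _ in range(20):
    best_i, best_err = None, np.inf
    for i in range(len(X)):
        if i in coreset:
            continue
        cand = coreset + [i]
        err = np.linalg.norm(grads[cand].mean(axis=0) - full_grad)
        if err < best_err:
            best_i, best_err = i, err
    coreset.append(best_i)

print(len(coreset), best_err)  # 20 selected points, residual mismatch
```

The exhaustive inner loop is quadratic in the dataset size; it is only meant to make the matching objective concrete, not to be efficient.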
2 code implementations • 23 Jun 2021 • Robin Schmucker, Michele Donini, Muhammad Bilal Zafar, David Salinas, Cédric Archambeau
Hyperparameter optimization (HPO) is increasingly used to automatically tune the predictive performance (e.g., accuracy) of machine learning models.
no code implementations • Findings (ACL) 2021 • Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Das, Krishnaram Kenthapadi
With the ever-increasing complexity of neural language models, practitioners have turned to methods for understanding the predictions of these models.
no code implementations • ICML Workshop AutoML 2021 • Giovanni Zappella, David Salinas, Cédric Archambeau
In this work we consider the problem of repeated hyperparameter and neural architecture search (HNAS).
no code implementations • 25 Feb 2021 • Samuel Horváth, Aaron Klein, Peter Richtárik, Cédric Archambeau
Bayesian optimization (BO) is a sample efficient approach to automatically tune the hyperparameters of machine learning models.
no code implementations • 15 Dec 2020 • Valerio Perrone, Huibin Shen, Aida Zolic, Iaroslav Shcherbatyi, Amr Ahmed, Tanya Bansal, Michele Donini, Fela Winkelmolen, Rodolphe Jenatton, Jean Baptiste Faddoul, Barbara Pogorzelska, Miroslav Miladinovic, Krishnaram Kenthapadi, Matthias Seeger, Cédric Archambeau
To democratize access to machine learning systems, it is essential to automate the tuning.
no code implementations • 15 Dec 2020 • Piali Das, Valerio Perrone, Nikita Ivkin, Tanya Bansal, Zohar Karnin, Huibin Shen, Iaroslav Shcherbatyi, Yotam Elor, Wilton Wu, Aida Zolic, Thibaut Lienart, Alex Tang, Amr Ahmed, Jean Baptiste Faddoul, Rodolphe Jenatton, Fela Winkelmolen, Philip Gautier, Leo Dirac, Andre Perunicic, Miroslav Miladinovic, Giovanni Zappella, Cédric Archambeau, Matthias Seeger, Bhaskar Dutt, Laurence Rouesnel
AutoML systems provide a black-box solution to machine learning problems by selecting the right way of processing features, choosing an algorithm and tuning the hyperparameters of the entire pipeline.
no code implementations • 23 Nov 2020 • Gauthier Guinet, Valerio Perrone, Cédric Archambeau
Bayesian optimization (BO) is a popular method to optimize expensive black-box functions.
no code implementations • 9 Jun 2020 • Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, Cédric Archambeau
Moreover, our method can be used in synergy with such specialized fairness techniques to tune their hyperparameters.
no code implementations • 30 Nov 2017 • Tammo Rukat, Dustin Lange, Cédric Archambeau
Learning attribute applicability of products in the Amazon catalog (e.g., predicting that a shoe should have a value for size, but not for battery-type) at scale is a challenge.
no code implementations • 23 Dec 2015 • Rodolphe Jenatton, Jim Huang, Cédric Archambeau
We present an adaptive online gradient descent algorithm to solve online convex optimization problems with long-term constraints, which are constraints that need to be satisfied when accumulated over a finite number of rounds T, but can be violated in intermediate rounds.
no code implementations • 18 Apr 2015 • Maxim Rabinovich, Cédric Archambeau
Access to web-scale corpora is gradually bringing robust automatic knowledge base creation and extension within reach.
no code implementations • NeurIPS 2011 • Shengbo Guo, Onno Zoeter, Cédric Archambeau
We propose a new sparse Bayesian model for multi-task regression and classification.
no code implementations • NeurIPS 2008 • Cédric Archambeau, Francis R. Bach
We present a generative model for performing sparse probabilistic projections, which includes sparse principal component analysis and sparse canonical correlation analysis as special cases.
no code implementations • NeurIPS 2007 • Cédric Archambeau, Manfred Opper, Yuan Shen, Dan Cornford, John S. Shawe-Taylor
Diffusion processes are a family of continuous-time continuous-state stochastic processes that are in general only partially observed.