no code implementations • 12 Apr 2024 • Etash Guha, Shlok Natarajan, Thomas Möllenhoff, Mohammad Emtiyaz Khan, Eugene Ndiaye
Conformal prediction (CP) for regression can be challenging, especially when the output distribution is heteroscedastic, multimodal, or skewed.
no code implementations • 5 Feb 2024 • Yu-Guan Hsieh, James Thornton, Eugene Ndiaye, Michal Klein, Marco Cuturi, Pierre Ablin
Beyond minimizing a single training loss, many deep learning estimation pipelines rely on an auxiliary objective to quantify and encourage desirable properties of the model (e.g., performance on another dataset, robustness, agreement with a prior).
no code implementations • 27 Jan 2024 • Charles Guille-Escuret, Eugene Ndiaye
We explore a novel methodology for constructing confidence regions for parameters of linear models, using predictions from an arbitrary predictor.
1 code implementation • 3 Dec 2023 • Matthew Lau, Ismaila Seck, Athanasios P Meliopoulos, Wenke Lee, Eugene Ndiaye
From its properties, we intuit that equality separation is suitable for anomaly detection.
1 code implementation • 11 Jul 2023 • Etash Kumar Guha, Eugene Ndiaye, Xiaoming Huo
Given a sequence of observable variables $\{(x_1, y_1), \ldots, (x_n, y_n)\}$, the conformal prediction method estimates a confidence set for $y_{n+1}$ given $x_{n+1}$ that is valid for any finite sample size by merely assuming that the joint distribution of the data is permutation invariant.
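The recipe above can be illustrated in its simplest split-conformal variant (a sketch only, not the authors' exact procedure; the synthetic data, linear model, and miscoverage level `alpha` below are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: a noisy linear response, purely for illustration.
x = rng.normal(size=(200, 3))
y = x @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200)

# Split conformal: fit on one half, calibrate on the other.
x_fit, y_fit = x[:100], y[:100]
x_cal, y_cal = x[100:], y[100:]
model = LinearRegression().fit(x_fit, y_fit)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - model.predict(x_cal))

# Finite-sample-valid quantile: ceil((n+1)(1-alpha))/n empirical quantile.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Confidence set for a new point: prediction +/- q.
x_new = rng.normal(size=(1, 3))
y_hat = model.predict(x_new)[0]
interval = (y_hat - q, y_hat + q)
```

Under exchangeability, `interval` covers the true response with probability at least 1 - alpha, whatever the quality of the fitted model.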
no code implementations • 31 Oct 2022 • Chancellor Johnstone, Eugene Ndiaye
It is common in machine learning to estimate a response y given covariate information x.
1 code implementation • 28 May 2022 • Diptesh Das, Eugene Ndiaye, Ichiro Takeuchi
In predictive modeling for high-stake decision-making, predictors must be not only accurate but also reliable.
1 code implementation • 19 Dec 2021 • Eugene Ndiaye
When one observes a sequence of variables $(x_1, y_1), \ldots, (x_n, y_n)$, Conformal Prediction (CP) is a methodology that makes it possible to estimate a confidence set for $y_{n+1}$ given $x_{n+1}$ by merely assuming that the distribution of the data is exchangeable.
no code implementations • 9 Dec 2021 • Eugene Ndiaye, Ichiro Takeuchi
Path-following algorithms are frequently used in composite optimization problems where a series of subproblems, with varying regularization hyperparameters, are solved sequentially.
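This sequential structure is commonly exploited through warm starts: each subproblem is initialized at the previous solution. A minimal sketch with scikit-learn's `Lasso` (the data and the grid of regularization parameters are invented for the example):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=100)

# Decreasing grid of regularization parameters (hypothetical choice).
lambdas = np.logspace(0, -3, 10)

# warm_start=True makes each fit start from the previous coefficients,
# so successive subproblems on the path converge faster.
model = Lasso(alpha=lambdas[0], warm_start=True, max_iter=10_000)
path = []
for lam in lambdas:
    model.set_params(alpha=lam)
    model.fit(X, y)
    path.append(model.coef_.copy())
```

As the regularization parameter decreases along the path, the solution typically becomes less sparse, and each solve benefits from the nearby previous solution.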
1 code implementation • 14 Apr 2021 • Eugene Ndiaye, Ichiro Takeuchi
Conformal prediction constructs a confidence set for an unobserved response of a feature vector based on previous identically distributed and exchangeable observations of responses and features.
no code implementations • 6 Sep 2020 • Eugene Ndiaye, Olivier Fercoq, Joseph Salmon
Screening rules were recently introduced as a technique for explicitly identifying active structures, such as sparsity, in optimization problems arising in machine learning.
1 code implementation • NeurIPS 2019 • Eugene Ndiaye, Ichiro Takeuchi
If you are predicting the label $y$ of a new object with $\hat y$, how confident are you that $y = \hat y$?
1 code implementation • 12 Oct 2018 • Eugene Ndiaye, Tam Le, Olivier Fercoq, Joseph Salmon, Ichiro Takeuchi
Popular machine learning estimators involve regularization parameters that can be challenging to tune, and standard strategies rely on grid search for this task.
1 code implementation • NeurIPS 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon
For statistical learning in high dimension, sparse regularizations have proven useful to boost both computational and statistical efficiency.
1 code implementation • 17 Nov 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon
In high dimensional regression settings, sparsity-enforcing penalties have proved useful to regularize the data-fitting term.
2 code implementations • 8 Jun 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Vincent Leclère, Joseph Salmon
In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance.
1 code implementation • 19 Feb 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon
We adapt recent safe screening rules, which discard irrelevant features/groups early in the solver, to the case of the Sparse-Group Lasso.
no code implementations • NeurIPS 2015 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon
The GAP Safe rule can cope with any iterative solver, and we illustrate its performance on coordinate descent for multi-task Lasso, binary and multinomial logistic regression, demonstrating significant speed-ups on all tested datasets with respect to previous safe rules.
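A minimal sketch of a GAP Safe sphere test for the plain Lasso, following the duality-gap construction described in these papers (this is a didactic reimplementation under stated assumptions, not the authors' released code; the function name and the toy data are invented for the example):

```python
import numpy as np

def gap_safe_screen(X, y, beta, lam):
    """GAP Safe screening test for the Lasso
    min_b 0.5*||y - X b||^2 + lam*||b||_1.

    Returns a boolean mask: True means feature j can be safely
    discarded (its optimal coefficient is zero) at level lam.
    """
    r = y - X @ beta                        # primal residual
    # Dual-feasible point obtained by rescaling the residual.
    theta = r / max(lam, np.max(np.abs(X.T @ r)))
    # Primal and dual objectives give the duality gap.
    primal = 0.5 * r @ r + lam * np.abs(beta).sum()
    dual = 0.5 * (y @ y) - 0.5 * lam**2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)
    # GAP Safe sphere of radius sqrt(2*gap)/lam centered at theta:
    # feature j is inactive if its worst-case correlation stays below 1.
    radius = np.sqrt(2 * gap) / lam
    scores = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0)
    return scores < 1.0

# Toy usage: above lambda_max, the zero vector is optimal and every
# feature is screened out.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))
y = rng.normal(size=30)
lam_max = np.max(np.abs(X.T @ y))
mask = gap_safe_screen(X, y, np.zeros(8), 2 * lam_max)
```

Because the test only needs a primal iterate and a dual-feasible point, it can be run inside any iterative solver, shrinking the active set as the gap decreases.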