no code implementations • 12 Feb 2024 • Parikshit Gopalan, Lunjia Hu, Guy N. Rothblum
Projected smooth calibration gives strong guarantees for all downstream decision makers who want to use the predictor for binary classification problems of the form: does the label belong to a subset $T \subseteq [k]$ (e.g., is this an image of an animal)?
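To make the subset-induced binary problem concrete, the sketch below projects a multiclass predictor onto the event "label in $T$" and runs a plain binned calibration check on it. This is an illustrative expected-calibration-error style diagnostic, not the paper's projected smooth calibration measure; the function name and binning scheme are our own.

```python
from collections import defaultdict

def subset_calibration_error(probs, labels, T, bins=10):
    """Binned calibration error of the induced binary predictor
    p_T(x) = sum_{j in T} p_j(x) for the event "label in T".
    Illustrative ECE-style check only, not the paper's
    projected smooth calibration notion."""
    buckets = defaultdict(list)
    for p, y in zip(probs, labels):
        p_T = sum(p[j] for j in T)          # predicted P[label in T]
        b = min(int(p_T * bins), bins - 1)  # clamp p_T = 1.0 into last bin
        buckets[b].append((p_T, 1.0 if y in T else 0.0))
    n = len(probs)
    err = 0.0
    for pairs in buckets.values():
        avg_pred = sum(p for p, _ in pairs) / len(pairs)
        avg_hit = sum(h for _, h in pairs) / len(pairs)
        err += (len(pairs) / n) * abs(avg_pred - avg_hit)
    return err
```

A predictor that assigns probability 0.5 to class 0 while class 0 occurs half the time scores zero here; overstating that probability inflates the error.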
no code implementations • 26 Jan 2024 • Parikshit Gopalan, Princewill Okoroafor, Prasad Raghavendra, Abhishek Shetty, Mihir Singhal
An \textit{omnipredictor} for a class $\mathcal L$ of loss functions and a class $\mathcal C$ of hypotheses is a predictor whose predictions incur less expected loss than the best hypothesis in $\mathcal C$ for every loss in $\mathcal L$.
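The way a downstream user extracts loss-specific decisions from a single predictor can be sketched as a best-response step: given the predicted probability, choose the action with smaller expected loss under your own loss function. This binary-action sketch is illustrative; the function name and loss signatures are our own, not the paper's formalism.

```python
def best_response(p, loss):
    """Post-processing a downstream user of an omnipredictor applies:
    given predicted probability p that the label is 1, pick the action
    minimizing expected loss under that user's loss(action, label).
    Illustrative two-action sketch."""
    return min((0, 1), key=lambda a: p * loss(a, 1) + (1 - p) * loss(a, 0))
```

Different losses yield different actions from the same prediction: under 0-1 loss, p = 0.2 favors action 0, but if false negatives cost five times more than false positives, the same p = 0.2 favors action 1.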
no code implementations • NeurIPS 2023 • Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, Preetum Nakkiran
Optimizing proper loss functions is widely believed to yield predictors with good calibration properties: the intuition is that for such losses, the global optimum is to predict the ground-truth probabilities, which is indeed calibrated.
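The intuition about proper losses can be verified directly for the squared loss: if the true probability of label 1 is $q$, the expected squared loss of predicting $p$ is $q(1-p)^2 + (1-q)p^2$, minimized exactly at $p = q$. A minimal numeric check (function name ours):

```python
def expected_sq_loss(p, q):
    """Expected squared loss of predicting p when P[y = 1] = q.
    Properness of the squared loss means this is minimized at p = q."""
    return q * (1 - p) ** 2 + (1 - q) * p ** 2
```

Minimizing over a grid of predictions recovers the ground-truth probability, which is the calibrated answer.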
no code implementations • 19 Apr 2023 • Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, Adam Tauman Kalai, Preetum Nakkiran
We show that minimizing the squared loss over all neural nets of size $n$ implies multicalibration for all but a bounded number of unlucky values of $n$.
no code implementations • NeurIPS 2023 • Parikshit Gopalan, Michael P. Kim, Omer Reingold
We establish an equivalence between swap variants of omniprediction and multicalibration, and swap agnostic learning.
no code implementations • 30 Nov 2022 • Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, Preetum Nakkiran
We study the fundamental question of how to define and measure the distance from calibration for probabilistic predictors.
no code implementations • 16 Oct 2022 • Parikshit Gopalan, Lunjia Hu, Michael P. Kim, Omer Reingold, Udi Wieder
This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration.
no code implementations • 2 Mar 2022 • Parikshit Gopalan, Michael P. Kim, Mihir Singhal, Shengjia Zhao
This stringent notion -- that predictions be well-calibrated across a rich class of intersecting subpopulations -- provides its strong guarantees at a cost: the computational and sample complexities of learning multicalibrated predictors are high, and grow exponentially with the number of class labels.
1 code implementation • 28 Feb 2022 • Parikshit Gopalan, Nina Narodytska, Omer Reingold, Vatsal Sharan, Udi Wieder
Estimating the Kullback-Leibler (KL) divergence between two distributions given samples from them is well-studied in machine learning and information theory.
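As a baseline for the estimation problem, the sketch below computes a naive plug-in estimate of $\mathrm{KL}(P \,\|\, Q)$ over a discrete domain, with additive smoothing so outcomes unseen in the $Q$-sample do not make the estimate infinite. This is a standard baseline for illustration only, not the estimator studied in the paper; the smoothing parameter and function name are our own.

```python
import math
from collections import Counter

def kl_plugin(xs, ys, alpha=1.0):
    """Plug-in KL(P||Q) estimate from samples xs ~ P and ys ~ Q over a
    discrete domain, with add-alpha smoothing to avoid log(1/0) when an
    outcome is absent from ys. Illustrative baseline only."""
    support = set(xs) | set(ys)
    cp, cq = Counter(xs), Counter(ys)
    k = len(support)
    kl = 0.0
    for v in support:
        p = (cp[v] + alpha) / (len(xs) + alpha * k)  # smoothed P-hat(v)
        q = (cq[v] + alpha) / (len(ys) + alpha * k)  # smoothed Q-hat(v)
        kl += p * math.log(p / q)
    return kl
```

On identical samples the estimate is exactly zero, and it is strictly positive when the empirical distributions disagree, as Gibbs' inequality guarantees for any pair of distributions.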
no code implementations • 11 Sep 2021 • Parikshit Gopalan, Adam Tauman Kalai, Omer Reingold, Vatsal Sharan, Udi Wieder
We suggest a rigorous new paradigm for loss minimization in machine learning where the loss function can be ignored at the time of learning and only be taken into account when deciding an action.
no code implementations • 10 Mar 2021 • Parikshit Gopalan, Omer Reingold, Vatsal Sharan, Udi Wieder
We significantly strengthen previous work that uses the MaxEntropy approach, which defines the importance weights via the distribution $Q$ closest to $P$ that looks the same as $R$ on every set $C \in \mathcal{C}$, where $\mathcal{C}$ may be a huge collection of sets.
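The "match on every set in $\mathcal{C}$" constraint can be illustrated with iterative proportional fitting: multiplicatively rescale sample weights until the weighted mass of each set $C$ hits its target fraction under $R$. This toy routine (names and schedule ours) conveys the flavor of max-entropy reweighting, not the paper's strengthened algorithm.

```python
def reweight(points, sets, targets, iters=100):
    """Iterative proportional fitting: for each set C with target mass t,
    multiplicatively scale weights inside/outside C so the weighted
    fraction landing in C equals t. Toy max-entropy-style sketch."""
    w = [1.0] * len(points)
    for _ in range(iters):
        for C, t in zip(sets, targets):
            total = sum(w)
            mass = sum(wi for wi, x in zip(w, points) if x in C)
            if 0 < mass < total:
                frac = mass / total
                for i, x in enumerate(points):
                    # hit target t inside C, 1 - t outside
                    w[i] *= (t / frac) if x in C else ((1 - t) / (1 - frac))
    s = sum(w)
    return [wi / s for wi in w]
```

With a single constraint the fixed point is reached in one pass; with many overlapping sets the passes cycle until all constraints hold simultaneously.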
1 code implementation • NeurIPS 2019 • Parikshit Gopalan, Vatsal Sharan, Udi Wieder
We consider the problem of detecting anomalies in a large dataset.
no code implementations • NeurIPS 2018 • Vatsal Sharan, Parikshit Gopalan, Udi Wieder
We consider the problem of finding anomalies in high-dimensional data using popular PCA based anomaly scores.
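A common PCA-based anomaly score of the kind referenced above is the reconstruction error: project each point onto the top principal direction and score it by the squared residual. The sketch below finds that direction by power iteration on the empirical covariance; it is an illustrative single-component version, not the paper's method.

```python
def pca_residual_scores(X, iters=100):
    """Score each point by its squared distance to the top principal
    direction of the centered data (PCA reconstruction error).
    Power iteration on the covariance matrix; illustrative sketch
    using one component only."""
    n, d = len(X), len(X[0])
    mu = [sum(x[j] for x in X) / n for j in range(d)]
    Xc = [[x[j] - mu[j] for j in range(d)] for x in X]
    v = [1.0] * d
    for _ in range(iters):
        # one power-iteration step: v <- (Xc^T Xc / n) v, then normalize
        proj = [sum(x[j] * v[j] for j in range(d)) for x in Xc]
        v = [sum(p * x[j] for p, x in zip(proj, Xc)) / n for j in range(d)]
        norm = sum(c * c for c in v) ** 0.5
        v = [c / norm for c in v]
    scores = []
    for x in Xc:
        p = sum(x[j] * v[j] for j in range(d))
        scores.append(sum(c * c for c in x) - p * p)  # squared residual
    return scores
```

Points lying along the dominant direction get near-zero scores, while a point far off that direction stands out with a large residual.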