Search Results for author: Parikshit Gopalan

Found 13 papers, 2 papers with code

On Computationally Efficient Multi-Class Calibration

no code implementations • 12 Feb 2024 • Parikshit Gopalan, Lunjia Hu, Guy N. Rothblum

Projected smooth calibration gives strong guarantees for all downstream decision makers who want to use the predictor for binary classification problems of the form: does the label belong to a subset $T \subseteq [k]$ (e.g., is this an image of an animal)?

Binary Classification • Computational Efficiency
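
The reduction described in the abstract above can be made concrete with a small sketch: given a k-class probability vector, the induced binary prediction for "is the label in $T$?" is simply the total mass placed on $T$. The function and variable names below are illustrative, not taken from the paper.

import numpy as np

def subset_prediction(probs, T):
    """Collapse a k-class probability vector into a binary prediction for the
    event 'the label lies in T' (e.g. T = the indices of animal classes)."""
    return float(np.asarray(probs)[sorted(T)].sum())

# Illustrative 4-class prediction; classes 0 and 1 play the role of "animals".
print(subset_prediction([0.5, 0.2, 0.2, 0.1], T={0, 1}))  # 0.7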

Omnipredictors for Regression and the Approximate Rank of Convex Functions

no code implementations • 26 Jan 2024 • Parikshit Gopalan, Princewill Okoroafor, Prasad Raghavendra, Abhishek Shetty, Mihir Singhal

An omnipredictor for a class $\mathcal L$ of loss functions and a class $\mathcal C$ of hypotheses is a predictor whose predictions incur less expected loss than the best hypothesis in $\mathcal C$ for every loss in $\mathcal L$.

regression
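
In the usual omniprediction formulation, the guarantee stated above is written with a loss-specific post-processing $k_\ell$ applied to the prediction $p(x)$; schematically (the notation $k_\ell$ and $\varepsilon$ is introduced here only for illustration):

$$\forall \ell \in \mathcal{L}:\qquad \mathbb{E}\bigl[\ell\bigl(y,\, k_\ell(p(x))\bigr)\bigr] \;\le\; \min_{c \in \mathcal{C}} \mathbb{E}\bigl[\ell\bigl(y,\, c(x)\bigr)\bigr] + \varepsilon.$$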

When Does Optimizing a Proper Loss Yield Calibration?

no code implementations • NeurIPS 2023 • Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, Preetum Nakkiran

Optimizing proper loss functions is popularly believed to yield predictors with good calibration properties; the intuition being that for such losses, the global optimum is to predict the ground-truth probabilities, which is indeed calibrated.
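
A tiny numerical check of that intuition, using the squared loss (a proper loss) and a Bernoulli label: the constant prediction with the smallest expected loss is the ground-truth probability itself. The values here are arbitrary examples.

import numpy as np

q = 0.3                                  # ground-truth probability that y = 1
ps = np.linspace(0, 1, 1001)             # candidate constant predictions
expected_loss = q * (ps - 1) ** 2 + (1 - q) * ps ** 2   # E[(p - y)^2] under y ~ Bernoulli(q)
print(ps[np.argmin(expected_loss)])      # ~0.3: the optimum predicts the true probability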

Loss Minimization Yields Multicalibration for Large Neural Networks

no code implementations • 19 Apr 2023 • Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, Adam Tauman Kalai, Preetum Nakkiran

We show that minimizing the squared loss over all neural nets of size $n$ implies multicalibration for all but a bounded number of unlucky values of $n$.

Fairness

Swap Agnostic Learning, or Characterizing Omniprediction via Multicalibration

no code implementations • NeurIPS 2023 • Parikshit Gopalan, Michael P. Kim, Omer Reingold

We establish an equivalence between swap variants of omniprediction and multicalibration and swap agnostic learning.

Fairness

A Unifying Theory of Distance from Calibration

no code implementations • 30 Nov 2022 • Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, Preetum Nakkiran

We study the fundamental question of how to define and measure the distance from calibration for probabilistic predictors.
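
For orientation, a common practical proxy for miscalibration is the binned expected calibration error; the paper's point is precisely that the "right" ground-truth distance needs a more careful definition, so the sketch below is a baseline, not the measure proposed there.

import numpy as np

def binned_ece(preds, labels, n_bins=10):
    """Binned expected calibration error: mass-weighted average over prediction
    bins of |average label - average prediction| within the bin."""
    preds, labels = np.asarray(preds, float), np.asarray(labels, float)
    bins = np.minimum((preds * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(labels[mask].mean() - preds[mask].mean())
    return ece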

Loss Minimization through the Lens of Outcome Indistinguishability

no code implementations • 16 Oct 2022 • Parikshit Gopalan, Lunjia Hu, Michael P. Kim, Omer Reingold, Udi Wieder

This decomposition highlights the utility of a new multi-group fairness notion that we call calibrated multiaccuracy, which lies in between multiaccuracy and multicalibration.

Fairness
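
For context on the spectrum mentioned above: under the standard definition, multiaccuracy asks that the residual $y - p(x)$ be approximately uncorrelated with every function $c$ in the class $\mathcal{C}$. A rough empirical check of that condition, with hypothetical names:

import numpy as np

def multiaccuracy_violation(preds, labels, X, class_fns):
    """Largest empirical correlation |E[(y - p(x)) * c(x)]| over c in the class."""
    residual = np.asarray(labels, float) - np.asarray(preds, float)
    return max(abs(float(np.mean(residual * c(X)))) for c in class_fns)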

Low-Degree Multicalibration

no code implementations • 2 Mar 2022 • Parikshit Gopalan, Michael P. Kim, Mihir Singhal, Shengjia Zhao

This stringent notion -- that predictions be well-calibrated across a rich class of intersecting subpopulations -- provides its strong guarantees at a cost: the computational and sample complexities of learning multicalibrated predictors are high, and grow exponentially with the number of class labels.

Fairness
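
A rough sketch of the audit multicalibration requires: calibration must hold within every subpopulation in the collection simultaneously. Here each subpopulation is just a boolean mask; the names and the binning are illustrative, not the paper's algorithm.

import numpy as np

def worst_multicalibration_gap(preds, labels, groups, n_bins=10):
    """Worst calibration gap |E[y] - E[p]| over (subpopulation, prediction-bin) cells."""
    preds, labels = np.asarray(preds, float), np.asarray(labels, float)
    bins = np.minimum((preds * n_bins).astype(int), n_bins - 1)
    worst = 0.0
    for g in groups:                         # g: boolean mask selecting a subpopulation
        for b in range(n_bins):
            cell = g & (bins == b)
            if cell.any():
                worst = max(worst, abs(labels[cell].mean() - preds[cell].mean()))
    return worst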

KL Divergence Estimation with Multi-group Attribution

1 code implementation • 28 Feb 2022 • Parikshit Gopalan, Nina Narodytska, Omer Reingold, Vatsal Sharan, Udi Wieder

Estimating the Kullback-Leibler (KL) divergence between two distributions given samples from them is well-studied in machine learning and information theory.

Fairness
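
As a point of reference for the estimation problem, here is a simple plug-in (histogram) estimator of KL divergence from samples over a finite support; it is not the multi-group estimator from the paper, and the additive smoothing constant is an arbitrary choice to keep the ratio finite.

import numpy as np

def plugin_kl(samples_p, samples_q, support, alpha=1e-6):
    """Plug-in estimate of KL(P || Q) from samples over a known finite support."""
    p_hat = np.array([np.mean(np.asarray(samples_p) == s) for s in support]) + alpha
    q_hat = np.array([np.mean(np.asarray(samples_q) == s) for s in support]) + alpha
    p_hat, q_hat = p_hat / p_hat.sum(), q_hat / q_hat.sum()
    return float(np.sum(p_hat * np.log(p_hat / q_hat)))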

Omnipredictors

no code implementations • 11 Sep 2021 • Parikshit Gopalan, Adam Tauman Kalai, Omer Reingold, Vatsal Sharan, Udi Wieder

We suggest a rigorous new paradigm for loss minimization in machine learning where the loss function can be ignored at the time of learning and only be taken into account when deciding an action.

Fairness
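
A sketch of what "taking the loss into account only when deciding an action" can look like: a single predicted probability is produced at learning time, and each downstream decision maker later post-processes it against their own loss. The loss function and numbers below are made-up examples, not from the paper.

def best_action(p, loss, actions):
    """Choose the action minimizing expected loss E_{y ~ Bernoulli(p)}[loss(y, a)],
    using only the predicted probability p = Pr[y = 1 | x] and the given loss."""
    return min(actions, key=lambda a: p * loss(1, a) + (1 - p) * loss(0, a))

# Example: a decision maker for whom false negatives cost 5x more than false positives.
loss = lambda y, a: 5.0 if (y == 1 and a == 0) else 1.0 if (y == 0 and a == 1) else 0.0
print(best_action(0.25, loss, actions=[0, 1]))  # 1: act even though p < 0.5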

Multicalibrated Partitions for Importance Weights

no code implementations • 10 Mar 2021 • Parikshit Gopalan, Omer Reingold, Vatsal Sharan, Udi Wieder

We significantly strengthen previous work that uses the MaxEntropy approach, which defines the importance weights based on the distribution $Q$ closest to $P$ that looks the same as $R$ on every set $C \in \mathcal{C}$, where $\mathcal{C}$ may be a huge collection of sets.

Anomaly Detection • Domain Adaptation
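
A much-simplified illustration of matching on sets (not the MaxEntropy or multicalibrated-partition algorithm itself): if $\mathcal{C}$ were a partition into disjoint cells, the importance weight attached to a cell would just be the ratio of its empirical mass under the two sample sets. All names here are hypothetical.

import numpy as np

def cell_importance_weights(cells_p, cells_r, n_cells):
    """Per-cell importance weights when C partitions the domain:
    weight(cell) = empirical mass under P / empirical mass under R.
    cells_p, cells_r: integer cell indices of samples drawn from P and from R."""
    mass_p = np.bincount(cells_p, minlength=n_cells) / len(cells_p)
    mass_r = np.bincount(cells_r, minlength=n_cells) / len(cells_r)
    return mass_p / np.maximum(mass_r, 1e-12)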

Efficient Anomaly Detection via Matrix Sketching

no code implementations • NeurIPS 2018 • Vatsal Sharan, Parikshit Gopalan, Udi Wieder

We consider the problem of finding anomalies in high-dimensional data using popular PCA based anomaly scores.

Anomaly Detection
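
A minimal sketch of one such PCA-based score, the residual (reconstruction error) outside the top-k principal subspace, computed exactly with a full SVD; the point of the paper is that such scores can be approximated from a small matrix sketch of the data, which this snippet does not attempt.

import numpy as np

def pca_residual_scores(X, k):
    """Anomaly score for each row of X: squared distance to the top-k principal subspace."""
    Xc = X - X.mean(axis=0)                          # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    top = Vt[:k]                                     # top-k right singular vectors
    residual = Xc - (Xc @ top.T) @ top               # component outside the subspace
    return np.sum(residual ** 2, axis=1)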
