no code implementations • ICML 2020 • Debarghya Mukherjee, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun
Individual fairness was proposed to address some of the shortcomings of group fairness.
no code implementations • 17 Jan 2024 • Nabarun Deb, Debarghya Mukherjee
Our main result shows that a non-trivial trade-off between the complexity of the underlying function class and the dependence among the observations characterizes the learning rate in a large class of nonparametric problems.
no code implementations • 28 Jun 2023 • Jianqing Fan, Jiawei Ge, Debarghya Mukherjee
Uncertainty quantification for prediction is an intriguing problem with significant applications in various fields, such as biomedical science, economic studies, and weather forecasting.
no code implementations • 12 Feb 2023 • Sohom Bhattacharya, Jianqing Fan, Debarghya Mukherjee
We show that, under certain standard assumptions, debiased deep neural networks achieve the minimax optimal rate in terms of both $n$ and $d$.
1 code implementation • 26 May 2022 • Subha Maity, Debarghya Mukherjee, Moulinath Banerjee, Yuekai Sun
Time-varying stochastic optimization problems frequently arise in machine learning practice (e.g., gradual domain shift, object tracking, strategic classification).
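As a minimal, hypothetical sketch of what a time-varying stochastic optimization problem looks like (not the paper's algorithm; all constants and names here are illustrative), consider plain constant-step SGD tracking a slowly drifting optimum under gradual shift:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0                      # current iterate
lr = 0.3                         # constant step size
errors = []
for t in range(200):
    target = 0.01 * t            # optimum drifts slowly over time (gradual shift)
    # noisy gradient of the time-t loss (theta - target)^2
    grad = 2.0 * (theta - target) + rng.normal(scale=0.1)
    theta -= lr * grad
    errors.append(abs(theta - target))
# with a constant step size the iterate does not converge to a point;
# it settles into tracking the moving optimum with a small steady-state lag
```

Unlike the static setting, the natural performance measure here is tracking error over time rather than distance to a fixed minimizer.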
no code implementations • 1 May 2022 • Debarghya Mukherjee, Felix Petersen, Mikhail Yurochkin, Yuekai Sun
In this paper, we leverage this connection between algorithmic fairness and distribution shifts to show that algorithmic fairness interventions can help ML models overcome distribution shifts, and that domain adaptation methods (for overcoming distribution shifts) can mitigate algorithmic biases.
1 code implementation • NeurIPS 2021 • Felix Petersen, Debarghya Mukherjee, Yuekai Sun, Mikhail Yurochkin
In this work, we propose general post-processing algorithms for individual fairness (IF).
no code implementations • 22 Feb 2021 • Debarghya Mukherjee, Moulinath Banerjee, Ya'acov Ritov
In this paper, we present a new model, SCENTS (Score Explained Non-Randomized Treatment Systems), together with a method that estimates the treatment effect at the $\sqrt{n}$ rate under fairly general forms of confounding, when the "score" variable on whose basis treatment is assigned can be explained via certain feature measurements of the individuals under study.
Methodology • Statistics Theory
no code implementations • 1 Jan 2021 • Debarghya Mukherjee, Aritra Guha, Justin Solomon, Yuekai Sun, Mikhail Yurochkin
In light of recent advances in solving the OT problem, OT distances are widely used as loss functions in minimum distance estimation.
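As a generic illustration of minimum distance estimation with an OT loss (a hedged sketch, not the estimator studied in the paper): in one dimension, the OT (Wasserstein-1) distance between two equal-size empirical distributions reduces to the mean absolute gap between sorted samples, so a location parameter can be fit by minimizing that distance over a grid. All names and constants below are illustrative:

```python
import numpy as np

def wasserstein_1d(x, y):
    """W1 between two equal-size empirical distributions:
    mean absolute gap between order statistics."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, size=500)      # observed sample from N(2, 1)

# minimum distance estimation: choose the location mu whose model sample
# mu + N(0, 1) is closest to the data in the OT (W1) distance
model_noise = rng.normal(size=500)
grid = np.linspace(0.0, 4.0, 81)
losses = [wasserstein_1d(data, mu + model_noise) for mu in grid]
mu_hat = grid[int(np.argmin(losses))]     # should land near the true value 2.0
```

In higher dimensions the same idea applies, but the OT distance no longer has a closed sorting formula and must be computed with an OT solver.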
no code implementations • NeurIPS 2021 • Subha Maity, Debarghya Mukherjee, Mikhail Yurochkin, Yuekai Sun
Many instances of algorithmic bias are caused by subpopulation shifts.
no code implementations • 28 Sep 2020 • Subha Maity, Debarghya Mukherjee, Mikhail Yurochkin, Yuekai Sun
If the algorithmic biases in an ML model are due to sampling biases in the training data, then enforcing algorithmic fairness may improve the performance of the ML model on unbiased test data.
no code implementations • 19 Jun 2020 • Debarghya Mukherjee, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.