no code implementations • 19 Dec 2023 • Avinandan Bose, Mihaela Curmei, Daniel L. Jiang, Jamie Morgenstern, Sarah Dean, Lillian J. Ratliff, Maryam Fazel
(ii) Suboptimal Local Solutions: The total loss (sum of loss functions across all users and all services) landscape is not convex even if the individual losses on a single service are convex, making it likely for the learning dynamics to get stuck in local minima.
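A toy illustration of one source of this non-convexity (hypothetical, not the paper's model): when each user patronizes whichever service serves them best, the effective per-user loss is a minimum of convex functions, which is generally non-convex.

```python
import numpy as np

# Toy illustration (hypothetical, not the paper's model): a user picks
# whichever of two services has lower loss, so the effective loss is
# min(f1, f2) -- a minimum of convex functions, which need not be convex.
def f1(theta):  # convex loss of service 1
    return (theta - 1.0) ** 2

def f2(theta):  # convex loss of service 2
    return (theta + 1.0) ** 2

def effective_loss(theta):
    return min(f1(theta), f2(theta))

# Check convexity via the midpoint inequality.
a, b = -1.0, 1.0
mid = effective_loss((a + b) / 2)                    # = 1.0 at theta = 0
chord = (effective_loss(a) + effective_loss(b)) / 2  # = 0.0
print(mid > chord)  # True: midpoint lies above the chord, so not convex
```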
no code implementations • 13 Dec 2023 • Romain Camilleri, Andrew Wagenmaker, Jamie Morgenstern, Lalit Jain, Kevin Jamieson
In this work, we address the challenges of reducing bias and improving accuracy in data-scarce environments, where the cost of collecting labeled data prohibits the use of large, labeled datasets.
no code implementations • 15 Sep 2022 • Ira Globus-Harris, Varun Gupta, Christopher Jung, Michael Kearns, Jamie Morgenstern, Aaron Roth
We show how to take a regression function $\hat{f}$ that is appropriately "multicalibrated" and efficiently post-process it into an approximately error-minimizing classifier satisfying a large variety of fairness constraints.
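A minimal sketch of the threshold-style post-processing idea, assuming calibrated scores and a single demographic-parity-style constraint (the paper handles a much richer constraint family; names here are illustrative):

```python
import numpy as np

# Minimal sketch (assumptions: scores approximate P(y=1|x) on each group,
# and the only constraint is an equal positive rate). Not the paper's
# algorithm, just an illustration of threshold-based post-processing.
def postprocess(scores, groups, target_rate):
    """Pick a per-group threshold so each group's positive rate
    matches target_rate, then classify by thresholding."""
    yhat = np.zeros_like(scores, dtype=int)
    for g in np.unique(groups):
        s_g = scores[groups == g]
        # threshold at the (1 - target_rate) quantile of the group's scores
        t_g = np.quantile(s_g, 1.0 - target_rate)
        yhat[groups == g] = (s_g >= t_g).astype(int)
    return yhat

rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)
groups = rng.integers(0, 2, size=1000)
yhat = postprocess(scores, groups, target_rate=0.3)
for g in (0, 1):
    print(g, yhat[groups == g].mean())  # both close to 0.3
```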
1 code implementation • 7 Jul 2022 • Saba Ahmadi, Pranjal Awasthi, Samir Khuller, Matthäus Kleindessner, Jamie Morgenstern, Pattara Sukprasert, Ali Vakilian
In this paper, we propose a natural notion of individual preference (IP) stability for clustering, which asks that every data point, on average, is closer to the points in its own cluster than to the points in any other cluster.
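The IP stability condition is easy to state in code; here is a minimal checker for the average-distance version described above (an illustrative helper, not the paper's implementation):

```python
import numpy as np

# Check IP stability: every point should be closer, on average, to the
# points in its own cluster than to the points in any other cluster.
def is_ip_stable(X, labels):
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        own = labels[i]
        mask_own = (labels == own)
        mask_own[i] = False  # exclude the point itself
        if mask_own.sum() == 0:
            continue  # singleton cluster: no within-cluster distances
        avg_own = d[mask_own].mean()
        for c in np.unique(labels):
            if c == own:
                continue
            if avg_own > d[labels == c].mean():
                return False  # point i would prefer cluster c
    return True
```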
no code implementations • 22 Jun 2022 • Romain Camilleri, Andrew Wagenmaker, Jamie Morgenstern, Lalit Jain, Kevin Jamieson
To our knowledge, our results are the first on best-arm identification in linear bandits with safety constraints.
1 code implementation • 6 Jun 2022 • Sarah Dean, Mihaela Curmei, Lillian J. Ratliff, Jamie Morgenstern, Maryam Fazel
We study the participation and retraining dynamics that arise when both the learners and sub-populations of users are risk-reducing, a property that covers a broad class of updates including gradient descent and multiplicative weights.
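Two of the updates named above, in sketch form (illustrative parameterizations, not the paper's exact dynamics); each step weakly decreases the corresponding risk:

```python
import numpy as np

# Gradient descent: move parameters against the loss gradient.
def gradient_descent_step(theta, grad, lr=0.1):
    return theta - lr * grad(theta)

# Multiplicative weights: downweight options with high observed loss.
def multiplicative_weights_step(weights, losses, eta=0.1):
    w = weights * np.exp(-eta * losses)
    return w / w.sum()  # renormalize to a distribution
```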
no code implementations • 25 May 2022 • Sarah Dean, Jamie Morgenstern
We use a similar model of preference dynamics, where an individual's preferences move towards content they consume and enjoy, and away from content they consume and dislike.
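A minimal sketch of such a dynamic, with an assumed linear-drift parameterization (the step size and update form are illustrative):

```python
import numpy as np

# Preferences drift toward enjoyed content and away from disliked content.
# pref and content are vectors in the same embedding space (assumption).
def update_preferences(pref, content, enjoyed, step=0.05):
    direction = content - pref
    return pref + step * direction if enjoyed else pref - step * direction
```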
1 code implementation • 11 Feb 2022 • Pranjal Awasthi, Christopher Jung, Jamie Morgenstern
Suppose we are given two datasets: a labeled dataset and an unlabeled dataset, where the unlabeled dataset also has additional auxiliary features not present in the first dataset.
no code implementations • 4 Feb 2022 • Bhuvesh Kumar, Jamie Morgenstern, Okke Schrijvers
We present four main results: 1) for the episodic setting, we give sample complexity bounds for the spend rate prediction problem: given $n$ samples from each episode, with high probability we have $|\widehat{\rho}_e - \rho_e| \leq \tilde{O}(\frac{1}{n^{1/3}})$, where $\rho_e$ is the optimal spend rate for the episode and $\widehat{\rho}_e$ is the estimate from our algorithm; 2) we extend the algorithm of Balseiro and Gur (2017) to operate on varying, approximate spend rates, and show that the resulting combined system of optimal spend rate estimation and online pacing for episodic settings has regret that vanishes in the number of historical samples $n$ and the number of rounds $T$; 3) for non-episodic but slowly changing distributions, we show that the same approach approximates the optimal bidding strategy up to a factor that depends on the rate of change of the distributions; and 4) we provide experiments showing that our algorithm outperforms both static spend plans and non-pacing across a wide variety of settings.
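As a toy illustration of the plug-in flavor of result 1) (a hypothetical estimator, not the paper's algorithm), one can estimate an episode's spend rate by averaging sampled per-round spends and watch the error shrink with $n$:

```python
import numpy as np

# Hypothetical plug-in estimator: average the per-round spend observed in
# n sampled rounds from the episode. Only a sketch of the "estimate from
# samples" idea above, not the paper's procedure or rate.
def estimate_spend_rate(sampled_spends):
    return float(np.mean(sampled_spends))

rng = np.random.default_rng(1)
true_rate = 0.4
for n in (10, 100, 1000):
    samples = rng.uniform(0, 2 * true_rate, size=n)  # mean = true_rate
    est = estimate_spend_rate(samples)
    print(n, abs(est - true_rate))  # error shrinks as n grows
```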
no code implementations • 27 Aug 2021 • Siddarth Srinivasan, Jamie Morgenstern
The revenue raised in the submission stage auction is used to pay reviewers based on the quality of their reviews in the reviewing stage.
no code implementations • 16 Feb 2021 • Pranjal Awasthi, Alex Beutel, Matthaeus Kleindessner, Jamie Morgenstern, Xuezhi Wang
A commonly used alternative is to separately train an attribute classifier on data that includes sensitive attribute information, and then use it later in the ML pipeline to evaluate the bias of a given classifier.
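A sketch of that pipeline with illustrative names and synthetic data (the bias metric here is a simple demographic-parity-style gap; the paper studies when such proxy-based estimates are reliable):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Step 1: train an attribute classifier where the sensitive attribute is
# observed (synthetic stand-in data below).
rng = np.random.default_rng(0)
X_attr = rng.normal(size=(500, 5))
attrs = (X_attr[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)
attr_clf = LogisticRegression().fit(X_attr, attrs)

# Step 2: use its predicted attributes to estimate a bias metric for some
# downstream model's predictions.
def estimate_parity_gap(X_eval, downstream_preds):
    a_hat = attr_clf.predict(X_eval)  # predicted sensitive attribute
    return abs(downstream_preds[a_hat == 1].mean()
               - downstream_preds[a_hat == 0].mean())

X_eval = rng.normal(size=(200, 5))
preds = rng.integers(0, 2, 200)  # stand-in downstream predictions
print(estimate_parity_gap(X_eval, preds))
```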
1 code implementation • 11 Jun 2020 • Jacob Abernethy, Pranjal Awasthi, Matthäus Kleindessner, Jamie Morgenstern, Chris Russell, Jie Zhang
We propose simple active sampling and reweighting strategies for optimizing min-max fairness that can be applied to any classification or regression model learned via loss minimization.
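One plausible form such a reweighting strategy could take (an exponentiated-gradient-style sketch under assumed interfaces `fit` and `group_loss`; not the paper's exact procedure):

```python
import numpy as np

# Repeatedly fit under group weights, then shift weight toward the group
# with the highest loss, driving down the maximum group loss.
def minmax_reweight(fit, group_loss, groups, n_rounds=50, eta=0.5):
    uniq = np.unique(groups)
    w = np.ones(len(uniq)) / len(uniq)      # one weight per group
    for _ in range(n_rounds):
        model = fit(w)                       # fit under current weights
        losses = np.array([group_loss(model, g) for g in uniq])
        w = w * np.exp(eta * losses)         # upweight poorly served groups
        w = w / w.sum()
    return model
```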
no code implementations • 8 Jun 2020 • Matthäus Kleindessner, Pranjal Awasthi, Jamie Morgenstern
A common distinction in fair machine learning, in particular in fair classification, is between group fairness and individual fairness.
no code implementations • 9 Feb 2020 • Margaret Mitchell, Dylan Baker, Nyalleng Moorosi, Emily Denton, Ben Hutchinson, Alex Hanna, Timnit Gebru, Jamie Morgenstern
The ethical concept of fairness has recently been applied in machine learning (ML) settings to describe a wide range of constraints and objectives.
2 code implementations • 7 Jun 2019 • Pranjal Awasthi, Matthäus Kleindessner, Jamie Morgenstern
We identify conditions on the perturbation that guarantee that the bias of a classifier is reduced even by running equalized odds with the perturbed attribute.
1 code implementation • 10 Apr 2019 • Ángel Alexander Cabrera, Will Epperson, Fred Hohman, Minsuk Kahng, Jamie Morgenstern, Duen Horng Chau
We present FairVis, a mixed-initiative visual analytics system that integrates a novel subgroup discovery technique for users to audit the fairness of machine learning models.
2 code implementations • NeurIPS 2019 • Uthaipon Tantipongpipat, Samira Samadi, Mohit Singh, Jamie Morgenstern, Santosh Vempala
Our main result is an exact polynomial-time algorithm for the two-criterion dimensionality reduction problem when the two criteria are increasing concave functions.
1 code implementation • 21 Feb 2019 • Benjamin Wilson, Judy Hoffman, Jamie Morgenstern
In this work, we investigate whether state-of-the-art object detection systems have equitable predictive performance on pedestrians with different skin tones.
1 code implementation • 24 Jan 2019 • Matthäus Kleindessner, Samira Samadi, Pranjal Awasthi, Jamie Morgenstern
Given the widespread popularity of spectral clustering (SC) for partitioning graph data, we study a version of constrained SC in which we try to incorporate the fairness notion proposed by Chierichetti et al. (2017).
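A hedged reconstruction of what constrained SC of this flavor can look like: restrict the spectral embedding to the nullspace of linear group-balance constraints, then cluster as usual. The details below are my sketch, not necessarily the paper's exact algorithm, and it assumes at least two groups:

```python
import numpy as np
from scipy.linalg import null_space, eigh
from sklearn.cluster import KMeans

def fair_spectral_clustering(W, groups, k):
    n = W.shape[0]
    d = W.sum(axis=1)
    L = np.diag(d) - W                      # unnormalized graph Laplacian
    # One linear constraint column per group (minus one): each cluster
    # should contain each group in its data-set-wide proportion.
    uniq = np.unique(groups)
    F = np.stack([(groups == g).astype(float) - (groups == g).mean()
                  for g in uniq[:-1]], axis=1)
    Z = null_space(F.T)                     # basis of the constraint set
    vals, vecs = eigh(Z.T @ L @ Z)          # eigenvalues in ascending order
    H = Z @ vecs[:, :k]                     # constrained spectral embedding
    return KMeans(n_clusters=k, n_init=10).fit_predict(H)
```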
1 code implementation • 24 Jan 2019 • Matthäus Kleindessner, Pranjal Awasthi, Jamie Morgenstern
In data summarization we want to choose $k$ prototypes in order to summarize a data set.
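For reference, the classical greedy (Gonzalez-style) k-center routine for choosing prototypes, which fair variants typically build on; this baseline sketch is illustrative and ignores fairness:

```python
import numpy as np

# Greedy k-center: pick each new prototype as the point farthest from the
# prototypes chosen so far (2-approximation for the k-center objective).
def greedy_k_prototypes(X, k):
    chosen = [0]                             # arbitrary first prototype
    d = np.linalg.norm(X - X[0], axis=1)     # distance to nearest prototype
    for _ in range(k - 1):
        nxt = int(np.argmax(d))
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return chosen
```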
1 code implementation • NeurIPS 2018 • Samira Samadi, Uthaipon Tantipongpipat, Jamie Morgenstern, Mohit Singh, Santosh Vempala
This motivates our study of dimensionality reduction techniques which maintain similar fidelity for A and B.
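The motivating quantity is easy to compute: the average reconstruction error a single projection induces on each group. A minimal sketch, assuming `proj` has orthonormal columns and A, B are the two groups' data matrices:

```python
import numpy as np

# Average squared reconstruction error of projecting X onto the subspace
# spanned by proj's columns; a large gap between the two groups means the
# projection is less faithful for one of them.
def group_reconstruction_error(X, proj):
    X_hat = X @ proj @ proj.T               # project onto the subspace
    return np.mean(np.sum((X - X_hat) ** 2, axis=1))

# proj: d x r matrix with orthonormal columns, e.g. top-r right singular
# vectors of the pooled data; compare group_reconstruction_error(A, proj)
# against group_reconstruction_error(B, proj).
```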
21 code implementations • 23 Mar 2018 • Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, Kate Crawford
The machine learning community currently has no standardized process for documenting datasets, which can lead to severe consequences in high-stakes domains.
no code implementations • NeurIPS 2018 • Sampath Kannan, Jamie Morgenstern, Aaron Roth, Bo Waggoner, Zhiwei Steven Wu
Bandit learning is characterized by the tension between long-term exploration and short-term exploitation.
1 code implementation • 7 Jun 2017 • Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, Aaron Roth
We introduce a flexible family of fairness regularizers for (linear and logistic) regression problems.
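One member of such a family, in sketch form (my rendering of a cross-group, label-similarity-weighted penalty; the exact weighting is illustrative, not the paper's definition):

```python
import numpy as np

# Penalize prediction gaps between pairs drawn from the two groups,
# weighted by how similar their labels are.
def fairness_penalty(w, X_a, y_a, X_b, y_b):
    preds_a = X_a @ w
    preds_b = X_b @ w
    sim = np.exp(-(y_a[:, None] - y_b[None, :]) ** 2)   # pair weights
    gaps = (preds_a[:, None] - preds_b[None, :]) ** 2   # prediction gaps
    return np.mean(sim * gaps)

# Regularized regression objective: squared error plus the penalty.
def objective(w, X, y, X_a, y_a, X_b, y_b, lam=1.0):
    return (np.mean((X @ w - y) ** 2)
            + lam * fairness_penalty(w, X_a, y_a, X_b, y_b))
```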
no code implementations • ICML 2017 • Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, Aaron Roth
We initiate the study of fairness in reinforcement learning, where the actions of a learning algorithm may affect its environment and future rewards.
no code implementations • 29 Oct 2016 • Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, Aaron Roth
We study fairness in linear bandit problems.
no code implementations • NeurIPS 2016 • Matthew Joseph, Michael Kearns, Jamie Morgenstern, Aaron Roth
This tight connection allows us to provide a provably fair algorithm for the linear contextual bandit problem with a polynomial dependence on the dimension, and to show (for a different class of functions) a worst-case exponential gap in regret between fair and non-fair learning algorithms.
no code implementations • 11 Apr 2016 • Jamie Morgenstern, Tim Roughgarden
We present a general framework for proving polynomial sample complexity bounds for the problem of learning from samples the best auction in a class of "simple" auctions.
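For one concrete "simple" class, single-item second-price auctions with an anonymous reserve, the sample-based approach amounts to empirical revenue maximization over candidate reserves (an illustrative sketch, not the paper's general framework):

```python
import numpy as np

# Pick the reserve maximizing average revenue over sampled valuation
# profiles. samples: iterable of per-auction bidder-valuation arrays.
def best_empirical_reserve(samples, candidate_reserves):
    def revenue(r, vals):
        vals = np.sort(vals)[::-1]
        if vals[0] < r:
            return 0.0               # no bidder meets the reserve: no sale
        return max(vals[1], r)       # winner pays max(second bid, reserve)
    avg_rev = [np.mean([revenue(r, v) for v in samples])
               for r in candidate_reserves]
    return candidate_reserves[int(np.argmax(avg_rev))]
```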
no code implementations • 3 Nov 2015 • Justin Hsu, Jamie Morgenstern, Ryan Rogers, Aaron Roth, Rakesh Vohra
Second, we provide learning-theoretic results that show that such prices are robust to changing the buyers in the market, so long as all buyers are sampled from the same (unknown) distribution.