Search Results for author: Alexandra Chouldechova

Found 23 papers, 8 papers with code

A structured regression approach for evaluating model performance across intersectional subgroups

no code implementations • 26 Jan 2024 • Christine Herlihy, Kimberly Truong, Alexandra Chouldechova, Miroslav Dudik

Disaggregated evaluation is a central task in AI fairness assessment, with the goal of measuring an AI system's performance across different subgroups defined by combinations of demographic or other sensitive attributes.

Fairness · regression
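A minimal sketch of the disaggregated-evaluation task the abstract describes, under assumed toy data. The naive per-subgroup estimates get noisy as intersectional cells shrink; the main-effects logistic regression at the end is just one simple way to pool information across cells, not the paper's structured estimator.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 5000
    # Hypothetical evaluation data: two sensitive attributes plus per-example correctness.
    df = pd.DataFrame({
        "sex": rng.choice(["F", "M"], n),
        "race": rng.choice(["A", "B", "C"], n),
    })
    df["correct"] = (rng.random(n) < 0.8).astype(int)  # stand-in for model correctness

    # Naive disaggregated evaluation: accuracy per intersectional subgroup.
    # Small cells yield noisy estimates, which motivates structured approaches.
    naive = df.groupby(["sex", "race"])["correct"].agg(["mean", "size"])
    print(naive)

    # One structured alternative: pool information with a main-effects logistic
    # regression, then read off fitted subgroup accuracies.
    fit = smf.logit("correct ~ C(sex) + C(race)", data=df).fit(disp=0)
    grid = df[["sex", "race"]].drop_duplicates().sort_values(["sex", "race"])
    grid["pooled_acc"] = fit.predict(grid)
    print(grid)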

The Impact of Differential Feature Under-reporting on Algorithmic Fairness

no code implementations • 16 Jan 2024 • Nil-Jana Akpinar, Zachary C. Lipton, Alexandra Chouldechova

Predictive risk models in the public sector are commonly developed using administrative data that is more complete for subpopulations that rely more heavily on public services.

Decision Making · Fairness
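A toy simulation of the phenomenon the abstract describes: a risk-relevant feature is recorded less often for one group, missing values are silently coded as zero, and the fitted model's scores diverge from the groups' actual base rates. All rates and distributions here are illustrative assumptions, not the paper's setup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 20000
    group = rng.integers(0, 2, n)          # group 0 interacts more with public services
    x = rng.normal(size=n)                 # true risk-relevant feature
    y = (x + rng.normal(size=n) > 0).astype(int)

    # Differential under-reporting: the feature is recorded less often for group 1
    # (the 0.95 / 0.5 rates are assumptions for illustration).
    observed = rng.random(n) < np.where(group == 0, 0.95, 0.5)
    x_recorded = np.where(observed, x, 0.0)  # missingness silently coded as 0

    model = LogisticRegression().fit(np.c_[x_recorded], y)
    scores = model.predict_proba(np.c_[x_recorded])[:, 1]
    for g in (0, 1):
        m = group == g
        print(f"group {g}: mean score {scores[m].mean():.3f}, base rate {y[m].mean():.3f}")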

Multi-Target Multiplicity: Flexibility and Fairness in Target Specification under Resource Constraints

1 code implementation • 23 Jun 2023 • Jamelle Watson-Daniels, Solon Barocas, Jake M. Hofman, Alexandra Chouldechova

Along the way, we refine the study of single-target multiplicity by introducing notions of multiplicity that respect resource constraints -- a feature of many real-world tasks that is not captured by existing notions of predictive multiplicity.

Decision Making · Fairness
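One way to make the resource-constrained flavor of multiplicity concrete: with a fixed budget of k selections, compare who gets selected under scores trained against two different target variables. The scores below are synthetic stand-ins, not the paper's models or metrics.

    import numpy as np

    rng = np.random.default_rng(2)
    n, k = 1000, 100  # k encodes the resource constraint: only k people can be selected

    # Hypothetical scores from models trained on two different targets
    # (e.g., "cost" vs. "health need"); correlated but not identical.
    base = rng.normal(size=n)
    score_t1 = base + 0.5 * rng.normal(size=n)
    score_t2 = base + 0.5 * rng.normal(size=n)

    top1 = set(np.argsort(-score_t1)[:k])
    top2 = set(np.argsort(-score_t2)[:k])

    # Under a budget of k, multiplicity can be read off the selected sets:
    # individuals whose selection depends entirely on the choice of target.
    print("overlap fraction:", len(top1 & top2) / k)
    print("target-dependent selections:", len(top1 ^ top2))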

Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models

no code implementations • 20 Jun 2022 • Emily Black, Hadi Elzayn, Alexandra Chouldechova, Jacob Goldin, Daniel E. Ho

First, we show how the use of more flexible machine learning (classification) methods -- as opposed to simpler models -- shifts audit burdens from high-income to middle-income taxpayers.

Fairness · regression

Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders

no code implementations • 18 May 2022 • Logan Stapleton, Min Hun Lee, Diana Qing, Marya Wright, Alexandra Chouldechova, Kenneth Holstein, Zhiwei Steven Wu, Haiyi Zhu

In this work, we conducted seven design workshops with 35 stakeholders who have been impacted by the child welfare system or who work in it, to understand their beliefs and concerns around predictive risk models (PRMs) and to engage them in imagining new uses of data and technologies in the child welfare system.

Decision Making

Doubting AI Predictions: Influence-Driven Second Opinion Recommendation

no code implementations • 29 Apr 2022 • Maria De-Arteaga, Alexandra Chouldechova, Artur Dubrawski

Effective human-AI collaboration requires a system design that provides humans with meaningful ways to make sense of and critically evaluate algorithmic recommendations.

Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness

1 code implementation • 17 Feb 2022 • Kate Donahue, Alexandra Chouldechova, Krishnaram Kenthapadi

In many settings, however, the final prediction or decision of a system is under the control of a human, who uses an algorithm's output along with their own personal expertise in order to produce a combined prediction.

Fairness
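A toy model of the combined-prediction setting the abstract describes: human and algorithm each observe the truth through independent noise, and a weighted combination can beat both alone (complementarity). The noise levels and known-variance weighting are assumptions for illustration, not the paper's framework.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 50000
    truth = rng.normal(size=n)

    # Human and algorithm each see the truth through independent noise.
    algo = truth + rng.normal(scale=0.8, size=n)
    human = truth + rng.normal(scale=1.0, size=n)

    def mse(pred):
        return float(np.mean((pred - truth) ** 2))

    # Inverse-variance weighting of the two estimates (variances assumed known here).
    w = (1 / 0.8**2) / (1 / 0.8**2 + 1 / 1.0**2)
    combined = w * algo + (1 - w) * human

    print(f"algorithm alone: {mse(algo):.3f}")
    print(f"human alone:     {mse(human):.3f}")
    print(f"combined:        {mse(combined):.3f}  # beats both: complementarity")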

The Impact of Algorithmic Risk Assessments on Human Predictions and its Analysis via Crowdsourcing Studies

1 code implementation • 3 Sep 2021 • Riccardo Fogliato, Alexandra Chouldechova, Zachary Lipton

As algorithmic risk assessment instruments (RAIs) are increasingly adopted to assist decision makers, their predictive performance and potential to promote inequity have come under scrutiny.

The effect of differential victim crime reporting on predictive policing systems

1 code implementation • 30 Jan 2021 • Nil-Jana Akpinar, Maria De-Arteaga, Alexandra Chouldechova

Our analysis is based on a simulation patterned after district-level victimization and crime reporting survey data for Bogotá, Colombia.

Fairness
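A heavily simplified version of the feedback loop the paper studies: two districts with identical true crime rates but different victim reporting rates, and a hot-spot-style policy that patrols wherever the recorded data show more incidents. All rates below are assumptions; the paper's simulation is calibrated to survey data for Bogotá.

    import numpy as np

    rng = np.random.default_rng(4)
    true_rate = np.array([10.0, 10.0])   # identical true crime rates in two districts
    report_rate = np.array([0.9, 0.5])   # differential victim reporting (assumed)
    dataset = np.zeros(2)                # recorded incidents per district

    for day in range(200):
        crimes = rng.poisson(true_rate)
        dataset += rng.binomial(crimes, report_rate)
        # Patrol the district with more recorded incidents; patrols also discover
        # some otherwise-unreported crimes, feeding back into the dataset.
        patrolled = int(np.argmax(dataset))
        dataset[patrolled] += rng.binomial(crimes[patrolled], 0.2)

    print("recorded incidents per district:", dataset)
    # Despite equal true rates, the high-reporting district accumulates far more
    # recorded crime, and under this policy nearly all of the patrols.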

Leveraging Expert Consistency to Improve Algorithmic Decision Support

no code implementations • 24 Jan 2021 • Maria De-Arteaga, Vincent Jeanselme, Artur Dubrawski, Alexandra Chouldechova

However, there is frequently a gap between decision objectives and what is captured in the observed outcomes used as labels to train ML models.

BIG-bench Machine Learning

Characterizing Fairness Over the Set of Good Models Under Selective Labels

1 code implementation • 2 Jan 2021 • Amanda Coston, Ashesh Rambachan, Alexandra Chouldechova

We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance, or "the set of good models."

Fairness
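A crude empirical probe of the "set of good models" idea: among models with near-identical overall accuracy, a fairness property (here, the gap in selection rates between groups) can still vary. This brute-force sweep over random seeds is only a stand-in; the paper characterizes the set analytically and handles selective labels.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    n = 4000
    g = rng.integers(0, 2, n)                      # sensitive attribute
    X = np.c_[rng.normal(size=(n, 5)), g]
    y = (X[:, 0] + 0.5 * g + rng.normal(size=n) > 0).astype(int)
    Xtr, Xte, ytr, yte, gtr, gte = train_test_split(X, y, g, random_state=0)

    results = []
    for seed in range(20):                          # crude stand-in for the model class
        m = RandomForestClassifier(n_estimators=50, random_state=seed).fit(Xtr, ytr)
        acc = m.score(Xte, yte)
        sel = [m.predict(Xte[gte == v]).mean() for v in (0, 1)]
        results.append((acc, abs(sel[0] - sel[1])))

    best = max(a for a, _ in results)
    good = [d for a, d in results if a >= best - 0.01]   # "the set of good models"
    print(f"disparity across near-equally-accurate models: "
          f"min={min(good):.3f}, max={max(good):.3f}")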

Counterfactual Risk Assessments, Evaluation, and Fairness

1 code implementation • 30 Aug 2019 • Amanda Coston, Alan Mishler, Edward H. Kennedy, Alexandra Chouldechova

These tools thus reflect risk under the historical policy, rather than under the different decision options that the tool is intended to inform.

counterfactual · Decision Making +1
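A minimal sketch of the contrast the abstract draws. A model fit to observed outcomes predicts risk under the historical policy; a counterfactual model of risk under "no intervention" can instead be fit on untreated cases reweighted by inverse propensity. The data-generating process and the plain IPW estimator are illustrative assumptions; the paper develops doubly robust estimators.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(6)
    n = 20000
    X = rng.normal(size=(n, 3))
    # Historical policy: intervention more likely at higher observed risk.
    a = rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))
    y0 = X[:, 0] + rng.normal(size=n) > 0.5         # outcome if NOT treated
    y = np.where(a, False, y0)                      # here, treatment averts the outcome

    # Naive model: observed outcomes on X -> risk under the OLD policy.
    naive = LogisticRegression().fit(X, y)

    # Counterfactual risk under "no treatment": fit on untreated cases only,
    # weighted by 1/(1 - e(X)) to undo the policy's selection (plain IPW).
    prop = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]
    cf = LogisticRegression().fit(X[~a], y0[~a], sample_weight=1 / (1 - prop[~a]))

    i = X[:, 0].argmax()                            # a high-risk individual
    print("historical-policy risk:", naive.predict_proba(X[i:i+1])[0, 1].round(3))
    print("risk if untreated:     ", cf.predict_proba(X[i:i+1])[0, 1].round(3))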

What's in a Name? Reducing Bias in Bios without Access to Protected Attributes

no code implementations • NAACL 2019 • Alexey Romanov, Maria De-Arteaga, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Anna Rumshisky, Adam Tauman Kalai

In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name.

Word Embeddings
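One simplified reading of the penalty the abstract describes: add to the task loss a term penalizing covariance between the predicted probability of the true occupation and each dimension of the name embedding. The function name, the squared-covariance form, and the weight lam are assumptions for illustration; the paper defines its losses differently.

    import numpy as np

    def decorrelation_penalty(p_true, name_emb):
        """Penalize covariance between the predicted probability of the true
        occupation (p_true, shape [n]) and each name-embedding dimension
        (name_emb, shape [n, d]). A simplified illustration only."""
        p = p_true - p_true.mean()
        E = name_emb - name_emb.mean(axis=0)
        cov = E.T @ p / len(p)                   # covariance with each dimension
        return float(np.sum(cov ** 2))

    rng = np.random.default_rng(7)
    p_true = rng.random(256)                     # predicted prob. of true occupation
    name_emb = rng.normal(size=(256, 50))        # word embeddings of names
    bce = -np.mean(np.log(np.clip(p_true, 1e-8, 1)))  # stand-in task loss
    lam = 1.0                                    # penalty weight (assumed)
    loss = bce + lam * decorrelation_penalty(p_true, name_emb)
    print(round(loss, 4))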

Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting

4 code implementations • 27 Jan 2019 • Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, Adam Tauman Kalai

We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes in people's lives.

Classification · General Classification
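A standard way to quantify the kind of bias this study measures is the gender gap in true positive rate (recall) per occupation. A sketch on hypothetical prediction data:

    import pandas as pd

    # Hypothetical classifier outputs on bios whose true occupation is "surgeon".
    df = pd.DataFrame({
        "gender":     ["F", "F", "F", "F", "M", "M", "M", "M"],
        "occupation": ["surgeon"] * 8,
        "predicted":  [1, 0, 0, 1, 1, 1, 0, 1],   # 1 = classified as surgeon
    })

    # TPR per (occupation, gender), then the gender gap in TPR.
    tpr = (df.groupby(["occupation", "gender"])["predicted"]
             .mean()
             .unstack("gender"))
    tpr["tpr_gap"] = tpr["F"] - tpr["M"]
    print(tpr)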

The Frontiers of Fairness in Machine Learning

no code implementations • 20 Oct 2018 • Alexandra Chouldechova, Aaron Roth

The last few years have seen an explosion of academic and popular interest in algorithmic fairness.

BIG-bench Machine Learning · Fairness

Learning under selective labels in the presence of expert consistency

no code implementations • 2 Jul 2018 • Maria De-Arteaga, Artur Dubrawski, Alexandra Chouldechova

We explore the problem of learning under selective labels in the context of algorithm-assisted decision making.

Data Augmentation · Decision Making +1

Does mitigating ML's impact disparity require treatment disparity?

1 code implementation • NeurIPS 2018 • Zachary C. Lipton, Alexandra Chouldechova, Julian McAuley

Following related work in law and policy, two notions of disparity have come to shape the study of fairness in algorithmic decision-making.

Decision Making · Fairness
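The two notions of disparity in the abstract can be made concrete on toy data: impact disparity is a gap in selection rates across groups under a group-blind rule, while treatment disparity means using the protected attribute itself, e.g., group-specific thresholds, to close that gap. All distributions and thresholds below are assumptions.

    import numpy as np

    rng = np.random.default_rng(8)
    n = 10000
    g = rng.integers(0, 2, n)
    # Scores whose distribution differs by group (via proxy features; assumed).
    score = rng.normal(loc=np.where(g == 1, -0.3, 0.3), size=n)

    def selection_rates(decision):
        return [round(float(decision[g == v].mean()), 3) for v in (0, 1)]

    # Group-blind policy: one threshold for everyone -> impact disparity.
    print("single threshold:   ", selection_rates(score > 0.0))

    # Treatment disparity: group-specific thresholds chosen so both groups
    # are selected at (roughly) the same rate.
    t = [np.quantile(score[g == v], 0.7) for v in (0, 1)]
    per_group = score > np.where(g == 1, t[1], t[0])
    print("per-group thresholds:", selection_rates(per_group))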

Fairer and more accurate, but for whom?

no code implementations • 30 Jun 2017 • Alexandra Chouldechova, Max G'Sell

Complex statistical machine learning models are increasingly being used or considered for use in high-stakes decision-making pipelines in domains such as financial services, health care, criminal justice and human services.

Decision Making · Fairness

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments

no code implementations • 28 Feb 2017 • Alexandra Chouldechova

Recidivism prediction instruments (RPIs) provide decision makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time.

Fairness
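The paper's central incompatibility result rests on an identity that holds for any binary classifier applied to a population with prevalence p:

    FPR = \frac{p}{1-p} \cdot \frac{1 - \mathrm{PPV}}{\mathrm{PPV}} \cdot (1 - FNR)

If two groups have different prevalences p, an instrument that is equally well calibrated in both (equal PPV) cannot also equalize the false positive and false negative rates across them.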

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments

no code implementations • 24 Oct 2016 • Alexandra Chouldechova

Recidivism prediction instruments provide decision makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time.

Fairness

Generalized Additive Model Selection

no code implementations • 11 Jun 2015 • Alexandra Chouldechova, Trevor Hastie

We introduce GAMSEL (Generalized Additive Model Selection), a penalized likelihood approach for fitting sparse generalized additive models in high dimension.

Additive models · Model Selection
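GAMSEL itself is available as the R package gamsel. A crude Python approximation of the underlying idea — expand each feature in a spline basis and apply a sparsity-inducing penalty so whole features can drop out — is sketched below; note that a plain lasso is not GAMSEL's penalty, which is a group-structured penalty letting each feature enter as zero, linear, or nonlinear.

    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import SplineTransformer

    rng = np.random.default_rng(9)
    n, p = 500, 20                     # many features, few true signals
    X = rng.normal(size=(n, p))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)

    # Spline-expand every feature, then rely on an L1 penalty for sparsity
    # (a rough stand-in for GAMSEL's group penalty).
    model = make_pipeline(SplineTransformer(n_knots=6), Lasso(alpha=0.05))
    model.fit(X, y)

    coefs = model[-1].coef_.reshape(p, -1)    # one row of basis coefficients per feature
    selected = np.flatnonzero(np.abs(coefs).sum(axis=1) > 1e-8)
    print("selected features:", selected)     # ideally features 0 and 1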
