Search Results for author: L. Elisa Celis

Found 31 papers, 16 papers with code

Fair Classification with Partial Feedback: An Exploration-Based Data-Collection Approach

no code implementations • 17 Feb 2024 • Vijay Keswani, Anay Mehrotra, L. Elisa Celis

For any exploration strategy, the approach comes with guarantees that (1) all sub-populations are explored, (2) the fraction of false positives is bounded, and (3) the trained classifier converges to a "desired" classifier.

Fairness

Bias in Evaluation Processes: An Optimization-Based Model

1 code implementation • NeurIPS 2023 • L. Elisa Celis, Amit Kumar, Anay Mehrotra, Nisheeth K. Vishnoi

We characterize the distributions that arise from our model and study the effect of the parameters on the observed distribution.

Subset Selection Based On Multiple Rankings in the Presence of Bias: Effectiveness of Fairness Constraints for Multiwinner Voting Score Functions

1 code implementation • 16 Jun 2023 • Niclas Boehmer, L. Elisa Celis, Lingxiao Huang, Anay Mehrotra, Nisheeth K. Vishnoi

We consider the problem of subset selection where one is given multiple rankings of items and the goal is to select the highest "quality" subset.

Fairness

Designing Closed-Loop Models for Task Allocation

1 code implementation • 31 May 2023 • Vijay Keswani, L. Elisa Celis, Krishnaram Kenthapadi, Matthew Lease

Instead, we find ourselves in a "closed" decision-making loop in which the same fallible human decisions we rely on in practice must also be used to guide task allocation.

Decision Making

Addressing Strategic Manipulation Disparities in Fair Classification

no code implementations • 22 May 2022 • Vijay Keswani, L. Elisa Celis

In real-world classification settings, such as loan application evaluation or content moderation on online platforms, individuals respond to classifier predictions by strategically updating their features to increase their likelihood of receiving a particular (positive) decision (at a certain cost).

Classification • Fairness

Auditing for Diversity using Representative Examples

1 code implementation • 15 Jul 2021 • Vijay Keswani, L. Elisa Celis

Our proposed algorithm uses the pairwise similarity between elements in the dataset and elements in the control set to effectively bootstrap an approximation to the disparity of the dataset.

Attribute
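The bootstrapping idea in the abstract above can be sketched in a few lines. This is a minimal illustration under simple assumptions (cosine similarity, one label per control element; function and variable names are hypothetical, not the paper's implementation): assign each dataset element the group of its most similar control-set element, then report how unevenly the groups are represented.

```python
import numpy as np

def estimate_disparity(data, control, control_labels):
    """Approximate the group disparity of `data` using only a small
    labeled control set: each element is assigned the group of its
    most (cosine-)similar control element."""
    d = data / np.linalg.norm(data, axis=1, keepdims=True)
    c = control / np.linalg.norm(control, axis=1, keepdims=True)
    sims = d @ c.T                            # pairwise similarities
    labels = np.asarray(control_labels)[sims.argmax(axis=1)]
    counts = np.bincount(labels, minlength=labels.max() + 1)
    props = counts / counts.sum()
    return props.min() / props.max()          # 1.0 = perfectly balanced
```

With a visibly diverse control set of one element per group, this returns the ratio of the least- to most-represented group, without needing protected-attribute labels for the dataset itself.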

Fair Classification with Adversarial Perturbations

1 code implementation • NeurIPS 2021 • L. Elisa Celis, Anay Mehrotra, Nisheeth K. Vishnoi

Our main contribution is an optimization framework to learn fair classifiers in this adversarial setting that comes with provable guarantees on accuracy and fairness.

Classification • Fairness +1

Mitigating Bias in Set Selection with Noisy Protected Attributes

2 code implementations • 9 Nov 2020 • Anay Mehrotra, L. Elisa Celis

Subset selection algorithms are ubiquitous in AI-driven applications, including online recruiting portals and image search engines, so it is imperative that these tools do not discriminate on the basis of protected attributes such as gender or race.

Fairness • Image Retrieval

The Effect of the Rooney Rule on Implicit Bias in the Long Term

1 code implementation • 21 Oct 2020 • L. Elisa Celis, Chris Hays, Anay Mehrotra, Nisheeth K. Vishnoi

Our main result is that, when the panel is constrained by the Rooney Rule, its implicit bias reduces roughly at a rate inversely proportional to the size of the shortlist, independent of the number of candidates; without the Rooney Rule, the rate is inversely proportional to the number of candidates.

Dialect Diversity in Text Summarization on Twitter

no code implementations • 15 Jul 2020 • Vijay Keswani, L. Elisa Celis

Discussions on Twitter involve participation from different communities with different dialects and it is often necessary to summarize a large number of posts into a representative sample to provide a synopsis.

Attribute • Extractive Summarization +2

Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees

1 code implementation • 8 Jun 2020 • L. Elisa Celis, Lingxiao Huang, Vijay Keswani, Nisheeth K. Vishnoi

We present an optimization framework for learning a fair classifier in the presence of noisy perturbations in the protected attributes.

Fairness • General Classification

Interventions for Ranking in the Presence of Implicit Bias

no code implementations • 23 Jan 2020 • L. Elisa Celis, Anay Mehrotra, Nisheeth K. Vishnoi

Implicit bias is the unconscious attribution of particular qualities (or lack thereof) to a member of a particular social group (e.g., defined by gender or race).

Assessing Social and Intersectional Biases in Contextualized Word Representations

1 code implementation • NeurIPS 2019 • Yi Chern Tan, L. Elisa Celis

In this paper, we analyze the extent to which state-of-the-art models for contextual word representations, such as BERT and GPT-2, encode biases with respect to gender, race, and intersectional identities.

Fairness • Sentence +1

Data preprocessing to mitigate bias: A maximum entropy based approach

1 code implementation • ICML 2020 • L. Elisa Celis, Vijay Keswani, Nisheeth K. Vishnoi

Unlike prior work, it can efficiently learn distributions over large domains, controllably adjust the representation rates of protected groups and achieve target fairness metrics such as statistical parity, yet remains close to the empirical distribution induced by the given dataset.

Fairness
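Statistical parity, one of the target fairness metrics named in the abstract above, has a direct empirical form: the gap between protected groups in their rates of positive outcomes. A minimal sketch of that metric (hypothetical names, not the paper's code):

```python
import numpy as np

def statistical_parity_gap(predictions, groups):
    """Largest difference between protected groups in the rate of
    positive predictions; 0 means exact statistical parity."""
    predictions, groups = np.asarray(predictions), np.asarray(groups)
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)
```

A preprocessing method like the one described would aim to produce a distribution on which a downstream classifier drives this gap toward a chosen target.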

Improved Adversarial Learning for Fair Classification

no code implementations • 29 Jan 2019 • L. Elisa Celis, Vijay Keswani

Motivated by concerns that machine learning algorithms may introduce significant bias in classification models, developing fair classifiers has become an important problem in machine learning research.

BIG-bench Machine Learning • Classification +2

Toward Controlling Discrimination in Online Ad Auctions

1 code implementation • 29 Jan 2019 • L. Elisa Celis, Anay Mehrotra, Nisheeth K. Vishnoi

To prevent this, we propose a constrained ad auction framework that maximizes the platform's revenue conditioned on ensuring that the audience seeing an advertiser's ad is distributed appropriately across sensitive types such as gender or race.

Fairness

Implicit Diversity in Image Summarization

no code implementations • 29 Jan 2019 • L. Elisa Celis, Vijay Keswani

We develop a novel approach that takes as input a visibly diverse control set of images and uses this set to select a set of images of people in response to a query.

Attribute • Image Retrieval

Balanced News Using Constrained Bandit-based Personalization

no code implementations • 24 Jun 2018 • Sayash Kapoor, Vijay Keswani, Nisheeth K. Vishnoi, L. Elisa Celis

We present a prototype for a news search engine that presents balanced viewpoints across liberal and conservative articles with the goal of de-polarizing content and allowing users to escape their filter bubble.

Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees

4 code implementations • 15 Jun 2018 • L. Elisa Celis, Lingxiao Huang, Vijay Keswani, Nisheeth K. Vishnoi

The main contribution of this paper is a new meta-algorithm for classification that takes as input a large class of fairness constraints, with respect to multiple non-disjoint sensitive attributes, and which comes with provable guarantees.

Classification • Fairness +1

An Algorithmic Framework to Control Bias in Bandit-based Personalization

no code implementations • 23 Feb 2018 • L. Elisa Celis, Sayash Kapoor, Farnood Salehi, Nisheeth K. Vishnoi

Personalization is pervasive in the online space as it leads to higher efficiency and revenue by allowing the most relevant content to be served to each user.

Fairness

Fair and Diverse DPP-based Data Summarization

1 code implementation • ICML 2018 • L. Elisa Celis, Vijay Keswani, Damian Straszak, Amit Deshpande, Tarun Kathuria, Nisheeth K. Vishnoi

Sampling methods that choose a subset of the data proportional to its diversity in the feature space are popular for data summarization.

Data Summarization • Fairness

Coordinate Descent with Bandit Sampling

no code implementations • NeurIPS 2018 • Farnood Salehi, Patrick Thiran, L. Elisa Celis

Ideally, we would update the decision variable that yields the largest decrease in the cost function.
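The greedy ideal described in this excerpt corresponds to the standard Gauss-Southwell rule, which is expensive because it inspects the full gradient at every step (the paper's bandit sampling is a cheap approximation of this choice, not shown here). A minimal sketch of the exact greedy rule on a quadratic objective, illustrative only and not the paper's method:

```python
import numpy as np

def greedy_coordinate_descent(A, b, steps=50):
    """Minimize f(x) = 0.5 * x^T A x - b^T x by repeatedly updating
    the coordinate with the largest-magnitude gradient entry
    (Gauss-Southwell), solving exactly along that coordinate."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(steps):
        grad = A @ x - b
        i = np.argmax(np.abs(grad))    # coordinate promising the largest decrease
        x[i] -= grad[i] / A[i, i]      # exact minimization along coordinate i
    return x
```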

Multiwinner Voting with Fairness Constraints

1 code implementation • 27 Oct 2017 • L. Elisa Celis, Lingxiao Huang, Nisheeth K. Vishnoi

Multiwinner voting rules are used to select a small representative subset of candidates or items from a larger set given the preferences of voters.

Attribute • Fairness

Stochastic Optimization with Bandit Sampling

no code implementations • 8 Aug 2017 • Farnood Salehi, L. Elisa Celis, Patrick Thiran

This approach for sampling datapoints is general and can be used in conjunction with any algorithm that relies on an unbiased gradient estimate; we expect it to have broad applicability beyond the specific examples explored in this work.

Stochastic Optimization
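The unbiasedness mentioned in the excerpt rests on standard importance weighting: if datapoint i is sampled with probability p_i and its gradient is reweighted by 1/(n p_i), the expectation equals the full average gradient for any sampling distribution with positive probabilities. A minimal sketch of that estimator (hypothetical names, not the paper's code):

```python
import numpy as np

def sampled_gradient(grads, probs, rng):
    """Unbiased one-sample estimate of the average of `grads`:
    draw index i with probability probs[i], then reweight the
    sampled gradient by 1 / (n * probs[i])."""
    n = len(grads)
    i = rng.choice(n, p=probs)
    return grads[i] / (n * probs[i])
```

Because the estimator stays unbiased for every choice of `probs`, a bandit algorithm is free to adapt the sampling distribution to reduce variance.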

Fair Personalization

no code implementations • 7 Jul 2017 • L. Elisa Celis, Nisheeth K. Vishnoi

Personalization is pervasive in the online space as, when combined with learning, it leads to higher efficiency and revenue by allowing the most relevant content to be served to each user.

A Distributed Learning Dynamics in Social Groups

no code implementations • 8 May 2017 • L. Elisa Celis, Peter M. Krafft, Nisheeth K. Vishnoi

Finally, we observe that our infinite population dynamics is a stochastic variant of the classic multiplicative weights update (MWU) method.
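For reference, the classic (deterministic) MWU method that this dynamics is compared against scales each expert's weight by a factor depending on its loss in each round. A minimal textbook-style sketch (hypothetical names, standard linear-update form):

```python
import numpy as np

def multiplicative_weights(losses, eta=0.5):
    """Classic MWU: after each round, multiply every expert's weight
    by (1 - eta * loss); return the final normalized weights."""
    w = np.ones(losses.shape[1])
    for round_losses in losses:        # losses: (rounds, experts), in [0, 1]
        w = w * (1.0 - eta * round_losses)
    return w / w.sum()
```

Experts that accumulate low loss retain exponentially more weight, which is the behavior the stochastic population dynamics mirrors.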

Ranking with Fairness Constraints

2 code implementations • 22 Apr 2017 • L. Elisa Celis, Damian Straszak, Nisheeth K. Vishnoi

Ranking algorithms are deployed widely to order a set of items in applications such as search engines, news feeds, and recommendation systems.

Attribute • Fairness +2

Lean From Thy Neighbor: Stochastic & Adversarial Bandits in a Network

no code implementations • 14 Apr 2017 • L. Elisa Celis, Farnood Salehi

We provide algorithms for this setting, both for stochastic and adversarial bandits, and show that their regret smoothly interpolates between the regret in the classical bandit setting and that of the full-information setting as a function of the neighbors' exploration.

Decision Making • Sociology

How to be Fair and Diverse?

no code implementations • 23 Oct 2016 • L. Elisa Celis, Amit Deshpande, Tarun Kathuria, Nisheeth K. Vishnoi

However, in doing so, a question that seems to be overlooked is whether it is possible to produce fair subsamples that are also adequately representative of the feature space of the data set, an important and classic requirement in machine learning.

BIG-bench Machine Learning • Data Summarization +2

On the Complexity of Constrained Determinantal Point Processes

no code implementations • 1 Aug 2016 • L. Elisa Celis, Amit Deshpande, Tarun Kathuria, Damian Straszak, Nisheeth K. Vishnoi

Consequently, we obtain a few algorithms of independent interest: 1) to count over the base polytope of regular matroids when there are additional (succinct) budget constraints and 2) to evaluate and compute the mixed characteristic polynomials, which played a central role in the resolution of the Kadison-Singer problem, for certain special cases.

Fairness • Point Processes

Sequential Voting Promotes Collective Discovery in Social Recommendation Systems

1 code implementation • 14 Mar 2016 • L. Elisa Celis, Peter M. Krafft, Nathan Kobe

Domains in which content quality can be defined exogenously and measured objectively are thus needed in order to better assess the design choices of social recommendation systems.

Recommendation Systems
