1 code implementation • 26 Feb 2023 • Matthäus Kleindessner, Michele Donini, Chris Russell, Muhammad Bilal Zafar
We revisit the problem of fair principal component analysis (PCA), where the goal is to learn the best low-rank linear approximation of the data that obfuscates demographic information.
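One simple strategy in this spirit, sketched below under illustrative assumptions (a binary demographic attribute, made-up data): remove the direction separating the two group-conditional means before running standard PCA, so the learned subspace carries little linear information about group membership. This is a minimal sketch, not necessarily the paper's exact algorithm.

```python
import numpy as np

def fair_pca(X, groups, n_components):
    """Sketch of a simple fair-PCA heuristic: project out the direction
    separating the two group-conditional means, then run standard PCA
    on the projected data."""
    X = X - X.mean(axis=0)                       # center the data
    mu_diff = X[groups == 0].mean(axis=0) - X[groups == 1].mean(axis=0)
    mu_diff /= np.linalg.norm(mu_diff)
    # Project each point onto the orthogonal complement of mu_diff.
    X_proj = X - np.outer(X @ mu_diff, mu_diff)
    # Standard PCA via SVD of the projected data.
    _, _, Vt = np.linalg.svd(X_proj, full_matrices=False)
    components = Vt[:n_components]               # top principal directions
    return X_proj @ components.T, components

# Toy usage with random data and random binary group labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
groups = rng.integers(0, 2, size=200)
Z, W = fair_pca(X, groups, n_components=2)
print(Z.shape)  # (200, 2)
```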
1 code implementation • 7 Jul 2022 • Saba Ahmadi, Pranjal Awasthi, Samir Khuller, Matthäus Kleindessner, Jamie Morgenstern, Pattara Sukprasert, Ali Vakilian
In this paper, we propose a natural notion of individual preference (IP) stability for clustering, which asks that every data point, on average, is closer to the points in its own cluster than to the points in any other cluster.
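The definition translates directly into a checker. A minimal sketch (data and function names are illustrative) that flags every point violating IP stability in a given clustering:

```python
import numpy as np
from scipy.spatial.distance import cdist

def ip_stability_violations(X, labels):
    """For each point, compare its average distance to the other points in
    its own cluster against its average distance to each other cluster;
    return the indices of points that violate IP stability."""
    D = cdist(X, X)
    clusters = np.unique(labels)
    violators = []
    for i in range(len(X)):
        own = labels == labels[i]
        own[i] = False                           # exclude the point itself
        if own.sum() == 0:
            continue                             # treat singletons as stable
        own_avg = D[i, own].mean()
        for c in clusters:
            if c == labels[i]:
                continue
            if D[i, labels == c].mean() < own_avg:
                violators.append(i)
                break
    return violators

# Toy usage: check a k-means clustering of Gaussian blobs.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(len(ip_stability_violations(X, labels)), "violating points")
```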
1 code implementation • 9 Apr 2022 • Michael Lohaus, Matthäus Kleindessner, Krishnaram Kenthapadi, Francesco Locatello, Chris Russell
Based on this observation, we investigate an alternative fairness approach: we add a second classification head to the network to explicitly predict the protected attribute (such as race or gender) alongside the original task.
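A minimal PyTorch sketch of such a two-head architecture (layer sizes, loss weighting, and names are illustrative assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    """A shared encoder with two classification heads: one for the original
    task, one that explicitly predicts the protected attribute."""
    def __init__(self, in_dim, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.task_head = nn.Linear(hidden_dim, 2)   # original task label
        self.attr_head = nn.Linear(hidden_dim, 2)   # protected attribute

    def forward(self, x):
        z = self.encoder(x)
        return self.task_head(z), self.attr_head(z)

def joint_loss(task_logits, attr_logits, y, a, lam=1.0):
    """Both heads trained with cross-entropy; lam (an illustrative
    hyperparameter) weights the protected-attribute head."""
    ce = nn.functional.cross_entropy
    return ce(task_logits, y) + lam * ce(attr_logits, a)
```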
no code implementations • CVPR 2022 • Dominik Zietlow, Michael Lohaus, Guha Balakrishnan, Matthäus Kleindessner, Francesco Locatello, Bernhard Schölkopf, Chris Russell
Algorithmic fairness is frequently motivated in terms of a trade-off in which overall performance is decreased so as to improve performance on disadvantaged groups where the algorithm would otherwise be less accurate.
no code implementations • 8 Mar 2022 • Paul Rolland, Volkan Cevher, Matthäus Kleindessner, Chris Russell, Bernhard Schölkopf, Dominik Janzing, Francesco Locatello
This paper demonstrates how to recover causal graphs from the score of the data distribution in non-linear additive (Gaussian) noise models.
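The key observation is that in such models a variable is a leaf exactly when the corresponding diagonal entry of the score's Jacobian, $\partial^2_{jj} \log p(x)$, is constant across samples, so one can peel off leaves by picking the variable with the smallest variance of that entry. A rough sketch follows; the paper uses a more careful score estimator, whereas this sketch substitutes a crude kernel-density estimate with finite differences and is only illustrative on small, low-dimensional toy data:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def score_second_derivative(X, j, h=1e-2, bandwidth=0.5):
    """Estimate d/dx_j s_j(x) = d^2/dx_j^2 log p(x) at every sample by
    central finite differences on a kernel-density log-density estimate."""
    kde = KernelDensity(bandwidth=bandwidth).fit(X)
    Xp, Xm = X.copy(), X.copy()
    Xp[:, j] += h
    Xm[:, j] -= h
    lp = kde.score_samples(X)
    return (kde.score_samples(Xp) - 2 * lp + kde.score_samples(Xm)) / h**2

def topological_order(X):
    """Repeatedly identify a leaf as the variable whose diagonal score-Jacobian
    entry has the smallest variance across samples, remove it, and recurse."""
    remaining = list(range(X.shape[1]))
    order = []
    Xr = X.copy()
    while remaining:
        variances = [np.var(score_second_derivative(Xr, j))
                     for j in range(Xr.shape[1])]
        leaf = int(np.argmin(variances))
        order.append(remaining.pop(leaf))
        Xr = np.delete(Xr, leaf, axis=1)
    return order[::-1]   # roots first
```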
no code implementations • NeurIPS 2021 • Frederik Träuble, Julius von Kügelgen, Matthäus Kleindessner, Francesco Locatello, Bernhard Schölkopf, Peter Gehler
Among the questions we study: if a new model's predictions differ from the current ones, should we update?
1 code implementation • 7 May 2021 • Matthäus Kleindessner, Samira Samadi, Muhammad Bilal Zafar, Krishnaram Kenthapadi, Chris Russell
We initiate the study of fairness for ordinal regression.
1 code implementation • 11 Jun 2020 • Jacob Abernethy, Pranjal Awasthi, Matthäus Kleindessner, Jamie Morgenstern, Chris Russell, Jie Zhang
We propose simple active sampling and reweighting strategies for optimizing min-max fairness that can be applied to any classification or regression model learned via loss minimization.
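A minimal sketch of one such reweighting loop in the multiplicative-weights style (the specific update rule, learner, and hyperparameters here are illustrative assumptions): train on weighted data, then upweight whichever group currently suffers the largest loss.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def minmax_reweighting(X, y, groups, rounds=20, eta=1.0):
    """Repeatedly fit a weighted classifier and exponentially upweight the
    groups with the highest current error, approximating min-max fairness."""
    group_ids = np.unique(groups)
    w = np.ones(len(group_ids))                  # one weight per group
    for _ in range(rounds):
        sample_w = w[np.searchsorted(group_ids, groups)]
        clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=sample_w)
        # Per-group 0/1 error on the training data.
        errs = np.array([(clf.predict(X[groups == g]) != y[groups == g]).mean()
                         for g in group_ids])
        w *= np.exp(eta * errs)                  # upweight the worst-off groups
        w /= w.sum()
    return clf
```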
no code implementations • 8 Jun 2020 • Matthäus Kleindessner, Pranjal Awasthi, Jamie Morgenstern
A common distinction in fair machine learning, in particular in fair classification, is between group fairness and individual fairness.
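To make the distinction concrete, a small illustrative sketch (the metrics are simplified and the Lipschitz constant is an arbitrary choice): group fairness compares aggregate statistics across protected groups, while individual fairness constrains how differently similar individuals may be treated.

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Group fairness: gap in positive-prediction rates between two groups."""
    rates = [y_pred[groups == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def individual_fairness_violations(X, scores, L=1.0):
    """Individual fairness (Lipschitz flavor): count pairs whose predictions
    differ by more than L times their feature-space distance."""
    n, count = len(X), 0
    for i in range(n):
        for j in range(i + 1, n):
            if abs(scores[i] - scores[j]) > L * np.linalg.norm(X[i] - X[j]):
                count += 1
    return count
```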
2 code implementations • 7 Jun 2019 • Pranjal Awasthi, Matthäus Kleindessner, Jamie Morgenstern
We identify conditions on the perturbation under which applying equalized odds with the perturbed attribute still reduces the bias of a classifier.
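A small sketch of the setting (the flip rate, classifier, and data below are made up for illustration): measure the equalized-odds gap once with the true attribute and once with an attribute observed through random flips.

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, attr):
    """Max gap in TPR and FPR between the two attribute groups."""
    gaps = []
    for y in (0, 1):                             # condition on the true label
        mask = y_true == y
        rates = [y_pred[mask & (attr == g)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

rng = np.random.default_rng(0)
n = 10_000
attr = rng.integers(0, 2, n)                     # true protected attribute
y_true = rng.integers(0, 2, n)
y_pred = (rng.random(n) < 0.3 + 0.2 * attr).astype(int)   # biased classifier
flip = rng.random(n) < 0.2                       # attribute observed with 20% flips
attr_noisy = np.where(flip, 1 - attr, attr)
print(equalized_odds_gap(y_true, y_pred, attr))        # gap w.r.t. true attribute
print(equalized_odds_gap(y_true, y_pred, attr_noisy))  # gap w.r.t. perturbed one
```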
1 code implementation • 24 Jan 2019 • Matthäus Kleindessner, Samira Samadi, Pranjal Awasthi, Jamie Morgenstern
Given the widespread popularity of spectral clustering (SC) for partitioning graph data, we study a version of constrained SC in which we try to incorporate the fairness notion proposed by Chierichetti et al. (2017).
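A sketch of the constrained-SC idea (matrix construction simplified; the weight matrix W is assumed symmetric and the data illustrative): encode fairness as linear constraints $F^\top H = 0$ on the spectral embedding $H$, restrict the embedding to the nullspace of those constraints, and cluster as usual.

```python
import numpy as np
from scipy.linalg import null_space, eigh
from sklearn.cluster import KMeans

def fair_spectral_clustering(W, groups, k):
    """Constrained SC: column s of F encodes membership in group s relative
    to its population share, so F^T H = 0 forces each cluster indicator to
    be balanced across groups in expectation."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                           # unnormalized graph Laplacian
    gs = np.unique(groups)
    F = np.column_stack([(groups == g).astype(float) - (groups == g).mean()
                         for g in gs[:-1]])      # drop one redundant column
    Z = null_space(F.T)                          # orthonormal basis of the constraint nullspace
    vals, Y = eigh(Z.T @ L @ Z)                  # smallest eigenvectors of the restricted Laplacian
    H = Z @ Y[:, :k]                             # fairness-constrained spectral embedding
    return KMeans(n_clusters=k, n_init=10).fit_predict(H)
```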
1 code implementation • 24 Jan 2019 • Matthäus Kleindessner, Pranjal Awasthi, Jamie Morgenstern
In data summarization, the goal is to choose $k$ prototypes that represent the whole data set.
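A simplified greedy heuristic in the fair-k-center spirit (explicitly not the paper's approximation algorithm; quotas and data are illustrative): farthest-point selection that respects a per-group budget of prototypes.

```python
import numpy as np
from scipy.spatial.distance import cdist

def fair_k_center_greedy(X, groups, quotas):
    """Greedy farthest-point selection with per-group quotas: always pick the
    point farthest from the current prototypes whose group still has budget.
    A heuristic sketch, e.g. quotas = {0: 2, 1: 2} for two groups."""
    remaining = dict(quotas)
    # Start from the first point whose group still has budget.
    first = next(i for i in range(len(X)) if remaining[groups[i]] > 0)
    centers = [first]
    remaining[groups[first]] -= 1
    dist = cdist(X, X[[first]]).ravel()          # distance to nearest prototype
    while any(v > 0 for v in remaining.values()):
        # Mask out points whose group budget is exhausted.
        cand = np.where([remaining[g] > 0 for g in groups], dist, -np.inf)
        nxt = int(np.argmax(cand))
        centers.append(nxt)
        remaining[groups[nxt]] -= 1
        dist = np.minimum(dist, cdist(X, X[[nxt]]).ravel())
    return centers
```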
no code implementations • NeurIPS 2017 • Matthäus Kleindessner, Ulrike von Luxburg
Given only information in the form of similarity triplets "Object A is more similar to object B than to object C" about a data set, we propose two ways of defining a kernel function on the data set.
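A simplified sketch of one such construction (the paper proposes and analyzes two kernel functions; the details below are illustrative and only feasible for small data sets): represent each object by a vector of its triplet answers over ordered pairs, then take normalized inner products.

```python
import numpy as np

def triplet_kernel(n, triplets):
    """Each object a gets a feature vector indexed by ordered pairs (b, c):
    +1 if a triplet states a is more similar to b than to c, -1 for the
    opposite, 0 if unobserved. The kernel is the Gram matrix of the
    normalized feature vectors. Memory is O(n^3), so small n only."""
    Phi = np.zeros((n, n * n))
    for a, b, c in triplets:                     # "a is more similar to b than to c"
        Phi[a, b * n + c] = 1.0
        Phi[a, c * n + b] = -1.0
    norms = np.linalg.norm(Phi, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                      # avoid dividing by zero
    Phi /= norms
    return Phi @ Phi.T

# Toy usage: 4 objects, a handful of observed triplets.
K = triplet_kernel(4, [(0, 1, 2), (1, 0, 3), (2, 3, 0)])
print(K.shape)  # (4, 4)
```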
no code implementations • 23 Feb 2016 • Matthäus Kleindessner, Ulrike von Luxburg
In recent years, it has become popular to study machine learning problems in settings where only ordinal distance information, rather than numerical distance measurements, is available.