Search Results for author: Sara Kim

Found 3 papers, 1 paper with code

Within-group fairness: A guidance for more sound between-group fairness

no code implementations • 20 Jan 2023 • Sara Kim, Kyusang Yu, Yongdai Kim

We introduce a new concept of fairness, called within-group fairness, which requires that AI models be fair for individuals within the same sensitive group as well as for those in different sensitive groups.

Decision Making • Fairness
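For readers unfamiliar with group-fairness criteria, the following is a minimal, generic sketch of a between-group check (a demographic parity gap). The function demographic_parity_gap and the toy data are illustrative assumptions only; they do not reproduce the paper's within-group fairness definition.

import numpy as np

def demographic_parity_gap(y_pred, group):
    # Absolute difference in positive-prediction rates between two groups.
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy binary predictions and a binary sensitive attribute (illustrative data).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # prints 0.5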

SLIDE: a surrogate fairness constraint to ensure fairness consistency

1 code implementation • 7 Feb 2022 • Kunwoong Kim, Ilsang Ohn, Sara Kim, Yongdai Kim

As they have a vital effect on social decision making, AI algorithms should be not only accurate but also fair.

Fairness
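As a rough illustration of training under a smooth surrogate fairness penalty, the sketch below adds a squared gap between group-wise mean scores to a standard classification loss. The names surrogate_fairness_penalty and fair_loss and the weight lam are assumptions for illustration, not the SLIDE surrogate itself.

import torch
import torch.nn.functional as F

def surrogate_fairness_penalty(scores, group):
    # Squared gap between group-wise mean scores: a smooth stand-in for a
    # hard fairness constraint (illustrative, not the SLIDE surrogate).
    return (scores[group == 0].mean() - scores[group == 1].mean()) ** 2

def fair_loss(scores, labels, group, lam=1.0):
    # Standard classification loss plus the surrogate penalty, weighted by lam.
    bce = F.binary_cross_entropy_with_logits(scores, labels)
    return bce + lam * surrogate_fairness_penalty(scores, group)

# Toy usage: four examples, two per sensitive group.
scores = torch.tensor([0.2, -1.0, 0.7, 0.1], requires_grad=True)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])
group = torch.tensor([0, 0, 1, 1])
fair_loss(scores, labels, group).backward()  # gradients flow through both terms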

$L_q$ regularization for Fairness AI robust to sampling bias

no code implementations • 29 Sep 2021 • Yongdai Kim, Sara Kim, Seonghyeon Kim, Kunwoong Kim

To ensure fairness on test data, we develop computationally efficient learning algorithms robust to sampling bias.

Fairness
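A minimal sketch of an $L_q$ penalty on model weights is shown below, assuming a generic exponent q and a fixed weight lam; it only illustrates the regularization idea in the title and is not the paper's sampling-bias-robust algorithm.

import torch

def lq_penalty(weights, q=1.5):
    # Sum of |w|^q over a parameter vector; q is an illustrative choice here.
    return weights.abs().pow(q).sum()

def regularized_loss(base_loss, weights, lam=0.1, q=1.5):
    # Base training loss plus the L_q penalty, weighted by lam (assumed value).
    return base_loss + lam * lq_penalty(weights, q)

# Toy usage with a hypothetical weight vector and a placeholder base loss.
w = torch.tensor([0.5, -2.0, 0.0, 1.0], requires_grad=True)
loss = regularized_loss(torch.tensor(0.3), w)
loss.backward()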
