no code implementations • 20 Jan 2023 • Sara Kim, Kyusang Yu, Yongdai Kim
We introduce a new concept of fairness, called within-group fairness, which requires that AI models be fair for individuals within the same sensitive group as well as across different sensitive groups.
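The distinction between within-group and between-group fairness can be illustrated with a toy check. The sketch below is a minimal illustration, not the paper's method: it assumes between-group fairness is measured by a demographic-parity gap, and adopts one possible reading of within-group fairness, namely that a fairness adjustment should preserve the score ranking inside each sensitive group.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scores for two sensitive groups (illustrative data, not from the paper).
scores_a = rng.uniform(size=100)
scores_b = rng.uniform(size=100)
threshold = 0.5

# Between-group fairness: demographic-parity gap, i.e. the difference
# in acceptance rates across the two sensitive groups.
rate_a = (scores_a > threshold).mean()
rate_b = (scores_b > threshold).mean()
parity_gap = abs(rate_a - rate_b)


def ranks(x):
    """Rank of each entry within its group (double argsort)."""
    return np.argsort(np.argsort(x))


# One possible reading of within-group fairness (an assumption here):
# after a fairness adjustment, individuals in the same group should keep
# their relative ordering. A monotone adjustment trivially satisfies this.
adjusted_a = 0.8 * scores_a + 0.1  # monotone, so ranks are preserved
within_group_ok = np.array_equal(ranks(scores_a), ranks(adjusted_a))

print(f"parity gap: {parity_gap:.3f}, within-group order preserved: {within_group_ok}")
```

A non-monotone adjustment (e.g. clipping scores to the threshold) could close the parity gap while reordering individuals inside a group, which is exactly the kind of unfairness the within-group notion targets.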
1 code implementation • 7 Feb 2022 • Kunwoong Kim, Ilsang Ohn, Sara Kim, Yongdai Kim
As they have a vital effect on social decision-making, AI algorithms should be not only accurate but also fair.
no code implementations • 29 Sep 2021 • Yongdai Kim, Sara Kim, Seonghyeon Kim, Kunwoong Kim
To ensure fairness on test data, we develop computationally efficient learning algorithms that are robust to sampling bias.