no code implementations • 5 Apr 2023 • Gabriel Lima, Nina Grgić-Hlača, Meeyoung Cha
Building upon research suggesting that people blame AI systems, we investigated how several factors influence people's reactive attitudes towards machines, designers, and users.
no code implementations • 11 May 2022 • Gabriel Lima, Nina Grgić-Hlača, Jin Keun Jeong, Meeyoung Cha
Furthermore, we argue that XAI could result in incorrect attributions of responsibility to vulnerable stakeholders, such as those who are subjected to algorithmic decisions (i.e., patients), due to a misguided perception that they have control over explainable algorithms.
no code implementations • 1 Feb 2021 • Gabriel Lima, Nina Grgić-Hlača, Meeyoung Cha
How to attribute responsibility for the actions of autonomous artificial intelligence (AI) systems has been widely debated across the humanities and social sciences.
no code implementations • 2 May 2020 • Nina Grgić-Hlača, Gabriel Lima, Adrian Weller, Elissa M. Redmiles
A growing number of oversight boards and regulatory bodies seek to monitor and govern algorithms that make decisions about people's lives.
no code implementations • 26 Feb 2018 • Nina Grgić-Hlača, Elissa M. Redmiles, Krishna P. Gummadi, Adrian Weller
As algorithms are increasingly used to make important decisions that affect human lives, ranging from social benefit assignment to predicting the risk of criminal recidivism, concerns have been raised about the fairness of algorithmic decision making.
no code implementations • 30 Jun 2017 • Nina Grgić-Hlača, Muhammad Bilal Zafar, Krishna P. Gummadi, Adrian Weller
Consider a binary decision-making process in which a single machine learning classifier replaces a multitude of humans.