no code implementations • 6 Feb 2024 • Xuechunzi Bai, Angelina Wang, Ilia Sucholutsky, Thomas L. Griffiths
Large language models (LLMs) can pass explicit bias tests but still harbor implicit biases, similar to humans who endorse egalitarian beliefs yet exhibit subtle biases.
6 Feb 2024 • Angelina Wang, Xuechunzi Bai, Solon Barocas, Su Lin Blodgett
However, certain stereotype-violating errors are more experientially harmful for men, potentially due to perceived threats to masculinity.
31 Jan 2022 • Kevin R. McKee, Xuechunzi Bai, Susan T. Fiske
Participants' perceptions of warmth and competence predict their stated preferences for different agents, above and beyond objective performance metrics.