no code implementations • LTEDI (ACL) 2022 • Arianna Muti, Marta Marchiori Manerba, Katerina Korre, Alberto Barrón-Cedeño
The Hope Speech Detection task required models for the automatic identification of hopeful comments promoting equality, diversity, and inclusion.
1 code implementation • NLPerspectives (LREC) 2022 • Marta Marchiori Manerba, Riccardo Guidotti, Lucia Passaro, Salvatore Ruggieri
Understanding and quantifying the bias introduced by human annotation of data is a crucial problem for trustworthy supervised learning.
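One standard way to quantify annotation disagreement of the kind this abstract discusses is chance-corrected agreement between annotators, e.g. Cohen's kappa. The sketch below is illustrative only (toy labels, not the paper's method or data):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each annotator labelled independently
    # according to their own marginal label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical labels from two annotators on six comments.
ann_1 = ["hate", "ok", "ok", "hate", "ok", "ok"]
ann_2 = ["hate", "ok", "hate", "hate", "ok", "ok"]
print(round(cohens_kappa(ann_1, ann_2), 3))  # → 0.667
```

Low kappa on specific subsets of the data (e.g. comments about particular groups) is one signal that annotation bias may be leaking into the supervised labels.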
1 code implementation • ACL (WOAH) 2021 • Marta Marchiori Manerba, Sara Tonelli
Our evaluation shows that, although BERT-based classifiers achieve high accuracy on a variety of natural language processing tasks, they perform poorly in terms of fairness and bias, particularly on samples involving implicit stereotypes, expressions of hate towards minorities, and protected attributes such as race or sexual orientation.
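A common fairness probe for such classifiers is to compare error rates across identity groups, e.g. the false-positive-rate gap on neutral sentences mentioning different groups. A minimal sketch with hypothetical predictions (not the paper's evaluation protocol):

```python
def false_positive_rate(preds, golds):
    """Fraction of gold-negative examples the classifier flags as positive."""
    fp = sum(p == 1 and g == 0 for p, g in zip(preds, golds))
    negatives = sum(g == 0 for g in golds)
    return fp / negatives

# Toy outputs of a hate-speech classifier on neutral (gold = 0)
# sentences mentioning two hypothetical identity groups.
group_a = {"preds": [0, 0, 1, 0], "golds": [0, 0, 0, 0]}
group_b = {"preds": [1, 1, 0, 1], "golds": [0, 0, 0, 0]}

gap = abs(false_positive_rate(group_a["preds"], group_a["golds"])
          - false_positive_rate(group_b["preds"], group_b["golds"]))
print(gap)  # → 0.5
```

A large gap means neutral mentions of one group are disproportionately flagged as hateful, the kind of unfairness the evaluation above reports despite high overall accuracy.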
no code implementations • 27 Feb 2024 • Mattia Setzu, Marta Marchiori Manerba, Pasquale Minervini, Debora Nozza
Language Models (LMs) have been shown to inherit undesired biases that might harm minorities and underrepresented groups if such systems were integrated into real-world applications without careful fairness auditing.
no code implementations • 15 Nov 2023 • Marta Marchiori Manerba, Karolina Stańczak, Riccardo Guidotti, Isabelle Augenstein
While the impact of these biases has been recognized, prior methods for bias evaluation have been limited to binary association tests on small datasets, offering a constrained view of the nature of societal biases within language models.
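The binary association tests mentioned here are typically WEAT-style comparisons: a target word's mean embedding similarity to one attribute set versus another. A minimal sketch on hypothetical 2-d toy embeddings (the real tests operate on learned high-dimensional embeddings):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(w, attrs_a, attrs_b):
    """Mean similarity of word w to attribute set A minus attribute set B."""
    return (sum(cosine(w, a) for a in attrs_a) / len(attrs_a)
            - sum(cosine(w, b) for b in attrs_b) / len(attrs_b))

# Hypothetical toy embeddings for two attribute sets and two target terms.
pleasant = [(1.0, 0.1), (0.9, 0.0)]
unpleasant = [(0.0, 1.0), (0.1, 0.9)]
target_x = (1.0, 0.2)   # stand-in for a majority-group term
target_y = (0.2, 1.0)   # stand-in for a minority-group term

diff = (association(target_x, pleasant, unpleasant)
        - association(target_y, pleasant, unpleasant))
print(diff > 0)  # → True: target_x leans towards "pleasant"
```

Because each test contrasts exactly two attribute sets on a fixed word list, it captures only one binary slice of a model's associations, which is the limitation the abstract points out.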