Search Results for author: Marta Marchiori Manerba

Found 5 papers, 2 papers with code

LeaningTower@LT-EDI-ACL2022: When Hope and Hate Collide

no code implementations LTEDI (ACL) 2022 Arianna Muti, Marta Marchiori Manerba, Katerina Korre, Alberto Barrón-Cedeño

The Hope Speech Detection task required models for the automatic identification of hopeful comments for equality, diversity, and inclusion.

Active Learning · Hope Speech Detection

Bias Discovery within Human Raters: A Case Study of the Jigsaw Dataset

1 code implementation NLPerspectives (LREC) 2022 Marta Marchiori Manerba, Riccardo Guidotti, Lucia Passaro, Salvatore Ruggieri

Understanding and quantifying the bias introduced by human annotation of data is a crucial problem for trustworthy supervised learning.

Fine-Grained Fairness Analysis of Abusive Language Detection Systems with CheckList

1 code implementation ACL (WOAH) 2021 Marta Marchiori Manerba, Sara Tonelli

Our evaluation shows that, although BERT-based classifiers achieve high accuracy on a variety of natural language processing tasks, they perform very poorly with respect to fairness and bias, in particular on samples involving implicit stereotypes, expressions of hate towards minorities, and protected attributes such as race or sexual orientation.

Abusive Language · Fairness

FairBelief - Assessing Harmful Beliefs in Language Models

no code implementations 27 Feb 2024 Mattia Setzu, Marta Marchiori Manerba, Pasquale Minervini, Debora Nozza

Language Models (LMs) have been shown to inherit undesired biases that might hurt minorities and underrepresented groups if such systems were integrated into real-world applications without careful fairness auditing.

Fairness

Social Bias Probing: Fairness Benchmarking for Language Models

no code implementations 15 Nov 2023 Marta Marchiori Manerba, Karolina Stańczak, Riccardo Guidotti, Isabelle Augenstein

While the impact of these biases has been recognized, prior methods for bias evaluation have been limited to binary association tests on small datasets, offering a constrained view of the nature of societal biases within language models.

Benchmarking · Fairness +1