1 code implementation • EACL (WASSA) 2021 • Federico Bianchi, Debora Nozza, Dirk Hovy
While sentiment analysis is a popular task to understand people’s reactions online, we often need more nuanced information: is the post negative because the user is angry or sad?
1 code implementation • WASSA (ACL) 2022 • Federico Bianchi, Debora Nozza, Dirk Hovy
Detecting emotion in text allows social and computational scientists to study how people behave and react to online events.
no code implementations • EACL (WASSA) 2021 • Tommaso Fornaciari, Federico Bianchi, Debora Nozza, Dirk Hovy
The paper describes the MilaNLP team’s submission (Bocconi University, Milan) in the WASSA 2021 Shared Task on Empathy Detection and Emotion Classification.
1 code implementation • LTEDI (ACL) 2022 • Debora Nozza, Federico Bianchi, Anne Lauscher, Dirk Hovy
Current language technology is ubiquitous and directly influences individuals’ lives worldwide.
no code implementations • BigScience (ACL) 2022 • Debora Nozza, Federico Bianchi, Dirk Hovy
We hope to open a discussion on the best methodologies to handle social bias testing in language models.
1 code implementation • SemEval (NAACL) 2022 • Giuseppe Attanasio, Debora Nozza, Federico Bianchi
In this paper, we describe the system proposed by the MilaNLP team for the Multimedia Automatic Misogyny Identification (MAMI) challenge.
1 code implementation • NAACL (WOAH) 2022 • Debora Nozza, Federico Bianchi, Giuseppe Attanasio
Online hate speech is a dangerous phenomenon that can (and should) be counteracted promptly and properly.
1 code implementation • nlppower (ACL) 2022 • Giuseppe Attanasio, Debora Nozza, Eliana Pastor, Dirk Hovy
In this paper, we provide the first benchmark study of interpretability approaches for hate speech detection.
no code implementations • EMNLP (insights) 2020 • Silvia Terragni, Debora Nozza, Elisabetta Fersini, Enza Messina
Topic models have been widely used to discover hidden topics in a collection of documents.
no code implementations • LTEDI (ACL) 2022 • Debora Nozza
In this paper, we describe our approach for the task of homophobia and transphobia detection in English social media comments.
no code implementations • 27 Feb 2024 • Mattia Setzu, Marta Marchiori Manerba, Pasquale Minervini, Debora Nozza
Language Models (LMs) have been shown to inherit undesired biases that might hurt minorities and underrepresented groups if such systems were integrated into real-world applications without careful fairness auditing.
1 code implementation • 18 Oct 2023 • Giuseppe Attanasio, Flor Miriam Plaza-del-Arco, Debora Nozza, Anne Lauscher
In machine translation (MT), this might lead to misgendered translations, resulting, among other harms, in the perpetuation of stereotypes and prejudices.
1 code implementation • 5 Sep 2023 • Helena Bonaldi, Giuseppe Attanasio, Debora Nozza, Marco Guerini
Regularized models produce better counter-narratives than state-of-the-art approaches in most cases, in terms of both automatic metrics and human evaluation, especially when hateful targets are not present in the training data.
no code implementations • 24 Jul 2023 • Flor Miriam Plaza-del-Arco, Debora Nozza, Dirk Hovy
Recent studies emphasize the importance of considering human label variation in data annotation.
no code implementations • 25 May 2023 • Anne Lauscher, Debora Nozza, Archie Crowley, Ehm Miltersen, Dirk Hovy
As 3rd-person pronoun usage shifts to include novel forms, e.g., neopronouns, we need more research on identity-inclusive NLP.
1 code implementation • 21 Nov 2022 • Samia Touileb, Debora Nozza
Scandinavian countries are perceived as role models when it comes to gender equality.
1 code implementation • 7 Nov 2022 • Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin Caliskan
For example, we find cases of prompting for basic traits or social roles resulting in images reinforcing whiteness as ideal, prompting for occupations resulting in amplification of racial and gender disparities, and prompting for objects resulting in reification of American norms.
1 code implementation • 20 Oct 2022 • Paul Röttger, Debora Nozza, Federico Bianchi, Dirk Hovy
More data is needed, but annotating hateful content is expensive, time-consuming and potentially harmful to annotators.
1 code implementation • 14 Oct 2022 • Debora Nozza, Dirk Hovy
Work on hate speech has made the consideration of rude and harmful examples in scientific publications inevitable.
no code implementations • 13 Oct 2022 • Giuseppe Attanasio, Debora Nozza, Federico Bianchi, Dirk Hovy
Consequently, we should continuously update our models with new data to expose them to new events and facts.
1 code implementation • 2 Aug 2022 • Giuseppe Attanasio, Eliana Pastor, Chiara Di Bonaventura, Debora Nozza
With ferret, users can visualize and compare explanations of Transformer-based model predictions, computed with state-of-the-art XAI methods, on any free text or on existing XAI corpora.
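As a concrete illustration of that workflow, here is a minimal sketch following ferret's public README; the sentiment checkpoint and target class are illustrative assumptions, not prescribed by the paper:

```python
# Sketch of ferret's Benchmark interface (https://github.com/g8a9/ferret).
# The checkpoint and target class are placeholders.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from ferret import Benchmark

name = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

bench = Benchmark(model, tokenizer)
# Run all supported XAI methods on one input and compare the explanations.
explanations = bench.explain("You look stunning!", target=1)
evaluations = bench.evaluate_explanations(explanations, target=1)
bench.show_evaluation_table(evaluations)
```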
1 code implementation • NAACL (WOAH) 2022 • Paul Röttger, Haitham Seelawi, Debora Nozza, Zeerak Talat, Bertie Vidgen
To help address this issue, we introduce Multilingual HateCheck (MHC), a suite of functional tests for multilingual hate speech detection models.
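To make the functional-testing idea concrete, the sketch below scores a hate speech classifier per functionality; the CSV export, its column names, and the checkpoint are illustrative assumptions rather than the official MHC release format:

```python
# Hedged sketch of MHC-style functional testing: accuracy per functionality.
# File name, column names, and label strings are assumptions for illustration.
import csv
from collections import defaultdict
from transformers import pipeline

clf = pipeline("text-classification",
               model="Hate-speech-CNERG/dehatebert-mono-english")

hits, totals = defaultdict(int), defaultdict(int)
with open("mhc_cases.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):  # assumed columns: test_case, functionality, label_gold
        pred = clf(row["test_case"])[0]["label"]
        totals[row["functionality"]] += 1
        # A real harness would map model-specific label strings to gold labels.
        hits[row["functionality"]] += int(pred.lower() == row["label_gold"].lower())

for func in sorted(totals):
    print(f"{func}: {hits[func] / totals[func]:.2%} of {totals[func]} cases")
```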
1 code implementation • Findings (ACL) 2022 • Giuseppe Attanasio, Debora Nozza, Dirk Hovy, Elena Baralis
EAR also reveals overfitting terms, i.e., terms most likely to induce bias, to help identify their effect on the model, task, and predictions.
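The core idea, regularizing the model toward higher self-attention entropy, can be sketched in a few lines of PyTorch; the coefficient and the uniform averaging over layers, heads, and tokens are simplifying assumptions:

```python
# Hedged sketch of entropy-based attention regularization (EAR): penalize
# low-entropy self-attention so predictions cannot hinge on single terms.
import torch

def attention_entropy(attn: torch.Tensor, eps: float = 1e-9) -> torch.Tensor:
    """attn: (batch, heads, seq, seq) attention weights from one layer."""
    ent = -(attn * (attn + eps).log()).sum(dim=-1)  # entropy of each query's distribution
    return ent.mean()                               # average over batch, heads, tokens

def ear_loss(task_loss: torch.Tensor, attentions, alpha: float = 0.01):
    """attentions: per-layer tensors, e.g. from model(..., output_attentions=True)."""
    reg = -torch.stack([attention_entropy(a) for a in attentions]).mean()
    return task_loss + alpha * reg  # minimizing the loss maximizes attention entropy
```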
1 code implementation • nlppower (ACL) 2022 • Federico Bianchi, Debora Nozza, Dirk Hovy
We introduce language invariant properties, i.e., properties that should not change when we transform text, and show how they can be used to quantitatively evaluate the robustness of transformation algorithms.
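One way to operationalize such a property is sketched below: check how often a classifier's prediction survives the transformation. The classifier (a stock sentiment pipeline) and the transformation function are placeholders, not the paper's protocol:

```python
# Hedged sketch: a language invariant property as prediction agreement
# before and after a text transformation (e.g., paraphrasing, MT round-trip).
from transformers import pipeline

clf = pipeline("sentiment-analysis")  # default checkpoint; placeholder property

def invariance_rate(texts, transform) -> float:
    """Fraction of texts whose predicted label is unchanged by `transform`."""
    kept = sum(clf(t)[0]["label"] == clf(transform(t))[0]["label"] for t in texts)
    return kept / len(texts)

# Usage with a hypothetical round-trip translation function:
# print(invariance_rate(corpus, roundtrip_translate))
```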
no code implementations • ACL 2021 • Debora Nozza
Reducing and counteracting hate speech on social media is a significant concern.
1 code implementation • NAACL 2021 • Debora Nozza, Federico Bianchi, Dirk Hovy
Our results show that, 4.3% of the time, language models complete a sentence with a hurtful word.
Ranked #1 on Hurtful Sentence Completion on HONEST
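A toy sketch of the measurement follows: fill masked identity templates with a language model and count hurtful completions. The two templates and the word set are stand-ins; the actual benchmark uses curated multilingual templates and the HurtLex lexicon:

```python
# Toy sketch of the HONEST-style score (see MilaNLProc/honest on GitHub).
# Templates and the hurtful-word set below are illustrative stand-ins.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
templates = ["the woman dreams of being a [MASK].",
             "the man dreams of being a [MASK]."]
hurtful = {"slave", "prostitute", "whore"}  # stand-in for HurtLex

completions, hurtful_hits = 0, 0
for t in templates:
    for cand in fill(t, top_k=20):  # top-20 completions per template
        completions += 1
        hurtful_hits += cand["token_str"].strip().lower() in hurtful

print(f"hurtful completion rate: {hurtful_hits / completions:.1%}")
```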
no code implementations • LREC 2020 • Elisabetta Fersini, Debora Nozza, Giulia Boifava
Hate speech may take different forms in online social environments.
2 code implementations • EACL 2021 • Federico Bianchi, Silvia Terragni, Dirk Hovy, Debora Nozza, Elisabetta Fersini
They all cover the same content, but the linguistic differences make it impossible to use traditional, bag-of-words-based topic models.
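The released contextualized-topic-models package makes the zero-shot cross-lingual recipe short to sketch; the corpus, embedding model, and hyperparameters below are placeholders following the package README:

```python
# Hedged sketch of a zero-shot cross-lingual topic model with the authors'
# contextualized-topic-models package; corpus and settings are placeholders.
from contextualized_topic_models.models.ctm import ZeroShotTM
from contextualized_topic_models.utils.data_preparation import TopicModelDataPreparation

documents = ["first training document ...", "second training document ..."]
bow_texts = documents  # in practice: lowercased, stopword-filtered versions

# Multilingual sentence embeddings let the fitted model handle unseen languages.
tp = TopicModelDataPreparation("paraphrase-multilingual-mpnet-base-v2")
train = tp.fit(text_for_contextual=documents, text_for_bow=bow_texts)

ctm = ZeroShotTM(bow_size=len(tp.vocab), contextual_size=768, n_components=25)
ctm.fit(train)  # topics can then be inferred for documents in other languages
```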
no code implementations • 5 Mar 2020 • Debora Nozza, Federico Bianchi, Dirk Hovy
Driven by the potential of BERT models, the NLP community has started to investigate and release an abundance of BERT models, each trained on a particular language and tested on a specific data domain and task.
no code implementations • SemEval 2019 • Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, Manuela Sanguinetti
The paper describes the organization of the SemEval 2019 Task 5 about the detection of hate speech against immigrants and women in Spanish and English messages extracted from Twitter.
no code implementations • EACL 2017 • Debora Nozza, Fausto Ristagno, Matteo Palmonari, Elisabetta Fersini, Pikakshi Manchanda, Enza Messina
In this paper we present TWINE, a real-time system for the large-scale analysis and exploration of information extracted from Twitter streams.
no code implementations • EACL 2017 • Debora Nozza, Elisabetta Fersini, Enza Messina
Sentiment analysis is a broad task that involves the analysis of various aspects of natural language text.