Toxic Language Detection in Social Media for Brazilian Portuguese: New Dataset and Multilingual Analysis

Hate speech and toxic comments are a common concern of social media platform users. Although these comments are, fortunately, a minority on these platforms, they are still capable of causing harm. Therefore, identifying these comments is an important task for studying and preventing the proliferation of toxicity in social media. Previous work on automatically detecting toxic comments focuses mainly on English, with very little work on languages like Brazilian Portuguese. In this paper, we propose a new large-scale dataset for Brazilian Portuguese with tweets annotated as either toxic or non-toxic, and further labeled with different types of toxicity. We present our dataset collection and annotation process, in which we aimed to select candidates covering multiple demographic groups. State-of-the-art BERT models achieve a 76% macro-F1 score using monolingual data in the binary case. We also show that large-scale monolingual data is still needed to create more accurate models, despite recent advances in multilingual approaches. An error analysis and experiments with multi-label classification show the difficulty of classifying certain types of toxic comments that appear less frequently in our data, and highlight the need to develop models that are aware of different categories of toxicity.
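The abstract and leaderboard report macro-F1, which averages per-class F1 scores with equal weight, so the minority (toxic) class counts as much as the majority class. A minimal pure-Python sketch of this metric, using made-up labels rather than ToLD-Br data:

```python
# Toy illustration of macro-F1, the metric reported for ToLD-Br.
# The labels and predictions below are invented for demonstration only.

def macro_f1(y_true, y_pred, labels):
    """Average the per-class F1 scores, weighting each class equally."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Binary setting from the paper: 1 = toxic, 0 = non-toxic
y_true = [1, 1, 0, 0, 0, 1]
y_pred = [1, 0, 0, 0, 1, 1]
print(round(macro_f1(y_true, y_pred, labels=[0, 1]), 3))  # 0.667
```

Because macro-F1 ignores class frequencies, a model that simply predicts "non-toxic" for everything scores poorly even though toxic tweets are rare, which is why it is the headline metric here.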

PDF | Abstract — Asian Chapter 2020

Datasets


Introduced in the Paper:

ToLD-Br

Used in the Paper:

OLID

Results from the Paper


Task                   Dataset   Model              Metric    Value  Rank
Hate Speech Detection  ToLD-Br   Multilingual BERT  F1-score  0.75   #1
Hate Speech Detection  ToLD-Br   AutoML             F1-score  0.74   #2
