BERT of all trades, master of some

LREC 2020 · Denis Gordeev, Olga Lykova

This paper describes our results for the TRAC 2020 competition held together with the LREC 2020 conference. Our team name was Ms8qQxMbnjJMgYcw. The competition consisted of 2 subtasks in 3 languages (Bengali, English and Hindi), where the participants' task was to classify aggression in short social media texts and to decide whether the aggression is gendered or not. We used a single BERT-based system with two outputs for all tasks simultaneously. Our model placed first in the English and second in the Bengali gendered text classification tasks, with F1-scores of 0.87 and 0.93 respectively.
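The abstract describes a single BERT encoder with two classification outputs trained on all tasks at once. Below is a minimal sketch of such a two-headed setup using the Hugging Face transformers library; the label counts follow the TRAC-2 label sets (3 aggression classes, 2 gendered classes), but the model name, pooling strategy, and head design here are assumptions, not the authors' exact implementation.

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class TwoHeadBert(nn.Module):
    """One shared BERT encoder with two task-specific classification heads."""

    def __init__(self, model_name="bert-base-multilingual-cased",
                 n_aggression_labels=3, n_gendered_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Head 1: aggression level (e.g. NAG / CAG / OAG).
        self.aggression_head = nn.Linear(hidden, n_aggression_labels)
        # Head 2: gendered vs. non-gendered aggression.
        self.gendered_head = nn.Linear(hidden, n_gendered_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]  # [CLS] token representation
        return self.aggression_head(pooled), self.gendered_head(pooled)


# Usage sketch: both heads are produced in a single forward pass,
# so one model can serve both subtasks simultaneously.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = TwoHeadBert()
batch = tokenizer(["example social media post"], return_tensors="pt",
                  padding=True, truncation=True)
agg_logits, gen_logits = model(batch["input_ids"], batch["attention_mask"])
```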


Datasets


Task                 Dataset                    Model  Metric  Metric Value  Global Rank
Text Classification  TRAC2-Bengali (Task 2)     BERT   F1      0.929702403   #1
Text Classification  TRAC2-English (Task 2)     BERT   F1      0.871585052   #1
