Czech Dataset for Cross-lingual Subjectivity Classification

LREC 2022 · Pavel Přibáň, Josef Steinberger

In this paper, we introduce a new Czech subjectivity dataset of 10k manually annotated subjective and objective sentences from movie reviews and descriptions. Our prime motivation is to provide a reliable dataset that can be used with the existing English dataset as a benchmark to test the ability of pre-trained multilingual models to transfer knowledge between Czech and English, and vice versa. Two annotators annotated the dataset, reaching a Cohen's κ inter-annotator agreement of 0.83. To the best of our knowledge, this is the first subjectivity dataset for the Czech language. We also created an additional dataset consisting of 200k automatically labeled sentences. Both datasets are freely available for research purposes. Furthermore, we fine-tune five pre-trained BERT-like models to set a monolingual baseline for the new dataset, achieving 93.56% accuracy. We also fine-tune models on the existing English dataset, obtaining results on par with the current state of the art. Finally, we perform zero-shot cross-lingual subjectivity classification between Czech and English to verify the usability of our dataset as a cross-lingual benchmark. We compare and discuss the cross-lingual and monolingual results and the ability of multilingual models to transfer knowledge between languages.
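For reference, the inter-annotator agreement metric used above, Cohen's κ, can be computed with scikit-learn. A minimal sketch follows; the two toy label lists are invented for illustration and are not the paper's actual annotations:

```python
# Minimal sketch: Cohen's kappa between two annotators, via scikit-learn.
# Labels are hypothetical: 1 = subjective, 0 = objective.
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 0, 1, 1, 0, 0, 1, 0]
annotator_b = [1, 0, 1, 0, 0, 0, 1, 0]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # the paper reports 0.83 on the full dataset
```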


Datasets


Introduced in the Paper:

Czech Subjectivity Dataset

Used in the Paper:

SUBJ
Task                   Dataset                      Model          Metric    Value   Rank
Subjectivity Analysis  Czech Subjectivity Dataset   XLM-R-Large    Accuracy  93.56   #1
Subjectivity Analysis  Czech Subjectivity Dataset   RobeCzech      Accuracy  93.29   #2
Subjectivity Analysis  Czech Subjectivity Dataset   Czert-B        Accuracy  92.85   #3
Subjectivity Analysis  Czech Subjectivity Dataset   Czech Electra  Accuracy  91.85   #4
Subjectivity Analysis  Czech Subjectivity Dataset   mBERT          Accuracy  91.23   #5
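A baseline like the XLM-R-Large row above can be reproduced with a standard sequence-classification fine-tuning loop. The sketch below uses Hugging Face transformers; the CSV file names, column names, and hyperparameters are assumptions for illustration, not the authors' released setup:

```python
# Hypothetical sketch: fine-tuning XLM-R-Large for binary subjectivity
# classification (0 = objective, 1 = subjective) with Hugging Face transformers.
import pandas as pd
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "xlm-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

# "train.csv"/"dev.csv" with columns sentence,label are assumed here;
# substitute the released dataset files and their actual format.
train = Dataset.from_pandas(pd.read_csv("train.csv")).map(tokenize, batched=True)
dev = Dataset.from_pandas(pd.read_csv("dev.csv")).map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="subj-xlmr",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    num_train_epochs=3,
    evaluation_strategy="epoch",
)
Trainer(model=model, args=args, train_dataset=train, eval_dataset=dev,
        tokenizer=tokenizer).train()
```

For the zero-shot cross-lingual setting described in the abstract, the same model would instead be fine-tuned on the English SUBJ training data and then evaluated directly on the Czech test sentences (e.g. via trainer.predict), with no Czech training examples.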

Methods


No methods listed for this paper.