Model and Data Transfer for Cross-Lingual Sequence Labelling in Zero-Resource Settings

23 Oct 2022 · Iker García-Ferrero, Rodrigo Agerri, German Rigau

Zero-resource cross-lingual transfer approaches aim to apply supervised models from a source language to unlabelled target languages. In this paper we perform an in-depth study of the two main techniques employed so far for cross-lingual zero-resource sequence labelling, based either on data transfer or model transfer. Although previous research has proposed translation and annotation projection (data-based cross-lingual transfer) as an effective technique for cross-lingual sequence labelling, in this paper we experimentally demonstrate that high-capacity multilingual language models applied in a zero-shot (model-based cross-lingual transfer) setting consistently outperform data-based cross-lingual transfer approaches. A detailed analysis of our results suggests that this might be due to important differences in language use. More specifically, machine translation often generates a textual signal which is different from what the models are exposed to when using gold standard data, which affects both the fine-tuning and evaluation processes. Our results also indicate that data-based cross-lingual transfer approaches remain a competitive option when high-capacity multilingual language models are not available.
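As a rough illustration of the model-based (zero-shot) transfer setting described above, the sketch below fine-tunes a multilingual encoder on English NER labels only and then evaluates it directly on a target language without any target-language annotations. This is a minimal sketch assuming the Hugging Face transformers and datasets libraries; the model name, dataset identifiers, and hyperparameters are illustrative choices, not the paper's exact configuration. The data-based alternative would instead machine-translate the English training data and project the annotations before fine-tuning a target-language model.

# Minimal sketch of zero-shot (model-based) cross-lingual transfer for NER.
# Assumptions: Hugging Face transformers/datasets, CoNLL-2003 English for
# fine-tuning; illustrative hyperparameters only.
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification,
                          TrainingArguments, Trainer)
from datasets import load_dataset

labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
          "B-LOC", "I-LOC", "B-MISC", "I-MISC"]
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-large", num_labels=len(labels))

# English gold-standard data (source language only).
train_data = load_dataset("conll2003", split="train")

def tokenize_and_align(batch):
    # Align word-level NER tags with subword tokens: label the first subword
    # of each word, mask the remaining subwords with -100.
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        prev, lab = None, []
        for wid in enc.word_ids(batch_index=i):
            lab.append(-100 if wid is None or wid == prev else tags[wid])
            prev = wid
        all_labels.append(lab)
    enc["labels"] = all_labels
    return enc

train_data = train_data.map(tokenize_and_align, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="xlmr-ner-en", num_train_epochs=3),
    train_dataset=train_data,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
# Zero-shot step: run this English-only model on Spanish, Dutch and German
# CoNLL test sets without any further training or target-language labels.
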


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| Cross-Lingual NER | CoNLL 2003 | XLM-RoBERTa-large | Spanish | 79.5 | #1 |
| Cross-Lingual NER | CoNLL 2003 | XLM-RoBERTa-large | German | 74.5 | #2 |
| Cross-Lingual NER | CoNLL 2003 | XLM-RoBERTa-large | Dutch | 82.3 | #1 |
| Cross-Lingual NER | CoNLL Dutch | XLM-R large | F1 | 79.7 | #7 |
| Cross-Lingual NER | CoNLL German | XLM-R large | F1 | 74.5 | #4 |
| Cross-Lingual NER | CoNLL Spanish | XLM-R large | F1 | 79.5 | #1 |

Methods


No methods listed for this paper.