Rethinking embedding coupling in pre-trained language models

We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage.

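The core idea, untying the input and output embedding matrices so each can be sized independently, is easy to sketch in code. Below is a minimal PyTorch illustration (not the paper's implementation; the vocabulary size, dimensions, and module names are illustrative assumptions): the coupled model ties a single vocab-by-d_model matrix for both roles, while the decoupled model uses a narrow input embedding, freeing parameters for the Transformer stack, plus a wide output embedding used only during pre-training.

```python
# Minimal sketch of coupled vs. decoupled embeddings, assuming PyTorch.
# All sizes and names here are illustrative, not the paper's settings.
import torch
import torch.nn as nn

VOCAB, D_MODEL = 250_000, 768  # e.g. a large multilingual vocabulary

class CoupledLM(nn.Module):
    """Standard weight tying: one matrix serves as both input embedding
    and output projection, so both are forced to VOCAB x D_MODEL."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.encoder = nn.Identity()  # stand-in for the Transformer stack
        self.lm_head = nn.Linear(D_MODEL, VOCAB, bias=False)
        self.lm_head.weight = self.embed.weight  # tied parameters

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))
        return self.lm_head(h)  # logits over the vocabulary

class DecoupledLM(nn.Module):
    """Untied embeddings: a narrow input embedding (its savings can be
    reallocated to Transformer layers) and a wide output embedding that
    exists only for pre-training and is discarded before fine-tuning."""
    def __init__(self, d_in=128, d_out=768):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_in)
        self.proj_in = nn.Linear(d_in, D_MODEL, bias=False)    # up-project into the model
        self.encoder = nn.Identity()  # stand-in for the Transformer stack
        self.proj_out = nn.Linear(D_MODEL, d_out, bias=False)  # project to output-embedding width
        self.lm_head = nn.Linear(d_out, VOCAB, bias=False)     # pre-training only

    def forward(self, token_ids):
        h = self.encoder(self.proj_in(self.embed(token_ids)))
        return self.lm_head(self.proj_out(h))

ids = torch.randint(0, VOCAB, (2, 16))
logits = DecoupledLM()(ids)  # shape (2, 16, VOCAB)
```

In this sketch only `embed`, `proj_in`, and `encoder` would be kept for fine-tuning; `proj_out` and `lm_head` are dropped after pre-training, which is why the extra output capacity does not increase the parameter count at the fine-tuning stage.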

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Cross-Lingual Question Answering | MLQA | Decoupled | F1 | 53.1 | #2 |
| Cross-Lingual Question Answering | MLQA | Coupled | EM | 37.3 | #2 |
| Cross-Lingual Question Answering | MLQA | Coupled | F1 | 53.1 | #2 |
| Cross-Lingual NER | NER | Decoupled | F1 | 68.9 | #2 |
| Cross-Lingual NER | NER | Coupled | F1 | 69.2 | #1 |
| Cross-Lingual Paraphrase Identification | PAWS-X | Coupled | Accuracy | 85.3 | #2 |
| Cross-Lingual Paraphrase Identification | PAWS-X | Decoupled | Accuracy | 85.0 | #3 |
| Cross-Lingual Question Answering | TyDiQA-GoldP | Decoupled | EM | 42.8 | #8 |
| Cross-Lingual Question Answering | TyDiQA-GoldP | Decoupled | F1 | 58.1 | #6 |
| Cross-Lingual Natural Language Inference | XNLI | Decoupled | Accuracy | 71.3 | #2 |
| Cross-Lingual Natural Language Inference | XNLI | Coupled | Accuracy | 70.7 | #3 |
| Cross-Lingual Question Answering | XQuAD | Coupled | EM | 46.2 | #3 |
| Cross-Lingual Question Answering | XQuAD | Coupled | F1 | 63.2 | #3 |
| Cross-Lingual Question Answering | XQuAD | Decoupled | EM | 46.9 | #2 |
| Cross-Lingual Question Answering | XQuAD | Decoupled | F1 | 63.8 | #2 |
