Integrating Pre-trained Language Model into Neural Machine Translation

30 Oct 2023 · Soon-Jae Hwang, Chang-Sung Jeong

Neural Machine Translation (NMT) has become a significant technology in natural language processing through extensive research and development. However, the scarcity of high-quality bilingual parallel data still poses a major challenge to improving NMT performance. Recent studies have explored the use of contextual information from pre-trained language models (PLMs) to address this problem, yet the incompatibility between PLMs and NMT models remains unresolved. This study proposes the PLM-integrated NMT (PiNMT) model to overcome these problems. The PiNMT model consists of three critical components, PLM Multi Layer Converter, Embedding Fusion, and Cosine Alignment, each playing a vital role in providing effective PLM information to NMT. Furthermore, two training strategies, Separate Learning Rates and Dual Step Training, are also introduced. By implementing the proposed model and training strategies, we achieve state-of-the-art performance on the IWSLT'14 En$\leftrightarrow$De dataset. These outcomes are noteworthy as they demonstrate a novel approach for efficiently integrating PLMs with NMT to overcome incompatibility and enhance performance.
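A minimal, hypothetical sketch (not from the paper) of how two of the named ideas could look in PyTorch, assuming "Cosine Alignment" denotes a cosine-similarity loss pulling NMT encoder states toward the converted PLM representations, and "Separate Learning Rates" denotes distinct optimizer parameter groups for PLM and NMT parameters; all function names, module names, and learning-rate values below are illustrative placeholders.

```python
# Hypothetical illustration only; not the authors' implementation.
import torch
import torch.nn.functional as F


def cosine_alignment_loss(nmt_states: torch.Tensor, plm_states: torch.Tensor) -> torch.Tensor:
    """Mean (1 - cosine similarity) over positions.

    nmt_states, plm_states: (batch, seq_len, hidden) tensors, assumed to share
    the same hidden size after the PLM Multi Layer Converter.
    """
    cos = F.cosine_similarity(nmt_states, plm_states, dim=-1)  # (batch, seq_len)
    return (1.0 - cos).mean()


def build_optimizer(plm: torch.nn.Module, nmt: torch.nn.Module,
                    plm_lr: float = 1e-5, nmt_lr: float = 5e-4) -> torch.optim.Optimizer:
    """Separate learning rates: a small LR for the pre-trained PLM and a larger
    one for the randomly initialized NMT components (values are placeholders)."""
    return torch.optim.Adam([
        {"params": plm.parameters(), "lr": plm_lr},
        {"params": nmt.parameters(), "lr": nmt_lr},
    ])
```

The sketch only shows the general shape of such components; the paper's actual converter, fusion mechanism, and Dual Step Training schedule are described in the full text.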


Datasets


IWSLT 2014 English-German · IWSLT 2014 German-English
Results

Task                | Dataset                   | Model | Metric     | Value | Global Rank
Machine Translation | IWSLT2014 English-German  | PiNMT | BLEU score | 32.2  | #1
Machine Translation | IWSLT2014 German-English  | PiNMT | BLEU score | 40.43 | #1

Methods


No methods listed for this paper.