Generative Imagination Elevates Machine Translation

NAACL 2021  ·  Quanyu Long, Mingxuan Wang, Lei Li

There are common semantics shared across text and images. Given a sentence in a source language, can depicting the visual scene help translation into a target language? Existing multimodal neural machine translation (MNMT) methods require (source sentence, target sentence, image) triplets for training and (source sentence, image) pairs for inference. In this paper, we propose ImagiT, a novel machine translation method via visual imagination. ImagiT first learns to generate a visual representation from the source sentence, and then utilizes both the source sentence and the "imagined representation" to produce a target translation. Unlike previous methods, it only needs the source sentence at inference time. Experiments demonstrate that ImagiT benefits from visual imagination and significantly outperforms text-only neural machine translation baselines. Further analysis reveals that the imagination process in ImagiT helps fill in missing information when performing the degradation strategy.
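The two-stage pipeline the abstract describes can be illustrated with a toy sketch. This is not the authors' implementation: the real ImagiT uses Transformer encoders and a generative (GAN-style) image-feature module, while here the encoder, the "imagination" projection, and all dimensions are hypothetical stand-ins. The point is only the data flow: at inference, visual features are hallucinated from the text alone, so no image input is required.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (assumptions, not the paper's settings).
D_TEXT, D_IMG, VOCAB = 8, 4, 100

# Toy "text encoder": fixed random token embeddings, mean-pooled.
EMB = rng.normal(size=(VOCAB, D_TEXT))

def encode_text(token_ids):
    """Encode a source sentence as the mean of its token embeddings."""
    return EMB[token_ids].mean(axis=0)

# Toy "imagination" module: a projection from text space into
# visual-feature space (stand-in for ImagiT's generative component).
W_IMAGINE = rng.normal(size=(D_TEXT, D_IMG))

def imagine_visual(text_repr):
    """Hallucinate a visual representation from the text encoding alone."""
    return np.tanh(text_repr @ W_IMAGINE)

def joint_representation(token_ids):
    """Fuse text and imagined visual features; a real decoder would
    generate target-language tokens from this joint representation."""
    t = encode_text(token_ids)
    v = imagine_visual(t)          # no image input needed at inference
    return np.concatenate([t, v])

joint = joint_representation([3, 17, 42])
print(joint.shape)  # (12,)
```

Note that `joint_representation` consumes only source token ids, which mirrors the paper's key property: the (source sentence, image) pair required by prior MNMT methods at inference is replaced by imagination.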

Results

Task: Multimodal Machine Translation
Dataset: Multi30K
Model: ImagiT
BLEU (EN-DE): 38.4 (global rank #7)
Meteor (EN-DE): 55.7 (global rank #6)
