Multimodal machine translation is the task of translating text from one language into another using multiple input modalities, for example translating the English sentence "a bird is flying over water" together with an image of a bird over water into German text.
(Image credit: Findings of the Third Shared Task on Multimodal Machine Translation)
nmtpy has been used for LIUM's top-ranked submissions to the WMT Multimodal Machine Translation and News Translation tasks in 2016 and 2017.
This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge.
In this task, a source sentence in English is supplemented by an image, and participating systems are required to generate a translation of such a sentence into German.
The model leverages a visual attention grounding mechanism that links the visual semantics with the corresponding textual semantics.
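The core idea of attending over visual features with a text-derived query can be sketched as follows. This is a minimal illustrative example, not the paper's actual architecture: the dot-product scoring, the 2-d toy vectors, and the function names are all assumptions made for clarity.

```python
# Minimal sketch of visual attention: a text query vector attends over
# image region features, yielding attention weights and a weighted
# visual context vector. Names and dimensions are illustrative only.
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def visual_attention(text_query, image_regions):
    """Score each image region against the text query (dot product),
    normalize with softmax, and return (weights, context vector)."""
    scores = [sum(q * r for q, r in zip(text_query, region))
              for region in image_regions]
    weights = softmax(scores)
    dim = len(image_regions[0])
    context = [sum(w * region[d] for w, region in zip(weights, image_regions))
               for d in range(dim)]
    return weights, context

# Toy example: a 2-d text query and three image region vectors.
query = [1.0, 0.0]
regions = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
weights, context = visual_attention(query, regions)
```

Here the region most aligned with the query receives the largest weight, so the resulting context vector is dominated by the visually relevant region; in a full model these weights would be learned jointly with the translation objective.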
This paper describes the UMONS solution for the Multimodal Machine Translation Task presented at the Third Conference on Machine Translation (WMT18).
Multimodal machine translation is an attractive application of neural machine translation (NMT).