no code implementations • EAMT 2020 • António Lopes, M. Amin Farajian, Rachel Bawden, Michael Zhang, André F. T. Martins
In this paper we provide a systematic comparison of existing and new document-level neural machine translation solutions.
1 code implementation • WMT (EMNLP) 2020 • M. Amin Farajian, António V. Lopes, André F. T. Martins, Sameen Maruf, Gholamreza Haffari
We report the results of the first edition of the WMT shared task on chat translation.
no code implementations • IWSLT 2016 • M. Amin Farajian, Rajen Chatterjee, Costanza Conforti, Shahab Jalalvand, Vevake Balaraman, Mattia A. Di Gangi, Duygu Ataman, Marco Turchi, Matteo Negri, Marcello Federico
They leverage linguistic information such as lemmas and part-of-speech tags of the source words in the form of additional factors along with the words.
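The factored-input idea described above can be sketched as follows: each source token carries extra linguistic factors (a lemma and a POS tag), each factor has its own embedding table, and the vectors are concatenated into a single input vector. All vocabulary sizes and dimensions below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical sketch of factored NMT input embeddings.
# Each factor (word, lemma, POS) gets its own embedding table;
# the per-factor vectors are concatenated into one input vector.
rng = np.random.default_rng(0)

word_emb = rng.normal(size=(100, 8))   # 100 word types, dim 8
lemma_emb = rng.normal(size=(50, 4))   # 50 lemma types, dim 4
pos_emb = rng.normal(size=(12, 2))     # 12 POS tags, dim 2

def embed_token(word_id, lemma_id, pos_id):
    """Concatenate word, lemma, and POS embeddings (dim 8 + 4 + 2 = 14)."""
    return np.concatenate([word_emb[word_id], lemma_emb[lemma_id], pos_emb[pos_id]])

vec = embed_token(word_id=3, lemma_id=7, pos_id=1)
print(vec.shape)  # (14,)
```

In a full system this concatenated vector would replace the plain word embedding fed to the encoder; the decoder side is unchanged.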
no code implementations • WS 2019 • Fabio Kepler, Jonay Trénous, Marcos Treviso, Miguel Vera, António Góis, M. Amin Farajian, António V. Lopes, André F. T. Martins
We present the contribution of the Unbabel team to the WMT 2019 Shared Task on Quality Estimation.
no code implementations • WS 2019 • António V. Lopes, M. Amin Farajian, Gonçalo M. Correia, Jonay Trenous, André F. T. Martins
Analogously to dual-encoder architectures we develop a BERT-based encoder-decoder (BED) model in which a single pretrained BERT encoder receives both the source (src) and machine translation (tgt) strings.
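One way a single encoder can receive both strings, BERT-style, is to pack them into one sequence separated by special tokens, with segment ids marking which half each token belongs to. The token names follow standard BERT conventions; the function itself is an illustrative sketch, not the paper's implementation.

```python
# Hypothetical sketch: pack source and machine-translation tokens into a
# single BERT-style input sequence with segment ids (0 = source, 1 = MT).
def pack_src_tgt(src_tokens, tgt_tokens):
    tokens = ["[CLS]"] + src_tokens + ["[SEP]"] + tgt_tokens + ["[SEP]"]
    # [CLS] and the first [SEP] belong to the source segment.
    segment_ids = [0] * (len(src_tokens) + 2) + [1] * (len(tgt_tokens) + 1)
    return tokens, segment_ids

tokens, segs = pack_src_tgt(["the", "cat"], ["le", "chat"])
print(tokens)  # ['[CLS]', 'the', 'cat', '[SEP]', 'le', 'chat', '[SEP]']
print(segs)    # [0, 0, 0, 0, 1, 1, 1]
```

The segment ids let the shared encoder distinguish the two halves while still attending across them, which is the point of using one encoder instead of two.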
no code implementations • EACL 2017 • M. Amin Farajian, Marco Turchi, Matteo Negri, Nicola Bertoldi, Marcello Federico
State-of-the-art neural machine translation (NMT) systems are generally trained on specific domains by carefully selecting the training sets and applying proper domain adaptation techniques.
no code implementations • LREC 2016 • Luisa Bentivogli, Mauro Cettolo, M. Amin Farajian, Marcello Federico
This paper presents WAGS (Word Alignment Gold Standard), a novel benchmark which allows extensive evaluation of WA tools on out-of-vocabulary (OOV) and rare words.