no code implementations • JEP/TALN/RECITAL 2022 • Melissa Ailem, Jingshu Liu, Raheel Qader
Empirical results show that our method improves upon related baselines in terms of both BLEU score and the percentage of generated constraint terms.
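The "percentage of generated constraint terms" can be computed as the fraction of required target terms that actually appear in the system output. A minimal sketch of such a metric, assuming simple exact token matching (the paper's exact matching rules are not given here):

```python
def term_recall(hypothesis_tokens, required_terms):
    """Fraction of required target terms found in the output tokens.
    Illustrative exact-match version; real evaluations may need
    lemmatization or subword-aware matching."""
    found = sum(1 for term in required_terms if term in hypothesis_tokens)
    return found / len(required_terms)

hyp = "the interest rate rose".split()
print(term_recall(hyp, ["interest", "rate"]))  # 1.0: both terms generated
```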
no code implementations • WMT (EMNLP) 2021 • Melissa Ailem, Jingshu Liu, Raheel Qader
This paper describes Lingua Custodia’s submission to the WMT21 shared task on machine translation using terminologies.
no code implementations • 25 Apr 2024 • Melissa Ailem, Katerina Marazopoulou, Charlotte Siska, James Bono
The research community often evaluates a model by its average performance across the test prompts of a benchmark.
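A benchmark average can hide large differences in per-prompt behavior, which is the kind of information lost when only the mean is reported. A small sketch with made-up scores (not from the paper):

```python
import statistics

# Hypothetical per-prompt scores for two models on the same benchmark.
model_a = [0.90, 0.10, 0.90, 0.10]  # volatile across prompts
model_b = [0.50, 0.50, 0.50, 0.50]  # consistent across prompts

for name, scores in [("A", model_a), ("B", model_b)]:
    # Identical means, very different per-prompt spread.
    print(f"model {name}: mean={statistics.mean(scores):.2f} "
          f"stdev={statistics.stdev(scores):.2f}")
```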
no code implementations • 3 Nov 2021 • Melissa Ailem, Jingshu Liu, Raheel Qader
This paper describes Lingua Custodia's submission to the WMT21 shared task on machine translation using terminologies.
no code implementations • Findings (ACL) 2021 • Melissa Ailem, Jingshu Liu, Raheel Qader
Intuitively, this encourages the model to learn a copy behavior when it encounters constraint terms.
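One common way to elicit such copy behavior is to annotate the source sentence with the required target terms, so the model sees the constraint at training time and learns to reproduce it. A hedged sketch of source-side annotation (the special-token names and exact scheme are assumptions, not necessarily the authors'):

```python
def annotate_source(tokens, constraints, sep="<sep>", trans="<trans>"):
    """Append each matched source term and its required target term
    to the source sentence, so the model can learn to copy the target
    term into its output. Illustrative helper only."""
    out = list(tokens)
    for src_term, tgt_term in constraints.items():
        if src_term in tokens:
            out += [sep, src_term, trans, tgt_term]
    return out

src = ["the", "interest", "rate", "rose"]
print(" ".join(annotate_source(src, {"interest": "intérêt"})))
# the interest rate rose <sep> interest <trans> intérêt
```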
no code implementations • 19 Aug 2019 • Melissa Ailem, Bo-Wen Zhang, Fei Sha
In this paper, we propose a new decoder where the output summary is generated by conditioning on both the input text and the latent topics of the document.
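Conditioning on both signals can be as simple as feeding the decoder a representation that combines the token features with a document-level topic vector. A minimal sketch, assuming concatenation of the two feature vectors (dimensions and names are illustrative, not the paper's architecture):

```python
def decoder_input(token_embedding, topic_vector):
    """Concatenate token and topic features before the decoder cell,
    so each decoding step sees the document's latent topics."""
    return token_embedding + topic_vector  # list concatenation

emb = [0.2, -0.1, 0.4]    # embedding of the current token
topics = [0.7, 0.1, 0.2]  # document-level topic proportions
x = decoder_input(emb, topics)
print(len(x))  # 6: the decoder sees both word and topic features
```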
no code implementations • 6 Jun 2019 • Yiming Yan, Melissa Ailem, Fei Sha
Classical approaches for approximate inference depend on cleverly designed variational distributions and bounds.
no code implementations • EMNLP 2018 • Melissa Ailem, Bo-Wen Zhang, Aurelien Bellet, Pascal Denis, Fei Sha
Our approach learns textual and visual representations jointly: latent visual factors couple together a skip-gram model for co-occurrence in linguistic data and a generative latent variable model for visual data.