no code implementations • EMNLP 2021 • Eva Hasler, Tobias Domhan, Jonay Trenous, Ke Tran, Bill Byrne, Felix Hieber
Building neural machine translation systems to perform well on a specific target domain is a well-studied problem.
2 code implementations • 12 Jul 2022 • Felix Hieber, Michael Denkowski, Tobias Domhan, Barbara Darques Barros, Celina Dong Ye, Xing Niu, Cuong Hoang, Ke Tran, Benjamin Hsu, Maria Nadejde, Surafel Lakew, Prashant Mathur, Anna Currey, Marcello Federico
When running comparable models, Sockeye 3 is up to 126% faster than other PyTorch implementations on GPUs and up to 292% faster on CPUs.
1 code implementation • NAACL 2022 • Tobias Domhan, Eva Hasler, Ke Tran, Sony Trenous, Bill Byrne, Felix Hieber
Vocabulary selection, or lexical shortlisting, is a well-known technique for reducing the latency of Neural Machine Translation models by constraining the set of allowed output words during inference.
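The core idea can be sketched as masking the decoder's output logits so that only shortlisted target words can be emitted; the function and variable names below are illustrative (real systems build the shortlist from the source sentence, e.g. via lexical translation tables or a learned selection model):

```python
import numpy as np

def shortlist_logits(logits, shortlist_ids):
    """Mask decoder logits so only shortlisted words remain eligible.

    Hypothetical interface; shown only to illustrate lexical shortlisting.
    """
    masked = np.full_like(logits, -np.inf)
    masked[shortlist_ids] = logits[shortlist_ids]
    return masked

# Toy logits over a vocabulary of 8 words.
logits = np.array([0.1, 2.0, -1.0, 0.5, 3.0, 0.0, 1.5, -0.5])
shortlist = [1, 4, 6]  # words allowed for this source sentence
masked = shortlist_logits(logits, shortlist)
best = int(np.argmax(masked))  # argmax is now restricted to the shortlist
```

Because softmax normalization then runs over a few hundred shortlisted words instead of the full vocabulary, the output layer (often the decoding bottleneck) becomes much cheaper.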
no code implementations • EMNLP (spnlp) 2020 • Ke Tran, Ming Tan
Finally, we use an auxiliary parser (AP) to filter the generated utterances.
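One minimal reading of such a filter is a round-trip consistency check: keep a generated utterance only if the auxiliary parser maps it back to the logical form it was generated from. The interface below is entirely hypothetical and stands in for the paper's actual parser:

```python
def filter_utterances(pairs, parser):
    """Keep (utterance, logical_form) pairs the auxiliary parser confirms.

    `parser` is a hypothetical stand-in for the paper's auxiliary parser (AP).
    """
    return [(utt, lf) for utt, lf in pairs if parser(utt) == lf]

def toy_parser(utterance):
    # Toy "parser": treats the quoted entity as the logical form.
    return utterance.split('"')[1] if '"' in utterance else None

pairs = [('show flights to "boston"', "boston"),
         ("show flights", "denver")]
kept = filter_utterances(pairs, toy_parser)
```

The second pair is dropped because the toy parser cannot recover its logical form, mimicking how inconsistent generated utterances would be filtered out.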
1 code implementation • 18 Feb 2020 • Ke Tran
With a single GPU, our approach can obtain a foreign BERT-base model within a day and a foreign BERT-large model within two days.
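A rough sketch of the transfer idea, under the assumption that each foreign word embedding is initialized as a weighted combination of pretrained English embeddings (e.g. from a bilingual dictionary or learned alignment) while the Transformer body is reused; all matrices here are toy stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretrained English embedding matrix: vocab of 6 words, dimension 4 (toy).
en_emb = rng.normal(size=(6, 4))

# Hypothetical sparse alignment weights: row i mixes the English words
# that translate foreign word i. Rows sum to 1.
align = np.array([
    [0.7, 0.3, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0, 0.0],
])

# Initialize foreign embeddings from English ones; the English Transformer
# layers would be kept and the whole model fine-tuned on foreign text.
fr_emb = align @ en_emb
```

Starting from informed embeddings rather than random ones is what makes fine-tuning cheap enough to fit in the reported one-to-two-day budget.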
no code implementations • WS 2019 • Ke Tran, Arianna Bisazza
We investigate whether off-the-shelf deep bidirectional sentence representations trained on a massively multilingual corpus (multilingual BERT) enable the development of an unsupervised universal dependency parser.
no code implementations • ACL 2018 • Ke Tran, Yonatan Bisk
To address both of these issues we introduce a model that simultaneously translates while inducing dependency trees.
1 code implementation • EMNLP 2018 • Ke Tran, Arianna Bisazza, Christof Monz
Recent work has shown that recurrent neural networks (RNNs) can implicitly capture and exploit hierarchical information when trained to solve common natural language processing tasks such as language modeling (Linzen et al., 2016) and neural machine translation (Shi et al., 2016).
1 code implementation • 4 Dec 2017 • Mircea Mironenco, Dana Kianfar, Ke Tran, Evangelos Kanoulas, Efstratios Gavves
In this work we propose a blackbox intervention method for visual dialog models, with the aim of assessing the contribution of individual linguistic or visual components.
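A blackbox intervention of this kind can be sketched as: swap out one input component, rerun the model, and measure how much the output distribution shifts (here with KL divergence). The model and component names are illustrative, not the paper's API:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def intervene(model, inputs, component, replacement):
    """Replace one input component and measure the output shift,
    treating the model strictly as a black box (hypothetical interface)."""
    base = model(inputs)
    patched = dict(inputs, **{component: replacement})
    return kl(base, model(patched))

def toy_model(inputs):
    # Toy visual-dialog "model": the answer distribution depends only on
    # the question wording, ignoring the image entirely.
    if "color" in inputs["question"]:
        return np.array([0.8, 0.1, 0.1])
    return np.array([0.2, 0.3, 0.5])

effect = intervene(toy_model, {"question": "what color is it", "image": None},
                   "question", "how many dogs")
```

A large shift attributes the prediction to the intervened component; here intervening on the image would yield zero effect, exposing that the toy model ignores its visual input.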
2 code implementations • WS 2016 • Ke Tran, Yonatan Bisk, Ashish Vaswani, Daniel Marcu, Kevin Knight
In this work, we present the first results for neuralizing an Unsupervised Hidden Markov Model.
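In an HMM, "neuralizing" means producing the transition and emission distributions from neural networks instead of count tables, while training still maximizes the marginal likelihood via the forward algorithm. A minimal sketch, with plain parameter matrices standing in for the paper's networks:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
K, V = 3, 5  # hidden states (e.g. POS tags) and vocabulary size

# In a neural HMM these scores come from small networks over state and
# word embeddings; random matrices stand in for them here.
trans = softmax(rng.normal(size=(K, K)), axis=1)  # p(z_t | z_{t-1})
emit = softmax(rng.normal(size=(K, V)), axis=1)   # p(x_t | z_t)
init = softmax(rng.normal(size=K))                # p(z_1)

def log_marginal(obs):
    """Forward algorithm: log p(x_1..x_T), summing over all state paths."""
    alpha = init * emit[:, obs[0]]
    for x in obs[1:]:
        alpha = (alpha @ trans) * emit[:, x]
    return float(np.log(alpha.sum()))

ll = log_marginal([0, 3, 1])
```

Because every step is differentiable, gradients of this log-marginal flow back into whatever networks produce `trans` and `emit`, which is what lets the HMM be trained end to end.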
2 code implementations • NAACL 2016 • Ke Tran, Arianna Bisazza, Christof Monz
In this paper, we propose the Recurrent Memory Network (RMN), a novel RNN architecture that not only amplifies the power of RNNs but also facilitates our understanding of their internal functioning and allows us to discover underlying patterns in data.
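The interpretability comes from a memory block that attends over the most recent input words, so its attention weights show which words the model is relying on at each step. A sketch of that block in isolation, with toy dimensions (the paper composes it with an LSTM):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_block(h, recent_embs):
    """Attend over the last n word embeddings with the hidden state as query.

    Returns a memory summary and the (inspectable) attention weights.
    Illustrative sketch only, not the paper's exact parameterization.
    """
    scores = recent_embs @ h          # one relevance score per recent word
    weights = softmax(scores)
    summary = weights @ recent_embs   # attention-weighted word summary
    return summary, weights

rng = np.random.default_rng(1)
h = rng.normal(size=4)                # current RNN hidden state (toy)
recent = rng.normal(size=(5, 4))      # embeddings of the last 5 words (toy)
summary, weights = memory_block(h, recent)
```

Inspecting `weights` over a corpus is how attention-based architectures of this kind surface patterns such as a language model's reliance on particular preceding words.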