no code implementations • EMNLP 2020 • Pei Zhang, Boxing Chen, Niyu Ge, Kai Fan
In this paper, we extensively study the pros and cons of the standard transformer in document-level translation, and find that its auto-regressive property simultaneously brings both the advantage of consistency and the disadvantage of error accumulation.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Jiayi Wang, Ke Wang, Niyu Ge, Yangbing Shi, Yu Zhao, Kai Fan
With the advent of neural machine translation, there has been a marked shift towards leveraging and consuming machine translation results.
no code implementations • 3 Oct 2019 • Kai Fan, Jiayi Wang, Bo Li, Shiliang Zhang, Boxing Chen, Niyu Ge, Zhijie Yan
The performance of automatic speech recognition (ASR) systems is usually evaluated by the word error rate (WER) metric when manually transcribed data are provided; such transcriptions are, however, expensive to obtain in real scenarios.
Automatic Speech Recognition (ASR) +4
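The WER metric referenced in this abstract is the standard word-level edit distance (substitutions + deletions + insertions) normalized by the reference length. A minimal sketch of how it is typically computed (this is a generic illustration, not the paper's own evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("the cat sat", "the cat sat on")` gives 1/3: one inserted word against a three-word reference.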
no code implementations • ACL 2019 • Pei Zhang, Boxing Chen, Niyu Ge, Kai Fan
Recent advances in sequence modeling have highlighted the strengths of the transformer architecture, especially in achieving state-of-the-art machine translation results.
Automatic Speech Recognition (ASR) +3