Learning Source Phrase Representations for Neural Machine Translation

ACL 2020 · Hongfei Xu, Josef van Genabith, Deyi Xiong, Qiuhui Liu, Jingyi Zhang

The Transformer translation model (Vaswani et al., 2017), based on a multi-head attention mechanism, can be computed effectively in parallel and has significantly pushed forward the performance of Neural Machine Translation (NMT). Though intuitively the attentional network can connect distant words via shorter network paths than RNNs, empirical analysis demonstrates that it still has difficulty in fully capturing long-distance dependencies (Tang et al., 2018)...
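The abstract refers to the multi-head attention mechanism of Vaswani et al. (2017), in which every query position attends to every key position in a single weighted sum. As a rough illustration of that underlying attention computation (not of the paper's phrase-representation method), here is a minimal NumPy sketch of scaled dot-product attention; the function name, shapes, and toy data are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Scaled dot-product attention (Vaswani et al., 2017).

    q, k: arrays of shape (seq_len, d_k); v: array of shape (seq_len, d_v).
    Returns the attended values, shape (seq_len, d_v).
    """
    d_k = q.shape[-1]
    # Attention scores: each query position scores every key position,
    # so distant tokens are connected through one weighted combination.
    scores = q @ k.T / np.sqrt(d_k)
    # Softmax over key positions (numerically stabilized).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy example: a source sentence of 5 tokens with 8-dimensional states,
# used as queries, keys, and values (self-attention).
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (5, 8)
```

In the full multi-head variant, this computation is repeated over several learned projections of the inputs and the results are concatenated, which is what allows the model to be computed in parallel across positions.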



