Search Results for author: Kaixin Wu

Found 2 papers, 1 paper with code

Speeding up Transformer Decoding via an Attention Refinement Network

1 code implementation • COLING 2022 • Kaixin Wu, Yue Zhang, Bojie Hu, Tong Zhang

Extensive experiments on ten WMT machine translation tasks show that the proposed model is on average 1.35x faster (with almost no decrease in BLEU) than the state-of-the-art inference implementation.

Machine Translation • NMT • +1
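
For context, "1.35x faster" is a wall-clock ratio: baseline decoding time divided by the proposed model's decoding time. Below is a minimal, hypothetical sketch of how such a speedup might be measured; the `measure_speedup` helper and the toy stand-in decoders are assumptions for illustration, not code from the paper.

```python
import time

def measure_speedup(baseline_decode, refined_decode, batches):
    """Wall-clock speedup of one decoder over another.

    `baseline_decode` / `refined_decode` are assumed to be callables
    that decode one batch each; neither name comes from the paper.
    """
    def total_time(decode):
        start = time.perf_counter()
        for batch in batches:
            decode(batch)
        return time.perf_counter() - start

    baseline = total_time(baseline_decode)
    refined = total_time(refined_decode)
    # e.g. a return value of 1.35 means the refined decoder is 1.35x faster
    return baseline / refined

if __name__ == "__main__":
    # Toy "decoders" that just sleep, so the ratio is known in advance.
    fast = lambda batch: time.sleep(0.0100)
    slow = lambda batch: time.sleep(0.0135)
    print(f"speedup: {measure_speedup(slow, fast, range(20)):.2f}x")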
