Improving the Robustness of Speech Translation

2 Nov 2018 · Xiang Li, Haiyang Xue, Wei Chen, Yang Liu, Yang Feng, Qun Liu

Although neural machine translation (NMT) has achieved impressive progress recently, it is usually trained on clean parallel data and hence does not work well when the input sentence is the output of an automatic speech recognition (ASR) system, owing to the numerous errors in the source. To solve this problem, we propose a simple but effective method to improve the robustness of NMT in the case of speech translation. We simulate the noise present in realistic ASR output and inject it into the clean parallel data, so that NMT sees similar word distributions during training and testing. In addition, we incorporate a Chinese Pinyin feature, which is easy to obtain in speech translation, to further improve translation performance. Experimental results show that our method delivers more stable performance and outperforms the baseline by an average of 3.12 BLEU on multiple noisy test sets, while even achieving a generalization improvement on the WMT'17 Chinese-English test set.
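The two ideas in the abstract, injecting simulated ASR noise into the clean training source and attaching Pinyin as an extra input feature, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy HOMOPHONES table, the sub_prob parameter, and the helper names are hypothetical, and a realistic setup would derive substitution candidates from ASR error statistics rather than a hand-written dictionary. The sketch uses the third-party pypinyin package for grapheme-to-Pinyin conversion.

```python
# Minimal sketch: Pinyin-based homophone noise injection plus a Pinyin
# input feature. HOMOPHONES and sub_prob are illustrative assumptions,
# not from the paper.
import random
from pypinyin import lazy_pinyin  # third-party: pip install pypinyin

# Hypothetical homophone groups keyed by Pinyin syllable; in practice
# these would be mined from a large vocabulary or from ASR lattices.
HOMOPHONES = {
    "shi": ["是", "事", "市", "试"],
    "ta":  ["他", "她", "它"],
}

def pinyin_features(tokens):
    """Return the Pinyin syllables for each source token (the extra
    input feature the abstract mentions)."""
    return [" ".join(lazy_pinyin(tok)) for tok in tokens]

def inject_asr_noise(tokens, sub_prob=0.1, rng=random):
    """Replace each token with a same-Pinyin homophone with probability
    sub_prob, simulating ASR substitution errors on clean source text."""
    noisy = []
    for tok in tokens:
        syllable = "".join(lazy_pinyin(tok))
        candidates = [w for w in HOMOPHONES.get(syllable, []) if w != tok]
        if candidates and rng.random() < sub_prob:
            noisy.append(rng.choice(candidates))
        else:
            noisy.append(tok)
    return noisy

if __name__ == "__main__":
    src = ["他", "是", "学生"]            # "He is a student"
    print(inject_asr_noise(src, sub_prob=0.5))
    print(pinyin_features(src))           # ['ta', 'shi', 'xue sheng']
```

Training on the noised source paired with the original target, while feeding the Pinyin feature alongside the word embeddings, is the general recipe the abstract describes; the exact noise rates and feature fusion are detailed in the paper itself.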
