Search Results for author: Rongxiang Weng

Found 17 papers, 5 papers with code

Towards Reliable Neural Machine Translation with Consistency-Aware Meta-Learning

no code implementations • 20 Mar 2023 • Rongxiang Weng, Qiang Wang, Wensen Cheng, Changfeng Zhu, Min Zhang

A contributing factor to this problem is that NMT models trained with the one-to-one paradigm struggle to handle the source diversity phenomenon, where inputs with the same meaning can be expressed differently.

Bilevel Optimization • Machine Translation • +4

Learning Decoupled Retrieval Representation for Nearest Neighbour Neural Machine Translation

no code implementations • COLING 2022 • Qiang Wang, Rongxiang Weng, Ming Chen

Generally, kNN-MT borrows the off-the-shelf context representation in the translation task, e.g., the output of the last decoder layer, as the query vector of the retrieval task.
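
As a rough illustration of the kNN-MT setup this snippet refers to (the listing reports no released code, so this is only a generic sketch, not the paper's implementation), the fragment below uses a decoder hidden state as the retrieval query over a datastore of (context representation, target token) pairs; the function names, the neighbour count k, the temperature, and the interpolation weight lam are all illustrative assumptions.

    # Generic kNN-MT-style retrieval sketch (illustrative, not this paper's code).
    import numpy as np

    def build_datastore(decoder_states, target_tokens):
        """Datastore of (context representation, next target token) pairs."""
        keys = np.asarray(decoder_states, dtype=np.float32)  # shape: (N, d)
        values = np.asarray(target_tokens)                    # shape: (N,)
        return keys, values

    def knn_distribution(query, keys, values, vocab_size, k=8, temperature=10.0):
        """Retrieve the k nearest keys to the query and turn them into a token distribution."""
        dists = np.sum((keys - query) ** 2, axis=1)   # squared L2 distance to every key
        idx = np.argsort(dists)[:k]                   # indices of the k nearest neighbours
        weights = np.exp(-dists[idx] / temperature)   # softmax-style weighting over neighbours
        weights /= weights.sum()
        p_knn = np.zeros(vocab_size)
        for token, w in zip(values[idx], weights):
            p_knn[token] += w                         # accumulate mass on retrieved tokens
        return p_knn

    # At decoding time the retrieval distribution is typically interpolated with the
    # NMT model's own prediction: p(y_t) = lam * p_knn(y_t) + (1 - lam) * p_model(y_t).

In vanilla kNN-MT the query is simply the output of the last decoder layer; as its title indicates, the paper above instead learns a retrieval representation decoupled from the translation representation.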

Contrastive Learning • Machine Translation • +2

Learning to Generalize to More: Continuous Semantic Augmentation for Neural Machine Translation

2 code implementations • ACL 2022 • Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Weihua Luo, Jun Xie, Rong Jin

Although data augmentation is widely used to enrich the training data, conventional methods with discrete manipulations fail to generate diverse and faithful training samples.

Data Augmentation • Machine Translation • +3

Uncertainty-Aware Semantic Augmentation for Neural Machine Translation

no code implementations • EMNLP 2020 • Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Luxi Xing, Weihua Luo

As a sequence-to-sequence generation task, neural machine translation (NMT) naturally contains intrinsic uncertainty, where a single sentence in one language has multiple valid counterparts in the other.

Machine Translation • NMT • +3

On Learning Universal Representations Across Languages

no code implementations • ICLR 2021 • Xiangpeng Wei, Rongxiang Weng, Yue Hu, Luxi Xing, Heng Yu, Weihua Luo

Recent studies have demonstrated the overwhelming advantage of cross-lingual pre-trained models (PTMs), such as multilingual BERT and XLM, on cross-lingual NLP tasks.

Contrastive Learning • Cross-Lingual Natural Language Inference • +4

Multiscale Collaborative Deep Models for Neural Machine Translation

1 code implementation • ACL 2020 • Xiangpeng Wei, Heng Yu, Yue Hu, Yue Zhang, Rongxiang Weng, Weihua Luo

Recent evidence reveals that Neural Machine Translation (NMT) models with deeper neural networks can be more effective but are difficult to train.

Machine Translation • NMT • +1

AR: Auto-Repair the Synthetic Data for Neural Machine Translation

no code implementations • 5 Apr 2020 • Shanbo Cheng, Shaohui Kuang, Rongxiang Weng, Heng Yu, Changfeng Zhu, Weihua Luo

Compared with using only limited authentic parallel data as the training corpus, many studies have shown that incorporating synthetic parallel data, generated by back-translation (BT) or forward translation (FT, also known as self-training), into the NMT training process can significantly improve translation quality.
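
As a rough sketch of the back-translation / forward-translation data pipeline this snippet describes (a generic illustration under assumed names, not the paper's auto-repair method; the `translate` call on a model object is an assumption):

    # Illustrative BT / FT synthetic-data pipeline (generic sketch, not the paper's AR method).
    def back_translate(target_monolingual, reverse_model):
        """BT: translate target-side monolingual sentences back into the source language."""
        return [(reverse_model.translate(t), t) for t in target_monolingual]

    def forward_translate(source_monolingual, forward_model):
        """FT / self-training: translate source-side monolingual sentences into the target language."""
        return [(s, forward_model.translate(s)) for s in source_monolingual]

    def build_training_corpus(authentic_pairs, synthetic_pairs):
        """Mix the limited authentic parallel data with the synthetic pairs for NMT training."""
        return authentic_pairs + synthetic_pairs

The paper's contribution, per its title, is to repair such synthetic pairs before training rather than using them as-is.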

Machine Translation • NMT • +2

GRET: Global Representation Enhanced Transformer

no code implementations • 24 Feb 2020 • Rongxiang Weng, Hao-Ran Wei, Shu-Jian Huang, Heng Yu, Lidong Bing, Weihua Luo, Jia-Jun Chen

The encoder maps the words in the input sentence into a sequence of hidden states, which are then fed into the decoder to generate the output sentence.

Machine Translation • Sentence • +3

Acquiring Knowledge from Pre-trained Model to Neural Machine Translation

no code implementations • 4 Dec 2019 • Rongxiang Weng, Heng Yu, Shu-Jian Huang, Shanbo Cheng, Weihua Luo

The standard paradigm of exploiting them includes two steps: first, pre-training a model, e.g. BERT, on large-scale unlabeled monolingual data.

General Knowledge • Knowledge Distillation • +3

Improving Neural Machine Translation with Pre-trained Representation

no code implementations • 21 Aug 2019 • Rongxiang Weng, Heng Yu, Shu-Jian Huang, Weihua Luo, Jia-Jun Chen

Then, we design a framework for integrating both source and target sentence-level representations into the NMT model to improve translation quality.

Machine Translation • NMT • +3

Correct-and-Memorize: Learning to Translate from Interactive Revisions

no code implementations • 8 Jul 2019 • Rongxiang Weng, Hao Zhou, Shu-Jian Huang, Lei Li, Yifan Xia, Jia-Jun Chen

Experiments in both ideal and real interactive translation settings demonstrate that our proposed method significantly improves machine translation results while requiring fewer revision instructions from humans than previous methods.

Machine Translation • Translation

Learning to Discriminate Noises for Incorporating External Information in Neural Machine Translation

no code implementations • 24 Oct 2018 • Zaixiang Zheng, Shu-Jian Huang, Zewei Sun, Rongxiang Weng, Xin-yu Dai, Jia-Jun Chen

Previous studies show that incorporating external information could improve the translation quality of Neural Machine Translation (NMT) systems.

Machine Translation • NMT • +2

Neural Machine Translation with Word Predictions

no code implementations • EMNLP 2017 • Rongxiang Weng, Shu-Jian Huang, Zaixiang Zheng, Xin-yu Dai, Jia-Jun Chen

In the encoder-decoder architecture for neural machine translation (NMT), the hidden states of the recurrent structures in the encoder and decoder carry the crucial information about the sentence. These vectors are generated by parameters which are updated by back-propagation of translation errors through time.

Machine Translation • NMT • +2
