Search Results for author: Xiangpeng Wei

Found 15 papers, 6 papers with code

Bridging the Domain Gaps in Context Representations for k-Nearest Neighbor Neural Machine Translation

1 code implementation 26 May 2023 Zhiwei Cao, Baosong Yang, Huan Lin, Suhang Wu, Xiangpeng Wei, Dayiheng Liu, Jun Xie, Min Zhang, Jinsong Su

$k$-Nearest neighbor machine translation ($k$NN-MT) has attracted increasing attention due to its ability to non-parametrically adapt to new translation domains.

Domain Adaptation, Machine Translation, +3
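
The mechanism behind kNN-MT is compact enough to sketch. The snippet below is a minimal illustration (NumPy; the names keys, values, and knn_probs plus all sizes are made up for the example, and this is not the authors' released code): at each decoding step, the current context representation queries a datastore of (context vector, target token) pairs, and the retrieved distribution is interpolated with the model's own prediction.

    import numpy as np

    # Toy datastore of (context representation, target-token id) pairs,
    # normally built by running a trained NMT model over parallel data.
    keys = np.random.randn(1000, 64).astype(np.float32)
    values = np.random.randint(0, 32000, size=1000)

    def knn_probs(query, k=8, temperature=10.0, vocab_size=32000):
        dists = np.linalg.norm(keys - query, axis=1)   # distance to every stored key
        idx = np.argsort(dists)[:k]                    # the k nearest neighbors
        weights = np.exp(-dists[idx] / temperature)    # closer entries weigh more
        weights /= weights.sum()
        probs = np.zeros(vocab_size)
        for w, v in zip(weights, values[idx]):
            probs[v] += w                              # aggregate weight per token
        return probs

    def interpolate(nmt_probs, query, lam=0.5):
        # Mix the parametric model with the non-parametric retrieval.
        return (1 - lam) * nmt_probs + lam * knn_probs(query)

The paper itself targets the domain gap in those context representations; this sketch only shows the base retrieve-and-interpolate mechanism.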

From Statistical Methods to Deep Learning, Automatic Keyphrase Prediction: A Survey

no code implementations 4 May 2023 Binbin Xie, Jia Song, Liangying Shao, Suhang Wu, Xiangpeng Wei, Baosong Yang, Huan Lin, Jun Xie, Jinsong Su

In this paper, we comprehensively summarize representative studies from the perspectives of dominant models, datasets and evaluation metrics.

WR-ONE2SET: Towards Well-Calibrated Keyphrase Generation

1 code implementation 13 Nov 2022 Binbin Xie, Xiangpeng Wei, Baosong Yang, Huan Lin, Jun Xie, Xiaoli Wang, Min Zhang, Jinsong Su

Keyphrase generation aims to automatically generate short phrases summarizing an input document.

Keyphrase Generation

SUN: Exploring Intrinsic Uncertainties in Text-to-SQL Parsers

1 code implementation COLING 2022 Bowen Qin, Lihan Wang, Binyuan Hui, Bowen Li, Xiangpeng Wei, Binhua Li, Fei Huang, Luo Si, Min Yang, Yongbin Li

To improve the generalizability and stability of neural text-to-SQL parsers, we propose a model uncertainty constraint to refine the query representations by enforcing the output representations of different perturbed encoding networks to be consistent with each other.

SQL Parsing, Text-To-SQL
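
The consistency constraint described above lends itself to a short sketch. The following is an illustrative PyTorch version (the function name consistency_loss and the choice of dropout as the perturbation are assumptions, not necessarily the paper's exact formulation): the same query is encoded twice under different perturbations, and a symmetric KL term penalizes disagreement between the two outputs.

    import torch.nn.functional as F

    def consistency_loss(encoder, inputs):
        # Keep dropout active so each forward pass samples a different
        # perturbed sub-network of the same encoder.
        encoder.train()
        p = F.log_softmax(encoder(inputs), dim=-1)
        q = F.log_softmax(encoder(inputs), dim=-1)
        # Symmetric KL pulls the two output distributions together.
        kl_pq = F.kl_div(p, q.exp(), reduction="batchmean")
        kl_qp = F.kl_div(q, p.exp(), reduction="batchmean")
        return 0.5 * (kl_pq + kl_qp)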

Learning to Generalize to More: Continuous Semantic Augmentation for Neural Machine Translation

2 code implementations ACL 2022 Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Weihua Luo, Jun Xie, Rong Jin

Although data augmentation is widely used to enrich the training data, conventional methods with discrete manipulations fail to generate diverse and faithful training samples.

Data Augmentation, Machine Translation, +3
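
"Continuous" here means augmenting in representation space rather than editing tokens. A hedged sketch of the idea (PyTorch; continuous_augment and the interpolation-plus-noise sampler are illustrative assumptions, not the paper's exact scheme): draw new training vectors from the neighborhood spanned by an aligned source/target representation pair.

    import torch

    def continuous_augment(src_vec, tgt_vec, n_samples=4, radius=0.1):
        # Interpolate between the two observed representations, then
        # jitter inside a small Gaussian ball around the mixture.
        samples = []
        for _ in range(n_samples):
            lam = torch.rand(1).item()
            mixed = lam * src_vec + (1 - lam) * tgt_vec
            samples.append(mixed + radius * torch.randn_like(mixed))
        return samples

Each sampled vector plays the role of a paraphrase-like training instance that discrete edits (swap, drop, mask) would struggle to produce.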

Know Deeper: Knowledge-Conversation Cyclic Utilization Mechanism for Open-domain Dialogue Generation

no code implementations 16 Jul 2021 Yajing Sun, Yue Hu, Luxi Xing, Yuqiang Xie, Xiangpeng Wei

End-to-end intelligent neural dialogue systems suffer from generating inconsistent and repetitive responses.

Dialogue Generation, Response Generation

Bi-directional Cognitive Thinking Network for Machine Reading Comprehension

no code implementations COLING 2020 Wei Peng, Yue Hu, Luxi Xing, Yuqiang Xie, Jing Yu, Yajing Sun, Xiangpeng Wei

We propose a novel Bi-directional Cognitive Knowledge Framework (BCKF) for reading comprehension from the perspective of complementary learning systems theory.

Machine Reading Comprehension

Uncertainty-Aware Semantic Augmentation for Neural Machine Translation

no code implementations EMNLP 2020 Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Luxi Xing, Weihua Luo

As a sequence-to-sequence generation task, neural machine translation (NMT) naturally contains intrinsic uncertainty, where a single sentence in one language has multiple valid counterparts in the other.

Machine Translation, NMT, +3

On Learning Universal Representations Across Languages

no code implementations ICLR 2021 Xiangpeng Wei, Rongxiang Weng, Yue Hu, Luxi Xing, Heng Yu, Weihua Luo

Recent studies have demonstrated the overwhelming advantage of cross-lingual pre-trained models (PTMs), such as multilingual BERT and XLM, on cross-lingual NLP tasks.

Contrastive Learning, Cross-Lingual Natural Language Inference, +4
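
The Contrastive Learning tag points at the core training signal: a sentence and its translation should map to nearby representations while unrelated sentences are pushed apart. A minimal sketch of such an objective (PyTorch; an InfoNCE-style in-batch loss assumed here for illustration, not taken verbatim from the paper):

    import torch
    import torch.nn.functional as F

    def cross_lingual_contrastive_loss(src_emb, tgt_emb, temperature=0.07):
        # A sentence and its translation form the positive pair; every
        # other sentence in the batch serves as an in-batch negative.
        src = F.normalize(src_emb, dim=-1)
        tgt = F.normalize(tgt_emb, dim=-1)
        logits = src @ tgt.t() / temperature   # pairwise cosine similarities
        labels = torch.arange(src.size(0))     # i-th source matches i-th target
        return F.cross_entropy(logits, labels)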

Multiscale Collaborative Deep Models for Neural Machine Translation

1 code implementation ACL 2020 Xiangpeng Wei, Heng Yu, Yue Hu, Yue Zhang, Rongxiang Weng, Weihua Luo

Recent evidence reveals that Neural Machine Translation (NMT) models with deeper neural networks can be more effective but are difficult to train.

Machine Translation, NMT, +1

Unsupervised Neural Machine Translation with Future Rewarding

no code implementations CoNLL 2019 Xiangpeng Wei, Yue Hu, Luxi Xing, Li Gao

In this paper, we alleviate the local optimality of back-translation by learning a policy (which takes the form of an encoder-decoder and is defined by its parameters) with future rewarding under the reinforcement learning framework, aiming to optimize the global word predictions for unsupervised neural machine translation.

Machine Translation, NMT, +3
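
The policy-gradient training described above reduces to a short update rule. A hedged sketch (PyTorch; reinforce_loss and the mean baseline are illustrative, and rewards stands in for whatever sequence-level signal, such as a round-trip reconstruction score, the method actually optimizes):

    import torch

    def reinforce_loss(log_probs, rewards):
        # log_probs: (batch,) summed log-probability of each sampled translation
        # rewards:   (batch,) sequence-level reward per sample
        baseline = rewards.mean()                  # simple variance-reduction baseline
        advantage = (rewards - baseline).detach()  # no gradient through the reward
        # Scale each sample's log-likelihood by its advantage so that
        # sequences with globally better word predictions are reinforced.
        return -(advantage * log_probs).mean()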
