1 code implementation • 16 Oct 2023 • Weixiao Zhou, Gengyao Li, Xianfu Cheng, Xinnian Liang, Junnan Zhu, Feifei Zhai, Zhoujun Li
Specifically, we first conduct domain-aware pre-training using large-scale multi-scenario multi-domain dialogue data to enhance the adaptability of our pre-trained model.
no code implementations • 6 Dec 2022 • Yang Zhao, Junnan Zhu, Lu Xiang, Jiajun Zhang, Yu Zhou, Feifei Zhai, Chengqing Zong
To alleviate catastrophic forgetting (CF), we investigate knowledge-distillation-based lifelong learning methods.
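The core of distillation-based lifelong learning is to train the new model on new data while keeping its predictions close to those of the frozen previous model. A minimal pure-Python sketch of that combined objective (the temperature, weighting, and variable names here are illustrative assumptions, not the authors' exact formulation):

```python
import math

def softmax(logits, T=1.0):
    """Softmax with temperature T (higher T -> softer distribution)."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, target, T=2.0, alpha=0.5):
    """Blend cross-entropy on the gold label (new-task signal) with a
    KL term that keeps the student close to the frozen teacher, i.e.
    the previous model (old-task signal)."""
    # Cross-entropy on the gold label.
    p_student = softmax(student_logits)
    ce = -math.log(p_student[target])
    # KL(teacher || student) at temperature T, scaled by T^2 as is
    # conventional so gradients stay comparable across temperatures.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kd = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s)) * T * T
    return alpha * ce + (1 - alpha) * kd
```

When the student and teacher agree exactly, the KL term vanishes and only the supervised loss remains, so the old model acts purely as a regularizer.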
no code implementations • 26 Nov 2019 • Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Yang Liu
The lack of alignment in NMT models leads to three problems: it is hard to (1) interpret the translation process, (2) impose lexical constraints, and (3) impose structural constraints.
no code implementations • ACL 2019 • Yining Wang, Long Zhou, Jiajun Zhang, Feifei Zhai, Jingfang Xu, Chengqing Zong
We verify our methods on various translation scenarios, including one-to-many, many-to-many and zero-shot.
3 code implementations • EMNLP 2018 • Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, Yang Liu
Although the Transformer translation model (Vaswani et al., 2017) has achieved state-of-the-art performance in a variety of translation tasks, how to use document-level context to deal with discourse phenomena problematic for Transformer still remains a challenge.
no code implementations • EMNLP 2018 • Yining Wang, Jiajun Zhang, Feifei Zhai, Jingfang Xu, Chengqing Zong
However, previous studies show that one-to-many translation under this framework cannot perform on par with individually trained models.
1 code implementation • 15 Jan 2017 • Feifei Zhai, Saloni Potdar, Bing Xiang, Bowen Zhou
Many natural language understanding (NLU) tasks, such as shallow parsing (i.e., text chunking) and semantic slot filling, require the assignment of representative labels to the meaningful chunks in a sentence.
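In these tasks, per-token predictions are typically encoded in a BIO scheme and then decoded into labeled chunks. A small decoder sketch for that output format (the example tokens, tags, and slot name are hypothetical, not from the paper):

```python
def bio_to_chunks(tokens, tags):
    """Decode parallel lists of tokens and BIO tags into
    (label, chunk_text) spans, the chunk-level output that
    chunking and slot-filling models are trained to produce."""
    chunks, cur_label, cur_toks = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur_label:  # close any chunk already open
                chunks.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = tag[2:], [tok]
        elif tag.startswith("I-") and cur_label == tag[2:]:
            cur_toks.append(tok)  # continue the current chunk
        else:  # "O" or an inconsistent I- tag closes any open chunk
            if cur_label:
                chunks.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = None, []
    if cur_label:
        chunks.append((cur_label, " ".join(cur_toks)))
    return chunks
```

For example, `bio_to_chunks(["fly", "to", "New", "York"], ["O", "O", "B-dest", "I-dest"])` yields `[("dest", "New York")]`.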
7 code implementations • 14 Nov 2016 • Ramesh Nallapati, Feifei Zhai, Bowen Zhou
We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents and show that it achieves performance better than or comparable to state-of-the-art.
Ranked #8 on Text Summarization on CNN / Daily Mail (Anonymized)
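SummaRuNNer makes a binary keep/skip decision for each sentence by scoring terms such as content richness, salience with respect to a document representation, and novelty relative to the summary built so far. A toy pure-Python sketch of that scoring decomposition (the vectors, weights, and function names are hypothetical stand-ins, not the trained model):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def score_sentence(h, doc, summary_so_far, w_content, w_salience, w_novelty, bias):
    """Toy sentence score in the spirit of SummaRuNNer:
    content richness + salience w.r.t. the document vector
    - a novelty penalty against the running summary state."""
    content = dot(w_content, h)
    salience = w_salience * dot(h, doc)
    # Penalize overlap with what the summary already covers.
    novelty = -w_novelty * dot(h, [math.tanh(s) for s in summary_so_far])
    return sigmoid(content + salience + novelty + bias)
```

Sentences whose representation overlaps with the running summary receive a lower probability, which is what pushes the extractor toward non-redundant selections.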
no code implementations • TACL 2013 • Feifei Zhai, Jiajun Zhang, Yu Zhou, Chengqing Zong
In current research, most tree-based translation models are built directly from parse trees.