1 code implementation • EMNLP 2021 • Xinwei Geng, Xiaocheng Feng, Bing Qin
To keep the data distribution consistent with iterative decoding, an iterative training strategy is employed to further improve the rewriting capacity.
1 code implementation • dialdoc (ACL) 2022 • Xiachong Feng, Xiaocheng Feng, Bing Qin
Dialogue summarization, which helps users capture salient information from various types of dialogues, has received much attention recently.
no code implementations • Findings (ACL) 2022 • Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Jiaming Wu, Heng Gong, Bing Qin
Weighted decoding methods composed of the pretrained language model (LM) and the controller have achieved promising results for controllable text generation.
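As a toy illustration (not the paper's actual model), weighted decoding can be sketched as adding a controller's per-token scores to the base LM's logits before sampling; the `weight` hyperparameter and the 4-token vocabulary below are invented for the example:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def weighted_decode_step(lm_logits, controller_logits, weight=2.0):
    """Combine base LM logits with controller logits for one decoding step."""
    return softmax(lm_logits + weight * controller_logits)

# Toy vocabulary of 4 tokens; the controller boosts token 2.
lm_logits = np.array([1.0, 0.5, 0.2, 0.1])
controller_logits = np.array([0.0, 0.0, 3.0, 0.0])

base = softmax(lm_logits)
controlled = weighted_decode_step(lm_logits, controller_logits)
print(base.argmax())        # 0: the LM alone prefers token 0
print(controlled.argmax())  # 2: the controller steers decoding toward token 2
```

The interpolation weight trades off fluency (trusting the LM) against attribute control (trusting the controller).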
no code implementations • 10 Jan 2024 • Yichong Huang, Xiaocheng Feng, Baohang Li, Chengpeng Fu, Wenshuai Huo, Ting Liu, Bing Qin
To align translation-specific understanding with general understanding, we propose a novel translation process, xIoD (Cross-Lingual Interpretation of Difficult Words), which explicitly incorporates the general understanding of content that causes inconsistent interpretations to guide the translation.
no code implementations • 28 Dec 2023 • Liang Zhao, Xiaocheng Feng, Xiachong Feng, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin, Ting Liu
In this survey, we present these advances towards length extrapolation in a unified notation from the perspective of PE.
no code implementations • 22 Dec 2023 • Zhangyin Feng, Runyi Hu, Liangxin Liu, Fan Zhang, Duyu Tang, Yong Dai, Xiaocheng Feng, Jiwei Li, Bing Qin, Shuming Shi
Compared with autoregressive baselines that need to run one thousand times, our model only runs 16 times to generate images of competitive quality with an order of magnitude lower inference latency.
no code implementations • 10 Nov 2023 • Zhangyin Feng, Weitao Ma, Weijiang Yu, Lei Huang, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu
In this paper, we present a review discussing trends in the integration of knowledge and large language models, including a taxonomy of methods, benchmarks, and applications.
1 code implementation • 9 Nov 2023 • Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu
The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP), leading to remarkable advancements in text understanding and generation.
1 code implementation • 8 Oct 2023 • Zhangyin Feng, Xiaocheng Feng, Dezhi Zhao, Maojin Yang, Bing Qin
Large language models augmented with task-relevant documents have demonstrated impressive performance on knowledge-intensive tasks.
no code implementations • 7 Aug 2023 • Xiachong Feng, Xiaocheng Feng, Xiyuan Du, Min-Yen Kan, Bing Qin
However, existing work has focused on training models on centralized data, neglecting real-world scenarios where meeting data are infeasible to collect centrally, due to their sensitive nature.
no code implementations • 28 Jun 2023 • Zhangyin Feng, Yong Dai, Fan Zhang, Duyu Tang, Xiaocheng Feng, Shuangzhi Wu, Bing Qin, Yunbo Cao, Shuming Shi
Traditional multitask learning methods can generally exploit common knowledge only task-wise or language-wise, losing either cross-language or cross-task knowledge.
no code implementations • 26 May 2023 • Zhangyin Feng, Yuchen Ren, Xinmiao Yu, Xiaocheng Feng, Duyu Tang, Shuming Shi, Bing Qin
Diffusion models developed on top of powerful text-to-image generation models like Stable Diffusion achieve remarkable success in visual story generation.
1 code implementation • 25 May 2023 • Yichong Huang, Xiaocheng Feng, Xinwei Geng, Baohang Li, Bing Qin
Multilingual neural machine translation has witnessed remarkable progress in recent years.
no code implementations • 2 May 2023 • Xiachong Feng, Xiaocheng Feng, Bing Qin
Generative agents that simulate human society show tremendous potential for further research and practical applications.
1 code implementation • 7 Apr 2023 • Kun Zhu, Xiaocheng Feng, Xiachong Feng, Yingsheng Wu, Bing Qin
Scientific literature review generation aims to extract and organize important information from an abundant collection of reference papers and produce corresponding reviews; however, generated reviews often lack a clear and logical hierarchy.
no code implementations • 20 Feb 2023 • Weihong Zhong, Mao Zheng, Duyu Tang, Xuan Luo, Heng Gong, Xiaocheng Feng, Bing Qin
Although large-scale video-language pre-training models, which usually build a global alignment between the video and the text, have achieved remarkable progress on various downstream tasks, the idea of adopting fine-grained information during the pre-training stage is not well explored.
no code implementations • 23 Jan 2023 • Xiachong Feng, Xiaocheng Feng, Bing Qin
To mitigate this challenge, we devise a Curriculum Semantic-aware Contrastive Learning strategy (C-SCL), which effectively re-calibrates the subject-dependent EEG representation to the semantic-dependent EEG representation, thus reducing the discrepancy.
1 code implementation • 16 Dec 2022 • Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, Weihong Zhong, Bing Qin
Previous work on controllable text generation has explored the idea of control from the latent space, such as optimizing a representation with attribute-related classifiers or sampling a representation from relevant discrete samples.
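The first idea, optimizing a representation against an attribute-related classifier, can be sketched with a hypothetical logistic classifier over a small latent vector (the dimensions, weights, and step sizes below are illustrative, not the paper's setup):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def optimize_latent(z, w, steps=100, lr=0.1):
    """Move a latent representation toward the attribute via gradient ascent
    on log p(attribute | z) for a logistic classifier with weights w."""
    for _ in range(steps):
        p = sigmoid(w @ z)
        grad = (1.0 - p) * w  # d/dz of log sigmoid(w . z)
        z = z + lr * grad
    return z

rng = np.random.default_rng(0)
w = rng.normal(size=8)        # hypothetical attribute classifier weights
z0 = rng.normal(size=8)       # initial latent representation
z1 = optimize_latent(z0.copy(), w)

print(sigmoid(w @ z1) > sigmoid(w @ z0))  # True: attribute score increases
```

In practice the optimized latent would then be decoded back into text; here only the latent-space optimization step is shown.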
1 code implementation • 6 Oct 2022 • Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, Bing Qin
Multi-aspect controllable text generation is a more challenging and practical task than single-aspect control.
1 code implementation • 3 May 2022 • Yichong Huang, Xiaocheng Feng, Xinwei Geng, Bing Qin
In this paper, we propose a novel training strategy named LSSD (Language-Specific Self-Distillation), which can alleviate the convergence inconsistency and help MNMT models achieve the best performance on each language pair simultaneously.
no code implementations • 24 Feb 2022 • Zhangyin Feng, Duyu Tang, Cong Zhou, Junwei Liao, Shuangzhi Wu, Xiaocheng Feng, Bing Qin, Yunbo Cao, Shuming Shi
(2) how to predict a word via a cloze test without knowing the number of wordpieces in advance?
no code implementations • 7 Jul 2021 • Xiachong Feng, Xiaocheng Feng, Bing Qin
We hope that this first survey of dialogue summarization can provide the community with quick access to and a general picture of this task, and motivate future research.
1 code implementation • ACL 2021 • Xiachong Feng, Xiaocheng Feng, Libo Qin, Bing Qin, Ting Liu
Current dialogue summarization systems usually encode the text with a number of general semantic features (e.g., keywords and topics) to gain more powerful dialogue modeling capabilities.
1 code implementation • 30 Apr 2021 • Yichong Huang, Xiachong Feng, Xiaocheng Feng, Bing Qin
Recently, various neural encoder-decoder models pioneered by the Seq2Seq framework have been proposed to generate more abstractive summaries by learning to map input text to output text.
1 code implementation • 7 Dec 2020 • Xiachong Feng, Xiaocheng Feng, Bing Qin, Xinwei Geng
First, we present a Dialogue Discourse-Aware Meeting Summarizer (DDAMS) to explicitly model the interaction between utterances in a meeting by modeling different discourse relations.
1 code implementation • COLING 2020 • Heng Gong, Yawei Sun, Xiaocheng Feng, Bing Qin, Wei Bi, Xiaojiang Liu, Ting Liu
Although neural table-to-text models have achieved remarkable progress with the help of large-scale datasets, they suffer from an insufficient-learning problem when training data are limited.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Heng Gong, Wei Bi, Xiaocheng Feng, Bing Qin, Xiaojiang Liu, Ting Liu
Neural table-to-text models, which select and order salient data and verbalize them fluently via surface realization, have achieved promising progress.
1 code implementation • CCL 2021 • Xiachong Feng, Xiaocheng Feng, Bing Qin, Ting Liu
In detail, we consider utterance and commonsense knowledge as two different types of data and design a Dialogue Heterogeneous Graph Network (D-HGN) for modeling both information.
1 code implementation • 24 Feb 2020 • Xiaocheng Feng, Yawei Sun, Bing Qin, Heng Gong, Yibo Sun, Wei Bi, Xiaojiang Liu, Ting Liu
In this paper, we focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer and aims to preserve text styles while altering the content.
8 code implementations • Findings of the Association for Computational Linguistics 2020 • Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou
Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks.
Ranked #1 on Code Documentation Generation on CodeSearchNet - Go
no code implementations • IJCNLP 2019 • Shuang Chen, Jinpeng Wang, Xiaocheng Feng, Feng Jiang, Bing Qin, Chin-Yew Lin
Recent neural models for data-to-text generation rely on massive parallel pairs of data and text to learn the writing knowledge.
no code implementations • 12 Sep 2019 • Yibo Sun, Duyu Tang, Nan Duan, Yeyun Gong, Xiaocheng Feng, Bing Qin, Daxin Jiang
Neural semantic parsing has achieved impressive results in recent years, yet its success relies on the availability of large amounts of supervised data.
1 code implementation • IJCNLP 2019 • Heng Gong, Xiaocheng Feng, Bing Qin, Ting Liu
To address the aforementioned problems, we not only model each table cell with respect to other records in the same row, but also enrich the table's representation by modeling each cell in the context of other cells in the same column or of historical (time-dimension) data.
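The row- and column-level contextualization described above can be sketched, purely illustratively (the paper's actual model is neural and more elaborate), by enriching each cell embedding with the mean of its row and the mean of its column:

```python
import numpy as np

def contextualize_cells(table_emb):
    """Enrich each cell embedding with its row context and column context.

    table_emb: array of shape (rows, cols, dim) of per-cell embeddings.
    Returns the concatenation [cell; row_mean; col_mean] per cell.
    """
    row_ctx = table_emb.mean(axis=1, keepdims=True)  # (rows, 1, dim)
    col_ctx = table_emb.mean(axis=0, keepdims=True)  # (1, cols, dim)
    return np.concatenate(
        [table_emb,
         np.broadcast_to(row_ctx, table_emb.shape),
         np.broadcast_to(col_ctx, table_emb.shape)],
        axis=-1,
    )

# A toy 2x3 table with 4-dimensional cell embeddings.
cells = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)
enriched = contextualize_cells(cells)
print(enriched.shape)  # (2, 3, 12)
```

A learned model would replace the simple means with attention over the row, the column, and the time dimension, but the shape of the idea is the same: each cell's representation is conditioned on its row and column neighbors.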
no code implementations • EMNLP 2018 • Xinwei Geng, Xiaocheng Feng, Bing Qin, Ting Liu
Although end-to-end neural machine translation (NMT) has achieved remarkable progress in recent years, the idea of adopting a multi-pass decoding mechanism in conventional NMT is not well explored.
no code implementations • 12 Sep 2018 • Yibo Sun, Duyu Tang, Nan Duan, Jingjing Xu, Xiaocheng Feng, Bing Qin
Results show that our knowledge-aware model outperforms the state-of-the-art approaches.
no code implementations • 12 Sep 2018 • Yibo Sun, Daya Guo, Duyu Tang, Nan Duan, Zhao Yan, Xiaocheng Feng, Bing Qin
Machine reading comprehension (MRC) requires reasoning about both the knowledge involved in a document and knowledge about the world.
no code implementations • ACL 2018 • Yibo Sun, Duyu Tang, Nan Duan, Jianshu ji, Guihong Cao, Xiaocheng Feng, Bing Qin, Ting Liu, Ming Zhou
We present a generative model to map natural language questions into SQL queries.
Ranked #4 on Code Generation on WikiSQL
no code implementations • COLING 2016 • Xiaocheng Feng, Duyu Tang, Bing Qin, Ting Liu
Knowledge base (KB) such as Freebase plays an important role for many natural language processing tasks.
no code implementations • COLING 2016 • Dongxu Zhang, Boliang Zhang, Xiaoman Pan, Xiaocheng Feng, Heng Ji, Weiran Xu
Instead of directly relying on word alignment results, this framework combines the advantages of rule-based and deep learning methods in two steps: first, it generates a high-confidence entity annotation set on the IL side with strict search methods; second, it uses this high-confidence set to weakly supervise model training.
10 code implementations • COLING 2016 • Duyu Tang, Bing Qin, Xiaocheng Feng, Ting Liu
Target-dependent sentiment classification remains a challenge: modeling the semantic relatedness of a target with its context words in a sentence.