Search Results for author: Xiaocheng Feng

Found 42 papers, 19 papers with code

Learning to Rewrite for Non-Autoregressive Neural Machine Translation

1 code implementation EMNLP 2021 Xinwei Geng, Xiaocheng Feng, Bing Qin

To keep the data distribution consistent with iterative decoding, an iterative training strategy is employed to further improve the rewriting capacity.

Machine Translation Translation

MSAMSum: Towards Benchmarking Multi-lingual Dialogue Summarization

1 code implementation dialdoc (ACL) 2022 Xiachong Feng, Xiaocheng Feng, Bing Qin

Dialogue summarization, which helps users capture salient information from various types of dialogues, has received much attention recently.

Benchmarking dialogue summary +1

Improving Controllable Text Generation with Position-Aware Weighted Decoding

no code implementations Findings (ACL) 2022 Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Jiaming Wu, Heng Gong, Bing Qin

Weighted decoding methods composed of the pretrained language model (LM) and the controller have achieved promising results for controllable text generation.

Attribute Language Modelling +2

Aligning Translation-Specific Understanding to General Understanding in Large Language Models

no code implementations 10 Jan 2024 Yichong Huang, Xiaocheng Feng, Baohang Li, Chengpeng Fu, Wenshuai Huo, Ting Liu, Bing Qin

To align translation-specific understanding with general understanding, we propose a novel translation process, xIoD (Cross-Lingual Interpretation of Difficult words), which explicitly incorporates the general understanding of content that incurs inconsistent understanding to guide the translation.

Machine Translation Translation

Length Extrapolation of Transformers: A Survey from the Perspective of Positional Encoding

no code implementations 28 Dec 2023 Liang Zhao, Xiaocheng Feng, Xiachong Feng, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin, Ting Liu

In this survey, we present these advances towards length extrapolation in a unified notation from the perspective of PE.

Position

Emage: Non-Autoregressive Text-to-Image Generation

no code implementations 22 Dec 2023 Zhangyin Feng, Runyi Hu, Liangxin Liu, Fan Zhang, Duyu Tang, Yong Dai, Xiaocheng Feng, Jiwei Li, Bing Qin, Shuming Shi

Compared with autoregressive baselines that need to run one thousand times, our model runs only 16 times to generate images of competitive quality with an order of magnitude lower inference latency.

Denoising Text-to-Image Generation

Trends in Integration of Knowledge and Large Language Models: A Survey and Taxonomy of Methods, Benchmarks, and Applications

no code implementations 10 Nov 2023 Zhangyin Feng, Weitao Ma, Weijiang Yu, Lei Huang, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu

In this paper, we present a review discussing trends in the integration of knowledge and large language models, including a taxonomy of methods, benchmarks, and applications.

knowledge editing Retrieval

A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions

1 code implementation 9 Nov 2023 Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu

The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP), leading to remarkable advancements in text understanding and generation.

Hallucination

Retrieval-Generation Synergy Augmented Large Language Models

1 code implementation 8 Oct 2023 Zhangyin Feng, Xiaocheng Feng, Dezhi Zhao, Maojin Yang, Bing Qin

Large language models augmented with task-relevant documents have demonstrated impressive performance on knowledge-intensive tasks.

Question Answering Retrieval

Adapter-based Selective Knowledge Distillation for Federated Multi-domain Meeting Summarization

no code implementations 7 Aug 2023 Xiachong Feng, Xiaocheng Feng, Xiyuan Du, Min-Yen Kan, Bing Qin

However, existing work has focused on training models on centralized data, neglecting real-world scenarios where meeting data are infeasible to collect centrally, due to their sensitive nature.

Federated Learning Knowledge Distillation +1

SkillNet-X: A Multilingual Multitask Model with Sparsely Activated Skills

no code implementations 28 Jun 2023 Zhangyin Feng, Yong Dai, Fan Zhang, Duyu Tang, Xiaocheng Feng, Shuangzhi Wu, Bing Qin, Yunbo Cao, Shuming Shi

Traditional multitask learning methods can generally only exploit common knowledge task-wise or language-wise, losing either cross-language or cross-task knowledge.

Natural Language Understanding

Improved Visual Story Generation with Adaptive Context Modeling

no code implementations 26 May 2023 Zhangyin Feng, Yuchen Ren, Xinmiao Yu, Xiaocheng Feng, Duyu Tang, Shuming Shi, Bing Qin

Diffusion models developed on top of powerful text-to-image generation models like Stable Diffusion achieve remarkable success in visual story generation.

Story Generation Story Visualization +1

The Role of Summarization in Generative Agents: A Preliminary Perspective

no code implementations 2 May 2023 Xiachong Feng, Xiaocheng Feng, Bing Qin

Generative agents that simulate human society show tremendous potential for further research and practical applications.

Hierarchical Catalogue Generation for Literature Review: A Benchmark

1 code implementation 7 Apr 2023 Kun Zhu, Xiaocheng Feng, Xiachong Feng, Yingsheng Wu, Bing Qin

Scientific literature review generation aims to extract and organize important information from an abundant collection of reference papers and produce corresponding reviews, which currently lack a clear and logical hierarchy.

Informativeness Review Generation

STOA-VLP: Spatial-Temporal Modeling of Object and Action for Video-Language Pre-training

no code implementations 20 Feb 2023 Weihong Zhong, Mao Zheng, Duyu Tang, Xuan Luo, Heng Gong, Xiaocheng Feng, Bing Qin

Although large-scale video-language pre-training models, which usually build a global alignment between the video and the text, have achieved remarkable progress on various downstream tasks, the idea of adopting fine-grained information during the pre-training stage is not well explored.

Language Modelling Object +5

Semantic-aware Contrastive Learning for Electroencephalography-to-Text Generation with Curriculum Learning

no code implementations 23 Jan 2023 Xiachong Feng, Xiaocheng Feng, Bing Qin

To mitigate this challenge, we devise a Curriculum Semantic-aware Contrastive Learning strategy (C-SCL), which effectively re-calibrates the subject-dependent EEG representation to the semantic-dependent EEG representation, thus reducing the discrepancy.

Contrastive Learning EEG +1

Controllable Text Generation via Probability Density Estimation in the Latent Space

1 code implementation 16 Dec 2022 Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, Weihong Zhong, Bing Qin

Previous work on controllable text generation has explored the idea of control from the latent space, such as optimizing a representation with attribute-related classifiers or sampling a representation from relevant discrete samples.

Attribute Density Estimation +1

A Distributional Lens for Multi-Aspect Controllable Text Generation

1 code implementation 6 Oct 2022 Yuxuan Gu, Xiaocheng Feng, Sicheng Ma, Lingyuan Zhang, Heng Gong, Bing Qin

Multi-aspect controllable text generation is a more challenging and practical task than single-aspect control.

Attribute Text Generation

Unifying the Convergences in Multilingual Neural Machine Translation

1 code implementation 3 May 2022 Yichong Huang, Xiaocheng Feng, Xinwei Geng, Bing Qin

In this paper, we propose a novel training strategy named LSSD (Language-Specific Self-Distillation), which can alleviate the convergence inconsistency and help MNMT models achieve the best performance on each language pair simultaneously.

Machine Translation NMT +1

A Survey on Dialogue Summarization: Recent Advances and New Frontiers

no code implementations 7 Jul 2021 Xiachong Feng, Xiaocheng Feng, Bing Qin

We hope that this first survey of dialogue summarization can provide the community with quick access to and a general picture of this task, and motivate future research.

Text Generation

Language Model as an Annotator: Exploring DialoGPT for Dialogue Summarization

1 code implementation ACL 2021 Xiachong Feng, Xiaocheng Feng, Libo Qin, Bing Qin, Ting Liu

Current dialogue summarization systems usually encode the text with a number of general semantic features (e.g., keywords and topics) to gain more powerful dialogue modeling capabilities.

Conversational Response Generation Language Modelling +1

The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey

1 code implementation 30 Apr 2021 Yichong Huang, Xiachong Feng, Xiaocheng Feng, Bing Qin

Recently, various neural encoder-decoder models, pioneered by the Seq2Seq framework, have been proposed to generate more abstractive summaries by learning to map input text to output text.

Abstractive Text Summarization

Dialogue Discourse-Aware Graph Model and Data Augmentation for Meeting Summarization

1 code implementation 7 Dec 2020 Xiachong Feng, Xiaocheng Feng, Bing Qin, Xinwei Geng

First, we present a Dialogue Discourse-Aware Meeting Summarizer (DDAMS) that explicitly models the interaction between utterances in a meeting by modeling different discourse relations.

Data Augmentation Meeting Summarization

TableGPT: Few-shot Table-to-Text Generation with Table Structure Reconstruction and Content Matching

1 code implementation COLING 2020 Heng Gong, Yawei Sun, Xiaocheng Feng, Bing Qin, Wei Bi, Xiaojiang Liu, Ting Liu

Although neural table-to-text models have achieved remarkable progress with the help of large-scale datasets, they suffer from an insufficient-learning problem with limited training data.

Few-Shot Learning Language Modelling +2

Incorporating Commonsense Knowledge into Abstractive Dialogue Summarization via Heterogeneous Graph Networks

1 code implementation CCL 2021 Xiachong Feng, Xiaocheng Feng, Bing Qin, Ting Liu

In detail, we consider utterance and commonsense knowledge as two different types of data and design a Dialogue Heterogeneous Graph Network (D-HGN) for modeling both information.

Abstractive Dialogue Summarization dialogue summary +1

Learning to Select Bi-Aspect Information for Document-Scale Text Content Manipulation

1 code implementation 24 Feb 2020 Xiaocheng Feng, Yawei Sun, Bing Qin, Heng Gong, Yibo Sun, Wei Bi, Xiaojiang Liu, Ting Liu

In this paper, we focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer and aims to preserve text styles while altering the content.

Sentence Style Transfer +2

Neural Semantic Parsing in Low-Resource Settings with Back-Translation and Meta-Learning

no code implementations 12 Sep 2019 Yibo Sun, Duyu Tang, Nan Duan, Yeyun Gong, Xiaocheng Feng, Bing Qin, Daxin Jiang

Neural semantic parsing has achieved impressive results in recent years, yet its success relies on the availability of large amounts of supervised data.

Meta-Learning Semantic Parsing +1

Table-to-Text Generation with Effective Hierarchical Encoder on Three Dimensions (Row, Column and Time)

1 code implementation IJCNLP 2019 Heng Gong, Xiaocheng Feng, Bing Qin, Ting Liu

To address the aforementioned problems, we not only model each table cell considering other records in the same row, but also enrich the table's representation by modeling each table cell in the context of other cells in the same column or with historical (time dimension) data.

Table-to-Text Generation Time Series +1

Adaptive Multi-pass Decoder for Neural Machine Translation

no code implementations EMNLP 2018 Xinwei Geng, Xiaocheng Feng, Bing Qin, Ting Liu

Although end-to-end neural machine translation (NMT) has achieved remarkable progress in recent years, the idea of adopting a multi-pass decoding mechanism in conventional NMT is not well explored.

Machine Translation NMT +2

Knowledge Based Machine Reading Comprehension

no code implementations 12 Sep 2018 Yibo Sun, Daya Guo, Duyu Tang, Nan Duan, Zhao Yan, Xiaocheng Feng, Bing Qin

Machine reading comprehension (MRC) requires reasoning about both the knowledge involved in a document and knowledge about the world.

Machine Reading Comprehension Question Answering +2

Bitext Name Tagging for Cross-lingual Entity Annotation Projection

no code implementations COLING 2016 Dongxu Zhang, Boliang Zhang, Xiaoman Pan, Xiaocheng Feng, Heng Ji, Weiran Xu

Instead of directly relying on word alignment results, this framework combines the advantages of rule-based methods and deep learning methods in two steps: first, it generates a high-confidence entity annotation set on the IL side with strict searching methods; second, it uses this high-confidence set to weakly supervise model training.

named-entity-recognition Named Entity Recognition +2
