1 code implementation • ACL 2022 • Nan Yu, Meishan Zhang, Guohong Fu, Min Zhang
Pre-trained language models (PLMs) have shown great potential in natural language processing (NLP), including rhetorical structure theory (RST) discourse parsing. Current PLMs are obtained by sentence-level pre-training, which differs from the basic processing unit of discourse parsing, i.e., the elementary discourse unit (EDU). To this end, we propose a second-stage EDU-level pre-training approach in this work, which introduces two novel tasks for continually learning effective EDU representations on top of well pre-trained language models. Concretely, the two tasks are (1) next EDU prediction (NEP) and (2) discourse marker prediction (DMP). We take a state-of-the-art transition-based neural parser as our baseline, and adapt it with a light bi-gram EDU modification to effectively exploit the EDU-level pre-trained representations. Experimental results on a benchmark dataset show that our method is highly effective, leading to a 2.1-point improvement in F1-score. All code and pre-trained models will be released publicly to facilitate future studies.
no code implementations • COLING 2022 • Yang Sun, Liangqing Wu, Shuangyong Song, Xiaoguang Yu, Xiaodong He, Guohong Fu
In this work, we investigate the problem of satisfaction states tracking and its effects on CSP in E-commerce service chatbots.
no code implementations • Findings (EMNLP) 2021 • Yang Sun, Nan Yu, Guohong Fu
In this paper, we investigate the importance of discourse structures in handling informative contextual cues and speaker-specific features for ERMC.
Ranked #20 on Emotion Recognition in Conversation on EmoryNLP
1 code implementation • COLING 2022 • Nan Yu, Guohong Fu, Min Zhang
It is believed that speaker interactions are helpful for this task.
Ranked #2 on Discourse Parsing on STAC
1 code implementation • EMNLP 2021 • Ranran Zhen, Rui Wang, Guohong Fu, Chengguo Lv, Meishan Zhang
Opinion Role Labeling (ORL), aiming to identify the key roles of opinion, has received increasing interest.
1 code implementation • 11 Oct 2023 • Yu Zhang, Yue Zhang, Leyang Cui, Guohong Fu
In this work, we propose a novel non-autoregressive text editing method to circumvent the above issues, by modeling the edit process with latent CTC alignments.
1 code implementation • 19 Sep 2023 • Juntao Li, Zecheng Tang, Yuyang Ding, Pinzheng Wang, Pei Guo, Wangjie You, Dan Qiao, Wenliang Chen, Guohong Fu, Qiaoming Zhu, Guodong Zhou, Min Zhang
This report provides the main details to pre-train an analogous model, including pre-training data processing, Bilingual Flan data collection, the empirical observations that inspire our model architecture design, training objectives of different stages, and other enhancement techniques.
1 code implementation • 16 Aug 2023 • Siqi Song, Qi Lv, Lei Geng, Ziqiang Cao, Guohong Fu
In this paper, we propose a retrieval-augmented spelling check framework called RSpell, which searches corresponding domain terms and incorporates them into CSC models.
no code implementations • 26 Oct 2022 • Dexin Kong, Nan Yu, Yun Yuan, Guohong Fu, Chen Gong
In this paper, we investigate the importance of discourse structures in handling utterance interactions and conversation-specific features for ECEC.
Ranked #5 on Causal Emotion Entailment on RECCON
no code implementations • 24 Aug 2022 • Qi Lv, Ziqiang Cao, Wenrui Xie, Derui Wang, Jingwen Wang, Zhiwei Hu, Tangkun Zhang, Ba Yuan, Yuanhang Li, Min Cao, Wenjie Li, Sujian Li, Guohong Fu
Furthermore, based on the similarity between video outlines and textual outlines, we use a large number of articles with chapter headings to pretrain our model.
no code implementations • 22 Aug 2022 • Xu Yan, Chunhui Ai, Ziqiang Cao, Min Cao, Sujian Li, Wenjie Li, Guohong Fu
While the builders of existing image-text retrieval datasets strive to ensure that the caption matches the linked image, they cannot prevent a caption from fitting other images.
1 code implementation • 21 Mar 2022 • Qi Lv, Ziqiang Cao, Lei Geng, Chunhui Ai, Xu Yan, Guohong Fu
However, there is a big gap between real input scenarios and automatically generated corpora.
1 code implementation • COLING 2022 • Yu Zhang, Qingrong Xia, Shilin Zhou, Yong Jiang, Guohong Fu, Min Zhang
Semantic role labeling (SRL) is a fundamental yet challenging task in the NLP community.
Dependency Parsing Semantic Role Labeling (predicted predicates)
no code implementations • COLING 2020 • Tao Liu, Xin Wang, Chengguo Lv, Ranran Zhen, Guohong Fu
Sentence matching aims to identify the special relationship between two sentences, and plays a key role in many natural language processing tasks.
1 code implementation • 12 Oct 2020 • Zhen Wang, Qiansheng Wang, Chengguo Lv, Xue Cao, Guohong Fu
Although stance detection has made great progress in the past few years, it is still facing the problem of unseen targets.
1 code implementation • 10 Oct 2020 • Qiansheng Wang, Yuxin Liu, Chengguo Lv, Zhen Wang, Guohong Fu
Open-domain response generation is the task of generating sensible and informative responses to the source sentence.
1 code implementation • 22 Jul 2019 • Qingrong Xia, Zhenghua Li, Min Zhang, Meishan Zhang, Guohong Fu, Rui Wang, Luo Si
Semantic role labeling (SRL), also known as shallow semantic parsing, is an important yet challenging task in NLP.
1 code implementation • NAACL 2019 • Meishan Zhang, Peili Liang, Guohong Fu
Opinion role labeling (ORL) is an important task for fine-grained opinion mining, which identifies important opinion arguments such as holder and target for a given opinion trigger.
Ranked #1 on Fine-Grained Opinion Analysis on MPQA (using extra training data)
no code implementations • NAACL 2019 • Meishan Zhang, Zhenghua Li, Guohong Fu, Min Zhang
Syntax has been demonstrated to be highly effective in neural machine translation (NMT).
Ranked #8 on Machine Translation on IWSLT2015 English-Vietnamese
1 code implementation • 6 Nov 2018 • Zhuosheng Zhang, Hai Zhao, Kangwei Ling, Jiangtong Li, Zuchao Li, Shexia He, Guohong Fu
Representation learning is the foundation of machine reading comprehension and inference.
1 code implementation • COLING 2018 • Nan Yu, Meishan Zhang, Guohong Fu
Syntax has been a useful source of information for statistical RST discourse parsing.
Ranked #7 on Discourse Parsing on RST-DT
no code implementations • EMNLP 2017 • Meishan Zhang, Yue Zhang, Guohong Fu
Neural networks have shown promising results for relation extraction.
Ranked #1 on Relation Extraction on ACE 2005 (Sentence Encoder metric)
no code implementations • Pattern Recognition Letters 2017 • Fei Li, Meishan Zhang, Bo Tian, Bo Chen, Guohong Fu, Donghong Ji
We evaluate our models on two datasets for recognizing regular and irregular biomedical entities.
no code implementations • 25 Apr 2017 • Liner Yang, Meishan Zhang, Yang Liu, Nan Yu, Maosong Sun, Guohong Fu
While part-of-speech (POS) tagging and dependency parsing are observed to be closely related, existing work on joint modeling with manually crafted feature templates suffers from the feature sparsity and incompleteness problems.
1 code implementation • COLING 2016 • Meishan Zhang, Yue Zhang, Guohong Fu
We investigate the use of neural networks for tweet sarcasm detection, and compare the effects of continuous automatic features with discrete manual features.
no code implementations • 27 Aug 2016 • Fei Li, Meishan Zhang, Guohong Fu, Tao Qian, Donghong Ji
This model divides a sentence or text segment into five parts, namely two target entities and their three contexts.