no code implementations • EMNLP 2021 • Kangli Zi, Shi Wang, Yu Liu, Jicun Li, Yanan Cao, Cungen Cao
Sentence Compression (SC), which aims to shorten sentences while retaining the important words that express the essential meaning, has been studied for many years in many languages, especially English.
no code implementations • EMNLP 2020 • Ruipeng Jia, Yanan Cao, Hengzhu Tang, Fang Fang, Cong Cao, Shi Wang
Sentence-level extractive text summarization is essentially a node classification task in network mining, requiring both informative content and concise representations.
Ranked #1 on Extractive Text Summarization on CNN / Daily Mail
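As a rough illustration of the node-classification view (a generic graph-based baseline under assumed bag-of-words similarity, not this paper's model), sentences can be treated as nodes in a similarity graph and scored by centrality:

```python
from collections import Counter
import math

def sentence_scores(sentences):
    """Score sentences as graph nodes: each node's score is its summed
    cosine similarity to all other nodes (degree centrality) over a
    bag-of-words similarity graph."""
    vecs = [Counter(s.lower().split()) for s in sentences]

    def cos(a, b):
        num = sum(a[w] * b[w] for w in set(a) & set(b))
        den = (math.sqrt(sum(v * v for v in a.values()))
               * math.sqrt(sum(v * v for v in b.values())))
        return num / den if den else 0.0

    n = len(vecs)
    return [sum(cos(vecs[i], vecs[j]) for j in range(n) if j != i)
            for i in range(n)]

def extract_summary(sentences, k=2):
    """Pick the k most central sentences, preserving document order."""
    scores = sentence_scores(sentences)
    top = sorted(range(len(sentences)), key=lambda i: -scores[i])[:k]
    return [sentences[i] for i in sorted(top)]
```

A neural extractive model replaces the hand-built similarity graph and centrality score with learned node representations and a classifier, but the underlying "score each sentence node, keep the top ones" structure is the same.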
no code implementations • EMNLP 2021 • Xingwu Sun, Yanling Cui, Hongyin Tang, Fuzheng Zhang, Beihong Jin, Shi Wang
In this paper, we propose a new ranking model DR-BERT, which improves the Document Retrieval (DR) task by a task-adaptive training process and a Segmented Token Recovery Mechanism (STRM).
no code implementations • 26 Apr 2024 • Ren-xin Zhao, Shi Wang, Yaonan Wang
The Quantum Convolutional Layer (QCL) is considered one of the core components of Quantum Convolutional Neural Networks (QCNNs) due to its efficient data feature extraction capability.
1 code implementation • 20 Feb 2024 • Yujie Shao, Xinrong Yao, Xingwei Qu, Chenghua Lin, Shi Wang, Stephen W. Huang, Ge Zhang, Jie Fu
These models generate creative and fluent metaphorical sentences more frequently when prompted with selected samples from our dataset, demonstrating the value of our corpus for Chinese metaphor research.
no code implementations • 29 Aug 2023 • Qingyue Wang, Liang Ding, Yanan Cao, Zhiliang Tian, Shi Wang, DaCheng Tao, Li Guo
We evaluate our method on both open and closed LLMs, and experiments on a widely used public dataset show that our method generates more consistent responses in long-context conversations.
1 code implementation • NeurIPS 2023 • Ruibin Yuan, Yinghao Ma, Yizhi Li, Ge Zhang, Xingran Chen, Hanzhi Yin, Le Zhuo, Yiqi Liu, Jiawen Huang, Zeyue Tian, Binyue Deng, Ningzhi Wang, Chenghua Lin, Emmanouil Benetos, Anton Ragni, Norbert Gyenge, Roger Dannenberg, Wenhu Chen, Gus Xia, Wei Xue, Si Liu, Shi Wang, Ruibo Liu, Yike Guo, Jie Fu
This is evident in the limited work on deep music representations, the scarcity of large-scale datasets, and the absence of a universal and community-driven benchmark.
no code implementations • 1 Jun 2023 • Qingyue Wang, Liang Ding, Yanan Cao, Yibing Zhan, Zheng Lin, Shi Wang, DaCheng Tao, Li Guo
Zero-shot transfer learning for Dialogue State Tracking (DST) helps to handle a variety of task-oriented dialogue domains without the cost of collecting in-domain data.
no code implementations • 22 May 2023 • Zekun Wang, Ge Zhang, Kexin Yang, Ning Shi, Wangchunshu Zhou, Shaochun Hao, Guangzheng Xiong, Yizhi Li, Mong Yuan Sim, Xiuying Chen, Qingqing Zhu, Zhenzhu Yang, Adam Nik, Qi Liu, Chenghua Lin, Shi Wang, Ruibo Liu, Wenhu Chen, Ke Xu, Dayiheng Liu, Yike Guo, Jie Fu
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP, aimed at addressing limitations in existing frameworks while aligning with the ultimate goals of artificial intelligence.
1 code implementation • 1 Jan 2023 • Ge Zhang, Yizhi Li, Yaoyao Wu, Linyuan Zhang, Chenghua Lin, Jiayi Geng, Shi Wang, Jie Fu
As natural language processing (NLP) for gender bias becomes a significant interdisciplinary topic, the prevalent data-driven techniques such as large-scale language models suffer from data inadequacy and biased corpus, especially for languages with insufficient resources such as Chinese.
no code implementations • 9 Dec 2022 • Shi Wang, Daniel Tang, Luchen Zhang, Huilin Li, Ding Han
Specifically, a personalized PageRank routine is developed to capture the correlations among codes, a bidirectional hierarchical passage encoder is designed to capture the codes' hierarchical representations, and a progressive prediction method is proposed to narrow the semantic search space of prediction.
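The personalized PageRank component can be sketched generically. The following is a minimal power-iteration implementation over an assumed dense adjacency matrix, not the paper's exact routine:

```python
import numpy as np

def personalized_pagerank(adj, personalization, damping=0.85,
                          iters=100, tol=1e-8):
    """Personalized PageRank by power iteration.

    adj[i, j] > 0 means an edge from node i to node j (e.g. two codes
    that co-occur); `personalization` is the restart distribution that
    biases the random walk toward the query node(s).
    """
    n = adj.shape[0]
    # Row-normalize to a transition matrix; rows with no outgoing
    # edges (dangling nodes) fall back to a uniform transition.
    row_sums = adj.sum(axis=1, keepdims=True)
    trans = np.divide(adj, row_sums,
                      out=np.full_like(adj, 1.0 / n, dtype=float),
                      where=row_sums > 0)
    p = personalization / personalization.sum()
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = damping * (scores @ trans) + (1 - damping) * p
        if np.abs(new - scores).sum() < tol:
            scores = new
            break
        scores = new
    return scores
```

Nodes reachable from the personalization mass end up with higher scores, which is what makes the routine useful for capturing correlations relative to a particular query node.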
no code implementations • 9 Dec 2022 • Xunzhu Tang, Rujie Zhu, Tiezhu Sun, Shi Wang
Recently, language representation techniques have achieved strong performance in text classification.
no code implementations • 9 Dec 2022 • Xunzhu Tang, Tiezhu Sun, Rujie Zhu, Shi Wang
Recently, neural language representation models pre-trained on large corpora have been shown to capture rich co-occurrence information, and they can be fine-tuned on downstream tasks to improve performance.
1 code implementation • 5 Nov 2022 • Yizhi Li, Ge Zhang, Bohao Yang, Chenghua Lin, Shi Wang, Anton Ragni, Jie Fu
In addition to verifying the existence of regional bias in LMs, we find that the biases on regional groups can be strongly influenced by the geographical clustering of the groups.
no code implementations • ACL 2022 • Ruipeng Jia, Xingxing Zhang, Yanan Cao, Shi Wang, Zheng Lin, Furu Wei
In zero-shot multilingual extractive text summarization, a model is typically trained on an English summarization dataset and then applied to summarization datasets of other languages.
no code implementations • 28 Jan 2022 • Shi Wang, Dingding Liang, Yang Chen
A photonics-assisted joint communication-radar system is proposed and experimentally demonstrated, by introducing a quadrature phase-shift keying (QPSK)-sliced linearly frequency-modulated (LFM) signal.
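For illustration, a baseband LFM (chirp) waveform, the building block named above (this sketch generates only the generic waveform, not the QPSK-sliced signal or the experimental system), can be produced as:

```python
import numpy as np

def lfm_chirp(f0, bandwidth, duration, fs):
    """Complex baseband linear frequency-modulated (chirp) waveform.

    The instantaneous frequency sweeps linearly from f0 to
    f0 + bandwidth over `duration` seconds, sampled at fs Hz.
    """
    n = int(round(duration * fs))
    t = np.arange(n) / fs
    k = bandwidth / duration                      # chirp rate (Hz/s)
    phase = 2 * np.pi * (f0 * t + 0.5 * k * t**2)
    return t, np.exp(1j * phase)
```

The constant envelope (unit magnitude at every sample) is the property that makes LFM waveforms attractive for radar, while the swept frequency provides range resolution proportional to the bandwidth.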
no code implementations • 4 Sep 2021 • Renxin Zhao, Shi Wang
The rapid development of quantum computing hardware has laid the foundation for realizing Quantum Neural Networks (QNNs).
no code implementations • ACL 2021 • Ruipeng Jia, Yanan Cao, Fang Fang, Yuchen Zhou, Zheng Fang, Yanbing Liu, Shi Wang
In this paper, we conceptualize the single-document extractive summarization as a rebalance problem and present a deep differential amplifier framework.
no code implementations • 28 Mar 2020 • Hengzhu Tang, Yanan Cao, Zhen-Yu Zhang, Jiangxia Cao, Fang Fang, Shi Wang, Pengfei Yin
In this paper, we propose a Hierarchical Inference Network (HIN) to make full use of the abundant information from entity level, sentence level and document level.
Ranked #51 on Relation Extraction on DocRED
no code implementations • EMNLP 2018 • Xingwu Sun, Jing Liu, Yajuan Lyu, Wei He, Yanjun Ma, Shi Wang
(2) The model copies the context words that are far from and irrelevant to the answer, instead of the words that are close and relevant to the answer.