1 code implementation • ACL 2022 • Libo Qin, Qiguang Chen, Tianbao Xie, Qixin Li, Jian-Guang Lou, Wanxiang Che, Min-Yen Kan
Specifically, we employ contrastive learning, leveraging bilingual dictionaries to construct multilingual views of the same utterance, then encourage their representations to be more similar than those of negative example pairs, thereby explicitly aligning representations of similar sentences across languages.
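The idea above can be sketched in a few lines: build a multilingual "view" of an utterance by code-switching words through a bilingual dictionary, then apply a contrastive (InfoNCE-style) loss that pulls the two views together and pushes negatives away. This is a minimal illustration, not the paper's actual model; `code_switch` and `info_nce_loss` are hypothetical helper names, and real encoders would replace the raw vectors.

```python
import numpy as np

def code_switch(tokens, bilingual_dict):
    """Build a multilingual view by swapping words via a bilingual dictionary."""
    return [bilingual_dict.get(t, t) for t in tokens]

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Contrastive (InfoNCE) loss: make the anchor/positive pair more similar
    than the anchor/negative pairs."""
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    a, p, n = normalize(anchor), normalize(positive), normalize(negatives)
    pos_sim = np.dot(a, p) / temperature          # scalar similarity
    neg_sim = (n @ a) / temperature               # (num_negatives,)
    logits = np.concatenate([[pos_sim], neg_sim])
    # cross-entropy with the positive pair as the correct "class" (index 0),
    # computed with a numerically stable log-sum-exp
    m = logits.max()
    return -logits[0] + m + np.log(np.exp(logits - m).sum())
```

In practice both views would be encoded by the same multilingual encoder, and the loss would be averaged over a batch.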
1 code implementation • EMNLP 2021 • Libo Qin, Tianbao Xie, Shijue Huang, Qiguang Chen, Xiao Xu, Wanxiang Che
Consistency identification has achieved remarkable success in open-domain dialogue, where it can be used to prevent inconsistent response generation.
1 code implementation • COLING 2022 • Libo Qin, Qiguang Chen, Tianbao Xie, Qian Liu, Shijue Huang, Wanxiang Che, Zhou Yu
Consistency identification in task-oriented dialog (CI-ToD) usually consists of three subtasks, which aim to identify inconsistencies between the current system response and, respectively, the current user response, the dialog history, and the corresponding knowledge base.
no code implementations • 10 Apr 2024 • Yunlong Feng, Yang Xu, Libo Qin, Yasheng Wang, Wanxiang Che
The framework motivates the model itself to automatically generate rationales on existing datasets.
no code implementations • 7 Apr 2024 • Libo Qin, Qiguang Chen, YuHang Zhou, Zhi Chen, Yinghui Li, Lizi Liao, Min Li, Wanxiang Che, Philip S. Yu
To this end, in this paper, we present a thorough review and provide a unified perspective to summarize the recent progress as well as emerging trends in multilingual large language models (MLLMs) literature.
no code implementations • 18 Feb 2024 • Yinghui Li, Shang Qin, Jingheng Ye, Shirong Ma, Yangning Li, Libo Qin, Xuming Hu, Wenhao Jiang, Hai-Tao Zheng, Philip S. Yu
To promote the CGEC field to better adapt to the era of LLMs, we rethink the roles of LLMs in the CGEC task so that they can be better utilized and explored in CGEC.
no code implementations • 16 Feb 2024 • Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Libo Qin, Xu Wang, Qing Yang, Dongliang Xu, Wanxiang Che
In this paper, we conduct comprehensive experiments on the programming languages used in PoT and find that no single language consistently delivers optimal performance across all tasks and models.
1 code implementation • 6 Feb 2024 • Dechuan Teng, Chunlin Lu, Xiao Xu, Wanxiang Che, Libo Qin
Recently, Profile-based Spoken Language Understanding (SLU) has gained increasing attention; it aims to incorporate various types of supplementary profile information (i.e., Knowledge Graph, User Profile, Context Awareness) to eliminate the prevalent ambiguities in user utterances.
1 code implementation • 31 Dec 2023 • Shijue Huang, Libo Qin, Bingbing Wang, Geng Tu, Ruifeng Xu
The two core challenges for multi-modal intent detection are (1) how to effectively align and fuse different features of modalities and (2) the limited labeled multi-modal intent training data.
1 code implementation • 23 Dec 2023 • Zhangli Lu, Chuqi Lei, Kaili Wang, Libo Qin, Jing Tang, Min Li
DTIAM, for the first time, provides a unified framework for accurate and robust prediction of drug-target interactions, binding affinities, and activation/inhibition mechanisms.
no code implementations • 15 Nov 2023 • Libo Qin, Wenbo Pan, Qiguang Chen, Lizi Liao, Zhou Yu, Yue Zhang, Wanxiang Che, Min Li
End-to-end task-oriented dialogue (EToD) can directly generate responses in an end-to-end fashion without modular training, and has attracted increasing attention.
1 code implementation • 23 Oct 2023 • Libo Qin, Qiguang Chen, Fuxuan Wei, Shijue Huang, Wanxiang Che
The cross-lingual alignment prompting is responsible for aligning representations across different languages, whereas the task-specific solver prompting is used to generate the final chain of thoughts and results for the reasoning task.
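The two-stage prompting described above can be illustrated with simple templates: a first prompt aligns the request into a pivot language, and a second prompt elicits the chain of thought and the final answer. The wording of these templates is a hypothetical sketch, not the paper's exact prompts.

```python
def cross_lingual_alignment_prompt(source_text, source_lang, pivot_lang="English"):
    """Stage 1: align the request across languages by asking the model
    to restate it in a pivot language (template is illustrative)."""
    return (f"Please act as an expert in multilingual understanding. "
            f"Restate the following {source_lang} request in {pivot_lang}:\n"
            f"{source_text}")

def task_solver_prompt(aligned_text):
    """Stage 2: generate the chain of thought and final result
    for the aligned request."""
    return (f"Let's solve the task step by step.\n"
            f"Request: {aligned_text}\n"
            f"Answer:")
```

The output of the first prompt (produced by the language model) would be fed into the second, so the reasoning itself happens in the pivot language.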
no code implementations • 7 Aug 2023 • Shijue Huang, Bingbing Wang, Libo Qin, Qin Zhao, Ruifeng Xu
Few-shot and zero-shot entity linking focus on the tail and emerging entities, which are more challenging but closer to real-world scenarios.
1 code implementation • 14 Jul 2023 • Libo Qin, Shijue Huang, Qiguang Chen, Chenran Cai, Yudi Zhang, Bin Liang, Wanxiang Che, Ruifeng Xu
Multi-modal sarcasm detection has attracted much recent attention.
1 code implementation • 17 May 2023 • Libo Qin, Qiguang Chen, Xiao Xu, Yunlong Feng, Wanxiang Che
Spoken Language Understanding (SLU) is one of the core components of a task-oriented dialogue system, which aims to extract the semantic meaning of user queries (e.g., intents and slots).
no code implementations • 18 Apr 2023 • Yunlong Feng, Bohan Li, Libo Qin, Xiao Xu, Wanxiang Che
Cross-domain text classification aims to adapt models to a target domain that lacks labeled data.
1 code implementation • 13 Apr 2023 • Hao Fei, Shengqiong Wu, Jingye Li, Bobo Li, Fei Li, Libo Qin, Meishan Zhang, Min Zhang, Tat-Seng Chua
Universally modeling all typical information extraction tasks (UIE) with one generative language model (GLM) has been shown to have great potential by recent work, in which various IE predictions are unified into a linearized hierarchical expression under a GLM.
no code implementations • 9 Apr 2023 • Wenbo Pan, Qiguang Chen, Xiao Xu, Wanxiang Che, Libo Qin
Zero-shot dialogue understanding aims to enable dialogue systems to track the user's needs without any training data, and has gained increasing attention.
no code implementations • 5 Jan 2023 • Bo Zheng, Zhouyang Li, Fuxuan Wei, Qiguang Chen, Libo Qin, Wanxiang Che
Multilingual spoken language understanding (SLU) consists of two sub-tasks, namely intent detection and slot filling.
1 code implementation • 15 Nov 2022 • Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang
Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase.
Natural Language Understanding • Out-of-Distribution Generalization
1 code implementation • 18 Apr 2022 • Libo Qin, Qiguang Chen, Tianbao Xie, Qixin Li, Jian-Guang Lou, Wanxiang Che, Min-Yen Kan
We present a Global-Local Contrastive Learning Framework (GL-CLeF) to address this shortcoming.
no code implementations • SIGDIAL (ACL) 2022 • Zhi Chen, Lu Chen, Bei Chen, Libo Qin, Yuncong Liu, Su Zhu, Jian-Guang Lou, Kai Yu
With the development of pre-trained language models, remarkable success has been witnessed in dialogue understanding (DU).
1 code implementation • 22 Dec 2021 • Xiao Xu, Libo Qin, Kaiji Chen, Guoxing Wu, Linlin Li, Wanxiang Che
Current research on spoken language understanding (SLU) is largely limited to a simple setting: plain text-based SLU, which takes the user utterance as input and generates its corresponding semantic frames (e.g., intent and slots).
Ranked #1 on Semantic Frame Parsing on ProSLU
2 code implementations • 6 Dec 2021 • Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Shrivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang
Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.
1 code implementation • 15 Jul 2021 • Liang Xu, Xiaojing Lu, Chenyang Yuan, Xuanwei Zhang, Huilin Xu, Hu Yuan, Guoao Wei, Xiang Pan, Xin Tian, Libo Qin, Hu Hai
While different learning schemes -- fine-tuning, zero-shot, and few-shot learning -- have been widely explored and compared for languages such as English, there has been comparatively little work in Chinese to fairly and comprehensively evaluate and compare these methods, which hinders cumulative progress.
1 code implementation • ACL 2021 • Libo Qin, Fuxuan Wei, Tianbao Xie, Xiao Xu, Wanxiang Che, Ting Liu
Multi-intent SLU can handle multiple intents in an utterance, which has attracted increasing attention.
Ranked #1 on Semantic Frame Parsing on MixATIS (Overall Accuracy metric)
1 code implementation • ACL 2021 • Xiachong Feng, Xiaocheng Feng, Libo Qin, Bing Qin, Ting Liu
Current dialogue summarization systems usually encode the text with a number of general semantic features (e.g., keywords and topics) to gain more powerful dialogue modeling capabilities.
1 code implementation • 4 Mar 2021 • Libo Qin, Tianbao Xie, Wanxiang Che, Ting Liu
Spoken Language Understanding (SLU) aims to extract the semantic frame of user queries, and is a core component of a task-oriented dialog system.
1 code implementation • 24 Dec 2020 • Libo Qin, Zhouyang Li, Wanxiang Che, Minheng Ni, Ting Liu
The dialog context information (contextual information) and the mutual interaction information are two key factors that contribute to the two related tasks.
1 code implementation • 8 Oct 2020 • Libo Qin, Tailu Liu, Wanxiang Che, Bingbing Kang, Sendong Zhao, Ting Liu
Instead of adopting the self-attention mechanism of the vanilla Transformer, we propose a co-interactive module that considers the cross-impact by building a bidirectional connection between the two related tasks.
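The bidirectional connection can be sketched as two cross-attention passes, one in each direction, so that each task's representation attends over the other's. This is a minimal single-head illustration under assumed shapes, not the paper's full module; `co_interactive_layer` is a hypothetical name.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, key_value):
    """Single-head attention from one task's representation over the other's."""
    d = query.shape[-1]
    scores = query @ key_value.T / np.sqrt(d)   # (len_q, len_kv)
    return softmax(scores) @ key_value          # (len_q, d)

def co_interactive_layer(task_a_repr, task_b_repr):
    """Bidirectional connection: each task attends over the other and the
    result is added back residually, in place of plain self-attention."""
    a_updated = task_a_repr + cross_attention(task_a_repr, task_b_repr)
    b_updated = task_b_repr + cross_attention(task_b_repr, task_a_repr)
    return a_updated, b_updated
```

A real implementation would add learned query/key/value projections, multiple heads, and layer normalization around the residual connections.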
1 code implementation • 8 Oct 2020 • Dechuan Teng, Libo Qin, Wanxiang Che, Sendong Zhao, Ting Liu
In this paper, we improve Chinese spoken language understanding (SLU) by injecting word information.
1 code implementation • EMNLP (ACL) 2021 • Wanxiang Che, Yunlong Feng, Libo Qin, Ting Liu
We introduce N-LTP, an open-source neural language technology platform supporting six fundamental Chinese NLP tasks: lexical analysis (Chinese word segmentation, part-of-speech tagging, and named entity recognition), syntactic parsing (dependency parsing), and semantic parsing (semantic dependency parsing and semantic role labeling).
no code implementations • 16 Aug 2020 • Libo Qin, Wanxiang Che, Yangming Li, Minheng Ni, Ting Liu
In a dialog system, dialog act recognition and sentiment classification are two correlated tasks for capturing speakers' intentions, where the dialog act and sentiment indicate the explicit and the implicit intentions, respectively.
1 code implementation • 13 Aug 2020 • Qingkai Min, Libo Qin, Zhiyang Teng, Xiao Liu, Yue Zhang
The dialogue state module is a useful component in a task-oriented dialogue system.
no code implementations • ACL 2020 • Yangming Li, Kaisheng Yao, Libo Qin, Wanxiang Che, Xiaolong Li, Ting Liu
Data-driven approaches using neural networks have achieved promising performances in natural language generation (NLG).
1 code implementation • 11 Jun 2020 • Libo Qin, Minheng Ni, Yue Zhang, Wanxiang Che
Compared with the existing work, our method does not rely on bilingual sentences for training, and requires only one training process for multiple target languages.
no code implementations • 30 Apr 2020 • Libo Qin, Minheng Ni, Yue Zhang, Wanxiang Che, Yangming Li, Ting Liu
Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
1 code implementation • ACL 2020 • Libo Qin, Xiao Xu, Wanxiang Che, Yue Zhang, Ting Liu
However, there has been relatively little research on how to effectively use data from all domains to improve the performance of each domain and also unseen domains.
Ranked #1 on Task-Oriented Dialogue Systems on KVRET
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Libo Qin, Xiao Xu, Wanxiang Che, Ting Liu
Such an interaction layer is applied to each token adaptively, which has the advantage of automatically extracting the relevant intent information, enabling fine-grained intent information integration for token-level slot prediction.
1 code implementation • IJCNLP 2019 • Libo Qin, Yijia Liu, Wanxiang Che, Haoyang Wen, Yangming Li, Ting Liu
Querying the knowledge base (KB) has long been a challenge in the end-to-end task-oriented dialogue system.
Ranked #6 on Task-Oriented Dialogue Systems on KVRET
2 code implementations • IJCNLP 2019 • Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, Ting Liu
In our framework, we adopt a joint model with Stack-Propagation which can directly use the intent information as input for slot filling, thus to capture the intent semantic knowledge.
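The Stack-Propagation idea of feeding intent information directly into slot filling can be sketched as follows: take the (token-level) intent prediction, embed it, and concatenate it with the token's hidden state before the slot classifier. This is an illustrative sketch with assumed shapes and the hypothetical name `stack_propagation_step`, not the paper's exact architecture.

```python
import numpy as np

def stack_propagation_step(token_hidden, intent_logits, intent_embed, slot_weights):
    """Feed the predicted intent directly into slot filling.

    token_hidden:  (T, H) encoder hidden states per token
    intent_logits: (T, I) token-level intent scores
    intent_embed:  (I, E) embedding table for the predicted intents
    slot_weights:  (H + E, S) slot classifier weights
    """
    intent_id = intent_logits.argmax(-1)                       # (T,) hard intent decisions
    intent_feat = intent_embed[intent_id]                      # (T, E) embed predictions
    slot_input = np.concatenate([token_hidden, intent_feat], axis=-1)
    return slot_input @ slot_weights                           # (T, S) slot logits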
Ranked #2 on Intent Detection on SNIPS
no code implementations • COLING 2018 • Haoyang Wen, Yijia Liu, Wanxiang Che, Libo Qin, Ting Liu
Classic pipeline models for task-oriented dialogue systems require explicitly modeling the dialogue states and hand-crafting action spaces to query a domain-specific knowledge base.
Ranked #7 on Task-Oriented Dialogue Systems on KVRET