no code implementations • 26 Mar 2024 • Jian Yang, Hongcheng Guo, Yuwei Yin, Jiaqi Bai, Bing Wang, Jiaheng Liu, Xinnian Liang, Linzheng Chai, Liqun Yang, Zhoujun Li
Our method aims to minimize the representation distance of different languages by regarding the image as a central language.
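The "image as a central language" idea can be illustrated as a simple alignment objective: each language's sentence embedding is pulled toward the shared image embedding, shrinking cross-lingual distance through the image pivot. A minimal sketch, where the function name and the cosine-distance objective are assumptions rather than the paper's exact loss:

```python
import numpy as np

def image_pivot_alignment_loss(image_emb, text_embs):
    """Mean cosine distance between a shared image embedding and
    each language's sentence embedding (hypothetical objective)."""
    img = image_emb / np.linalg.norm(image_emb)
    losses = []
    for emb in text_embs.values():
        txt = emb / np.linalg.norm(emb)
        losses.append(1.0 - float(img @ txt))  # cosine distance
    return sum(losses) / len(losses)
```

Minimizing this loss over all languages simultaneously drives their representations toward the same point, so the image embedding acts as the "central language".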
no code implementations • 18 Feb 2024 • Peng Xing, Yinghui Li, Shirong Ma, Xinnian Liang, Haojing Huang, Yangning Li, Hai-Tao Zheng, Wenhao Jiang, Ying Shen
Chinese Spelling Correction (CSC) aims to detect and correct spelling errors in given sentences.
1 code implementation • 18 Dec 2023 • Bing Wang, Changyu Ren, Jian Yang, Xinnian Liang, Jiaqi Bai, Linzheng Chai, Zhao Yan, Qian-Wen Zhang, Di Yin, Xing Sun, Zhoujun Li
Our framework comprises a core decomposer agent for Text-to-SQL generation with few-shot chain-of-thought reasoning, accompanied by two auxiliary agents that utilize external tools or models to acquire smaller sub-databases and refine erroneous SQL queries.
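The refinement step of the auxiliary agents can be pictured as an execute-check-rewrite loop: run the candidate SQL, and on failure hand the query plus the database error message to a refiner. In this sketch the `refine` callback is a stand-in for the LLM-based refiner agent; all names are illustrative, not the paper's implementation:

```python
import sqlite3

def execute_with_refinement(db_path, sql, refine, max_rounds=3):
    """Run candidate SQL; on error, ask the (hypothetical) refiner
    to rewrite it using the error message, up to max_rounds tries."""
    conn = sqlite3.connect(db_path)
    try:
        for _ in range(max_rounds):
            try:
                return conn.execute(sql).fetchall(), sql
            except sqlite3.Error as err:
                sql = refine(sql, str(err))
        raise RuntimeError("could not produce an executable query")
    finally:
        conn.close()
```

Feeding the concrete error message back to the refiner is what lets a single erroneous query be corrected without regenerating the whole answer.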
1 code implementation • 16 Oct 2023 • Weixiao Zhou, Gengyao Li, Xianfu Cheng, Xinnian Liang, Junnan Zhu, FeiFei Zhai, Zhoujun Li
Specifically, we first conduct domain-aware pre-training using large-scale multi-scenario multi-domain dialogue data to enhance the adaptability of our pre-trained model.
1 code implementation • 27 Jun 2023 • Jiaqi Bai, Zhao Yan, Jian Yang, Xinnian Liang, Hongcheng Guo, Zhoujun Li
We propose Knowledgeable Prefix Tuning (KnowPrefix-Tuning), a two-stage tuning framework, bypassing the retrieval process in a knowledge-grounded conversation system by injecting prior knowledge into the lightweight knowledge prefix.
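Prefix tuning in general prepends trainable key/value vectors to a frozen model's attention, so a knowledge prefix is one place where prior knowledge can be injected without retrieval. A minimal NumPy sketch of single-head attention with a prepended prefix; shapes and names are assumptions, not the paper's architecture:

```python
import numpy as np

def attention_with_prefix(q, k, v, prefix_k, prefix_v):
    """Scaled dot-product attention where learned prefix keys/values
    are concatenated in front of the frozen model's keys/values."""
    k_full = np.concatenate([prefix_k, k], axis=0)
    v_full = np.concatenate([prefix_v, v], axis=0)
    scores = q @ k_full.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over prefix + input
    return weights @ v_full
```

Only `prefix_k` and `prefix_v` would be trained, which is what makes the knowledge prefix "lightweight" relative to full fine-tuning.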
no code implementations • 29 May 2023 • Jiaqi Bai, Hongcheng Guo, Jiaheng Liu, Jian Yang, Xinnian Liang, Zhao Yan, Zhoujun Li
However, the retrieved passages are not ideal for guiding answer generation because of the discrepancy between retrieval and generation, i.e., the candidate passages are all treated equally during retrieval, without considering their potential to generate a proper answer.
1 code implementation • 26 Apr 2023 • Bing Wang, Xinnian Liang, Jian Yang, Hui Huang, Shuangzhi Wu, Peihao Wu, Lu Lu, Zejun Ma, Zhoujun Li
Large Language Models (LLMs) are constrained by their inability to process lengthy inputs, resulting in the loss of critical historical information.
1 code implementation • 23 Mar 2023 • Xinnian Liang, Shuangzhi Wu, Hui Huang, Jiaqi Bai, Chao Bian, Zhoujun Li
Retrieval augmented methods have shown promising results in various classification tasks.
1 code implementation • 20 Mar 2023 • Xinnian Liang, Zefan Zhou, Hui Huang, Shuangzhi Wu, Tong Xiao, Muyun Yang, Zhoujun Li, Chao Bian
We conduct extensive experiments on various Chinese NLP tasks to evaluate existing PLMs as well as the proposed MigBERT.
1 code implementation • 29 Jan 2023 • Xinnian Liang, Shuangzhi Wu, Chenhao Cui, Jiaqi Bai, Chao Bian, Zhoujun Li
The global view aims to identify vital sub-topics in the dialogue, while the local view selects the most important context within each sub-topic.
no code implementations • 24 Aug 2022 • Chenhao Cui, Xinnian Liang, Shuangzhi Wu, Zhoujun Li
The core of ViL-Sum is a joint multi-modal encoder with two well-designed tasks, image reordering and image selection.
1 code implementation • COLING 2022 • Xinnian Liang, Jing Li, Shuangzhi Wu, Jiali Zeng, Yufan Jiang, Mu Li, Zhoujun Li
To tackle this problem, in this paper, we propose an efficient Coarse-to-Fine Facet-Aware Ranking (C2F-FAR) framework for unsupervised long document summarization, which is based on semantic blocks.
1 code implementation • NAACL 2022 • Xinnian Liang, Shuangzhi Wu, Mu Li, Zhoujun Li
In this paper, we propose a novel method to extract multi-granularity features based solely on the original input sentences.
1 code implementation • EMNLP 2021 • Xinnian Liang, Shuangzhi Wu, Mu Li, Zhoujun Li
In terms of the local view, we first build a graph structure based on the document where phrases are regarded as vertices and the edges are similarities between vertices.
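A graph of this shape, phrases as vertices and pairwise similarities as edge weights, can be scored with a PageRank-style iteration. A minimal sketch using Jaccard word overlap as a stand-in similarity (the paper's actual edge weights and ranking differ):

```python
def jaccard(a, b):
    """Word-overlap similarity between two phrases."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def rank_phrases(phrases, damping=0.85, iters=50):
    """PageRank-style scoring on a phrase similarity graph."""
    n = len(phrases)
    w = [[jaccard(phrases[i], phrases[j]) if i != j else 0.0
          for j in range(n)] for i in range(n)]
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            s = 0.0
            for j in range(n):
                out = sum(w[j])  # total outgoing weight of vertex j
                if w[j][i] > 0 and out > 0:
                    s += w[j][i] / out * scores[j]
            new.append((1 - damping) / n + damping * s)
        scores = new
    return sorted(zip(phrases, scores), key=lambda t: -t[1])
```

Phrases that are similar to many other phrases accumulate score through the iteration, while isolated phrases stay near the damping floor.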
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Ze Yang, Wei Wu, Can Xu, Xinnian Liang, Jiaqi Bai, Liran Wang, Wei Wang, Zhoujun Li
Generating responses that follow a desired style has great potential to extend the applications of open-domain dialogue systems, yet it is hindered by the lack of parallel data for training.