no code implementations • EMNLP 2020 • Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang
Open-domain dialogue generation suffers from the data insufficiency problem due to the vast space of potential responses.
no code implementations • 1 Mar 2024 • Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Xu Wang, Qing Yang, Dongliang Xu, Wanxiang Che
Presently, two dominant paradigms for collecting tuning data are natural-instruct (human-written) and self-instruct (automatically generated).
no code implementations • 17 Feb 2024 • Yuzhuang Xu, Xu Han, Zonghan Yang, Shuo Wang, Qingfu Zhu, Zhiyuan Liu, Weidong Liu, Wanxiang Che
Model quantization uses low bit-width values to represent the weight matrices of models, which is a promising approach to reducing both the storage and computational overheads of deploying LLMs.
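As a hedged illustration of the general idea (not the specific scheme proposed in this paper), the sketch below performs per-tensor symmetric quantization of a weight matrix to low bit-width integers; the function names and the per-tensor scaling choice are assumptions made for illustration only.

```python
import numpy as np

def quantize(w: np.ndarray, bits: int = 8):
    """Per-tensor symmetric uniform quantization to `bits`-wide integers."""
    qmax = 2 ** (bits - 1) - 1             # e.g. 127 for 8-bit
    scale = np.abs(w).max() / qmax         # one scale shared by the whole matrix
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale        # int8 storage is valid for bits <= 8

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize(w)
print(np.abs(w - dequantize(q, s)).max())  # small reconstruction error
```

Storing `q` instead of `w` cuts memory roughly 4x relative to float32; more aggressive bit-widths push the savings further at the cost of larger reconstruction error.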
no code implementations • 16 Feb 2024 • Qi Shi, Han Cui, Haofeng Wang, Qingfu Zhu, Wanxiang Che, Ting Liu
Question answering over heterogeneous data requires reasoning over diverse sources of data, which is challenging due to the large scale of information and organic coupling of heterogeneous data.
no code implementations • 16 Feb 2024 • Dingzirui Wang, Longxu Dou, Xuanliang Zhang, Qingfu Zhu, Wanxiang Che
Currently, in-context learning with large language models (LLMs) has become the mainstream approach in text-to-SQL research.
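For readers unfamiliar with the setup, here is a minimal sketch of how such an in-context learning prompt is typically assembled; the schema, demonstrations, and comment format are hypothetical and do not reproduce the paper's actual prompt design.

```python
# Hypothetical few-shot prompt construction for text-to-SQL in-context learning.
demonstrations = [
    ("How many singers are there?", "SELECT COUNT(*) FROM singer;"),
    ("List the names of singers older than 30.",
     "SELECT name FROM singer WHERE age > 30;"),
]
schema = "singer(singer_id, name, age, country)"
question = "Which countries have more than two singers?"

prompt = f"-- Schema: {schema}\n"
for q, sql in demonstrations:
    prompt += f"-- Question: {q}\n{sql}\n"
prompt += f"-- Question: {question}\n"
print(prompt)  # an LLM's completion of this prompt is taken as the predicted SQL
```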
no code implementations • 16 Feb 2024 • Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Libo Qin, Xu Wang, Qing Yang, Dongliang Xu, Wanxiang Che
In this paper, we conduct comprehensive experiments on the programming languages used in PoT and find that no single language consistently delivers optimal performance across all tasks and models.
no code implementations • 16 Feb 2024 • Dingzirui Wang, Longxu Dou, Xuanliang Zhang, Qingfu Zhu, Wanxiang Che
Numerical reasoning is an essential ability for NLP systems to handle numeric information.
no code implementations • 16 Feb 2024 • Xuanliang Zhang, Dingzirui Wang, Longxu Dou, Qingfu Zhu, Wanxiang Che
To reduce the effect of similar but irrelevant entities, our method focuses on unretrieved entities at each hop and considers low-ranked tables via beam search.
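Since the abstract describes the approach only at a high level, the following is a loose sketch of the beam-search idea under stated assumptions: `score` and `candidate_tables` are placeholders, and none of the paper's actual retrieval components are reproduced here.

```python
# Hypothetical multi-hop table retrieval with beam search: keeping a beam of
# candidates per hop lets a low-ranked but relevant table survive an early hop
# where a similar irrelevant table happens to outrank it.
def beam_retrieve(question, candidate_tables, score, hops=2, beam=3):
    beams = [([], 0.0)]               # (retrieved table path, cumulative score)
    for _ in range(hops):
        expanded = []
        for path, s in beams:
            for table in candidate_tables:
                if table not in path:  # do not revisit a table
                    expanded.append((path + [table],
                                     s + score(question, path, table)))
        expanded.sort(key=lambda x: x[1], reverse=True)
        beams = expanded[:beam]       # keep the top-`beam` paths, not only the best
    return beams[0][0]                # highest-scoring multi-hop table path
```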
1 code implementation • 13 Feb 2024 • Xuanliang Zhang, Dingzirui Wang, Longxu Dou, Qingfu Zhu, Wanxiang Che
In this paper, we analyze the mainstream techniques used to improve table reasoning performance in the LLM era, as well as the advantages of LLMs over pre-LLM methods for table reasoning.
no code implementations • 19 Apr 2023 • Bohan Li, Longxu Dou, Yutai Hou, Yunlong Feng, Honglin Mu, Qingfu Zhu, Qinghua Sun, Wanxiang Che
Prompt-based learning has shown considerable promise in reformulating various downstream tasks as cloze problems by combining original input with a predetermined template.
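As a concrete, purely illustrative instance of this reformulation, the snippet below wraps a sentiment input in a cloze template and lets a masked language model fill the slot; the template and verbalizer words are assumptions, not the paper's own design.

```python
from transformers import pipeline

# Cloze reformulation: the original input is combined with a template whose
# [MASK] slot the masked language model fills in.
fill = pipeline("fill-mask", model="bert-base-uncased")
review = "The movie was a waste of two hours."
prompt = f"{review} Overall, it was a [MASK] film."

# Restrict predictions to verbalizer words that map onto class labels.
for cand in fill(prompt, targets=["great", "terrible"]):
    print(cand["token_str"], cand["score"])
```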
no code implementations • 4 Feb 2023 • Bohan Li, Xiao Xu, Xinghao Wang, Yutai Hou, Yunlong Feng, Feng Wang, Xuanliang Zhang, Qingfu Zhu, Wanxiang Che
In contrast, generative methods bring more diversity to the augmented images but may not preserve semantic consistency, thus incorrectly changing the essential semantics of the original image.
no code implementations • 12 Dec 2022 • Qingfu Zhu, Xianzhen Luo, Fang Liu, Cuiyun Gao, Wanxiang Che
Natural language processing for programming aims to use NLP techniques to assist programming.
no code implementations • ACL 2021 • Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang
Generating open-domain conversational responses in a desired style usually suffers from a lack of parallel data in that style.
no code implementations • ACL 2019 • Qingfu Zhu, Lei Cui, Wei-Nan Zhang, Furu Wei, Ting Liu
Dialogue systems are usually built on either generation-based or retrieval-based approaches, yet neither benefits from the advantages of the other.
no code implementations • COLING 2018 • Wei-Nan Zhang, Yiming Cui, Yifa Wang, Qingfu Zhu, Lingzhi Li, Lianqiang Zhou, Ting Liu
Despite the success of existing work on single-turn conversation generation, human conversation is a context-sensitive process once coherence is taken into consideration.
no code implementations • 9 Jan 2017 • Wei-Nan Zhang, Ting Liu, Yifa Wang, Qingfu Zhu
Moreover, the lexical divergence of the responses generated by the 5 personalized models indicates that the proposed two-phase approach achieves good results in modeling human response styles and generating personalized responses for conversational systems.
no code implementations • 19 Aug 2016 • Qingfu Zhu, Wei-Nan Zhang, Lianqiang Zhou, Ting Liu
An obvious drawback of these works is that there is no learnable relationship between the words and the start symbol.