Search Results for author: Qingfu Zhu

Found 18 papers, 1 paper with code

Semi-Instruct: Bridging Natural-Instruct and Self-Instruct for Code Large Language Models

no code implementations • 1 Mar 2024 • Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Xu Wang, Qing Yang, Dongliang Xu, Wanxiang Che

Presently, two dominant paradigms for collecting tuning data are natural-instruct (human-written) and self-instruct (automatically generated).

Program Synthesis

OneBit: Towards Extremely Low-bit Large Language Models

no code implementations • 17 Feb 2024 • Yuzhuang Xu, Xu Han, Zonghan Yang, Shuo Wang, Qingfu Zhu, Zhiyuan Liu, Weidong Liu, Wanxiang Che

Model quantization uses low-bit-width values to represent the weight matrices of models, which is a promising approach to reducing both the storage and computational overheads of deploying highly anticipated LLMs.

Quantization
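The core idea of low-bit weight quantization can be sketched in a few lines. The example below uses a classic 1-bit scheme (per-row signs plus one full-precision scale); it is illustrative only and is not OneBit's actual decomposition, which the abstract above does not detail.

```python
def quantize_row(row):
    """1-bit quantize one weight row: keep only the signs plus a single
    full-precision scale (the mean absolute value). A generic binary-weight
    sketch, not OneBit's exact method."""
    scale = sum(abs(w) for w in row) / len(row)
    signs = [1.0 if w >= 0 else -1.0 for w in row]
    return signs, scale

def dequantize_row(signs, scale):
    """Reconstruct an approximation of the original row."""
    return [s * scale for s in signs]

row = [0.4, -0.2, 0.1, -0.3]
signs, scale = quantize_row(row)      # signs fit in 1 bit each
approx = dequantize_row(signs, scale)
```

Storage drops from one float per weight to one bit per weight plus one float per row, at the cost of reconstruction error.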

Exploring Hybrid Question Answering via Program-based Prompting

no code implementations • 16 Feb 2024 • Qi Shi, Han Cui, Haofeng Wang, Qingfu Zhu, Wanxiang Che, Ting Liu

Question answering over heterogeneous data requires reasoning over diverse sources of data, which is challenging due to the large scale of information and the organic coupling of heterogeneous data.

Code Generation • Question Answering

Improving Demonstration Diversity by Human-Free Fusing for Text-to-SQL

no code implementations • 16 Feb 2024 • Dingzirui Wang, Longxu Dou, Xuanliang Zhang, Qingfu Zhu, Wanxiang Che

Currently, the in-context learning method based on large language models (LLMs) has become the mainstream of text-to-SQL research.

In-Context Learning • Text-To-SQL
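In-context learning for text-to-SQL boils down to assembling demonstration pairs into a prompt for the LLM. A minimal sketch, with a hypothetical prompt format (real demonstration selection and fusion strategies vary):

```python
def build_prompt(demonstrations, schema, question):
    """Assemble a few-shot text-to-SQL prompt from (question, SQL)
    demonstration pairs, ending with the target question for the LLM
    to complete. Hypothetical format for illustration."""
    parts = [f"Question: {q}\nSQL: {sql}" for q, sql in demonstrations]
    parts.append(f"Schema: {schema}\nQuestion: {question}\nSQL:")
    return "\n\n".join(parts)

demos = [("How many singers are there?", "SELECT COUNT(*) FROM singer")]
prompt = build_prompt(demos, "concert(name, year)", "List all concert names.")
```

The paper's contribution concerns how diverse demonstrations are produced; the prompt assembly itself follows this standard pattern.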

MultiPoT: Multilingual Program of Thoughts Harnesses Multiple Programming Languages

no code implementations • 16 Feb 2024 • Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Libo Qin, Xu Wang, Qing Yang, Dongliang Xu, Wanxiang Che

In this paper, we conduct comprehensive experiments on the programming languages used in PoT and find that no single language consistently delivers optimal performance across all tasks and models.
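Program of Thoughts (PoT) has the model emit an executable program whose result is the answer, rather than free-form reasoning. A toy single-language sketch, assuming the common convention that the generated program stores its answer in an `ans` variable (real systems sandbox untrusted generated code):

```python
def run_pot(program: str):
    """Execute a model-generated Program-of-Thoughts snippet and read the
    conventional `ans` variable. Toy sketch: builtins are stripped, but this
    is NOT a real sandbox for untrusted code."""
    scope = {}
    exec(program, {"__builtins__": {}}, scope)
    return scope.get("ans")

# Hypothetical model output for: "A book costs 12 dollars. What do 3 cost?"
generated = "price = 12\nquantity = 3\nans = price * quantity"
answer = run_pot(generated)  # -> 36
```

MultiPoT's finding is that the choice of programming language for such programs matters per task and model, motivating harnessing several languages at once.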

Enhancing Numerical Reasoning with the Guidance of Reliable Reasoning Processes

no code implementations • 16 Feb 2024 • Dingzirui Wang, Longxu Dou, Xuanliang Zhang, Qingfu Zhu, Wanxiang Che

Numerical reasoning is an essential ability for NLP systems to handle numeric information.

Multi-Hop Table Retrieval for Open-Domain Text-to-SQL

no code implementations • 16 Feb 2024 • Xuanliang Zhang, Dingzirui Wang, Longxu Dou, Qingfu Zhu, Wanxiang Che

To reduce the effect of similar irrelevant entities, our method focuses on unretrieved entities at each hop and considers low-ranked tables by beam search.

Table Retrieval • Text-To-SQL
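The multi-hop idea can be illustrated with a toy retriever: score tables against the query, keep a beam of top candidates, then expand the query with the retrieved tables' columns before the next hop. This sketch uses simple term overlap as the score; the paper's actual scoring and entity handling are not specified in the snippet above.

```python
def beam_retrieve(question_terms, tables, hops=2, beam=1):
    """Toy multi-hop table retrieval: at each hop, rank unretrieved tables
    by column-name overlap with the current query, keep a beam of the best,
    and grow the query with their columns. Illustrative sketch only."""
    query = set(question_terms)
    kept = []
    for _ in range(hops):
        seen = {t["name"] for t in kept}
        ranked = sorted(
            (t for t in tables if t["name"] not in seen),
            key=lambda t: len(query & set(t["columns"])),
            reverse=True,
        )
        hop_best = ranked[:beam]
        kept.extend(hop_best)
        for t in hop_best:
            query |= set(t["columns"])  # expand query for the next hop
    return [t["name"] for t in kept]

tables = [
    {"name": "singer", "columns": ["singer_id", "name", "age"]},
    {"name": "concert", "columns": ["concert_id", "singer_id", "venue"]},
    {"name": "stadium", "columns": ["stadium_id", "venue", "capacity"]},
]
result = beam_retrieve(["name", "venue"], tables, hops=2, beam=1)
```

After hop 1 retrieves `singer`, its `singer_id` column joins the query, which lets hop 2 prefer the join-linked `concert` table over the merely similar `stadium`.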

A Survey of Table Reasoning with Large Language Models

1 code implementation • 13 Feb 2024 • Xuanliang Zhang, Dingzirui Wang, Longxu Dou, Qingfu Zhu, Wanxiang Che

In this paper, we analyze the mainstream techniques used to improve table reasoning performance in the LLM era, and the advantages of LLMs compared to pre-LLM methods for solving table reasoning tasks.

MixPro: Simple yet Effective Data Augmentation for Prompt-based Learning

no code implementations • 19 Apr 2023 • Bohan Li, Longxu Dou, Yutai Hou, Yunlong Feng, Honglin Mu, Qingfu Zhu, Qinghua Sun, Wanxiang Che

Prompt-based learning has shown considerable promise in reformulating various downstream tasks as cloze problems by combining original input with a predetermined template.

Data Augmentation • Few-Shot Learning +1
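The cloze reformulation mentioned above is the standard prompt-based learning pattern: wrap the input in a template containing a mask slot, and map label words back to classes with a verbalizer. A minimal sketch with a hypothetical template and verbalizer (MixPro's specific augmentation strategy is not shown here):

```python
def to_cloze(text, template="{text} It was [MASK].", verbalizer=None):
    """Reformulate a classification input as a cloze problem: a template
    inserts the input next to a [MASK] slot, and a verbalizer maps the
    model's predicted label word to a class. Hypothetical template."""
    verbalizer = verbalizer or {"great": "positive", "terrible": "negative"}
    return template.format(text=text), verbalizer

prompt, verbalizer = to_cloze("The movie was fun.")
```

A masked language model then scores label words ("great" vs. "terrible") at the `[MASK]` position, and the verbalizer turns the winner into the predicted class.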

Semantic-Guided Generative Image Augmentation Method with Diffusion Models for Image Classification

no code implementations • 4 Feb 2023 • Bohan Li, Xiao Xu, Xinghao Wang, Yutai Hou, Yunlong Feng, Feng Wang, Xuanliang Zhang, Qingfu Zhu, Wanxiang Che

In contrast, generative methods bring more image diversity in the augmented images but may not preserve semantic consistency, thus incorrectly changing the essential semantics of the original image.

Image Augmentation • Image Classification +1

A Survey on Natural Language Processing for Programming

no code implementations • 12 Dec 2022 • Qingfu Zhu, Xianzhen Luo, Fang Liu, Cuiyun Gao, Wanxiang Che

Natural language processing for programming aims to use NLP techniques to assist programming.

Neural Stylistic Response Generation with Disentangled Latent Variables

no code implementations • ACL 2021 • Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang

Generating open-domain conversational responses in the desired style usually suffers from the lack of parallel data in the style.

Response Generation • Sentence

Counterfactual Off-Policy Training for Neural Response Generation

no code implementations • 29 Apr 2020 • Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang

Open-domain dialogue generation suffers from the data insufficiency problem due to the vast size of potential responses.

Counterfactual • Counterfactual Reasoning +2

Retrieval-Enhanced Adversarial Training for Neural Response Generation

no code implementations • ACL 2019 • Qingfu Zhu, Lei Cui, Wei-Nan Zhang, Furu Wei, Ting Liu

Dialogue systems are usually built on either generation-based or retrieval-based approaches, yet they do not benefit from the advantages of different models.

Response Generation • Retrieval

Context-Sensitive Generation of Open-Domain Conversational Responses

no code implementations • COLING 2018 • Wei-Nan Zhang, Yiming Cui, Yifa Wang, Qingfu Zhu, Lingzhi Li, Lianqiang Zhou, Ting Liu

Despite the success of existing work on single-turn conversation generation, when coherence is taken into consideration, human conversing is actually a context-sensitive process.

Information Retrieval • Machine Translation

Neural Personalized Response Generation as Domain Adaptation

no code implementations • 9 Jan 2017 • Wei-Nan Zhang, Ting Liu, Yifa Wang, Qingfu Zhu

Moreover, the lexical divergence of the responses generated by the five personalized models indicates that the proposed two-phase approach achieves good results in modeling human responding styles and generating personalized responses for conversational systems.

Domain Adaptation • Response Generation

Learning to Start for Sequence to Sequence Architecture

no code implementations • 19 Aug 2016 • Qingfu Zhu, Wei-Nan Zhang, Lianqiang Zhou, Ting Liu

An obvious drawback of these works is that there is no learnable relationship between words and the start symbol.

Machine Translation • Response Generation +3
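The drawback above refers to the fixed, context-independent `<sos>` embedding that standard seq2seq decoders feed at step one. One way to make the start learnable is to derive the first decoder input from the encoder states instead, e.g. via an attention-weighted sum. A toy sketch with given attention weights (the paper's exact parameterization is not described in the snippet above):

```python
def learned_start(encoder_states, attn_weights):
    """Derive the decoder's first input from the source: an
    attention-weighted sum of encoder hidden states, replacing the
    fixed <sos> embedding. Toy sketch with precomputed weights."""
    dim = len(encoder_states[0])
    start = [0.0] * dim
    for w, h in zip(attn_weights, encoder_states):
        for i in range(dim):
            start[i] += w * h[i]
    return start

states = [[1.0, 0.0], [0.0, 1.0]]  # two encoder hidden states
start = learned_start(states, [0.25, 0.75])
```

Because the weights depend on the source sentence, the start vector now carries source information rather than a constant symbol.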
