Search Results for author: Haochun Wang

Found 12 papers, 6 papers with code

AS-ES Learning: Towards Efficient CoT Learning in Small Models

no code implementations • 4 Mar 2024 • Nuwa Xi, Yuhan Chen, Sendong Zhao, Haochun Wang, Bing Qin, Ting Liu

Chain-of-Thought (CoT) serves as a critical emerging ability in LLMs, especially when it comes to logical reasoning.

Data Augmentation · Logical Reasoning

Beyond the Answers: Reviewing the Rationality of Multiple Choice Question Answering for the Evaluation of Large Language Models

no code implementations • 2 Feb 2024 • Haochun Wang, Sendong Zhao, Zewen Qiang, Bing Qin, Ting Liu

In the field of natural language processing (NLP), Large Language Models (LLMs) have precipitated a paradigm shift, markedly enhancing performance in natural language generation tasks.

Multiple-choice · Multiple Choice Question Answering (MCQA) · +1

Beyond Direct Diagnosis: LLM-based Multi-Specialist Agent Consultation for Automatic Diagnosis

no code implementations • 29 Jan 2024 • Haochun Wang, Sendong Zhao, Zewen Qiang, Nuwa Xi, Bing Qin, Ting Liu

Automatic diagnosis is a significant application of AI in healthcare, where diagnoses are generated based on the symptom description of patients.

Natural Language Understanding

MolTailor: Tailoring Chemical Molecular Representation to Specific Tasks via Text Prompts

1 code implementation • 21 Jan 2024 • Haoqiang Guo, Sendong Zhao, Haochun Wang, Yanrui Du, Bing Qin

The agent accentuates task-relevant features in the molecular representation by understanding the natural language description of the task, just as a tailor customizes clothes for clients.

Drug Discovery · Language Modelling · +2

Make Your Decision Convincing! A Unified Two-Stage Framework: Self-Attribution and Decision-Making

no code implementations • 20 Oct 2023 • Yanrui Du, Sendong Zhao, Haochun Wang, Yuhan Chen, Rui Bai, Zewen Qiang, MuZhen Cai, Bing Qin

Through extensive experiments on five reasoning datasets from the ERASER benchmark, we demonstrate that our framework not only establishes a more reliable link between the generated rationale and the model decision but also achieves competitive results in both task performance and rationale quality.

Decision Making

From Artificially Real to Real: Leveraging Pseudo Data from Large Language Models for Low-Resource Molecule Discovery

1 code implementation • 11 Sep 2023 • Yuhan Chen, Nuwa Xi, Yanrui Du, Haochun Wang, Jianyu Chen, Sendong Zhao, Bing Qin

Furthermore, our method shows a sustained improvement as the volume of pseudo data increases, revealing the great potential of pseudo data in advancing low-resource cross-modal molecule discovery.

Descriptive · Domain Adaptation · +2

Knowledge-tuning Large Language Models with Structured Medical Knowledge Bases for Reliable Response Generation in Chinese

1 code implementation • 8 Sep 2023 • Haochun Wang, Sendong Zhao, Zewen Qiang, Zijian Li, Nuwa Xi, Yanrui Du, MuZhen Cai, Haoqiang Guo, Yuhan Chen, Haoming Xu, Bing Qin, Ting Liu

To address this challenge, we propose knowledge-tuning, which leverages structured medical knowledge bases for the LLMs to grasp domain knowledge efficiently and facilitate reliable response generation.

Domain Adaptation · Hallucination · +2

Manifold-based Verbalizer Space Re-embedding for Tuning-free Prompt-based Classification

1 code implementation • 8 Sep 2023 • Haochun Wang, Sendong Zhao, Chi Liu, Nuwa Xi, MuZhen Cai, Bing Qin, Ting Liu

Experimental results indicate that even without tuning any parameters, our LLE-INC is on par with automated verbalizers with parameter tuning.

Llama

UniCoRN: Unified Cognitive Signal ReconstructioN bridging cognitive signals and human language

no code implementations • 6 Jul 2023 • Nuwa Xi, Sendong Zhao, Haochun Wang, Chi Liu, Bing Qin, Ting Liu

In this paper, we propose fMRI2text, the first open-vocabulary task aiming to bridge fMRI time series and human language.

Brain Computer Interface · EEG · +2

HuaTuo: Tuning LLaMA Model with Chinese Medical Knowledge

1 code implementation • 14 Apr 2023 • Haochun Wang, Chi Liu, Nuwa Xi, Zewen Qiang, Sendong Zhao, Bing Qin, Ting Liu

Large Language Models (LLMs), such as the LLaMA model, have demonstrated their effectiveness in various general-domain natural language processing (NLP) tasks.

Llama

Global Prompt Cell: A Portable Control Module for Effective Prompt Tuning

no code implementations • 12 Apr 2023 • Chi Liu, Haochun Wang, Nuwa Xi, Sendong Zhao, Bing Qin

As a novel approach to tuning pre-trained models, prompt tuning involves freezing the pre-trained model's parameters on downstream tasks while inserting trainable embeddings into the inputs of the first layer.
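The following is a minimal PyTorch sketch of that general setup, assuming a Hugging Face causal language model: every pre-trained weight is frozen, and a small matrix of trainable soft-prompt embeddings is prepended to the input embeddings of the first layer. The class name `SoftPromptModel`, the `gpt2` checkpoint, and the 20 prompt tokens are illustrative assumptions, not details of the Global Prompt Cell method itself.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

class SoftPromptModel(nn.Module):
    """Vanilla prompt tuning: frozen backbone + trainable prompt embeddings."""

    def __init__(self, model_name="gpt2", num_prompt_tokens=20):
        super().__init__()
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        # Freeze every parameter of the pre-trained model.
        for p in self.model.parameters():
            p.requires_grad = False
        hidden = self.model.config.hidden_size
        # The only trainable parameters: the soft-prompt embeddings.
        self.soft_prompt = nn.Parameter(torch.randn(num_prompt_tokens, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        # Look up the token embeddings, then prepend the soft prompt.
        tok_emb = self.model.get_input_embeddings()(input_ids)
        batch = input_ids.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        # Extend the attention mask to cover the prompt positions.
        prompt_mask = torch.ones(batch, prompt.size(1),
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.model(inputs_embeds=inputs_embeds, attention_mask=attention_mask)
```

Only `soft_prompt` receives gradients (e.g. `torch.optim.AdamW([model.soft_prompt], lr=1e-3)`), so the number of trainable parameters is just `num_prompt_tokens × hidden_size`, which is what makes prompt tuning attractive for adapting large frozen models.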

Prompt Combines Paraphrase: Teaching Pre-trained Models to Understand Rare Biomedical Words

1 code implementation • COLING 2022 • Haochun Wang, Chi Liu, Nuwa Xi, Sendong Zhao, Meizhi Ju, Shiwei Zhang, Ziheng Zhang, Yefeng Zheng, Bing Qin, Ting Liu

Prompt-based fine-tuning for pre-trained models has proven effective for many natural language processing tasks under few-shot settings in the general domain.

Natural Language Inference
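As a generic illustration of the cloze-style formulation that prompt-based fine-tuning builds on (not this paper's biomedical method), the sketch below wraps a natural language inference pair in a template containing a [MASK] token and uses a verbalizer to map each class to a label word; the label word's logit at the masked position serves as the class score. The template, the yes/no label words, and the `bert-base-uncased` checkpoint are assumptions for illustration only.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Verbalizer: map each class label to a single label word in the vocabulary.
verbalizer = {"entailment": "yes", "contradiction": "no"}
label_word_ids = {label: tokenizer.convert_tokens_to_ids(word)
                  for label, word in verbalizer.items()}

def classify(premise: str, hypothesis: str) -> str:
    # Cloze template: the model fills the [MASK] slot between the two sentences.
    text = f"{premise} ? {tokenizer.mask_token} , {hypothesis}"
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]   # shape: [1, vocab_size]
    # Score each class by its label word's logit at the masked position.
    scores = {label: logits[0, wid].item() for label, wid in label_word_ids.items()}
    return max(scores, key=scores.get)

print(classify("A man is playing a guitar.", "A person is making music."))
```

Fine-tuning in this setting trains the same masked-LM objective toward the gold label word, so the "classification head" is the pre-trained vocabulary projection rather than a new randomly initialized layer, which is why the approach works well with few examples.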
