Search Results for author: Sendong Zhao

Found 26 papers, 13 papers with code

Less Is More: Domain Adaptation with Lottery Ticket for Reading Comprehension

1 code implementation • Findings (EMNLP) 2021 • Haichao Zhu, Zekun Wang, Heng Zhang, Ming Liu, Sendong Zhao, Bing Qin

Then, we fine-tune only the lottery subnetwork, a small fraction of the full parameter set, on the annotated target-domain data for adaptation.

Domain Adaptation · Reading Comprehension
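The subnetwork fine-tuning described above can be sketched as a magnitude-based mask: only the "winning ticket" parameters receive updates. This is a toy illustration of the idea, not the paper's implementation; all names and values are illustrative.

```python
def lottery_mask(weights, keep_ratio):
    # keep the top fraction of weights by magnitude, zero out the rest
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [1 if abs(w) >= threshold else 0 for w in weights]

def finetune_step(weights, mask, grads, lr=0.1):
    # update only the unmasked (subnetwork) parameters; the rest stay frozen
    return [w - lr * g if m else w for w, g, m in zip(weights, grads, mask)]

weights = [0.9, -0.05, 0.4, 0.01]
mask = lottery_mask(weights, 0.5)                    # keeps the two largest weights
updated = finetune_step(weights, mask, [1.0, 1.0, 1.0, 1.0])
```

Only two of the four parameters move during adaptation, which is the "less is more" point of the paper.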

AS-ES Learning: Towards Efficient CoT Learning in Small Models

no code implementations • 4 Mar 2024 • Nuwa Xi, Yuhan Chen, Sendong Zhao, Haochun Wang, Bing Qin, Ting Liu

Chain-of-Thought (CoT) serves as a critical emerging ability in LLMs, especially when it comes to logical reasoning.

Data Augmentation · Logical Reasoning

Beyond the Answers: Reviewing the Rationality of Multiple Choice Question Answering for the Evaluation of Large Language Models

no code implementations • 2 Feb 2024 • Haochun Wang, Sendong Zhao, Zewen Qiang, Bing Qin, Ting Liu

In the field of natural language processing (NLP), Large Language Models (LLMs) have precipitated a paradigm shift, markedly enhancing performance in natural language generation tasks.

Multiple-choice · Multiple Choice Question Answering (MCQA) +1

Beyond Direct Diagnosis: LLM-based Multi-Specialist Agent Consultation for Automatic Diagnosis

no code implementations • 29 Jan 2024 • Haochun Wang, Sendong Zhao, Zewen Qiang, Nuwa Xi, Bing Qin, Ting Liu

Automatic diagnosis is a significant application of AI in healthcare, where diagnoses are generated based on the symptom description of patients.

Natural Language Understanding

MolTailor: Tailoring Chemical Molecular Representation to Specific Tasks via Text Prompts

1 code implementation • 21 Jan 2024 • Haoqiang Guo, Sendong Zhao, Haochun Wang, Yanrui Du, Bing Qin

The agent accentuates task-relevant features in the molecular representation by understanding the natural language description of the task, just as a tailor customizes clothes for clients.

Drug Discovery · Language Modelling +2

Analyzing the Inherent Response Tendency of LLMs: Real-World Instructions-Driven Jailbreak

no code implementations • 7 Dec 2023 • Yanrui Du, Sendong Zhao, Ming Ma, Yuhan Chen, Bing Qin

The jailbreak idea of our method is "Inherent Response Tendency Analysis", which identifies real-world instructions that inherently induce LLMs to generate affirmative responses. The corresponding jailbreak strategy is "Real-World Instructions-Driven Jailbreak", which strategically splices the real-world instructions identified by this analysis around the malicious instruction.

Make Your Decision Convincing! A Unified Two-Stage Framework: Self-Attribution and Decision-Making

no code implementations • 20 Oct 2023 • Yanrui Du, Sendong Zhao, Haochun Wang, Yuhan Chen, Rui Bai, Zewen Qiang, MuZhen Cai, Bing Qin

Through extensive experiments on five reasoning datasets from the ERASER benchmark, we demonstrate that our framework not only establishes a more reliable link between the generated rationale and model decision but also achieves competitive results in task performance and the quality of rationale.

Decision Making

From Artificially Real to Real: Leveraging Pseudo Data from Large Language Models for Low-Resource Molecule Discovery

1 code implementation • 11 Sep 2023 • Yuhan Chen, Nuwa Xi, Yanrui Du, Haochun Wang, Jianyu Chen, Sendong Zhao, Bing Qin

Furthermore, our method shows a sustained improvement as the volume of pseudo data increases, revealing the great potential of pseudo data in advancing low-resource cross-modal molecule discovery.

Descriptive · Domain Adaptation +2

Knowledge-tuning Large Language Models with Structured Medical Knowledge Bases for Reliable Response Generation in Chinese

1 code implementation • 8 Sep 2023 • Haochun Wang, Sendong Zhao, Zewen Qiang, Zijian Li, Nuwa Xi, Yanrui Du, MuZhen Cai, Haoqiang Guo, Yuhan Chen, Haoming Xu, Bing Qin, Ting Liu

To address this challenge, we propose knowledge-tuning, which leverages structured medical knowledge bases for the LLMs to grasp domain knowledge efficiently and facilitate reliable response generation.

Domain Adaptation · Hallucination +2

Manifold-based Verbalizer Space Re-embedding for Tuning-free Prompt-based Classification

1 code implementation • 8 Sep 2023 • Haochun Wang, Sendong Zhao, Chi Liu, Nuwa Xi, MuZhen Cai, Bing Qin, Ting Liu

Experimental results indicate that even without tuning any parameters, our LLE-INC is on par with automated verbalizers with parameter tuning.

Don't Ignore Dual Logic Ability of LLMs while Privatizing: A Data-Intensive Analysis in Medical Domain

1 code implementation • 8 Sep 2023 • Yanrui Du, Sendong Zhao, MuZhen Cai, Ming Ma, Danyang Zhao, Jiawei Cao, Bing Qin

We conduct several experiments to analyze the dual logic ability of LLMs by examining the consistency of the stance in responses to paired questions about the same fact.

Fact Checking · Knowledge Graphs
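The paired-question consistency check described above can be sketched in a few lines. This is a toy illustration of the evaluation logic, assuming stances are already extracted as "yes"/"no" strings; the data is invented for the example.

```python
def dual_logic_consistent(stance_affirm, stance_negate):
    # answers to "Is F true?" and "Is F false?" about the same fact should
    # take opposite stances; agreement signals a dual-logic failure
    return stance_affirm != stance_negate

# each pair: (stance on the affirmative question, stance on the negated one)
paired_responses = [("yes", "no"), ("yes", "yes"), ("no", "yes")]
consistency_rate = sum(
    dual_logic_consistent(a, n) for a, n in paired_responses
) / len(paired_responses)
```

The middle pair is inconsistent (the model affirms both a fact and its negation), so the rate here is 2/3.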

UniCoRN: Unified Cognitive Signal ReconstructioN bridging cognitive signals and human language

no code implementations • 6 Jul 2023 • Nuwa Xi, Sendong Zhao, Haochun Wang, Chi Liu, Bing Qin, Ting Liu

In this paper, we propose fMRI2text, the first open-vocabulary task aiming to bridge fMRI time series and human language.

Brain Computer Interface · EEG +2

HuaTuo: Tuning LLaMA Model with Chinese Medical Knowledge

1 code implementation • 14 Apr 2023 • Haochun Wang, Chi Liu, Nuwa Xi, Zewen Qiang, Sendong Zhao, Bing Qin, Ting Liu

Large Language Models (LLMs), such as the LLaMA model, have demonstrated their effectiveness in various general-domain natural language processing (NLP) tasks.

Global Prompt Cell: A Portable Control Module for Effective Prompt Tuning

no code implementations • 12 Apr 2023 • Chi Liu, Haochun Wang, Nuwa Xi, Sendong Zhao, Bing Qin

As a novel approach to tuning pre-trained models, prompt tuning involves freezing the parameters in downstream tasks while inserting trainable embeddings into inputs in the first layer.
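The freeze-and-insert setup described above can be sketched with plain lists standing in for embedding tensors. This is a minimal illustration of prompt tuning's data flow, not the paper's module; the vocabulary and dimensions are invented.

```python
EMB_DIM = 4
# pretrained token embeddings: frozen, never updated during prompt tuning
frozen_embeddings = {
    "hello": [0.1, 0.2, 0.3, 0.4],
    "world": [0.5, 0.6, 0.7, 0.8],
}
# trainable soft-prompt vectors inserted before the input in the first layer
soft_prompt = [[0.0] * EMB_DIM, [0.0] * EMB_DIM]

def build_first_layer_input(tokens):
    # only soft_prompt receives gradient updates; token embeddings stay fixed
    return soft_prompt + [frozen_embeddings[t] for t in tokens]

seq = build_first_layer_input(["hello", "world"])
```

Training then backpropagates into `soft_prompt` alone, which is why prompt tuning stores only a handful of vectors per downstream task.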

Prompt Combines Paraphrase: Teaching Pre-trained Models to Understand Rare Biomedical Words

1 code implementation • COLING 2022 • Haochun Wang, Chi Liu, Nuwa Xi, Sendong Zhao, Meizhi Ju, Shiwei Zhang, Ziheng Zhang, Yefeng Zheng, Bing Qin, Ting Liu

Prompt-based fine-tuning of pre-trained models has proven effective for many natural language processing tasks under few-shot settings in the general domain.

Natural Language Inference

VEM$^2$L: A Plug-and-play Framework for Fusing Text and Structure Knowledge on Sparse Knowledge Graph Completion

no code implementations • 4 Jul 2022 • Tao He, Ming Liu, Yixin Cao, Tianwen Jiang, Zihao Zheng, Jingrun Zhang, Sendong Zhao, Bing Qin

In this paper, we address sparse KGC from these two motivations simultaneously, handle their respective drawbacks, and propose VEM$^2$L, a plug-and-play unified framework over sparse KGs.

Knowledge Distillation · Missing Elements +1

Less Learn Shortcut: Analyzing and Mitigating Learning of Spurious Feature-Label Correlation

1 code implementation • 25 May 2022 • Yanrui Du, Jing Yan, Yan Chen, Jing Liu, Sendong Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, Bing Qin

In this study, we focus on the spurious correlation between word features and labels that models learn from the biased data distribution of training data.

Natural Language Inference · Sentiment Analysis
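One simple way to surface the word-label correlations mentioned above is to measure, per word, how skewed its label distribution is in the training data. This is a generic diagnostic sketch, not the paper's method; the data and threshold interpretation are illustrative.

```python
from collections import Counter

def word_label_bias(examples):
    # fraction of a word's occurrences that co-occur with its majority label;
    # values near 1.0 flag a potentially spurious word-label correlation
    counts = {}
    for tokens, label in examples:
        for w in set(tokens):
            counts.setdefault(w, Counter())[label] += 1
    return {w: max(c.values()) / sum(c.values()) for w, c in counts.items()}

# toy sentiment data: "great" co-occurs with the positive label 2 times out of 3
data = [(["great", "movie"], 1), (["great", "plot"], 1), (["great", "waste"], 0)]
bias = word_label_bias(data)
```

Words whose bias stays high on a large corpus are candidates for the shortcut features the paper aims to stop models from over-relying on.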

Biomedical Knowledge Graph Refinement with Embedding and Logic Rules

no code implementations • 2 Dec 2020 • Sendong Zhao, Bing Qin, Ting Liu, Fei Wang

This paper proposes BioGRER, a method to improve the BioKG's quality that comprehensively combines knowledge graph embedding with logic rules that support or negate triplets in the BioKG.

Knowledge Graph Embedding · Knowledge Graphs
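The combination of embedding scores with supporting and negating rules can be sketched as a simple weighted blend. This is only an illustration of the intuition, assuming the embedding model emits a plausibility score in [0, 1] and each applicable rule casts a +1 (support) or -1 (negate) vote; BioGRER's actual variational formulation is more involved.

```python
def triplet_confidence(embedding_score, rule_votes, alpha=0.7):
    # blend a KG-embedding plausibility score with logic-rule evidence:
    # supporting rules (+1) raise confidence, negating rules (-1) lower it
    support = sum(rule_votes) / len(rule_votes) if rule_votes else 0.0
    return alpha * embedding_score + (1 - alpha) * (0.5 + 0.5 * support)

# an embedding-plausible triplet with two supporting rules and one negating rule
conf = triplet_confidence(0.8, [+1, +1, -1])
```

With no applicable rules the rule term is neutral (0.5), so the embedding score dominates; `alpha` controls how much the two evidence sources are trusted.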

A Co-Interactive Transformer for Joint Slot Filling and Intent Detection

1 code implementation • 8 Oct 2020 • Libo Qin, Tailu Liu, Wanxiang Che, Bingbing Kang, Sendong Zhao, Ting Liu

Instead of adopting the self-attention mechanism of the vanilla Transformer, we propose a co-interactive module that considers the cross-impact by building a bidirectional connection between the two related tasks.

Intent Detection · slot-filling +2
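The bidirectional connection described above can be sketched as two attention passes: the intent representation attends over the slot representations, and each slot representation attends over the intent. This is a toy sketch of the cross-impact idea, not the paper's module; vectors and dimensions are illustrative.

```python
import math

def attend(query, keys):
    # scaled dot-product attention of one query vector over a list of keys
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(len(query))
              for key in keys]
    exps = [math.exp(s) for s in scores]
    weights = [e / sum(exps) for e in exps]
    return [sum(w * key[i] for w, key in zip(weights, keys))
            for i in range(len(keys[0]))]

def co_interact(intent_vec, slot_vecs):
    # bidirectional connection: intent attends over slots and each slot
    # attends over the intent; both are fused with a residual addition
    intent_out = [a + b for a, b in zip(intent_vec, attend(intent_vec, slot_vecs))]
    slot_out = [[a + b for a, b in zip(s, attend(s, [intent_vec]))]
                for s in slot_vecs]
    return intent_out, slot_out

intent_out, slot_out = co_interact([1.0, 0.0], [[0.0, 1.0], [1.0, 1.0]])
```

After one pass, each slot vector carries intent information and vice versa, which is the mutual-interaction effect the paper exploits for joint prediction.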

Biomedical Evidence Generation Engine

no code implementations • 11 Nov 2019 • Sendong Zhao, Fei Wang

With the rapid development of precision medicine, a large amount of health data (such as electronic health records, gene sequencing results, and medical images) has been accumulated.

Information Retrieval · Question Answering +3

GRAPHENE: A Precise Biomedical Literature Retrieval Engine with Graph Augmented Deep Learning and External Knowledge Empowerment

no code implementations • 2 Nov 2019 • Sendong Zhao, Chang Su, Andrea Sboner, Fei Wang

GRAPHENE consists of three main modules: 1) graph-augmented document representation learning; 2) query expansion and representation learning; and 3) learning to rank biomedical articles.

Learning-To-Rank · Representation Learning +1
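The learning-to-rank module in a retrieval engine like the one above is typically trained with a pairwise objective. This is a generic sketch of a standard pairwise hinge loss, not GRAPHENE's exact training objective; the scores are invented.

```python
def pairwise_hinge_loss(score_relevant, score_irrelevant, margin=1.0):
    # zero loss once the relevant article outscores the irrelevant one by at
    # least the margin; otherwise penalize the remaining shortfall
    return max(0.0, margin - (score_relevant - score_irrelevant))

# a well-separated pair incurs no loss; a barely-separated pair still does
losses = [pairwise_hinge_loss(2.0, 0.5), pairwise_hinge_loss(0.5, 0.4)]
```

Minimizing this loss over many (relevant, irrelevant) article pairs pushes the scorer toward rankings where relevant biomedical articles appear first.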

A Neural Multi-Task Learning Framework to Jointly Model Medical Named Entity Recognition and Normalization

1 code implementation • 14 Dec 2018 • Sendong Zhao, Ting Liu, Sicheng Zhao, Fei Wang

State-of-the-art studies have demonstrated the superiority of joint modelling over pipeline implementation for medical named entity recognition and normalization due to the mutual benefits between the two processes.

Medical Named Entity Recognition · Multi-Task Learning +2
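The joint-modelling benefit described above comes from both tasks reading one shared representation: recognition errors and normalization evidence can inform each other through the shared encoder. This is a toy pipeline with rule-based heads, not the paper's neural model; the lexicon entry and its concept ID are illustrative.

```python
LEXICON = {"aspirin": "C0004057"}  # illustrative UMLS-style concept ID

def encode(tokens):
    # stand-in for the shared encoder: both heads consume the same features,
    # so training signal from either task would shape a common representation
    return [(t.lower(), float(t[0].isupper())) for t in tokens]

def ner_head(feats):
    # toy recognition rule: capitalized surface forms are entity mentions
    return ["ENT" if cap == 1.0 else "O" for _, cap in feats]

def norm_head(feats, tags):
    # normalization reuses the shared features of the mentions NER just found
    return [LEXICON.get(surface, "CUI-less") if tag == "ENT" else None
            for (surface, _), tag in zip(feats, tags)]

feats = encode(["Aspirin", "helps"])
tags = ner_head(feats)
concepts = norm_head(feats, tags)
```

In the multi-task setting, both heads would be trained jointly on top of `encode`, which is where the mutual benefit over a pipeline comes from.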

A Multi-View Ensemble Classification Model for Clinically Actionable Genetic Mutations

2 code implementations • 26 Jun 2018 • Xi Sheryl Zhang, Dandi Chen, Yongjun Zhu, Chao Che, Chang Su, Sendong Zhao, Xu Min, Fei Wang

This paper presents details of our winning solutions to Task IV of the NIPS 2017 Competition Track, entitled Classifying Clinically Actionable Genetic Mutations.

BIG-bench Machine Learning · General Classification
