1 code implementation • EMNLP (sdp) 2020 • Swarup Satish, Zonghai Yao, Andrew Drozdov, Boris Veytsman
We study whether novel ideas in biomedical literature appear first in preprints or traditional journals.
no code implementations • 27 Feb 2024 • Junda Wang, Zhichao Yang, Zonghai Yao, Hong Yu
To further improve the performance of these systems in the medical domain, we introduce an innovative method that jointly trains an Information Retrieval (IR) system and an LLM during the fine-tuning phase.
no code implementations • 21 Feb 2024 • Prakamya Mishra, Zonghai Yao, Parth Vashisht, Feiyun Ouyang, Beining Wang, Vidhi Dhaval Mody, Hong Yu
Large Language Models (LLMs) such as GPT and Llama have demonstrated significant achievements in summarization tasks but struggle with factual inaccuracies, a critical issue in clinical NLP applications where errors could lead to serious consequences.
no code implementations • 21 Feb 2024 • Vijeta Deshpande, Minhwa Lee, Zonghai Yao, Zihao Zhang, Jason Brian Gibbons, Hong Yu
Prior research on Twitter (now X) data has provided positive evidence of its utility in developing supplementary health surveillance systems.
no code implementations • 29 Dec 2023 • Xiaocheng Zhang, Zonghai Yao, Hong Yu
Through a comprehensive evaluation of the entire dataset using LLM assessment and a rigorous manual evaluation of 64 instances, we showcase the potential of LLMs in patient education.
no code implementations • 24 Dec 2023 • Zonghai Yao, Nandyala Siddharth Kantu, Guanghao Wei, Hieu Tran, Zhangqi Duan, Sunjae Kwon, Zhichao Yang, README annotation team, Hong Yu
The advancement in healthcare has shifted focus toward patient-centric approaches, particularly in self-care and patient education, facilitated by access to Electronic Health Records (EHR).
no code implementations • 16 Nov 2023 • Zonghai Yao, Ahmed Jaafar, Beining Wang, Zhichao Yang, Hong Yu
We recommend a two-phase optimization process, leveraging APO-GPT4 for consistency and expert input for personalization.
no code implementations • 12 Nov 2023 • Jiachen Zhao, Zonghai Yao, Zhichao Yang, Hong Yu
Large language models (LLMs) can generate intermediate reasoning steps.
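As a toy illustration of the idea (a minimal sketch with a hypothetical helper, not code from the paper), eliciting intermediate reasoning steps typically amounts to assembling a prompt whose exemplars show the reasoning before the final answer:

```python
def build_cot_prompt(question, exemplars):
    """Chain-of-thought style prompt: each exemplar pairs a question
    with its intermediate reasoning steps before the final answer,
    so the model is encouraged to produce steps for the new question."""
    parts = []
    for q, steps, answer in exemplars:
        parts.append(f"Q: {q}\n{steps}\nA: {answer}")
    # Append the target question with a step-by-step cue.
    parts.append(f"Q: {question}\nLet's think step by step.")
    return "\n\n".join(parts)

prompt = build_cot_prompt(
    "A ward has 3 rooms with 4 beds each. How many beds?",
    [("2 shifts of 5 nurses. How many nurses?",
      "There are 2 shifts and 5 nurses per shift, so 2 * 5 = 10.",
      "10")],
)
```

The exemplar question, its reasoning, and the final cue all end up in one string that is sent to the model.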
no code implementations • 30 Oct 2023 • Zihao Zhang, Zonghai Yao, Huixue Zhou, Feiyun ouyang, Hong Yu
This paper presents EHRTutor, an innovative multi-component framework leveraging the Large Language Model (LLM) for patient education through conversational question-answering.
1 code implementation • 30 Oct 2023 • Prakamya Mishra, Zonghai Yao, Shuwei Chen, Beining Wang, Rohan Mittal, Hong Yu
In this work, we propose a new pipeline using ChatGPT instead of human experts to generate high-quality feedback data for improving factual consistency in the clinical note summarization task.
no code implementations • 30 Oct 2023 • Hieu Tran, Zhichao Yang, Zonghai Yao, Hong Yu
We also examined whether categories of instructions (e.g., QA, IE, and generation) impact model performance.
1 code implementation • 24 Oct 2023 • Junda Wang, Zonghai Yao, Zhichao Yang, Huixue Zhou, Rumeng Li, Xun Wang, Yucheng Xu, Hong Yu
We introduce NoteChat, a novel cooperative multi-agent framework leveraging Large Language Models (LLMs) to generate patient-physician dialogues.
1 code implementation • 9 Oct 2023 • Zonghai Yao, Benjamin J Schloss, Sai P. Selvaraj
Existing works use human feedback to train large language models (LLMs) for general-domain abstractive summarization and have obtained summary quality exceeding that of traditional likelihood training.
1 code implementation • 7 Aug 2023 • Pengshan Cai, Zonghai Yao, Fei Liu, Dakuo Wang, Meghan Reilly, Huixue Zhou, Lingxi Li, Yi Cao, Alok Kapoor, Adarsha Bajracharya, Dan Berlowitz, Hong Yu
Patient portals allow discharged patients to access their personalized discharge instructions in electronic health records (EHRs).
1 code implementation • 29 Jun 2023 • Junda Wang, Zonghai Yao, Avijit Mitra, Samuel Osebe, Zhichao Yang, Hong Yu
This paper presents the UMASS_BioNLP team's participation in the MEDIQA-Chat 2023 shared task for Task-A and Task-C. We focus especially on Task-C and propose a novel LLM cooperation system, a doctor-patient loop, to generate high-quality conversation datasets.
1 code implementation • 20 May 2023 • Haw-Shiuan Chang, Zonghai Yao, Alolika Gon, Hong Yu, Andrew McCallum
Is the output softmax layer, which is adopted by most language models (LMs), always the best way to compute the next word probability?
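As background for the question posed above (a generic sketch of the standard mechanism, not the paper's proposed alternative), the output softmax layer turns a vocabulary-sized logit vector into a next-word probability distribution:

```python
import numpy as np

def next_word_probs(logits):
    """Standard output softmax: exponentiate the logits and
    normalize so the vocabulary probabilities sum to 1."""
    z = logits - np.max(logits)   # subtract max for numerical stability
    exp = np.exp(z)
    return exp / exp.sum()

probs = next_word_probs(np.array([2.0, 1.0, 0.5]))
```

The highest logit always receives the highest probability, and the subtraction of the maximum leaves the result unchanged while avoiding overflow.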
no code implementations • 9 Mar 2023 • Yucheng Xu, Li Nanbo, Arushi Goel, Zijian Guo, Zonghai Yao, Hamidreza Kasaei, Mohammadreze Kasaei, Zhibin Li
Videos depict the change of complex dynamical systems over time in the form of discrete image sequences.
1 code implementation • 6 Dec 2022 • Zonghai Yao, Jack Tsai, Weisong Liu, David A. Levy, Emily Druhl, Joel I Reisman, Hong Yu
Materials and Methods: We first defined eviction status (eviction presence and eviction period) and then annotated eviction status in 5000 EHR notes from the Veterans Health Administration (VHA).
1 code implementation • 24 Nov 2022 • Zhichao Yang, Sunjae Kwon, Zonghai Yao, Hong Yu
This task is challenging due to the high-dimensional space of multi-label assignment (155,000+ ICD code candidates) and the long-tail challenge: many ICD codes are infrequently assigned, yet these infrequent codes are clinically important.
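To make the multi-label setting concrete (a minimal generic sketch, not the paper's model), each ICD code typically gets an independent sigmoid probability rather than competing in one softmax, and codes are assigned by thresholding:

```python
import numpy as np

def multilabel_assign(logits, threshold=0.5):
    """Extreme multi-label assignment: one independent sigmoid
    per code (in practice 155,000+ of them), returning the indices
    of all codes whose probability clears the threshold."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    return [i for i, p in enumerate(probs) if p >= threshold]

# Three hypothetical code logits: confident, very unlikely, borderline.
assigned = multilabel_assign(np.array([3.0, -4.0, 0.1]))
```

Because each code is scored independently, any subset of the label space can be assigned at once, which is exactly what makes the space high-dimensional and the tail hard to learn.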
no code implementations • 18 Nov 2022 • Zonghai Yao, Yi Cao, Zhichao Yang, Hong Yu
Departing from previous known-unknown evaluation criteria, we propose the concept of "Misunderstand" in LAMA for the first time.
1 code implementation • 12 Oct 2022 • Sunjae Kwon, Zonghai Yao, Harmon S. Jordan, David A. Levy, Brian Corner, Hong Yu
We first present a novel and publicly available dataset with expert-annotated medical jargon terms from 18K+ EHR note sentences ($MedJ$).
no code implementations • 26 Aug 2022 • Zonghai Yao, Yi Cao, Zhichao Yang, Vijeta Deshpande, Hong Yu
In order to make LMs-as-KBs better reflect actual application scenarios in the biomedical domain, we specifically add EHR notes as context to the prompt, raising the lower bound of performance in the biomedical domain.
no code implementations • ACL 2021 • Zonghai Yao, Hong Yu
Models pre-trained on large-scale regular text corpora often do not work well for user-generated data where the language styles differ significantly from the mainstream text.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Zonghai Yao, Liangliang Cao, Huapu Pan
This paper considers the problem of zero-shot entity linking, in which an entity encountered at test time may not appear in training.