1 code implementation • 2 Apr 2024 • Sherin Muckatira, Vijeta Deshpande, Vladislav Lialin, Anna Rumshisky
Large language models can solve new tasks without task-specific fine-tuning.
no code implementations • 21 Feb 2024 • Vijeta Deshpande, Minhwa Lee, Zonghai Yao, Zihao Zhang, Jason Brian Gibbons, Hong Yu
Prior research on Twitter (now X) data has provided positive evidence of its utility in developing supplementary health surveillance systems.
1 code implementation • 26 May 2023 • Vijeta Deshpande, Dan Pechi, Shree Thatte, Vladislav Lialin, Anna Rumshisky
The majority of recent scaling-law studies focused on high-compute, high-parameter-count settings, leaving the question of when emergent abilities begin to appear largely unanswered.
no code implementations • 28 Mar 2023 • Vladislav Lialin, Vijeta Deshpande, Anna Rumshisky
This paper presents a systematic overview and comparison of parameter-efficient fine-tuning methods covering over 40 papers published between February 2019 and February 2023.
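One widely used method in this family is low-rank adaptation (LoRA), in which a frozen pretrained weight matrix is augmented with a trainable low-rank update. A minimal NumPy sketch, with illustrative dimensions and names (not taken from the survey itself):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 16, 8, 2   # layer dimensions and LoRA rank (illustrative)
alpha = 4.0                 # LoRA scaling factor

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def lora_forward(x):
    # y = W x + (alpha / r) * B A x; only A and B would receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B zero-initialized, the adapted layer initially matches the frozen layer.
assert np.allclose(lora_forward(x), W @ x)
```

The appeal is the parameter count: the update trains only `r * (d_in + d_out)` values instead of the full `d_in * d_out`, which is the kind of trade-off the survey compares across methods.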
no code implementations • 26 Aug 2022 • Zonghai Yao, Yi Cao, Zhichao Yang, Vijeta Deshpande, Hong Yu
To make LMs-as-KBs better match real application scenarios in the biomedical domain, we specifically add EHR notes as context to the prompt to improve the lower bound in the biomedical domain.