no code implementations • 1 Jan 2024 • Yining Hua, Fenglin Liu, Kailai Yang, Zehan Li, Yi-han Sheu, Peilin Zhou, Lauren V. Moran, Sophia Ananiadou, Andrew Beam
Objective: The growing use of large language models (LLMs) has created a need for a comprehensive review of their applications and outcomes in mental health care contexts.
1 code implementation • 9 Dec 2023 • David R. Bellamy, Bhawesh Kumar, Cindy Wang, Andrew Beam
In this work we introduce Labrador, a pre-trained Transformer model for laboratory data.
1 code implementation • 28 May 2023 • Bhawesh Kumar, Charlie Lu, Gauri Gupta, Anil Palepu, David Bellamy, Ramesh Raskar, Andrew Beam
In this work, we explore how conformal prediction can be used to provide uncertainty quantification in language models for the specific task of multiple-choice question-answering.
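As an illustration of the general idea (not the paper's specific procedure), split conformal prediction for multiple-choice QA can be sketched as follows: calibrate a score threshold on held-out questions, then return, for each new question, the set of answer choices that clear the threshold. All function and parameter names here are hypothetical.

```python
import numpy as np

def conformal_mcqa(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for multiple-choice QA.

    cal_probs:  (n, k) model probabilities over k choices on a calibration set.
    cal_labels: (n,) index of the correct choice for each calibration question.
    test_probs: (m, k) probabilities for new questions.
    Returns one prediction set per test question; with exchangeable data the
    sets contain the true answer with probability at least 1 - alpha.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true choice.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Conformal quantile with the finite-sample correction, clipped to 1.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, q_level, method="higher")
    # Keep every choice whose score is within the calibrated threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```

The size of the returned set doubles as an uncertainty signal: a confident model yields singleton sets, an uncertain one yields larger sets.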
no code implementations • 27 Oct 2022 • Bhawesh Kumar, Anil Palepu, Rudraksh Tuwani, Andrew Beam
Self-supervised models trained with a contrastive loss, such as CLIP, have been shown to be very powerful in zero-shot classification settings.
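The zero-shot mechanism can be sketched in a few lines: embed the image and one text prompt per class in the shared space, and pick the class whose prompt is most similar to the image under cosine similarity. This is a minimal illustration, with hypothetical names, assuming the embeddings are already computed.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, temperature=0.01):
    """CLIP-style zero-shot classification from precomputed embeddings.

    image_emb: (d,) image embedding.
    text_embs: (k, d) one text-prompt embedding per class.
    Returns the predicted class index and a softmax over classes.
    """
    # L2-normalize so the dot product is cosine similarity.
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    # Temperature-scaled similarities, as in contrastive training.
    logits = txt @ img / temperature
    probs = np.exp(logits - logits.max())  # stable softmax
    probs /= probs.sum()
    return int(np.argmax(probs)), probs
```

No labeled training data for the target classes is needed; changing the set of text prompts changes the classifier.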
1 code implementation • 29 Nov 2020 • Allen Schmaltz, Andrew Beam
We present a novel end-to-end language model for joint retrieval and classification, unifying the strengths of bi- and cross-encoders into a single language model via a coarse-to-fine memory matching search procedure for learning and inference.
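For intuition about the coarse-to-fine idea (shown here as a generic two-stage pipeline, not the paper's unified single-model architecture), a cheap dot-product pass in the style of a bi-encoder shortlists candidates, and an expensive pairwise scorer in the style of a cross-encoder reranks only that shortlist. Names and the scoring functions are illustrative assumptions.

```python
import numpy as np

def coarse_to_fine_search(query_vec, doc_vecs, fine_score, k=5):
    """Coarse-to-fine matching over a memory of document vectors.

    query_vec:  (d,) query embedding.
    doc_vecs:   (n, d) document embeddings.
    fine_score: expensive pairwise scorer applied only to the shortlist.
    Returns (doc_index, fine_score) pairs, best first.
    """
    # Coarse stage: one cheap dot product per document.
    coarse = doc_vecs @ query_vec
    shortlist = np.argsort(coarse)[::-1][:k]
    # Fine stage: costly pairwise scoring on k candidates only.
    fine = [(int(i), fine_score(query_vec, doc_vecs[i])) for i in shortlist]
    return sorted(fine, key=lambda t: t[1], reverse=True)
```

The design choice is the usual accuracy/cost trade-off: the coarse pass is O(n) dot products, while the fine scorer runs only k times.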
1 code implementation • 6 Oct 2020 • Benjamin Kompa, Jasper Snoek, Andrew Beam
Uncertainty quantification for complex deep learning models is increasingly important as these techniques see growing use in high-stakes, real-world settings.
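One common way to evaluate such uncertainty estimates is their empirical frequentist coverage: a nominal 90% prediction interval should contain the true value about 90% of the time on held-out data. The sketch below, with hypothetical names, builds per-point intervals from an ensemble's predictions and measures that coverage.

```python
import numpy as np

def ensemble_interval(preds, alpha=0.1):
    """Per-point interval from an ensemble of predictors.

    preds: (n_members, n_points) predictions from each ensemble member.
    Returns the (alpha/2, 1 - alpha/2) quantiles across members at each point.
    """
    lower = np.quantile(preds, alpha / 2, axis=0)
    upper = np.quantile(preds, 1 - alpha / 2, axis=0)
    return lower, upper

def empirical_coverage(y_true, lower, upper):
    """Fraction of points whose true value lands inside its interval --
    the empirical analogue of frequentist coverage."""
    inside = (y_true >= lower) & (y_true <= upper)
    return float(inside.mean())
```

A large gap between nominal level (1 - alpha) and measured coverage is exactly the kind of miscalibration that matters in high-stakes settings.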
no code implementations • 7 Apr 2020 • Allen Schmaltz, Andrew Beam
These challenges are compounded for modalities such as text, where the feature space is very high-dimensional, and often contains considerable amounts of noise.