1 code implementation • 9 Dec 2023 • David R. Bellamy, Bhawesh Kumar, Cindy Wang, Andrew Beam
In this work we introduce Labrador, a pre-trained Transformer model for laboratory data.
1 code implementation • 28 May 2023 • Bhawesh Kumar, Charlie Lu, Gauri Gupta, Anil Palepu, David Bellamy, Ramesh Raskar, Andrew Beam
In this work, we explore how conformal prediction can be used to provide uncertainty quantification in language models for the specific task of multiple-choice question-answering.
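The split-conformal recipe behind this kind of uncertainty quantification can be sketched in a few lines. This is a generic illustration, not the paper's exact method: the calibration scores, answer probabilities, and `alpha` below are hypothetical, standing in for softmax probabilities an LLM would assign to each answer choice.

```python
import math

def conformal_quantile(cal_scores, alpha):
    """Split-conformal threshold from calibration nonconformity scores.

    Each score is 1 minus the model's probability of the true answer
    on a held-out calibration question.
    """
    n = len(cal_scores)
    rank = math.ceil((n + 1) * (1 - alpha))  # finite-sample corrected rank
    return sorted(cal_scores)[min(rank, n) - 1]

def prediction_set(choice_probs, qhat):
    """Keep every answer choice whose nonconformity score is <= qhat."""
    return {c for c, p in choice_probs.items() if 1 - p <= qhat}

# Hypothetical calibration scores and per-choice model probabilities:
cal = [0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
qhat = conformal_quantile(cal, alpha=0.2)  # -> 0.8
answers = prediction_set({"A": 0.5, "B": 0.3, "C": 0.15, "D": 0.05}, qhat)
# answers -> {"A", "B"}
```

The returned set contains every choice the model cannot confidently rule out; under exchangeability it covers the true answer with probability at least 1 − alpha.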
no code implementations • 27 Oct 2022 • Bhawesh Kumar, Anil Palepu, Rudraksh Tuwani, Andrew Beam
Self-supervised models trained with a contrastive loss, such as CLIP, have been shown to be very powerful in zero-shot classification settings.
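CLIP-style zero-shot classification amounts to comparing an image embedding against text embeddings of label prompts and taking the closest by cosine similarity. A minimal sketch of that mechanism, using tiny hand-made vectors in place of real CLIP encoder outputs (all embeddings here are hypothetical):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def zero_shot_classify(image_emb, text_embs):
    """Return the label whose prompt embedding is most similar to the image.

    text_embs maps each label to the embedding of a prompt like
    "a photo of a {label}".
    """
    return max(text_embs, key=lambda lbl: cosine(image_emb, text_embs[lbl]))

# Toy embeddings standing in for CLIP encoder outputs (hypothetical values):
texts = {"cat": [1.0, 0.1, 0.0], "dog": [0.0, 1.0, 0.2]}
img = [0.9, 0.2, 0.0]
pred = zero_shot_classify(img, texts)  # -> "cat"
```

No labeled training data for the target classes is needed; the class set is defined entirely by the text prompts, which is what makes the zero-shot setting possible.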