1 code implementation • 14 Mar 2024 • Jennifer Hsia, Afreen Shaikh, Zhiruo Wang, Graham Neubig
RAGGED offers further insights into LMs' context utilization habits: we find that encoder-decoder models rely more heavily on retrieved contexts and are thus more sensitive to retrieval quality, while decoder-only models tend to rely on knowledge memorized during training.
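As a rough illustration of this behavior (a minimal sketch, not the paper's RAGGED framework itself), one can query the same reader model with and without a retrieved passage and compare the answers; the model name, question, and passage below are illustrative placeholders.

```python
# Minimal sketch: probe how much a reader model leans on provided context
# by asking the same question with and without a retrieved passage.
# The model, question, and passage are illustrative stand-ins.
from transformers import pipeline

reader = pipeline("text2text-generation", model="google/flan-t5-base")

question = "When was the Eiffel Tower completed?"
passage = "The Eiffel Tower was completed in 1889 for the World's Fair."

with_ctx = reader(f"Context: {passage}\nQuestion: {question}")[0]["generated_text"]
no_ctx = reader(f"Question: {question}")[0]["generated_text"]

# If the two answers diverge, the model is drawing on the supplied context;
# if they agree, it is likely answering from parametric (memorized) knowledge.
print("with context:", with_ctx)
print("without context:", no_ctx)
```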
no code implementations • 28 Aug 2023 • Jennifer Hsia, Danish Pruthi, Aarti Singh, Zachary C. Lipton
First, we show that we can inflate a model's comprehensiveness and sufficiency scores dramatically without altering its predictions or explanations on in-distribution test inputs.
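For reference, a minimal sketch of how these ERASER-style faithfulness metrics are conventionally computed: comprehensiveness measures the probability drop when rationale tokens are removed, sufficiency the drop when only the rationale is kept. The `predict_proba` helper and the index-based rationale encoding are hypothetical placeholders, not the paper's implementation.

```python
# Sketch of ERASER-style comprehensiveness and sufficiency metrics.
# `predict_proba(tokens, label)` is a hypothetical helper returning the
# model's probability of `label` given the input tokens.

def comprehensiveness(predict_proba, tokens, rationale_idx, label):
    """Probability drop when rationale tokens are removed from the input.

    High values suggest the rationale was needed for the prediction."""
    full = predict_proba(tokens, label)
    without_rationale = predict_proba(
        [t for i, t in enumerate(tokens) if i not in rationale_idx], label)
    return full - without_rationale

def sufficiency(predict_proba, tokens, rationale_idx, label):
    """Probability drop when only the rationale tokens are kept.

    Low values suggest the rationale alone supports the prediction."""
    full = predict_proba(tokens, label)
    rationale_only = predict_proba(
        [t for i, t in enumerate(tokens) if i in rationale_idx], label)
    return full - rationale_only
```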