Search Results for author: Elizabeth Snell Okada

Found 1 paper, 0 papers with code

LLMAuditor: A Framework for Auditing Large Language Models Using Human-in-the-Loop

no code implementations • 14 Feb 2024 • Maryam Amirizaniani, Jihan Yao, Adrian Lavergne, Elizabeth Snell Okada, Aman Chadha, Tanya Roosta, Chirag Shah

A case study using questions from the TruthfulQA dataset demonstrates that we can generate a reliable set of probes from one LLM that can be used to audit inconsistencies in a different LLM.

Hallucination