25 May 2023 • Niels Mündler, Jingxuan He, Slobodan Jenko, Martin Vechev
Large language models (large LMs) are susceptible to producing text that contains hallucinated content.