Hallucination Evaluation

11 papers with code • 0 benchmarks • 1 dataset

Evaluate the ability of LLMs to generate text free of hallucinations, or assess their capability to recognize hallucinations.
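
A minimal sketch of the recognition side of this task is shown below. The `judge` callable and the example statements are hypothetical stand-ins, assuming the evaluation is framed as binary classification (hallucinated vs. factual) scored by accuracy.

```python
# Minimal sketch of a hallucination-recognition evaluation loop.
# `judge(statement)` is a hypothetical callable that queries an LLM and
# returns True if the model flags the statement as hallucinated.

def evaluate_recognition(examples, judge):
    """examples: list of (statement, is_hallucinated) pairs."""
    correct = 0
    for statement, is_hallucinated in examples:
        prediction = judge(statement)          # model's yes/no judgement
        correct += int(prediction == is_hallucinated)
    return correct / len(examples)             # recognition accuracy

# Toy usage with a trivial stand-in judge:
examples = [
    ("The Eiffel Tower is in Berlin.", True),
    ("Water boils at 100 degrees Celsius at sea level.", False),
]
accuracy = evaluate_recognition(examples, judge=lambda s: "Berlin" in s)
print(f"recognition accuracy: {accuracy:.2f}")
```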

Most implemented papers

TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space

ictnlp/truthx 27 Feb 2024

During inference, by editing the LLM's internal representations in truthful space, TruthX effectively enhances the truthfulness of LLMs.
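
For illustration only, the sketch below shows the general idea of inference-time activation editing along a "truthfulness" direction; the direction vector, layer choice, and strength `alpha` are assumptions, not the authors' method (see ictnlp/truthx for the actual implementation).

```python
import torch

def edit_hidden_state(hidden, truth_direction, alpha=1.0):
    """Shift activations along a pre-computed truthfulness direction.

    hidden: (batch, seq, dim) hidden states from some layer.
    truth_direction: (dim,) vector pointing toward a 'truthful' region
    (placeholder; in practice it would be learned from data).
    """
    direction = truth_direction / truth_direction.norm()
    return hidden + alpha * direction          # nudge activations toward "truthful" region

hidden = torch.randn(1, 8, 4096)               # dummy activations
truth_direction = torch.randn(4096)            # placeholder direction vector
edited = edit_hidden_state(hidden, truth_direction, alpha=2.0)
```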