Search Results for author: Mu-Nan Ning

Found 1 paper, 1 paper with code

LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples

1 code implementation • 2 Oct 2023 • Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, Li Yuan

This phenomenon forces us to revisit the possibility that hallucination is another view of adversarial examples, sharing similar features with conventional adversarial examples as a basic characteristic of LLMs.

Hallucination