1 code implementation • 14 Nov 2023 • Ruixin Hong, Hongming Zhang, Xinyu Pang, Dong Yu, ChangShui Zhang
In this paper, we take a closer look at the self-verification abilities of LLMs in the context of logical reasoning, focusing on whether they can accurately identify logical fallacies.