IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning

27 Jun 2023 · Zihang Xu, Ziqing Yang, Yiming Cui, Shijin Wang

In the field of machine reading comprehension (MRC), existing systems have surpassed average human performance on many tasks such as SQuAD. However, there is still a long way to go when it comes to logical reasoning. Although several methods have been proposed for this problem, they are either designed in overly complicated ways or rely too heavily on external structures. In this paper, we propose IDOL (InDicator-Oriented Logic Pre-training), an easy-to-understand yet highly effective further pre-training task that logically strengthens pre-trained models with the help of six types of logical indicators and a logically rich dataset, LGP (LoGic Pre-training). IDOL achieves state-of-the-art performance on ReClor and LogiQA, the two most representative logical reasoning MRC benchmarks, and generalizes to different pre-trained models and to other types of MRC benchmarks such as RACE and SQuAD 2.0, while remaining competitive in general language understanding as measured on GLUE tasks. In addition, at the onset of the era of large language models, we compare IDOL with several of them, such as ChatGPT, and find that IDOL still shows an advantage.
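The core idea, supervising a further pre-training objective with logical-indicator categories, can be illustrated with a minimal sketch. The six category names, keyword lists, and the `tag_logical_indicators` helper below are illustrative assumptions made for exposition; they are not the taxonomy, LGP construction procedure, or loss actually used in the paper.

```python
# Hypothetical sketch: tag tokens in raw text with logical-indicator
# categories so they can supervise an indicator-oriented pre-training task.
# The category names and keyword lists are illustrative guesses, not the
# exact lexicon from the IDOL paper.
import re

INDICATOR_LEXICON = {
    "premise":      {"because", "since", "given"},
    "conclusion":   {"therefore", "thus", "hence", "consequently"},
    "negation":     {"not", "no", "never", "none"},
    "adversative":  {"but", "however", "although", "nevertheless"},
    "conditional":  {"if", "unless"},
    "coordination": {"and", "or", "furthermore", "moreover"},
}

LABELS = ["none"] + sorted(INDICATOR_LEXICON)  # label 0 = ordinary token


def tag_logical_indicators(text: str):
    """Assign each word a logical-indicator label id.

    A real implementation would operate on subword tokens and handle
    multi-word indicators; this sketch only matches single lowercase words.
    """
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    label_ids = []
    for tok in tokens:
        label = "none"
        for category, words in INDICATOR_LEXICON.items():
            if tok in words:
                label = category
                break
        label_ids.append(LABELS.index(label))
    return tokens, label_ids


if __name__ == "__main__":
    tokens, labels = tag_logical_indicators(
        "If the premise holds, then the conclusion follows; however, it is not certain."
    )
    for tok, lab in zip(tokens, labels):
        print(f"{tok:12s} {LABELS[lab]}")
```

Under this assumed setup, the resulting per-token labels would serve as targets for an auxiliary classification head during further pre-training on a logically rich corpus such as LGP.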


Datasets


Introduced in the Paper:

LGP

Used in the Paper:

GLUE, MultiNLI, RACE, ReClor, LogiQA

Results from the Paper


Task: Reading Comprehension
Dataset: ReClor
Model: Rational Reasoner / IDOL
Metric Name: Test
Metric Value: 80.6
Global Rank: #1
