Learning to Prove Theorems by Learning to Generate Theorems

NeurIPS 2020 · Mingzhe Wang, Jia Deng

We consider the task of automated theorem proving, a key AI task. Deep learning has shown promise for training theorem provers, but there are limited human-written theorems and proofs available for supervised learning. To address this limitation, we propose to learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover. Experiments on real-world tasks demonstrate that synthetic data from our approach improves the theorem prover and advances the state of the art of automated theorem proving in Metamath. Code is available at https://github.com/princeton-vl/MetaGen.
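The following is a minimal, self-contained Python sketch of the data-augmentation idea described in the abstract: a generator synthesizes additional (theorem, proof) pairs that are mixed with the limited human-written corpus when training a prover. All names here (Example, TheoremGenerator, Prover, train_with_synthetic_data) are hypothetical stand-ins for illustration and are not the API of the MetaGen codebase.

```python
# Illustrative sketch only: augmenting scarce human-written (theorem, proof)
# pairs with pairs emitted by a learned generator, then training a prover on
# the mixed corpus. All classes below are hypothetical placeholders.

import random
from dataclasses import dataclass
from typing import List


@dataclass
class Example:
    theorem: str
    proof: List[str]  # a proof represented as a sequence of step labels


class TheoremGenerator:
    """Stand-in for a neural generator that synthesizes new theorems and proofs."""

    def __init__(self, seed_examples: List[Example]):
        self.seed_examples = seed_examples

    def sample(self, n: int) -> List[Example]:
        # A real generator would construct provable statements, e.g. by applying
        # inference rules forward; here we just perturb seed examples.
        out = []
        for _ in range(n):
            base = random.choice(self.seed_examples)
            out.append(Example(theorem=base.theorem + "'", proof=list(base.proof)))
        return out


class Prover:
    """Stand-in for a neural theorem prover trained by imitation."""

    def train_step(self, batch: List[Example]) -> float:
        # Placeholder loss; a real prover would score candidate proof steps.
        return 0.0


def train_with_synthetic_data(human_data: List[Example],
                              synthetic_per_epoch: int = 100,
                              epochs: int = 10) -> Prover:
    """Train a prover on human data augmented with generator output."""
    generator = TheoremGenerator(human_data)
    prover = Prover()
    for _ in range(epochs):
        synthetic = generator.sample(synthetic_per_epoch)
        mixed = human_data + synthetic  # augment the limited human corpus
        random.shuffle(mixed)
        prover.train_step(mixed)
    return prover


if __name__ == "__main__":
    # Toy human-written example (illustrative Metamath-flavored labels).
    human = [Example("|- ( 2 + 2 ) = 4", ["2cn", "2p2e4"])]
    train_with_synthetic_data(human)
```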


Datasets


Metamath set.mm
Task                      | Dataset         | Model                   | Metric Name        | Metric Value | Global Rank
--------------------------|-----------------|-------------------------|--------------------|--------------|------------
Automated Theorem Proving | Metamath set.mm | MetaGen-IL + Holophrasm | Percentage correct | 22.1         | #2
