ISARSTEP: A BENCHMARK FOR HIGH-LEVEL MATHEMATICAL REASONING

Abstract

A well-defined benchmark is essential for measuring and accelerating research progress in machine learning. In this paper, we present a benchmark for high-level mathematical reasoning and study the reasoning capabilities of neural sequence-to-sequence models. We build a non-synthetic dataset from the largest repository of proofs written by human experts in a theorem prover. The dataset covers a broad range of undergraduate and research-level theorems in mathematics and computer science. In our task, a model is required to fill in a missing intermediate proposition given the surrounding proof. This task provides a starting point for the long-term goal of having machines generate human-readable proofs automatically. Our experiments and analysis reveal that while the task is challenging, neural models can capture non-trivial mathematical reasoning. We further design a hierarchical transformer that outperforms the transformer baseline. The dataset and models are available at https://github.com/Wenda302/IsarStep.

1. INTRODUCTION

Neural networks have achieved outstanding performance on a wide range of problems in natural language processing, computer vision, and speech recognition. However, research investigating their capacity for mathematical reasoning is still limited, with earlier attempts focusing on simple arithmetic tasks like integer addition and multiplication (Zaremba & Sutskever, 2014; Kaiser & Sutskever, 2016; Trask et al., 2018). More recently, there has been work on solving school-level mathematical problems (Saxton et al., 2019), logical reasoning (Evans et al., 2018), function integration and ordinary differential equations (Lample & Charton, 2020), and properties of differential systems (Charton et al., 2020). While these are valuable contributions to the machine learning community, they focus on generating answers to questions from a specific domain and were carried out on synthetic datasets with small vocabularies (e.g. up to 100 unique tokens).

In this paper, we consider general undergraduate and research-level mathematical proofs as a target for neural networks. When humans prove a theorem, a crucial step is to propose an intermediate proposition that bridges the gap between the goal and the currently known facts. This step requires sophisticated reasoning capabilities such as creative thinking, inference, understanding of existing conditions, and symbolic manipulation of rules. For example, consider the following proof of the irrationality of √2.

Proof of irrationality of √2. Assume √2 is rational. Then there exists a pair of coprime integers a and b such that √2 = a/b, and it follows that 2 = a²/b² and then 2b² = a². Hence a² is even, and therefore a is even. Thus there exists an integer c such that a = 2c, which combined with 2b² = a² yields 2c² = b²; hence b is also even. So a and b are both even although they are coprime: contradiction.
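The argument above can be laid out as a chain of implications; the step asserting that a is even is the kind of intermediate proposition the task asks a model to supply from the surrounding steps. The following LaTeX rendering of the chain, with our own annotations, is a sketch for illustration only:

```latex
\begin{align*}
  \sqrt{2} = \tfrac{a}{b},\ \gcd(a,b) = 1
    &\implies 2b^2 = a^2
      && \text{square and clear denominators}\\
    &\implies a \text{ is even}
      && \text{the intermediate proposition to be filled in}\\
    &\implies a = 2c \implies 2c^2 = b^2
      && \text{substitute } a = 2c \text{ into } 2b^2 = a^2\\
    &\implies b \text{ is even}
      && \text{contradicting } \gcd(a,b) = 1.
\end{align*}
```

Note that the highlighted step is not a mechanical rewrite of its neighbours: recovering it requires recognising that 2b² = a² forces a² (and hence a) to be even, which is the kind of reasoning the benchmark probes.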

Availability: https://github.com/Wenda302/IsarStep

