Automated theorem proving

Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science.

Logical foundations

While the roots of formalised logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic and formalised mathematics. Frege's Begriffsschrift (1879) introduced both a complete propositional calculus and what is essentially modern predicate logic.[1] His Foundations of Arithmetic, published 1884,[2] expressed (parts of) mathematics in formal logic. This approach was continued by Russell and Whitehead in their influential Principia Mathematica, first published 1910–1913,[3] and with a revised second edition in 1927.[4] Russell and Whitehead thought they could derive all mathematical truth using axioms and inference rules of formal logic, in principle opening up the process to automatisation. In 1920, Thoralf Skolem simplified a previous result by Leopold Löwenheim, leading to the Löwenheim–Skolem theorem and, in 1930, to the notion of a Herbrand universe and a Herbrand interpretation that allowed (un)satisfiability of first-order formulas (and hence the validity of a theorem) to be reduced to (potentially infinitely many) propositional satisfiability problems.[5]

In 1929, Mojżesz Presburger showed that the theory of natural numbers with addition and equality (now called Presburger arithmetic in his honor) is decidable and gave an algorithm that could determine whether a given sentence in the language was true or false.[6][7] However, shortly after this positive result, Kurt Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931), showing that in any sufficiently strong, consistent axiomatic system there are true statements that cannot be proved within the system. This topic was further developed in the 1930s by Alonzo Church and Alan Turing, who on the one hand gave two independent but equivalent definitions of computability, and on the other gave concrete examples of undecidable questions.

First implementations

Shortly after World War II, the first general purpose computers became available. In 1954, Martin Davis programmed Presburger's algorithm for a JOHNNIAC vacuum tube computer at the Princeton Institute for Advanced Study. According to Davis, "Its great triumph was to prove that the sum of two even numbers is even".[7][8] More ambitious was the Logic Theory Machine in 1956, a deduction system for the propositional logic of the Principia Mathematica, developed by Allen Newell, Herbert A. Simon and J. C. Shaw. Also running on a JOHNNIAC, the Logic Theory Machine constructed proofs from a small set of propositional axioms and three deduction rules: modus ponens, (propositional) variable substitution, and the replacement of formulas by their definition. The system used heuristic guidance, and managed to prove 38 of the first 52 theorems of the Principia.[7]

The "heuristic" approach of the Logic Theory Machine tried to emulate human mathematicians, and could not guarantee that a proof could be found for every valid theorem even in principle. In contrast, other, more systematic algorithms achieved, at least theoretically, completeness for first-order logic. Initial approaches relied on the results of Herbrand and Skolem to convert a first-order formula into successively larger sets of propositional formulae by instantiating variables with terms from the Herbrand universe. The propositional formulas could then be checked for unsatisfiability using a number of methods. Gilmore's program used conversion to disjunctive normal form, a form in which the satisfiability of a formula is obvious.[7][9]

Decidability of the problem

Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible. For the frequent case of propositional logic, the problem is decidable but co-NP-complete, and hence no polynomial-time algorithm is believed to exist for general proof tasks. For first-order predicate calculus, Gödel's completeness theorem states that the theorems (provable statements) are exactly the logically valid well-formed formulas, so the set of valid formulas is recursively enumerable: given unbounded resources, any valid formula can eventually be proven. However, invalid formulas (those that are not entailed by a given theory) cannot always be recognized.
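
As a concrete illustration of the propositional case being decidable yet expensive, the following minimal sketch decides validity by checking all 2^n truth assignments; the function name and formula representation are illustrative assumptions made only for this example.

    from itertools import product

    def is_valid(formula, variables):
        """Decide propositional validity by brute force over all truth assignments.

        formula: a function mapping an assignment dict {name: bool} to bool
        variables: the propositional variables occurring in the formula
        The loop runs 2**len(variables) times, i.e. exponential in the number of variables.
        """
        for values in product([False, True], repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if not formula(assignment):
                return False   # a falsifying assignment exists: not valid
        return True            # true under every assignment: valid

    # (p -> q) or (q -> p) is a tautology:
    print(is_valid(lambda v: (not v["p"] or v["q"]) or (not v["q"] or v["p"]),
                   ["p", "q"]))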

The above applies to first-order theories, such as Peano arithmetic. However, for a specific model that may be described by a first-order theory, some statements may be true but undecidable in the theory used to describe the model. For example, by Gödel's incompleteness theorem, we know that any theory whose proper axioms are true for the natural numbers cannot prove all first-order statements true for the natural numbers, even if the list of proper axioms is allowed to be infinite, as long as it is enumerable. It follows that an automated theorem prover will fail to terminate while searching for a proof precisely when the statement being investigated is undecidable in the theory being used, even if it is true in the model of interest. Despite this theoretical limit, in practice, theorem provers can solve many hard problems, even in models that are not fully described by any first-order theory (such as the integers).

Related problems

A simpler, but related, problem is proof verification, where an existing proof for a theorem is certified valid. For this, it is generally required that each individual proof step can be verified by a primitive recursive function or program, and hence the problem is always decidable.
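
The following minimal sketch illustrates why proof verification is decidable: every line of a Hilbert-style proof must either be a stated axiom or follow from two earlier lines by modus ponens, and each condition is a finite syntactic test. The representation of formulas and proofs is hypothetical and chosen only for the example.

    def check_proof(axioms, proof):
        """Check a Hilbert-style proof line by line.

        Formulas are atoms (strings) or implications ("->", A, B).
        Each proof step is ("axiom", formula) or ("mp", i, j, formula), asserting
        that the formula follows from earlier lines i and j by modus ponens.
        Every check is a finite syntactic comparison, so checking always terminates.
        """
        derived = []
        for step in proof:
            if step[0] == "axiom":
                formula = step[1]
                ok = formula in axioms
            elif step[0] == "mp":
                _, i, j, formula = step
                # line j must be the implication (line i -> formula)
                ok = i < len(derived) and j < len(derived) and \
                    derived[j] == ("->", derived[i], formula)
            else:
                ok = False
            if not ok:
                return False
            derived.append(formula)
        return True

    axioms = {"p", ("->", "p", "q")}
    proof = [("axiom", "p"), ("axiom", ("->", "p", "q")), ("mp", 0, 1, "q")]
    print(check_proof(axioms, proof))   # True: the proof of q is accepted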

Since the proofs generated by automated theorem provers are typically very large, the problem of proof compression is crucial, and various techniques have been developed to make the prover's output smaller, and consequently more easily understandable and checkable.

Proof assistants require a human user to give hints to the system. Depending on the degree of automation, the prover can essentially be reduced to a proof checker, with the user providing the proof in a formal way, or significant proof tasks can be performed automatically. Interactive provers are used for a variety of tasks, but even fully automatic systems have proved a number of interesting and hard theorems, including at least one that has eluded human mathematicians for a long time, namely the Robbins conjecture.[10][11] However, these successes are sporadic, and work on hard problems usually requires a proficient user.

Another distinction is sometimes drawn between theorem proving and other techniques, where a process is considered to be theorem proving if it consists of a traditional proof, starting with axioms and producing new inference steps using rules of inference. Other techniques would include model checking, which, in the simplest case, involves brute-force enumeration of many possible states (although the actual implementation of model checkers requires much cleverness, and does not simply reduce to brute force).
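
The following toy sketch shows what brute-force state enumeration can look like in the simplest case: all states reachable from the initial states of a finite transition system are explored, and an invariant is checked in each. Real model checkers use far more sophisticated representations; the function and its arguments are illustrative assumptions only.

    from collections import deque

    def check_invariant(initial, successors, invariant):
        """Explore every reachable state of a finite transition system.

        initial: iterable of initial states
        successors: function mapping a state to its successor states
        invariant: predicate that should hold in every reachable state
        Returns a violating state if one is reachable, otherwise None.
        """
        seen = set(initial)
        queue = deque(seen)
        while queue:
            state = queue.popleft()
            if not invariant(state):
                return state            # counterexample found
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return None

    # Toy system: a counter modulo 16 starting at 0; the "invariant" that it never
    # reaches 13 is violated, and the violating state is returned.
    print(check_invariant([0], lambda s: [(s + 1) % 16], lambda s: s != 13))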

There are hybrid theorem proving systems which use model checking as an inference rule. There are also programs which were written to prove a particular theorem, with a (usually informal) proof that if the program finishes with a certain result, then the theorem is true. A good example of this was the machine-aided proof of the four color theorem, which was very controversial as the first claimed mathematical proof that was essentially impossible to verify by humans due to the enormous size of the program's calculation (such proofs are called non-surveyable proofs). Another example of a program-assisted proof is the one that shows that the game of Connect Four can always be won by the first player.

Industrial uses

Commercial use of automated theorem proving is mostly concentrated in integrated circuit design and verification. Since the Pentium FDIV bug, the complicated floating point units of modern microprocessors have been designed with extra scrutiny. AMD, Intel and others use automated theorem proving to verify that division and other operations are correctly implemented in their processors.

First-order theorem proving

In the late 1960s, agencies funding research in automated deduction began to emphasize the need for practical applications. One of the first fruitful areas was program verification, in which first-order theorem provers were applied to the problem of verifying the correctness of computer programs in languages such as Pascal, Ada, and Java. Notable among early program verification systems was the Stanford Pascal Verifier developed by David Luckham at Stanford University. This was based on the Stanford Resolution Prover, also developed at Stanford using J. A. Robinson's resolution principle. This was the first automated deduction system to demonstrate an ability to solve mathematical problems that were announced in the Notices of the American Mathematical Society before solutions were formally published.

First-order theorem proving is one of the most mature subfields of automated theorem proving. The logic is expressive enough to allow the specification of arbitrary problems, often in a reasonably natural and intuitive way. On the other hand, it is still semi-decidable, and a number of sound and complete calculi have been developed, enabling fully automated systems. More expressive logics, such as higher-order logics, allow the convenient expression of a wider range of problems than first-order logic, but theorem proving for these logics is less well developed.
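
Most sound and complete calculi for first-order logic, such as resolution, rely on syntactic unification of terms to instantiate variables only as far as the proof search requires. The following simplified sketch of Robinson-style unification (with an occurs check) is illustrative only; the term representation and function names are assumptions made for the example and do not reflect any particular system mentioned below.

    def is_var(x):
        """A term is a variable if it is a string starting with '?'."""
        return isinstance(x, str) and x.startswith("?")

    def walk(x, subst):
        """Follow variable bindings recorded in the substitution."""
        while is_var(x) and x in subst:
            x = subst[x]
        return x

    def occurs(v, t, subst):
        """Occurs check: does variable v occur in term t under subst?"""
        t = walk(t, subst)
        if v == t:
            return True
        return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

    def unify(s, t, subst=None):
        """Robinson-style unification.

        Terms: strings starting with '?' are variables, other strings are
        constants, and tuples (f, arg1, ..., argn) are compound terms.
        Returns a substitution (dict) or None if the terms do not unify.
        """
        subst = {} if subst is None else subst
        s, t = walk(s, subst), walk(t, subst)
        if s == t:
            return subst
        if is_var(s):
            return None if occurs(s, t, subst) else {**subst, s: t}
        if is_var(t):
            return None if occurs(t, s, subst) else {**subst, t: s}
        if isinstance(s, tuple) and isinstance(t, tuple) \
                and len(s) == len(t) and s[0] == t[0]:
            for a, b in zip(s[1:], t[1:]):
                subst = unify(a, b, subst)
                if subst is None:
                    return None
            return subst
        return None

    # unify f(?x, g(a)) with f(b, g(?y))  ->  {'?x': 'b', '?y': 'a'}
    print(unify(("f", "?x", ("g", "a")), ("f", "b", ("g", "?y"))))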

Benchmarks, Competitions, and Sources

The quality of implemented systems has benefited from the existence of a large library of standard benchmark examples — the Thousands of Problems for Theorem Provers (TPTP) Problem Library[12] — as well as from the CADE ATP System Competition (CASC), a yearly competition of first-order systems for many important classes of first-order problems.

Some important systems (all have won at least one CASC competition division) are listed below.

The Theorem Prover Museum is an initiative to conserve the sources of theorem prover systems for future analysis, since they are important cultural/scientific artefacts. It has the sources of many of the systems mentioned above.

Software systems

Comparison

Name | License type | Web service | Library | Standalone | Last update
ACL2 | 3-clause BSD | No | No | Yes | 2017-03
Prover9/Otter | Public Domain | Via System on TPTP | Yes | No | 2009
Metis | MIT License | No | Yes | No | 2018-03-01
MetiTarski | MIT | Via System on TPTP | Yes | Yes | 2014-10-21
Jape | GPLv2 | Yes | Yes | No | 2015-05-15
PVS | GPLv2 | No | Yes | No | 2013-01-14
Leo II | BSD License | Via System on TPTP | Yes | Yes | 2013
EQP | ? | No | Yes | No | 2009-05
SAD | GPLv3 | Yes | Yes | No | 2008-08-27
PhoX | ? | No | Yes | No | 2017-09-28
KeYmaera | GPL | Via Java Webstart | Yes | Yes | 2015-03-11
Gandalf | ? | No | Yes | No | 2009
E | GPL | Via System on TPTP | No | Yes | 2017-07-04
SNARK | Mozilla Public License 1.1 | No | Yes | No | 2012
Vampire | Vampire License | Via System on TPTP | Yes | Yes | 2017-12-14
Theorem Proving System (TPS) | TPS Distribution Agreement | No | Yes | No | 2012-02-04
SPASS | FreeBSD license | Yes | Yes | Yes | 2005-11
IsaPlanner | GPL | No | Yes | Yes | 2007
KeY | GPL | Yes | Yes | Yes | 2017-10-11
Princess | LGPL v2.1 | Via Java Webstart and System on TPTP | Yes | Yes | 2018-01-27
iProver | GPL | Via System on TPTP | No | Yes | 2018
Meta Theorem | Freeware | No | No | Yes | 2019

Notes

  1. ^ Frege, Gottlob (1879). Begriffsschrift. Verlag von Louis Nebert.
  2. ^ Frege, Gottlob (1884). Die Grundlagen der Arithmetik (PDF). Breslau: Wilhelm Koebner. Archived from the original (PDF) on 2007-09-26. Retrieved 2012-09-02.
  3. ^ Bertrand Russell; Alfred North Whitehead (1910–1913). Principia Mathematica (1st ed.). Cambridge University Press.
  4. ^ Bertrand Russell; Alfred North Whitehead (1927). Principia Mathematica (2nd ed.). Cambridge University Press.
  5. ^ Herbrand, Jacques (1930). Recherches sur la théorie de la démonstration.
  6. ^ Presburger, Mojżesz (1929). "Über die Vollständigkeit eines gewissen Systems der Arithmetik ganzer Zahlen, in welchem die Addition als einzige Operation hervortritt". Comptes Rendus du I congrès de Mathématiciens des Pays Slaves. Warszawa: 92–101.
  7. ^ a b c d Davis, Martin (2001), "The Early History of Automated Deduction", in Robinson, Alan; Voronkov, Andrei (eds.), Handbook of Automated Reasoning, vol. 1, Elsevier.
  8. ^ Bibel, Wolfgang (2007). "Early History and Perspectives of Automated Deduction" (PDF). KI 2007. LNAI. Springer (4667): 2–18. Retrieved 2 September 2012.
  9. ^ Gilmore, Paul (1960). "A proof procedure for quantification theory: its justification and realisation". IBM Journal of Research and Development. 4: 28–35. doi:10.1147/rd.41.0028.
  10. ^ W.W. McCune (1997). "Solution of the Robbins Problem". Journal of Automated Reasoning. 19 (3).
  11. ^ Gina Kolata (December 10, 1996). "Computer Math Proof Shows Reasoning Power". The New York Times. Retrieved 2008-10-11.
  12. ^ Sutcliffe, Geoff. "The TPTP Problem Library for Automated Theorem Proving". Retrieved 8 September 2012.
  13. ^ Bundy, Alan. The automation of proof by mathematical induction. 1999.
  14. ^ Artosi, Alberto, Paola Cattabriga, and Guido Governatori. "Ked: A deontic theorem prover." Eleventh International Conference on Logic Programming (ICLP’94). 1994.
  15. ^ Otten, Jens, and Wolfgang Bibel. "leanCoP: lean connection-based theorem proving." Journal of Symbolic Computation 36.1-2 (2003): 139-161.
  16. ^ del Cerro, Luis Farinas, et al. "Lotrec: the generic tableau prover for modal and description logics." International Joint Conference on Automated Reasoning. Springer, Berlin, Heidelberg, 2001.
  17. ^ Hickey, Jason, et al. "MetaPRL–a modular logical environment." International Conference on Theorem Proving in Higher Order Logics. Springer, Berlin, Heidelberg, 2003.
  18. ^ "SRI International Computer Science Laboratory – John Rushby". SRI International. Retrieved 22 September 2012.
