# Finitary


In mathematics or logic, a **finitary operation** is an operation of finite arity, that is, an operation that takes a finite number of input values. By contrast, an operation that may take an infinite number of input values is said to be **infinitary**. In standard mathematics, an operation is, by definition, finitary. Therefore, these terms are used only in the context of infinitary logic.

## Finitary argument

A **finitary argument** is one which can be translated into a finite set of symbolic propositions starting from a finite^{[1]} set of axioms. In other words, it is a proof (including all assumptions) that can be written on a large enough sheet of paper.
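As a minimal illustration (not from the article, and using the hypothetical propositions `p` and `q`), a finitary proof in a proof assistant such as Lean is a finite string of symbols built from finitely many hypotheses; modus ponens is just function application:

```lean
-- A finitary argument: finitely many symbols, finitely many assumptions.
-- From p and p → q, conclude q by applying the implication (modus ponens).
example (p q : Prop) (hp : p) (hpq : p → q) : q := hpq hp
```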

By contrast, **infinitary logic** studies logics that allow infinitely long statements and proofs. In such a logic, one can regard the existential quantifier, for instance, as derived from an infinitary disjunction.
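The remark about the existential quantifier can be written out as follows (a sketch, assuming a logic that permits a disjunction indexed by every element $a$ of the domain $D$):

```latex
% In an infinitary logic, the existential quantifier can be regarded
% as an abbreviation for a (possibly infinite) disjunction over the
% whole domain D:
\exists x\, \varphi(x) \;\equiv\; \bigvee_{a \in D} \varphi(a)
```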

## History

The emphasis on finitary methods has historical roots.

In the early 20th century, logicians aimed to solve the problem of foundations; that is, answer the question: "What is the true base of mathematics?" The program was to be able to rewrite all mathematics using an entirely syntactical language *without semantics*. In the words of David Hilbert (referring to geometry), "it does not matter if we call the things *chairs*, *tables* and *beer mugs* or *points*, *lines* and *planes*."

The stress on finiteness came from the idea that human *mathematical* thought is based on a finite number of principles^{[citation needed]} and all the reasonings follow essentially one rule: the *modus ponens*. The project was to fix a finite number of symbols (essentially the numerals 1, 2, 3, ... the letters of alphabet and some special symbols like "+", "->", "(", ")", etc.), give a finite number of propositions expressed in those symbols, which were to be taken as "foundations" (the axioms), and some rules of inference which would model the way humans make conclusions. From these, *regardless of the semantic interpretation of the symbols* the remaining theorems should follow *formally* using only the stated rules (which make mathematics look like a *game with symbols* more than a *science*) without the need to rely on ingenuity. The hope was to prove that from these axioms and rules *all* the theorems of mathematics could be deduced. That aim is known as logicism.
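The "game with symbols" described above can be sketched in code. The following is a hypothetical toy system (not from the article): finitely many axioms, implications encoded as a mapping from premise to conclusion, and a single rule, modus ponens, applied with no regard for what the symbols mean.

```python
def derive(axioms, implications, max_steps=100):
    """Close a finite set of formulas under modus ponens.

    axioms: set of formula strings taken as given.
    implications: dict mapping a premise formula to its conclusion,
        standing in for formulas of the shape "premise -> conclusion".
    """
    theorems = set(axioms)
    for _ in range(max_steps):
        # Modus ponens: from P and P -> Q, conclude Q.
        new = {implications[p] for p in theorems if p in implications}
        if new <= theorems:  # nothing new follows; derivation is complete
            break
        theorems |= new
    return theorems

# Toy system: axiom "A" plus the rules "A -> B" and "B -> C".
# Every theorem follows purely formally from the stated rules.
print(derive({"A"}, {"A": "B", "B": "C"}))  # {'A', 'B', 'C'}
```

The point of the sketch is only that derivation here is mechanical symbol manipulation; no semantic interpretation of "A", "B", or "C" is ever consulted.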

Kurt Gödel's incompleteness theorem is sometimes alleged to undermine logicism, because it shows that no particular axiomatization of mathematics can decide all statements, although the theorem itself is proved within logic.

## Notes

**^** The number of axioms *referenced* in the argument will necessarily be finite since the proof is finite, but the number of axioms from which these are *chosen* is infinite when the system has axiom schemes, as for example the axiom schemes of propositional calculus.