Finitary: Difference between revisions
Revision as of 17:31, 16 December 2003
In mathematics or logic, a finitary operation is one, like those of arithmetic, that takes a finite number of input values to produce an output. An operation such as taking the integral of a function, in calculus, is defined in such a way as to depend on all the values of the function (infinitely many of them, in general), and is thus not prima facie finitary. In the logic proposed for quantum mechanics, which depends on the use of subspaces of Hilbert space as propositions, operations such as taking the intersection of subspaces are used; in general this cannot be considered a finitary operation.
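The contrast above can be sketched in code (an illustration, not part of the article): a finitary operation inspects a fixed, finite number of inputs, while an integral depends on a function's values at infinitely many points, so any computation can only approximate it from finitely many samples. The function names below are hypothetical.

```python
# A finitary operation: arity 2, the output depends on exactly
# two input values.
def add(x, y):
    return x + y

# An integral is not finitary: it depends on f's values at (in
# general) infinitely many points. A computation can only sample
# finitely many of them, e.g. via a left Riemann sum.
def riemann_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] from n sample points."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 1000)  # close to 1/3
```

Increasing `n` improves the approximation, but no finite `n` recovers the integral exactly in general, which is the sense in which the operation is not finitary.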
A finitary argument is one which can be translated into a finite set of symbolic propositions starting from a finite set of axioms. In other words, it is a proof that can be written on a large enough sheet of paper (including all assumptions).
The emphasis on finitary methods has historical roots.
In the early 20th century, a branch of logic (or mathematics; the distinction is not clear at this level) was developed which aimed to solve the problem of foundations, that is, to answer the question: what is the true basis of mathematics? The program was to rewrite all of mathematics in a language without semantics, one that is purely syntactic. In David Hilbert's words (referring to geometry), it does not matter whether we call the things chairs, tables and cans of beer or points, lines and planes.
The stress on finiteness came from the idea that human mathematical thought is based on a finite number of principles, and that all reasoning follows essentially one rule: modus ponens. The project was to fix a finite number of symbols (essentially the numerals 1, 2, 3, ..., the letters of the alphabet, and some special symbols like "+", "->", "(", ")", etc.), give a finite number of propositions expressed in those symbols which were to be taken as foundations (the axioms), and some rules of inference which would model the way humans draw conclusions. From these, regardless of the semantic interpretation of the symbols, the remaining theorems should follow automatically using the stated rules (which makes mathematics look more like a game with symbols than a science). The hope was to prove that from these axioms and rules all the theorems of mathematics could be deduced.
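The scheme just described, a finite stock of axioms and modus ponens as the sole inference rule, can be sketched as a tiny proof checker (a minimal illustration, not part of the article; the formula encoding and function names are invented for this example). A proof is a finite list of lines, each either an axiom or obtained by modus ponens from two earlier lines.

```python
def implies(p, q):
    """Encode the formula p -> q as a nested tuple."""
    return ("->", p, q)

def check_proof(axioms, lines):
    """Return True if every line is an axiom or follows by modus
    ponens (from p and p -> q, conclude q) from earlier lines."""
    proved = []
    for formula in lines:
        ok = formula in axioms
        if not ok:
            # look for some earlier p together with p -> formula
            ok = any(("->", p, formula) in proved for p in proved)
        if not ok:
            return False
        proved.append(formula)
    return True

# Example: from the axioms A and A -> B, the three-line proof
# [A, A -> B, B] is valid and entirely finite.
axioms = ["A", implies("A", "B")]
assert check_proof(axioms, ["A", implies("A", "B"), "B"])
```

Checking such a proof requires only inspecting finitely many symbols and lines, which is exactly what makes the argument finitary in the sense defined above.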
The aim itself was proved impossible by Kurt Gödel in 1931 with his incompleteness theorem, but the spirit has been kept since, as it has proved the safest known way of doing mathematics in a global context, especially because mathematicians no longer need to resort to imagination in order to define anything: as long as they can express their concepts in the language, the concepts can (in principle) be understood by anyone. This has made mathematics the science without interpretations, which others sometimes tend to imitate (especially modern physics).