User:Robert Kowalski/sandbox: Difference between revisions
(27 intermediate revisions by 2 users not shown)

<!-- EDIT BELOW THIS LINE -->
==Problems with the Logic section==

=== Semantics ===

# "Logical [[inference]] (or [[Deductive reasoning|deduction]]) is the process of [[logical proof|proving]] a new statement ([[Logical consequence|conclusion]]) from other statements that are already known to be true (the [[premise]]s)."
::: Replace "known to be true" by "assumed to be true".
=== Logic ===

{{main|Syntax and semantics of logic programming}}

Formal [[Logic]] is used for [[knowledge representation]] and [[automatic reasoning|reasoning]].<ref name="Logic">
[[Logic]]:
* {{Harvtxt|Russell|Norvig|2021|loc=chpt. 6–9}}
* {{Harvtxt|Luger|Stubblefield|2004|pp=35–77}}
* {{Harvtxt|Nilsson|1998|loc=chpt. 13–16}}
</ref>

Formal logic comes in two main forms: [[propositional logic]] (which operates on statements that are true or false and uses [[logical connective]]s such as "and", "or", "not" and "implies")<ref name="Propositional logic">
[[Propositional logic]]:
* {{Harvtxt|Russell|Norvig|2021|loc=chpt. 6}}
* {{Harvtxt|Luger|Stubblefield|2004|pp=45–50}}
* {{Harvtxt|Nilsson|1998|loc=chpt. 13}}
</ref> and [[predicate logic]] (which also operates on objects, predicates and relations and uses [[Quantifier (logic)|quantifier]]s such as "''Every'' ''X'' is a ''Y''" and "There are ''some'' ''X''s that are ''Y''s").<ref name="Predicate logic">
[[First-order logic]] and features such as [[Equality (mathematics)|equality]]:
* {{Harvtxt|Russell|Norvig|2021|loc=chpt. 7}}
* {{Harvtxt|Poole|Mackworth|Goebel|1998|pp=268–275}}
* {{Harvtxt|Luger|Stubblefield|2004|pp=50–62}}
* {{Harvtxt|Nilsson|1998|loc=chpt. 15}}
</ref>
[[Deductive reasoning]] in logic is the process of [[logical proof|proving]] a new statement ([[Logical consequence|conclusion]]) from other statements that are given and assumed to be true (the [[premise]]s).<ref name="Inference">
[[Logical inference]]:
* {{Harvtxt|Russell|Norvig|2021|loc = chpt. 10}}
</ref> Proofs can be structured as proof [[tree structure|trees]], in which nodes are labelled by sentences, and children nodes are connected to parent nodes by [[inference rule]]s.

Given a problem and a set of premises, problem-solving reduces to searching for a proof tree whose root node is labelled by a solution of the problem and whose leaf nodes are labelled by premises or [[axiom]]s. In the case of [[Horn clause]]s, problem-solving search can be performed by reasoning [[Forward chaining|forwards]] from the premises or [[backward chaining|backwards]] from the problem.<ref name="Logic as search">
Logical deduction as search:
* {{Harvtxt|Russell|Norvig|2021|loc= §9.3, §9.4}}
* {{Harvtxt|Poole|Mackworth|Goebel|1998|pp=46–52}}
* {{Harvtxt|Luger|Stubblefield|2004|pp=62–73}}
* {{Harvtxt|Nilsson|1998|loc=chpt. 4.2, 7.2}}
</ref> In the more general case of the clausal form of first-order logic, [[resolution (logic)|resolution]] is a single, axiom-free rule of inference, in which a problem is solved by proving a contradiction from premises that include the negation of the problem to be solved.<ref name="Resolution">
[[Resolution (logic)|Resolution]] and [[unification (computer science)|unification]]:
* {{Harvtxt|Russell|Norvig|2021|loc = §7.5.2, §9.2, §9.5}}
</ref>

Inference in both Horn clause logic and [[first-order logic]] is [[Undecidable problem|undecidable]], and therefore [[Intractable problem|intractable]]. However, backward reasoning with Horn clauses, which underpins computation in the [[logic programming]] language [[Prolog]], is [[Turing completeness|Turing complete]]. Moreover, its efficiency is competitive with computation in other [[symbolic programming]] languages.<ref>{{cite journal |last1=Warren |first1=D.H. |last2=Pereira |first2=L.M. |last3=Pereira |first3=F. |date=1977 |title=Prolog-the language and its implementation compared with Lisp |journal=[[ACM SIGPLAN Notices]] |volume=12 |issue=8 |pages=109–115|doi=10.1145/872734.806939 }}</ref>

[[Fuzzy logic]] assigns a "degree of truth" between 0 and 1. It can therefore handle propositions that are vague and partially true.<ref name="Fuzzy logic">
Fuzzy logic:
* {{Harvtxt|Russell|Norvig|2021|pp=214, 255, 459}}
* {{Harvtxt|Scientific American|1999}}
</ref>

[[Non-monotonic logic]]s, including Horn clause logic augmented with [[negation as failure]], are designed to handle [[default reasoning]].<ref name="Default reasoning and non-monotonic logic"/>

Other specialized versions of logic have been developed to describe many complex domains (see [[#Knowledge representation|knowledge representation]] above).

Viewed in purely logical terms, there are two approaches to the declarative semantics of Horn clause logic programs: One approach is the original ''[[logical consequence]] semantics'', which understands solving a goal as showing that the goal is a theorem that is true in all [[Structure_(mathematical_logic)#Structures_and_first-order_logic|models]] of the program. In this approach, computation is theorem-proving in [[first-order logic]]; and both [[backward chaining|backward reasoning]], as in SLD resolution, and [[forward chaining|forward reasoning]], as in hyper-resolution, are correct and complete theorem-proving methods. Sometimes such theorem-proving methods are also regarded as providing a separate [[proof-theoretic semantics|proof-theoretic (or operational) semantics]] for logic programs. But from a logical point of view, they are proof methods, rather than semantics.

The other approach to the declarative semantics of Horn clause programs is the ''[[satisfiability]] semantics'', which understands solving a goal as showing that the goal is true (or satisfied) in some [[intended interpretation|intended (or standard) model]] of the program. For Horn clause programs, there always exists such a standard model: It is the unique ''minimal model'' of the program.

Informally speaking, a minimal model is a model that, when it is viewed as the set of all (variable-free) facts that are true in the model, contains no smaller set of facts that is also a model of the program.

For example, the following facts represent the minimal model of the family relationships example in the introduction of this article. All other variable-free facts are false in the model:

<syntaxhighlight lang="prolog">
mother_child(elizabeth, charles).
father_child(charles, william).
father_child(charles, harry).
parent_child(elizabeth, charles).
parent_child(charles, william).
parent_child(charles, harry).
grandparent_child(elizabeth, william).
grandparent_child(elizabeth, harry).
</syntaxhighlight>

The satisfiability semantics also has an alternative, more mathematical characterisation as the [[least fixed point]] of the function that derives new facts from existing facts in one step of inference.

Remarkably, the same problem-solving methods of forward and backward reasoning, which were originally developed for the logical consequence semantics, are equally applicable to the satisfiability semantics: Forward reasoning generates the minimal model of a Horn clause program, by deriving new facts from existing facts, until no new facts can be generated. Backward reasoning, which succeeds by reducing a goal to subgoals, until all subgoals are solved by facts, ensures that the goal is true in the minimal model, without generating the model explicitly.<ref>{{cite journal|last1=Van Emden|first1=M.H.|last2=Kowalski|first2=R.A.|date=October 1976|title=The semantics of predicate logic as a programming language|journal=[[Journal of the ACM]]|volume=23|issue=4|pages=733–742|doi=10.1145/321978.321991 |s2cid=11048276 |doi-access=free}}</ref>
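Forward reasoning to the least fixed point can be sketched in Python (an illustrative re-encoding of the family example, not part of the Prolog text): facts are tuples, and the two rules deriving <code>parent_child</code> and <code>grandparent_child</code> are applied repeatedly until no new facts are generated.

```python
# Minimal sketch of forward chaining for the family example.
# Rules (assumed from the article's introduction):
#   parent_child(X, Y)      <- mother_child(X, Y) or father_child(X, Y)
#   grandparent_child(X, Y) <- parent_child(X, Z), parent_child(Z, Y)
facts = {("mother_child", "elizabeth", "charles"),
         ("father_child", "charles", "william"),
         ("father_child", "charles", "harry")}

def step(facts):
    """One step of inference: add every fact that follows immediately."""
    derived = set(facts)
    for pred, x, y in facts:
        if pred in ("mother_child", "father_child"):
            derived.add(("parent_child", x, y))
    for pred1, x, z in facts:
        for pred2, w, y in facts:
            if pred1 == pred2 == "parent_child" and z == w:
                derived.add(("grandparent_child", x, y))
    return derived

# Iterate to the least fixed point: the result is the minimal model.
while (new := step(facts)) != facts:
    facts = new

print(len(facts))  # 8, the eight facts listed above
```

The loop terminates exactly when one step of inference derives nothing new, which is the least-fixed-point characterisation described above.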
==Recursion in Logic Programming==

In the procedural interpretation of [[logic programming|logic programs]], clauses (or rules) of the form <syntaxhighlight inline lang="prolog">A :- B</syntaxhighlight> are treated as procedures, which reduce goals of the form <code>A</code> to subgoals of the form <code>B</code>.

For example, the [[Prolog]] clauses:

<syntaxhighlight lang="prolog">
path(X,Y) :- arc(X,Y).
path(X,Y) :- arc(X,Z), path(Z,Y).
</syntaxhighlight>

define a procedure, which can be used to search for a path from X to Y, either by finding a direct arc from X to Y, or by finding an arc from X to Z, and then searching recursively for a path from Z to Y. Prolog executes the procedure by [[backward chaining|reasoning top-down (or backwards)]] and searching the space of possible paths depth-first, one branch at a time. If it tries the second clause, and finitely fails to find a path from Z to Y, it backtracks and tries to find an arc from X to another node, and then searches for a path from that other node to Y.

The difference between the two declarative semantics can be seen with the Horn clause definitions of addition and multiplication in [[Peano arithmetic#Defining arithmetic operations and relations|successor arithmetic]]:

<syntaxhighlight lang="prolog">
add(X, 0, X).
add(X, s(Y), s(Z)) :- add(X, Y, Z).

multiply(X, 0, 0).
multiply(X, s(Y), W) :- multiply(X, Y, Z), add(X, Z, W).
</syntaxhighlight>

Here the function symbol <code>s</code> represents the successor function, and the composite term <code>s(X)</code> represents the successor of <code>X</code>, namely <code>X + 1</code>. The predicate <code>add(X, Y, Z)</code> represents <code>X + Y = Z</code>, and <code>multiply(X, Y, Z)</code> represents <code>X <math>\cdot</math> Y = Z</code>.

Both semantics give the same answers for the same existentially quantified conjunctions of addition and multiplication goals. For example:

<syntaxhighlight lang="prolog">
?- multiply(s(s(0)), s(s(0)), Z).
Z = s(s(s(s(0)))).

?- multiply(X, X, Y), add(X, X, Y).
X = 0, Y = 0.
X = s(s(0)), Y = s(s(s(s(0)))).
</syntaxhighlight>

However, with the logical-consequence semantics, there are non-standard models of the program in which, for example, <code>add(s(s(0)), s(s(0)), s(s(s(s(s(0))))))</code>, i.e. <code>2 + 2 = 5</code>, is true. But with the satisfiability semantics, there is only one model, namely the standard model of arithmetic, in which <code>2 + 2 = 5</code> is false.

In both semantics, the goal <syntaxhighlight inline lang="prolog">?- add(s(s(0)), s(s(0)), s(s(s(s(s(0))))))</syntaxhighlight> fails. In the satisfiability semantics, the failure of the goal means that the goal is false. But in the logical consequence semantics, the failure means that the goal is not believed.
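The standard-model reading of these clauses can be checked with a small functional sketch (illustrative Python, not part of the article's Prolog; the term <code>s(X)</code> is encoded as a nested tuple <code>("s", X)</code>):

```python
# Functional reading of the two Horn clause definitions:
#   add(X, 0, X).        add(X, s(Y), s(Z)) :- add(X, Y, Z).
#   multiply(X, 0, 0).   multiply(X, s(Y), W) :- multiply(X, Y, Z), add(X, Z, W).
def s(x):
    return ("s", x)

def add(x, y):
    # add(X, 0) = X; add(X, s(Y)) = s(add(X, Y))
    return x if y == 0 else s(add(x, y[1]))

def multiply(x, y):
    # multiply(X, 0) = 0; multiply(X, s(Y)) = add(X, multiply(X, Y))
    return 0 if y == 0 else add(x, multiply(x, y[1]))

two = s(s(0))
four = s(s(s(s(0))))
five = s(four)

print(multiply(two, two) == four)  # True:  2 * 2 = 4 in the standard model
print(add(two, two) == five)       # False: the goal 2 + 2 = 5 fails
```

In the standard model there is exactly one value for each sum and product, which is why <code>2 + 2 = 5</code> is false rather than merely unproved.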
===Negation as failure===

{{Main|Negation as failure}}

[[Negation as failure]] (NAF), as a way of reasoning with negative conditions, was already a feature of early Prolog systems. The resulting extension of [[SLD resolution]] is called [[SLD resolution#SLDNF|SLDNF]]. A similar construct, called "thnot", also existed in [[Micro-Planner (programming language)|Micro-Planner]].

The logical semantics of negation as failure was unresolved until [[Keith Clark (computer scientist)|Keith Clark]]<ref>{{cite book |last=Clark |first=K.L. |title=Logic and Data Bases |chapter=Negation as Failure |author-link=Keith Clark (computer scientist) |date=1977 |pages=293–322 |doi=10.1007/978-1-4684-3384-5_11 |location=Boston, MA |publisher=Springer US|isbn=978-1-4684-3386-9 }}</ref> showed that, under certain natural conditions, NAF is an efficient, correct (and sometimes complete) way of reasoning with the logical consequence semantics using the completion of a logic program in first-order logic.
Completion amounts roughly to regarding the set of all the program clauses with the same predicate in the head, say:

:<kbd>A :- Body<sub>1</sub>.</kbd>
:<kbd> …</kbd>
:<kbd>A :- Body<sub>k</sub>.</kbd>

as a definition of the predicate:

:<kbd>A iff (Body<sub>1</sub> or … or Body<sub>k</sub>)</kbd>

where <kbd>iff</kbd> means "if and only if". The completion also includes axioms of equality, which correspond to [[Unification (computer science)|unification]]. Clark showed that proofs generated by SLDNF are structurally similar to proofs generated by a natural deduction style of reasoning with the completion of the program.
Consider, for example, the following program:

<syntaxhighlight lang="prolog">
should_receive_sanction(X, punishment) :-
    is_a_thief(X),
    not should_receive_sanction(X, rehabilitation).

should_receive_sanction(X, rehabilitation) :-
    is_a_thief(X),
    is_a_minor(X),
    not is_violent(X).

is_a_thief(tom).
</syntaxhighlight>
Given the goal of determining whether tom should receive a sanction, the first rule succeeds in showing that tom should be punished:

<syntaxhighlight lang="prolog">
?- should_receive_sanction(tom, Sanction).
Sanction = punishment.
</syntaxhighlight>

This is because tom is a thief, and it cannot be shown that tom should be rehabilitated, because it cannot be shown that tom is a minor.
If, however, we receive new information that tom is indeed a minor, the previous conclusion that tom should be punished is replaced by the new conclusion that tom should be rehabilitated:

<syntaxhighlight lang="prolog">
is_a_minor(tom).

?- should_receive_sanction(tom, Sanction).
Sanction = rehabilitation.
</syntaxhighlight>
This property of withdrawing a conclusion when new information is added is called non-monotonicity, and it makes logic programming a [[non-monotonic logic]].

But, if we are now told that tom is violent, the conclusion that tom should be punished will be reinstated:

<syntaxhighlight lang="prolog">
is_violent(tom).

?- should_receive_sanction(tom, Sanction).
Sanction = punishment.
</syntaxhighlight>
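The non-monotonic behaviour of the three queries can be sketched in Python (an illustrative re-encoding of the two rules, with negation as failure modelled as Boolean negation over the current set of facts):

```python
def sanction(facts):
    """Mirror the two Prolog rules for tom:
    rehabilitation holds if thief, minor, and NOT (by failure) violent;
    punishment holds if thief and rehabilitation cannot be shown."""
    rehabilitation = ("is_a_thief" in facts and "is_a_minor" in facts
                      and "is_violent" not in facts)
    punishment = "is_a_thief" in facts and not rehabilitation
    if rehabilitation:
        return "rehabilitation"
    return "punishment" if punishment else None

print(sanction({"is_a_thief"}))                              # punishment
print(sanction({"is_a_thief", "is_a_minor"}))                # rehabilitation
print(sanction({"is_a_thief", "is_a_minor", "is_violent"}))  # punishment
```

Adding a fact (that tom is a minor) withdraws the conclusion, and adding another (that he is violent) reinstates it, which is exactly the non-monotonicity described above.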
The completion of this program is:

<syntaxhighlight lang="prolog">
should_receive_sanction(X, Sanction) iff
    Sanction = punishment, is_a_thief(X),
    not should_receive_sanction(X, rehabilitation)
or  Sanction = rehabilitation, is_a_thief(X), is_a_minor(X),
    not is_violent(X).

is_a_thief(X) iff X = tom.
is_a_minor(X) iff X = tom.
is_violent(X) iff X = tom.
</syntaxhighlight>
The notion of completion is closely related to [[John McCarthy (computer scientist)|John McCarthy's]] [[Circumscription (logic)|circumscription]] semantics for default reasoning,<ref>{{cite journal |last1=Gelfond |first1=M. |last2=Przymusinska |first2=H. |last3=Przymusinski |first3=T. |date=1989 |title=On the relationship between circumscription and negation as failure |journal=[[Artificial Intelligence (journal)|Artificial Intelligence]] |volume=38 |issue=1 |pages=75–94|doi=10.1016/0004-3702(89)90068-4 }}</ref> and to [[Raymond Reiter|Ray Reiter's]] [[closed world assumption]].<ref>{{cite journal |last=Shepherdson |first=J.C. |date=1984 |title=Negation as failure: a comparison of Clark's completed data base and Reiter's closed world assumption |journal=[[The Journal of Logic Programming]] |volume=1 |issue=1 |pages=51–79|doi=10.1016/0743-1066(84)90023-2 }}</ref>

The completion semantics for negation is a logical consequence semantics, for which SLDNF provides a proof-theoretic implementation. However, in the 1980s, an alternative ''[[satisfiability]] semantics'' began to become more popular. In the satisfiability semantics, negation is interpreted according to the classical definition of truth in an intended or standard model of the logic program. Implementations of the satisfiability semantics either generate an entire standard model in which the program and any goal are true, or they compute an instance of the goal which is true in a standard model, without necessarily computing the model in its entirety.

For example, here is the list of all variable-free (ground) facts that are true in the standard model of the program above. All other ground facts are false:

<syntaxhighlight lang="prolog">
should_receive_sanction(tom, punishment).
is_a_thief(tom).
is_a_minor(tom).
is_violent(tom).
</syntaxhighlight>

Attempts to understand negation in logic programming have also contributed to the development of [[Argumentation framework|abstract argumentation frameworks]].<ref>{{cite journal |author=Phan Minh Dung |title=On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming, and n–person games |journal=Artificial Intelligence |volume=77 |issue= 2|pages=321–357 |year=1995 |doi=10.1016/0004-3702(94)00041-X|doi-access=free }}</ref> In an argumentation interpretation of negation, the initial argument that tom should be punished because he is a thief, is attacked by the argument that he should be rehabilitated because he is a minor. But the fact that tom is violent undermines the argument that tom should be rehabilitated and reinstates the argument that tom should be punished.
However, in the logical reading of logic programs, clauses are understood as universally quantified conditionals. For example, the recursive clause of the path-finding procedure is understood [[declarative program|declaratively]] as [[Knowledge representation and reasoning|representing the knowledge]] that, for any X, Y and Z, if there is an arc from X to Z and a path from Z to Y then there is a path from X to Y. In symbolic form:

:<math>\forall X, Y, Z(arc(X,Z)\land path(Z,Y) \rightarrow path(X,Y)).</math>

The logical reading is independent of the way the knowledge is used to solve problems. The knowledge can be used top-down, as in Prolog, to reduce problems to subproblems. Or it can be used [[forward chaining|bottom-up (or forwards)]], as in [[Datalog]], to derive conclusions from conditions. This [[separation of concerns]] is a form of [[Abstraction (computer science)|abstraction]], which [[Algorithm#Algorithm = Logic + Control|separates declarative knowledge from problem-solving methods]].<ref>{{Cite journal|last=Kowalski|first=Robert|author-link=Robert Kowalski|title=Algorithm=Logic+Control|journal=[[Communications of the ACM]]|volume=22|issue=7|pages=424–436|year=1979|doi=10.1145/359131.359136|s2cid=2509896|doi-access=free}}</ref><ref>{{Cite book | first1 = Stuart J. | last1 = Russell | author1-link = Stuart J. Russell | first2 = Peter | last2 = Norvig | author2-link = Peter Norvig | title=[[Artificial Intelligence: A Modern Approach]] §9.3, §9.4 | year = 2021 | edition = 4th | isbn = 978-0134610993 | lccn = 20190474 | publisher = Pearson | location = Hoboken }}</ref>
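The two ways of using the same path knowledge can be sketched in Python (illustrative only; the <code>arc</code> facts are hypothetical). The top-down version mirrors Prolog's goal reduction, with an explicit visited set to avoid the loops that unguarded depth-first execution can fall into; the bottom-up version mirrors Datalog's derivation of all path facts:

```python
arcs = {("a", "b"), ("b", "c"), ("c", "d")}  # hypothetical arc/2 facts

def path_top_down(x, y, visited=frozenset()):
    # path(X,Y) :- arc(X,Y).
    if (x, y) in arcs:
        return True
    # path(X,Y) :- arc(X,Z), path(Z,Y).
    return any(path_top_down(z, y, visited | {x})
               for (u, z) in arcs if u == x and z not in visited)

def paths_bottom_up():
    # Derive every path fact from the arc facts, to a fixed point.
    paths = set(arcs)
    changed = True
    while changed:
        changed = False
        for (x, z) in arcs:
            for (w, y) in list(paths):
                if z == w and (x, y) not in paths:
                    paths.add((x, y))
                    changed = True
    return paths

print(path_top_down("a", "d"))          # True
print(("a", "d") in paths_bottom_up())  # True
```

Both directions answer the same query from the same declarative knowledge, which is the separation of logic from control described above.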
_________________________________________

In the case of logic programs with negative conditions, there are two main variants of the model-theoretic semantics: In the [[well-founded semantics]], the intended model of a logic program is a unique, three-valued, minimal model, which always exists. The well-founded semantics generalises the notion of [[inductive definition]] in mathematical logic.<ref>{{cite journal |last1=Denecker |first1=M. |last2=Ternovska |first2=E. |title=A logic of nonmonotone inductive definitions |journal=[[ACM Transactions on Computational Logic]] |volume=9 |issue=2 |pages=14:1–14:52 |date=2008|doi=10.1145/1342991.1342998 |s2cid=13156469 |url=https://lirias.kuleuven.be/handle/123456789/222628 }}</ref> [[XSB|XSB Prolog]]<ref>{{cite conference |last1=Rao |first1=P. |last2=Sagonas |first2=K. |last3=Swift |first3=T. |last4=Warren |first4=D.S. |last5=Freire |first5=J. |title=XSB: A system for efficiently computing well-founded semantics |conference=Logic Programming And Nonmonotonic Reasoning: 4th International Conference, LPNMR'97 |location=Dagstuhl Castle, Germany |date=July 28–31, 1997 |pages=430–440 |publisher=Springer Berlin Heidelberg |doi=10.1007/3-540-63255-7_33}}</ref> implements the well-founded semantics using SLG resolution.<ref>{{cite journal |author1=W. Chen |author2=D. S. Warren |title=Tabled Evaluation with Delaying for General Logic Programs |journal=[[Journal of the ACM]] |volume=43 |issue=1 |pages=20–74 |date=January 1996 |doi=10.1145/227595.227597|s2cid=7041379 |doi-access=free }}</ref>

In the alternative [[stable model semantics]], there may be no intended models or several intended models, all of which are minimal and two-valued. The stable model semantics underpins [[Answer set programming]] (ASP). Most implementations of ASP proceed in two steps: First they instantiate the program in all possible ways, reducing it to a propositional logic program (known as ''grounding''). Then they apply a propositional logic problem solver, such as the [[DPLL algorithm]] or a [[Boolean SAT solver]]. However, some implementations, such as s(CASP),<ref>{{cite journal |first1=J. |last1=Arias |first2=M. |last2=Carro |first3=E. |last3=Salazar |first4=K. |last4=Marple |first5=G. |last5=Gupta |title=Constraint Answer Set Programming without Grounding |journal=Theory and Practice of Logic Programming |volume=18 |issue=3–4 |pages=337–354 |date=2018|doi=10.1017/S1471068418000285 |s2cid=13754645 |doi-access=free |arxiv=1804.11162 }}</ref> use a goal-directed, top-down, SLD resolution-like procedure without grounding.
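For a flavour of the stable model semantics, here is a brute-force check in Python (illustrative, not an ASP solver) over the classic ground program <code>p :- not q. q :- not p.</code> A candidate set of atoms is stable when it equals the least model of its Gelfond–Lifschitz reduct:

```python
from itertools import chain, combinations

# Ground program:  p :- not q.   q :- not p.
# Each rule is (head, positive_body, negative_body).
rules = [("p", [], ["q"]), ("q", [], ["p"])]
atoms = ["p", "q"]

def least_model_of_reduct(candidate):
    # Reduct: delete rules whose negative body meets the candidate,
    # then drop the remaining negative literals.
    kept = [(h, pos) for (h, pos, neg) in rules if not set(neg) & candidate]
    # Least model of the resulting definite program, by forward chaining.
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in kept:
            if set(pos) <= model and h not in model:
                model.add(h)
                changed = True
    return model

candidates = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
stable = [set(c) for c in candidates if least_model_of_reduct(set(c)) == set(c)]
print(stable)  # [{'p'}, {'q'}]
```

This program has two stable models, {p} and {q}, illustrating the point above that there may be several intended models (or, for other programs, none). Real ASP systems replace this exponential enumeration with SAT-style search.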
[[User:Robert Kowalski#test|definition]]

{{See also|Definition#Logic programs}}

<syntaxhighlight inline lang="prolog">tick</syntaxhighlight>

Notes for [[Recursion (computer science)]]:

# Recursion solves such recursive problems by using functions that call themselves from within their own code. '''Replace function'''.
# In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. '''This assumes a top-down or backward reasoning strategy'''.
# Language designers often express grammars in a syntax such as Backus–Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition: '''Note that grammars can be parsed top-down or bottom-up'''. See [[Parsing#Types of parsers]].
# This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure—namely, via the accessor functions head and tail—and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from. '''In LP there is no difference between accessing and creating'''.
# Some authors classify recursion as either "structural" or "generative". The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data. '''This distinction may not arise with logic programs'''.
__________________________________________________________________________

Latest revision as of 22:37, 26 March 2024
The text below was originally intended for the Computer program entry. I edited the entry with a much shorter text.
The remainder can be used elsewhere. But where?
____________________________________________________________________
In the simplest case of Horn clauses (or "definite" clauses), all of the H, B<sub>1</sub>, ..., B<sub>n</sub> are atomic formulae of the form p(t<sub>1</sub>, …, t<sub>n</sub>), where p is a predicate symbol naming a relation, and the t<sub>i</sub> are terms naming objects (or individuals). Terms can be constants, variables or composite terms of the form f(t<sub>1</sub>, …, t<sub>n</sub>), where f is a functor (or function symbol) and the t<sub>i</sub> are terms. Variables are written as a string beginning with an upper case letter or an underscore.

For example:[12]

* All dragons smoke, or equivalently, a thing smokes if the thing is a dragon:

<syntaxhighlight lang="prolog">
smokes(X) :-
    is_a_dragon(X).
</syntaxhighlight>

* A creature smokes if one of its parents smokes:

<syntaxhighlight lang="prolog">
smokes(X) :-
    is_a_creature(X),
    is_a_parent_of(Y, X),
    smokes(Y).
</syntaxhighlight>

* A thing X is a parent of a thing Y if X is the mother of Y or X is the father of Y:

<syntaxhighlight lang="prolog">
is_a_parent_of(X, Y) :- is_the_mother_of(X, Y).
is_a_parent_of(X, Y) :- is_the_father_of(X, Y).
</syntaxhighlight>

* A thing is a creature if the thing is a dragon:

<syntaxhighlight lang="prolog">
is_a_creature(X) :-
    is_a_dragon(X).
</syntaxhighlight>

* Alice is a dragon, and Bob is a creature. Alice is the mother of Bob.

<syntaxhighlight lang="prolog">
is_a_dragon(alice).
is_a_creature(bob).
is_the_mother_of(alice, bob).
</syntaxhighlight>

The example illustrates several features of Prolog and of logic programs more generally: All of the variables in a clause are universally quantified, meaning that the clause holds for all instances in which the variables are replaced by terms not containing variables. Also, all variables are local to the clause in which they occur. So, there is no relationship, for example, between the variables X and Y in the second clause and the variables X and Y in the third clause. (In fact, their roles in the parent relation in the two clauses are actually reversed.)

Notice that the second clause is a recursive (or inductive) definition. It can be understood purely declaratively, without any need to understand how it is executed. The third clause shows how functions, as in functional programming, are represented by relations. Here the mother and father relations are functions in the sense that every individual has only one mother and only one father.

Prolog is an untyped language. However, types (or classes) can be represented by predicates. The fourth clause defines dragon as a subtype (or subclass) of creature.

Computation in Prolog can be regarded as question-answering or theorem-proving. It is initiated by a query or theorem to be proved, and solutions are generated by backward reasoning. For example:

* Answer the query: which thing smokes? Or, equivalently, prove that some thing smokes!

<syntaxhighlight lang="prolog">
?- smokes(X).
</syntaxhighlight>

All variables in queries are existentially quantified, and execution of the program generates instances of the query that make the query true.

In the case where all clauses are Horn clauses, backward reasoning is performed by using [[SLD resolution]], which is a variant of SL resolution for Horn clauses (or definite clauses). In this example, backward reasoning uses the first clause to reduce the goal to the subgoal:

<syntaxhighlight lang="prolog">
?- is_a_dragon(X).
</syntaxhighlight>

and it uses the second clause to reduce the goal to the subgoals:

<syntaxhighlight lang="prolog">
?- is_a_creature(X),
   is_a_parent_of(Y, X),
   smokes(Y).
</syntaxhighlight>
It solves the subgoal is_a_dragon(X), generated by the first clause, by unifying it with the fact is_a_dragon(alice), instantiating the variable X to the constant alice and giving the solution X = alice. It solves the three subgoals generated by the second clause by unifying them with the facts is_a_creature(bob), is_a_parent_of(alice, bob) and smokes(alice), instantiating the variable X to the constant bob and giving a second solution, X = bob.
Goal-reduction, performed in this way, generates a generalised form of and-or tree of goals and subgoals. Prolog explores this tree depth-first, trying different clauses that unify with the same subgoal in the order in which the clauses are written, and trying to solve different subgoals coming from the same clause in the order in which the subgoals are written in the clause. In this example, the computation will generate the solution X = alice, followed by X = bob.
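At the top level, this depth-first search appears as an interactive session in which the two solutions are returned in clause order (a typical transcript; prompt and answer formatting vary between Prolog systems):

```prolog
?- smokes(X).
X = alice ;
X = bob.
```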
Prolog programs can also include negative conditions in the body of clauses, as in:
- A creature is healthy if the creature does not smoke:
is_healthy(X) :-
is_a_creature(X),
not smokes(X).
Prolog uses negation as failure (NAF) [13] to solve negative subgoals of the form not p, by showing that the corresponding unnegated subgoal p fails to hold. For example, the query:
?- not is_healthy(bob).
returns the answer true, because the query:
?- is_healthy(bob).
fails, because the subgoal:
?- not smokes(bob).
fails, because:
?- smokes(bob).
succeeds.
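The program above writes negation as failure with the operator not. In ISO-standard Prolog systems the same operator is written \+, so the clause is more commonly seen as:

```prolog
is_healthy(X) :-
    is_a_creature(X),
    \+ smokes(X).
```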
Prolog is a homoiconic language, in which atomic formulae and clauses are represented by composite terms. This is a powerful feature which has many applications, including its use for implementing metainterpreters and for representing and reasoning about propositional attitudes. For example:
- It is prohibited for a creature that the creature teaches deportment if the creature smokes. A creature is ostracised if a thing is prohibited for the creature and the thing holds:
prohibited(X, teaches(X, deportment)) :- is_a_creature(X), smokes(X).
ostracised(X) :- is_a_creature(X), prohibited(X, P), holds(P).
The simplest case in which a proposition P holds is defined by:
holds(A) :- clause(A, true).
where clause(Head, Body) is a built-in metapredicate that holds when Head can be unified with the head of a clause and Body can be unified with the corresponding clause body, with Body = true if the clause is a fact.
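The same clause/2 device extends to the classic "vanilla" Prolog metainterpreter, a standard textbook construction (not part of the example above) that also handles rule bodies and conjunctions of goals:

```prolog
% A goal holds if it is derivable from the program's clauses.
holds(true).
holds((A, B)) :- holds(A), holds(B).       % conjunction: prove both conjuncts
holds(A) :- clause(A, Body), holds(Body).  % reduce a goal via a matching clause
```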
Given the fact (assumption or premise):
teaches(bob, deportment).
and the query:
?- ostracised(X).
Prolog derives the answer X = bob.
These examples illustrate some of the unique features of Prolog, which distinguish it from more conventional programming languages, and which highlight its use as a logic-based language for knowledge representation and problem solving in Artificial Intelligence.
Prolog has many applications. However, many large-scale applications either require the use of Prolog's "impure features", which do not have an obvious logical interpretation, or are implemented in combination with other programming languages. The broad range of Prolog applications, both in isolation and in combination with other languages, is highlighted in the Year of Prolog Book [14], celebrating the 50th anniversary of Prolog.
- ^
Logic:
- Russell & Norvig (2021, chpt. 6–9)
- Luger & Stubblefield (2004, pp. 35–77)
- Nilsson (1998, chpt. 13–16)
- ^
Propositional logic:
- Russell & Norvig (2021, chpt. 6)
- Luger & Stubblefield (2004, pp. 45–50)
- Nilsson (1998, chpt. 13)
- ^
First-order logic and features such as equality:
- Russell & Norvig (2021, chpt. 7)
- Poole, Mackworth & Goebel (1998, pp. 268–275)
- Luger & Stubblefield (2004, pp. 50–62)
- Nilsson (1998, chpt. 15)
- ^
Logical inference:
- Russell & Norvig (2021, chpt. 10)
- ^ Logical deduction as search:
- Russell & Norvig (2021, §9.3, §9.4)
- Poole, Mackworth & Goebel (1998, pp. 46–52)
- Luger & Stubblefield (2004, pp. 62–73)
- Nilsson (1998, chpt. 4.2, 7.2)
- ^
Resolution and unification:
- Russell & Norvig (2021, §7.5.2, §9.2, §9.5)
- ^ Warren, D.H.; Pereira, L.M.; Pereira, F. (1977). "Prolog-the language and its implementation compared with Lisp". ACM SIGPLAN Notices. 12 (8): 109–115. doi:10.1145/872734.806939.
- ^
Fuzzy logic:
- Russell & Norvig (2021, pp. 214, 255, 459)
- Scientific American (1999)
- ^ Default reasoning and non-monotonic logic.
- ^ Kowalski, Robert (1979). "Algorithm = Logic + Control". Communications of the ACM. 22 (7): 424–436. doi:10.1145/359131.359136. S2CID 2509896.
- ^ Russell, Stuart J.; Norvig, Peter (2021). Artificial Intelligence: A Modern Approach, §9.3, §9.4 (4th ed.). Hoboken: Pearson. ISBN 978-0134610993. LCCN 20190474.
- ^ Kowalski, R., Dávila, J., Sartor, G. and Calejo, M. (2023). "Logical English for law and education". In Prolog: The Next 50 Years (pp. 287–299). Cham: Springer Nature Switzerland.
- ^ Apt, Krzysztof R.; van Emden, Maarten H. (1982). "Contributions to the Theory of Logic Programming". Journal of the ACM. 29 (3): 841–862.
- ^ Prolog: The Next 50 Years (2023). Cham: Springer Nature Switzerland.