Semantics

Viewed in purely logical terms, there are two approaches to the declarative semantics of Horn clause logic programs: One approach is the original logical consequence semantics, which understands solving a goal as showing that the goal is a theorem that is true in all models of the program.

In this approach, computation is theorem-proving in first-order logic; and both backward reasoning, as in SLD resolution, and forward reasoning, as in hyper-resolution, are correct and complete theorem-proving methods. Sometimes such theorem-proving methods are also regarded as providing a separate proof-theoretic (or operational) semantics for logic programs. But from a logical point of view, they are proof methods, rather than semantics.

The other approach to the declarative semantics of Horn clause programs is the satisfiability semantics, which understands solving a goal as showing that the goal is true (or satisfied) in some intended (or standard) model of the program. For Horn clause programs, there always exists such a standard model: It is the unique minimal model of the program.

Informally speaking, a minimal model is a model that, when it is viewed as the set of all (variable-free) facts that are true in the model, contains no smaller set of facts that is also a model of the program.

For example, the following facts represent the minimal model of the family relationships example in the introduction of this article. All other variable-free facts are false in the model:

mother_child(elizabeth, charles).
father_child(charles, william).
father_child(charles, harry).
parent_child(elizabeth, charles).
parent_child(charles, william).
parent_child(charles, harry).
grandparent_child(elizabeth, william).
grandparent_child(elizabeth, harry).
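For reference, the program itself (as given in the introduction of the article, and reconstructed here from the facts above) consists of the mother and father facts together with rules defining the parent_child and grandparent_child relations:

mother_child(elizabeth, charles).
father_child(charles, william).
father_child(charles, harry).

parent_child(X, Y) :- mother_child(X, Y).
parent_child(X, Y) :- father_child(X, Y).

grandparent_child(X, Y) :- parent_child(X, Z), parent_child(Z, Y).

No smaller set of variable-free facts satisfies all of these clauses, which is what makes the model minimal.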

The satisfiability semantics also has an alternative, more mathematical characterisation as the least fixed point of the function that uses the rules in the program to derive new facts from existing facts in one step of inference.

Remarkably, the same problem-solving methods of forward and backward reasoning, which were originally developed for the logical consequence semantics, are equally applicable to the satisfiability semantics: Forward reasoning generates the minimal model of a Horn clause program, by deriving new facts from existing facts, until no additional facts can be generated. Backward reasoning, which succeeds by reducing a goal to subgoals until all subgoals are solved by facts, ensures that the goal is true in the minimal model, without generating the model explicitly.[1]
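As a concrete illustration of this fixed-point view (a minimal sketch added here, not part of the original text), the following naive bottom-up evaluator computes the minimal model by iterating one-step forward inference until no new facts are derived. It assumes that program clauses are represented as rule(Head, BodyList) facts, that the program is function-free (Datalog-like) so that the iteration terminates, and that the predicate names tp_step, satisfied, iterate and minimal_model are purely illustrative:

:- use_module(library(lists)).         % member/2, append/3 (autoloaded in some systems)

% One step of forward inference: derive every clause head whose body
% is already satisfied by the current set of facts.
tp_step(Facts0, Facts) :-
    findall(Head,
            ( rule(Head, Body),
              satisfied(Body, Facts0) ),
            Derived),
    append(Derived, Facts0, All),
    sort(All, Facts).                  % sort/2 also removes duplicates

% A clause body is satisfied if each of its goals is among the current facts.
satisfied([], _).
satisfied([Goal|Goals], Facts) :-
    member(Goal, Facts),
    satisfied(Goals, Facts).

% Iterate from the empty set of facts up to the least fixed point.
minimal_model(Model) :-
    iterate([], Model).

iterate(Facts0, Model) :-
    tp_step(Facts0, Facts),
    (   Facts == Facts0                % fixed point reached: no new facts
    ->  Model = Facts0
    ;   iterate(Facts, Model)
    ).

For the family relationships program, with each clause rewritten as a rule/2 fact (for example rule(parent_child(X, Y), [mother_child(X, Y)]). and rule(mother_child(elizabeth, charles), []).), the query ?- minimal_model(M). returns exactly the eight facts listed above.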

The difference between the two declarative semantics can be seen with the Horn clause definitions of addition and multiplication in successor arithmetic:

add(X, 0, X).
add(X, s(Y), s(Z)) :- add(X, Y, Z).

multiply(X, 0, 0).
multiply(X, s(Y), W) :- multiply(X, Y, Z), add(X, Z, W).

Here the function symbol s represents the successor function, and the composite term s(X) represents the successor of X, namely X + 1. The predicate add(X, Y, Z) represents X + Y = Z, and multiply(X, Y, Z) represents X × Y = Z.

Both semantics give the same answers for the same existentially quantified conjunctions of addition and multiplication goals. For example:

?- multiply(s(s(0)), s(s(0)), Z).
Z = s(s(s(s(0)))).

?- multiply(X, X, Y), add(X, X, Y).
X = 0, Y = 0.
X = s(s(0)), Y = s(s(s(s(0)))).

However, with the logical-consequence semantics, there are non-standard models of the program in which, for example, add(s(s(0)), s(s(0)), s(s(s(s(s(0)))))), i.e. 2 + 2 = 5, is true. But with the satisfiability semantics, there is only one model, namely the standard model of arithmetic, in which 2 + 2 = 5 is false.

In both semantics, the goal ?- add(s(s(0)), s(s(0)), s(s(s(s(s(0)))))) fails. In the satisfiability semantics, the failure of the goal means that the goal is false. But in the logical consequence semantics, the failure means that whether or not the goal is true is unknown.

Negation as failure

Negation as failure (NAF), as a way of concluding that a negative condition not p holds by showing that the positive condition p fails to hold, was already a feature of early Prolog systems. The resulting extension of SLD resolution is called SLDNF. A similar construct, called "thnot", also existed in Micro-Planner.

The logical semantics of NAF was unresolved until Keith Clark[2] showed that, under certain natural conditions, NAF is an efficient, correct (and sometimes complete) way of reasoning with the logical consequence semantics using the completion of a logic program in first-order logic.

Completion amounts roughly to regarding the set of all the program clauses with the same predicate in the head, say:

A :- Body1.
…
A :- Bodyk.

as a definition of the predicate:

A iff (Body1 or … or Bodyk)

where iff means "if and only if". The completion also includes axioms of equality, which correspond to unification. Clark showed that proofs generated by SLDNF are structurally similar to proofs generated by a natural deduction style of reasoning with the completion of the program.
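To take a small illustration (added here, using the parent_child clauses from the family relationships example above), the completion of the two clauses

parent_child(X, Y) :- mother_child(X, Y).
parent_child(X, Y) :- father_child(X, Y).

is the single definition:

parent_child(X, Y) iff (mother_child(X, Y) or father_child(X, Y))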

Consider, for example, the following program:

should_receive_sanction(X, punishment) :- 
    is_a_thief(X),
    not should_receive_sanction(X, rehabilitation).
    
should_receive_sanction(X, rehabilitation) :-
    is_a_thief(X),
    is_a_minor(X),
    not is_violent(X).
    
is_a_thief(tom).

Given the goal of determining whether tom should receive a sanction, the first rule succeeds in showing that tom should be punished:

?- should_receive_sanction(tom, Sanction).
Sanction = punishment.

This is because tom is a thief, and it cannot be shown that tom should be rehabilitated. It cannot be shown that tom should be rehabilitated, because it cannot be shown that tom is a minor.

If, however, we receive new information that tom is indeed a minor, the previous conclusion that tom should be punished is replaced by the new conclusion that tom should be rehabilitated:

is_a_minor(tom).

?- should_receive_sanction(tom, Sanction).
Sanction = rehabilitation.

This property of withdrawing a conclusion when new information is added is called non-monotonicity, and it makes logic programming a non-monotonic logic.

But, if we are now told that tom is violent, the conclusion that tom should be punished will be reinstated:

is_violent(tom).

?- should_receive_sanction(tom, Sanction).
Sanction = punishment.

The completion of this program is:

should_receive_sanction(X, Sanction) iff 
    Sanction = punishment, is_a_thief(X), 
    not should_receive_sanction(X, rehabilitation)
 or Sanction = rehabilitation, is_a_thief(X), is_a_minor(X),
    not is_violent(X).
    
is_a_thief(X) iff X = tom.
is_a_minor(X) iff X = tom.
is_violent(X) iff X = tom.

The notion of completion is closely related to John McCarthy's circumscription semantics for default reasoning,[3] and to Ray Reiter's closed world assumption.[4]

The completion semantics for negation is a logical consequence semantics, for which SLDNF provides a proof-theoretic implementation. However, in the 1980s, the satisfiability semantics became more popular for logic programs with negation. In the satisfiability semantics, negation is interpreted according to the classical definition of truth in an intended or standard model of the logic program.

In the case of logic programs with negative conditions, there are two main variants of the model-theoretic semantics: In the well-founded semantics, the intended model of a logic program is a unique, three-valued, minimal model, which always exists. The well-founded semantics generalises the notion of inductive definition in mathematical logic.[5] XSB Prolog[6] implements the well-founded semantics using SLG resolution.[7]
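The effect of the three-valued semantics can be seen in a system that implements it. The following is a minimal sketch (assuming an XSB- or SWI-Prolog-style table directive and the tnot/1 primitive for tabled negation as failure; the predicate name p is purely illustrative):

:- table p/0.
p :- tnot(p).

Under the well-founded semantics, p is neither true nor false but undefined, and the query ?- p. in such a system reports it as undefined rather than simply succeeding or failing.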

In the alternative stable model semantics, there may be no intended models or several intended models, all of which are minimal and two-valued. The stable model semantics underpins answer set programming (ASP).
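For example (an illustration added here, not part of the original text), the program

p :- not q.
q :- not p.

has two stable models, one in which p is true and q is false, and one in which q is true and p is false; whereas the single clause

p :- not p.

has no stable model at all.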

Both the well-founded and stable model semantics apply to arbitrary logic programs with negation. However, both semantics coincide for stratified logic programs. For example, the program for sanctioning thieves is (locally) stratified, and all three semantics for the program determine the same intended model:

should_receive_sanction(tom, punishment).
is_a_thief(tom).
is_a_minor(tom).
is_violent(tom).

Attempts to understand negation in logic programming have also contributed to the development of abstract argumentation frameworks.[8] In an argumentation interpretation of negation, the initial argument that tom should be punished because he is a thief is attacked by the argument that he should be rehabilitated because he is a minor. But the fact that tom is violent undermines the argument that tom should be rehabilitated and reinstates the argument that tom should be punished.



__________________________________________________________________________

The text below was originally intended for the Computer program entry. I edited the entry with a much shorter text.

The remainder can be used elsewhere. But where?

____________________________________________________________________

In the simplest case, that of Horn clauses (or "definite" clauses) of the form H :- B1, ..., Bn, all of H, B1, ..., Bn are atomic formulae of the form p(t1, …, tn), where p is a predicate symbol naming a relation and t1, …, tn are terms naming objects (or individuals). Terms can be constants, variables or composite terms of the form f(t1, …, tn), where f is a functor (or function symbol) and t1, …, tn are terms. Variables are written as strings beginning with an upper-case letter or an underscore.

For example[9]:

  • All dragons smoke, or equivalently, a thing smokes if the thing is a dragon:
smokes(X) :-
    is_a_dragon(X).
  • A creature smokes if one of its parents smokes:
smokes(X) :-
    is_a_creature(X),
    is_a_parent_of(Y,X),
    smokes(Y).
  • A thing X is a parent of a thing Y if X is the mother of Y or X is the father of Y:
 
is_a_parent_of(X, Y):- is_the_mother_of(X, Y).
is_a_parent_of(X, Y):- is_the_father_of(X, Y).
  • A thing is a creature if the thing is a dragon:
is_a_creature(X) :-
    is_a_dragon(X).
  • Alice is a dragon, and Bob is a creature. Alice is the mother of Bob.
is_a_dragon(alice). 
is_a_creature(bob).
is_the_mother_of(alice, bob).

The example illustrates several features of Prolog and of logic programs more generally: All of the variables in a clause are universally quantified, meaning that the clause holds for all instances in which the variables are replaced by terms not containing variables. Also, all variables are local to the clause in which they occur. So, there is no relationship, for example, between the variables X and Y in the second clause and the variables X and Y in the third clause. (In fact, their roles in the parent relation in the two clauses are actually reversed.)

Notice that the second clause is a recursive (or inductive) definition. It can be understood purely declaratively, without any need to understand how it is executed. The third clause shows how functions, as in functional programming, are represented by relations. Here the mother and father relations are functions in the sense that every individual has only one mother and only one father.

Prolog is an untyped language. However, types (or classes) can be represented by predicates. The fourth clause defines dragon as a subtype (or subclass) of creature.

Computation in Prolog can be regarded as question-answering or theorem-proving. It is initiated by a query or theorem to be proved, and solutions are generated by backward reasoning. For example:

  • Answer the query: which thing smokes? Or, equivalently, prove that some thing smokes!
 ?- smokes(X).

All variables in queries are existentially quantified, and execution of the program generates instances of the query that make the query true.

In the case where all clauses are Horn clauses, backward reasoning is performed by using SLD resolution, which is a variant of SL resolution for Horn clauses (or definite clauses). In this example, backward reasoning uses the first clause to reduce the goal to the subgoal:

 ?- is_a_dragon(X).

and it uses the second clause to reduce the goal to the subgoals:

 ?-  is_a_creature(X),
    is_a_parent_of(Y,X),
    smokes(Y).

It solves the subgoal is_a_dragon(X) generated by the first clause by unifying it with the fact is_a_dragon(alice), instantiating the variable X to the constant alice and giving the solution X = alice. It solves the three subgoals generated by the second clause by solving is_a_creature(X) with the fact is_a_creature(bob), deriving is_a_parent_of(alice, bob) from the fact is_the_mother_of(alice, bob), and deriving smokes(alice) from the fact is_a_dragon(alice), instantiating the variable X to the constant bob and giving a second solution, X = bob.

Goal-reduction, performed in this way, generates a generalised form of and-or tree of goals and subgoals. Prolog explores this tree depth-first, trying different clauses unifying with the same subgoal in the order in which the clauses are written, and trying to solve different subgoals coming from the same clause in the order in which the subgoals are written in the clause. In this example, the computation will generate the solution, X = alice, followed by X = bob.
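Run interactively, the query and its two solutions look roughly like this (an illustrative transcript; the exact answer formatting depends on the Prolog system):

 ?- smokes(X).
 X = alice ;
 X = bob.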

Prolog programs can also include negative conditions in the body of clauses, as in:

  • A creature is healthy if the creature does not smoke:
is_healthy(X) :-
    is_a_creature(X),
    not smokes(X).

Prolog uses negation as failure (NAF)[10] to solve negative subgoals of the form not p by showing that the corresponding unnegated subgoal p fails to hold. For example, the query:

 ?- not is_healthy(bob).

returns the answer true, because the query

 ?- is_healthy(bob).

fails, because the subgoal:

 ?- not smokes(bob).

fails, because:

 ?- smokes(bob).

succeeds.

Prolog is a homoiconic language, in which atomic formulae and clauses are represented by composite terms. This is a powerful feature which has many applications, including its use for implementing metainterpreters and for representing and reasoning about propositional attitudes. For example:

  • It is prohibited for a creature that the creature teaches deportment if the creature smokes. A creature is ostracised if there is something that is prohibited for the creature and that thing holds:
prohibited(X, teaches(X, deportment)) :- is_a_creature(X), smokes(X).
ostracised(X) :- is_a_creature(X), prohibited(X, P), holds(P).

The simplest case in which a proposition P holds is defined by:

holds(A) :- clause(A, true).

where

clause(Head, Body)

is a built-in metapredicate that holds when Head can be unified with the head of a clause and Body can be unified with the corresponding clause body, where Body = true if the clause is a fact.

Given the fact (assumption or premise):

teaches(bob, deportment).

and the query:

 ?- ostracised(X).

Prolog derives the answer X = bob.
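Sketched for illustration (this trace is not in the original text, and assumes that the system permits clause/2 on these consulted clauses, or that they have been declared dynamic), the successful derivation runs as follows:

 ?- ostracised(X).
 % is_a_creature(bob) succeeds from the fact is_a_creature(bob);
 % prohibited(bob, teaches(bob, deportment)) succeeds, because is_a_creature(bob)
 %   and smokes(bob) both succeed;
 % holds(teaches(bob, deportment)) succeeds, because
 %   clause(teaches(bob, deportment), true) matches the fact teaches(bob, deportment).
 X = bob.

The candidate X = alice, generated first because alice is a dragon and therefore a creature, fails at the holds subgoal, since there is no clause teaches(alice, deportment).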

These examples illustrate some of the unique features of Prolog, which distinguish it from more conventional programming languages, and which highlight its use as a logic-based language for knowledge representation and problem solving in Artificial Intelligence.


Prolog has many applications. But many large-scale applications require the use of Prolog's "impure features", which do not have an obvious logical interpretation, or they are implemented in combination with other programming languages. The broad range of Prolog applications, both in isolation and in combination with other languages, is highlighted in the Year of Prolog Book,[11] celebrating the 50th anniversary of Prolog.

  1. ^ Van Emden, M.H.; Kowalski, R.A. (October 1976). "The semantics of predicate logic as a programming language". Journal of the ACM. 23 (4): 733–742. doi:10.1145/321978.321991. S2CID 11048276.
  2. ^ Clark, K.L. (1977). "Negation as Failure". Logic and Data Bases. Boston, MA: Springer US. pp. 293–322. doi:10.1007/978-1-4684-3384-5_11. ISBN 978-1-4684-3386-9.
  3. ^ Gelfond, M.; Przymusinska, H.; Przymusinski, T. (1989). "On the relationship between circumscription and negation as failure". Artificial Intelligence. 38 (1): 75–94. doi:10.1016/0004-3702(89)90068-4.
  4. ^ Shepherdson, J.C. (1984). "Negation as failure: a comparison of Clark's completed data base and Reiter's closed world assumption". The Journal of Logic Programming. 1 (1): 51–79. doi:10.1016/0743-1066(84)90023-2.
  5. ^ Denecker, M.; Ternovska, E. (2008). "A logic of nonmonotone inductive definitions". ACM Transactions on Computational Logic. 9 (2): 14:1–14:52. doi:10.1145/1342991.1342998. S2CID 13156469.
  6. ^ Rao, P.; Sagonas, K.; Swift, T.; Warren, D.S.; Freire, J. (July 28–31, 1997). XSB: A system for efficiently computing well-founded semantics. Logic Programming And Nonmonotonic Reasoning: 4th International Conference, LPNMR'97. Dagstuhl Castle, Germany: Springer Berlin Heidelberg. pp. 430–440. doi:10.1007/3-540-63255-7_33.
  7. ^ W. Chen; D. S. Warren (January 1996). "Tabled Evaluation with Delaying for General Logic Programs". Journal of the ACM. 43 (1): 20–74. doi:10.1145/227595.227597. S2CID 7041379.
  8. ^ Phan Minh Dung (1995). "On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming, and n–person games". Artificial Intelligence. 77 (2): 321–357. doi:10.1016/0004-3702(94)00041-X.
  9. ^ Kowalski, R.; Dávila, J.; Sartor, G.; Calejo, M. (2023). "Logical English for Law and Education". Prolog: The Next 50 Years. Cham: Springer Nature Switzerland. pp. 287–299.
  10. ^ Apt, K.R.; van Emden, M.H. (1982). "Contributions to the Theory of Logic Programming". Journal of the ACM. 29 (3): 841–862.
  11. ^ Warren, D.S.; Dahl, V.; Eiter, T.; Hermenegildo, M.V.; Kowalski, R.; Rossi, F., eds. (2023). Prolog: The Next 50 Years. Lecture Notes in Computer Science. Vol. 13900. Cham: Springer.