What is meant by "logical purity" in Prolog? - prolog

What is meant by "logical purity" (in the context of Prolog programming)? The logical-purity tag info says "programs using only Horn clauses", but then, how would predicates like if_/3 qualify, using as much as it does the cut, and the various meta-logical (what's the proper terminology? var/1 and such) predicates, i.e. the low-level stuff.
I get it that it achieves some "pure" effect, but what does this mean, precisely?
For a more concrete illustration, please explain how if_/3 qualifies as logically pure, as seen in use e.g. in this answer.

Let us first get used to a declarative reading of logic programs.
Declaratively, a Prolog program states what is true.
For example
natural_number(0).
natural_number(s(X)) :-
    natural_number(X).
The first clause states: 0 is a natural number.
The second clause states: If X is a natural number, then s(X) is a natural number.
Let us now consider the effect of changes to this program. For example, what changes when we change the order of these two clauses?
natural_number(s(X)) :-
    natural_number(X).
natural_number(0).
Declaratively, exchanging the order of clauses does not change the intended meaning of the program in any way (disjunction is commutative).
Operationally, that is, taking into account the actual execution strategy of Prolog, different clause orders clearly often make a significant difference.
However, one extremely nice property of pure Prolog code is preserved regardless of chosen clause ordering:
If a query Q succeeds with respect to a clause ordering O1, then
Q does not fail with a different ordering O2.
Note that I am not saying that Q always also succeeds with a different ordering: This is because the query may also loop or yield an error with different orderings.
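For instance, a small sketch using natural_number/1 (exact toplevel output varies between systems): under both clause orderings, the query

?- natural_number(s(0)).
true.

does not fail. However, the more general query ?- natural_number(X)., which enumerates the solutions X = 0, X = s(0), ... under the first ordering, loops without producing any answer under the second ordering.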
For two queries Q1 and Q2, we say that Q1 is more general than Q2 iff Q1 subsumes Q2 with respect to syntactic unification. For example, the query ?- parent_child(P, C). is more general than the query ?- parent_child(0, s(0)).
Now, with pure Prolog programs, another extremely nice property holds:
If a query Q1 succeeds, then every more general query Q2 does not
fail.
Note, again, that Q2 may loop instead of succeeding.
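As a concrete sketch of this property, again with natural_number/1: the query ?- natural_number(s(0)). succeeds, and indeed the more general query ?- natural_number(s(X)). does not fail (shown here under the original clause order; exact output varies):

?- natural_number(s(X)).
X = 0 ;
X = s(0) ;
X = s(s(0)) ;
... .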
Consider now the case of var/1 which you mention, and think of the related predicate nonvar/1. Suppose we have:
my_pred(V) :-
    nonvar(V).
When does this hold? Clearly, it holds iff the argument is not a variable.
As expected, we get:
?- my_pred(a).
true.
However, for the more general query ?- my_pred(X)., we get:
?- my_pred(X).
false.
Such a predicate is called non-monotonic, and you cannot treat it as a true relation due to this property: This is because the answer false above logically means that there are no solutions whatsoever, yet in the immediately preceding example, we see that there is a solution. So, illogically, a more specific query, built by adding a constraint, makes the query succeed:
?- X = a, my_pred(X).
true.
Thus, reasoning about such predicates is extremely complicated, to the point that it is no fun at all to program with them. It makes declarative debugging impossible, and makes it hard to state any properties that are preserved. For instance, just swapping the order of subgoals in the above conjunctive query makes it fail:
?- my_pred(X), X = a.
false.
Hence, I strongly suggest staying within the pure monotonic subset of Prolog, which allows declarative reasoning along the lines outlined above.
CLP(FD) constraints, dif/2 etc. are all pure in this sense: You cannot trick these predicates into giving logically invalid answers, no matter the modes, orders etc. in which you use them. if_/3 also satisfies this property. On the other hand, var/1, nonvar/1, integer/1, !/0, predicates with side-effects etc. are all extra-logically referencing something outside the declarative world that is being described, and can thus not be considered pure.
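For instance, contrast my_pred/1 above with dif/2: no matter in which order the goals are posted, the answers remain logically consistent (a small sketch; exact toplevel output varies):

?- dif(X, a), X = a.
false.

?- X = a, dif(X, a).
false.

?- dif(X, a), X = b.
X = b.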
EDIT: To clarify: The nice properties I mention here are in no way exhaustive. Pure Prolog code exhibits many other extremely valuable properties through which you can perceive the glory of logic programming. For example, in pure Prolog code, adding a clause can at most extend, never narrow, the set of solutions; adding a goal can at most narrow, never extend, it etc.
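As a small sketch of the goal-monotonicity just mentioned, using natural_number/1 from above under its original clause order, adding a goal only narrows the set of solutions:

?- natural_number(X).
X = 0 ;
X = s(0) ;
X = s(s(0)) ;
... .

?- natural_number(X), X = s(_).
X = s(0) ;
X = s(s(0)) ;
... .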
Using a single extra-logical primitive may, and typically will, already destroy many of these properties. Therefore, for example, every time you use !/0, consider it a cut right into the heart of purity, and try to feel regret and shame for wounding these properties.
A good Prolog book will at least begin to introduce or contain many hints to encourage such a declarative view, guide you to think about more general queries, properties that are preserved etc. Bad Prolog books will not say much about this and typically end up using exactly those impure language elements that destroy the language's most valuable and beautiful properties.
An awesome Prolog teaching environment that makes extensive use of these properties to implement declarative debugging is called GUPU; I highly recommend checking out these ideas. Ulrich Neumerkel has generously made one core idea that is used in his environment partly available as library(diadem). See the source file for a good example of how to declaratively debug a goal that fails unexpectedly: the library systematically builds generalizations of the query that still fail. This reasoning of course works perfectly with pure code.

Related

Does a rule without passing a variable go against the philosophy of declarative programming or Prolog?

cancer():-
pain(strong),
mood(depressed),
fever(mild),
bowel(bloody),
miscellaneous(giddy).
diagnose():-
nl,
cancer()->write("has cancer").
For example, dog(X) says that X is a dog, but my cancer statement just checks whether the listed conditions are met. Is there a better way to do that?
In pure Prolog, a predicate without any arguments can only succeed or fail (or not terminate at all).
Thus, it can encode only very little information. A predicate that always succeeds is already available: true/0, having zero arguments. A predicate that always fails is also already available: false/0, also having zero arguments. A predicate that never terminates can be easily constructed.
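For example, a minimal sketch of such a never-terminating predicate (the name is chosen here purely for illustration):

loop :- loop.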
So, in this sense, you do not need more predicates with zero arguments, and I think you are perfectly justified in being suspicious about such predicates.
Predicates with zero arguments are of limited use since they are so specific. They may however be used for example to describe a fixed set of tests, or be useful only for their side-effects. This is also what you are using, by emitting output on the terminal in case the predicate succeeds.
This means that you are leaving the pure subset of Prolog, and now relying on features that are beyond pure logic.
This is typically a very bad idea, because it:
prevents or at least complicates many forms of reasoning about your program
makes it much harder to test your predicates
is not thread safe in general
etc.
Therefore, suppose you write your program as follows:
cancer(Patient) :-
    patient_pain(Patient, strong),
    patient_mood(Patient, depressed),
    patient_fever(Patient, mild),
    patient_bowel(Patient, bloody),
    patient_miscellaneous(Patient, giddy).
This predicate is now parametrized by a patient, and thus significantly more general than what you have posted.
It can now be used to reason about several patients, it can be used to reason in parallel about different patients, you can use a Prolog query to test the predicate etc.
You can further generalize the predicate by defining for example patient_diagnosis/2, keeping everything completely pure and benefiting from the above advantages. Note that a patient may have several illnesses, which can be emitted on backtracking.
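A minimal sketch of such a patient_diagnosis/2, reusing cancer/1 from above; the second clause and its symptom values are made up here purely for illustration:

patient_diagnosis(Patient, cancer) :-
    cancer(Patient).
patient_diagnosis(Patient, influenza) :-
    patient_fever(Patient, high),
    patient_mood(Patient, tired).

A query like ?- patient_diagnosis(Patient, Diagnosis). can then enumerate all patient/diagnosis pairs on backtracking.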
Thus: Yes, a rule without arguments is at least suspicious and atypical if it arises in your actual code. Leaving aside scenarios such as "test case" and "consistency check", it can only be useful for its side-effects, and I recommend you avoid side-effects if you can.
For more information about this topic, see logical-purity.
cancer() isn't legal syntax, but the idea's perfectly fine.
Just do the call as
cancer
and define it as a fact or rule.
cancer. % fact
cancer :- blah blah %rule
In fact, you use a system predicate with no args in your program:
nl is a predicate that always succeeds, and prints a newline.
There are many reasons to have a predicate with no arguments. Suppose you have a server that runs in a slightly different configuration in production than in development. Developer access API is off in production.
my_handler(Request) :-
development,
blah blah
development only succeeds if we're in the development environment
or you might have a side effect set off, or be using state.

Prolog SLD-Tree generator

I was given the task to write a tool that visualizes the SLD-Tree for a given Prolog-program and query.
So since I'd rather not implement a whole Prolog-parser and interpreter myself, I am looking for a library or program which generates that tree for me, so that I only need to do the visualization part.
The best case would be a C++ library, but something in any common language would do (or a program which outputs the tree as an XML document, or anything alike).
So far I could not find anything, so I place my hopes on you guys.
Best regards
Uzaku
You may write a Prolog meta-interpreter in Prolog that produces a representation of the tree to a file and then use it as input for your tool. The simplest of meta-interpreters is this one, which only deals with conjunction (the comma operator):
prolog(true) :- !.
prolog((X,Y)) :- !, prolog(X), prolog(Y).
prolog(H) :- clause(H,Body), prolog(Body).
You should add more clauses for other language constructs, like disjunction (the semicolon), and support for the cut if needed. For producing the tree you also need to add one or more arguments to the predicate, for example as sketched below. And you should take care with infinite trees, a limit on the tree depth being a good idea. Finally, the use of clause/2 in some Prolog systems requires some kind of previous declaration of the predicates whose clauses you want to access.
UPDATE: no need for a nonvar/1 in 3rd clause.
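As a sketch of how the extra argument could look (the predicate name prolog_tree/2 and the node/leaf representation are choices made here, not part of the answer above): carry a tree term alongside the goal. Note that this builds a proof tree for one successful derivation; for a full SLD-tree including failed branches you would also need to collect the alternative clauses, e.g. with findall/3.

% Proof-tree variant of the meta-interpreter above.
prolog_tree(true, leaf(true)) :- !.
prolog_tree((X,Y), node((X,Y), [TX,TY])) :- !,
    prolog_tree(X, TX),
    prolog_tree(Y, TY).
prolog_tree(Goal, node(Goal, [TBody])) :-
    clause(Goal, Body),
    prolog_tree(Body, TBody).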

Implementing arithmetic for Prolog

I'm implementing a Prolog interpreter, and I'd like to include some built-in mathematical functions (sum, product, etc). For example, I would like to be able to make calculations using knowledge bases like this one:
NetForce(F) :- Mass(M), Acceleration(A), Product(M, A, F)
Mass(10) :- []
Acceleration(12) :- []
So then I should be able to make queries like ?NetForce(X). My question is: what is the right way to build functionality like this into my interpreter?
In particular, the problem I'm encountering is that, in order to evaluate Sum, Product, etc., all their arguments have to be evaluated (i.e. bound to numerical constants) first. For example, while the code above should evaluate properly, the permuted rule:
NetForce(F) :- Product(M, A, F), Mass(M), Acceleration(A)
wouldn't, because M and A aren't bound when the Product term is processed. My current approach is to simply reorder the terms so that mathematical expressions appear last. This works in simple cases, but it seems hacky, and I would expect problems to arise in situations with multiple mathematical terms, or with recursion. Is there a better solution?
The functionality you are describing exists in existing systems as constraint extensions. There is CLP(Q) over the rationals, CLP(R) over the reals - actually floats, and last but not least CLP(FD) which is often extended to a CLP(Z). See for example
library(clpfd).
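As a sketch of what this looks like with such constraints (adapting the question's example to standard Prolog syntax; the predicate names used here are my own): with CLP(FD), the goal order no longer matters, so even the permuted rule from the question works:

:- use_module(library(clpfd)).

mass(10).
acceleration(12).

net_force(F) :-
    F #= M * A,      % constraint posted before M and A are known
    mass(M),
    acceleration(A).

?- net_force(F).
F = 120.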
In any case, starting a Prolog implementation from scratch will be a non-trivial effort, you will have no time to investigate what you want to implement because you will be inundated by much lower level details. So you will have to use a more economical approach and clarify what you actually want to do.
You might study and implement constraint languages in existing systems. Or you might want to use a meta-interpreter based approach. Or maybe you want to implement a Prolog system from scratch. But don't expect that you succeed in all of it.
And to save you another effort: Reuse existing standard syntax. The syntax you use would require you to build an extra parser.
You could use coroutining to delay the evaluation of the product:
product(X, A, B) :- freeze(A, freeze(B, X is A*B)).
freeze/2 delays the evaluation of its second argument until its first argument is instantiated (bound to a non-variable term). Used nested like this, it only evaluates X is A*B after both A and B are bound to actual terms.
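For instance, with this definition the arithmetic goal can be posted before its inputs are known (a sketch; exact toplevel output varies between systems):

?- product(F, M, A), M = 10, A = 12.
F = 120,
M = 10,
A = 12.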
(Disclaimer: I'm not an expert on advanced Prolog topics, there might be an even simpler way to do this - e.g. I think SICStus Prolog has "block declarations" which do pretty much the same thing in a more concise way and generalized over all declarations of the predicate.)
Your predicates would not be independent of the order of goals, which is pretty important. You need to determine usage modes of your predicates - what will the usage mode of NetForce() be? If I were designing a predicate like Force, I would do something like
force(Mass,Acceleration,Force):- Force is Mass * Acceleration.
This has a usage mode of +,+,- meaning you give me Mass and Acceleration and I will give you the Force.
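For example, used in that mode:

?- force(10, 12, F).
F = 120.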
Otherwise, you are depending on the facts you have defined to unify your variables, and if you pass them to Product first they will continue to unify and unify and you will never stop.

Guard clauses in prolog?

Do they exist? How are they implemented?
The coroutining predicates of SWI-Prolog (freeze, when, dif etc.) have the functionality of guards. How do they fit in the preferred Prolog programming style?
I am very new to logic programming (with Prolog and altogether) and somewhat confused by the fact that it is not purely declarative, and requires procedural considerations even in very simple cases (see this question about using \== or dif). Am I missing something important?
First a terminological issue: neither freeze/2 nor when/2 nor dif/2 are called guards under any circumstance. Guards appear in extensions such as CHR, or in related languages such as GHC (link in Japanese) and other concurrent logic programming languages; you might even (under certain restrictions) consider clauses of the form
Head :- Guard, !, ...
as clauses containing a guard, where the cut would in this case rather be called a commit. But none of this applies to the above primitives. Guards are rather inspired by Dijkstra's Guarded Command Language of 1975.
freeze(X, Goal) (originally called geler) is the same as when(nonvar(X), Goal) and they both are declaratively equivalent to Goal. There is no direct relation to the functionality of guards. However, when used together with if-then-else you might implement such a guard. But that is pretty different.
freeze/2 and similar constructs were for some time considered a general way to improve Prolog's execution mechanism. However, they turned out to be very brittle to use. Often they were too conservative, delaying goals unnecessarily; that is, almost every interesting query produced a "floundering" answer like the one below. Also, the borderline between terminating and non-terminating programs becomes much more complex. For pure monotonic Prolog programs that terminate, adding a terminating goal to the program preserves termination of the entire program; with freeze/2 this is no longer the case. Furthermore, from a conceptual viewpoint, freeze/2 was not very well supported by the toplevels of systems: only a few systems (e.g. SICStus) showed the delayed goals in a comprehensive manner, which is crucial for understanding the difference between answers/success and solutions. With delayed goals, Prolog may now produce an answer that has no solution, like this one:
?- freeze(X, X = 1), freeze(X, X = 2).
freeze(X, X=1), freeze(X, X=2).
Another difficulty with freeze/2 was that termination conditions are much more difficult to determine. So, while freeze was supposed to solve all the problems with termination, it often created new problems.
And there are also more technical difficulties related to freeze/2, in particular w.r.t. tabling and other techniques to prevent loops. Consider the goal freeze(X, Y = 1): clearly Y can only ever become 1, but it is not yet bound; the goal still waits for X to be bound first. Now, an implementation might consider tabling for a goal g(Y): g(Y) would then have either no solution or exactly one solution Y = 1, and this result would be stored as the only solution for g/1, since the freeze-goal was not directly visible to the goal.
It is for such reasons that freeze/2 is considered the goto of constraint logic programming.
Another issue is dif/2, which today is considered a constraint. In contrast to freeze/2 and the other coroutining primitives, constraints are much better able to maintain consistency and also have much better termination properties. This is primarily because constraints introduce a well-defined language where concrete properties can be proven and specific algorithms have been developed, and they do not permit general goals. However, even for them it is possible to obtain answers that are not solutions. More about answers and success in CLP.
freeze/2 and when/2 are like a "goto" of logic programming. They are not pure, commutative, etc.
dif/2, on the other hand, is completely pure and declarative, monotonic, commutative etc. dif/2 is a declarative predicate, it holds if its arguments are different. As to the "preferred Prolog programming style": State what holds. If you want to express the constraint that two general terms are different, dif/2 is exactly what states this.
Procedural considerations are typically most needed when you do not program in the pure declarative subset of Prolog, but use impure predicates that are not commutative etc. In modern Prolog systems, there is little reason to ever leave the pure and declarative subset.
There is a paper by Evan Tick explaining CC (committed choice):
Informally, a procedure invocation commits to a clause by matching the head arguments (passive unification) and satisfying the guard goals. When a goal can commit to more than one clause in a procedure, it commits to one of them non-deterministically (the other candidates are thrown away). Structures appearing in the head and guard of a clause cause suspension of execution if the corresponding argument of the goal is not sufficiently instantiated. A suspended invocation may be resumed later when the variable associated with the suspended invocation becomes sufficiently instantiated.
THE DEEVOLUTION OF CONCURRENT LOGIC PROGRAMMING LANGUAGES
Evan Tick - 1995
https://core.ac.uk/download/pdf/82787481.pdf
So I guess that with some when/2 magic, committed-choice code can be rewritten into ordinary Prolog. The approach would be as follows. For a set of committed-choice rules of the same predicate:
H1 :- G1 | B1.
...
Hn :- Gn | Bn.
This can be rewritten into the following, where Hi' and Gi' need to implement passive unification, for example by using subsumes_term/2 from the ISO corrigendum.
H1' :- G1', !, B1.
...
Hn' :- Gn', !, Bn.
H :- term_variables(H, L), when_nonvar(L, H).
The above translation won't work for CHR, since CHR doesn't
throw away candidates.
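To make the scheme a bit more concrete, here is a loose sketch for a hypothetical committed-choice maximum relation, simplifying passive unification to groundness checks on the guarded arguments and using when/2 in place of the when_nonvar/2 helper above (the names and the simplifications are mine, not part of the cited paper):

% Committed-choice source (not Prolog syntax):
%   max(X, Y, X) :- X >= Y | true.
%   max(X, Y, Y) :- Y >  X | true.
% Sketch of a translation for numeric arguments:
cc_max(X, Y, Z) :- ground(X-Y), X >= Y, !, Z = X.
cc_max(X, Y, Z) :- ground(X-Y), Y >  X, !, Z = Y.
cc_max(X, Y, Z) :- \+ ground(X-Y), when(ground(X-Y), cc_max(X, Y, Z)).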

Theorem Proof Using Prolog

How can I write theorem proofs using Prolog?
I have tried to write it like this:
parallel(X,Y) :-
    perpendicular(X,Z),
    perpendicular(Y,Z),
    X \== Y,
    !.
perpendicular(X,Y) :-
    perpendicular(X,Z),
    parallel(Z,Y),
    !.
Can you help me?
I was reluctant to post an Answer because this Question is poorly framed. Thanks to theJollySin for adding clean formatting! Something omitted in the rewrite, indicative of what Aman had in mind, was "I inter in Loop" (sic).
We don't know what query was entered that resulted in this looping, so speculation is required. The two rules suggest that Goal involved either the parallel/2 or the perpendicular/2 predicate.
With practice it's not hard to understand what the Prolog engine will do when a query is posed, especially a single goal query. Prolog uses a pretty simple "follow your nose" strategy in attempting to satisfy a goal. Look for the rules for whichever predicate is invoked. Then see if any of those rules, starting with the first and going down in the list of them, can be applied.
There are three topics that beginning Prolog programmers will typically struggle with. One is the recursive nature of the search the Prolog engine makes. Here the only rule for parallel/2 has a right-hand side that invokes two subgoals for perpendicular/2, while the only rule for perpendicular/2 invokes both a subgoal for itself and another subgoal for parallel/2. One should expect that trying to satisfy either kind of query inevitably leads to a Hydra-like struggle with bifurcating heads.
The second topic we see in this example is the use of free variables. If we are to gain knowledge about perpendicularity or parallelism of two specific lines (geometry), then somehow the query or the rules need to provide "binding" of variables to "ground" terms. Again without the actual Goal being queried, it's hard to guess how Aman expected that to work. Perhaps there should have been "facts" supplied about specific lines that are perpendicular or parallel. Lines could be represented merely as atoms (perhaps lowercase letters), but Prolog variables are names that begin with an uppercase letter (as in the two given rules) or with an underscore (_) character.
Finally the third topic that can be quite confusing is how Prolog handles negation. There's only a touch of that in these rules, the place where X \== Y is invoked. But even that brief subgoal requires careful understanding. Prolog implements "negation as failure", so that X \== Y succeeds if and only if X == Y does not succeed. This latter goal is also subtle, because it asks whether X and Y are the same without trying to do any unification. Thus if these are different variables, both free, then X == Y fails (and X \== Y succeeds). On the other hand, the only way for X == Y to succeed (and thus for X \== Y to fail) would be if both variables were bound to the same ground term. As discussed above the two rules as stated don't provide a way for that to be the case, though something might have taken care of this in the query Goal.
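A small sketch of this behaviour at the toplevel (exact output varies between systems):

?- X \== Y.
true.

?- X = Y, X \== Y.
false.

?- X = a, Y = a, X == Y.
X = a,
Y = a.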
The homework assignment for Aman is to learn about these Prolog topics:
recursion
free and bound variables
negation
Perhaps more concrete suggestions can then be made about Prolog doing geometry proofs!
Added: PTTP (Prolog Technology Theorem Prover) was written by M.E. Stickel in the late 1980's, and this 2006 web page describes it and links to a download.
It also summarizes succinctly why Prolog alone is not "a full general-purpose theorem-proving system." Pointers to later, more capable theorem provers can be followed there as well.
