Defining the material conditional in Prolog

I have been trying to acclimate to Prolog and Horn clauses, but the transition from formal logic still feels awkward and forced. I understand there are advantages to having everything in a standard form, but:
What is the best way to define the material conditional operator --> in Prolog, where A --> B succeeds when either A = true and B = true OR A = false? That is, an if->then statement that doesn't fail when the "if" part is false, even without an else.
Also, what exactly are the non-obvious advantages of Horn clauses?

What is the best way to define the material conditional operator --> in Prolog
When A and B are just variables to be bound to the atoms true and false, this is easy:
cond(false, _).
cond(_, true).
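A quick truth-table check at the top level (the behaviour follows directly from the two clauses above):
?- cond(true,  true).     % succeeds (second clause)
?- cond(true,  false).    % fails
?- cond(false, true).     % succeeds (both clauses apply)
?- cond(false, false).    % succeeds (first clause)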
But in general, there is no best way because Prolog doesn't offer proper negation, only negation as failure, which is non-monotonic. The closest you can come with actual propositions A and B is often
(\+ A ; B)
which tries to prove A and then goes on to B if A cannot be proven. Note that failure to prove A does not mean A is actually false; it is only treated as false under the closed-world assumption.
Negation, however, should be used with care in Prolog.
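As a small sketch of why this matters, consider negation as failure in a toy program (all names here are invented for illustration):
bird(tweety).
bird(sam).
penguin(tweety).

% "X flies if X is a bird and X cannot be proven to be a penguin"
flies(X) :- bird(X), \+ penguin(X).

?- flies(sam).    % succeeds: penguin(sam) cannot be proven
If the fact penguin(sam). is later added, the same query fails: adding knowledge retracted a conclusion, which is exactly the non-monotonic behaviour mentioned above.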
Also, what exactly are the non-obvious advantages of Horn clauses?
That they have a straightforward procedural reading. Prolog is a programming language, not a theorem prover. It's possible to write programs that have a clear logical meaning, but they're still programs.
To see the difference, consider the classical problem of sorting. If L is a list of numbers without duplicates, then
sort(L, S) :-
    permutation(L, S),
    sorted(S).

sorted([]).
sorted([_]).
sorted([X,Y|L]) :-
    X < Y,
    sorted([Y|L]).
is a logical specification of what it means for S to contain the elements of L in sorted order. However, it also has a procedural meaning, which is: try the permutations of L one by one until you find one that is sorted. In the worst case this procedure runs through all n! permutations, even though sorting can be done in O(n lg n) time, making it a very poor sorting program.
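For instance, loading these clauses under a non-reserved name (say perm_sort/2, since sort/2 is a built-in in most systems, and assuming the usual permutation/2 from the list library), the interaction looks roughly like:
?- perm_sort([2,3,1], S).
S = [1, 2, 3] ;
false.
Every permutation of [2,3,1] that is not sorted is generated and then rejected by sorted/1, which is where the n! worst-case behaviour comes from.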

Related

Does the Prolog symbol :- mean Implies, Entails or Proves?

In Prolog we can write very simple programs like this:
mammal(dog).
mammal(cat).
animal(X) :- mammal(X).
The last line uses the symbol :- which informally lets us read the final rule as: if X is a mammal then it is also an animal.
I am beginning to learn Prolog and trying to establish which of the following is meant by the symbol :-
Implies (⇒)
Entails (⊨)
Provable (⊢)
In addition, I am not clear on the difference between these three. I am trying to read threads like this one (https://math.stackexchange.com/questions/286077/implies-rightarrow-vs-entails-models-vs-provable-vdash), but the discussion is at a level above my capability.
My thinking:
Prolog works by pattern-matching symbols (unification and search) and so we might be tempted to say the symbol :- means 'syntactic entailment'. However this would only be true of queries that are proven to be true as a result of that syntactic process.
The symbol :- is used to create a database of facts, and therefore is semantic in nature. That means it could be one of Implies (⇒) or Entails (⊨) but I don't know which.
Neither. Or rather, if anything, it is closest to implication. The other two symbols belong to the meta-language; they are statements about formulas, not connectives inside them. The Mathematics Stack Exchange answers explain this quite nicely.
To see why :- is not quite an implication either, consider:
p :- p.
In logic, both truth values make this a valid sentence. But in Prolog we stick to the minimal model. So p is false. Prolog uses a subset of predicate logic such that there actually is only one minimal model. And worse, Prolog's actual default execution strategy makes this an infinite loop.
Nevertheless, the most intuitive way to read LHS :- RHS. is to see it as a way to generate new knowledge: provided RHS is true, it follows that LHS is true as well. This way one avoids all the paradoxes related to implication.
The right-to-left direction is a bit counterintuitive. It is motivated by Prolog's actual execution strategy (which goes left-to-right in this representation).
:- is usually read as if, so something like:
a :- b, c .
reads as

    a is true if b and c are true.

In formal logic, the above would be written as

    a ← b ∧ c

or

    b and c imply a.
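For example, adding two facts makes this reading concrete:
b.
c.
a :- b, c.

?- a.    % succeeds, because both b and c are provable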

Difference between two variant implementations

Is there any logical difference between these two implementations of a variant predicate?
variant1(X, Y) :-
    subsumes_term(X, Y),
    subsumes_term(Y, X).

variant2(X_, Y_) :-
    copy_term(X_, X),
    copy_term(Y_, Y),
    numbervars(X, 0, N),
    numbervars(Y, 0, N),
    X == Y.
Neither variant1/2 nor variant2/2 implement a test for being a syntactic variant. But for different reasons.
The goal variant1(f(X,Y),f(Y,X)) should succeed but fails. For some cases where the same variable appears on both sides, variant1/2 does not behave as expected. To fix this, use:
variant1a(X, Y) :-
    copy_term(Y, YC),
    subsumes_term(X, YC),
    subsumes_term(YC, X).
The goal variant2(f('$VAR'(0),_),f(_,'$VAR'(0))) should fail but succeeds. Clearly, variant2/2 assumes that no '$VAR'/1 occur in its arguments.
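These behaviours can be checked directly at the top level (using the definitions above):
?- variant1(f(X,Y), f(Y,X)).                    % fails, although the terms are variants
?- variant1a(f(X,Y), f(Y,X)).                   % succeeds
?- variant2(f('$VAR'(0),_), f(_,'$VAR'(0))).    % succeeds, although the terms are not variants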
ISO/IEC 13211-1:1995 defines variants as follows:
7.1.6.1 Variants of a term
Two terms are variants if there is a bijection s of the
variables of the former to the variables of the latter such that
the latter term results from replacing each variable X in the
former by Xs.
NOTES
1 For example, f(A, B, A) is a variant of f(X, Y, X),
g(A, B) is a variant of g(_, _), and P+Q is a variant of
P+Q.
2 The concept of a variant is required when defining bagof/3
(8.10.2) and setof/3 (8.10.3).
Note that the Xs above is not a variable name but rather (X)s. So s is here a bijection, which is a special case of a substitution.
Here, all examples refer to typical usages in bagof/3 and setof/3 where variables happen to be always disjoint, but the more subtle case is when there are common variables.
In logic programming, the usual definition is rather:
V is a variant of T iff there exists σ and θ such that
Vσ and T are identical
Tθ and V are identical
In other words, they are variants if both match each other. However, the notion of matching, as it is used in formal logic, is pretty alien to Prolog programmers. Here is a case which makes many Prolog programmers panic:
Consider f(X) and f(g(X)). Does f(g(X)) match f(X) or not? Many Prolog programmers will now shrug their shoulders and mumble something about the occurs-check. But this is entirely unrelated to the occurs-check. They match, yes, because
f(X){ X ↦ g(X) } is identical to f(g(X)).
Note that this substitution replaces every occurrence of X with g(X). How can this happen? In fact, it cannot happen with Prolog's typical term representation as a graph in memory: there the variable X is an actual address somewhere in memory, and you cannot perform such an operation at all. But in logic things happen on an entirely textual level. It's just like
sed 's/\<X\>/g(X)/g'
except that one can also replace variables simultaneously. Think of { X ↦ Y, Y ↦ X}. They have to be replaced at once, otherwise f(X,Y) would shrink into f(X,X) or f(Y,Y).
So this definition, while formally perfect, relies on notions that have no direct correspondence in Prolog systems.
Similar problems arise when one-sided unification is considered, which is not matching but rather sits between unification and matching.
According to ISO/IEC 13211-1:1995 Cor.2:2012 (draft):
8.2.4 subsumes_term/2
This built-in predicate provides a test for syntactic one-sided unification.
8.2.4.1 Description
subsumes_term(General, Specific) is true iff there is a substitution θ such that
a) Generalθ and Specificθ are identical, and
b) Specificθ and Specific are identical.
Procedurally, subsumes_term(General, Specific) simply succeeds or fails accordingly. There is no side effect or unification.
For your definition of variant1/2, subsumes_term(f(X,Y),f(Y,X)) already fails.

Prolog without if and else statements

I am currently trying to learn some basic Prolog. As I learn, I want to stay away from if-else statements to really understand the language. I am having trouble doing this, though. I have a simple function that looks like this:
if a > b then 1
else if a == b then c
else -1;;
This is just very simple logic that I want to convert into prolog.
So here is where I get very confused. I want to first check if a > b and, if so, output 1. Would I simply just do:
sample(A,B,C,O):-
A > B, 1,
A < B, -1,
0.
This is what I came up with, O being the output, but I do not understand how to make 1 the output. Any thoughts to help me better understand this?
After going at it some more I came up with this but it does not seem to be correct:
Greaterthan(A,B,1.0).
Lessthan(A,B,-1.0).
Equal(A,B,C).
Sample(A,B,C,What):-
Greaterthan(A,B,1.0),
Lessthan(A,B,-1.0),
Equal(A,B,C).
Am I headed down the correct track?
If you really want to try to understand the language, I recommend using CapelliC's first suggestion:
sample(A, B, _, 1) :- A > B.
sample(A, B, C, C) :- A == B.
sample(A, B, _, -1) :- A < B.
I disagree with CapelliC that you should use the if/then/else syntax, because that way (in my experience) it's easy to fall into the trap of translating the different constructs, ending up doing procedural programming in Prolog, without fully grokking the language itself.
TL;DR: Don't.
You are trying to translate constructs you know from other programming languages to Prolog, with the assumption that learning Prolog essentially means mapping one construct after the other into Prolog. After all, once all constructs have been mapped, you would be able to encode any program in Prolog.
However, by doing that you are missing the essence of Prolog altogether.
Prolog consists of a pure, monotonic core and some procedural adornments. If you want to understand what distinguishes Prolog so much from other programming languages you really should study its core first. And that means, you should ignore those other parts. You have only so much attention span, and if you waste your time with going through all of these non-monotonic, even procedural constructs, chances are that you will miss its essence.
So, why is a general if-then-else (as it has been proposed by several answers) such a problematic construct? There are several reasons:
In the general case, it breaks monotonicity. In pure monotonic Prolog programs, adding a new fact will increase the set of true statements you can derive from it, so everything that was true before adding the fact will still be true afterwards. It is this property which permits one to reason very effectively about programs. However, note that monotonicity means that you cannot model every situation you might want to model. Think of a predicate childless/1 that should succeed if a person does not have a child, and let's assume that childless(john) is true. Now, if you add a new fact about john being the parent of some child, it will no longer hold that childless(john) is true (see the sketch after this list). So there are situations that inherently demand some non-monotonic constructs. But there are many situations that can be modeled in the monotonic part. Stick to those first.
if-then-else easily leads to hard-to-read nesting. Just look at your if-then-else-program and try to answer "When will the result be -1"? The answer is: "If neither a > b is true nor a == b is true". Lengthy, isn't it? So the people who will maintain, revise and debug your program will have to "pay".
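To make the monotonicity point from the first item concrete, here is a small sketch of the childless/1 situation (all names invented for illustration):
person(john).
person(mary).
parent_of(mary, bob).

childless(P) :- person(P), \+ parent_of(P, _).

?- childless(john).    % succeeds
After adding the fact parent_of(john, ann). the same query fails: a new fact removed a previously derivable conclusion, so the construct is non-monotonic.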
From your example it is not clear what kinds of arguments you are considering. If you are happy with integers, consider using library(clpfd), which is available in SICStus, SWI and YAP:
sample(A,B,_,1) :- A #> B.
sample(A,B,C,C) :- A #= B.
sample(A,B,_,-1) :- A #< B.
This definition is now so general that you might even ask:
When will -1 be returned?
?- sample(A,B,C,-1).
A = B, C = -1, B in inf..sup
; A#=<B+ -1.
So there are two possibilities.
Here are some addenda to CapelliC's helpful answer:
When starting out, it is sometimes easy to mistakenly conceive of Prolog predicates functionally. They are either not functions at all, or they are n-ary functions which only ever yield true or false as outputs. However, I often find it helpful to forget about functions and just think of predicates relationally. When we define a predicate p/n, we're describing a relation between n elements, and we've named the relation p.
In your case, it sounds like we're defining conditions on an ordered triplet, <A, B, C>, where the value of C depends upon the relation between A and B. There are three relevant relationships between A and B (here, since we are dealing with a simple case, these three are exhaustive for the kind of relationship in question), and we can simply describe what value C should have in the three cases.
sample(A, B, 1.0) :-
    A > B.
sample(A, B, -1.0) :-
    A < B.
sample(A, B, some_value) :-
    A =:= B.
Notice that I have used the arithmetical operator =:=/2. This is more specific than ==/2, and it lets us compare mathematical expressions for numerical equality. ==/2 checks for equivalence of terms: a == a, 2 == 2, 5+7 == 5+7 are all true, because equivalent terms stand on the left and right of the operator. But 5+7 == 7+5, 5+7 == 12, A == a are all false, since it is the terms themselves which are being compared: in the first case the arguments are reversed, in the second we're comparing +(5,7) with an integer, and in the third we're comparing a free variable with an atom. The following, however, are true: 2 =:= 2, 5 + 7 =:= 12, 2 + 2 =:= 4 + 0. This lets us call the predicate with evaluable arithmetic expressions for A and B, rather than just integers or floats. We can then pose queries such as
?- sample(2^3, 2+2+2, X).
X = 1.0
?- sample(2*3, 2+2+2, X).
X = some_value.
CapelliC points out that when we write multiple clauses for a predicate, we are expressing a disjunction. He is also careful to note that this particular example works as a plain disjunction only because the alternatives are by nature mutually exclusive. He shows how to get the same exclusivity entailed by the structure of your first "if ... then ... else if ... else ..." by intervening in the resolution procedure with cuts. In fact, if you consult the swi-prolog docs for the conditional ->/2, you'll see the semantics of ->/2 explained with cuts, !, and disjunctions, ;.
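As an illustration (a hypothetical helper, not the actual built-in), an if-then-else can be sketched with a cut:
% if_then_else(Cond, Then, Else): commit to the first solution of Cond and
% run Then; if Cond has no solution, run Else instead.
if_then_else(Cond, Then, _Else) :- call(Cond), !, call(Then).
if_then_else(_Cond, _Then, Else) :- call(Else).
This mirrors how ( Cond -> Then ; Else ) is usually explained: the arrow commits to the first solution of Cond, discarding any alternatives.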
I come down midway between CapelliC and SQB in prescribing use of the control predicates. I think you are wise to stick with defining such things with separate clauses while you are still learning the basics of the syntax. However, ->/2 is just another predicate with some syntax sugar, so you oughtn't be afraid of it. Once you start thinking relationally instead of functionally or imperatively, you might find that ->/2 is a very nice tool for giving concise expression to patterns of relation. I would format my clause using the control predicates thus:
sample(A, B, Out) :-
    (   A > B   -> Out = 1.0
    ;   A =:= B -> Out = some_value
    ;   Out = -1.0
    ).
Your code has both syntactic and semantic issues.
Predicate names start with a lower case letter, and the comma represents a conjunction. That is, you could read your clause as
sample(A,B,C,What) if
greaterthan(A,B,1.0) and lessthan(A,B,-1.0) and equal(A,B,C).
Then note that the What argument is useless, since it never gets a value; such a variable is called a singleton.
A possible way of writing the disjunction (i.e. OR):
sample(A,B,_,1) :- A > B.
sample(A,B,C,C) :- A == B.
sample(A,B,_,-1) :- A < B.
Note the test A < B guarding the assignment of the value -1. That's necessary because Prolog will try all clauses if required. The basic construct to force Prolog to avoid a computation we know should not be done is the cut:
sample(A,B,_,1) :- A > B, !.
sample(A,B,C,C) :- A == B, !.
sample(A,B,_,-1).
Anyway, I think you should use the if/then/else syntax, even while learning.
sample(A,B,C,W) :- A > B -> W = 1 ; A == B -> W = C ; W = -1.
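For example, with that last definition (a quick sketch of its behaviour):
?- sample(3, 2, c, W).    % W = 1
?- sample(2, 2, c, W).    % W = c
?- sample(1, 2, c, W).    % W = -1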

Does Prolog use Eager Evaluation?

Because Prolog uses chronological backtracking (from the Prolog Wikipedia page) even after an answer is found (in this example, where there can only be one solution), would this justify saying that Prolog uses eager evaluation?
mother_child(trude, sally).
father_child(tom, sally).
father_child(tom, erica).
father_child(mike, tom).
sibling(X, Y) :- parent_child(Z, X), parent_child(Z, Y).
parent_child(X, Y) :- father_child(X, Y).
parent_child(X, Y) :- mother_child(X, Y).
With the following output:
?- sibling(sally, erica).
true ;
false.
To summarize the discussion with Will Ness below: yes, Prolog is strict. However, Prolog's execution model and semantics are substantially different from the languages that are usually labelled strict or non-strict. For more about this, see below.
I'm not sure the question really applies to Prolog, because it doesn't really have the kind of implicit evaluation ordering that other languages have. Where this really comes into play in a language like Haskell, you might have an expression like:
f (g x) (h y)
In a strict language like ML, there is a defined evaluation order: g x will be evaluated, then h y, and f (g x) (h y) last. In a language like Haskell, g x and h y will only be evaluated as required ("non-strict" is more accurate than "lazy"). But in Prolog,
f(g(X), h(Y))
does not have the same meaning, because it isn't using a function notation. The query would be broken down into three parts, g(X, A), h(Y, B), and f(A,B,C), and those constituents can be placed in any order. The evaluation strategy is strict in the sense that what comes earlier in a sequence will be evaluated before what comes next, but it is non-strict in the sense that there is no requirement that variables be instantiated to ground terms before evaluation can proceed. Unification is perfectly content to complete without having given you values for every variable. I am bringing this up because you have to break down a complex, nested expression in another language into several expressions in Prolog.
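A sketch of what that looks like, with g/2, h/2 and f/3 as hypothetical relational counterparts of the functions above (the last argument holding the "result"):
% f (g x) (h y) in functional notation becomes an explicit conjunction:
call_f(X, Y, C) :-
    g(X, A),       % A plays the role of "g x"
    h(Y, B),       % B plays the role of "h y"
    f(A, B, C).    % C is the overall result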
Backtracking has nothing to do with it, as far as I can tell. I don't think backtracking to the nearest choice point and resuming from there precludes a non-strict evaluation method, it just happens that Prolog's is strict.
That Prolog pauses after giving each of the several correct answers to a problem has nothing to do with laziness; it is a part of its user interaction protocol. Each answer is calculated eagerly.
Sometimes there will be only one answer but Prolog doesn't know that in advance, so it waits for us to press ; to continue search, in hopes of finding another solution. Sometimes it is able to deduce it in advance and will just stop right away, but only sometimes.
update:
Prolog does no evaluation on its own. All terms are unevaluated, as if "quoted" in Lisp.
Prolog will unfold your predicate definitions as written and is perfectly happy to keep your data structures full of unevaluated uninstantiated holes, if so entailed by your predicate definitions.
Haskell does not need any values, a user does, when requesting an output.
Similarly, Prolog produces solutions one-by-one, as per the user requests.
Prolog can even be seen as lazier than Haskell, where all arithmetic is strict, i.e. immediate; in Prolog you have to explicitly request arithmetic evaluation, with is/2.
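For example:
?- X = 1 + 2.     % unification only: X is bound to the term +(1,2)
X = 1+2.
?- X is 1 + 2.    % is/2 explicitly requests arithmetic evaluation
X = 3.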
So perhaps the question is ill-posed. Prolog's operations model is just too different. There are no "results" nor "functions", for one; but viewed from another angle, everything is a result, and predicates are "multi"-functions.
As it stands, the question is not correct in what it states. Chronological backtracking does not mean that Prolog will necessarily backtrack "in an example where there can be only one solution".
Consider this:
foo(a, 1).
foo(b, 2).
foo(c, 3).
?- foo(b, X).
X = 2.
?- foo(X, 2).
X = b.
So this is an example that does have only one solution and Prolog recognizes that, and does not attempt to backtrack. There are cases in which you can implement a solution to a problem in a way that Prolog will not recognize that there is only one logical solution, but this is due to the implementation and is not inherent to Prolog's execution model.
You should read up on Prolog's execution model. From the Wikipedia article which you seem to cite, "Operationally, Prolog's execution strategy can be thought of as a generalization of function calls in other languages, one difference being that multiple clause heads can match a given call. In that case, [emphasis mine] the system creates a choice-point, unifies the goal with the clause head of the first alternative, and continues with the goals of that first alternative." Read Sterling and Shapiro's "The Art of Prolog" for a far more complete discussion of the subject.
From Wikipedia I got:
In eager evaluation, an expression is evaluated as soon as it is bound to a variable.
Then I think there are two levels. At the user level (our predicates) Prolog is not eager.
But it is at the 'system' level, because variables are implemented as efficiently as possible.
Indeed, attributed variables are implemented to be lazy, and are rather 'orthogonal' to 'logic' Prolog variables.

PROLOG predicate order

I've got a very large number of equations which I am trying to use PROLOG to solve. However, I've come a minor cropper in that they are not specified in any sort of useful order- that is, some, if not many variables, are used before they are defined. These are all specified within the same predicate. Can PROLOG cope with the predicates being specified in a random order?
Absolutely... ni (Italian slang for "yes and no").
That is, ideally Prolog requires that you specify what must be computed, not how, writing down the equations controlling the solution in a fairly general logical form, Horn clauses.
But this ideal is far from being realized, and this is where we, as programmers, play a role. You should try to topologically sort the formulae if you want Prolog to just apply the arithmetic/algorithms.
But at that point, Prolog is no more useful than any other procedural language. It just makes such a topological sort easier, in the sense that formulas can be read in (the term reader is a full Prolog parser), variables identified and quantified easily, terms transformed, evaluated, and the like (metalanguage features, a strong point of Prolog).
The situation changes if you can use CLP(FD). Just one example, a bidirectional factorial (cool, isn't it?), from the documentation of the shining implementation that Markus Triska developed for SWI-Prolog:
You can also use CLP(FD) constraints as a more declarative alternative for ordinary integer arithmetic with is/2, >/2 etc. For example:
:- use_module(library(clpfd)).
n_factorial(0, 1).
n_factorial(N, F) :-
    N #> 0,
    N1 #= N - 1,
    F #= N * F1,
    n_factorial(N1, F1).
This predicate can be used in all directions. For example:
?- n_factorial(47, F).
F = 258623241511168180642964355153611979969197632389120000000000 ;
false.
?- n_factorial(N, 1).
N = 0 ;
N = 1 ;
false.
?- n_factorial(N, 3).
false.
To make the predicate terminate if any argument is instantiated, add the (implied) constraint F #\= 0 before the recursive call. Otherwise, the query n_factorial(N, 0) is the only non-terminating case of this kind.
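A sketch of that variant, with the extra constraint placed just before the recursive call:
n_factorial(0, 1).
n_factorial(N, F) :-
    N #> 0,
    N1 #= N - 1,
    F #= N * F1,
    F #\= 0,
    n_factorial(N1, F1).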
Thus, if you write your equations in CLP(FD), you get a much better chance of having your 'equation system' solved as is. SWI-Prolog also has dedicated debugging support for the low-level details used to solve CLP(FD) constraints.
HTH

Resources