I'm learning Prolog at my university and I'm stuck on a question. Note that I'm a newbie in Prolog and I don't even know the correct names for Prolog elements.
I need to define a recursive rule in my .pl file and I don't know if I need a "base step" on my rule. Check my rule:
recur_disciplinas(X, Y) :- requisito(X, Y).
recur_disciplinas(X, Y) :- requisito(X, Z), recur_disciplinas(Z, Y).
This is working, but couldn't I do something like the following?
recur_disciplinas(X, Y) :- requisito(X, Z), recur_disciplinas(Z, Y).
What happens when I declare the same "rule name" (recur_disciplinas(X,Y) :-) two times? Does something like an overwrite occur?
I'm currently using swi-prolog. Thank you so much, guys!
The best way to understand Prolog rules is to look at the :- operator, which is a 1970s rendering of an arrow (yes, the assignment operator := in Pascal was meant as an arrow, too). So you look at what is on the right-hand side and say: provided all of that is true, I can conclude what is on the left-hand side. So you read right-to-left with your rule:
recur_disciplinas(X, Y) :- requisito(X, Z), recur_disciplinas(Z, Y).
% ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ read
You say: provided there are some X, Y and Z such that the right-hand side is true, we can conclude that recur_disciplinas(X, Y) holds. Now, let's generalize this by removing requisito(X, Z). What is left is:
recur_disciplinas(X, Y) :- /******/ recur_disciplinas(Z, Y).
So you can conclude from recur_disciplinas(Z, Y) that recur_disciplinas(X, Y) holds. But you have nothing to start that conclusion from! So effectively this means that there is no solution to this relation at all.
It's like saying: provided I can fly, I will fly like a bird.
Maybe that is true, but as long as you do not fly, it is all in vain.
See this answer for how to express your relation more compactly. A goal closure(requisito, X, Y) suffices! And it would even deal with potential loops.
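For a flavour, here is a minimal sketch of such a closure/3 (my simplification, not the code from the linked answer; it assumes the relation's terms are ground and uses a visited list to avoid cycles):

closure(R_2, X0, X) :-
    call(R_2, X0, X1),
    closure_(R_2, X1, X, [X1,X0]).

closure_(_R_2, X, X, _Visited).          % reached X: one solution
closure_(R_2, X1, X, Visited) :-
    call(R_2, X1, X2),
    \+ memberchk(X2, Visited),           % never revisit a node (assumes ground terms)
    closure_(R_2, X2, X, [X2|Visited]).

With this, ?- closure(requisito, X, Y). enumerates the same pairs your two-clause recur_disciplinas/2 describes, and it terminates even on cyclic data.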
As a side remark, I suspect that recur is a verb, even an imperative. Right? Try to avoid imperatives for relations. Imperatives are good for changing things: "switch on the light" changes the world from one where the light is off to one where it is on. Imperatives are good for telling a mindless entity what to do. If you want to reason about things instead, imperatives are simply misplaced. Focus on what should be the case and what not.
If you give the same rule name more than once, it creates an or-branch in your control flow. Prolog will try to unify the goal with the first clause. If that fails, it will try the second clause, then the third, and so on.
In the code above, the recur_disciplinas rule will first try to find a matching requisito. If that fails, it will try to find a requisito-of-a-requisito, transitively and recursively.
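To make this concrete, here is a toy run (the requisito/2 facts are invented for this illustration; only the predicate names come from the question):

requisito(calc2, calc1).    % invented: calc2 requires calc1
requisito(calc3, calc2).    % invented: calc3 requires calc2

recur_disciplinas(X, Y) :- requisito(X, Y).
recur_disciplinas(X, Y) :- requisito(X, Z), recur_disciplinas(Z, Y).

?- recur_disciplinas(calc3, Y).
Y = calc2 ;     % first clause: a direct requisito
Y = calc1 ;     % second clause: requisito of a requisito
false.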
If you don't put a base clause, Prolog will always try the recursive clause, thus it may enter an infinite loop.
Writing base conditions is not unique to Prolog. It is the same with every language that allows recursion. If there is no halting condition, your function will enter an infinite loop.
Consider this equivalent procedural pseudo-code:
def find_disciplinas(X, Y):
    if find_requisito(X, Y):                       # halting condition
        return (X, Y)
    else:                                          # recursive call
        for all Z such that find_requisito(X, Z):
            return find_disciplinas(Z, Y)
If your requisito records include a cycle and you remove the halting condition, the above procedure will loop indefinitely.
Here we say recur_disciplinas/2 is a predicate with two arguments, and you have asked whether two clauses (rules) for the predicate are necessary.
As the other answers have said, one needs a "base case" in recursion so that the recursion terminates, as is usually desirable! The most common arrangement is like your first example: the first rule is the terminating condition (base case) and the second rule is the recursive step (induction case). Someone reading your code will likely find this arrangement familiar and easy to understand.
However, the base case and the recursion step MAY be combined into a single rule, and this is sometimes useful. For example, we could use the OR syntax:
recur_disciplinas(X, Y) :-
    requisito(X, Y) ; ( requisito(X, Z), recur_disciplinas(Z, Y) ).
Here ; means OR, and this single rule produces essentially the same search for solutions as your original two-rule version.
It is also possible that there can be multiple base cases, each with their own rules or written into a more complicated "combination" rule. As with any programming discipline, clarity and correctness should be prized over mere brevity in code.
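For instance, here is a sketch with two base cases (equivalencia/2 is an invented relation, purely for illustration):

recur_disciplinas(X, Y) :- requisito(X, Y).        % base case 1
recur_disciplinas(X, Y) :- equivalencia(X, Y).     % base case 2 (invented)
recur_disciplinas(X, Y) :- requisito(X, Z), recur_disciplinas(Z, Y).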
In some unusual circumstances it can be advantageous to put the recursive step in the first rule and move the base case (or cases) into following rules. This requires extra care to ensure the termination condition will always be reached, since you are unlikely to want code that can loop endlessly. The Prolog engine always starts with the first rule when a predicate is invoked; the following rules are tried only when the first rule fails or when further solutions are requested on backtracking.
Related
I have the following Prolog Program:
p(f(X), Y) :- p(g(X), g(Y)).
p(g(X), Y) :- p(f(Y), f(X)).
p(f(a), g(b)).
The Prolog proof tree has to be drawn for the query p(X, Y).
Question:
Why is Y matched to Y1/Y and not to Y/Y1, and why is Y used further on?
If I match a predicate (e.g. p(X, Y)), I get a new predicate (e.g. p(g(X1), g(Y))). Why does p(g(X1), g(Y)) have just one subtree? I mean, shouldn't it have 3, since the knowledge base contains 3 statements, instead of just 1?
And why is each layer of the tree matched with something like X2/X1 and so on, and not with the predicate before?
Shouldn't it be g(X1)/f(X5), g(Y1)/Y5?
Note: maybe it seems that I have never done a tutorial or anything, but I did. I appreciate any help.
To be honest, I have rarely seen a worse method to explain Prolog than what you show here.
Yes, I expect the author meant Y/Y1 instead of Y1/Y in both cases, otherwise the notation would be quite inconsistent.
As to your other questions: You are facing the usual problems that arise from taking such an extremely operational view of Prolog. The core issue is that this method doesn't scale: you do not have the mental capacity to carry this approach through. Don't take this personally: humans in general are bad at keeping all the details of an execution tree that grows exponentially in mind. This makes the whole approach extremely cumbersome and error-prone. For comparison, consider that human grandmasters stopped competing against chess computers many years ago. In this concrete case, note for example that the rightmost branch does not even arise in actual Prolog execution, yet the graph wrongly suggests that it does!
Part of the problem here is a confusion in terminology: please note that Prolog uses unification (not "matching", which is one-sided unification). When you unify a goal with a clause head and the unification succeeds, you get bindings for variables. You continue with these bindings in place.
To make the whole approach remotely feasible, consider fragments of your program.
For example, suppose I only give you the following fact:
p(f(a), g(b)).
And you then query:
?- p(X, Y).
X = f(a),
Y = g(b).
This answer shows the bindings for X and Y. First make sure you understand this, and understand the difference between these bindings and a "new predicate" (which does not arise!).
Also, there are no "statements", but 3 clauses, which are logical alternatives.
Now, again to simplify the whole task, consider the following fragment of your program, in which I only look at the two rules:
p(f(X), Y) :- p(g(X), g(Y)).
p(g(X), Y) :- p(f(Y), f(X)).
Already with this program, we get:
?- p(X, Y).
nontermination
Adding a further pure clause cannot prevent this nontermination. Thus, I recommend you start with this reduced version of your program, and consider it in more depth.
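To see why, unfold the goal by hand for a few steps (this little trace is mine, not part of the original program):

?- p(X, Y).
%  via clause 1: X = f(X1), new goal p(g(X1), g(Y))
%  via clause 2:            new goal p(f(g(Y)), f(X1))
%  via clause 1:            new goal p(g(g(Y)), g(f(X1)))
%  ... the goals keep growing; no clause can ever close the chain.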
From there, you can add the remaining fact again, and consider the differences.
Very good questions!
Why is Y matched to Y1/Y and not to Y/Y1, and why is Y used further on?
The naming here seems a little arbitrary: they could have used Y/Y1, but then they would need to use Y1 further on. In this case, they chose Y1/Y and use Y further on. Although the author of this expression tree was inconsistent in that convention, I wouldn't be too concerned about the naming, as long as they follow the variable correctly down the tree.
If I match a predicate (e.g. p(X, Y)), I get a new predicate (e.g. p(g(X1), g(Y))). Why does p(g(X1), g(Y)) have just one subtree? I mean, shouldn't it have 3, since the knowledge base contains 3 statements, instead of just 1?
First, a word on term versus predicate. A term is only a predicate in the context of Head :- Body, in which case Head is a term that forms the head of a predicate clause. If a term is an argument to a predicate (for example, in p(g(X1), g(Y))), then g(X1) and g(Y) are not predicates. They are just terms.
More specifically in this case, the term p(g(X1), g(Y)) has only one subtree because it matches the head of only one of the 3 predicate clauses, namely the one with the head p(g(X), Y) (it matches with X = X1 and Y = g(Y)). The other two can't match, since they're of the form p(f(...), ...) and the f(...) term cannot match the g(X1) term.
And why is each layer of the tree matched with something like X2/X1 and so on, and not with the predicate before?
Shouldn't it be g(X1)/f(X5), g(Y1)/Y5?
I'm not sure I'm following this question, but the principle is this: the tree uses the same variable name if it refers to the same variable in memory, whereas a different variable name (e.g., X1 versus X) is used if it's a different variable. For example, if I have foo(X, Y) :- <some code>, bar(f(X), Y). and I have bar(X, Y) :- blah(X), ... then the X referred to in the bar predicate is different from the X referred to in the foo predicate. So we might say that in the call to foo(X, Y) we're calling bar(f(X), Y), or alternatively bar(X1, Y) where X1 = f(X).
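As a tiny runnable sketch of that idea (foo/2, bar/2 and blah/1 are the hypothetical predicates from the paragraph above):

blah(f(a)).                     % an invented fact, so the example runs

foo(X, Y) :- bar(f(X), Y).      % this X ...
bar(X1, _Y) :- blah(X1).        % ... is distinct from this X1; the call
                                % unifies X1 with f(X).

?- foo(a, _Anything).
true.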
I am reading through Learn Prolog Now!'s chapter on cuts and, at the same time, Bratko's Prolog Programming for Artificial Intelligence, Chapter 5: Controlling Backtracking. At first it seemed that a cut was a straightforward way to mimic an if-else clause known from other programming languages, e.g.
% Find the largest number
max(X,Y,Y):- X =< Y,!.
max(X,Y,X).
However, as is noted further down, this code misbehaves in cases where all variables are instantiated: it can succeed even when we expect false, e.g.
?- max(2,3,2).
true.
The reason is clear: the first rule fails, the second does not have any conditions connected to it anymore, so it will succeed. I understand that, but then a solution is proposed (here's a swish):
max(X,Y,Z):- X =< Y,!, Y = Z.
max(X,Y,X).
And I'm confused about how I should read this. I thought ! meant: "if everything that comes before this ! is true, stop execution, including any other rules with the same predicate". That can't be right, though, because that would mean the instantiation Y = Z only happens in case of failure, which would be useless for that rule.
So how should a cut be read in a 'human' way? And, as an extension, how should I read the proposed solution for max/3 above?
See also this answer and this question.
how should I read the proposed solution for max/3 above?
max(X,Y,Z):- X =< Y, !, Y = Z.
max(X,Y,X).
You can read this as follows:
When X =< Y, forget the second clause of the predicate, and unify Y and Z.
The cut throws away choice points. Choice points are marks in the proof tree that tell Prolog where to resume the search for more solutions after finding a solution. So the cut cuts away parts of the proof tree. The first link above (here it is again) discusses cuts in some detail, but big part of that answer is just citing what others have said about cuts elsewhere.
I guess the take home message is that once you put a cut in a Prolog program, you force yourself to read it operationally instead of declaratively. In order to understand which parts of the proof tree will be cut away, you (the programmer) have to go through the motions, consider the order of the clauses, consider which subgoals can create choice points, consider which solutions are lost. You need to build the proof tree (instead of letting Prolog do it).
There are many techniques you can use to avoid creating choice points you know you don't need. This however is a bit of a large topic. You should read the available material and ask specific questions.
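As an aside, a common way to keep this kind of determinism readable is the standard if-then-else construct (->)/2 together with (;)/2, which for this predicate behaves like the corrected cut version (this rewrite is mine, not from the cited material):

max(X, Y, Z) :- ( X =< Y -> Z = Y ; Z = X ).

As with the corrected cut version, max(2, 3, 2) now fails, because the then-branch commits to Z = Y.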
The problem with your code is that the cut is never reached when evaluating your query.
The first step of trying to evaluate a goal with a rule is pattern matching.
The goal max(2,3,2) doesn't match the pattern max(X,Y,Y), since the second and third arguments in the pattern are the same, while 3 and 2 don't pattern-match with each other. As such, this rule has already failed at the pattern-matching stage, so the evaluator doesn't get as far as testing X =< Y, let alone reaching the !.
But your understanding of cuts is pretty much correct. Given this code
a(X) :- b(X).
a(X) :- c(X).
b(X) :- d(X), !.
b(X) :- e(X).
c(3).
d(4).
d(5).
e(6).
and the goal
?- a(X).
The interpreter will begin with the first rule, trying to satisfy b(X). In the process, it discovers that d(4) provides a solution, so it binds the value 4 to X. Then the cut kicks in, which discards the backtracking on b(X); thus no further solutions to b(X) are found. However, it does not remove the backtracking on a(X), so if you ask Prolog for another solution it will find X = 3 through the a(X) :- c(X). rule. If you changed the first rule to a(X) :- b(X), !. then it would fail to find X = 3.
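Collected at the top level, that behaviour looks like this (output as SWI-Prolog would print it):

?- a(X).
X = 4 ;    % via b/1 and d(4); the cut discards d(5) and the e/1 clause
X = 3.     % via c/1; the choice point on a/1 survived the cut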
Although the cut means no X = 5 solution is found, if your query is
?- a(5).
then the interpreter will return true. This is because a(5) calls b(5), which calls d(5), which is defined to be true. The d(4) fact simply fails to match the goal d(5), so here it is d(5) that satisfies the body; the cut then fires as usual, but it has no bearing on the result, unlike when querying a(X).
This is an example of a red cut (see my comment on user1812457's answer). Perhaps a good reason to avoid red cuts, besides them breaking logical purity, is to avoid bugs resulting from this behaviour.
I am trying to understand how to translate the Prolog rule
brother(g(x), g(y)) :- brother(x,y).
brother(n,n).
into first-order logic.
Is ∀x,y (brother(x,y) -> brother(g(x), g(y))) a correct answer?
No, the answer is not correct.
First, decide whether x, y and n in the Prolog program are actually meant to be logical variables. In that case, you need to change the program: a Prolog variable begins with an uppercase letter or an underscore. So, suppose you change the program to:
brother(g(X), g(Y)) :- brother(X, Y).
brother(N, N).
Then the translation you give is still not sufficient to capture the declarative meaning of this logic program.
For example, using just the implication you give, can you derive a single statement that actually holds?
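For comparison (this rendering is mine, not the author's), a translation that captures both clauses is the conjunction

∀X ∀Y (brother(X, Y) -> brother(g(X), g(Y))) ∧ ∀N brother(N, N)

From the second conjunct alone you can derive concrete statements such as brother(a, a); combined with the implication you then also get brother(g(a), g(a)), and so on. The implication on its own gives you nothing to start from.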
Because Prolog uses chronological backtracking (from the Prolog Wikipedia page) even after an answer is found (in this example, where there can only be one solution), would this justify saying that Prolog uses eager evaluation?
mother_child(trude, sally).
father_child(tom, sally).
father_child(tom, erica).
father_child(mike, tom).
sibling(X, Y) :- parent_child(Z, X), parent_child(Z, Y).
parent_child(X, Y) :- father_child(X, Y).
parent_child(X, Y) :- mother_child(X, Y).
With the following output:
?- sibling(sally, erica).
true ;
false.
To summarize the discussion with @WillNess below: yes, Prolog is strict. However, Prolog's execution model and semantics are substantially different from the languages that are usually labelled strict or non-strict. For more about this, see below.
I'm not sure the question really applies to Prolog, because it doesn't really have the kind of implicit evaluation ordering that other languages have. To see where this really comes into play, consider a language like Haskell, where you might have an expression like:
f (g x) (h y)
In a strict language like ML, there is a defined evaluation order: g x will be evaluated, then h y, and f (g x) (h y) last. In a language like Haskell, g x and h y will only be evaluated as required ("non-strict" is more accurate than "lazy"). But in Prolog,
f(g(X), h(Y))
does not have the same meaning, because it isn't function notation. The query would be broken down into three parts, g(X, A), h(Y, B), and f(A, B, C), and those constituents can be placed in any order. The evaluation strategy is strict in the sense that what comes earlier in a sequence will be evaluated before what comes next, but it is non-strict in the sense that there is no requirement that variables be instantiated to ground terms before evaluation can proceed. Unification is perfectly content to complete without having given you values for every variable. I bring this up because you have to break down a complex, nested expression from another language into several goals in Prolog.
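A sketch of that breakdown with concrete, invented relations, where the last argument plays the role of the "result":

g(X, A) :- A is X + 1.      % invented relational version of g
h(Y, B) :- B is Y * 2.      % invented relational version of h
f(A, B, C) :- C is A + B.   % invented relational version of f

?- g(1, A), h(2, B), f(A, B, C).
A = 2, B = 4, C = 6.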
Backtracking has nothing to do with it, as far as I can tell. I don't think backtracking to the nearest choice point and resuming from there precludes a non-strict evaluation method; it just happens that Prolog's is strict.
That Prolog pauses after giving each of the several correct answers to a problem has nothing to do with laziness; it is a part of its user interaction protocol. Each answer is calculated eagerly.
Sometimes there will be only one answer, but Prolog doesn't know that in advance, so it waits for us to press ; to continue the search, in hopes of finding another solution. Sometimes it is able to deduce this in advance and will just stop right away, but only sometimes.
update:
Prolog does no evaluation on its own. All terms are unevaluated, as if "quoted" in Lisp.
Prolog will unfold your predicate definitions as written and is perfectly happy to keep your data structures full of unevaluated uninstantiated holes, if so entailed by your predicate definitions.
Haskell does not need any values; a user does, when requesting output.
Similarly, Prolog produces solutions one-by-one, as per the user requests.
Prolog can even be seen as lazier than Haskell, where all arithmetic is strict, i.e. immediate; in Prolog you have to explicitly request arithmetic evaluation, with is/2.
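For example (standard behaviour of (=)/2 versus is/2):

?- X = 1 + 2.
X = 1+2.        % the term stays unevaluated

?- X is 1 + 2.
X = 3.          % arithmetic happens only when requested via is/2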
So perhaps the question is ill-posed. Prolog's operational model is just too different. There are no "results" nor "functions", for one thing; but viewed from another angle, everything is a result, and predicates are "multi"-functions.
As it stands, the question is not correct in what it states. Chronological backtracking does not mean that Prolog will necessarily backtrack "in an example where there can be only one solution".
Consider this:
foo(a, 1).
foo(b, 2).
foo(c, 3).
?- foo(b, X).
X = 2.
?- foo(X, 2).
X = b.
So this is an example that does have only one solution and Prolog recognizes that, and does not attempt to backtrack. There are cases in which you can implement a solution to a problem in a way that Prolog will not recognize that there is only one logical solution, but this is due to the implementation and is not inherent to Prolog's execution model.
You should read up on Prolog's execution model. From the Wikipedia article which you seem to cite, "Operationally, Prolog's execution strategy can be thought of as a generalization of function calls in other languages, one difference being that multiple clause heads can match a given call. In that case, [emphasis mine] the system creates a choice-point, unifies the goal with the clause head of the first alternative, and continues with the goals of that first alternative." Read Sterling and Shapiro's "The Art of Prolog" for a far more complete discussion of the subject.
From Wikipedia I got:
In eager evaluation, an expression is evaluated as soon as it is bound to a variable.
Then I think there are 2 levels: at the user level (our predicates), Prolog is not eager.
But it is at the 'system' level, because variables are implemented as efficiently as possible.
Indeed, attributed variables are implemented to be lazy, and are rather 'orthogonal' to 'logic' Prolog variables.
Say I have the following theory:
a(X) :- \+ b(X).
b(X) :- \+ c(X).
c(a).
It simply says true, which is of course correct: a(X) is true because there is no b(X) (with negation as finite failure). Since there is only a b(X) if there is no c(X), and we have c(a), one can state this is true. I was wondering, however, why Prolog does not provide the answer X = a? Say, for instance, I introduce some semantics:
noOrphan(X) :- \+ orphan(X).
orphan(X) :- \+ parent(_,X).
parent(david,michael).
Of course, if I query noOrphan(michael), this will result in true, and noOrphan(david) in false (since I didn't define a parent for david). But I was wondering why there is no proactive way of detecting which persons (michael, david, ...) belong to the noOrphan/1 relation?
This probably is a result of Prolog's backtracking mechanism, but Prolog could maintain a state recording whether one is searching in the positive way (0, 2, 4, ... negations deep) or the negative way (1, 3, 5, ... negations deep).
Let's start with something simpler. Say \+ X = Y. Here, the negated goal is a predefined built-in predicate, so things are even clearer: X and Y should be different. However, \+ X = Y fails, because X = Y succeeds. So no trace is left of the precise condition under which the goal failed.
Thus, \+ \+ X = Y produces an empty answer, and not the expected X = Y. See this answer for more.
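At the top level this looks as follows (standard behaviour of \+/1):

?- \+ X = Y.
false.          % X = Y succeeds, so the negation fails

?- \+ \+ X = Y.
true.           % succeeds, but the binding X = Y is not reported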
Given that such simple queries already show problems, you cannot expect too much of user defined goals such as yours.
In the general case, you would first have to reconsider what you actually mean by negation. The answer is much more complex than it seems at first glance. Think of the program p :- \+ p. Should p succeed or fail? Should p be true or not? There are actually two models here, which no longer fits into Prolog's view of going with the minimal model. Considerations like these opened new branches of Logic Programming, such as Answer Set Programming (ASP).
But let's stick to Prolog. Negation can only be used in very restricted contexts, such as when the goal is sufficiently instantiated and the definition is stratified. Unfortunately, there are no generally accepted criteria for the safe execution of a negated goal. We could wait until the goal is variable-free (ground), but this quite often means waiting way too long - in jargon: the negated goal flounders.
So effectively, general negation does not go very well together with pure Prolog programs. The heart of Prolog really is the pure, monotonic subset of the language. Within the constraint part of Prolog (or its respective extensions) negation might work quite well, though.
I might be misunderstanding the question, and I don't understand the last paragraph.
Anyway, there is a perfectly valid way of detecting which people are not orphans. In your example, you have forgotten to tell the computer something that you know, namely:
person(michael).
person(david).
% and a few more
person(anna).
person(emilia).
not_orphan(X) :- person(X), \+ orphan(X).
orphan(X) :- person(X), \+ parent(_, X).
parent(david, michael).
parent(anna, david).
?- orphan(X).
X = anna ;
X = emilia.
?- not_orphan(X).
X = michael ;
X = david ;
false.
I don't know how exactly you want to define an "orphan", as this definition is definitely a bit weird, but that's not the point.
In conclusion: you can't expect Prolog to know that michael and david and all the others are people unless you state it explicitly. You also need to state explicitly that orphan and not_orphan are relationships that apply only to people. The world you are modeling could also contain:
furniture(red_sofa).
furniture(kitchen_table).
abstract_concept(love).
emotion(disbelief).
and you need a way of leaving those out of your family affairs.
I hope that helps.