I'm currently learning SWI-Prolog. I want to implement a function factorable(X) which is true if X can be written as X = n*b.
This is what I've gotten so far:
isTeiler(X,Y) :- Y mod X =:= 0.
hatTeiler(X,X) :- fail,!.
hatTeiler(X,Y) :- isTeiler(Y,X), !; Z is Y+1, hatTeiler(X,Z),!.
factorable(X) :- hatTeiler(X,2).
My problem is now that I don't understand how to end the recursion with a fail without backtracking. I thought the cut would do the job, but after hatTeiler fails when both arguments are equal, it jumps right to isTeiler, which is of course true if both arguments are equal. I also tried using \+ but without success.
It looks like you are adding cuts to end a recursion, but this is usually done by making rule heads more specific or by adding guards to a clause.
E.g. a rule:
x_y_sum(X,succ(Y,1),succ(Z,1)) :-
x_y_sum(X,Y,Z).
will never match a goal x_y_sum(X,0,Y). The recursion simply ends in that case.
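For completeness, a small sketch of my own (reusing the succ notation above) showing how the recursion ends with a plain base clause instead of a cut:

x_y_sum(X, 0, X).                    % base case: adding 0 to X gives X
x_y_sum(X, succ(Y,1), succ(Z,1)) :-
    x_y_sum(X, Y, Z).

A query such as ?- x_y_sum(a, succ(succ(0,1),1), S). simply stops once the second argument has been reduced to 0.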
Alternatively, a guard will prevent the application of a rule for invalid cases.
hatTeiler(X,X) :- fail,!.
I assume this rule should prevent the rule below from matching when the arguments are equal. It is much easier to just add the inequality of X and Y as a condition:
hatTeiler(X,Y) :-
    Y > X,
    isTeiler(Y,X),
    !
    ;
    Y > X,
    Z is Y+1,
    hatTeiler(X,Z),
    !.
Then hatTeiler(5,5) fails automatically. (*)
You also use the disjunction operator ; which is much better written as two clauses (I drop the cuts, or not all possibilities will be explored):
hatTeiler(X,Y) :- % (1)
Y > X,
isTeiler(Y,X).
hatTeiler(X,Y) :- % (2)
Y > X,
Z is Y+1,
hatTeiler(X,Z).
Now we can read the rules declaratively:
(1) if Y is larger than X and X divides Y without remainder, hatTeiler(X,Y) is true.
(2) if Y is larger than X and (roughly speaking) hatTeiler(X,Y+1) is true, then hatTeiler(X, Y) is also true.
Rule (1) sounds good, but (2) sounds fishy: for specific X and Y we get e.g.: hatTeiler(4,15) is true when hatTeiler(4,16) is true. If I understand correctly, this problem is about divisors so I would not expect this property to hold. Moreover, the backwards reasoning of prolog will then try to deduce hatTeiler(4,17), hatTeiler(4,18), etc. which leads to non-termination. I guess you want the cut to stop the recursion but it looks like you need a different property.
Coming back to the original property, you want to check whether X = N * B for some N and B. We know that 2 <= N <= X and X mod N = 0. For the first condition there is even a built-in called between/3, which makes the whole thing a two-liner:
hT(X,B) :-
between(2, X, B),
0 is (X mod B).
?- hT(12,X).
X = 2 ;
X = 3 ;
X = 4 ;
X = 6 ;
X = 12.
Now you only need to write your own between/3 and you're done, all without cuts.
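In case you do want to roll it yourself, here is one possible sketch of my own (my_between/3 is a made-up name, not part of the original answer):

my_between(Low, High, Low) :-
    Low =< High.
my_between(Low, High, X) :-
    Low < High,
    Low1 is Low + 1,
    my_between(Low1, High, X).

Replacing between(2, X, B) by my_between(2, X, B) in hT/2 gives the same answers, still without any cuts.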
(*) The more general hatTeiler(X,X) fails because is/2 (and </2) only work when the right-hand side (for </2, both sides) is variable-free and contains only arithmetic terms (i.e. numbers, +, -, etc.).
If you put the cut before the fail, it will freeze the backtracking.
The cut freezes backtracking once Prolog has crossed it: when a later goal fails, Prolog backtracks only as far as the most recent cut.
For example:
a :- b,
     c, !,
     d,
     e, !,
     f.
Here, if b or c fails, backtracking is not frozen yet.
If d or f fails, backtracking freezes immediately, because a cut stands right before it, so the whole call to a fails.
If e fails, Prolog can still backtrack into d (but no further back than the first cut).
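To make this concrete, here is a small demonstration of my own (the predicates b/1, d/1 and a/1 are made up for illustration):

b(1).
b(2).
d(2).

a(X) :- b(X), !, d(X).

?- a(X).
false.

b(X) first gives X = 1, the cut commits to that choice, and d(1) fails; Prolog cannot go back to try b(2), so the whole call fails even though X = 2 would satisfy both goals.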
I hope this is useful.
A paper I'm reading says the following:
Plaisted [3] showed that it is possible to write formally correct
PROLOG programs using first-order predicate-calculus semantics and yet
derive nonsense results such as 3 < 2.
It is referring to the fact that Prologs didn't use the occurs check back then (the 1980s).
Unfortunately, the paper it cites is behind a paywall. I'd still like to see an example such as this. Intuitively, it feels like the omission of the occurs check just expands the universe of structures to include circular ones (but this intuition must be wrong, according to the author).
I hope this example isn't
smaller(3, 2) :- X = f(X).
That would be disappointing.
Here is the example from the paper in modern syntax:
three_less_than_two :-
less_than(s(X), X).
less_than(X, s(X)).
Indeed we get:
?- three_less_than_two.
true.
Because:
?- less_than(s(X), X).
X = s(s(X)).
Specifically, this explains the choice of 3 and 2 in the query: Given X = s(s(X)) the value of s(X) is "three-ish" (it contains three occurrences of s if you don't unfold the inner X), while X itself is "two-ish".
Enabling the occurs check gets us back to logical behavior:
?- set_prolog_flag(occurs_check, true).
true.
?- three_less_than_two.
false.
?- less_than(s(X), X).
false.
So this is indeed along the lines of
arbitrary_statement :-
arbitrary_unification_without_occurs_check.
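A minimal illustration of that schema (my addition; occurs_check is the same SWI-Prolog flag used above):

?- X = f(X).
X = f(X).

?- unify_with_occurs_check(X, f(X)).
false.

?- set_prolog_flag(occurs_check, true), X = f(X).
false.

Plain unification happily builds the cyclic term, while the occurs check rejects it.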
I believe the relevant part of the paper is the example discussed below (no paywall restricted me from viewing it via Google Scholar; you could try accessing it that way).
Ok, how does the given example work?
If I write it down:
sm(s(s(s(z))),s(s(z))) :- sm(s(X),X). % 3 < 2 :- s(X) < X
sm(X,s(X)). % forall X: X < s(X)
Query:
?- sm(s(s(s(z))),s(s(z))).
That's an infinite loop!
Turn it around:
sm(X,s(X)). % forall X: X < s(X)
sm(s(s(s(z))),s(s(z))) :- sm(s(X),X). % 3 < 2 :- s(X) < X
?- sm(s(s(s(z))),s(s(z))).
true ;
true ;
true ;
true ;
true ;
true
The deep problem is that X should be a Peano number. Once it is cyclic, one is no longer in Peano arithmetic. One would have to add a \+ cyclic_term(X) guard in there. (Maybe later, my mind is full now.)
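For illustration, here is a sketch of that guard applied to the less_than/2 fact from the answer above (my own addition; cyclic_term/1 is available in SWI-Prolog):

less_than(X, s(X)) :- \+ cyclic_term(X).

?- less_than(s(X), X).
false.

Head unification would have to bind X to the cyclic term s(s(X)); the guard rejects that, so the bogus proof of 3 < 2 disappears.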
I have understood the theory part of recursion and I have seen exercises, but I get confused. I've tried to solve some; some I understand and some I don't. This exercise is confusing me and I can't understand why, so I use comments to show you my weak points. I should have power(X,N,P) so that P = X^N.
Some examples:
?- power(3,5,X).
X = 243
?- power(4,3,X).
X = 64
?- power(2,4,X).
X = 16
The solution of this exercise is: (See comments too)
power(X,0,1). % I know how recursion works, but why those numbers 0 and 1?
power(X,1,X). % X,1,X: I can't get it.
power(X,N,P) :- % X,N,P if only
N1 is N-1, % N1=N-1 ..ok i understand
power(X,N1,P1), % P1 is used to reach the P
P is P1*X. % P = P1*X
Here is what I know about recursion, using a different example of my own:
related(X, Y) :-
parent(X, Z),
related(Z, Y).
Comparing my example with the exercise, here is what I think; please help me out, because it is very confusing.
related(X, Y) :- is similar to power(X,N,P) :-, the second goal of my example, parent(X, Z), is similar to N1 is N-1, and the third goal, related(Z, Y), is similar to power(X,N1,P1) and P is P1*X.
Let's go over the definition of the predicate step by step. First you have the fact...
power(X,0,1).
... that states: The 0th power of any X is 1. Then there is the fact...
power(X,1,X).
... that states: The 1st power of any X is X itself. Finally, you have a recursive rule that reads:
power(X,N,P) :- % P is the Nth power of X if
N1 is N-1, % N1 = N-1 and
power(X,N1,P1), % P1 is the N1th power of X and
P is P1*X. % P = P1*X
Possibly your confusion is due to the two base cases that are expressed by the two facts (one of those is actually superfluous). Let's consider the following queries:
?- power(5,0,X).
X = 1 ;
ERROR: Out of local stack
The answer 1 is certainly what we expect, but then the predicate loops until it runs out of stack. That's certainly not desirable. And this query...
?- power(5,1,X).
X = 5 ;
X = 5 ;
ERROR: Out of local stack
... yields the correct answer twice before running out of stack. The reason for the redundant answer is that the recursive rule can reduce any given N to zero and to one thus yielding the same answer twice. If you look at the structure of your recursive rule, it is obvious that the first base case is sufficient, so let's remove the second. The reason for looping out of stack is that, after N becomes zero, the recursive rule will search for other solutions (for N=-1, N=-2, N=-3,...) that do not exist. To avoid that, you can add a goal that prevents the recursive rule from further search, if N is equal to or smaller than zero. That leaves you with following definition:
power(X,0,1). % the 0th power of any X is 1
power(X,N,P) :- % P is the Nth power of X if
N > 0, % N > 0 and
N1 is N-1, % N1 = N-1 and
power(X,N1,P1), % P1 is the N1th power of X and
P is P1*X. % P = P1*X
Now the predicate works as expected:
?- power(5,0,X).
X = 1 ;
false.
?- power(5,1,X).
X = 5 ;
false.
?- power(5,3,X).
X = 125 ;
false.
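As an aside (my own addition, not part of the original answer), the same relation can be computed with an accumulator, multiplying on the way down instead of on the way back up:

power_acc(X, N, P) :-
    N >= 0,
    power_acc(X, N, 1, P).

power_acc(_, 0, Acc, Acc).
power_acc(X, N, Acc, P) :-
    N > 0,
    Acc1 is Acc * X,
    N1 is N - 1,
    power_acc(X, N1, Acc1, P).

?- power_acc(5,3,P). still yields P = 125.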
I hope this alleviates some of your confusion.
I want to write a predicate that determines if a number is prime or not. I am doing this by a brute force O(sqrt(n)) algorithm:
1) If number is 2, return true and do not check any more predicates.
2) If the number is even, return false and do no more checking predicates.
3) If the number is not even, check the divisors of the number up to the square root. Note that
we need only to check the odd divisors starting at 3 since if we get to this part of
the program the number is not even. Evens were eliminated in step 2.
4) If we find an even divisor, return false and do not check anything else.
5) If the divisor we are checking is larger than the square root of the number,
return true, we found no divisors. Do no more predicate checking.
Here is the code I have:
oddp(N) :- M is N mod 2, M = 1.
evenp(N) :- not(oddp(N)).
prime(2) :- !.
prime(X) :- X < 2, write_ln('case 1'), false, !.
prime(X) :- evenp(X), write_ln('case 2'), false, !.
prime(X) :- not(evenp(X)), write_ln('calling helper'),
prime_helper(X,3).
prime_helper(X, Divisor) :- K is X mod Divisor, K = 0,
write_ln('case 3'), false, !.
prime_helper(X, Divisor) :- Divisor > sqrt(X),
write_ln('case 4'), !.
prime_helper(X, Divisor) :- write_ln('case 5'),
Temp is Divisor + 2, prime_helper(X,Temp).
I am running into problems though. For example, if I query prime(1)., the program still checks the divisors. I thought that adding '!' would make the program stop checking once the conditions of an earlier clause were true. Can someone tell me why the program is doing this? Keep in mind I am new at this and I know the code can be simplified. However, any tips would be appreciated!
@Paulo cited the key issues with the program that cause it to behave improperly, along with a couple of good tips. I'll add a few more tips on this particular program.
When writing a predicate, the focus should be on what's true. If your
predicate properly defines successful cases, then you don't need to explicitly
define the failure cases since they'll fail by default. This means your statements #2 and #4 don't need to be specifically defined as clauses.
You're using a lot of cuts, which is usually a sign that your program isn't defined efficiently or properly.
When writing the predicates, it's helpful to first state the purpose in logical language form (which you have done in your statements 1 through 5, but I'll rephrase here):
A number is prime if it is 2 (your statement #1), or if it is odd and it is not divisible by an odd divisor 3 or higher (your statement #3). If we write this out in Prolog, we get:
prime(X) :- % X is prime if...
oddp(X), % X is odd, AND
no_odd_divisors(X). % X has no odd divisors
prime(2). % 2 is prime
A number X is odd if X modulo 2 evaluates to 1.
oddp(X) :- X mod 2 =:= 1. % X is odd if X modulo 2 evaluates to 1
Note that rather than create a helper which essentially fails when I want success, I'm going to create a helper which succeeds when I want it to. no_odd_divisors will succeed if X doesn't have any odd divisors >= 3.
A number X has no odd divisors if it is not divisible by 3, and if it's not divisible by any number 3+2k up to sqrt(X) (your statement #5).
no_odd_divisors(X) :- % X has no odd divisors if...
no_odd_divisors(X, 3). % X has no odd divisors 3 or above
no_odd_divisors(X, D) :- % X has no odd divisors D or above if...
D > sqrt(X), !. % D is greater than sqrt(X)
no_odd_divisors(X, D) :- % X has no odd divisors D or above if...
X mod D =\= 0, % X is not divisible by D, AND
D1 is D + 2, % X has no odd divisors D+2 or above
no_odd_divisors(X, D1).
Note the one cut above. This indicates that when we reach more than sqrt(X), we've made the final decision and we don't need to backtrack to other options for "no odd divisor" (corresponding to "Do no more predicate checking" in your statement #5).
This will yield the following behavior:
| ?- prime(2).
yes
| ?- prime(3).
(1 ms) yes
| ?- prime(6).
(1 ms) no
| ?- prime(7).
yes
| ?-
Note that I did define the prime(2) clause second above. In this case, a query prime(2) will first fail against the prime(X) clause with X = 2, then succeed on the prime(2) fact with nowhere else to backtrack. If I had defined prime(2) first, as your first statement (If number is 2, return true and do not check any more predicates.) indicates:
prime(2). % 2 is prime
prime(X) :- % X is prime if...
oddp(X), % X is odd, AND
no_odd_divisors(X). % X has no odd divisors
Then you'd see:
| ?- prime(2).
true ? a
no
| ?-
This would be perfectly valid since Prolog first succeeded on prime(2), then knew there was another clause to backtrack to in an effort to find other ways to make prime(2) succeed. It then fails on that second attempt and returns "no". That "no" sometimes confuses Prolog newcomers. You could also prevent the backtrack on the prime(2) case, regardless of clause order, by defining the clause as:
prime(2) :- !.
Which method you choose depends ultimately on the purpose of your predicate relations. The danger in using cuts is that you might unintentionally prevent alternate solutions you may actually want. So it should be used very thoughtfully and not as a quick patch to reduce outputs.
There are several issues with your program:
Writing a cut, !/0, after a call to false/0 is useless, as the cut will never be reached. Try exchanging the order of these two calls.
The first clause can be simplified to oddp(N) :- N mod 2 =:= 1. You can also apply this simplification in other clauses.
The predicate not/1 is better considered deprecated. Write instead evenp(N) :- \+ oddp(N). The (\+)/1 control construct is the standard way to express negation as failure.
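Putting those three tips together, the first few clauses might look like this (only a sketch of the corrections, not a full rewrite; the remaining clauses would change analogously):

oddp(N)  :- N mod 2 =:= 1.
evenp(N) :- \+ oddp(N).

prime(2) :- !.
prime(X) :- X < 2, !, false.
prime(X) :- evenp(X), !, false.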
To grok green cuts in Prolog I am trying to add them to the standard definition of sum in successor arithmetics (see predicate plus in What's the SLD tree for this query?). The idea is to "clean up" the output as much as possible by eliminating all useless backtracks (i.e., no ... ; false) while keeping identical behavior under all possible combinations of argument instantiations - all instantiated, one/two/three completely uninstantiated, and all variations including partially instantiated args.
This is what I was able to do while trying to come as close as possible to this ideal (I acknowledge false's answer to how to insert green cuts into append/3 as a source):
natural_number(0).
natural_number(s(X)) :- natural_number(X).
plus(X, Y, X) :- (Y == 0 -> ! ; Y = 0), (X == 0 -> ! ; true), natural_number(X).
plus(X, s(Y), s(Z)) :- plus(X, Y, Z).
Under SWI this seems to work fine for all queries except those with the shape ?- plus(+X, -Y, +Z)., using SWI's notation for predicate descriptions. For instance, ?- plus(s(s(0)), Y, s(s(s(0)))). yields Y = s(0) ; false. My questions are:
How do we prove that the above cuts are (or are not) green?
Can we do better than the above program and eliminate also the last backtrack by adding some other green cuts?
If yes, how?
First a minor issue: the common definition of plus/3 has the first and second arguments exchanged, which allows one to exploit first-argument indexing. See Program 3.3 of The Art of Prolog. That should also be changed in your previous post. I will call your exchanged definition plusp/3 and your optimized definition pluspo/3. Thus, given
plusp(X, 0, X) :- natural_number(X).
plusp(X, s(Y), s(Z)) :- plusp(X, Y, Z).
Detecting red cuts (question one)
How to prove or disprove red/green cuts? First of all, watch for "write"-unifications in the guard. That is, for any such unifications prior to the cut. In your optimized program:
pluspo(X, Y, X) :- (Y == 0 -> ! ; Y = 0), (X == 0 -> ! ; true), ...
I spot the following:
pluspo(X, Y, X) :- (...... -> ! ; ... ), ...
So, let us construct a counterexample: To make this cut cut in a red manner, the "write unification" must make its actual guard Y == 0 true. This means that the goal to construct must contain the constant 0 somehow. There are only two possibilities to consider: the first or the third argument. A zero in the last argument means that we have at most one solution, thus no possibility to cut away further solutions. So, the 0 has to be in the first argument! (The second argument must not be 0 right from the beginning, or the "write unification" would not have a detrimental effect.) Here is one such counterexample:
?- pluspo(0, Y, Y).
which gives one correct solution Y = 0, but hides all the other ones! So here we have such an evil red cut!
And contrast it to the unoptimized program which gave infinitely many solutions:
Y = 0
; Y = s(0)
; Y = s(s(0))
; Y = s(s(s(0)))
; ... .
So, your program is incomplete, and any questions about further optimizing it are thus not relevant. But we can do better, let me restate the actual definition we want to optimize:
plus(0, X, X) :- natural_number(X).
plus(s(X), Y, s(Z)) :- plus(X, Y, Z).
In practically all Prolog systems, there is first-argument indexing, which makes the following query determinate:
?- plus(s(0),0,X).
X = s(0).
But many systems do not support (full) third argument indexing. Thus we get in SWI, YAP, SICStus:
?- plus(X, Y, 0).
X = Y, Y = 0
; false.
What you probably wanted to write is:
pluso(X, Y, Z) :-
% Part one: green cuts
( X == 0 -> ! % first-argument indexing
; Z == 0 -> ! % 3rd-argument indexing, e.g. Jekejeke, ECLiPSe
; true
),
% Part two: the original unifications
X = 0,
Y = Z,
natural_number(Z).
pluso(s(X), Y, s(Z)) :- pluso(X, Y, Z).
Note the differences to pluspo/3: There are now only tests prior to the cut! All unifications are thereafter.
?- pluso(X, Y, 0).
X = Y, Y = 0.
The optimizations so far relied only on investigating the heads of the two clauses. They did not take into account the recursive rule. As such, they can be incorporated into a Prolog compiler without any global analysis. In O'Keefe's terminology, these green cuts might be considered blue cuts. To cite The Craft of Prolog, 3.12:
Blue cuts are there to alert the Prolog system to a determinacy it should have noticed but wouldn't. Blue cuts do not change the visible behavior of the program: all they do is make it feasible.
Green cuts are there to prune away attempted proofs that would succeed or be irrelevant, or would be bound to fail, but you would not expect the Prolog system to be able to tell that.
However, the very point is that these cuts do need some guards to work properly.
Now, you considered another query:
?- pluso(X, s(s(0)), s(s(s(0)))).
X = s(0)
; false.
or to put a simpler case:
?- pluso(X, s(0), s(0)).
X = 0
; false.
Here, both heads apply, thus the system is not able to detect the determinism. However, we know that there is no solution to a goal plus(X, s^n, s^m) with n > m. So by considering the model of plus/3 we can further avoid choicepoints. I'll be right back after this break:
Better use call_semidet/1!
It gets more and more complex, and chances are that the optimizations might easily introduce new errors into a program. Also, optimized programs are a nightmare to maintain. For practical programming purposes, rather use call_semidet/1. It is safe, and will produce a clean error should your assumptions turn out to be false.
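call_semidet/1 is not a standard built-in; one possible definition is the sketch below, which assumes SWI-Prolog's call_nth/2 from library(solution_sequences). It throws if a goal you believed to be semi-deterministic turns out to have a second answer, and otherwise behaves like once/1:

:- use_module(library(solution_sequences)).  % provides call_nth/2 in SWI

call_semidet(Goal) :-
    (   call_nth(Goal, 2)
    ->  throw(error(mode_error(semidet, Goal), _))
    ;   once(Goal)
    ).

For example, ?- call_semidet(pluso(X, s(0), s(0))). gives X = 0 with no leftover choicepoint.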
Back to business: Here is a further optimization. If Y and Z are identical, the second clause cannot apply:
pluso2(X, Y, Z) :-
% Part one: green cuts
( X == 0 -> ! % first-argument indexing
; Z == 0 -> ! % 3rd-argument indexing, e.g. Jekejeke, ECLiPSe
; Y == Z, ground(Z) -> !
; true
),
% Part two: the original unifications
X = 0,
Y = Z,
natural_number(Z).
pluso2(s(X), Y, s(Z)) :- pluso2(X, Y, Z).
I (currently) believe that pluso2/3 is the optimal usage of green/blue cuts w.r.t. leftover choicepoints. You asked for a proof. Well, I think that is well beyond an SO answer...
The goal ground(Z) is necessary to preserve the non-termination properties: the goal plus(s(_), Z, Z) does not terminate, and that property is preserved by ground(Z). Maybe you think it is a good idea to remove infinite failure branches too? In my experience, this is rather problematic, in particular if those branches are removed automatically. While at first sight it seems to be a good idea, it makes program development much more brittle: an otherwise benign program change might now disable the optimization and thus "cause" non-termination. But anyway, here we go:
Beyond simple green cuts
pluso3(X, Y, Z) :-
% Part one: green cuts
( X == 0 -> ! % first-argument indexing
; Z == 0 -> ! % 3rd-argument indexing, e.g. Jekejeke, ECLiPSe
; Y == Z -> !
; var(Z), nonvar(Y), \+ unify_with_occurs_check(Z, Y) -> !, fail
; var(Z), nonvar(X), \+ unify_with_occurs_check(Z, X) -> !, fail
; true
),
% Part two: the original unifications
X = 0,
Y = Z,
natural_number(Z).
pluso3(s(X), Y, s(Z)) :- pluso3(X, Y, Z).
Can you find a case where pluso3/3 does not terminate while there are finitely many answers?
I just started programming with Prolog and I'm having a few issues. The predicate I have is supposed to take a value X and copy it N times into M. Instead, it returns a list of N memory locations. Here's the code; any ideas?
duple(N,_,M):- length(M,Q), N is Q.
duple(N,X,M):- append(X,M,Q), duple(N,X,Q).
Those are not memory addresses; they are free variables. What you see are their internal names in your Prolog system of choice. Then, as @chac pointed out (+1 btw), the second clause does not really make sense! Maybe you can tell us what you meant so that we can shed light on how to do it correctly.
I'm going to give you two implementations of your predicate to try to show you correct Prolog syntax:
duple1(N, X, L) :-
length(L, N),
maplist(=(X), L).
Here, in the duple1/3 predicate, we tell Prolog that the length of the resulting list L is N, and then we tell it that each element of L should be unified with X for the predicate to hold.
Another way to do that would be to build the resulting list "manually" through recursion:
duple2(0, _X, []).
duple2(N, X, [X|L]) :-
N > 0,
NewN is N - 1,
duple2(NewN, X, L).
Though, note that because we use >/2, is/2 and -/2, i.e. arithmetic, we prevent Prolog from using this predicate in several modes. For example, ?- duple2(X, Y, [xyz, xyz]). raises an instantiation error, whereas the same query worked with our first predicate:
?- duple1(X, Y, [xyz, xyz]).
X = 2,
Y = xyz.
Hope this was of some help.
I suppose you call your predicate, for instance, in this way:
?- duple(3,xyz,L).
and you get
L = [_G289, _G292, _G295] ;
ERROR: Out of global stack
If you try
?- length(X,Y).
X = [],
Y = 0 ;
X = [_G299],
Y = 1 ;
X = [_G299, _G302],
Y = 2 ;
X = [_G299, _G302, _G305],
Y = 3 ;
X = [_G299, _G302, _G305, _G308],
Y = 4 .
...
you can see what's happening:
your query will match the specified length, displaying a list of N uninstantiated variables (the "memory locations" you saw), then continue backtracking, generating ever longer lists until it runs out of stack space. Your second rule will never fire (and I don't really understand its purpose).
A generator is easier to write in this way:
duple(N,X,M) :- findall(X,between(1,N,_),M).
test:
?- duple(3,xyz,L).
L = [xyz, xyz, xyz].