Prolog: redundant program points in failure-slice?

We are implementing diagnostic tools for explaining unexpected universal non-termination in pure, monotonic Prolog programs—based on the concept of the failure-slice.
As introduced in
the paper "Localizing and explaining reasons for nonterminating logic programs with failure slices", goals false/0 are added at a number of program points in an effort to reduce the program fragment sizes of explanation candidates (while still preserving non-termination).
So far, so good... So here comes my question1:
Why are there N+1 program points in a clause having N goals?
Or, more precisely:
How come that N points do not suffice? Do we ever need the (N+1)-th program point?
Couldn't we move that false to each use of the predicate of concern instead?
Also, we know that the program fragment is only used for queries like ?- G, false.
Footnote 1: We assume each fact foo(bar,baz). is regarded as a rule foo(bar,baz) :- true..

Why are there N+1 program points in a clause having N goals? How come that N points do not suffice?
In many examples, not all points are actually useful. The point after the head in a predicate with a single clause is such an example. But the program points are here to be used in any program.
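To make the setting concrete, here is a sketch (my own notation, in the spirit of the paper) of a clause with N = 2 goals and its N + 1 = 3 program points; in a failure slice each Pi stands for either true or false:

% original clause:  p :- q, r.
p :-
   P0,     % program point before the first goal
   q,
   P1,     % program point between the two goals
   r,
   P2.     % program point after the last goal: N + 1 = 3 points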
Let's try out some examples.
N = 0
A fact is a clause with zero goals. Now even a fact may or may not contribute to non-termination. As in:
?- p.
p :-
   q(1),
   p.

q(1).
q(2).
We do need a program point for each fact of q/1, even if it has no goal at all, since the minimal failure slice is:
?- p, false.
p :-
   q(1),
   p, false.

q(1).
q(2) :- false.
N = 1
p :-
   q,
   p.
p :-
   p.

q :-
   s.

s.
s :-
   s.
So here the question is: Do we need two program points in q/0? Well yes, there are different independent failure slices. Sometimes with false at the beginning, and sometimes at the end.
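For concreteness, here are two (not necessarily minimal) failure slices of the N = 1 program above, the way I read the argument: one needs the program point at the end of q/0's clause, the other the one at the beginning.

When the loop is in s/0, the false goes after s, since q/0 must actually call s:

?- p, false.
p :-
   q, false,
   p.
p :- false,
   p.
q :-
   s, false.
s :- false.
s :-
   s, false.

When the loop is p :- p, the false goes before s, since q/0 must fail finitely so that the second clause of p/0 is reached:

?- p, false.
p :-
   q, false,
   p.
p :-
   p, false.
q :- false,
   s.
s :- false.
s :- false,
   s.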
What is a bit confusing is that the first program point (that is the one in the query) is always true, and the last is always false. So one could remove them, but I think it is clearer to leave them, as a false at the end is what you have to enter into Prolog anyway. See the example in the Appendix. There, P0 = 1, P8 = 0 is hard coded.

Related

Join two lists in Prolog?

I am learning Prolog, and what I am doing is writing a predicate to join two lists. For example, if I query:
joinL([22,33,44],[1,2,3],L)
it should show L = [22,33,44,1,2,3].
To do it, I have tried to write the predicate as follows:
joinL([],L2,L2).
joinL([H|T],L2,L):-joinL(T,L2,L),L = [H|L].
But when I query
joinL([22,33,44],[1,2,3],L)
It does not show the desired result, as I have just described above. Actually, it returns false.
What I want to ask is: "How did my code become wrong?" I am NOT asking "How do I write a predicate that joins two lists in Prolog?", because I can google that easily; comparing those answers with my code, I am curious to know why mine is wrong. Can anyone help me? Thank you all for reading and answering my question!
The problem is that you are using the = in the same way as one would use assignment:
L = [H|L]
In a state-changing language this means that whatever is stored in L (which is supposed to be a list) becomes a new list, made by tacking H to the front: [H|L]
In Prolog this states that what we know about L is that it is equal to [H|L] - equal to itself with H tacked to the front. This is not possible for any finite L though (actually it is, if L is an infinite list containing only H, but the proof engine of Prolog is not good enough to deal with that). Prolog's proof search fails at that hurdle and will return "false" - there are no solutions to the logic program you have entered.
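To see this at the toplevel (my queries, run in SWI-Prolog; the first one mirrors what happens inside your clause once L has already been bound by the recursive call, and unify_with_occurs_check/2 is the standard occurs-check variant of =/2):

?- L = [1,2,3], L = [H|L].
false.

?- unify_with_occurs_check(L, [h|L]).
false.

A bare ?- L = [H|L]. will, on many systems, instead build the cyclic term alluded to parenthetically above.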
(More after a coffee)
Here is how to think about this:
Ok, so I would like to state some logic facts about the problem of "list concatenation" so that, based on those logic facts, and given two completely-specified lists L1, L2, Prolog's proof search can determine enough about what the concatenated list LJ should look like to actually output it completely!
We decide to specify a predicate joinL(list1,list2,joinedlist) to express this.
First, we cover a special edge case:
joinL([],L2,LJ) :- LJ = L2.
So, it is stated that the 'joinL' relationship between the empty list '[]', any list 'L2', and the joined list 'LJ' is such that 'LJ' is necessarily equal to 'L2'.
The logical reading is:
(LJ = L2) → joinL([],L2,LJ)
The operational reading is:
In order to prove joinL([],L2,LJ) you must prove LJ = L2 (which can either be verified if LJ and L2 are already known, or can be added to the solution's known constraints if not).
There is also the reading of SLD resolution, where you add the negation of joinL([],L2,LJ) to your set of logic facts, then try to prove ⊥ (the contradiction, also known as the empty clause) using resolution, but I have not found that view to be particularly helpful.
Anyway, let's state more things about the edge cases:
joinL([],L2,LJ) :- LJ = L2.
joinL(L1,[],LJ) :- LJ = L1.
joinL([],[],LJ) :- LJ = [].
This will already enable the Prolog proof engine to determine LJ completely whenever any of the L1 and L2 is the empty list.
One commonly abbreviates to:
joinL([],L,L).
joinL(L,[],L).
joinL([],[],[]).
(The above abbreviation would not be possible in Picat for example)
And the third statement can be dropped because the other two "subsume it" - they cover that case more generally. Thus:
joinL([],L,L).
joinL(L,[],L).
Now for the case of non-empty lists. A large part of logic programming is about inductive (or recursive) definitions of predicates, so let's go:
joinL([H|T],L2,LJ) :- LJ = [H|LX], joinL(T,L2,LX).
Again, this is just a specification, where we say that the concatenation of a nonempty list [H|T] and any list L2 is a list LJ such that LJ is composed of H and a list LX and LX is the concatenation of T and L2.
This is useful to the Prolog proof engine because it gives more information about LJ (in fact, it specifies what the first element of LJ is) and reduces the problem to finding out more using the same predicate but a problem that is a little nearer to the base case with the empty list: joinL(T,L2,LX). If the proof goes down that route it will eventually hit joinL([],L2,LX), find out that L2 = LX and be able to successfully return from its descent.
joinL([H|T],L2,LJ) :- LJ = [H|LX], joinL(T,L2,LX).
is commonly abbreviated to
joinL([H|T],L2,[H|LX]) :- joinL(T,L2,LX).
Looks like we have covered everything with:
joinL([],L,L).
joinL(L,[],L).
joinL([H|T],L2,[H|LX]) :- joinL(T,L2,LX).
We can even drop the second statement, as it is covered by the recursive descent with L2 always equal to '[]'. This gives us a shorter program, which however burns cycles needlessly when L2 is '[]':
joinL([],L,L).
joinL([H|T],L2,[H|LX]) :- joinL(T,L2,LX).
Let's test this. One should use unit tests but I can't be bothered now and will just run these in SWISH. Let's see what Prolog can find out about X:
joinL([],[],X). % X = []
joinL([1,2],[],X). % X = [1,2]
joinL([],[1,2],X). % X = [1,2]
joinL([3,4],[1,2],X). % X = [3,4,1,2]
joinL([1,2],[3,4],X). % X = [1,2,3,4]
One can constrain the result completely, transforming Prolog into a checker:
joinL([3,4],[1,2],[3,4,1,2]). % true
joinL([3,4],[1,2],[1,1,1,1]). % false
Sometimes the predicate works backwards too, but often more careful design is needed. Not here:
joinL([3,4],L2,[3,4,1,2]). % L2 = [1, 2]
For this one, Prolog suggests a second solution might exist but there is none of course:
joinL(L1,[3,4],[1,2,3,4]). % L1 = [1, 2]
Find me something impossible:
joinL(L1,[3,4],[1,2,100,100]). % false

Prolog and limitations of backtracking

This is probably the most trivial implementation of a function that returns the length of a list in Prolog
count([], 0).
count([_|B], T) :- count(B, U), T is U + 1.
one thing about Prolog that I still cannot wrap my head around is the flexibility of using variables as parameters.
So for example I can run count([a, b, c], 3). and get true. I can also run count([a, b], X). and get an answer X = 2.. Oddly (at least for me), I can also run count(X, 3). and get at least one result, which looks something like X = [_G4337877, _G4337880, _G4337883] ; before the interpreter disappears into an infinite loop. I can even run something truly "flexible" like count(X, A). and get X = [], A = 0 ; X = [_G4369400], A = 1., which is obviously incomplete but somehow really nice.
Therefore my multifaceted question. Can I somehow explain to Prolog not to look beyond first result when executing count(X, 3).? Can I somehow make Prolog generate any number of solutions for count(X, A).? Is there a limitation of what kind of solutions I can generate? What is it about this specific predicate, that prevents me from generating all solutions for all possible kinds of queries?
This is probably the most trivial implementation
Depends on the viewpoint: consider
count(L,C) :- length(L,C).
Shorter and functional. And this one also works for your use case.
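For comparison, this is how the length/2-based count/2 should behave on the queries from the question (checked in SWI-Prolog; variable names will differ):

?- count(X, 3).
X = [_A, _B, _C].

?- count(X, A).
X = [], A = 0 ;
X = [_A], A = 1 ;
X = [_A, _B], A = 2 ;
...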
edit
library CLP(FD) allows for
:- use_module(library(clpfd)).
count([], 0).
count([_|B], T) :- U #>= 0, T #= U + 1, count(B, U).
?- count(X,3).
X = [_G2327, _G2498, _G2669] ;
false.
(further) answering comments
It was clearly sarcasm
No, sorry for giving that impression. It was an attempt to give you a succinct answer to your question. Every detail of the implementation of length/2 - indeed much longer than your code - has been carefully weighed to give us a general and efficient building block.
There must be some general concept
I would call (full) Prolog such a general concept. From the very start, Prolog requires us to solve computational tasks by describing relations among predicate arguments. Once we have described our relations, we can query our 'knowledge database', and Prolog attempts to enumerate all answers, in a specific order.
High level concepts like unification and depth first search (backtracking) are keys in this model.
Now, I think you're looking for second-order constructs like var/1 that allow us to reason about our predicates. Such constructs cannot be written in (pure) Prolog, and a growing school of thought advises avoiding them, because they are rather difficult to use correctly. So I posted an alternative using CLP(FD), which effectively shields us in some situations. In this question's specific context, it actually gives us a simple and elegant solution.
I am not trying to re-implement length
Well, I'm aware of this, but since count/2 aliases length/2, why not study the reference model? (See the source on the SWI-Prolog site.)
The answer you get for the query count(X,3) is actually not odd at all. You are asking which lists have a length of 3. And you get a list with 3 elements. The infinite loop appears because the variables B and U in the first goal of your recursive rule are unbound. You don't have anything before that goal that could fail. So it is always possible to follow the recursion. In the version of CapelliC you have 2 goals in the second rule before the recursion that fail if the second argument is smaller than 1. Maybe it becomes clearer if you consider this slightly altered version:
:- use_module(library(clpfd)).
count([], 0).
count([_|B], T) :-
   T #> 0,
   U #= T - 1,
   count(B, U).
Your query
?- count(X,3).
will not match the first rule but the second one and continue recursively until the second argument is 0. At that point the first rule will match and yield the result:
X = [_A,_B,_C] ?
The head of the second rule will also match but its first goal will fail because T=0:
X = [_A,_B,_C] ? ;
no
In your version above, however, Prolog will try the recursive goal of the second rule because of the unbound variables B and U, and hence loop infinitely.
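As a contrast, here is a hypothetical is/2 variant (my own, not from the answers) that moves the arithmetic before the recursion but has no goal that can fail for negative values; it shows why the U #>= 0 (or T #> 0) guard is the essential ingredient:

count2([], 0).
count2([_|B], T) :-
   U is T - 1,      % arithmetic first, but nothing stops U from dropping below 0
   count2(B, U).

?- count2(X, 3).
X = [_A, _B, _C] ;
% ... the first answer is found, then the query loops: U just keeps decreasing.

?- count2(X, A).
X = [], A = 0 ;
% ... then an instantiation error, because is/2 needs A to be bound.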

Prolog: Check if X is in range of 0 to K - 1

I'm new to prolog and every single bit of code I write turns into an infinite loop.
I'm specifically trying to see if X is in the range from 0 to K - 1.
range(X,X).
range(X,K) :- K0 is K - 1, range(X,K0).
My idea behind the code is that I decrement K until K0 equals X, then the base case will kick in. I'm getting an infinite loop though, so something with the way I'm thinking is wrong.
Welcome to the wondrous world of Prolog! It seems you tried to leapfrog several steps when learning Prolog, and (not very surprisingly) failed.
Ideally, you take a book like The Art of Prolog and start with family relations. Then extend towards natural numbers using successor arithmetics, and only then go to (is)/2. Today (that is, since about 1996), there is an even better way than using (is)/2, which is library(clpfd) as found in SICStus or SWI.
So let's see how your program would have been, using successor-arithmetics. Maybe less_than_or_equal/2 would be a better name:
less_than_or_equal(N,N).
less_than_or_equal(N,s(M)) :-
   less_than_or_equal(N,M).
?- less_than_or_equal(N,s(s(0))).
N = s(s(0))
; N = s(0)
; N = 0.
It works right out of the box! No looping whatsoever. So what went wrong?
Successor arithmetics relies on the natural numbers. But you used integers, which also contain negative numbers. With negative numbers, the numbers are no longer well ordered (well founded, or Noetherian), and you directly experienced that consequence. So stick with the natural numbers! They are all natural, and do not contain any artificial negative ingredients. Whoever said "God made the integers, all else is the work of man." must have been wrong.
But now back to your program. Why does it not terminate? After all, you found an answer, so it is not completely wrong. Is it not? You tried to reapply the notions of control flow you learned in command oriented languages to Prolog. Well, Prolog has two relatively independent control flows, and many more surprising things like real variables (TM) that appear at runtime that have no direct counterpart in Java or C#. So this mapping did not work. I got a little bit suspicious when you called the fact a "base case". You probably meant that it is a "termination condition". But it is not.
So how can we easily understand termination in Prolog? The best way is to use a failure-slice. The idea is that we will try to make your program as small as possible by inserting false goals into your program. At any place. Certain of the resulting programs will still not terminate. And those are the most interesting, since they are a reason for non-termination of the original program! They are immediately, causally connected to your problem. And they are much better, for they are shorter. Which means less time to read. Here are some attempts; the parts after an inserted false are no longer relevant.
range(X,X).
range(X,K) :-
   K0 is K - 1, false,
   range(X,K0).
Nah, above doesn't loop, so it cannot tell us anything. Let's try again:
range(X,X) :- false.
range(X,K) :-
   K0 is K - 1,
   range(X,K0), false.
This one loops for range(X,1) already. In fact, it is the minimal failure slice. With a bit of experience you will learn to see those with no effort.
We have to change something in the visible part to make this terminate. For example, you might add K > 0, or do what @Shevliaskovic suggested.
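For concreteness, one such repair (a sketch only; it addresses termination and keeps the original base case, so K itself is still included in the answers):

range(X, X).
range(X, K) :-
   K > 0,           % now the visible part of the slice above can fail
   K0 is K - 1,
   range(X, K0).

?- range(X, 3).
X = 3 ;
X = 2 ;
X = 1 ;
X = 0 ;
false.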
I believe the simplest way to do this is:
range(X,X).
range(X,K) :- X>0, X<K-1.
and here are my results:
6 ?- range(4,4).
true .
7 ?- range(5,8).
true.
8 ?- range(5,4).
false.
The simple way, as has been pointed out, if you just want to validate that X lies within a specified domain would be to just check the condition:
range(X,K) :- X >= 0 , X < K .
Otherwise, if you want your range/2 to be generative, would be to use the built-in between/3:
range(X,K) :- integer(K) , K1 is K-1 , between(0,K1,X).
If your prolog doesn't have a between/3, it's a pretty simple implementation:
%
% The classic `between/3` wants the inclusive lower and upper bounds
% to be bound. So we'll make the tests once and use a helper predicate.
% (The helper is named between_/3 rather than _between/3, since a name
% starting with an underscore would be read as a variable.)
%
between(Lo,Hi,N) :-
   integer(Lo),
   integer(Hi),
   between_(Lo,Hi,N).

between_(Lo,Hi,Lo) :-  % unify the lower bound with the result
   Lo =< Hi.           % - if we haven't yet exceeded the inclusive upper bound
between_(Lo,Hi,N) :-   % otherwise...
   Lo < Hi,            % - if the lower bound is less than the inclusive upper bound
   L1 is Lo+1,         % - increment the lower bound
   between_(L1,Hi,N).  % - and recurse down
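A quick sanity check of the helper above (a sketch of the expected toplevel interaction; the hand-rolled version enumerates the same solutions as the builtin):

?- between(0, 3, X).
X = 0 ;
X = 1 ;
X = 2 ;
X = 3 ;
false.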

Prolog without if and else statements

I am currently trying to learn some basic prolog. As I learn I want to stay away from if else statements to really understand the language. I am having trouble doing this though. I have a simple function that looks like this:
if a > b then 1
else if a == b then c
else -1;;
This is just very simple logic that I want to convert into prolog.
So here where I get very confused. I want to first check if a > b and if so output 1. Would I simply just do:
sample(A,B,C,O):-
A > B, 1,
A < B, -1,
0.
This is what I came up with, O being the output, but I do not understand how to make the 1 the output. Any thoughts to help me better understand this?
After going at it some more I came up with this but it does not seem to be correct:
Greaterthan(A,B,1.0).
Lessthan(A,B,-1.0).
Equal(A,B,C).
Sample(A,B,C,What):-
Greaterthan(A,B,1.0),
Lessthan(A,B,-1.0),
Equal(A,B,C).
Am I headed down the correct track?
If you really want to try to understand the language, I recommend using CapelliC's first suggestion:
sample(A, B, _, 1) :- A > B.
sample(A, B, C, C) :- A == B.
sample(A, B, _, -1) :- A < B.
I disagree with CapelliC that you should use the if/then/else syntax, because that way (in my experience) it's easy to fall into the trap of translating the different constructs, ending up doing procedural programming in Prolog, without fully grokking the language itself.
TL;DR: Don't.
You are trying to translate constructs you know from other programming languages to Prolog, with the assumption that learning Prolog essentially means mapping one construct after the other into Prolog. After all, if all constructs have been mapped, you will be able to encode any program in Prolog.
However, by doing that you are missing the essence of Prolog altogether.
Prolog consists of a pure, monotonic core and some procedural adornments. If you want to understand what distinguishes Prolog so much from other programming languages you really should study its core first. And that means, you should ignore those other parts. You have only so much attention span, and if you waste your time with going through all of these non-monotonic, even procedural constructs, chances are that you will miss its essence.
So, why is a general if-then-else (as it has been proposed by several answers) such a problematic construct? There are several reasons:
In the general case, it breaks monotonicity. In pure monotonic Prolog programs, adding a new fact will increase the set of true statements you can derive from it. So everything that was true before adding the fact will be true thereafter. It is this property which permits one to reason very effectively over programs. However, note that monotonicity means that you cannot model every situation you might want to model. Think of a predicate childless/1 that should succeed if a person does not have a child (see the sketch below). And let's assume that childless(john). is true. Now, if you add a new fact about john being the parent of some child, it will no longer hold that childless(john) is true. So there are situations that inherently demand some non-monotonic constructs. But there are many situations that can be modeled in the monotonic part. Stick to those first.
if-then-else easily leads to hard-to-read nesting. Just look at your if-then-else-program and try to answer "When will the result be -1"? The answer is: "If neither a > b is true nor a == b is true". Lengthy, isn't it? So the people who will maintain, revise and debug your program will have to "pay".
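A minimal sketch of the childless/1 example (my own predicate and constant names; \+/1, negation as failure, is the non-monotonic construct involved):

parent_of(ann, bob).

childless(P) :-
   \+ parent_of(P, _).

?- childless(john).
true.

Adding the fact parent_of(john, mia). makes the very same query fail: a statement that used to be derivable is lost, which is exactly what non-monotonicity means.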
From your example it is not clear what arguments you are considering. Should you be happy with integers, consider using library(clpfd), as it is available in SICStus, SWI, and YAP:
sample(A,B,_,1) :- A #> B.
sample(A,B,C,C) :- A #= B.
sample(A,B,_,-1) :- A #< B.
This definition is now so general, you might even ask
When will -1 be returned?
?- sample(A,B,C,-1).
A = B, C = -1, B in inf..sup
; A#=<B+ -1.
So there are two possibilities.
Here are some addenda to CapelliC's helpful answer:
When starting out, it is sometimes easy to mistakenly conceive of Prolog predicates functionally. They are either not functions at all, or they are n-ary functions which only ever yield true or false as outputs. However, I often find it helpful to forget about functions and just think of predicates relationally. When we define a predicate p/n, we're describing a relation between n elements, and we've named the relation p.
In your case, it sounds like we're defining conditions on an ordered triplet, <A, B, C>, where the value of C depends upon the relation between A and B. There are three relevant relationships between A and B (here, since we are dealing with a simple case, these three are exhaustive for the kind of relationship in question), and we can simply describe what value C should have in the three cases.
sample(A, B, 1.0) :-
   A > B.
sample(A, B, -1.0) :-
   A < B.
sample(A, B, some_value) :-
   A =:= B.
Notice that I have used the arithmetical operator =:=/2. This is more specific than ==/2, and it lets us compare mathematical expressions for numerical equality. ==/2 checks for equivalence of terms: a == a, 2 == 2, 5+7 == 5+7 are all true, because equivalent terms stand on the left and right of the operator. But 5+7 == 7+5, 5+7 == 12, A == a are all false, since it is the terms themselves which are being compared and, in the first case, the arguments appear in a different order, in the second we're comparing +(5,7) with an integer, and in the third we're comparing a free variable with an atom. The following, however, are true: 2 =:= 2, 5 + 7 =:= 12, 2 + 2 =:= 4 + 0. This will let us unify A and B with evaluable mathematical expressions, rather than just integers or floats. We can then pose queries such as
?- sample(2^3, 2+2+2, X).
X = 1.0
?- sample(2*3, 2+2+2, X).
X = some_value.
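The ==/2 versus =:=/2 distinction itself is easy to check at the top level (standard behavior):

?- 5+7 == 5+7.
true.

?- 5+7 == 12.
false.

?- 5+7 =:= 12.
true.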
CapelliC points out that when we write multiple clauses for a predicate, we are expressing a disjunction. He is also careful to note that this particular example works as a plain disjunction only because the alternatives are by nature mutually exclusive. He shows how to get the same exclusivity entailed by the structure of your first "if ... then ... else if ... else ..." by intervening in the resolution procedure with cuts. In fact, if you consult the swi-prolog docs for the conditional ->/2, you'll see the semantics of ->/2 explained with cuts, !, and disjunctions, ;.
I come down midway between CapelliC and SQB in prescribing use of the control predicates. I think you are wise to stick with defining such things with separate clauses while you are still learning the basics of the syntax. However, ->/2 is just another predicate with some syntax sugar, so you oughtn't be afraid of it. Once you start thinking relationally instead of functionally or imperatively, you might find that ->/2 is a very nice tool for giving concise expression to patterns of relation. I would format my clause using the control predicates thus:
sample(A, B, Out) :-
   (  A > B   -> Out = 1.0
   ;  A =:= B -> Out = some_value
   ;  Out = -1.0
   ).
Your code has both syntactic and semantic issues.
Predicate names start lower case, and the comma represents conjunction. That is, you could read your clause as
sample(A,B,C,What) if
greaterthan(A,B,1.0) and lessthan(A,B,-1.0) and equal(A,B,C).
Then note that the What argument is useless, since it never gets a value - it's called a singleton.
A possible way of writing the disjunction (i.e. OR):
sample(A,B,_,1) :- A > B.
sample(A,B,C,C) :- A == B.
sample(A,B,_,-1) :- A < B.
Note the test A < B guarding the clause that yields -1. That's necessary because Prolog will try every clause if required. The basic construct to force Prolog to avoid some computation we know should not be done is the cut:
sample(A,B,_,1) :- A > B, !.
sample(A,B,C,C) :- A == B, !.
sample(A,B,_,-1).
Anyway, I think you should use the if/then/else syntax, even while learning.
sample(A,B,C,W) :- A > B -> W = 1 ; A == B -> W = C ; W = -1.
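A few quick checks of that clause (my own queries):

?- sample(3, 2, 99, W).
W = 1.

?- sample(2, 2, 99, W).
W = 99.

?- sample(1, 2, 99, W).
W = -1.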

Prolog successor notation yields incomplete result and infinite loop

I started to learn Prolog and first learnt about the successor notation.
And this is where I found out about writing Peano axioms in Prolog.
See page 12 of the PDF:
sum(0, M, M).
sum(s(N), M, s(K)) :-
   sum(N,M,K).

prod(0,M,0).
prod(s(N), M, P) :-
   prod(N,M,K),
   sum(K,M,P).
I put the multiplication rules into Prolog. Then I do the query:
?- prod(X,Y,s(s(s(s(s(s(0))))))).
Which basically means finding the factors of 6.
Here are the results.
X = s(0),
Y = s(s(s(s(s(s(0)))))) ? ;
X = s(s(0)),
Y = s(s(s(0))) ? ;
X = s(s(s(0))),
Y = s(s(0)) ? ;
infinite loop
This result has two problems:
Not all results are shown, note that the result X=6,Y=1 is missing.
It does not stop unless I Ctrl+C then choose abort.
So... my questions are:
WHY is that? I tried switching "prod" and "sum" around. The resulting code gives me all results. And again, WHY is that? It still dead-loops though.
HOW to resolve that?
I read the other answer on infinite loop. But I'd appreciate someone answer basing on this scenario. It greatly helps me.
If you want to study termination properties in depth, programs using successor-arithmetics are an ideal study object: You know a priori what they should describe, so you can concentrate on the more technical details. You will need to understand several notions.
Universal termination
The easiest way to explain it is to consider Goal, false. This terminates iff Goal terminates universally. That is: looking at tracers is the most ineffective way - they will show you only a single execution path, but you need to understand all of them at once! Also, never look at answers when you want universal termination; they will only distract you. You have seen it above: you got three neat and correct answers, and only then did your program loop. So better "turn off" answers with false. This removes all distraction.
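Applied to the program from the question, the recipe looks like this (a sketch; the loop you observed after the third answer now shows up without any distracting answers):

?- prod(X, Y, s(s(s(s(s(s(0))))))), false.
% loops: no answer is ever reported, so the original query does not terminate universally.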
Failure slice
The next notion you need is that of a failure slice. Take a pure monotonic logic program and throw in some goals false. If the resulting failure slice does not terminate (universally), the original program won't either. In your example, consider:
prod(0,M,0) :- false.
prod(s(N), M, P) :-
   prod(N,M,K), false,
   sum(K,M,P).
These false goals help to remove irrelevant adornments in your program: The remaining part shows you clearly, why prod(X,Y,s(s(s(s(s(s(0))))))). does not terminate. It does not terminate, because that fragment does not care about P at all! You are hoping that the third argument will help to make prod/3 terminate, but the fragment shows you it is all in vain, since P does not occur in any goal. No need for chatty tracers.
Often it is not so easy to find minimal failure slices. But once you found one, it is next to trivial to determine its termination or rather non-termination properties. After some time you can use your intuition to imagine a slice, and then you can use your reason to check if that slice is of relevance or not.
What is so remarkable about the notion of a failure slice is this: If you want to improve the program, you have to modify your program in the part visible in above fragment! As long as you do not change it, the problem will persist. A failure slice is thus a very relevant part of your program.
Termination inference
That is the final thing you need: A termination inferencer (or analyzer) like cTI will help you to identify the termination condition rapidly. Look at the inferred termination conditions of prod/3 and the improved prod2/3 here!
Edit: And since this was a homework question I have not posted the final solution. But to make it clear, here are the termination conditions obtained so far:
prod(A,B,C) terminates_if b(A),b(B).
prod2(A,B,C) terminates_if b(A),b(B);b(A),b(C).
So the new prod2/3 is strictly better than the original program!
Now, it is up to you to find the final program. Its termination condition is:
prod3(A,B,C) terminates_if b(A),b(B);b(C).
To start with, try to find the failure slice for prod2(A,B,s(s(s(s(s(s(0)))))))! We expect it to terminate, but it still does not. So take the program and manually add false goals! The remaining part will show you the key!
As a final hint: You need to add one extra goal and one fact.
Edit: Upon request, here is the failure slice for prod2(A,B,s(s(s(s(s(s(0))))))):
prod2(0,_,0) :- false.
prod2(s(N), M, P) :-
   sum(M, K, P),
   prod2(N,M,K), false.

sum(0, M, M).
sum(s(N), M, s(K)) :- false,
   sum(N,M,K).
Please note the significantly simplified definition of sum/3. It only says: 0 plus anything is anything. No more. As a consequence, even the more specialized prod2(A,0,s(s(s(s(s(s(0))))))) will loop, while prod2(0,X,Y) elegantly terminates ...
The first question (WHY) is fairly easy to spot, especially if you know about left recursion. sum(A,B,C) binds A and B when C is bound, but the original program prod(A,B,C) doesn't use those bindings, and instead recurses with A and B still unbound.
If we swap sum and prod, we get two useful bindings from sum for the recursive call:
sum(M, K, P)
Now M is bound, and will be used to terminate the left-recursion. We can swap N and M, because we know that product is commutative.
sum(0, M, M).
sum(s(N), M, s(K)) :-
   sum(N, M, K).

prod3(0, _, 0).
prod3(s(N), M, P) :-
   sum(M, K, P),
   prod3(M, N, K).
Note that if we swap M,K (i.e. sum(K,M,P)), when prod3 is called with P unknown we again have a non terminating loop, but in sum.
?- prod3(X,Y,s(s(s(s(s(s(0))))))).
X = s(s(s(s(s(s(0)))))),
Y = s(0) ;
X = s(s(s(0))),
Y = s(s(0)) ;
X = s(s(0)),
Y = s(s(s(0))) ;
X = s(0),
Y = s(s(s(s(s(s(0)))))) ;
false.
OT: I'm perplexed by the cTI report: prod3(A,B,C) terminates_if b(A),b(B);b(A),b(C).
