Labeling in Prolog constraint programming

I am new to Prolog and currently working on a simple constraint programming problem. I have four real numbers A, B, C, D with the property that
A + B + C + D = A * B * C * D = 7.11
Since it is easier to work with integers, I tried the following implementation:
:- use_module(library(clpfd)).
grocery(Vars) :-
    Vars = [A,B,C,D],
    X #= 100 * A,
    Y #= 100 * B,
    Z #= 100 * C,
    W #= 100 * D,
    X+Y+Z+W #= 711,
    X*Y*Z*W #= 71100000000.
Since the above only gives me a partially solved answer, I tried adding the goal label(Vars) at the end. But this causes my call grocery(V) to produce
ERROR: Arguments are not sufficiently instantiated.
Meanwhile, grocery([V]) just gives me false. Can anyone enlighten me on how to do the labeling? Thanks
Edit: I had not included the call to library(clpfd) earlier on.

You are facing two issues that I would like to discuss separately:
Instantiation error
As you mentioned, we get:
?- grocery(Vs), label(Vs).
ERROR: Arguments are not sufficiently instantiated
Labeling requires that the variables that are to be labeled all have finite domains. In your case, label/1 throws an instantiation-error because the domains of some variables are still infinite:
?- grocery([A,B,C,D]).
A in inf.. -1\/1..sup,
100*A#=_9006,
_9006 in inf.. -100\/100..sup,
_9006+_9084+_9078+_9072#=711,
_9006*_9084#=_9102,
_9084 in inf.. -100\/100..sup,
100*B#=_9084,
_9102 in inf.. -1\/1..sup,
_9102*_9078#=_9222,
_9078 in inf.. -100\/100..sup,
100*C#=_9078,
C in inf.. -1\/1..sup,
_9222 in -71100000000.. -1\/1..71100000000,
_9222*_9072#=71100000000,
_9072 in -71100000000.. -100\/100..71100000000,
100*D#=_9072,
D in -711000000.. -1\/1..711000000,
B in inf.. -1\/1..sup.
The only chance to correct this is to create a suitable specialization of the program, where the variables end up with finite domains. Writing [Vs] instead of Vs is obviously no solution:
?- grocery(Vs), label([Vs]).
ERROR: Type error: `integer' expected, found `[_8206,_9038,_8670,_8930]' (a list)
This is because label/1 requires its argument to be a list of finite domain variables, not a list of lists.
An example of a suitable specialization could be:
?- grocery(Vs), Vs ins 0..sup, label(Vs).
false.
The resulting program has no solution, but at least we know that it definitely has no solution, because there is no more instantiation error.
No solution
We have thus arrived at a second, rather independent question: Why does this resulting program have no solution?
A major advantage of using a logic programming language like Prolog is that it enables the application of declarative debugging approaches as for example shown in GUPU.
Like in GUPU, use the following definition to generalize away goals:
:- op(950,fy, *).
*_.
For example, we can generalize away the last goal of your program:
grocery(Vars) :-
    Vars = [A,B,C,D],
    X #= 100 * A,
    Y #= 100 * B,
    Z #= 100 * C,
    W #= 100 * D,
    X+Y+Z+W #= 711,
    * X*Y*Z*W #= 71100000000.
The resulting program is obviously more general than the original program, because we have removed a constraint from a pure and monotonic Prolog program.
Now, we still get for the preceding query:
?- grocery(Vs), Vs ins 0..sup, label(Vs).
false.
And now we know: Even the more general program has no solution.
If you expect solutions in this case, you will have to change portions of the remaining program to correct the mistake in the formulation.
For more information about this approach, see program-slicing and logical-purity.
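For completeness, here is one way the formulation could be repaired (a sketch of mine, not part of the slicing argument above): model the prices directly as integer amounts of cents, so that the sum 7.11 becomes 711 and the product 7.11 = (X/100)*(Y/100)*(Z/100)*(W/100) becomes X*Y*Z*W #= 711000000.

:- use_module(library(clpfd)).

% X, Y, Z, W are prices in cents.
grocery_cents(Vars) :-
    Vars = [X,Y,Z,W],
    Vars ins 1..711,             % finite domains, so label/1 can work
    X + Y + Z + W #= 711,        % sum 7.11, scaled by 100
    X * Y * Z * W #= 711000000.  % product 7.11, scaled by 100^4

The query ?- grocery_cents(Vs), label(Vs). then yields solutions such as Vs = [120,125,150,316], i.e. 1.20 + 1.25 + 1.50 + 3.16 = 1.20 * 1.25 * 1.50 * 3.16 = 7.11.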

Related

CLPFD ins operator yields not sufficiently instantiated error

So, my goal is to make a map colourer in Prolog. Here's the map I'm using (image omitted), and these are my colouring constraints:
colouring([A,B,C,D,E,F]) :-
    maplist(#\=(A), [B,C,D,E]),
    maplist(#\=(B), [C,D,F]),
    C #\= D,
    maplist(#\=(D), [E,F]),
    E #\= F.
Where [A,B,C,D,E,F] is a list of numbers (colors) from 1 to N.
So I want my solver, given a list of 6 colors and a natural number N, to determine both the colors and N (as constraints, working in both directions), so that even the most general query could yield results:
regions_ncolors(L,N) :- colouring(L), L ins 1..N, label(L).
Where the most general query is regions_ncolors(L,N).
However, the ins operator doesn't seem to accept a variable N; it instead yields an "Arguments are not sufficiently instantiated" error. I've tried using this solution instead:
int_cset_(N,Acc,Acc) :- N #= 0.
int_cset_(N,Acc,Cs) :- N_1 #= N-1, int_cset_(N_1,[N|Acc],Cs).
int_cset(N,Cs) :- int_cset_(N,[],Cs).
% most general solver
regions_ncolors(L,N) :- colouring(L), int_cset(N,Cs), subset(L,Cs), label(L).
Where the arguments of int_cset(N,Cs) are a natural number N and the counting set Sn = {1,2,...,N}.
But this solution is buggy, as regions_ncolors(L,N). only returns the same (single) solution for all N, and when I try to add a constraint on N, it goes into an infinite loop.
So what can I do to make the most general query work both ways (for uninstantiated variables)?
Thanks in advance!
Btw, I added a swi-prolog tag to my last post, although it was removed by moderation. I don't know if this question is specific to SWI-Prolog, which is why I'm keeping the tag, just in case :)
Your colouring is too specific: you encode the topology of your map into it. Not a problem as such, but it defeats the purpose of then having a "most general query" solution for just any list.
If you want to avoid the problem of having a free variable instead of a list, you could first instantiate the list with length/2. Compare:
?- L ins 1..3.
ERROR: Arguments are not sufficiently instantiated
ERROR: In:
ERROR: [16] throw(error(instantiation_error,_86828))
ERROR: [10] clpfd:(_86858 ins 1..3) ...
Is that the same problem as you see?
If you first make a list and a corresponding set:
?- length(L, N), L ins 1..N.
L = [],
N = 0 ;
L = [1],
N = 1 ;
L = [_A, _B],
N = 2,
_A in 1..2,
_B in 1..2 ;
L = [_A, _B, _C],
N = 3,
_A in 1..3,
_B in 1..3,
_C in 1..3 .
If you use length/2 like this you will enumerate the possible lists and integer sets completely outside of the CLP(FD) labeling. You can then add more constraints on the variables on the list and if necessary, use labeling.
Does that help you get any further with your problem? I am not sure how it helps for the colouring problem. You would need a different representation of the map topology so that you don't have to manually define it within a predicate like the colouring/1 you have in your question.
There are several issues in your program.
subset/2 is impure
SWI's (by default) built-in predicate subset/2 is not the pure relation you are hoping for. Instead, it expects that both arguments are already sufficiently instantiated. And if not, it takes a guess and sticks to it:
?- colouring(L), subset(L,[1,2,3,4,5]).
L = [1,2,3,4,2,1].
?- colouring(L), subset(L,[1,2,3,4,5]), L = [2|_].
false.
?- L = [2|_], colouring(L), subset(L,[1,2,3,4,5]), L = [2|_].
L = [2,1,3,4,1,2].
With a pure definition it is impossible that adding a further goal as L = [2|_] in the third query makes a failing query succeed.
In general it is a good idea to not interfere with labeling/2 except for the order of variables and the options argument. The internal implementation is often much faster than manual instantiations.
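For instance (a sketch of my own, not taken from the question; it assumes library(clpfd) and the colouring/1 predicate above, and regions_4colors/1 is a made-up name), constraining the domain and leaving the search entirely to labeling/2 looks like this:

regions_4colors(L) :-
    colouring(L),
    L ins 1..4,
    labeling([ff], L).   % ff = "first fail": label the most constrained variables first

The options list is the place to tune the search; the instantiation itself is best left to labeling/2.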
Also, your map is far too simple to expose subset/2's weakness. I am not sure what the minimal failing graph is, but here is one such example from
R. Janczewski et al. The smallest hard-to-color graph for algorithm DSATUR, Discrete Mathematics 236 (2001) p.164.
colouring_m13([K1,K2,K3,K6,K5,K7,K4]) :-
    maplist(#\=(K1), [K2,K3,K4,K7]),
    maplist(#\=(K2), [K3,K5,K6]),
    maplist(#\=(K3), [K4,K5]),
    maplist(#\=(K4), [K5,K7]),
    maplist(#\=(K5), [K6,K7]),
    maplist(#\=(K6), [K7]).
?- colouring_m13(L), subset(L,[1,2,3,4]).
false. % incomplete
?- L = [3|_], colouring_m13(L), subset(L,[1,2,3,4]).
L = [3,1,2,2,3,1,4].
int_cset/2 never terminates
... (except for some error cases like int_cset(non_integer, _).). As an example consider:
?- int_cset(1,Cs).
Cs = [1]
; loops.
And don't get fooled by the fact that an actual solution was found! It still does not terminate.
@Luis: But how come? I'm getting baffled by this, the same thing is happening on ...
To see this, you need the notion of a failure-slice which helps to identify the responsible part in your program. With some falsework consisting of goals false the responsible part is exposed.
All unnecessary parts have been removed by false. What remains has to be changed somehow.
int_cset_(N,Acc,Acc) :- false, N #= 0.
int_cset_(N,Acc,Cs) :- N1 #= N-1, int_cset_(N1,[N|Acc],Cs), false.
int_cset(N,Cs) :- int_cset_(N,[],Cs), false.
?- int_cset(1, Cs), false.
loops.
Adding the redundant goal N1 #>= 0 will avoid unnecessary non-termination.
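In concrete code (my sketch of that fix; the added goal is redundant with respect to the intended solutions, since the recursion is meant to stop at N #= 0 anyway):

int_cset_(N, Acc, Acc) :- N #= 0.
int_cset_(N, Acc, Cs) :-
    N1 #= N - 1,
    N1 #>= 0,                   % redundant, but prunes the infinite descent for known N
    int_cset_(N1, [N|Acc], Cs).

int_cset(N, Cs) :- int_cset_(N, [], Cs).

Now a query like ?- int_cset(1, Cs), false. terminates.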
This alone will not solve your problem since if N is not given, you will still encounter non-termination due to the following failure slice:
regions_ncolors(L,N) :-
    colouring(L),
    int_cset(N,Cs), false,
    subset(L,Cs),
    label(L).
In int_cset(N,Cs), Cs occurs for the first time and thus cannot influence termination (there is another reason too: its definition would ignore it as well), and therefore only N has a chance to induce termination.
The actual solution has already been suggested by @TA_intern: using length/2, which liberates one from such mode-infested chores.
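Putting the pieces together, here is a sketch (mine, combining @TA_intern's length/2 idea with plain labeling; colouring/1 is the predicate from the question) of a regions_ncolors/2 that also works for the most general query:

regions_ncolors(L, N) :-
    colouring(L),
    length(Cs, N),    % enumerates N = 0, 1, 2, ... outside of CLP(FD)
    Cs = [_|_],       % at least one colour, so 1..N is a non-empty range
    L ins 1..N,
    label(L).

The query regions_ncolors(L, N) now fails for values of N that are too small (for this map, A, B, C and D must all be pairwise different) and starts producing colourings once N is large enough, while regions_ncolors(L, 4) still works as before.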

Steadfastness: Definition and its relation to logical purity and termination

So far, I have always taken steadfastness in Prolog programs to mean:
If, for a query Q, there is a subterm S, such that there is a term T that makes ?- S=T, Q. succeed although ?- Q, S=T. fails, then one of the predicates invoked by Q is not steadfast.
Intuitively, I thus took steadfastness to mean that we cannot use instantiations to "trick" a predicate into giving solutions that are otherwise not only never given, but rejected. Note the difference for nonterminating programs!
In particular, at least to me, logical-purity always implied steadfastness.
Example. To better understand the notion of steadfastness, consider an almost classical counterexample of this property that is frequently cited when introducing advanced students to operational aspects of Prolog, using a wrong definition of a relation between two integers and their maximum:
integer_integer_maximum(X, Y, Y) :-
    Y >= X,
    !.
integer_integer_maximum(X, _, X).
A glaring mistake in this—shall we say "wavering"—definition is, of course, that the following query incorrectly succeeds:
?- M = 0, integer_integer_maximum(0, 1, M).
M = 0. % wrong!
whereas exchanging the goals yields the correct answer:
?- integer_integer_maximum(0, 1, M), M = 0.
false.
A good solution of this problem is to rely on pure methods to describe the relation, using for example:
integer_integer_maximum(X, Y, M) :-
    M #= max(X, Y).
This works correctly in both cases, and can even be used in more situations:
?- integer_integer_maximum(0, 1, M), M = 0.
false.
?- M = 0, integer_integer_maximum(0, 1, M).
false.
| ?- X in 0..2, Y in 3..4, integer_integer_maximum(X, Y, M).
X in 0..2,
Y in 3..4,
M in 3..4 ? ;
no
Now the paper Coding Guidelines for Prolog by Covington et al., co-authored by the very inventor of the notion, Richard O'Keefe, contains the following section:
5.1 Predicates must be steadfast.
Any decent predicate must be “steadfast,” i.e., must work correctly if its output variable already happens to be instantiated to the output value (O’Keefe 1990).
That is,
?- foo(X), X = x.
and
?- foo(x).
must succeed under exactly the same conditions and have the same side effects. Failure to do so is only tolerable for auxiliary predicates whose call patterns are strongly constrained by the main predicates.
Thus, the definition given in the cited paper is considerably stricter than what I stated above.
For example, consider the pure Prolog program:
nat(s(X)) :- nat(X).
nat(0).
Now we are in the following situation:
?- nat(0).
true.
?- nat(X), X = 0.
nontermination
This clearly violates the property of succeeding under exactly the same conditions, because one of the queries no longer succeeds at all.
Hence my question: Should we call the above program not steadfast? Please justify your answer with an explanation of the intention behind steadfastness and its definition in the available literature, its relation to logical-purity as well as relevant termination notions.
In 'The Craft of Prolog', page 96, Richard O'Keefe says: 'we call the property of refusing to give wrong answers even when the query has an unexpected form (typically supplying values for what we normally think of as inputs*) steadfastness'.
* I am not sure if this should be outputs; i.e., in your query ?- M = 0, integer_integer_maximum(0, 1, M). (which wrongly answers M = 0), M is used as an input, but the clause has been designed for it to be an output.
In nat(X), X = 0. we are using X as an output variable not an input variable, but it has not given a wrong answer, as it does not give any answer. So I think under that definition it could be steadfast.
A rule of thumb he gives is 'postpone output unification until after the cut.' Here we have not got a cut, but we still want to postpone the unification.
However, I would have thought it would be sensible to have the base case first rather than the recursive case, so that nat(X), X = 0. would initially succeed... but you would still have other problems.
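To illustrate that remark (my own sketch, not part of the answer above): with the base case first, the first answer of nat(X), X = 0. is indeed found immediately, but the query still does not terminate on backtracking, because ever larger terms s(0), s(s(0)), ... are generated and each fails the subsequent unification.

nat(0).
nat(s(X)) :- nat(X).

% ?- nat(X), X = 0.
% X = 0 ;        % first answer found at once
% ...            % then nontermination: s(0), s(s(0)), ... are all tried and rejected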

Counter-intuitive behavior of min_member/2

min_member(-Min, +List)
True when Min is the smallest member in the standard order of terms. Fails if List is empty.
?- min_member(3, [1,2,X]).
X = 3.
The explanation is of course that variables come before all other terms in the standard order of terms, and unification is used. However, the reported solution feels somehow wrong.
How can it be justified? How should I interpret this solution?
EDIT:
One way to prevent min_member/2 from succeeding with this solution is to change the standard library (SWI-Prolog) implementation as follows:
xmin_member(Min, [H|T]) :-
    xmin_member_(T, H, Min).

xmin_member_([], Min0, Min) :-
    (   var(Min0), nonvar(Min)
    ->  fail
    ;   Min = Min0
    ).
xmin_member_([H|T], Min0, Min) :-
    (   H @>= Min0
    ->  xmin_member_(T, Min0, Min)
    ;   xmin_member_(T, H, Min)
    ).
The rationale behind failing instead of throwing an instantiation error (what @mat suggests in his answer, if I understood correctly) is that this is a clear question:
"Is 3 the minimum member of [1,2,X], when X is a free variable?"
and the answer to this is (to me at least) a clear "No", rather than "I can't really tell."
This is the same class of behavior as sort/2:
?- sort([A,B,C], [3,1,2]).
A = 3,
B = 1,
C = 2.
And the same tricks apply:
?- min_member(3, [1,2,A,B]).
A = 3.
?- var(B), min_member(3, [1,2,A,B]).
B = 3.
The actual source of confusion is a common problem with general Prolog code. There is no clean, generally accepted classification of the kind of purity or impurity of a Prolog predicate. In a manual, and similarly in the standard, pure and impure built-ins are happily mixed together. For this reason, things are often confused, and talking about what should be the case and what not, often leads to unfruitful discussions.
How can it be justified? How should I interpret this solution?
First, look at the "mode declaration" or "mode indicator":
min_member(-Min, +List)
In the SWI documentation, this describes the way a programmer shall use this predicate. Thus, the first argument should be uninstantiated (and probably also unaliased within the goal), the second argument should be instantiated to a list of some sort. For all other uses you are on your own. The system assumes that you are able to check that for yourself. Are you really able to do so? I, for my part, have quite some difficulties with this. ISO has a different system which also originates in DEC10.
Further, the implementation tries to be "reasonable" for unspecified cases. In particular, it tries to be steadfast in the first argument. So the minimum is first computed independently of the value of Min. Then, the resulting value is unified with Min. This robustness against misuses comes often at a price. In this case, min_member/2 always has to visit the entire list. No matter if this is useful or not. Consider
?- length(L, 1000000), maplist(=(1),L), min_member(2, L).
Clearly, 2 is not the minimum of L. This could be detected by considering the first element of the list only. Due to the generality of the definition, the entire list has to be visited.
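To make the cost concrete (a sketch of mine, only intended for the case where Min and all list elements are ground; ground_min_member/2 is a made-up name): a check that is allowed to assume such a query can fail as soon as it sees a smaller element, so the query above would fail after inspecting the first of the million elements.

% Assumes Min and the elements of L are ground.
ground_min_member(Min, L) :-
    \+ ( member(X, L), X @< Min ),   % fails at the first element smaller than Min
    memberchk(Min, L).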
Output unification is handled similarly in the standard. You can spot those cases when the (otherwise) declarative description (which comes first in a built-in's specification) explicitly refers to unification, like
8.5.4 copy_term/2
8.5.4.1 Description
copy_term(Term_1, Term_2) is true iff Term_2 unifies
with a term T which is a renamed copy (7.1.6.2) of
Term_1.
or
8.4.3 sort/2
8.4.3.1 Description
sort(List, Sorted) is true iff Sorted unifies with
the sorted list of List (7.1.6.5).
Here are those arguments (in brackets) of built-ins that can only be understood as being output arguments. Note that there are many more which effectively are output arguments, but that do not need the process of unification after some operation. Think of 8.5.2 arg/3 (3) or 8.2.1 (=)/2 (2) or (1).
8.5.4 copy_term/2 (2),
8.4.2 compare/3 (1),
8.4.3 sort/2 (2),
8.4.4 keysort/2 (2),
8.10.1 findall/3 (3),
8.10.2 bagof/3 (3),
8.10.3 setof/3 (3).
So much for your direct questions, there are some more fundamental problems behind:
Term order
Historically, "standard" term order1 has been defined to permit the definition of setof/3 and sort/2 about 1982. (Prior to it, as in 1978, it was not mentioned in the DEC10 manual user's guide.)
From 1982 on, term order was frequently (erm, ab-)used to implement other orders, particularly because DEC10 did not offer higher-order predicates directly. call/N was to be invented two years later (1984), but needed some more decades to be generally accepted. It is for this reason that Prolog programmers have a somewhat nonchalant attitude towards sorting. Often they intend to sort terms of a certain kind, but prefer to use sort/2 for this purpose — without any additional error checking. A further reason for this was the excellent performance of sort/2, beating various "efficient" libraries in other programming languages decades later (I believe STL had a bug to this end, too). Also, the complete magic in the code - I remember one variable was named Omniumgatherum - did not invite copying and modifying the code.
Term order has two problems: variables (which can be further instantiated to invalidate the current ordering) and infinite terms. Both are handled in current implementations without producing an error, but with still undefined results. Yet, programmers assume that everything will work out. Ideally, there would be comparison predicates that produce
instantiation errors for unclear cases like this suggestion. And another error for incomparable infinite terms.
Both SICStus and SWI have min_member/2, but only SICStus has min_member/3 with an additional argument to specify the order employed. So the goal
?- min_member(=<, M, Ms).
behaves more to your expectations, but only for numbers (plus arithmetic expressions).
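If your system lacks min_member/3, the idea can be sketched roughly as follows (my own code; min_member_with/3 is a made-up name, not the SICStus predicate, and foldl/4 is assumed as provided by SWI's library(apply)):

% Minimum of a non-empty list under an explicitly given order Rel_2.
min_member_with(Rel_2, Min, [H|T]) :-
    foldl(smaller_of(Rel_2), T, H, Min).

smaller_of(Rel_2, X, Min0, Min) :-
    (   call(Rel_2, Min0, X)
    ->  Min = Min0
    ;   Min = X
    ).

With Rel_2 = (=<), a goal like min_member_with(=<, M, [1,2,X]) raises an instantiation error instead of silently placing the variable X before all numbers.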
Footnotes:
1 I put standard in quotes, as in "standard" term order, for this notion has existed since about 1982, whereas the standard was published in 1995.
Clearly min_member/2 is not a true relation:
?- min_member(X, [X,0]), X = 1.
X = 1.
yet, after simply exchanging the two goals by (highly desirable) commutativity of conjunction, we get:
?- X = 1, min_member(X, [X,0]).
false.
This is clearly quite bad, as you correctly observe.
Constraints are a declarative solution for such problems. In the case of integers, finite domain constraints are a completely declarative solution for such problems.
Without constraints, it is best to throw an instantiation error when we know too little to give a sound answer.
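For example, here is a minimal sketch (mine; it assumes library(clpfd), SWI's foldl/4, and integer elements) of a fully relational minimum of a non-empty list:

:- use_module(library(clpfd)).

min_of(Min, [X|Xs]) :-
    foldl(fd_min_, Xs, X, Min).

fd_min_(X, Min0, Min) :- Min #= min(Min0, X).

With this definition, both ?- min_of(X, [X,0]), X = 1. and ?- X = 1, min_of(X, [X,0]). fail, so the conjunction can be reordered freely.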
This is a common property of many (all?) predicates that depend on the standard order of terms, while the order between two terms can change after unification. Baseline is the conjunction below, which cannot be reverted either:
?- X @< 2, X = 3.
X = 3.
Most predicates using a -Value annotation for an argument say that pred(Value) is the same as pred(Var), Value = Var. Here is another example:
?- sort([2,X], [3,2]).
X = 3.
These predicates only represent clean relations if the input is ground. It is too much to demand the input to be ground, though, because they can be meaningfully used with variables, as long as the user is aware that s/he should not further instantiate any of the ordered terms. In that sense, I disagree with @mat. I do agree that constraints can surely make some of these relations sound.
This is how min_member/2 is implemented:
min_member(Min, [H|T]) :-
    min_member_(T, H, Min).

min_member_([], Min, Min).
min_member_([H|T], Min0, Min) :-
    (   H @>= Min0
    ->  min_member_(T, Min0, Min)
    ;   min_member_(T, H, Min)
    ).
So it seems that min_member/2 actually tries to unify Min (the first argument) with the smallest element in List in the standard order of terms.
I hope I am not off-topic with this third answer. I did not edit one of the previous two as I think it's a totally different idea. I was wondering if this undesired behaviour:
?- min_member(X, [A, B]), A = 3, B = 2.
X = A, A = 3,
B = 2.
can be avoided if some conditions are postponed until the moment when A and B get instantiated.
promise_relation(Rel_2, X, Y) :-
    call(Rel_2, X, Y),
    when(ground(X), call(Rel_2, X, Y)),
    when(ground(Y), call(Rel_2, X, Y)).

min_member_1(Min, Lst) :-
    member(Min, Lst),
    maplist(promise_relation(@=<, Min), Lst).
What I want from min_member_1(?Min, ?Lst) is for it to express a relation that says Min will always be lower (in the standard order of terms) than any of the elements in Lst.
?- min_member_1(X, L), L = [_,2,3,4], X = 1.
X = 1,
L = [1, 2, 3, 4] .
If variables get instantiated at a later time, the order in which they get bound becomes important as a comparison between a free variable and an instantiated one might be made.
?- min_member_1(X, [A,B,C]), B is 3, C is 4, A is 1.
X = A, A = 1,
B = 3,
C = 4 ;
false.
?- min_member_1(X, [A,B,C]), A is 1, B is 3, C is 4.
false.
But this can be avoided by unifying all of them in the same goal:
?- min_member_1(X, [A,B,C]), [A, B, C] = [1, 3, 4].
X = A, A = 1,
B = 3,
C = 4 ;
false.
Versions
If the comparisons are intended only for instantiated variables, promise_relation/3 can be changed to check the relation only when both variables get instantiated:
promise_relation(Rel_2, X, Y) :-
    when((ground(X), ground(Y)), call(Rel_2, X, Y)).
A simple test:
?- L = [_, _, _, _], min_member_1(X, L), L = [3,4,1,2].
L = [3, 4, 1, 2],
X = 1 ;
false.
Edits were made to improve the initial post thanks to false's comments and suggestions.
I have an observation regarding your xmin_member implementation. It fails on this query:
?- xmin_member(1, [X, 2, 3]).
false.
I tried to include the case when the list might include free variables. So, I came up with this:
ymin_member(Min, Lst) :-
    member(Min, Lst),
    maplist(@=<(Min), Lst).
Of course it's worse in terms of efficiency, but it works in that case:
?- ymin_member(1, [X, 2, 3]).
X = 1 ;
false.
?- ymin_member(X, [X, 2, 3]).
true ;
X = 2 ;
false.

Prevent backtracking after first solution to Fibonacci pair

The term fib(N,F) is true when F is the Nth Fibonacci number.
The following Prolog code is generally working for me:
:- use_module(library(clpfd)).

fib(0,0).
fib(1,1).
fib(N,F) :-
    N #> 1,
    N #=< F + 1,
    F #>= N - 1,
    F #> 0,
    N1 #= N - 1,
    N2 #= N - 2,
    F1 #=< F,
    F2 #=< F,
    F #= F1 + F2,
    fib(N1,F1),
    fib(N2,F2).
When executing this query (in SICStus Prolog), the first (and correct) match is found for N (rather instantly):
| ?- fib(X,377).
X = 14 ?
When proceeding (by entering ";") to see if there are any further matches (which is impossible by definition), it takes an enormous amount of time (compared to the first match) just to always reply with no:
| ?- fib(X,377).
X = 14 ? ;
no
Being rather new to Prolog, I tried to use the cut operator (!) in various ways, but I cannot find a way to prevent the search after the first match. Is it even possible given the above rules? If yes, please let me know how :)
There are two parts to get what you want.
The first is to use call_semidet/1,
which ensures that there is exactly one answer. See links for an
implementation for SICStus, too. In the unlikely event of having more
than one answer, call_semidet/1 produces a safe error. Note that
once/1 and !/0 alone simply cut away whatever there has been.
However, you will not be very happy with call_semidet/1 alone. It
essentially executes a goal twice. Once to see if there is no more
than one answer, and only then again to obtain the first answer. So
you will get your answer much later.
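For reference, one possible formulation of call_semidet/1 looks like this (a sketch; the actual implementations are in the linked answers, and call_nth/2 is assumed as provided by SWI's library(solution_sequences)):

:- use_module(library(solution_sequences)).   % for call_nth/2

% Like once(Goal), but raises an error if Goal has a second answer,
% instead of silently cutting it away.
call_semidet(Goal) :-
    (   \+ \+ call_nth(Goal, 2)
    ->  throw(error(mode_error(semidet, Goal), call_semidet/1))
    ;   once(Goal)
    ).

You can see the double execution directly in this definition: first the check for a second answer, then once/1 for the answer itself.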
The other part is to speed up your definition such that above will not
be too disturbing to you. The solution suggested by CapelliC changes
the algorithm altogether which is specific to your concrete function
but does not extend to any other function. But it also describes a
different relation.
Essentially, you found the quintessential parts already, you only need
to assemble them a bit differently to make them work. But, let's start
with the basics.
CLPFD as CLP(Z)
What you are doing here is still not that common to many Prolog
programmers. You use finite domain constraints for general integer
arithmetics. That is, you are using CLPFD as a pure substitute to the
moded expression evaluation found in (is)/2, (>)/2 and the
like. So you want to extend the finite domain paradigm which assumes
that we express everything within finite given intervals. In fact, it
would be more appropriate to call this extension CLP(Z).
This extension does not work in every Prolog offering finite
domains. In fact, there is only SICStus, SWI and YAP that correctly
handle the case of infinite intervals. Other systems might fail or
succeed when they rather should succeed or fail - mostly when integers
are getting too large.
Understanding non-termination
The first issue is to understand the actual reason why your original
program did not terminate. To this end, I will use a failure
slice. That
is, I add false goals into your program. The point being: if the
resulting program does not terminate then also the original program
does not terminate. So the minimal failure slice of your (presumed)
original program is:
fiborig(0,0) :- false.
fiborig(1,1) :- false.
fiborig(N,F) :-
    N #> 1,
    N1 #= N-1,
    N2 #= N-2,
    F #= F1+F2,
    fiborig(N1,F1), false,
    fiborig(N2,F2).
There are two sources for non-termination here: One is that for a given
F there are infinitely many values for F1 and F2. That can be
easily handled by observing that F1 #> 0, F2 #>= 0.
The other is more related to Prolog's execution mechanism. To
illustrate it, I will add F2 #= 0. Again, because the resulting
program does not terminate, also the original program will loop.
fiborig(0,0) :- false.
fiborig(1,1) :- false.
fiborig(N,F) :-
    N #> 1,
    N1 #= N-1,
    N2 #= N-2,
    F #= F1+F2,
    F1 #> 0,
    F2 #>= 0,
    F2 #= 0,
    fiborig(N1,F1), false,
    fiborig(N2,F2).
So the actual problem is that the goal that might have 0 as result
is executed too late. Simply exchange the two recursive goals. And add
the redundant constraint F2 #=< F1 for efficiency.
fibmin(0,0).
fibmin(1,1).
fibmin(N,F) :-
    N #> 1,
    N1 #= N-1,
    N2 #= N-2,
    F1 #> 0,
    F2 #>= 0,
    F1 #>= F2,
    F #= F1+F2,
    fibmin(N2,F2),
    fibmin(N1,F1).
On my lame laptop I got the following runtimes for fib(N, 377):
              SICStus (answer/total)     SWI (answer/total)
fiborig:      0.090s / 27.220s           1.332s / 1071.380s
fibmin:       0.080s /  0.110s           1.201s /    1.506s
Take the sum of both to get the runtime for call_semidet/1.
Note that SWI's implementation is written in Prolog only, whereas
SICStus is partly in C, partly in Prolog. So when porting SWI's (actually @mat's) clpfd to
SICStus, it might be comparable in speed.
There are still many things to optimize. Think of indexing, and the
handling of the "counters", N, N1, N2.
Also your original program can be improved quite a bit. For example,
you are unnecessarily posting the constraint F #>= N-1 three times!
If you are only interested in the first solution or know that there is at most one solution, you can use once/1 to commit to that solution:
?- once(fib(X, 377)).
+1 for using CLP(FD) as a declarative alternative to lower-level arithmetic. Your version can be used in all directions, whereas a version based on primitive integer arithmetic cannot.
I played a bit with another definition that I wrote in standard arithmetic and translated to CLP(FD) on purpose for this question.
My plain Prolog definition was
fibo(1, 1,0).
fibo(2, 2,1).
fibo(N, F,A) :- N > 2, M is N -1, fibo(M, A,B), F is A+B.
Once translated, since it takes too long in reverse mode (or doesn't terminate, I don't know), I tried adding more constraints (and moving them around) to see where a 'backward' computation terminates:
fibo(1, 1,0).
fibo(2, 2,1).
fibo(N, F,A) :-
    N #> 2,
    M #= N-1,
    M #>= 0,      % added
    A #>= 0,      % added
    B #< A,       % added - this is key
    F #= A+B,
    fibo(M, A,B). % moved - this is key
After adding B #< A and moving the recursion to the last call, it now works.
?- time(fibo(U,377,Y)).
% 77,005 inferences, 0.032 CPU in 0.033 seconds (99% CPU, 2371149 Lips)
U = 13,
Y = 233 ;
% 37,389 inferences, 0.023 CPU in 0.023 seconds (100% CPU, 1651757 Lips)
false.
Edit: To account for 0-based sequences, add a fact
fibo(0,0,_).
Maybe this explains the role of the last argument: it's an accumulator.

Trying to count steps through recursion?

This is a cube whose edges are directional; they only go left to right, back to front, and top to bottom.
edge(a,b).
edge(a,c).
edge(a,e).
edge(b,d).
edge(b,f).
edge(c,d).
edge(c,g).
edge(d,h).
edge(e,f).
edge(e,g).
edge(f,h).
edge(g,h).
With the method below we can check if we can go from A to H, for example: cango(A,H).
move(X,Y):- edge(X,Y).
move(X,Y):- edge(X,Z), move(Z,Y).
With move2, I'm trying to implement counting of the steps required.
move2(X,Y,N):- N is N+1, edge(X,Y).
move2(X,Y,N):- N is N+1, edge(X,Z), move2(Z,Y,N).
How would I implement this?
Arithmetic evaluation is carried out as usual in Prolog, but assignment doesn't work as usual. You need to introduce a new variable to hold the incremented value:
move2(X,Y,N,T):- T is N+1, edge(X,Y).
move2(X,Y,N,T):- M is N+1, edge(X,Z), move2(Z,Y,M,T).
and initialize N to 0 at first call. Such added variables (T in our case) are often called accumulators.
move2(X,Y,1) :- edge(X,Y), !.
move2(X,Y,NN) :- edge(X,Z), move2(Z,Y,N), NN is N+1.
(is)/2 is very sensitive to instantiations in its second argument. That means that you cannot use it in an entirely relational manner. You can ask X is 1+1., you can even ask 2 is 1+1. but you cannot ask: 2 is X+1.
So when you are programming with predicates like (is)/2, you have to imagine what modes a predicate will be used with. Such considerations easily lead to errors, in particular if you have just started. But don't worry: more proficient programmers also still fall prey to such problems.
There is a clean alternative in several Prolog systems: In SICStus, YAP, SWI there is a library(clpfd) which permits you to express relations between integers. Usually this library is used for constraint programming, but you can also use it as a safe and clean replacement for (is)/2 on the integers. Even more so, this library is often very efficiently compiled such that the resulting code is comparable in speed to (is)/2.
?- use_module(library(clpfd)).
true.
?- X #= 1+1.
X = 2.
?- 2 #= 1+1.
true.
?- 2 #= X+1.
X = 1.
So now back to your program, you can simply write:
move2(X,Y,1):- edge(X,Y).
move2(X,Y,N0):- N0 #>= 1, N0 #= N1+1, edge(X,Z), move2(Z,Y,N1).
You now get all distances as required.
But there is more to it ...
To make sure that move2/3 actually terminates, try:
?- move2(A, B, N), false.
false.
Now we can be sure that move2/3 always terminates. Always?
Assume you have added a further edge:
edge(f, f).
Now the above query loops. But you can still use your program to your advantage!
Determine the number of nodes:
?- setof(C,A^B^(edge(A,B),member(C,[A,B])),Cs), length(Cs, N).
Cs = [a, b, c, d, e, f, g, h], N = 8.
So the longest path will take just 7 steps!
Now you can ask the query again, constraining N to a value less than or equal to 7:
?- 7 #>= N, move2(A,B, N), false.
false.
With this additional constraint, you have again a terminating definition! No more loops.
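As a convenience, the node count and the bound can be computed inside a wrapper (a sketch of mine, reusing the setof/3 goal from above; bounded_move2/3 is a made-up name):

bounded_move2(X, Y, N) :-
    setof(C, A^B^(edge(A,B), member(C,[A,B])), Cs),
    length(Cs, Nodes),
    Max #= Nodes - 1,    % at most Nodes-1 steps are needed, as argued above
    N in 1..Max,
    move2(X, Y, N).

With the edges above this bounds N to 1..7, so queries over bounded_move2/3 terminate even in the presence of edge(f, f).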
