CLPFD ins operator yields "not sufficiently instantiated" error - Prolog

So, my goal is to make a map colourer in Prolog. The map I'm using has six regions, A through F, and these are my colouring constraints:
colouring([A,B,C,D,E,F]) :-
    maplist(#\=(A), [B,C,D,E]),
    maplist(#\=(B), [C,D,F]),
    C #\= D,
    maplist(#\=(D), [E,F]),
    E #\= F.
where [A,B,C,D,E,F] is a list of numbers (colours) from 1 to N.
So I want my solver, given a list of 6 colours and a natural number N, to determine both the colours and N, with the constraints working both ways, so that even the most general query can yield results:
regions_ncolors(L,N) :- colouring(L), L ins 1..N, label(L).
Where the most general query is regions_ncolors(L,N).
However, the ins operator doesn't seem to accept a variable N; instead it yields an "arguments are not sufficiently instantiated" error. I've tried this solution instead:
int_cset_(N,Acc,Acc) :- N #= 0.
int_cset_(N,Acc,Cs) :- N_1 #= N-1, int_cset_(N_1,[N|Acc],Cs).
int_cset(N,Cs) :- int_cset_(N,[],Cs).
% most general solver
regions_ncolors(L,N) :- colouring(L), int_cset(N,Cs), subset(L,Cs), label(L).
where the arguments of int_cset(N,Cs) are a natural number N and the counting set Sn = {1,2,...,N}.
But this solution is buggy: regions_ncolors(L,N) returns only one and the same solution for every N, and when I try to add a constraint on N, it goes into an infinite loop.
So what can I do to make the most general query work both ways (i.e. with uninstantiated variables)?
Thanks in advance!
Btw, I added a swi-prolog tag to my last post, although it was removed by moderation. I don't know if this question is specific to SWI-Prolog, which is why I'm keeping the tag, just in case :)

Your colouring is too specific: you encode the topology of your map into it. Not a problem as such, but it defeats the purpose of then having a "most general query" solution for just any list.
If you want to avoid the problem of having a free variable instead of a list, you could first instantiate the list with length/2. Compare:
?- L ins 1..3.
ERROR: Arguments are not sufficiently instantiated
ERROR: In:
ERROR: [16] throw(error(instantiation_error,_86828))
ERROR: [10] clpfd:(_86858 ins 1..3) ...
Is that the same problem as you see?
If you first make a list and a corresponding set:
?- length(L, N), L ins 1..N.
L = [],
N = 0 ;
L = [1],
N = 1 ;
L = [_A, _B],
N = 2,
_A in 1..2,
_B in 1..2 ;
L = [_A, _B, _C],
N = 3,
_A in 1..3,
_B in 1..3,
_C in 1..3 .
If you use length/2 like this, you will enumerate the possible lists and integer sets completely outside of the CLP(FD) labeling. You can then add more constraints on the variables in the list and, if necessary, use labeling.
Does that help you get any further with your problem? I am not sure how it helps for the colouring problem. You would need a different representation of the map topology, so that you don't have to manually define it within a predicate like the colouring/1 you have in your question.
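For example, the map could be given as a list of pairs of adjacent regions, and a generic predicate could post one disequality per pair. A minimal sketch (the names colouring_pairs/1 and different_colours/1 are made up for illustration; the pairs below are the adjacencies implicit in your colouring/1):
different_colours(X-Y) :- X #\= Y.

colouring_pairs(Neighbours) :-
    maplist(different_colours, Neighbours).

?- Map = [A-B, A-C, A-D, A-E, B-C, B-D, B-F, C-D, D-E, D-F, E-F],
   colouring_pairs(Map),
   [A,B,C,D,E,F] ins 1..4,
   label([A,B,C,D,E,F]).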

There are several issues in your program.
subset/2 is impure
SWI's (by default) built-in predicate subset/2 is not the pure relation you are hoping for. Instead, it expects that both arguments are already sufficiently instantiated. And if not, it takes a guess and sticks to it:
?- colouring(L), subset(L,[1,2,3,4,5]).
L = [1,2,3,4,2,1].
?- colouring(L), subset(L,[1,2,3,4,5]), L = [2|_].
false.
?- L = [2|_], colouring(L), subset(L,[1,2,3,4,5]), L = [2|_].
L = [2,1,3,4,1,2].
With a pure definition it is impossible that adding a further goal as L = [2|_] in the third query makes a failing query succeed.
In general it is a good idea to not interfere with labeling/2 except for the order of variables and the options argument. The internal implementation is often much faster than manual instantiations.
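For example, rather than hand-rolling the enumeration, one would typically just vary the options of labeling/2 (a sketch; ff is the first-fail variable selection strategy):
?- colouring(L), L ins 1..4, labeling([ff], L).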
Also, your map is far too simple to expose subset/2's weakness. I am not sure what the minimal failing graph is, but here is one such example from
R. Janczewski et al. The smallest hard-to-color graph for algorithm DSATUR, Discrete Mathematics 236 (2001) p.164.
colouring_m13([K1,K2,K3,K6,K5,K7,K4]):-
maplist(#\=(K1), [K2,K3,K4,K7]),
maplist(#\=(K2), [K3,K5,K6]),
maplist(#\=(K3), [K4,K5]),
maplist(#\=(K4), [K5,K7]),
maplist(#\=(K5), [K6,K7]),
maplist(#\=(K6), [K7]).
?- colouring_m13(L), subset(L,[1,2,3,4]).
false. % incomplete
?- L = [3|_], colouring_m13(L), subset(L,[1,2,3,4]).
L = [3,1,2,2,3,1,4].
int_cset/2 never terminates
... (except for some error cases like int_cset(non_integer, _).). As an example consider:
?- int_cset(1,Cs).
Cs = [1]
; loops.
And don't get fooled by the fact that an actual solution was found! It still does not terminate.
@Luis: But how come? I'm baffled by this; the same thing is happening on ...
To see this, you need the notion of a failure-slice which helps to identify the responsible part in your program. With some falsework consisting of goals false the responsible part is exposed.
All unnecessary parts have been removed by false. What remains has to be changed somehow.
int_cset_(N,Acc,Acc) :- false, N #= 0.
int_cset_(N,Acc,Cs) :- N1 #= N-1, int_cset_(N1,[N|Acc],Cs), false.
int_cset(N,Cs) :- int_cset_(N,[],Cs), false.
?- int_cset(1, Cs), false.
loops.
Adding the redundant goal N1 #>= 0 (equivalently, N #> 0) to the recursive clause will avoid this unnecessary non-termination.
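Concretely, the patched definition could look like this (a sketch; only the recursive clause changes):
int_cset_(N,Acc,Acc) :- N #= 0.
int_cset_(N,Acc,Cs) :- N1 #= N-1, N1 #>= 0, int_cset_(N1,[N|Acc],Cs).
int_cset(N,Cs) :- int_cset_(N,[],Cs).
?- int_cset(1, Cs).
Cs = [1]
; false.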
This alone will not solve your problem since if N is not given, you will still encounter non-termination due to the following failure slice:
regions_ncolors(L,N) :-
colouring(L),
int_cset(N,Cs), false,
subset(L,Cs),
label(L).
In int_cset(N,Cs), Cs occurs for the first time and thus cannot influence termination (there is another reason, too: its definition would ignore it as well), and therefore only N has a chance to induce termination.
The actual solution has already been suggested by @TA_intern: use length/2, which liberates one from such mode-infested chores.
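Putting the pieces together, a sketch of a solver that works in both directions could look like this; the fresh list in length/2 is only there to enumerate N = 0, 1, 2, ... fairly when N is unbound:
regions_ncolors(L,N) :-
    colouring(L),
    length(_Colours, N),   % enumerates N when it is unbound
    N #>= 1,
    L ins 1..N,
    label(L).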

Related

Why doesn't this clpfd query terminate until I add a redundant constraint?

I've written some predicates which take the length of a list and attach some constraints to it (is this the right vocabulary to be using?):
clp_length([], 0).
clp_length([_Head|Rest], Length) :-
Length #>= 0, Length #= Length1 + 1,
clp_length(Rest, Length1).
clp_length2([], 0).
clp_length2([_Head|Rest], Length) :-
Length #= Length1 + 1,
clp_length2(Rest, Length1).
The first terminates on this simple query, but the second doesn't:
?- Small in 1..2, clp_length(Little, Small).
Small = 1,
Little = [_1348] ;
Small = 2,
Little = [_1348, _2174] ;
false.
?- Small in 1..2, clp_length2(Little, Small).
Small = 1,
Little = [_1346] ;
Small = 2,
Little = [_1346, _2046] ;
% OOPS %
This is strange to me, because Length is pretty clearly greater than 0. To figure that out, you could either search, find the zero, and deduce that adding from zero can only increase the number, or you could propagate the in 1..2 constraint down. It feels like the extra constraint is redundant! That it isn't means my mental model of clpfd is pretty wrong.
So I think I have two questions (would appreciate answers to the second as comments)
Specifically, why does this additional constraint cause the query to work correctly?
Generally, is there a resource I can use to learn about how clpfd is implemented, instead of just seeing some examples of how it can be used? I'd prefer not to have to read Markus Triska's thesis but that's the only source I can find. Is that my only option if I want to be able to answer questions like this one?
First, there is the issue with naming. Please refer to previous answers by mat and me recommending relational names. You won't go far using inappropriate names. So list_length/2 or list_fdlength/2 would be an appropriate name. Thus we have list_fdlength/2 and list_fdlength2/2.
Second, consider the rule of list_fdlength2/2. Nothing suggests that 0 is of relevance to you. So that rule will be exactly the same whether you use 0 or 1 or -1 or whatever as the base case. So how should this poor rule ever realize that 0 is the end for you? Better, consider a generalization:
list_fdlength2(fake(N), N) :- % Extension to permit fake lists
N #< 0.
list_fdlength2([], 0).
list_fdlength2([_Head|Rest], Length) :-
Length #= Length1 + 1,
list_fdlength2(Rest, Length1).
This generalization shows all real answers plus fake answers. Note that I have not changed the rule; I only added this alternative clause. Thus the fake solutions are actually caused by the rule:
?- list_fdlength2(L, 1).
L = [_A]
; L = [_A, _B|fake(-1)]
; L = [_A, _B, _C|fake(-2)]
; ... .
?- list_fdlength2(L, 0).
L = []
; L = [_A|fake(-1)]
; L = [_A, _B|fake(-2)]
; ... .
Each clause tries to contribute to the solutions just in the scope of the clause. But there is no way to derive (by the built-in Prolog execution mechanism) that some rules are no longer of relevance. You have to state that explicitly with redundant constraints as you did.
Now, back to your original solution containing the redundant constraint Length #>= 0. There should not be any such fake solution at all.
list_fdlength(fake(N), N) :-
N #< 0.
list_fdlength([], 0).
list_fdlength([_Head|Rest], Length) :-
Length #>= 0,
Length #= Length1 + 1,
list_fdlength(Rest, Length1).
?- list_fdlength(L, 1).
L = [_A]
; L = [_A, _B|fake(-1)] % totally unexpected
; false.
?- list_fdlength(L, 0).
L = []
; L = [_A|fake(-1)] % eek
; false.
There are fake answers, too! How ugly! At least, they are finite in number. But you could have done better by using Length #>= 1 in place of Length #>= 0. With this little change, there are no longer any fake solutions for non-negative lengths, and thus your original program becomes better, too.
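Applied to the predicate from the question, that suggestion reads (a sketch):
clp_length([], 0).
clp_length([_Head|Rest], Length) :-
    Length #>= 1,
    Length #= Length1 + 1,
    clp_length(Rest, Length1).
With Length #>= 1, the fake(-1) answer shown above disappears as well.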

Steadfastness: Definition and its relation to logical purity and termination

So far, I have always taken steadfastness in Prolog programs to mean:
If, for a query Q, there is a subterm S, such that there is a term T that makes ?- S=T, Q. succeed although ?- Q, S=T. fails, then one of the predicates invoked by Q is not steadfast.
Intuitively, I thus took steadfastness to mean that we cannot use instantiations to "trick" a predicate into giving solutions that are otherwise not only never given, but rejected. Note the difference for nonterminating programs!
In particular, at least to me, logical-purity always implied steadfastness.
Example. To better understand the notion of steadfastness, consider an almost classical counterexample of this property that is frequently cited when introducing advanced students to operational aspects of Prolog, using a wrong definition of a relation between two integers and their maximum:
integer_integer_maximum(X, Y, Y) :-
Y >= X,
!.
integer_integer_maximum(X, _, X).
A glaring mistake in this—shall we say "wavering"—definition is, of course, that the following query incorrectly succeeds:
?- M = 0, integer_integer_maximum(0, 1, M).
M = 0. % wrong!
whereas exchanging the goals yields the correct answer:
?- integer_integer_maximum(0, 1, M), M = 0.
false.
A good solution of this problem is to rely on pure methods to describe the relation, using for example:
integer_integer_maximum(X, Y, M) :-
M #= max(X, Y).
This works correctly in both cases, and can even be used in more situations:
?- integer_integer_maximum(0, 1, M), M = 0.
false.
?- M = 0, integer_integer_maximum(0, 1, M).
false.
| ?- X in 0..2, Y in 3..4, integer_integer_maximum(X, Y, M).
X in 0..2,
Y in 3..4,
M in 3..4 ? ;
no
Now the paper Coding Guidelines for Prolog by Covington et al., co-authored by the very inventor of the notion, Richard O'Keefe, contains the following section:
5.1 Predicates must be steadfast.
Any decent predicate must be “steadfast,” i.e., must work correctly if its output variable already happens to be instantiated to the output value (O’Keefe 1990).
That is,
?- foo(X), X = x.
and
?- foo(x).
must succeed under exactly the same conditions and have the same side effects.
Failure to do so is only tolerable for auxiliary predicates whose call patterns are
strongly constrained by the main predicates.
Thus, the definition given in the cited paper is considerably stricter than what I stated above.
For example, consider the pure Prolog program:
nat(s(X)) :- nat(X).
nat(0).
Now we are in the following situation:
?- nat(0).
true.
?- nat(X), X = 0.
nontermination
This clearly violates the property of succeeding under exactly the same conditions, because one of the queries no longer succeeds at all.
Hence my question: Should we call the above program not steadfast? Please justify your answer with an explanation of the intention behind steadfastness and its definition in the available literature, its relation to logical-purity as well as relevant termination notions.
In 'The Craft of Prolog', page 96, Richard O'Keefe says 'we call the property of refusing to give wrong answers even when the query has an unexpected form (typically supplying values for what we normally think of as inputs*) steadfastness'.
*I am not sure if this should be outputs. i.e. in your query ?- M = 0, integer_integer_maximum(0, 1, M). M = 0. % wrong! M is used as an input but the clause has been designed for it to be an output.
In nat(X), X = 0. we are using X as an output variable, not an input variable, but it has not given a wrong answer, as it does not give any answer at all. So I think that under that definition it could be called steadfast.
A rule of thumb he gives is 'postpone output unification until after the cut.' Here we have not got a cut, but we still want to postpone the unification.
However, I would have thought it would be sensible to have the base case first rather than the recursive case, so that nat(X), X = 0. would initially succeed... but you would still have other problems.
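For instance, with the clauses reordered (a sketch):
nat(0).
nat(s(X)) :- nat(X).
?- nat(X), X = 0.
X = 0
; ...  % asking for further answers does not terminate: nat/1 keeps producing s(0), s(s(0)), ...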

How do I freeze a goal for a list of variables?

My ultimate goal is to make a reified version of automaton/3 that freezes if there are any variables in the sequence passed to it, i.e. I don't want the automaton to instantiate variables.
(fd_length/3, if_/3 etc. as defined by other people here on SO.)
To start with I have a reified test for single variables:
var_t(X,T) :-
    (   var(X)
    ->  T = true
    ;   T = false
    ).
This allows me to implement:
if_var_freeze(X,Goal):-
if_(var_t(X),freeze(X,Goal),Goal).
So I can do something like:
?-X=bob,Goal =format("hello ~w\n",[X]),if_var_freeze(X,Goal).
Which will behave the same as:
?-Goal =format("hello ~w\n",[X]),if_var_freeze(X,Goal),X=bob.
How do I expand this to work on a list of variables so that Goal is only called once, when all the vars have been instantiated?
With this method, if I have more than one variable, I can get this behaviour, which I don't want:
?-List=[X,Y],Goal = format("hello, ~w and ~w\n",List),
if_var_freeze(X,Goal),
if_var_freeze(Y,Goal),X=bob.
hello, bob and _G3322
List = [bob, Y],
X = bob,
Goal = format("hello, ~w and ~w\n", [bob, Y]),
freeze(Y, format("hello, ~w and ~w\n", [bob, Y])).
I have tried:
freeze_list(List,Goal):-
freeze_list_h(List,Goal,FrozenList),
call(FrozenList).
freeze_list_h([X],Goal,freeze(X,Goal)).
freeze_list_h(List,Goal,freeze(H,Frozen)):-
List=[H|T],
freeze_list_h(T,Goal,Frozen).
Which works like:
?- X=bob,freeze_list([X,Y,Z],format("Hello ~w, ~w and ~w\n",[X,Y,Z])),Y=fred.
X = bob,
Y = fred,
freeze(Z, format("Hello ~w, ~w and ~w\n", [bob, fred, Z])) .
?- X=bob,freeze_list([X,Y,Z],format("Hello ~w, ~w and ~w\n",[X,Y,Z])),Y=fred,Z=sue.
Hello bob, fred and sue
X = bob,
Y = fred,
Z = sue .
Which seems okay, but I am having trouble applying it to automaton/3.
To reiterate, the aim is to make a reified version of automaton/3 that freezes if there are any variables in the sequence passed to it, i.e. I don't want the automaton to instantiate variables.
This is what I have:
ga(Seq,G) :-
G=automaton(Seq, [source(a),sink(c)],
[arc(a,0,a), arc(a,1,b),
arc(b,0,a), arc(b,1,c),
arc(c,0,c), arc(c,1,c)]).
max_seq_automaton_t(Max,Seq,A,T):-
Max #>=L,
fd_length(Seq,L),
maplist(var_t,Seq,Var_T_List), %find var_t for each member of seq
maplist(=(false),Var_T_List), %check that all are false, i.e. no uninstantiated vars
call(A),!,
T=true.
max_seq_automaton_t(Max,Seq,A,T):-
Max #>=L,
fd_length(Seq,L),
maplist(var_t,Seq,Var_T_List), %find var_t for each member of seq
maplist(=(false),Var_T_List), %check that all are false, i.e. no uninstantiated vars
\+call(A),!,
T=false.
max_seq_automaton_t(Max,Seq,A,true):-
Max #>=L,
fd_length(Seq,L),
maplist(var_t,Seq,Var_T_List), %find var_t for each
memberd_t(true,Var_T_List,true), %at least one var
freeze_list_h(Seq,A,FrozenList),
call(FrozenList),
call(A).
max_seq_automaton_t(Max,Seq,A,false):-
Max #>=L,
fd_length(Seq,L),
maplist(var_t,Seq,Var_T_List), %find var_t for each
memberd_t(true,Var_T_List,true), %at least one var
freeze_list_h(Seq,A,FrozenList),
call(FrozenList),
\+call(A).
This does not work; the following goal should be frozen until X is instantiated:
?- Seq=[X,1],ga(Seq,A),max_seq_automaton_t(3,Seq,A,T).
Seq = [1, 1],
X = 1,
A = automaton([1, 1], [source(a), sink(c)], [arc(a, 0, a), arc(a, 1, b), arc(b, 0, a), arc(b, 1, c), arc(c, 0, c), arc(c, 1, c)]),
T = true
Update: This is what I now have, which I think works as I originally intended, but I am digesting what @Mat has said to decide whether this is actually what I want. Will update further tomorrow.
goals_to_conj([G|Gs],Conj) :-
goals_to_conj_(Gs,G,Conj).
goals_to_conj_([],G,nonvar(G)).
goals_to_conj_([G|Gs],G0,(nonvar(G0),Conj)) :-
goals_to_conj_(Gs,G,Conj).
max_seq_automaton_t(Max,Seq,A,T):-
Max #>=L,
fd_length(Seq,L),
maplist(var_t,Seq,Var_T_List), %find var_t for each member of seq
maplist(=(false),Var_T_List), %check that all are false, i.e. no uninstantiated vars
call(A),!,
T=true.
max_seq_automaton_t(Max,Seq,A,T):-
Max #>=L,
fd_length(Seq,L),
maplist(var_t,Seq,Var_T_List), %find var_t for each member of seq
maplist(=(false),Var_T_List), %check that all are false, i.e. no uninstantiated vars
\+call(A),!,
T=false.
max_seq_automaton_t(Max,Seq,A,T):-
Max #>=L,
fd_length(Seq,L),
maplist(var_t,Seq,Var_T_List), %find var_t for each
memberd_t(true,Var_T_List,true), %at least one var
goals_to_conj(Seq,GoalForWhen),
when(GoalForWhen,(A,T=true)).
max_seq_automaton_t(Max,Seq,A,T):-
Max #>=L,
fd_length(Seq,L),
maplist(var_t,Seq,Var_T_List), %find var_t for each
memberd_t(true,Var_T_List,true), %at least one var
goals_to_conj(Seq,GoalForWhen),
when(GoalForWhen,(\+A,T=false)).
In my view, you are making great progress with Prolog. At this point it makes sense to proceed a bit more prudently though. All the things you are asking for can, in principle, be solved easily. You only need a generalization of freeze/2, which is available as when/2.
However, let us take a step back and more deeply consider what is actually going on here.
Declaratively, when we state a constraint, we mean that it holds. We do not mean "It holds only when everything is instantiated", because that would reduce the constraint to a mere checker, leading to a "generate-and-test" approach. The point of constraints is exactly to prune whenever possible, leading to a much reduced search space in many cases.
Exactly the same holds for reified constraints. When we post a reified constraint, we state that the reification holds. Not only in cases where everything is instantiated, but always. The point is exactly that the (reified) constraint can be used in all directions. If the constraint that is being reified is already entailed, we get to know it. Likewise, if it cannot hold, we get to know it. If either possibility may be the case, we need to search explicitly for solutions, or determine that none exist. If we want to insist that the constraint that is being reified holds, it is easily possible; etc.
However, the point in all cases is exactly that we can focus on the declarative semantics of the constraint, very free from extra-logical, procedural considerations like what is being instantiated and when. If I answered your literal question, it would move you closer to operational considerations, much closer than you probably need or want in actuality.
Therefore, I am not going to answer your literal question. But I will give you a solution to your actual, underlying issue.
The point is to reifiy automaton/3. A constraint reification will not by itself prune anything as long as it is open whether the constraint that is being reified actually holds or not. Only when we insist that the constraint that is being reified holds does propagation occur.
It is easy to reify automaton/3, by reifying the conjunction of constraints that constitute its decomposition. Here is one way to do it, based on code that is freely available in SWI-Prolog:
:- use_module(library(clpfd)).
automaton(Vs, Ns, As, T) :-
must_be(list(list), [Vs,Ns,As]),
include_args1(source, Ns, Sources),
include_args1(sink, Ns, Sinks),
phrase((arcs_relation(As, Relation),
nodes_nums(Sinks, SinkNums0),
nodes_nums(Sources, SourceNums0)), [[]-0], _),
phrase(transitions(Vs, Start, End), Tuples),
list_to_drep(SinkNums0, SinkDrep),
list_to_drep(SourceNums0, SourceDrep),
( Start in SourceDrep #/\
End in SinkDrep #/\
tuples_in(Tuples, Relation)) #<==> T.
include_args1(Goal, Ls0, As) :-
include(Goal, Ls0, Ls),
maplist(arg(1), Ls, As).
list_to_drep([L|Ls], Drep) :-
foldl(drep_, Ls, L, Drep).
drep_(L, D0, D0\/L).
transitions([], S, S) --> [].
transitions([Sig|Sigs], S0, S) --> [[S0,Sig,S1]],
transitions(Sigs, S1, S).
nodes_nums([], []) --> [].
nodes_nums([Node|Nodes], [Num|Nums]) -->
node_num(Node, Num),
nodes_nums(Nodes, Nums).
arcs_relation([], []) --> [].
arcs_relation([arc(S0,L,S1)|As], [[From,L,To]|Rs]) -->
node_num(S0, From),
node_num(S1, To),
arcs_relation(As, Rs).
node_num(Node, Num), [Nodes-C] --> [Nodes0-C0],
    { (   member(N-I, Nodes0), N == Node
      ->  Num = I, C = C0, Nodes = Nodes0
      ;   Num = C0, C is C0 + 1, Nodes = [Node-C0|Nodes0]
      ) }.
sink(sink(_)).
source(source(_)).
Note that this propagates nothing whatsoever as long as T is unknown.
I now use the following definition for a few sample queries:
seq(Seq, T) :-
automaton(Seq, [source(a),sink(c)],
[arc(a,0,a), arc(a,1,b),
arc(b,0,a), arc(b,1,c),
arc(c,0,c), arc(c,1,c)], T).
Examples:
?- seq([X,1], T).
Result (omitted): Constraints are posted, nothing is propagated.
Next example:
?- seq([X,1], T), X = 3.
X = 3,
T = 0.
Clearly, the reified automaton/3 constraint does not hold in this case. However, the reifying constraint of course still holds, as always, and this is the reason why T=0 in this case.
Next example:
?- seq([1,1], T), indomain(T).
T = 0 ;
T = 1.
Oh-oh! What is going on here? How can it be that the constraint is both true and false? This is because we do not see all constraints that are actually posted in this example. Use call_residue_vars/2 to see the whole truth.
In fact, try it on the simpler example:
?- call_residue_vars(seq([1,1],0), Vs).
The pending residual constraints that still need to be satisfied in this case are:
_G1496 in 0..1,
_G1502#/\_G1496#<==>_G1511,
tuples_in([[_G1505,1,_G1514]], [[0,0,0],[0,1,1],[1,0,0],[1,1,2],[2,0,2], [2,1,2]])#<==>_G825,
tuples_in([[_G831,1,_G827]], [[0,0,0],[0,1,1],[1,0,0],[1,1,2],[2,0,2],[2,1,2]])#<==>_G826,
_G829 in 0#<==>_G830,
_G830 in 0..1,
_G830#/\_G828#<==>_G831,
_G828 in 0..1,
_G827 in 2#<==>_G828,
_G829 in 0..1,
_G829#/\_G826#<==>0,
_G826 in 0..1,
_G825 in 0..1
So, the above only holds if these constraints, which are said to still flounder, also hold.
Here is an auxiliary definition that helps you label remaining finite domain variables. It suffices for this example:
finite(V) :-
fd_dom(V, L..U),
dif(L, inf),
dif(U, sup).
We can now paste back the residual program (which consists of CLP(FD) constraints), and use label/1 to label the variables whose domains are finite:
?- Vs0 = [_G1496, _G1499, _G1502, _G1505, _G1508, _G1511, _G1514, _G1517, _G1520, _G1523, _G1526],
_G1496 in 0..1,
_G1502#/\_G1496#<==>_G1511,
tuples_in([[_G1505,1,_G1514]], [[0,0,0],[0,1,1],[1,0,0],[1,1,2],[2,0,2], [2,1,2]])#<==>_G825,
tuples_in([[_G831,1,_G827]], [[0,0,0],[0,1,1],[1,0,0],[1,1,2],[2,0,2],[2,1,2]])#<==>_G826,
_G829 in 0#<==>_G830, _G830 in 0..1,
_G830#/\_G828#<==>_G831, _G828 in 0..1,
_G827 in 2#<==>_G828, _G829 in 0..1,
_G829#/\_G826#<==>0, _G826 in 0..1, _G825 in 0..1,
include(finite, Vs0, Vs),
label(Vs).
Note that we cannot directly use labeling in the original program, i.e., we cannot do:
?- call_residue_vars(seq([1,1],0), Vs), <label subset of Vs>.
because call_residue_vars/2 also brings internal variables to the surface that, although they have a domain assigned and look like regular CLP(FD) variables, are not meant to directly participate in any labeling.
In contrast, the residual program can be used without any problem for further reasoning, and it is in fact meant to be used that way.
In this concrete case, after labeling the variables whose domains are still finite in the case above, some constraints still remain. They are of the form:
tuples_in([[_G1487,1,_G1496]], [[0,0,0],[0,1,1],[1,0,0],[1,1,2],[2,0,2],[2,1,2]])#<==>_G1518
Exercise: Does it follow from this, however indirectly, that the original query, i.e., seq([1,1],0), cannot hold?
So, to summarize:
Constraint reification does not in itself cause propagation of the constraint that is being reified.
Constraint reification often lets you detect that a constraint cannot hold.
In general, CLP(FD) propagation is necessarily incomplete, i.e., we cannot be sure that there is a solution just because our query succeeds.
labeling/2 lets you see whether there are concrete solutions, if domains are finite.
To see all pending constraints, wrap your query in call_residue_vars/2.
As long as pending constraints remain, it is only a conditional answer.
Recommendation: To make sure that no floundering constraints remain, wrap your query in call_residue_vars/2 and look for any residual constraints on the toplevel.
Consider using the widely available Prolog coroutining predicate when/2 (for details, consider reading the SICStus Prolog manual page on when/2).
Note that you can, in principle, implement freeze/2 like this:
freeze(V,Goal) :-
when(nonvar(V),Goal).
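In the same spirit, waiting on a whole list of variables (which is what the question asks for) can be done with a ground/1 condition. A minimal sketch, with a made-up name list_ground_goal/2:
list_ground_goal(Vars, Goal) :-
    when(ground(Vars), Goal).
% ?- list_ground_goal([X,Y], format("hello, ~w and ~w\n", [X,Y])), X = bob, Y = jane.
% The format/2 goal runs exactly once, after the last variable is bound.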
What you are implementing appears to me to be a variation of the following:
delayed_until_ground_t(Goal,T) :-
    (   ground(Goal)
    ->  (   call(Goal)
        ->  T = true
        ;   T = false
        )
    ;   T = true,  when(ground(Goal), once(Goal))
    ;   T = false, when(ground(Goal), \+(Goal))
    ).
Delaying goals can be a really nice feature, but be aware of the perils of delaying forever.
Make sure to read and digest the above answer by @mat regarding call_residue_vars/2!

Counter-intuitive behavior of min_member/2

min_member(-Min, +List)
True when Min is the smallest member in the standard order of terms. Fails if List is empty.
?- min_member(3, [1,2,X]).
X = 3.
The explanation is of course that variables come before all other terms in the standard order of terms, and unification is used. However, the reported solution feels somehow wrong.
How can it be justified? How should I interpret this solution?
EDIT:
One way to prevent min_member/2 from succeeding with this solution is to change the standard library (SWI-Prolog) implementation as follows:
xmin_member(Min, [H|T]) :-
    xmin_member_(T, H, Min).
xmin_member_([], Min0, Min) :-
    (   var(Min0), nonvar(Min)
    ->  fail
    ;   Min = Min0
    ).
xmin_member_([H|T], Min0, Min) :-
    (   H @>= Min0
    ->  xmin_member_(T, Min0, Min)
    ;   xmin_member_(T, H, Min)
    ).
The rationale behind failing instead of throwing an instantiation error (what @mat suggests in his answer, if I understood correctly) is that this is a clear question:
"Is 3 the minimum member of [1,2,X], when X is a free variable?"
and the answer to this is (to me at least) a clear "No", rather than "I can't really tell."
This is the same class of behavior as sort/2:
?- sort([A,B,C], [3,1,2]).
A = 3,
B = 1,
C = 2.
And the same tricks apply:
?- min_member(3, [1,2,A,B]).
A = 3.
?- var(B), min_member(3, [1,2,A,B]).
B = 3.
The actual source of confusion is a common problem with general Prolog code. There is no clean, generally accepted classification of the kind of purity or impurity of a Prolog predicate. In a manual, and similarly in the standard, pure and impure built-ins are happily mixed together. For this reason, things are often confused, and talking about what should be the case and what not, often leads to unfruitful discussions.
How can it be justified? How should I interpret this solution?
First, look at the "mode declaration" or "mode indicator":
min_member(-Min, +List)
In the SWI documentation, this describes the way how a programmer shall use this predicate. Thus, the first argument should be uninstantiated (and probably also unaliased within the goal), the second argument should be instantiated to a list of some sort. For all other uses you are on your own. The system assumes that you are able to check that for yourself. Are you really able to do so? I, for my part, have quite some difficulties with this. ISO has a different system which also originates in DEC10.
Further, the implementation tries to be "reasonable" for unspecified cases. In particular, it tries to be steadfast in the first argument. So the minimum is first computed independently of the value of Min; then the resulting value is unified with Min. This robustness against misuse often comes at a price. In this case, min_member/2 always has to visit the entire list, no matter whether this is useful or not. Consider
?- length(L, 1000000), maplist(=(1),L), min_member(2, L).
Clearly, 2 is not the minimum of L. This could be detected by considering the first element of the list only. Due to the generality of the definition, the entire list has to be visited.
Output unification is handled similarly in the standard. You can spot those cases when the (otherwise) declarative description (the first part of a built-in's specification) explicitly refers to unification, like
8.5.4 copy_term/2
8.5.4.1 Description
copy_term(Term_1, Term_2) is true iff Term_2 unifies
with a term T which is a renamed copy (7.1.6.2) of
Term_1.
or
8.4.3 sort/2
8.4.3.1 Description
sort(List, Sorted) is true iff Sorted unifies with
the sorted list of List (7.1.6.5).
Here are those arguments (in brackets) of built-ins that can only be understood as being output arguments. Note that there are many more which effectively are output arguments, but that do not need the process of unification after some operation. Think of 8.5.2 arg/3 (3) or 8.2.1 (=)/2 (2) or (1).
8.5.4 copy_term/2 (2),
8.4.2 compare/3 (1),
8.4.3 sort/2 (2),
8.4.4 keysort/2 (2),
8.10.1 findall/3 (3),
8.10.2 bagof/3 (3),
8.10.3 setof/3 (3).
So much for your direct questions, there are some more fundamental problems behind:
Term order
Historically, "standard" term order1 has been defined to permit the definition of setof/3 and sort/2 about 1982. (Prior to it, as in 1978, it was not mentioned in the DEC10 manual user's guide.)
From 1982 on, term order was frequently (erm, ab-) used to implement other orders, particularly, because DEC10 did not offer higher-order predicates directly. call/N was to be invented two years later (1984) ; but needed some more decades to be generally accepted. It is for this reason that Prolog programmers have a somewhat nonchalant attitude towards sorting. Often they intend to sort terms of a certain kind, but prefer to use sort/2 for this purpose — without any additional error checking. A further reason for this was the excellent performance of sort/2 beating various "efficient" libraries in other programming languages decades later (I believe STL had a bug to this end, too). Also the complete magic in the code - I remember one variable was named Omniumgatherum - did not invite copying and modifying the code.
Term order has two problems: variables (which can be further instantiated to invalidate the current ordering) and infinite terms. Both are handled in current implementations without producing an error, but with still undefined results. Yet, programmers assume that everything will work out. Ideally, there would be comparison predicates that produce
instantiation errors for unclear cases like this suggestion. And another error for incomparable infinite terms.
Both SICStus and SWI have min_member/2, but only SICStus has min_member/3 with an additional argument to specify the order employed. So the goal
?- min_member(=<, M, Ms).
behaves more to your expectations, but only for numbers (plus arithmetic expressions).
Footnotes:
1 I quote standard, in standard term order, for this notion existed since about 1982 whereas the standard was published 1995.
Clearly min_member/2 is not a true relation:
?- min_member(X, [X,0]), X = 1.
X = 1.
yet, after simply exchanging the two goals by (highly desirable) commutativity of conjunction, we get:
?- X = 1, min_member(X, [X,0]).
false.
This is clearly quite bad, as you correctly observe.
Constraints are a declarative solution for such problems. In the case of integers, finite domain constraints are a completely declarative solution for such problems.
Without constraints, it is best to throw an instantiation error when we know too little to give a sound answer.
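For illustration, a sketch of such a variant; the name min_member_si/2 is made up here, and it simply mirrors the implementation quoted further below, except that it raises an instantiation error as soon as the standard-order comparison would depend on an unbound variable:
min_member_si(Min, [H|T]) :-
    min_member_si_(T, H, Min).
min_member_si_([], Min, Min).
min_member_si_([H|T], Min0, Min) :-
    (   ( var(H) ; var(Min0) )
    ->  throw(error(instantiation_error, min_member_si_/3))
    ;   H @>= Min0
    ->  min_member_si_(T, Min0, Min)
    ;   min_member_si_(T, H, Min)
    ).
% ?- min_member_si(3, [1,2,X]).
% now raises an instantiation error instead of answering X = 3.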
This is a common property of many (all?) predicates that depend on the standard order of terms, while the order between two terms can change after unification. Baseline is the conjunction below, which cannot be reverted either:
?- X @< 2, X = 3.
X = 3.
Most predicates using a -Value annotation for an argument say that pred(Value) is the same
as pred(Var), Value = Var. Here is another example:
?- sort([2,X], [3,2]).
X = 3.
These predicates only represent clean relations if the input is ground. It is too much to demand the input to be ground though, because they can be meaningfully used with variables, as long as the user is aware that s/he should not further instantiate any of the ordered terms. In that sense, I disagree with @mat. I do agree that constraints can surely make some of these relations sound.
This is how min_member/2 is implemented:
min_member(Min, [H|T]) :-
    min_member_(T, H, Min).
min_member_([], Min, Min).
min_member_([H|T], Min0, Min) :-
    (   H @>= Min0
    ->  min_member_(T, Min0, Min)
    ;   min_member_(T, H, Min)
    ).
So it seems that min_member/2 actually tries to unify Min (the first argument) with the smallest element in List in the standard order of terms.
I hope I am not off-topic with this third answer. I did not edit one of the previous two as I think it's a totally different idea. I was wondering if this undesired behaviour:
?- min_member(X, [A, B]), A = 3, B = 2.
X = A, A = 3,
B = 2.
can be avoided if some conditions can be postponed for the moment when A and B get instantiated.
promise_relation(Rel_2, X, Y):-
call(Rel_2, X, Y),
when(ground(X), call(Rel_2, X, Y)),
when(ground(Y), call(Rel_2, X, Y)).
min_member_1(Min, Lst):-
member(Min, Lst),
maplist(promise_relation(#=<, Min), Lst).
What I want from min_member_1(?Min, ?Lst) is to express a relation that says Min will always be lower (in the standard order of terms) than any of the elements in Lst.
?- min_member_1(X, L), L = [_,2,3,4], X = 1.
X = 1,
L = [1, 2, 3, 4] .
If variables get instantiated at a later time, the order in which they get bound becomes important as a comparison between a free variable and an instantiated one might be made.
?- min_member_1(X, [A,B,C]), B is 3, C is 4, A is 1.
X = A, A = 1,
B = 3,
C = 4 ;
false.
?- min_member_1(X, [A,B,C]), A is 1, B is 3, C is 4.
false.
But this can be avoided by unifying all of them in the same goal:
?- min_member_1(X, [A,B,C]), [A, B, C] = [1, 3, 4].
X = A, A = 1,
B = 3,
C = 4 ;
false.
Versions
If the comparisons are intended only for instantiated variables, promise_relation/3 can be changed to check the relation only when both variables get instantiated:
promise_relation(Rel_2, X, Y):-
when((ground(X), ground(Y)), call(Rel_2, X, Y)).
A simple test:
?- L = [_, _, _, _], min_member_1(X, L), L = [3,4,1,2].
L = [3, 4, 1, 2],
X = 1 ;
false.
Edits were made to improve the initial post thanks to false's comments and suggestions.
I have an observation regarding your xmin_member implementation. It fails on this query:
?- xmin_member(1, [X, 2, 3]).
false.
I tried to include the case when the list might include free variables. So, I came up with this:
ymin_member(Min, Lst):-
member(Min, Lst),
maplist(#=<(Min), Lst).
Of course it's worse in terms of efficiency, but it works on that case:
?- ymin_member(1, [X, 2, 3]).
X = 1 ;
false.
?- ymin_member(X, [X, 2, 3]).
true ;
X = 2 ;
false.

Trying to count steps through recursion?

This is a cube, the edges of which are directional; it can only go left to right, back to front, and top to bottom.
edge(a,b).
edge(a,c).
edge(a,e).
edge(b,d).
edge(b,f).
edge(c,d).
edge(c,g).
edge(d,h).
edge(e,f).
edge(e,g).
edge(f,h).
edge(g,h).
With the method below we can check whether we can go from one corner to another, for example from a to h with move(a,h):
move(X,Y):- edge(X,Y).
move(X,Y):- edge(X,Z), move(Z,Y).
With move2, I'm trying to implement counting of the steps required.
move2(X,Y,N):- N is N+1, edge(X,Y).
move2(X,Y,N):- N is N+1, edge(X,Z), move2(Z,Y,N).
How would I implement this?
Arithmetic evaluation is carried out as usual in Prolog, but assignment doesn't work as usual: a variable can only be bound once. So you need to introduce a new variable to hold the incremented value:
move2(X,Y,N,T):- T is N+1, edge(X,Y).
move2(X,Y,N,T):- M is N+1, edge(X,Z), move2(Z,Y,M,T).
and initialize N to 0 at first call. Such added variables (T in our case) are often called accumulators.
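So, with N initialised to 0 at the first call, a query like the following counts the edges along each path. On this cube every directed path from a to h has exactly 3 edges, so (a sketch of the expected answers):
?- move2(a, h, 0, Steps).
Steps = 3 ;
Steps = 3 ;
...  % one answer per a-to-h path; all of them have length 3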
move2(X,Y,1):- edge(X,Y), ! .
move2(X,Y,NN):- edge(X,Z), move2(Z,Y,N), NN is N+1 .
(is)/2 is very sensitive to instantiations in its second argument. That means that you cannot use it in an entirely relational manner. You can ask X is 1+1., you can even ask 2 is 1+1. but you cannot ask: 2 is X+1.
So when you are programming with predicates like (is)/2, you have to imagine what modes a predicate will be used with. Such considerations easily lead to errors, in particular, if you just started. But don't worry, also more proficient programmers still fall prey to such problems.
There is a clean alternative in several Prolog systems: In SICStus, YAP, SWI there is a library(clpfd) which permits you to express relations between integers. Usually this library is used for constraint programming, but you can also use it as a safe and clean replacement for (is)/2 on the integers. Even more so, this library is often very efficiently compiled such that the resulting code is comparable in speed to (is)/2.
?- use_module(library(clpfd)).
true.
?- X #= 1+1.
X = 2.
?- 2 #= 1+1.
true.
?- 2 #= X+1.
X = 1.
So now back to your program, you can simply write:
move2(X,Y,1):- edge(X,Y).
move2(X,Y,N0):- N0 #>= 1, N0 #= N1+1, edge(X,Z), move2(Z,Y,N1).
Now you get all distances as required.
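For example, asking for the distance from a to h now enumerates one answer per path and then terminates (a sketch of the expected answers):
?- move2(a, h, N).
N = 3 ;
N = 3 ;
...  % one answer per path, each of length 3, then the query fails finitely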
But there is more to it ...
To make sure that move2/3 actually terminates, try:
?- move2(A, B, N), false.
false.
Now we can be sure that move2/3 always terminates. Always?
Assume you have added a further edge:
edge(f, f).
Now above query loops. But still you can use your program to your advantage!
Determine the number of nodes:
?- setof(C,A^B^(edge(A,B),member(C,[A,B])),Cs), length(Cs, N).
Cs = [a, b, c, d, e, f, g, h], N = 8.
So the longest path will take just 7 steps!
Now you can ask the query again, but this time constraining N to a value less than or equal to 7:
?- 7 #>= N, move2(A,B, N), false.
false.
With this additional constraint, you have again a terminating definition! No more loops.
