Computer Reasoning about Prologish Boolos' Curious Inference

Boolos' curious inference was originally formulated with equations. It is a recursive definition of a function f and a predicate d via the syntax of N+, the natural numbers without zero, generated from 1 and s(.).
But it can also be formulated with Horn clauses. The logical content is not exactly the same (the predicate f captures only the positive aspect of the function), but the problem type is the same. Take the following Prolog program:
f(_, 1, s(1)).
f(1, s(X), s(s(Y))) :- f(1, X, Y).
f(s(X), s(Y), T) :- f(s(X), Y, Z), f(X, Z, T).
d(1).
d(s(X)) :- d(X).
What's the theoretical logical outcome of the last query, and can you demonstrably have a computer program, within our time and space, that produces the outcome, i.e. a program posted on a gist that everybody can run?
?- f(X,X,Y).
X = 1,
Y = s(1)
X = s(1),
Y = s(s(s(1)))
X = s(s(1)),
Y = s(s(s(s(s(s(s(s(s(s(...))))))))))
ERROR: Out of global stack
?- f(s(s(s(s(1)))), s(s(s(s(1)))), X), d(X).
If the program that does the job of certifying the result is not a Prolog interpreter itself, as here, what would do the job, especially suited to this Prologish problem formulation?

One solution: Abstract interpretation
Preliminaries
In this answer, I use an interpreter to show that this holds. However, it is not a Prolog interpreter, because it does not interpret the program in exactly the same way Prolog interprets the program.
Instead, it interprets the program in a more abstract way. Such interpreters are therefore called abstract interpreters.
Program representation
Critically, I work directly with the source program, using only modifications that we, by purely algebraic reasoning, know can be safely applied. It helps tremendously for such reasoning that your source program is completely pure by construction, since it only uses pure predicates.
To simplify reasoning about the program, I now make all unifications explicit. It is easy to see that this does not change the meaning of the program, and can be easily automated. I obtain:
f(_, X, Y) :-
    X = 1,
    Y = s(1).
f(Arg, X, Y) :-
    Arg = 1,
    X = s(X0),
    Y = s(s(Y0)),
    f(Arg, X0, Y0).
f(X, Y, T) :-
    X = s(X0),
    Y = s(Y0),
    f(X, Y0, Z),
    f(X0, Z, T).
I leave it as an easy exercise to show that this is declaratively equivalent to the original program.
The abstraction
The abstraction I use is the following: Instead of reasoning over the concrete terms 1, s(1), s(s(1)) etc., I use the atom d for each term T for which I can prove that d(T) holds.
Let me show you what I mean by the following interpretation of unification:
interpret(d = N) :- d(N).
This says:
If d(N) holds, then N is to be regarded identical to the atom d, which, as we said, shall denote any term for which d/1 holds.
Note that this differs significantly from what an actual unification between concrete terms d and N means! For example, we obtain:
?- interpret(X = s(s(1))).
X = d.
Pretty strange, but I hope you can get used to it.
Extending the abstraction
Of course, interpreting a single unification is not enough to reason about this program, since it also contains additional language elements.
I therefore extend the abstract interpretation to:
conjunction
calls of f/3.
Interpreting conjunctions is easy, but what about f/3?
Incremental derivations
If, during abstract interpretation, we encounter the goal f(X, Y, Z), then we know the following: In principle, the arguments can of course be unified with any terms for which the goal succeeds. So we keep track of those arguments for which we know the query can succeed in principle.
We thus equip the predicate with an additional argument: A list of f/3 goals that are logical consequences of the program.
In addition, we implement the following very important provision: If we encounter a unification that cannot be safely interpreted in abstract terms, then we throw an error instead of failing silently. This may for example happen if the unification would fail when regarded as an abstract interpretation although it would succeed as a concrete unification, or if we cannot fully determine whether the arguments are of the intended domain. The primary purpose of this provision is to avoid unintentional elimination of actual solutions due to oversights in the abstract interpreter. This is the most critical aspect in the interpreter, and any proof-theoretic mechanism will face closely related questions (how can we ensure that no proofs are missed?).
Here it is:
interpret(Var = N, _) :-
    must_be(var, Var),
    must_be(ground, N),
    d(N),
    Var = d.
interpret((A,B), Ds) :-
    interpret(A, Ds),
    interpret(B, Ds).
interpret(f(A, B, C), Ds) :-
    member(f(A, B, C), Ds).
Quis custodiet ipsos custodes? ("Who will guard the guards themselves?")
How can we tell whether this is actually correct? That's the tough part! In fact, it turns out that the above is not sufficient to be certain to catch all cases, because it may simply fail if d(N) does not hold. It is obviously not acceptable for the abstract interpreter to fail silently for cases it cannot handle. So we need at least one more clause:
interpret(Var = N, _) :-
    must_be(var, Var),
    must_be(ground, N),
    \+ d(N),
    domain_error(d, N).
In fact, an abstract interpreter becomes a lot less error-prone when we reason about ground terms, and so I will use the atom any to represent "any term at all" in derived answers.
Over this domain, the interpretation of unification becomes:
interpret(Var = N, _) :-
    must_be(ground, N),
    (   var(Var) ->
        (   d(N) -> Var = d
        ;   N = s(d) -> Var = d
        ;   N = s(s(d)) -> Var = d
        ;   domain_error(d, N)
        )
    ;   Var == any -> true
    ;   domain_error(any, Var)
    ).
In addition, I have implemented further cases of the unification over this abstract domain. I leave it as an exercise to ponder whether this correctly models the intended semantics, and to implement further cases.
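For concreteness, here are a few queries I would expect over this abstract domain (a sketch, assuming d/1 from the question is loaded together with this interpreter; the exact error message is system-specific):
?- interpret(X = s(s(1)), []).
X = d.

?- interpret(X = s(d), []).
X = d.

?- interpret(X = foo, []).
ERROR: Domain error: `d' expected, found `foo'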
As it will turn out, this definition suffices to answer the posted question. However, it clearly leaves a lot to be desired: It is more complex than we would like, and it becomes increasingly harder to tell whether we have covered all cases. Note though that any proof-theoretic approach will face closely corresponding issues: The more complex and powerful it becomes, the harder it is to tell whether it is still correct.
All derivations: See you at the fixpoint!
It now remains to deduce everything that follows from the original program.
Here it is, a simple fixpoint computation:
derivables(Ds) :-
    functor(Head, f, 3),
    findall(Head-Body, clause(Head, Body), Clauses),
    derivables_fixpoint(Clauses, [], Ds).

derivables_fixpoint(Clauses, Ds0, Ds) :-
    findall(D, clauses_derivable(Clauses, Ds0, D), Ds1, Ds0),
    term_variables(Ds1, Vs),
    maplist(=(any), Vs),
    sort(Ds1, Ds2),
    (   same_length(Ds2, Ds0) -> Ds = Ds0
    ;   derivables_fixpoint(Clauses, Ds2, Ds)
    ).

clauses_derivable(Clauses, Ds0, Head) :-
    member(Head-Body, Clauses),
    interpret(Body, Ds0).
Since we are deriving ground terms, sort/2 removes duplicates.
Example query:
?- derivables(Ds).
ERROR: Arguments are not sufficiently instantiated
Somewhat anticlimactically, the abstract interpreter is unable to process this program!
Commutativity to the rescue
In a proof-theoretic approach, we search for, well, proofs. In an interpreter-based approach, we can either improve the interpreter or apply algebraic laws to transform the source program in a way that preserves essential properties.
In this case, I will do the latter, and leave the former as an exercise. Instead of searching for proofs, we are searching for equivalent ways to write the program so that our interpreter can derive the desired properties. For example, I now use commutativity of conjunction to obtain:
f(_, X, Y) :-
    X = 1,
    Y = s(1).
f(Arg, X, Y) :-
    Arg = 1,
    f(Arg, X0, Y0),
    X = s(X0),
    Y = s(s(Y0)).
f(X, Y, T) :-
    f(X, Y0, Z),
    f(X0, Z, T),
    X = s(X0),
    Y = s(Y0).
Again, I leave it as an exercise to carefully check that this program is declaratively equivalent to your original program.
Iamque opus exegi ("And now I have completed my work"), because:
?- derivables(Ds).
Ds = [f(any, d, d)].
This shows that in each solution of f/3, the last two arguments are always terms for which d/1 holds! In particular, it also holds for the sample arguments you posted, even though there is no hope of ever actually computing the concrete terms!
Conclusion
By abstract interpretation, we have shown:
for all X where f(_, _, X) holds, d(X) also holds
beyond that, for all Y where f(_, Y, _) holds, d(Y) also holds.
The question only asked for a special case of the first property. We have shown significantly more!
In summary:
If f(_, Y, X) holds, then d(X) holds and d(Y) holds.
Prolog makes it comparatively easy and convenient to reason about Prolog programs. This often allows us to derive interesting properties of Prolog programs, such as termination properties and type information.
Please see Reasoning about programs for references and more explanation.
+1 for a great question and reference.

Related

Prolog order of clauses causes pathfinding to fail

I am developing a path finding algorithm in Prolog, giving all nodes accessible by a path from a starting node. To avoid duplicate paths, visited nodes are kept in a list.
Nodes and neighbors are defined as below:
node(a).
node(b).
node(c).
node(d).
node(e).
edge(a,b).
edge(b,c).
edge(c,d).
edge(b,d).
neighbor(X,Y) :- edge(X,Y).
neighbor(X,Y) :- edge(Y,X).
The original algorithm below works fine:
path2(X,Y) :-
    pathHelper(X,Y,[X]).

pathHelper(X,Y,L) :-
    neighbor(X,Y),
    \+ member(Y,L).
pathHelper(X,Y,H) :-
    neighbor(X,Z),
    \+ member(Z,H),
    pathHelper(Z,Y,[Z|H]).
This works fine
[debug] ?- path2(a,X).
X = b ;
X = c ;
X = d ;
X = d ;
X = c ;
false.
however, when changing the order of the two goals in the first clause of the second definition, as below,
pathHelper(X,Y,L) :-
    \+ member(Y,L),
    neighbor(X,Y).
When trying the same here, swipl returns the following:
[debug] ?- path2(a,X).
false.
The query doesn't work anymore, and only returns false. I have tried to understand this through the trace command, but still can't make sense of what exactly is wrong.
In other words, I am failing to understand why the order of neighbor(X,Y) and \+ member(Y,L) is crucial here. It makes sense to have neighbor(X,Y) first in terms of efficiency, but not in terms of correctness.
You are now encountering the not so clean-cut borders of pure Prolog and its illogical surroundings. Welcome to the real world.
Or rather, not welcome! Instead, let's try to improve your definition. The key problem is
\+ member(Y, [a]), Y = b.
which fails while
Y = b, \+ member(Y,[a]).
succeeds. There is no logic to justify this. It's just the operational mechanism of Prolog's built-in (\+)/1.
Happily, we can improve upon this. Enter non_member/2.
non_member(_X, []).
non_member(X, [E|Es]) :-
    dif(X, E),
    non_member(X, Es).
Now,
?- non_member(Y, [a]).
dif(Y,a).
Mark this answer, it says: Yes, Y is not an element of [a], provided Y is different from a. Think of the many solutions this answer includes, like Y = 42, or Y = b and infinitely many more such solutions that are not a. Infinitely many solutions captured in nine characters!
Now, both non_member(Y, [a]), Y = b and Y = b, non_member(Y, [a]) succeed. So exchanging them has influence only on runtime and space consumption. While we are at it, note that you check for non-membership in two clauses. You can factor this out. For a generic solution to this, see closure/3, sketched below. With it, you simply say: closure(neighbor, A, B).
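In case closure/3 is new to you, here is a minimal sketch of how it could be defined, reusing non_member/2 from above (an assumption about its definition, not the canonical version):

closure(Rel_2, X0, X) :-
    closure_(Rel_2, X0, X, [X0]).

closure_(Rel_2, X0, X, Visited) :-
    call(Rel_2, X0, X1),
    non_member(X1, Visited),
    (   X = X1
    ;   closure_(Rel_2, X1, X, [X1|Visited])
    ).

With it, ?- closure(neighbor, a, X). enumerates the nodes reachable from a.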
Also consider the case where you have only edge(a,a). Your definition fails here for path2(a,X). But shouldn't this rather succeed?
And the name path2/2 is not that fitting, rather reserve this word for an actual path.
The doubt you have is related to how Prolog handles negation. Prolog uses negation as failure: if Prolog has to negate a goal g (written not(g) or \+ g), it tries to prove g by executing it; if g fails, then not(g) succeeds, and vice versa.
Keep in mind also that, after the execution of not(g), if the goal has variables, they will not be instantiated. This is because Prolog would have to instantiate the variables with all the terms that make g fail, and this is likely an infinite set (for example, for a list, not(member(A,[a])) would have to instantiate the variable A with all the terms that are not in the list).
Let's see an example. Consider this simple program:
test :-
    L = [a,b],
    \+ member(A, L),
    writeln(A).
and run it with ?- trace, test. First of all you get a "Singleton variable in \+: A" warning, for the reason I explained before, but let's ignore it and see what happens.
Call:test
Call:_5204=[a, b]
Exit:[a, b]=[a, b]
Call:lists:member(_5204, [a, b])
Exit:lists:member(a, [a, b]) % <-----
Fail:test
false
You see at the highlighted line that the variable A is instantiated to a, so member/2 succeeds and \+ member(A,L) is false.
So, in your code, the clause pathHelper(X,Y,L) :- \+ member(Y,L), neighbor(X,Y). will always fail: Y is uninstantiated when the negation is executed, so member(Y,L) can always succeed by unifying Y with an element of L, and its negation fails. If you swap the two goals, Y is bound by neighbor(X,Y) first, and then member/2 can genuinely fail (so \+ member(Y,L) succeeds).

Prolog - substitution and evaluation

Hello good people of programming.
Logic programming is always fascinating compared to imperative programming.
While pursuing the unknowns of logic programming, I have encountered some problems with arithmetic expressions.
Here is the code I have done so far.
number_atom(N) :-
    (   number(N) -> functor(N, _, _)
    ;   functor(N, _, _), atom(N)
    ).

arithmeticAdd_expression(V, V, Val, Val).
arithmeticAdd_expression(N, _Var, _Val, N) :-
    number_atom(N).
arithmeticAdd_expression(X+Y, Var, Val, R) :-
    arithmeticAdd_expression(X, Var, Val, RX),
    arithmeticAdd_expression(Y, Var, Val, RY),
    (   number(RX), number(RY) -> R is RX + RY
    ;   R = RX + RY
    ).
Taking the add operation as an example:
arithmeticAdd_expression(Expression, Variable, Value, Result)

?- arithmeticAdd_expression(a+10, a, 1, Result).
Result = 11 ;
Result = a + 10.

?- arithmeticAdd_expression(a+10, b, 1, Result).
Result = a + 10.
What I would like to achieve is this:
if the atom(s) in the Expression can be substituted by the given Variable and Value, then the Result is the number only, like the example shown above (Result = 11); else, the Result is the Expression itself. My problem with the code is somewhere in there, I just can't figure it out. So, can someone please help me? Thank you.
An important attraction of logic programming over, say, functional programming is that you can often use the same code in multiple directions.
This means that you can ask not only for a particular result if the inputs are given, but also ask how solutions look like in general.
However, for this to work, you have to put some thought into the way you represent your data. For example, in your case, any term in your expression that is still a logical variable may denote either a given number or an atom that should be interpreted differently than a plain number or an addition of two other terms. This is called a defaulty representation because you have to decide what a variable should denote by default, and there is no way to restrict its meaning to only one of the possible cases.
Therefore, I suggest first of all to change the representation so that you can symbolically distinguish the two cases. For example, to represent expressions in your case, let us adopt the convention that:
atoms are denoted by the wrapper a/1
numbers are denoted by the wrapper n/1.
and as is already the case, (+)/2 shall denote addition of two expressions.
So, a defaulty term like b+10 shall now be written as: a(b)+n(10). Note the use of the wrappers a/1 and n/1 to make clear which case we are dealing with. Such a representation is called clean. The wrappers are arbitrarily (though mnemonically) chosen, and we could have used completely different wrappers such as atom/1 and number/1, or atm/1 and nmb/1. The key property is only that we can now symbolically distinguish different cases by virtue of their outermost functor and arity.
Now the key advantage: Using such a convention, we can write for example: a(X)+n(Y). This is a generalization of the earlier term. However, it carries a lot more information than only X+Y, because in the latter case, we have lost track of what these variables stand for, while in the former case, this distinction is still available.
Now, assuming that this convention is used in expressions, it becomes straight-forward to describe the different cases:
expression_result(n(N), _, _, n(N)).
expression_result(a(A), A, N, n(N)).
expression_result(a(A), Var, _, a(A)) :-
    dif(A, Var).
expression_result(X+Y, Var, Val, R) :-
    expression_result(X, Var, Val, RX),
    expression_result(Y, Var, Val, RY),
    addition(RX, RY, R).

addition(n(X), n(Y), n(Z)) :- Z #= X + Y.
addition(a(X), Y, a(X)+Y).
addition(X, a(Y), X+a(Y)).
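Note that #=/2 used here comes from a constraint library; in SWI-Prolog it becomes available with:

:- use_module(library(clpfd)).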
Note that we can now use pattern matching to distinguish the cases. No more if-then-elses, and no more atom/1 or number/1 tests are necessary.
Your test cases work as expected:
?- expression_result(a(a)+n(10), a, 1, Result).
Result = n(11) ;
false.
?- expression_result(a(a)+n(10), b, 1, Result).
Result = a(a)+n(10) ;
false.
And now the key advantage: With such a pure program (please see logical-purity for more information), we can also ask "What do results look like in general?"
?- expression_result(Expr, Var, N, R).
Expr = R, R = n(_1174) ;
Expr = a(Var),
R = n(N) ;
Expr = R, R = a(_1698),
dif(_1698, Var) ;
Expr = n(_1852)+n(_1856),
R = n(_1896),
_1852+_1856#=_1896 ;
Expr = n(_2090)+a(Var),
R = n(_2134),
_2090+N#=_2134 .
Here, I have used logical variables for all arguments, and I get quite general answers from this program. This is why I have used clpfd constraints for declarative integer arithmetic.
Thus, your immediate issue can be readily solved by using a clean representation, and using the code above.
Only one very small challenge remains: Maybe you actually want to use a defaulty representation such as c+10 (instead of a(c)+n(10)). The task you are then facing is to convert the defaulty representation to a clean one, for example via a predicate defaulty_clean/2. I leave this as an easy exercise. Once you have a clean representation, you can use the code above without changes.
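To give an idea of what such a conversion could look like, here is one possible sketch of defaulty_clean/2 (my assumption: expressions consist only of atoms, integers and (+)/2; the atom/1 and integer/1 tests make it directional, i.e. it is meant for the defaulty-to-clean direction only):

defaulty_clean(A, a(A)) :- atom(A).
defaulty_clean(N, n(N)) :- integer(N).
defaulty_clean(X+Y, CX+CY) :-
    defaulty_clean(X, CX),
    defaulty_clean(Y, CY).

For example, ?- defaulty_clean(c+10, C). yields C = a(c)+n(10).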

Fold over a partial list

This is a question provoked by an already deleted answer to this question. The issue could be summarized as follows:
Is it possible to fold over a list, with the tail of the list generated while folding?
Here is what I mean. Say I want to calculate the factorial (this is a silly example but it is just for demonstration), and decide to do it like this:
fac_a(N, F) :-
    must_be(nonneg, N),
    (   N =< 1 -> F = 1
    ;   numlist(2, N, [H|T]),
        foldl(multiplication, T, H, F)
    ).

multiplication(X, Y, Z) :-
    Z is Y * X.
Here, I need to generate the list that I give to foldl. However, I could do the same in constant memory (without generating the list and without using foldl):
fac_b(N, F) :-
    must_be(nonneg, N),
    (   N =< 1 -> F = 1
    ;   fac_b_1(2, N, 2, F)
    ).

fac_b_1(X, N, Acc, F) :-
    (   X < N ->
        succ(X, X1),
        Acc1 is X1 * Acc,
        fac_b_1(X1, N, Acc1, F)
    ;   Acc = F
    ).
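Both versions agree on concrete queries; for example, I would expect (on SWI-Prolog, whose succ/2 and numlist/3 are used above):

?- fac_a(5, F).
F = 120.

?- fac_b(5, F).
F = 120.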
The point here is that unlike the solution that uses foldl, this uses constant memory: no need for generating a list with all values!
Calculating a factorial is not the best example, but it is easier to follow for the stupidity that comes next.
Let's say that I am really afraid of loops (and recursion), and insist on calculating the factorial using a fold. I still would need a list, though. So here is what I might try:
fac_c(N, F) :-
    must_be(nonneg, N),
    (   N =< 1 -> F = 1
    ;   foldl(fac_foldl(N), [2|Back], 2-Back, F-[])
    ).

fac_foldl(N, X, Acc-Back, F-Rest) :-
    (   X < N ->
        succ(X, X1),
        F is Acc * X1,
        Back = [X1|Rest]
    ;   Acc = F,
        Back = []
    ).
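For what it's worth, a quick check I would expect to pass on SWI-Prolog, assuming the code above:

?- fac_c(5, F).
F = 120.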
To my surprise, this works as intended. I can "seed" the fold with an initial value at the head of a partial list, and keep on adding the next element as I consume the current head. The definition of fac_foldl/4 is almost identical to the definition of fac_b_1/4 above: the only difference is that the state is maintained differently. My assumption here is that this should use constant memory: is that assumption wrong?
I know this is silly, but it could however be useful for folding over a list that cannot be known when the fold starts. In the original question we had to find a connected region, given a list of x-y coordinates. It is not enough to fold over the list of x-y coordinates once (you can however do it in two passes; note that there is at least one better way to do it, referenced in the same Wikipedia article, but this also uses multiple passes; altogether, the multiple-pass algorithms assume constant-time access to neighboring pixels!).
My own solution to the original "regions" question looks something like this:
set_region_rest([A|As], Region, Rest) :-
    sort([A|As], [B|Bs]),
    open_set_closed_rest([B], Bs, Region0, Rest),
    sort(Region0, Region).

open_set_closed_rest([], Rest, [], Rest).
open_set_closed_rest([X-Y|As], Set, [X-Y|Closed0], Rest) :-
    X0 is X - 1, X1 is X + 1,
    Y0 is Y - 1, Y1 is Y + 1,
    ord_intersection([X0-Y,X-Y0,X-Y1,X1-Y], Set, New, Set0),
    append(New, As, Open),
    open_set_closed_rest(Open, Set0, Closed0, Rest).
Using the same "technique" as above, we can twist this into a fold:
set_region_rest_foldl([A|As], Region, Rest) :-
    sort([A|As], [B|Bs]),
    foldl(region_foldl, [B|Back],
          closed_rest(Region0, Bs)-Back,
          closed_rest([], Rest)-[]),
    !,
    sort(Region0, Region).

region_foldl(X-Y,
             closed_rest([X-Y|Closed0], Set)-Back,
             closed_rest(Closed0, Set0)-Back0) :-
    X0 is X - 1, X1 is X + 1,
    Y0 is Y - 1, Y1 is Y + 1,
    ord_intersection([X0-Y,X-Y0,X-Y1,X1-Y], Set, New, Set0),
    append(New, Back0, Back).
This also "works". The fold leaves behind a choice point, because I haven't articulated the end condition as in fac_foldl/4 above, so I need a cut right after it (ugly).
The Questions
Is there a clean way of closing the list and removing the cut? In the factorial example, we know when to stop because we have additional information; however, in the second example, how do we notice that the back of the list should be the empty list?
Is there a hidden problem I am missing?
This looks like it's somehow similar to Implicit State with DCGs, but I have to admit I never quite got how that works; are these connected?
You are touching on several extremely interesting aspects of Prolog, each well worth several separate questions on its own. I will provide a high-level answer to your actual questions, and hope that you post follow-up questions on the points that are most interesting to you.
First, I will trim down the fragment to its essence:
essence(N) :-
    foldl(essence_(N), [2|Back], Back, _).

essence_(N, X0, Back, Rest) :-
    (   X0 #< N ->
        X1 #= X0 + 1,
        Back = [X1|Rest]
    ;   Back = []
    ).
Note that this prevents the creation of extremely large integers, so that we can really study the memory behaviour of this pattern.
To your first question: Yes, this runs in O(1) space (assuming constant space for arising integers).
Why? Because although you continuously create lists in Back = [X1|Rest], these lists can all be readily garbage collected because you are not referencing them anywhere.
To test memory aspects of your program, consider for example the following query, and limit the global stack of your Prolog system so that you can quickly detect growing memory by running out of (global) stack:
?- length(_, E),
N #= 2^E,
portray_clause(N),
essence(N),
false.
This yields:
1.
2.
...
8388608.
16777216.
etc.
It would be completely different if you referenced the list somewhere. For example:
essence(N) :-
    foldl(essence_(N), [2|Back], Back, _),
    Back = [].
With this very small change, the above query yields:
?- length(_, E),
N #= 2^E,
portray_clause(N),
essence(N),
false.
1.
2.
...
1048576.
ERROR: Out of global stack
Thus, whether a term is referenced somewhere can significantly influence the memory requirements of your program. This sounds quite frightening, but really is hardly an issue in practice: You either need the term, in which case you need to represent it in memory anyway, or you don't need the term, in which case it is simply no longer referenced in your program and becomes amenable to garbage collection. In fact, the amazing thing is rather that GC works so well in Prolog also for quite complex programs that not much needs to be said about it in many situations.
On to your second question: Clearly, using (->)/2 is almost always highly problematic in that it limits you to a particular direction of use, destroying the generality we expect from logical relations.
There are several solutions for this. If your CLP(FD) system supports zcompare/3 or a similar feature, you can write essence_/4 as follows:
essence_(N, X0, Back, Rest) :-
    zcompare(C, X0, N),
    closing(C, X0, Back, Rest).

closing(<, X0, [X1|Rest], Rest) :- X1 #= X0 + 1.
closing(=, _, [], _).
Another very nice meta-predicate called if_/3 was recently introduced in Indexing dif/2 by Ulrich Neumerkel and Stefan Kral. I leave implementing this with if_/3 as a very worthwhile and instructive exercise. Discussing this is well worth its own question!
On to the third question: How do states with DCGs relate to this? DCG notation is definitely useful if you want to pass around a global state to several predicates, where only a few of them need to access or modify the state, and most of them simply pass the state through. This is completely analogous to monads in Haskell.
The "normal" Prolog solution would be to extend each predicate with 2 arguments to describe the relation between the state before the call of the predicate, and the state after it. DCG notation lets you avoid this hassle.
Importantly, using DCG notation, you can copy imperative algorithms almost verbatim to Prolog, without the hassle of introducing many auxiliary arguments, even if you need global states. As an example for this, consider a fragment of Tarjan's strongly connected components algorithm in imperative terms:
function strongconnect(v)
    // Set the depth index for v to the smallest unused index
    v.index := index
    v.lowlink := index
    index := index + 1
    S.push(v)
This clearly makes use of a global stack and index, which ordinarily would become new arguments that you need to pass around in all your predicates. Not so with DCG notation! For the moment, assume that the global entities are simply easily accessible, and so you can code the whole fragment in Prolog as:
scc_(V) -->
    vindex_is_index(V),
    vlowlink_is_index(V),
    index_plus_one,
    s_push(V),
This is a very good candidate for its own question, so consider this a teaser.
At last, I have a general remark: In my view, we are only at the beginning of finding a series of very powerful and general meta-predicates, and the solution space is still largely unexplored. call/N, maplist/[3,4], foldl/4 and other meta-predicates are definitely a good start. if_/3 has the potential to combine good performance with the generality we expect from Prolog predicates.
If your Prolog implementation supports freeze/2 or a similar predicate (e.g. SWI-Prolog), then you can use the following approach:
fac_list(L, N, Max) :-
    (   N >= Max, L = [Max], !
    ;   freeze(L, (
            L = [N|Rest],
            N2 is N + 1,
            fac_list(Rest, N2, Max)
        ))
    ).

multiplication(X, Y, Z) :-
    Z is Y * X.

factorial(N, Factorial) :-
    fac_list(L, 1, N),
    foldl(multiplication, L, 1, Factorial).
The example above first defines a predicate (fac_list/3) which creates a "lazy" list of increasing integer values starting from N up to a maximum value (Max), where the next list element is generated only after the previous one has been "accessed" (more on that below). Then, factorial/2 just folds multiplication over the lazy list, resulting in constant memory usage.
The key to understanding how this example works is remembering that Prolog lists are, in fact, just terms of arity 2 with the name '.' (actually, in SWI-Prolog 7 the name was changed, but this is not important for this discussion), where the first argument represents the list item and the second argument represents the tail (or the terminating element, the empty list []). For example, [1, 2, 3] can be represented as:
.(1, .(2, .(3, [])))
Then, freeze is defined as follows:
freeze(+Var, :Goal)
Delay the execution of Goal until Var is bound
This means if we call:
freeze(L, L=[1|Tail]), L = [A|Rest].
then the following steps will happen:
freeze(L, L=[1|Tail]) is called
Prolog "remembers" that when L is unified with anything, it needs to call L=[1|Tail]
L = [A|Rest] is called
Prolog unifies L with .(A, Rest)
This unification triggers execution of L=[1|Tail]
This, obviously, unifies L, which at this point is bound to .(A, Rest), with .(1, Tail)
As a result, A gets unified with 1.
We can extend this example as follows:
freeze(L1, L1=[1|L2]),
freeze(L2, L2=[2|L3]),
freeze(L3, L3=[3]),
L1 = [A|R2], % L1=[1|L2] is called at this point
R2 = [B|R3], % L2=[2|L3] is called at this point
R3 = [C]. % L3=[3] is called at this point
This works exactly like the previous example, except that it gradually generates 3 elements, instead of 1.
As per Boris's request, here is the second example implemented using freeze. Honestly, I'm not quite sure whether this answers the question, as the code (and, IMO, the problem) is rather contrived, but here it is. At least I hope this will give other people an idea of what freeze might be useful for. For simplicity, I am using a 1D problem instead of 2D, but changing the code to use 2 coordinates should be rather trivial.
The general idea is to have (1) a predicate that generates a new Open/Closed/Rest/etc. state based on the previous one, (2) an "infinite" list generator which can be told to "stop" producing new elements from the "outside", and (3) a fold_step predicate which folds over the "infinite" list, generating a new state for each list item and, if that state is considered to be the last one, telling the generator to halt.
It is worth noting that the list's elements are used for no other reason than to inform the generator when to stop; all calculation state is stored inside the accumulator.
Boris, please clarify whether this gives a solution to your problem. More precisely, what kind of data were you trying to pass to the fold step handler (Item, Accumulator, Next Accumulator)?
adjacent(X, Y) :-
    (   succ(X, Y)
    ;   succ(Y, X)
    ).

state_seq(State, L) :-
    (   State == halt -> L = [], !
    ;   freeze(L, (
            L = [H|T],
            freeze(H, state_seq(H, T))
        ))
    ).

fold_step(Item, Acc, NewAcc) :-
    next_state(Acc, NewAcc),
    NewAcc = _:_:_:NewRest,
    (   var(NewRest) ->
        Item = next
    ;   Item = halt
    ).

next_state(Open:Set:Region:_Rest, NewOpen:NewSet:NewRegion:NewRest) :-
    Open = [],
    NewOpen = Open,
    NewSet = Set,
    NewRegion = Region,
    NewRest = Set.
next_state(Open:Set:Region:Rest, NewOpen:NewSet:NewRegion:NewRest) :-
    Open = [H|T],
    partition(adjacent(H), Set, Adjacent, NotAdjacent),
    append(Adjacent, T, NewOpen),
    NewSet = NotAdjacent,
    NewRegion = [H|Region],
    NewRest = Rest.

set_region_rest(Ns, Region, Rest) :-
    Ns = [H|T],
    state_seq(next, L),
    foldl(fold_step, L, [H]:T:[]:_, _:_:Region:Rest).
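Expected usage, as far as I can tell (SWI-Prolog; adjacency is succ/2-based, i.e. this is the 1D analogue of the regions problem):

?- set_region_rest([1,2,5,6,7,10], Region, Rest).
Region = [2, 1],
Rest = [5, 6, 7, 10].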
One fine improvement to the code above would be making fold_step a higher-order predicate, passing it next_state as the first argument, as sketched below.
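A sketch of that improvement (my naming; the state layout stays as above):

fold_step(Next_2, Item, Acc, NewAcc) :-
    call(Next_2, Acc, NewAcc),
    NewAcc = _:_:_:NewRest,
    (   var(NewRest) ->
        Item = next
    ;   Item = halt
    ).

The fold in set_region_rest/3 then becomes foldl(fold_step(next_state), L, ...).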

Steadfastness: Definition and its relation to logical purity and termination

So far, I have always taken steadfastness in Prolog programs to mean:
If, for a query Q, there is a subterm S, such that there is a term T that makes ?- S=T, Q. succeed although ?- Q, S=T. fails, then one of the predicates invoked by Q is not steadfast.
Intuitively, I thus took steadfastness to mean that we cannot use instantiations to "trick" a predicate into giving solutions that are otherwise not only never given, but rejected. Note the difference for nonterminating programs!
In particular, at least to me, logical-purity always implied steadfastness.
Example. To better understand the notion of steadfastness, consider an almost classical counterexample of this property that is frequently cited when introducing advanced students to operational aspects of Prolog, using a wrong definition of a relation between two integers and their maximum:
integer_integer_maximum(X, Y, Y) :-
    Y >= X,
    !.
integer_integer_maximum(X, _, X).
A glaring mistake in this—shall we say "wavering"—definition is, of course, that the following query incorrectly succeeds:
?- M = 0, integer_integer_maximum(0, 1, M).
M = 0. % wrong!
whereas exchanging the goals yields the correct answer:
?- integer_integer_maximum(0, 1, M), M = 0.
false.
A good solution of this problem is to rely on pure methods to describe the relation, using for example:
integer_integer_maximum(X, Y, M) :-
    M #= max(X, Y).
This works correctly in both cases, and can even be used in more situations:
?- integer_integer_maximum(0, 1, M), M = 0.
false.
?- M = 0, integer_integer_maximum(0, 1, M).
false.
| ?- X in 0..2, Y in 3..4, integer_integer_maximum(X, Y, M).
X in 0..2,
Y in 3..4,
M in 3..4 ? ;
no
Now the paper Coding Guidelines for Prolog by Covington et al., co-authored by the very inventor of the notion, Richard O'Keefe, contains the following section:
5.1 Predicates must be steadfast.
Any decent predicate must be “steadfast,” i.e., must work correctly if its output variable already happens to be instantiated to the output value (O’Keefe 1990).
That is,
?- foo(X), X = x.
and
?- foo(x).
must succeed under exactly the same conditions and have the same side effects.
Failure to do so is only tolerable for auxiliary predicates whose call patterns are
strongly constrained by the main predicates.
Thus, the definition given in the cited paper is considerably stricter than what I stated above.
For example, consider the pure Prolog program:
nat(s(X)) :- nat(X).
nat(0).
Now we are in the following situation:
?- nat(0).
true.
?- nat(X), X = 0.
nontermination
This clearly violates the property of succeeding under exactly the same conditions, because one of the queries no longer succeeds at all.
Hence my question: Should we call the above program not steadfast? Please justify your answer with an explanation of the intention behind steadfastness and its definition in the available literature, its relation to logical-purity as well as relevant termination notions.
In 'The Craft of Prolog', page 96, Richard O'Keefe says: 'we call the property of refusing to give wrong answers even when the query has an unexpected form (typically supplying values for what we normally think of as inputs*) steadfastness'.
*I am not sure if this should be outputs, i.e. in your query ?- M = 0, integer_integer_maximum(0, 1, M). M = 0. % wrong!, M is used as an input, but the clause has been designed for it to be an output.
In nat(X), X = 0. we are using X as an output variable not an input variable, but it has not given a wrong answer, as it does not give any answer. So I think under that definition it could be steadfast.
A rule of thumb he gives is 'postpone output unification until after the cut.' Here we have not got a cut, but we still want to postpone the unification.
However, I would have thought it would be sensible to have the base case first rather than the recursive case, so that nat(X), X = 0. would initially succeed... but you would still have other problems...
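For reference, the reordering alluded to here, with its expected behaviour (swapping the clauses does not fix backtracking, which still fails to terminate):

nat(0).
nat(s(X)) :- nat(X).

?- nat(X), X = 0.
X = 0 ;
nontermination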

Counter-intuitive behavior of min_member/2

min_member(-Min, +List)
True when Min is the smallest member in the standard order of terms. Fails if List is empty.
?- min_member(3, [1,2,X]).
X = 3.
The explanation is of course that variables come before all other terms in the standard order of terms, and unification is used. However, the reported solution feels somehow wrong.
How can it be justified? How should I interpret this solution?
EDIT:
One way to prevent min_member/2 from succeeding with this solution is to change the standard library (SWI-Prolog) implementation as follows:
xmin_member(Min, [H|T]) :-
    xmin_member_(T, H, Min).

xmin_member_([], Min0, Min) :-
    (   var(Min0), nonvar(Min) -> fail
    ;   Min = Min0
    ).
xmin_member_([H|T], Min0, Min) :-
    (   H @>= Min0 ->
        xmin_member_(T, Min0, Min)
    ;   xmin_member_(T, H, Min)
    ).
The rationale behind failing instead of throwing an instantiation error (what @mat suggests in his answer, if I understood correctly) is that this is a clear question:
"Is 3 the minimum member of [1,2,X], when X is a free variable?"
and the answer to this is (to me at least) a clear "No", rather than "I can't really tell."
This is the same class of behavior as sort/2:
?- sort([A,B,C], [3,1,2]).
A = 3,
B = 1,
C = 2.
And the same tricks apply:
?- min_member(3, [1,2,A,B]).
A = 3.
?- var(B), min_member(3, [1,2,A,B]).
B = 3.
The actual source of confusion is a common problem with general Prolog code. There is no clean, generally accepted classification of the kind of purity or impurity of a Prolog predicate. In a manual, and similarly in the standard, pure and impure built-ins are happily mixed together. For this reason, things are often confused, and talking about what should be the case and what not, often leads to unfruitful discussions.
How can it be justified? How should I interpret this solution?
First, look at the "mode declaration" or "mode indicator":
min_member(-Min, +List)
In the SWI documentation, this describes the way how a programmer shall use this predicate. Thus, the first argument should be uninstantiated (and probably also unaliased within the goal), the second argument should be instantiated to a list of some sort. For all other uses you are on your own. The system assumes that you are able to check that for yourself. Are you really able to do so? I, for my part, have quite some difficulties with this. ISO has a different system which also originates in DEC10.
Further, the implementation tries to be "reasonable" for unspecified cases. In particular, it tries to be steadfast in the first argument. So the minimum is first computed independently of the value of Min. Then, the resulting value is unified with Min. This robustness against misuses comes often at a price. In this case, min_member/2 always has to visit the entire list. No matter if this is useful or not. Consider
?- length(L, 1000000), maplist(=(1),L), min_member(2, L).
Clearly, 2 is not the minimum of L. This could be detected by considering the first element of the list only. Due to the generality of the definition, the entire list has to be visited.
This way of handling output unification is similarly handled in the standard. You can spot those cases when the (otherwise) declarative description (which is the first of a built-in) explicitly refers to unification, like
8.5.4 copy_term/2
8.5.4.1 Description
copy_term(Term_1, Term_2) is true iff Term_2 unifies
with a term T which is a renamed copy (7.1.6.2) of
Term_1.
or
8.4.3 sort/2
8.4.3.1 Description
sort(List, Sorted) is true iff Sorted unifies with
the sorted list of List (7.1.6.5).
Here are those arguments (in brackets) of built-ins that can only be understood as being output arguments. Note that there are many more which effectively are output arguments, but that do not need the process of unification after some operation. Think of 8.5.2 arg/3 (3) or 8.2.1 (=)/2 (2) or (1).
8.5.4 copy_term/2 (2),
8.4.2 compare/3 (1),
8.4.3 sort/2 (2),
8.4.4 keysort/2 (2),
8.10.1 findall/3 (3),
8.10.2 bagof/3 (3),
8.10.3 setof/3 (3).
So much for your direct questions, there are some more fundamental problems behind:
Term order
Historically, "standard" term order1 has been defined to permit the definition of setof/3 and sort/2 about 1982. (Prior to it, as in 1978, it was not mentioned in the DEC10 manual user's guide.)
From 1982 on, term order was frequently (erm, ab-) used to implement other orders, particularly, because DEC10 did not offer higher-order predicates directly. call/N was to be invented two years later (1984) ; but needed some more decades to be generally accepted. It is for this reason that Prolog programmers have a somewhat nonchalant attitude towards sorting. Often they intend to sort terms of a certain kind, but prefer to use sort/2 for this purpose — without any additional error checking. A further reason for this was the excellent performance of sort/2 beating various "efficient" libraries in other programming languages decades later (I believe STL had a bug to this end, too). Also the complete magic in the code - I remember one variable was named Omniumgatherum - did not invite copying and modifying the code.
Term order has two problems: variables (which can be further instantiated to invalidate the current ordering) and infinite terms. Both are handled in current implementations without producing an error, but with still undefined results. Yet, programmers assume that everything will work out. Ideally, there would be comparison predicates that produce
instantiation errors for unclear cases like this suggestion. And another error for incomparable infinite terms.
Both SICStus and SWI have min_member/2, but only SICStus has min_member/3 with an additional argument to specify the order employed. So the goal
?- min_member(=<, M, Ms).
behaves more to your expectations, but only for numbers (plus arithmetic expressions).
Footnotes:
[1] I quote standard, in standard term order, for this notion has existed since about 1982, whereas the standard was published in 1995.
Clearly min_member/2 is not a true relation:
?- min_member(X, [X,0]), X = 1.
X = 1.
yet, after simply exchanging the two goals by (highly desirable) commutativity of conjunction, we get:
?- X = 1, min_member(X, [X,0]).
false.
This is clearly quite bad, as you correctly observe.
Constraints are a declarative solution for such problems. In the case of integers, finite domain constraints are a completely declarative solution for such problems.
Without constraints, it is best to throw an instantiation error when we know too little to give a sound answer.
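For integers, a declarative alternative along these lines could look as follows (a sketch using library(clpfd); unlike min_member/2, it applies only to integers):

:- use_module(library(clpfd)).

list_min([L|Ls], Min) :-
    foldl(min_, Ls, L, Min).

min_(X, Min0, Min) :- Min #= min(X, Min0).

Now the conjunction can be reversed freely:

?- list_min([X,0], M), X = 1.
X = 1,
M = 0.

?- X = 1, list_min([X,0], M).
X = 1,
M = 0.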
This is a common property of many (all?) predicates that depend on the standard order of terms, since the order between two terms can change after unification. The baseline is the conjunction below, which cannot be reversed either:
?- X @< 2, X = 3.
X = 3.
Most predicates using a -Value annotation for an argument say that pred(Value) is the same
as pred(Var), Value = Var. Here is another example:
?- sort([2,X], [3,2]).
X = 3.
These predicates only represent clean relations if the input is ground. It is too much to demand the input to be ground though because they can be meaningfully used with variables, as long as the user is aware that s/he should not further instantiate any of the ordered terms. In that sense, I disagree with @mat. I do agree that constraints can surely make some of these relations sound.
This is how min_member/2 is implemented:
min_member(Min, [H|T]) :-
    min_member_(T, H, Min).

min_member_([], Min, Min).
min_member_([H|T], Min0, Min) :-
    (   H @>= Min0 ->
        min_member_(T, Min0, Min)
    ;   min_member_(T, H, Min)
    ).
So it seems that min_member/2 actually tries to unify Min (the first argument) with the smallest element in List in the standard order of terms.
I hope I am not off-topic with this third answer. I did not edit one of the previous two as I think it's a totally different idea. I was wondering if this undesired behaviour:
?- min_member(X, [A, B]), A = 3, B = 2.
X = A, A = 3,
B = 2.
can be avoided if some conditions can be postponed until the moment when A and B get instantiated.
promise_relation(Rel_2, X, Y) :-
    call(Rel_2, X, Y),
    when(ground(X), call(Rel_2, X, Y)),
    when(ground(Y), call(Rel_2, X, Y)).

min_member_1(Min, Lst) :-
    member(Min, Lst),
    maplist(promise_relation(@=<, Min), Lst).
What I want from min_member_1(?Min, ?Lst) is to express a relation that says Min will always be lower (in the standard order of terms) than any of the elements in Lst.
?- min_member_1(X, L), L = [_,2,3,4], X = 1.
X = 1,
L = [1, 2, 3, 4] .
If variables get instantiated at a later time, the order in which they get bound becomes important as a comparison between a free variable and an instantiated one might be made.
?- min_member_1(X, [A,B,C]), B is 3, C is 4, A is 1.
X = A, A = 1,
B = 3,
C = 4 ;
false.
?- min_member_1(X, [A,B,C]), A is 1, B is 3, C is 4.
false.
But this can be avoided by unifying all of them in the same goal:
?- min_member_1(X, [A,B,C]), [A, B, C] = [1, 3, 4].
X = A, A = 1,
B = 3,
C = 4 ;
false.
Versions
If the comparisons are intended only for instantiated variables, promise_relation/3 can be changed to check the relation only when both variables get instantiated:
promise_relation(Rel_2, X, Y) :-
    when((ground(X), ground(Y)), call(Rel_2, X, Y)).
A simple test:
?- L = [_, _, _, _], min_member_1(X, L), L = [3,4,1,2].
L = [3, 4, 1, 2],
X = 1 ;
false.
Edits were made to improve the initial post, thanks to false's comments and suggestions.
I have an observation regarding your xmin_member implementation. It fails on this query:
?- xmin_member(1, [X, 2, 3]).
false.
I tried to include the case when the list might include free variables. So, I came up with this:
ymin_member(Min, Lst) :-
    member(Min, Lst),
    maplist(@=<(Min), Lst).
Of course it's worse in terms of efficiency, but it works on that case:
?- ymin_member(1, [X, 2, 3]).
X = 1 ;
false.
?- ymin_member(X, [X, 2, 3]).
true ;
X = 2 ;
false.
