Does the order of terms in unification matter? - prolog

Let's say I have two terms T1 and T2. I unified the two terms and got a result.
My question is: if I swap the terms and unify T2 and T1, will the result be the same or different?
I tried swapping the terms and got the same result. But in theory texts I have read that in Prolog the order is important.
So what do you think: is the result the same or different, and why?

A difference that shows up simply by replacing X = Y with Y = X is highly unlikely.
As long as you consider syntactic unification (using the occurs-check) or rational tree unification, the only differences might be some minimal performance differences.
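For a quick illustration (a minimal sketch; f/2, a, and b are arbitrary terms chosen here), swapping the two sides gives the same bindings:
?- f(X, a) = f(b, Y).
X = b,
Y = a.

?- f(b, Y) = f(X, a).
X = b,
Y = a.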
More visible differences surface when going beyond these well-defined relations:
when mixing both, unification may not terminate. I can only give you somewhat related examples in SWI and Scryer:
?- X = s(X), unify_with_occurs_check(X, s(X)).
X = s(X).
?- unify_with_occurs_check(X, s(X)), X = s(X).
false.
Above, commutativity of goals is broken. But then, we are mixing two incompatible theories with one another. So, we can't really complain.
?- Y = s(Y), unify_with_occurs_check(X-X,s(X)-Y).
false.
?- Y = s(Y), unify_with_occurs_check(X-X,Y-s(X)).
Y = s(Y), X = s(Y). % Scryer
Y = X, X = s(X). % SWI
And here we just exchange the order of arguments. It is a general intuition that (consistently) exchanging the arguments of a functor should not produce a difference, but alas, here again the incompatible mix is the culprit.
Differences may also surface when constraints and side-effects are involved. Still, I have failed to produce such a case just by replacing X = Y with Y = X.

Related

How can Prolog derive nonsense results such as 3 < 2?

A paper I'm reading says the following:
Plaisted [3] showed that it is possible to write formally correct
PROLOG programs using first-order predicate-calculus semantics and yet
derive nonsense results such as 3 < 2.
It is referring to the fact that Prologs didn't use the occurs check back then (the 1980s).
Unfortunately, the paper it cites is behind a paywall. I'd still like to see an example such as this. Intuitively, it feels like the omission of the occurs check just expands the universe of structures to include circular ones (but this intuition must be wrong, according to the author).
I hope this example isn't
smaller(3, 2) :- X = f(X).
That would be disappointing.
Here is the example from the paper in modern syntax:
three_less_than_two :-
        less_than(s(X), X).

less_than(X, s(X)).
Indeed we get:
?- three_less_than_two.
true.
Because:
?- less_than(s(X), X).
X = s(s(X)).
Specifically, this explains the choice of 3 and 2 in the query: Given X = s(s(X)) the value of s(X) is "three-ish" (it contains three occurrences of s if you don't unfold the inner X), while X itself is "two-ish".
Enabling the occurs check gets us back to logical behavior:
?- set_prolog_flag(occurs_check, true).
true.
?- three_less_than_two.
false.
?- less_than(s(X), X).
false.
So this is indeed along the lines of
arbitrary_statement :-
        arbitrary_unification_without_occurs_check.
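As an aside: if your system lacks the occurs_check flag, the ISO predicate unify_with_occurs_check/2 gives the same logical behavior on a per-goal basis:
?- unify_with_occurs_check(X, s(X)).
false.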
I believe this is the relevant part of the paper you can't see for yourself (no paywall restricted me from viewing it when using Google Scholar; you should try accessing it that way).
Ok, how does the given example work?
If I write it down:
sm(s(s(s(z))),s(s(z))) :- sm(s(X),X). % 3 < 2 :- s(X) < X
sm(X,s(X)). % forall X: X < s(X)
Query:
?- sm(s(s(s(z))),s(s(z))).
That's an infinite loop!
Turn it around:
sm(X,s(X)). % forall X: X < s(X)
sm(s(s(s(z))),s(s(z))) :- sm(s(X),X). % 3 < 2 :- s(X) < X
?- sm(s(s(s(z))),s(s(z))).
true ;
true ;
true ;
true ;
true ;
true
The deep problem is that X should be a Peano number. Once it is cyclic, we are no longer in Peano arithmetic. One has to add a \+ cyclic_term(X) guard in there. (maybe later, my mind is full now)
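A minimal sketch of that guard, assuming SWI-Prolog's cyclic_term/1 (true iff its argument is a cyclic term):
sm(X, s(X)) :- \+ cyclic_term(X). % forall acyclic X: X < s(X)
With this clause, the cyclic binding X = s(s(X)) can no longer make sm(s(X), X) succeed, although the query as a whole may still fail to terminate.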

Setting types of unbound variables in Prolog

I'm trying to find a way to set the type of a variable before it has been bound to a value. Unfortunately, the integer/1 predicate cannot be used for this purpose:
%This goal fails if Int is an unbound variable.
get_first_int(Int,List) :-
    integer(Int),member(Int,List),writeln(Int).
I wrote a predicate called is_integer that attempts to check the type in advance, but it does not work as I expected. It allows the variable to be bound to an atom instead of an integer:
:- initialization(main).
%This prints 'a' instead of 1.
main :- get_first_int(Int,[a,b,c,1]),writeln(Int).
get_first_int(Int,List) :-
    is_integer(Int),member(Int,List).
is_integer(A) :- integer(A);var(A).
Is it still possible to set the type of a variable that is not yet bound to a value?
In SWI-Prolog I have used when/2 for similar situations. I really don't know if it is a good idea; it definitely feels like a hack, but I guess it is good enough if you just want to say "this variable can only become X", where X is integer, or number, or atom, and so on.
So:
will_be_integer(X) :- when(nonvar(X), integer(X)).
and then:
?- will_be_integer(X), member(X, [a,b,c,1]).
X = 1.
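For comparison, SWI-Prolog's freeze/2 expresses the same idea slightly more directly (a sketch; the goal is delayed until X is bound):
will_be_integer(X) :- freeze(X, integer(X)).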
But I have the feeling that almost always you can figure out a less hacky way to achieve the same. For example, why not just write:
?- member(X, [a,b,c,1]), integer(X).
???
Specific constraints for integers
In addition to what Boris said, I have a recommendation for the particular case of integers: Consider using CLP(FD) constraints to express that a variable must be of type integer. To express only this quite general requirement, you can post a CLP(FD) constraint that necessarily holds for all integers.
For example:
?- X in inf..sup.
X in inf..sup.
From this point onwards, X can only be instantiated to an integer. Everything else will yield a type error.
For example:
?- X in inf..sup, X = 3.
X = 3.
?- X in inf..sup, X = a.
ERROR: Type error: `integer' expected, found `a' (an atom)
Declaratively, you can always replace a type error with silent failure, since no possible additional instantiation can make the program succeed if this error arises.
Thus, in case you prefer silent failure over this type error, you can obtain it with catch/3:
?- X in inf..sup, catch(X = a, error(type_error(integer,_),_), false).
false.
CLP(FD) constraints are tailor-made for integers, and also let you express further requirements for this specific domain in a convenient way.
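For example, to additionally require evenness (a small sketch using library(clpfd)):
?- X in inf..sup, X mod 2 #= 0, X = 3.
false.

?- X in inf..sup, X mod 2 #= 0, X = 4.
X = 4.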
Case-specific advice
Let us consider your specific example of get_first_int/2. First, let us rename it to list_first_integer/2, so that it is clear what each argument is, and also to indicate that we fully intend to use it in several directions: not just to "get", but also to test and ideally to generate lists and integers that are in this relation.
Second, note that this predicate is rather messy, since it impurely depends on the instantiation of the list and the integer, a property that cannot be expressed in first-order logic but depends on something outside of this logic. If we accept this, then one quite straightforward way to do what you primarily want is to write it as:
list_first_integer(Ls, I) :-
        once((member(I0, Ls), integer(I0))),
        I = I0.
This works as long as the list is sufficiently instantiated, which implicitly seems to be the case in your examples, but definitely need not be the case in general. For example, with fully instantiated lists, we get:
?- list_first_integer([a,b,c], I).
false.
?- list_first_integer([a,b,c,4], I).
I = 4.
?- list_first_integer([a,b,c,4], 3).
false.
In contrast, if the list is not sufficiently instantiated, then we have the following major problems:
?- list_first_integer(Ls, I).
nontermination
and further:
?- list_first_integer([X,Y,Z], I).
false.
even though a more specific instantiation succeeds:
?- X = 0, list_first_integer([X,Y,Z], I).
X = I, I = 0.
Core problem: Defaulty representation
The core problem is that you are reasoning here about defaulty terms: A list element that is still a variable may either be instantiated to an integer or to any other term in the future. A clean way out is to design your data representation to symbolically distinguish the possible cases. For example, let us use the wrapper i/1 to denote an integer, and o/1 to denote any other kind of term. With this representation, we can write:
list_first_integer([i(I)|_], I).
list_first_integer([o(_)|Ls], I) :-
        list_first_integer(Ls, I).
Now, we get correct results:
?- list_first_integer([X,Y,Z], I).
X = i(I) ;
X = o(_12702),
Y = i(I) ;
X = o(_12702),
Y = o(_12706),
Z = i(I) ;
false.
?- X = i(0), list_first_integer([X,Y,Z], I).
X = i(0),
I = 0 ;
false.
And the other examples also still work, if we only use the clean data representation:
?- list_first_integer([o(a),o(b),o(c)], I).
false.
?- list_first_integer([o(a),o(b),o(c),i(4)], I).
I = 4 ;
false.
?- list_first_integer([o(a),o(b),o(c),i(4)], 3).
false.
The most general query now allows us to generate solutions:
?- list_first_integer(Ls, I).
Ls = [i(I)|_16880] ;
Ls = [o(_16884), i(I)|_16890] ;
Ls = [o(_16884), o(_16894), i(I)|_16900] ;
Ls = [o(_16884), o(_16894), o(_16904), i(I)|_16910] ;
etc.
The price you have to pay for this generality lies in these symbolic wrappers. As you seem to care about correctness and also about generality of your code, I consider this a bargain in comparison to more error-prone defaulty approaches.
Synthesis
Note that CLP(FD) constraints can be naturally used together with a clean representation. For example, to benefit from more finely grained type errors as explained above, you can write:
list_first_integer([i(I)|_], I) :- I in inf..sup.
list_first_integer([o(_)|Ls], I) :-
        list_first_integer(Ls, I).
Now, you get:
?- list_first_integer([i(a)], I).
ERROR: Type error: `integer' expected, found `a' (an atom)
Initially, you may be faced with a defaulty representation. In my experience, a good approach is to convert it to a clean representation as soon as you can, for the sake of the remainder of your program in which you can then distinguish all cases symbolically in such a way that no ambiguity remains.
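For instance, a conversion from a defaulty list of already instantiated terms to the clean representation could look like this (terms_wrapped/2 is a name made up for this sketch):
terms_wrapped([], []).
terms_wrapped([T|Ts], [W|Ws]) :-
        (   integer(T) -> W = i(T)
        ;   W = o(T)
        ),
        terms_wrapped(Ts, Ws).

?- terms_wrapped([a,b,c,4], Ws), list_first_integer(Ws, I).
Ws = [o(a), o(b), o(c), i(4)],
I = 4 ;
false.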

Understanding recursivity in Prolog

I have this example:
descend(X,Y) :- child(X,Y).
descend(X,Y) :- child(X,Z), descend(Z,Y).
child(anne,bridget).
child(bridget,caroline).
child(caroline,donna).
It works great and I understand it. It is the solution to a little exercise. My solution was the same except for this change:
descend(X,Y) :- descend(X,Z), descend(Z,Y).
That is, I replaced child with descend in the body of the second descend rule.
If I query descend(X, Y). in the first solution, I obtain:
?- descend(X, Y).
X = anne,
Y = bridget ;
X = bridget,
Y = caroline ;
X = caroline,
Y = donna ;
X = anne,
Y = caroline ;
X = anne,
Y = donna ;
X = bridget,
Y = donna ;
false.
Which is correct. But if I run the same query with my solution, I get:
?- descend(X, Y).
X = anne,
Y = bridget ;
X = bridget,
Y = caroline ;
X = caroline,
Y = donna ;
X = anne,
Y = caroline ;
X = anne,
Y = donna ;
ERROR: Out of local stack
It doesn't give X = bridget, Y = donna ; and it also overflows. I understand why it overflows. What I don't understand is why it doesn't find this last solution. Is it because of the overflow? If so, why? (Why is the stack so big with such a small knowledge base?)
If I query descend(bridget, donna) it answers yes.
I'm having problems imagining the exploration tree...
Apart from that question, I guess that the original solution is more efficient (ignoring the fact that mine ends up in an infinite loop), isn't it?
Thanks!
I'm having problems imagining the exploration tree...
Yes, that's quite difficult in Prolog. And it would be worse if you had a bigger database! But most of the time it is not necessary to envision the precise search tree. Instead, you can use several quite robust notions.
Remember how you formulated your query: you looked at one solution after the other. But what you were really interested in was whether or not the query terminates. You can address that question without looking at the solutions by adding false:
?- descend(X, Y), false.
ERROR: Out of local stack
This query can never be true. It can only fail, overflow, loop, or produce another error. What remains is a very useful notion: universal termination, or, as in this case, universal non-termination.
This can be extended to your actual program:
descend(X,Y) :- false, child(X,Y).
descend(X,Y) :- descend(X,Z), false, descend(Z,Y).
If this fragment, called a failure-slice, does not terminate, then your original program does not terminate either. Look at this miserable remainder of your program! Not even child/2 is present any longer, and thus we can conclude that child/2 does not influence non-termination! The Y occurs only once, and X will never cause a failure. Thus descend/2 never terminates!
So this conclusion is much more general than just a statement about a specific search tree. It's a statement about all of them.
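For contrast, applying the same false test to the original, child-based definition shows universal termination:
?- descend(X, Y), false. % with descend(X,Y) :- child(X,Z), descend(Z,Y).
false.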
If you still want to reason about the precise order of solutions, you will have to go into the gory details of actual execution. But why bother? It is extremely complex, in particular if your child/2 relation contains cycles. Chances are that you will confuse things and build inaccurate theories (at least I did). No need for another cargo cult. I, for one, have given up on "stepping through" such myriads of detail, and I do not miss it.

Computer Reasoning about Prologish Boolos' Curious Inference

Boolos' curious inference was originally formulated with equations here. It is a recursive definition of a function f and a predicate d via the syntax of N+, the natural numbers without zero, generated from 1 and s(.).
But it can also be formulated with Horn clauses. The logical content is not exactly the same (the predicate f captures only the positive aspect of the function), but the problem type is the same. Take the following Prolog program:
f(_, 1, s(1)).
f(1, s(X), s(s(Y))) :- f(1, X, Y).
f(s(X), s(Y), T) :- f(s(X), Y, Z), f(X, Z, T).
d(1).
d(s(X)) :- d(X).
What's the theoretical logical outcome of the last query below, and can you demonstrably have a computer program in our time and space that produces that outcome, i.e. post the program on a gist so that everybody can run it?
?- f(X,X,Y).
X = 1,
Y = s(1)
X = s(1),
Y = s(s(s(1)))
X = s(s(1)),
Y = s(s(s(s(s(s(s(s(s(s(...))))))))))
ERROR: Out of global stack
?- f(s(s(s(s(1)))), s(s(s(s(1)))), X), d(X).
If the program that does the job of certifying the result is not itself a Prolog interpreter, as here, what would do the job, especially suited to this Prologish problem formulation?
One solution: Abstract interpretation
Preliminaries
In this answer, I use an interpreter to show that this holds. However, it is not a Prolog interpreter, because it does not interpret the program in exactly the same way Prolog interprets the program.
Instead, it interprets the program in a more abstract way. Such interpreters are therefore called abstract interpreters.
Program representation
Critically, I work directly with the source program, using only modifications that we, by purely algebraic reasoning, know can be safely applied. It helps tremendously for such reasoning that your source program is completely pure by construction, since it only uses pure predicates.
To simplify reasoning about the program, I now make all unifications explicit. It is easy to see that this does not change the meaning of the program, and can be easily automated. I obtain:
f(_, X, Y) :-
        X = 1,
        Y = s(1).
f(Arg, X, Y) :-
        Arg = 1,
        X = s(X0),
        Y = s(s(Y0)),
        f(Arg, X0, Y0).
f(X, Y, T) :-
        X = s(X0),
        Y = s(Y0),
        f(X, Y0, Z),
        f(X0, Z, T).
I leave it as an easy exercise to show that this is declaratively equivalent to the original program.
The abstraction
The abstraction I use is the following: Instead of reasoning over the concrete terms 1, s(1), s(s(1)) etc., I use the atom d for each term T for which I can prove that d(T) holds.
Let me show you what I mean by the following interpretation of unification:
interpret(d = N) :- d(N).
This says:
If d(N) holds, then N is to be regarded as identical to the atom d, which, as we said, shall denote any term for which d/1 holds.
Note that this differs significantly from what an actual unification between concrete terms d and N means! For example, we obtain:
?- interpret(X = s(s(1))).
X = d.
Pretty strange, but I hope you can get used to it.
Extending the abstraction
Of course, interpreting a single unification is not enough to reason about this program, since it also contains additional language elements.
I therefore extend the abstract interpretation to conjunctions and to calls of f/3.
Interpreting conjunctions is easy, but what about f/3?
Incremental derivations
If, during abstract interpretation, we encounter the goal f(X, Y, Z), then we know the following: In principle, the arguments can of course be unified with any terms for which the goal succeeds. So we keep track of those arguments for which we know the query can succeed in principle.
We thus equip the predicate with an additional argument: A list of f/3 goals that are logical consequences of the program.
In addition, we implement the following very important provision: If we encounter a unification that cannot be safely interpreted in abstract terms, then we throw an error instead of failing silently. This may for example happen if the unification would fail when regarded as an abstract interpretation although it would succeed as a concrete unification, or if we cannot fully determine whether the arguments are of the intended domain. The primary purpose of this provision is to avoid unintentional elimination of actual solutions due to oversights in the abstract interpreter. This is the most critical aspect in the interpreter, and any proof-theoretic mechanism will face closely related questions (how can we ensure that no proofs are missed?).
Here it is:
interpret(Var = N, _) :-
        must_be(var, Var),
        must_be(ground, N),
        d(N),
        Var = d.
interpret((A,B), Ds) :-
        interpret(A, Ds),
        interpret(B, Ds).
interpret(f(A, B, C), Ds) :-
        member(f(A, B, C), Ds).
Quis custodiet ipsos custodes?
How can we tell whether this is actually correct? That's the tough part! In fact, it turns out that the above is not sufficient to be certain to catch all cases, because it may simply fail if d(N) does not hold. It is obviously not acceptable for the abstract interpreter to fail silently for cases it cannot handle. So we need at least one more clause:
interpret(Var = N, _) :-
        must_be(var, Var),
        must_be(ground, N),
        \+ d(N),
        domain_error(d, N).
In fact, an abstract interpreter becomes a lot less error-prone when we reason about ground terms, and so I will use the atom any to represent "any term at all" in derived answers.
Over this domain, the interpretation of unification becomes:
interpret(Var = N, _) :-
        must_be(ground, N),
        (   var(Var) ->
            (   d(N) -> Var = d
            ;   N = s(d) -> Var = d
            ;   N = s(s(d)) -> Var = d
            ;   domain_error(d, N)
            )
        ;   Var == any -> true
        ;   domain_error(any, Var)
        ).
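To see this extended interpretation in action (a quick check, using the d/1 definition from the program):
?- interpret(X = s(1), []).
X = d.

?- interpret(X = s(d), []).
X = d.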
In addition, I have implemented further cases of the unification over this abstract domain. I leave it as an exercise to ponder whether this correctly models the intended semantics, and to implement further cases.
As it will turn out, this definition suffices to answer the posted question. However, it clearly leaves a lot to be desired: It is more complex than we would like, and it becomes increasingly harder to tell whether we have covered all cases. Note though that any proof-theoretic approach will face closely corresponding issues: The more complex and powerful it becomes, the harder it is to tell whether it is still correct.
All derivations: See you at the fixpoint!
It now remains to deduce everything that follows from the original program.
Here it is, a simple fixpoint computation:
derivables(Ds) :-
        functor(Head, f, 3),
        findall(Head-Body, clause(Head, Body), Clauses),
        derivables_fixpoint(Clauses, [], Ds).

derivables_fixpoint(Clauses, Ds0, Ds) :-
        findall(D, clauses_derivable(Clauses, Ds0, D), Ds1, Ds0),
        term_variables(Ds1, Vs),
        maplist(=(any), Vs),
        sort(Ds1, Ds2),
        (   same_length(Ds2, Ds0) -> Ds = Ds0
        ;   derivables_fixpoint(Clauses, Ds2, Ds)
        ).

clauses_derivable(Clauses, Ds0, Head) :-
        member(Head-Body, Clauses),
        interpret(Body, Ds0).
Since we are deriving ground terms, sort/2 removes duplicates.
Example query:
?- derivables(Ds).
ERROR: Arguments are not sufficiently instantiated
Somewhat anticlimactically, the abstract interpreter is unable to process this program!
Commutativity to the rescue
In a proof-theoretic approach, we search for, well, proofs. In an interpreter-based approach, we can either improve the interpreter or apply algebraic laws to transform the source program in a way that preserves essential properties.
In this case, I will do the latter, and leave the former as an exercise. Instead of searching for proofs, we are searching for equivalent ways to write the program so that our interpreter can derive the desired properties. For example, I now use commutativity of conjunction to obtain:
f(_, X, Y) :-
        X = 1,
        Y = s(1).
f(Arg, X, Y) :-
        Arg = 1,
        f(Arg, X0, Y0),
        X = s(X0),
        Y = s(s(Y0)).
f(X, Y, T) :-
        f(X, Y0, Z),
        f(X0, Z, T),
        X = s(X0),
        Y = s(Y0).
Again, I leave it as an exercise to carefully check that this program is declaratively equivalent to your original program.
iamque opus exegi ("and now I have completed the work"), because:
?- derivables(Ds).
Ds = [f(any, d, d)].
This shows that in each solution of f/3, the last two arguments are always terms for which d/1 holds! In particular, it also holds for the sample arguments you posted, even if there is no hope to ever actually compute the concrete terms!
Conclusion
By abstract interpretation, we have shown:
for all X where f(_, _, X) holds, d(X) also holds
beyond that, for all Y where f(_, Y, _) holds, d(Y) also holds.
The question only asked for a special case of the first property. We have shown significantly more!
In summary:
If f(_, Y, X) holds, then d(X) holds and d(Y) holds.
Prolog makes it comparatively easy and convenient to reason about Prolog programs. This often allows us to derive interesting properties of Prolog programs, such as termination properties and type information.
Please see Reasoning about programs for references and more explanation.
+1 for a great question and reference.

Steadfastness: Definition and its relation to logical purity and termination

So far, I have always taken steadfastness in Prolog programs to mean:
If, for a query Q, there is a subterm S, such that there is a term T that makes ?- S=T, Q. succeed although ?- Q, S=T. fails, then one of the predicates invoked by Q is not steadfast.
Intuitively, I thus took steadfastness to mean that we cannot use instantiations to "trick" a predicate into giving solutions that are otherwise not only never given, but rejected. Note the difference for nonterminating programs!
In particular, at least to me, logical-purity always implied steadfastness.
Example. To better understand the notion of steadfastness, consider an almost classical counterexample of this property that is frequently cited when introducing advanced students to operational aspects of Prolog, using a wrong definition of a relation between two integers and their maximum:
integer_integer_maximum(X, Y, Y) :-
        Y >= X,
        !.
integer_integer_maximum(X, _, X).
A glaring mistake in this—shall we say "wavering"—definition is, of course, that the following query incorrectly succeeds:
?- M = 0, integer_integer_maximum(0, 1, M).
M = 0. % wrong!
whereas exchanging the goals yields the correct answer:
?- integer_integer_maximum(0, 1, M), M = 0.
false.
A good solution of this problem is to rely on pure methods to describe the relation, using for example:
integer_integer_maximum(X, Y, M) :-
        M #= max(X, Y).
This works correctly in both cases, and can even be used in more situations:
?- integer_integer_maximum(0, 1, M), M = 0.
false.
?- M = 0, integer_integer_maximum(0, 1, M).
false.
| ?- X in 0..2, Y in 3..4, integer_integer_maximum(X, Y, M).
X in 0..2,
Y in 3..4,
M in 3..4 ? ;
no
Now the paper Coding Guidelines for Prolog by Covington et al., co-authored by the very inventor of the notion, Richard O'Keefe, contains the following section:
5.1 Predicates must be steadfast.
Any decent predicate must be “steadfast,” i.e., must work correctly if its output variable already happens to be instantiated to the output value (O’Keefe 1990).
That is,
?- foo(X), X = x.
and
?- foo(x).
must succeed under exactly the same conditions and have the same side effects.
Failure to do so is only tolerable for auxiliary predicates whose call patterns are
strongly constrained by the main predicates.
Thus, the definition given in the cited paper is considerably stricter than what I stated above.
For example, consider the pure Prolog program:
nat(s(X)) :- nat(X).
nat(0).
Now we are in the following situation:
?- nat(0).
true.
?- nat(X), X = 0.
nontermination
This clearly violates the property of succeeding under exactly the same conditions, because one of the queries no longer succeeds at all.
Hence my question: Should we call the above program not steadfast? Please justify your answer with an explanation of the intention behind steadfastness and its definition in the available literature, its relation to logical-purity as well as relevant termination notions.
In 'The Craft of Prolog', page 96, Richard O'Keefe says: 'we call the property of refusing to give wrong answers even when the query has an unexpected form (typically supplying values for what we normally think of as inputs*) steadfastness'.
*I am not sure if this should be outputs, i.e. in your query ?- M = 0, integer_integer_maximum(0, 1, M). with its wrong answer M = 0., M is used as an input, but the clause has been designed for it to be an output.
In nat(X), X = 0. we are using X as an output variable, not an input variable, but it has not given a wrong answer, as it does not give any answer at all. So I think that under that definition it could be called steadfast.
A rule of thumb he gives is to 'postpone output unification until after the cut.' Here we do not have a cut, but we still want to postpone the unification.
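A minimal sketch of that rule applied to the maximum example above, committing first and unifying the output only afterwards:
integer_integer_maximum(X, Y, M) :-
        Y >= X, !, M = Y.
integer_integer_maximum(X, _, X).

?- M = 0, integer_integer_maximum(0, 1, M).
false.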
However, I would have thought it sensible to put the base case first rather than the recursive case, so that nat(X), X = 0. would initially succeed... but you would still have other problems.
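Reordered as suggested, a sketch:
nat(0).
nat(s(X)) :- nat(X).

?- nat(X), X = 0.
X = 0 ;
nontermination
Now the query initially succeeds, though for example nat(X), X = a. still does not terminate, which is presumably among the "other problems".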
