I've got a very large number of equations which I am trying to use PROLOG to solve. However, I've come a minor cropper in that they are not specified in any sort of useful order - that is, some, if not many, variables are used before they are defined. These are all specified within the same predicate. Can PROLOG cope with the predicates being specified in a random order?
Absolutely... ni (Italian slang for "yes and no" at the same time).
That is, ideally Prolog requires that you specify what must be computed, not how, writing down the equations controlling the solution in a fairly general logical form, Horn clauses.
But this ideal is out of reach in practice, and this is where we, as programmers, come in. If you just want Prolog to apply plain arithmetic, you should topologically sort the formulae yourself.
But at that point, Prolog is no more useful than any other procedural language. It just makes such a topological sort easier, in the sense that formulas can be read as terms (the read/1 built-in is a full Prolog parser), variables can be identified and collected easily, and terms can be transformed, evaluated, and so on (metalanguage features are a strong point of Prolog).
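For instance, here is a minimal sketch of that idea (my own code, with a hypothetical eval_all/1 name): represent each equation as Var = Expression and repeatedly evaluate whichever equation already has a fully instantiated right-hand side, so the textual ordering of the equations does not matter.

:- use_module(library(lists)).

% Minimal sketch, assuming the equations are given as a list of Var = Expr
% terms sharing Prolog variables; eval_all/1 is a hypothetical helper.
eval_all([]).
eval_all(Eqs0) :-
    select(Var = Expr, Eqs0, Eqs),   % pick any equation...
    term_variables(Expr, []),        % ...whose right-hand side is ground
    !,
    Var is Expr,                     % evaluate it, binding Var everywhere
    eval_all(Eqs).

With this, eval_all([X = Y + 1, Y = 2]) succeeds with Y = 2 and X = 3, even though X's equation textually precedes Y's.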
The situation changes if you can use CLP(FD). Just an example: a bidirectional factorial (cool, isn't it?), from the documentation of the shining implementation that Markus Triska developed for SWI-Prolog:
You can also use CLP(FD) constraints as a more declarative alternative for ordinary integer arithmetic with is/2, >/2 etc. For example:
:- use_module(library(clpfd)).
n_factorial(0, 1).
n_factorial(N, F) :-
    N #> 0,
    N1 #= N - 1,
    F #= N * F1,
    n_factorial(N1, F1).
This predicate can be used in all directions. For example:
?- n_factorial(47, F).
F = 258623241511168180642964355153611979969197632389120000000000 ;
false.
?- n_factorial(N, 1).
N = 0 ;
N = 1 ;
false.
?- n_factorial(N, 3).
false.
To make the predicate terminate if any argument is instantiated, add the (implied) constraint F #\= 0 before the recursive call. Otherwise, the query n_factorial(N, 0) is the only non-terminating case of this kind.
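In other words, restating the clause above with that one extra goal, placed right before the recursive call as the documentation advises:

n_factorial(0, 1).
n_factorial(N, F) :-
    N #> 0,
    N1 #= N - 1,
    F #= N * F1,
    F #\= 0,                % implied, but needed for termination
    n_factorial(N1, F1).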
Thus, if you write your equations in CLP(FD), you stand a much better chance of having your 'equation system' solved as is. SWI-Prolog also has dedicated debugging support for the low-level details used to solve CLP(FD) constraints.
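As a small sketch of what that buys you (my own example, not from the question): the constraints below are stated 'out of order', with Z defined in terms of X and Y before either of them is fixed, and CLP(FD) still propagates to the unique solution.

:- use_module(library(clpfd)).

system(X, Y, Z) :-
    Z #= X + Y,      % uses X and Y before they are "defined"
    Y #= 2*X,
    X #= 3.

The query ?- system(X, Y, Z). yields X = 3, Y = 6, Z = 9 without any reordering on our part.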
HTH
Firstly, the clp(fd) documentation mentions:
In modern Prolog systems, arithmetic constraints subsume and supersede low-level predicates over integers. The main advantage of arithmetic constraints is that they are true relations and can be used in all directions. For most programs, arithmetic constraints are the only predicates you will ever need from this library.
Secondly, on a previously asked question, it was mentioned that include/3 is incompatible with clp(fd).
Does that mean that only clp(fd) operators and clp(fd) predicates can be used when writing prolog with the clp(fd) library?
Furthermore, for example, why is include/3 incompatible with clp(fd)? Is it because it does not use clp(fd) operators? To use include/3 in clp(fd) code, would one need to rewrite a version that uses clp(fd) operators and constraints?
why is include/3 incompatible with clp(fd)?
?- X = 1, include(#\=(1),[0,X,2],Xs), X = 1.
X = 1,
Xs = [0,2]. % looks good
?- include(#\=(1),[0,X,2],Xs), X = 1.
false, unexpected.
?- include(#\=(1),[0,X,2],Xs). % generalization
Xs = [0,X,2],
X in inf..0\/2..sup
; unexpected. % missing second answer
So, (#\=)/2 works in this case only if it is sufficiently instantiated. How can you be sure it is? Well, there is no direct safe test. And thus you will get incorrect results in certain cases. As long as these examples fit on a single line, it is rather easy to spot the error. But with a larger program, this is practically impossible. Because of this, constraints and include/3 are incompatible.
A way out would be to produce instantiation errors in cases with insufficient instantiation, but this is pretty hairy in the context of clpfd. Other built-in predicates like (=\=)/2 do this, and are limited in their applicability.
?- X = 1, include(=\=(1),[0,X,2],Xs), X = 1.
X = 1,
Xs = [0,2].
?- include(=\=(1),[0,X,2],Xs), X = 1.
instantiation_error. % much better than an incorrect answer
A safe variant of include/3 is tfilter/3 of library(reif). But before using it, I recommend you read this.
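As a rough sketch of how that could look here (assuming library(reif) is installed; neq_1_t/2 and bool01_t/2 are my own helper names, built on clpfd's reified (#<==>)/2):

:- use_module(library(clpfd)).
:- use_module(library(reif)).    % assumed to be installed separately

% Reified test "X is an integer different from 1", usable with tfilter/3.
neq_1_t(X, T) :-
    X #\= 1 #<==> B,
    bool01_t(B, T).

bool01_t(1, true).
bool01_t(0, false).

With this, tfilter(neq_1_t, [0,X,2], Xs) should enumerate both answers: one with Xs = [0,X,2] and X constrained away from 1, and one with X = 1 and Xs = [0,2].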
include/3 doesn't have to be incompatible with clpfd (or any other use of attributed variables) - it depends on whether the attributed variables are used safely (for the purposes of the program, rather than necessarily "logically pure"). E.g.:
:- use_module(library(clpfd)).
test_include(F) :-
    L = [N1, N2, N3, N4],
    N1 #< 2,
    N2 #> 5,
    N3 #< 3,
    N4 #> 8,
    include(gt_3_fd, L, F).

gt_3_fd(I) :-
    I #> 3.
Result in SWI-Prolog:
?- test_include(F).
F = [_A, _B],
_A in 6..sup,
_B in 9..sup.
The above code is safe because the variables used with clpfd are used consistently with clpfd, and the specified constraints mean that reification of gt_3_fd is never needed.
Once the variables are nonvar/ground, depending on the use-case (clpfd deals with integers rather than e.g. compound terms, so nonvar is good enough), the variables can also be used outside of clpfd. Example:
?- I = 5, I > 4, I #> 4, I #> 4.
I = 5.
An operator such as > uses the actual value of the variable, and ignores any attributes which might have been added to the variable by e.g. clpfd.
Logical purity, e.g. adding constraints to the elements in list L after the include/3 rather than before, is a separate issue, and applies to non-attributed variables also.
In summary: A program may be using some integers with clpfd, and some integers outside of clpfd, i.e. a mixture. That is OK, as long as the inside-or-outside distinction is consistently applied, while it is relevant (because e.g. labeling will produce actual values).
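A small sketch of such a mixture (my own example, with a hypothetical predicate name mixed/2): constrain and label inside clpfd first, then fall back to ordinary arithmetic.

:- use_module(library(clpfd)).

mixed(X, Y) :-
    X in 1..5,          % inside clpfd
    Y #= X * X,
    label([X]),         % X and Y are now concrete integers
    Y > 3.              % ordinary arithmetic is safe from here on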
I wrote a quick predicate in Prolog trying out CLP(FD) and its ability to solve systems of equations.
problem(A, B) :-
    A-B #= 320,
    A #= 21*B.
When I call it in SWI, I get:
?- problem(A,B).
320+B#=A,
21*B#=A.
Whereas in GNU, I get the correct answer of:
| ?- problem(A,B).
A = 336
B = 16
What's going on here? Ideally I'd like to get the correct results in SWI as it's a much more robust environment.
This is a good observation.
At first glance, it will no doubt appear to be a shortcoming of SWI that it fails to propagate as strongly as GNU Prolog.
However, there are also other factors at play here.
The core issue
To start, please try the following query in GNU Prolog:
| ?- X #= X.
Declaratively, the query can be read as: X is an integer. The reasons are:
(#=)/2 only holds for integers
X #= X does not constrain the domain of the integer X in any way.
However, at least on my machine, GNU Prolog answers with:
X = _#0(0..268435455)
So, in fact, the domain of the integer X has become finite even though we have not restricted it in any way!
For comparison, we get for example in SICStus Prolog:
?- X #= X.
X in inf..sup.
This shows that the domain of the integer X has not been restricted in any way.
Replicating the result with CLP(Z)
Let us level the playing field. We can simulate the above situation with SWI-Prolog by artificially restricting the variables' domains to, say, the finite interval 0..2^64:
?- problem(A, B),
   Upper #= 2^64,
   [A,B] ins 0..Upper.
In response, we now get with SWI-Prolog:
A = 336,
B = 16,
Upper = 18446744073709551616.
So, restricting the domain to a finite subset of integers has allowed us to replicate the result we know from GNU Prolog also with the CLP(FD) solver of SWI-Prolog or its successor, CLP(Z).
The reason for this
The ambition of CLP(Z) is to completely replace low-level arithmetic predicates in user programs by high-level declarative alternatives that can be used as true relations and of course also as drop-in replacements. For this reason, CLP(Z) supports unbounded integers, which can grow as large as your computer's memory allows. In CLP(Z), the default domain of all integer variables is the set of all integers. This means that certain propagations that are applied for bounded domains are not performed as long as one of the domains is infinite.
For example:
?- X #> Y, Y #> X.
X#=<Y+ -1,
Y#=<X+ -1.
This is a conditional answer: The original query is satisfiable iff the so-called residual constraints are satisfiable.
In contrast, we get with finite domains:
?- X #> Y, Y #> X, [X,Y] ins -5000..2000.
false.
As long as all domains are finite, we expect roughly the same propagation strength from the involved systems.
An inherent limitation
Solving equations over integers is undecidable in general. So, for CLP(Z), we know that there is no decision algorithm that always yields correct results.
For this reason, you sometimes get residual constraints instead of an unconditional answer. Over finite sets of integers, equations are of course decidable: If all domains are finite and you do not get a concrete solution as answer, use one of the enumeration predicates to exhaustively search for solutions.
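For the problem/2 predicate from the question, that could look like this (a sketch; the bounds are picked arbitrarily):

?- problem(A, B), [A,B] ins -10000..10000, label([A,B]).
% expected to yield the single solution A = 336, B = 16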
In systems that can reason over infinite sets of integers, you will sooner or later, and necessarily, encounter such phenomena.
In SWI-Prolog, the following query gives this result:
?- X mod 2 #= 0, X mod 2 #= 0.
X mod 2#=0,
X mod 2#=0.
While correct, there is obviously no need for the second constraint.
Similarly:
?- dif(X,0), dif(X,0).
dif(X, 0),
dif(X, 0).
Is there no way to avoid such duplicate constraints? (Obviously the most correct way would be to not write code that leads to that situation, but it is not always that easy).
You can either avoid posting redundant constraints in the first place, or remove them afterwards with a setof/3-like construct. Both approaches are very implementation specific. The best interface for this purpose is offered by SICStus. Other implementations like YAP or SWI more or less copied that interface but left out some essential parts. A recent attempt to overcome SWI's deficiencies was rejected.
Avoid posting constraints
In SICStus, you can use frozen/2 for this purpose:
| ?- dif(X,0), frozen(X,Goal).
Goal = prolog:dif(X,0),
prolog:dif(X,0) ? ;
no
| ?- X mod 2#=0, frozen(X, Goal).
Goal = clpfd:(X in inf..sup,X mod 2#=0),
X mod 2#=0,
X in inf..sup ? ;
no
Otherwise, copy_term/3 might be good enough, provided the constraints are not too interconnected with each other.
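Here is a minimal sketch of that route for dif/2 (dif_once/2 is my own helper name; the exact shape of the residual goals returned by copy_term/3 is implementation specific, e.g. they may be module-qualified, so treat this as illustrative only):

:- use_module(library(lists)).

% Post dif(X, Y) only if a structurally identical dif/2 residual goal is
% already attached to X or Y.  If the residual goals look different
% (other module qualification, swapped arguments), we merely post a
% harmless duplicate.
dif_once(X, Y) :-
    copy_term(X-Y, CX-CY, Residual),
    (   member(dif(A, B), Residual),
        A == CX,
        B == CY
    ->  true
    ;   dif(X, Y)
    ).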
Eliminate redundant constraints
Here, a setof-like construct together with call_residue_vars/2 and copy_term/3 is probably the best approach. Again, the original implementation is in SICStus....
For dif/2 alone, entailment can be tested without resorting to any internals:
difp(X,Y) :-
   ( X \= Y -> true
   ; dif(X, Y)
   ).
Few constraint programming systems implement contraction, which is also related to factoring in resolution theorem proving, since CLP labeling is not SMT. Contraction is a structural rule; it reads as follows, assuming constraints are stored before the (|-)/2 in negated form.
G, A, A |- B
------------ (Left-Contraction)
G, A |- B
We might also extend it to the case where the two A's are derivably equivalent. Most likely this is not implemented, since it is costly. Jekejeke Minlog, for example, already implements contraction for CLP(FD), i.e. finite domains. We find, for queries similar to the first query from the OP:
?- use_module(library(finite/clpfd)).
% 19 consults and 0 unloads in 829 ms.
Yes
?- Y+X*3 #= 2, 2-Y #= 3*X.
3*X #= -Y+2
?- X #< Y, Y-X #> 0.
X #=< Y-1
Basically we normalize to A1*X1+..+An*Xn #= B, respectively A1*X1+..+An*Xn #=< B, where gcd(A1,..,An) = 1 and X1,..,Xn are lexically ordered, and then we check whether the same constraint is already in the constraint store (a small sketch of this idea follows the CLP(H) example below). But for CLP(H), i.e. Herbrand domain terms, we have not yet implemented contraction. We are still deliberating an efficient algorithm:
?- use_module(library(term/herbrand)).
% 2 consults and 0 unloads in 35 ms.
Yes
?- neq(X,0), neq(X,0).
neq(X, 0),
neq(X, 0)
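Coming back to the CLP(FD) normalization mentioned above, here is a small illustration of the idea (SWI-Prolog; my own code, not Jekejeke Minlog's): a linear constraint A1*X1+..+An*Xn #= B is kept as a canonical term, so an identical constraint already in the store can be found by plain comparison.

:- use_module(library(pairs)).

% Sketch: a constraint Sum(Ci*Xi) #= B is represented as lin(Pairs, B),
% Pairs being a list of Var-Coeff.  Sorting the pairs (the standard order
% of terms orders the variables) and dividing by the gcd of the
% coefficients yields the canonical form.
lin_normalized(lin(Pairs0, B0), lin(Pairs, B)) :-
    msort(Pairs0, Pairs1),
    pairs_values(Pairs1, Cs),
    gcd_list(Cs, 0, G),
    G > 0,
    0 =:= B0 mod G,            % otherwise the #= constraint is unsatisfiable
    B is B0 // G,
    divide_coeffs(Pairs1, G, Pairs).

gcd_list([], G, G).
gcd_list([C|Cs], G0, G) :-
    G1 is gcd(G0, C),
    gcd_list(Cs, G1, G).

divide_coeffs([], _, []).
divide_coeffs([V-C0|Ps0], G, [V-C|Ps]) :-
    C is C0 // G,
    divide_coeffs(Ps0, G, Ps).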
Contraction for dif/2 would mean implementing a kind of (==)/2 via the instantiation defined in the dif/2 constraint, i.e. we would need to apply a recursive test following the pairing of variables and terms defined in the dif/2 constraint against all other dif/2 constraints already in the constraint store. Testing subsumption instead of contraction would also make more sense.
It is probably only feasible to implement contraction or subsumption for dif/2 with the help of some appropriate indexing technique. In Jekejeke Minlog, for example, we index on X1 for CLP(FD), but we have not yet implemented any indexing for CLP(H). What we first might need to figure out is a normal form for the dif/2 constraints; see also this problem here.
This is probably the most trivial implementation of a function that returns the length of a list in Prolog:
count([], 0).
count([_|B], T) :- count(B, U), T is U + 1.
One thing about Prolog that I still cannot wrap my head around is the flexibility of using variables as parameters.
So for example I can run count([a, b, c], 3). and get true. I can also run count([a, b], X). and get an answer X = 2.. What is odd (at least to me) is that I can also run count(X, 3). and get at least one result, which looks something like X = [_G4337877, _G4337880, _G4337883] ; before the interpreter disappears into an infinite loop. I can even run something truly "flexible" like count(X, A). and get X = [], A = 0 ; X = [_G4369400], A = 1., which is obviously incomplete but somehow really nice.
Therefore my multifaceted question. Can I somehow tell Prolog not to look beyond the first result when executing count(X, 3).? Can I somehow make Prolog generate any number of solutions for count(X, A).? Is there a limitation on what kinds of solutions I can generate? What is it about this specific predicate that prevents me from generating all solutions for all possible kinds of queries?
This is probably the most trivial implementation
Depends on the viewpoint: consider
count(L,C) :- length(L,C).
Shorter and functional. And this one also works for your use case.
edit
library CLP(FD) allows for
:- use_module(library(clpfd)).
count([], 0).
count([_|B], T) :- U #>= 0, T #= U + 1, count(B, U).
?- count(X,3).
X = [_G2327, _G2498, _G2669] ;
false.
(further) answering comments
It was clearly sarcasm
No, sorry for giving this impression. It was an attempt to give you a synthetic answer to your question. Every detail of the implementation of length/2 - indeed much longer than your code - has been carefully weighed to give us a general and efficient building block.
There must be some general concept
I would call (full) Prolog that general concept. From the very start, Prolog requires us to solve computational tasks by describing relations among predicate arguments. Once we have described our relations, we can query our 'knowledge database', and Prolog attempts to enumerate all answers, in a specific order.
High level concepts like unification and depth first search (backtracking) are keys in this model.
Now, I think you're looking for second-order constructs like var/1 that allow us to reason about our predicates. Such constructs cannot be written in (pure) Prolog, and a growing school of thought advises avoiding them, because they are rather difficult to use. So I posted an alternative using CLP(FD), which effectively shields us in some situations. In this question's specific context, it actually gives us a simple and elegant solution.
I am not trying to re-implement length
Well, I'm aware of this, but since count/2 aliases length/2, why not study the reference model? (See the source on the SWI-Prolog site.)
The answer you get for the query count(X,3) is actually not odd at all. You are asking which lists have a length of 3, and you get a list with 3 elements. The infinite loop appears because the variables B and U in the first goal of your recursive rule are unbound: there is nothing before that goal that could fail, so it is always possible to follow the recursion. In CapelliC's version there are 2 goals in the second rule, before the recursion, that fail if the second argument is smaller than 1. Maybe it becomes clearer if you consider this slightly altered version:
:- use_module(library(clpfd)).
count([], 0).
count([_|B], T) :-
    T #> 0,
    U #= T - 1,
    count(B, U).
Your query
?- count(X,3).
will not match the first rule but the second one and continue recursively until the second argument is 0. At that point the first rule will match and yield the result:
X = [_A,_B,_C] ?
The head of the second rule will also match but its first goal will fail because T=0:
X = [_A,_B,_C] ? ;
no
In your version above, however, Prolog will try the recursive goal of the second rule because of the unbound variables B and U, and hence loop infinitely.
The question
I have a question related to logical purity.
Is this program pure?
when(ground(X), X > 2).
Some [ir]relevant details about the context
I'm trying to write pure predicates with good termination properties. For instance, I want to write a predicate list_length/2 that describes the relation between a list and its length. I want to achieve the same termination behaviour as the built-in predicate length/2.
My question seeks to find if the following predicate is pure:
list_length([], 0).
list_length([_|Tail], N):-
    when(ground(N), (N > 0, N1 is N - 1)),
    when(ground(N1), N is N1 + 1),
    list_length(Tail, N1).
I can achieve my goal with clpfd ...
:- use_module(library(clpfd)).
:- set_prolog_flag(clpfd_monotonic, true).
list_length([], 0).
list_length([_|Tail], N):-
    ?(N) #> 0,
    ?(N1) #= ?(N) - 1,
    list_length(Tail, N1).
... or I can use var/1, nonvar/1 and !/0, but then it is hard to prove that the predicate is pure.
list_length([],0).
list_length([_|Tail], N):-
    nonvar(N), !,
    N > 0,
    N1 is N - 1,
    list_length(Tail, N1).
list_length([_|Tail], N):-
    list_length(Tail, N1),
    N is N1 + 1.
Logical purity of when/2 and ground/1
Note that there is the ISO built-in ground/1 which is just as impure as nonvar/1.
But it seems you are rather talking about the conditions for when/2. In fact, any accepted condition for when/2 is as pure as it can get. So this is not only true for ground/1.
Is this program pure?
when(ground(X), X > 2).
Yes, in the current sense of purity. That is, in the very same sense that considers library(clpfd) as pure. In the very early days of logic programming and Prolog, say in the 1970s, a pure program would have been only one that succeeds if it is true and fails if it is false. Nothing else.
However, today we accept that ISO errors, like type errors, are issued in place of silent failure. In fact, this makes much more sense from a practical point of view. Think of X = non_number, when(ground(X), X > 2). Note that this error system was introduced relatively late into Prolog.
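Concretely (a sketch of the behaviour; the exact error term and message differ between systems):

?- X = non_number, when(ground(X), X > 2).
% X is ground, so X > 2 runs immediately and raises a type error
% (non_number is not an evaluable term) instead of failing silently.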
While Prolog I reported errors of built-ins explicitly¹, neither the subsequent DEC10-Prolog (as of, e.g., 1978, 1982) nor C-Prolog contained a reliable error reporting system. Instead, a message was printed and the predicate failed, thus confusing errors with logical falsity. From this time there is still the value warning of the Prolog flag unknown (7.11.2.4 in ISO/IEC 13211-1:1995), which causes the attempt to execute an undefined predicate to print a warning and fail.
So where's the catch? Consider
?- when(ground(X), X> 2), when(ground(X), X < 2).
when(ground(X), X>2), when(ground(X), X<2).
These when/2s, while perfectly correct, now contribute a lot to producing inconsistencies as answers. After all, the above reads:
Yes, the query is true, provided the very same query is true.
Contrast this to SICStus' or SWI's library(clpfd):
?- X #> 2, X #< 2.
false.
So library(clpfd) is capable of detecting this inconsistency, whereas when/2 has to wait until its argument is ground.
Getting such conditional answers is often very confusing. In fact, in many situations many prefer a more mundane instantiation error to the somewhat cleaner when/2.
There is no obvious general answer to this. After all, many interesting theories for constraints are undecidable. Yes, the very harmless-looking library(clpfd) permits you to formulate undecidable problems already! So we will have to live with such conditional answers that do not contain solutions.
However, once you get a pure unconditional solution or once you get real failure you do know that this will hold.
list_length/2
Your definition using library(clpfd) is actually slightly better w.r.t. termination than what has been agreed upon for the Prolog prologue. Consider:
?- N in 1..3, list_length(L, N).
Also, the goal length(L,L) produces a type error in a very natural fashion. That is, without any explicit tests.
Your version using when/2 has some "natural" irregularities, like length(L,0+0) fails but length(L,1+0) succeeds. Otherwise it seems to be fine — by construction alone.
¹ The earliest account is on p. 9 of G. Battani, H. Meloni. Interpréteur du langage de programmation Prolog. Rapport de D.E.A. d'informatique appliquée, 1973. There, a built-in that encountered an error was replaced by a goal that was never resolved. In current terms, plus/3 of II-3-6 a, p. 13 would be written in a current system with freeze/2:
plus(X, Y, Z) :-
   (  integer(X), integer(Y), ( var(Z) ; integer(Z) )
   -> Z is X+Y
   ;  freeze(_, erreur(plus(X,Y,Z)))
   ).
So plus/3 was not "multi-directional".