?- X in 1..100.
X in 1..100.
?- X #> 0, X #< 101.
X in 1..100.
So far so good!
Now let's imagine that the upper bound of X depends on Y in 100..200.
?- Y in 100..200, X #> 0, X #=< Y.
Y in 100..200,
Y#>=X,
X in 1..200.
The domain of X propagates correctly to 1..200, with the constraint between X and Y.
But…
?- Y in 100..200, X in 1..Y.
ERROR: Arguments are not sufficiently instantiated
I thought that (in)/2 was equivalent to declaring the corresponding inequalities, but apparently it is not.
Is there any hidden reason why (in)/2 requires both bounds to be ground?
That's a very good question. It has a straightforward answer that may at first appear deep, yet is really easy to understand once you are familiar with declarative debugging approaches that are beginning to become more widely available.
Declarative debugging relies on properties of logic programs that we know always hold. A key property of pure logic programs (see logical-purity) is monotonicity:
Generalizing a goal can at most increase the set of solutions.
Here is an example:
?- append([a], [b], [c]).
false.
Why does this fail? We want to find an actual cause of failure, an explanation why this does not succeed. It is tempting to think in procedural terms and trace the execution, but this quickly gets extremely complex and does not really show us the essence of the reason.
Instead, we can get to the essence by generalizing the goals systematically. For example, the following succeeds:
?- append([a], [b], Cs).
I have generalized the query by replacing the term [c] with Cs, which subsumes it.
You can do this automatically; see Ulrich Neumerkel's library(diadem) and especially the GUPU system, where these ideas are fully developed and help tremendously to locate the precise location of mistakes in Prolog programs.
To apply such techniques to their utmost extent, you must aim to preserve as many interesting properties as possible in your programs, essentially by staying in the pure monotonic core of Prolog. I say "essentially" because these techniques also work somewhat beyond this subset on a class of programs that is very hard to classify formally. Note also that much of the ongoing work on logic programming is aimed towards increasing the pure monotonic subset of Prolog as far as possible, so the restriction becomes less and less severe over time.
Consider now the specifics of (in)/2: The domain argument of (in)/2 carries within it a declarative flaw in that it uses a defaulty representation of domains. There are several ways to see this, and notably from the following situation: Suppose I tell you that a domain has the form inf..High, then what is High? It can be either of:
sup
an integer.
The fact that we cannot clearly distinguish these cases tells you that this representation is defaulty.
A clean representation of boundaries would look like this:
inf, sup to denote the infinities
n(N) to denote the integer N.
With such a representation, I could tell you that the domain looks like this: inf..n(N), and you know that N must be an integer.
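For illustration, here is a minimal sketch of such a clean boundary representation; the predicate name boundary/1 is mine and not part of library(clpfd):

boundary(inf).                    % negative infinity
boundary(sup).                    % positive infinity
boundary(n(N)) :- integer(N).     % a concrete integer bound

With this, the three cases are syntactically distinct, and no term is mistakable for another.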
Now another example:
?- X in 1..0.
false.
Why does this fail? Let us again generalize the arguments to find a reason:
?- X in Low..0.
Suppose now that (in)/2 accepted this as an admissible query. Then what is Low? Again, it can be either an integer, or the atom inf. Unfortunately, there is no sensible way to constrain Low in this way: Constraining it to integers would make CLP(FD) constraints the most appropriate choice, and it would be tempting to answer with:
X in Low..0,
Low in inf..0
but that is of course too much, because it precludes Low = inf:
?- Low in inf..0, Low = inf.
Type error: `integer' expected, found `inf' (an atom)
Knowing that we cannot decide the type of the term Low in such situations, and lacking a sensible way to state this, an instantiation error is raised to indicate that such a generalization is inadmissible.
From these considerations, the overall answer is really simple:
(in)/2 cannot be easily generalized due to the defaulty representation of domains.
Invariably, there are only two reasons for using such representations:
they are easier to type
they are easier to read.
The hefty drawback of course is that they stand massively in the way of declarative reasoning over your programs.
Note that internally, the CLP(FD) system of SWI-Prolog does use a clean representation of domains. This can also be carried over to the user side, so that we could write:
?- X in n(Low)..0.
with the declaratively perfectly valid answer:
X in n(Low)..0,
Low in inf..0.
Over time, such cleaner notations will definitely find their way to users. For now, it would be a bit too much to ask, at least in my experience.
In the specific examples you cite, the domain boundary is already constrained to integers, so of course we could distinguish the cases and translate this automatically to inequalities. However, this would not remove the core problem of defaultyness, and exchanging the goals would still raise an instantiation error.
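In the meantime, a manual workaround is to state such domains via inequalities, exactly as in your first query. For instance, with a purely illustrative helper (the name in_range/3 is mine):

:- use_module(library(clpfd)).

% Like (in)/2, but the bounds may themselves be constrained
% variables, since (#>=)/2 and (#=<)/2 accept them.
in_range(X, Low, High) :-
    X #>= Low,
    X #=< High.

?- Y in 100..200, in_range(X, 1, Y).
Y in 100..200,
Y#>=X,
X in 1..200.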
The following Prolog code establishes a very simple grammar for sentences (sentence = subject + verb + object) and provides a small vocabulary.
% Example 05 - Generating Sentences
% subjects, verbs and objects
subject(john).
subject(jane).
verb(eats).
verb(washes).
object(apples).
object(spinach).
% sentence = subject + verb + object
sentence(X,Y,Z) :- subject(X), verb(Y), object(Z).
% sentence as a list
sentence(S) :- S=[X, Y, Z], subject(X), verb(Y), object(Z).
When asked to generate valid sentences, SWI-Prolog (specifically swish.swi-prolog.org) generates them in the following order:
?- sentence(S).
S = [john, eats, apples]
S = [john, eats, spinach]
S = [john, washes, apples]
S = [john, washes, spinach]
S = [jane, eats, apples]
S = [jane, eats, spinach]
S = [jane, washes, apples]
S = [jane, washes, spinach]
Question
The above suggests that Prolog always backtracks from the right to the left of conjunctive queries. Is this true for all Prolog systems? Is it part of the specification? If not, is it common enough to be relied upon?
Notes
For clarity, by backtracking from the right, I mean that Z is unbound and rebound to find all possibilities, given the first matches for X and Y. Then after these have been exhausted, Y is unbound and rebound, and for each Y, different Z are tested. Finally it is X that is unbound then rebound to new values, and for each X the combinations of Y and Z are generated again.
Short answer: yes. Prolog always uses this same well-defined scheme, known as chronological backtracking, together with one specific instance of SLD resolution.
But that needs some elaboration.
Prolog systems stick to that very strategy because it is quite efficient to implement and in many cases leads directly to the desired result. In the cases where Prolog works nicely, it is pretty much competitive with imperative programming languages for many tasks. Some systems even translate to machine code, the most prominent being the just-in-time compiler of sicstus-prolog.
As you have probably already encountered, there are, however, cases where that strategy leads to undesirable inefficiencies and even non-termination whereas another strategy would produce an answer. So what to do in such situations?
Firstly, the precise encoding of a problem may be reformulated. To take your case of grammars, we even have a specific formalism for this, called Definite Clause Grammars (dcg). It is very compact and leads to both efficient parsing and efficient generation in many cases. This insight (and the precise encoding) was not at all evident for quite some time; in fact, the moment it was understood, (pretty exactly) 50 years ago, was the moment of Prolog's birth. In your example you have just 3 tokens in a list, but most of the time that number can vary. That is where the DCG formalism shines, and it can still be used both to parse and to generate sentences. Say you also want to include subjects of unrestricted length, like [the,boy], [the,nice,boy], [the,nice,and,handsome,boy], ...; a sketch follows below.
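Here is a minimal DCG sketch of such an extended grammar; the particular vocabulary and nonterminal names are my own, purely for illustration:

sentence --> subject, verb, object.

subject --> [john].
subject --> [jane].
subject --> [the], adjectives, [boy].

adjectives --> [].                 % zero or more adjectives
adjectives --> adjective, adjectives.

adjective --> [nice].
adjective --> [handsome].

verb --> [eats].
verb --> [washes].

object --> [apples].
object --> [spinach].

The same grammar parses and generates: ?- phrase(sentence, [the,nice,boy,eats,apples]). succeeds, and ?- phrase(sentence, S). enumerates sentences, including ever longer subjects.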
There are many such encoding techniques to learn.
Another way in which Prolog's strategy is further improved is to offer more flexible selection strategies with built-ins like freeze/2, when/2 and similar coroutining methods. While such extensions have existed for quite some time, they are difficult to employ, particularly because understanding non-termination gets even more complex.
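For a flavor, here is how freeze/2 behaves in SWI-Prolog: the goal is delayed until its first argument gets bound, instead of being executed immediately:

?- freeze(X, X > 0), X = 2.
X = 2.

?- freeze(X, X > 0), X = -1.
false.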
A more successful extension is constraints (constraint-programming), most prominently clpz/clpfd, which are used primarily for combinatorial problems. While chronological backtracking is still in place, it is only used as a last resort, either to ensure correctness of solutions with labeling/2 or when there is no better way to express the actual problem.
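A tiny example with SWI-Prolog's library(clpfd): the constraint prunes the domain deterministically, and chronological backtracking happens only inside labeling/2, as a last resort:

?- X in 1..10, X mod 3 #= 0, labeling([], [X]).
X = 3 ;
X = 6 ;
X = 9.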
And finally, you may want to reconsider Prolog's strategy in a more fundamental way. This is all possible by means of meta-interpretation. In some sense this is a completely new implementation, but it can often reuse a lot of Prolog's infrastructure, thereby making such meta-interpreters quite compact compared to other programming languages. And it may not only be used to implement other strategies; it is even used to prototype and implement other programming languages. The most prominent example is erlang, which first existed as a Prolog meta-interpreter; its syntax is still quite Prologish.
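For reference, here is the classic "vanilla" meta-interpreter, a minimal sketch that covers only conjunction and user-defined predicates of the pure subset; reconsidering the strategy means rewriting clauses like these:

mi(true).
mi((A, B)) :-
    mi(A),
    mi(B).
mi(Goal) :-
    Goal \= true,
    Goal \= (_, _),      % guard so clause/2 only sees user predicates
    clause(Goal, Body),
    mi(Body).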
Prolog as a programming language also contains many features that do not fit into this pure view, such as side-effecting built-ins like put_char/1, which are clearly a hindrance in meta-interpretation. But in many such situations this can be mitigated by restricting their use to specific modes and producing instantiation errors otherwise. Think of (non-constraint-based) arithmetic, which produces an error if the result cannot be determined immediately, but still produces correct results when used with sufficiently instantiated arguments, as in
?- X > 0, X = -1.
error(instantiation_error,(is)/2).
?- X = -1, X > 0.
false.
?- X = 2, X > 0.
X = 2.
Finally, a word on non-termination. Non-termination is often seen as a fundamental weakness of Prolog. But there is another view on this. Other, much older systems and engines also suffer runaways from time to time, and they are still used. In the case of programming languages, runaways are a fundamental consequence of their generality. And a non-terminating query is still preferable to an incorrect but terminating one.
So I am currently learning Prolog, and I can't get my head around how this language works.
"It tries all the possible solutions until it finds one; if it doesn't, it returns false" is what I've read that this language does. You just describe the solution and it finds it for you.
With that in mind, I am trying to solve the 8-queens problem (how to place 8 queens on a chessboard so that no queen threatens another).
I have this predicate, 'safe', that takes a list of pairs (the positions of all the queens) and succeeds when they do not threaten each other.
When I enter in the terminal
?- safe([(1,2),(3,5)]).
true ?
| ?- safe([(1,3),(1,7)]).
no
| ?- safe([(2,2),(3,3)]).
no
| ?- safe([(2,2),(3,4),(8,7)]).
true ?
it distinguishes the correct answers from the wrong ones, so it knows whether something is a possible solution
BUT
when I enter
| ?- safe(L).
L = [] ? ;
L = [_] ? ;
it gives me the default answers, even though it recognizes a solution for 2 queens when I enter them.
here is my code
threatens((_,Row), (_,Row)).
threatens((Column,_), (Column,_)).
threatens((Column1,Row1), (Column2,Row2)) :-
    Diff1 is Column1 - Row1,
    Diff2 is Column2 - Row2,
    abs(Diff1) =:= abs(Diff2).

safe([]).
safe([_]).
safe([A,B|T]) :-
    \+ threatens(A,B),
    safe([A|T]),
    safe(T).
One solution I found to the problem is to create a 'position' predicate and modify the 'safe' one:
position((0,0)).
position((1,0)).
...
...
position((6,7)).
position((7,7)).

safe([A,B|T]) :-
    position(A),
    position(B),
    \+ threatens(A,B),
    safe([A|T]),
    safe(T).

safe(L,X) :-
    length(L,X),
    safe(L).
but this is just stupid, as you have to type everything explicitly, and it is really, really slow, even for 6 queens.
My real problem here is not with the code itself but with Prolog. I am trying to think in Prolog, but all I read is
Describe what the solution would look like and let it work out what it would be
Well, that's what I have been doing, but it does not seem to work.
Could somebody point me to some resources that don't teach you the semantics but how to think in Prolog?
Thank you
but this is just stupid, as you have to type everything explicitly, and it is really, really slow, even for 6 queens.
Regarding listing the positions, the two coordinates are independent, so you could write something like:
position((X, Y)) :-
    coordinate(X),
    coordinate(Y).
coordinate(1).
coordinate(2).
...
coordinate(8).
This is already much less typing. It's even simpler if your Prolog has a between/3 predicate:
coordinate(X) :-
    between(1, 8, X).
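With either definition, positions can then be enumerated on demand:

?- position(P).
P = (1, 1) ;
P = (1, 2) ;
...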
Regarding the predicate being very slow, this is because you are asking it to do too much duplicate work:
safe([A,B|T]) :-
    ...
    safe([A|T]),
    safe(T).
Once you know that [A|T] is safe, T must be safe as well. You can remove the last goal and will get an exponential speedup.
Describe what the solution would look like and let it work out what it would be
demands that the AI be very strong in general. We are not there yet.
You are on the right track though. Prolog essentially works by enumerating possible solutions and testing them, rejecting those that don't fit the conditions encoded in the program. The skill resides in performing a "good enumeration" (traversing the domain in certain ways, exploiting domain symmetries and overlaps etc) and subsequent "fast rejection" (quickly throwing away whole sectors of the search space as not promising). The basic pattern:
findstuff(X) :- generate(X), test(X).
And evidently the program must first generate X before it can test X, which may not always be evident to beginners.
Logic-wise,
findstuff(X) :- x_fulfills_test_conditions(X), x_fulfills_domain_conditions(X).
which is really another way of writing
findstuff(X) :- test(X), generate(X).
would be the same, but for Prolog, as a concrete implementation, there would be nothing to work with.
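A concrete sketch of the difference, with my own toy predicate even_upto/2: the generate-then-test order works, while the reversed order leaves the arithmetic test with nothing to work with:

even_upto(N, X) :-
    between(1, N, X),    % generate candidates
    0 =:= X mod 2.       % test the property

?- even_upto(10, X).
X = 2 ;
X = 4 ;
...

?- 0 =:= X mod 2, between(1, 10, X).
ERROR: Arguments are not sufficiently instantiated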
The X in the program always stands for a particular value (which may be uninstantiated at a given moment, but becomes more and more instantiated going "to the right"). This is unlike logic, where X really stands for an unknown object onto which we pile constraints until, ideally, we can resolve X to a set of concrete values by applying a lot of thinking to reformulate the constraints.
Which brings us to the approach of "Constraint Logic Programming (over finite domains)", aka CLP(FD), which is far more elegant and nearer to what's going on when thinking mathematically or actually doing theorem proving, see here:
https://en.wikipedia.org/wiki/Constraint_logic_programming
and the ECLiPSe logic programming system
http://eclipseclp.org/
and
https://www.metalevel.at/prolog/clpz
https://github.com/triska/clpfd/blob/master/n_queens.pl
and N-Queens in Prolog on YouTube as a must-watch.
This is still technically Prolog (in fact, implemented on top of Prolog) but allows you to work on a more abstract level than raw generate-and-test.
Prolog is radically different in its approach to computing.
Arithmetic is often not required at all. But the complexity inherent in a solution to a problem shows up somewhere, in how we control the way the relevant information is related.
% place_queen(I, Cs, Us, Ds): queen I claims a column in Cs and an
% "up"/"down" diagonal in Us/Ds, all at the same list position.
place_queen(I, [I|_], [I|_], [I|_]).
place_queen(I, [_|Cs], [_|Us], [_|Ds]) :-
    place_queen(I, Cs, Us, Ds).
% One queen per row; the diagonal lists shift by one per row.
place_queens([], _, _, _).
place_queens([I|Is], Cs, Us, [_|Ds]) :-
    place_queens(Is, Cs, [_|Us], Ds),
    place_queen(I, Cs, Us, Ds).
gen_places([], []).
gen_places([_|Qs], [_|Ps]) :-
    gen_places(Qs, Ps).
qs(Qs, Ps) :-
    gen_places(Qs, Ps),
    place_queens(Qs, Ps, _, _).
goal(Ps) :-
    qs([0,1,2,3,4,5,6,7,8,9,10,11], Ps).
No arithmetic at all: columns/rows are encoded in a clever choice of symbols (the numbers are indeed just that, identifiers), and the diagonals in two additional arguments.
The whole program requires just a (very) small subset of Prolog, namely a pure two-clause interpreter.
If you take the time to understand what place_queens/4 does (operationally, maybe, if you have above-average attention capabilities), you'll gain a deeper understanding of what (pure) Prolog actually computes.
Here is a section of Prolog code defining numeral in a recursive way:
numeral(0).
numeral(succ(X)) :- numeral(X).
When given the query numeral(X), Prolog will return:
X = 0 ;
X = succ(0) ;
X = succ(succ(0)) ;
X = succ(succ(succ(0))) ;
X = succ(succ(succ(succ(0)))) ;
X = succ(succ(succ(succ(succ(0))))) ;
X = succ(succ(succ(succ(succ(succ(0)))))) ;
X = succ(succ(succ(succ(succ(succ(succ(0))))))) ;
X = succ(succ(succ(succ(succ(succ(succ(succ(0))))))))
yes
Based on what I have learned, when executing the query, Prolog will first turn X into an internal variable like _G42; then it will search the facts and rules to find a match.
In this case, it will find 0 (a fact) as the first match. Then it will also try to match the rule, considering that _G42 is not 0 but the succ of another number. Thus, another variable is generated (like _G44); _G44 will match 0 and will also go further, like _G42. Since _G44 matches 0, it then goes back to _G42, getting _G42 = succ(_G44) = succ(0).
I am not sure if I am right about the understanding. I made a diagram to show my comprehension on this problem.
Even if that analysis is correct, I still find it difficult to design recursive definitions like this. Since I am new to Prolog, I want to know whether this kind of definition is commonly used in applications (say, building an expert system, or verifying protocols), or whether it is just for beginners to better understand the basic search procedure. If it is often used, what is the key to designing this kind of recursive definition?
My personal opinion: Especially as a beginner, you have zero chance to "understand the recursive search in Prolog". Countless beginners are trying to understand Prolog in this way, and they very consistently fail.
The sad part is that this hits the hardest workers the hardest: You always think you can somehow understand it, but in the end, you cannot, because there are too many ways to invoke even the simplest predicates, with uninstantiated and (partly) instantiated arguments, and even with aliased variables.
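For instance, even numeral/1 above can already be invoked in several modes, each with different procedural behaviour:

?- numeral(succ(succ(0))).     % fully instantiated: a test
true.

?- numeral(succ(X)).           % partially instantiated: enumeration
X = 0 ;
X = succ(0) ;
...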
Your graph nicely illustrates that such a procedural reading gets extremely unwieldy very quickly for even the simplest conceivable recursive definitions.
A much more tractable approach for understanding the predicate is to read it declaratively:
0 is a numeral
If X is a numeral (whatever X is!), then succ(X) is also a numeral.
Note that :- even means ←, i.e., an implication from right to left.
My recommendation is to focus on a clear declarative description of what ought to hold. To overcome the initial barriers with Prolog, you must let go of the idea that you can trace the steps that the CPU performs in the extreme detail in which you are currently trying to follow it. Prolog is too high-level to be amenable to tracing in this low-level way. It is like trying to interpret between French and English by tracing only the neuronal activities of the speakers.
Write a clear definition and then leave the search to Prolog. There are many other and working ways to understand and break down declarative definitions without getting swamped in low-level details. See for example program-slicing and failure-slicing. They work as long as you stay in the so-called pure monotonic subset of Prolog. Focus on this area, and you will be able to make very fast progress.
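To give a taste of failure-slicing: the query ?- numeral(X), X = a. does not terminate, and we can locate the reason by inserting false/0 goals. The remaining visible fragment still does not terminate, so it is by itself responsible, no matter what the struck-out parts do:

numeral(0) :- false.
numeral(succ(X)) :-
    numeral(X),
    false.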
Is there a way to make this work?
add(X, X + 1).
input: add(1, Y).
expected output: Y = 2.
output: Y = 1+1.
Or is it only possible by doing this?
add(X, Y):- Y is X+1.
Historically, there have been many attempts to provide this functionality. Let me give as early examples CLP(ℜ) (about 1986) or, more recently, Prolog IV. However, sooner or later, one realizes that a programmer needs finer control over the kind of unification that is employed. Take as an example a program that wants to differentiate a formula: in that situation, an interpreted functor would not be of any use. For this reason, most constraints today ship as added predicates, leaving functors otherwise uninterpreted. In this manner they also fit into ISO Prolog, which permits constraints as extensions.
From a programmer's viewpoint, an extension like yours would reduce the number of auxiliary variables needed; however, it would also require interpreting all terms to this end, which produces a lot of extra overhead.
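In current systems, the closest equivalent is a constraint predicate such as (#=)/2 from SWI-Prolog's library(clpfd), which interprets arithmetic expressions only where you explicitly ask for it, and works in both directions:

:- use_module(library(clpfd)).

add(X, Y) :- Y #= X + 1.

?- add(1, Y).
Y = 2.

?- add(X, 2).
X = 1.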
I have a somewhat complex predicate with four arguments that needs to work when the first and last arguments are ground/not ground, not ground/ground, or ground/ground, while the second and third arguments are always ground.
i.e. predicate(A,B,C,D).
I can't provide my actual code since it is part of an assignment.
I have it mostly working, but I am receiving instantiation errors when A is not ground but D is. I have singled out a line of code that is causing the issue. When I change the goal order of the predicate, it works when D is ground and A is not, but in doing so, it no longer works when A is ground and D is not. I'm not sure there is a way around this.
Is there a way to use both lines of code, so that if A is ground, for instance, it uses the first line, and if A is not ground, it uses the second and ignores the first? And vice versa.
You can do that, but, almost invariably, you will break the declarative semantics of your programs if you do that.
Consider a simple example to see how such a non-monotonic and extra-logical predicate (here: ground/1) already breaks basic assumptions and typical declarative properties of well-known predicates, like commutativity of conjunction:
?- ground(X), X = a.
false.
But, if we simply exchange the goals by commutativity of conjunction, we get a different answer:
?- X = a, ground(X).
X = a.
For this reason, such meta-logical predicates are best avoided, especially if you are just beginning to learn the language.
Instead, better stay in the pure and monotonic subset of Prolog. Use constraints like dif/2 and CLP(FD) to make your programs usable in all directions, increasing generality and ease of understanding.
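For example, dif/2 keeps conjunction commutative, in contrast to the ground/1 example above:

?- dif(X, a), X = a.
false.

?- X = a, dif(X, a).
false.

?- dif(X, a), X = b.
X = b.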
See logical-purity, prolog-dif and clpfd for more information.