I am currently studying logic programming and learning Prolog for that purpose.
We can have a knowledge base from which some result follows, yet Prolog gets stuck in an infinite loop because of the way it expands the predicates.
Let's assume we have the following logic program:
p(X):- p(X).
p(X):- q(X).
q(X).
The query p(john) leads to an infinite loop, because Prolog by default expands the first clause that unifies. However, we could conclude that p(john) is true if we expanded the second clause instead.
So why doesn't Prolog expand all the matching clauses (implemented, say, as threads/processes with time slices), so that it can conclude something whenever the KB allows it?
In our example, two processes could be created, one expanding p(X) and the other expanding q(X). When the latter expands q(X), the program concludes q(john) and hence that p(john) holds.
Because Prolog's search strategy over matching clauses is depth-first. So, in your example, once it matches the first rule, it will match the first rule again, and it will never explore the others.
This would not happen if the algorithm is breadth-first or iterative-deepening.
Usually it is up to you to reorder the KB so that these situations never happen.
However, it is possible to encode breadth-first/iterative-deepening search in Prolog using a meta-interpreter that changes the search order. This is an extremely powerful technique that is not well known outside of the Prolog world. 'The Art of Prolog' describes this technique in detail.
You can find some examples of meta-interpreters here, here and here.
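For illustration, here is a minimal sketch of that idea (not taken from the book): a depth-limited meta-interpreter, iterated over growing depth bounds to obtain iterative deepening. The names prove/2 and id_prove/1 are made up for this sketch, and p/1 and q/1 are declared dynamic so that clause/2 is portably allowed to inspect them.

:- dynamic p/1, q/1.

% the program from the question
p(X) :- p(X).
p(X) :- q(X).
q(_).

% prove(Goal, Depth): prove Goal using at most Depth clause expansions
% along any branch of the proof.
prove(true, _) :- !.
prove((A, B), Depth) :- !,
    prove(A, Depth),
    prove(B, Depth).
prove(Goal, Depth) :-
    Depth > 0,
    Depth1 is Depth - 1,
    clause(Goal, Body),
    prove(Body, Depth1).

% id_prove(Goal): iterative deepening over the depth bound.
% length(_, D) generates D = 0, 1, 2, ... on backtracking.
id_prove(Goal) :-
    length(_, D),
    prove(Goal, D).

Now the query id_prove(p(john)) succeeds (it finds the proof through the second clause at depth 2), even though the plain query p(john) loops. Note that this sketch re-finds the same proof at larger depths and does not terminate for unprovable goals; it is only meant to show that changing the search order recovers the answer.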
During my exploration of different ways to write down lists, I am intrigued by the following list [[a,b]|c] which appears in the book 'Prolog and Natural Language Analysis' by Pereira and Shieber (page 42 of the digital edition).
At first I thought that such a notation was syntactically incorrect, as it would have had to say [[a,b]|[c]], but after using write_canonical/1 Prolog returned '.'('.'(a,'.'(b,[])),c).
As far as I can see, this corresponds to the following tree structure (although it seems odd to me that structure would simply end with c, without the empty list at the end):
I cannot seem to find the corresponding notation using commas and brackets, though. I thought it would correspond to [[a,b],c] (but this obviously returns a different result with write_canonical/1).
Is there no corresponding notation for [[a,b]|c] or am I looking at it the wrong way?
As others have already indicated, the term [[a,b]|c] is not a list.
You can test this yourself by writing the term down using the | syntax and calling is_list/1:
?- is_list([[a,b]|c]).
false.
You can see from write_canonical/1 that this term is identical to what you have drawn:
| ?- write_canonical([[a,b]|c]).
'.'('.'(a,'.'(b,[])),c)
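You can also take the term apart yourself to see the same structure (output roughly as SWI-Prolog prints it):

?- [[a,b]|c] = [Head|Tail].
Head = [a, b],
Tail = c.

?- is_list(c).
false.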
In addition to what others have said, I am posting an additional answer because I want to explain how you can go about finding the reasons for unexpected failures. When starting with Prolog, you will often ask yourself: "Why does this query fail?"
One way to find explanations for such issues is to generalize the query, by using logical variables instead of concrete terms.
For example, in the above case, we could write:
?- is_list([[A,b]|c]).
false.
In this case, I have used the logical variable A instead of the atom a, thus significantly generalizing the query. Since the generalized query still fails, some constraint in the remaining part must be responsible for the unexpected failure. We thus generalize it further to narrow down the cause. For example:
?- is_list([[A,B]|c]).
false.
Or even further:
?- is_list([[A,B|_]|c]).
false.
And even further:
?- is_list([_|c]).
false.
So here we have it: No term that has the general form '.'(_, c) is a list!
As you rightly observe, this is because such a term is not of the form [_|Ls] where Ls is a list.
NOTE: The declarative debugging approach I apply above works for the monotonic subset of Prolog. Actually, is_list/1 does not belong to that subset, because we have:
?- is_list(Ls).
false.
with the declarative reading "there is no list." So, it turns out, it worked only by coincidence in the case above. However, we could define the intended declarative meaning of is_list/1 in a pure and monotonic way, by simply applying the inductive definition of lists:
list([]).
list([_|Ls]) :- list(Ls).
This definition only uses pure and monotonic building blocks and hence is monotonic. For example, the most general query now yields actual lists instead of failing (incorrectly):
?- list(Ls).
Ls = [] ;
Ls = [_6656] ;
Ls = [_6656, _6662] ;
Ls = [_6656, _6662, _6668] .
From pure relations, we expect that queries work in all directions!
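For instance, the same definition also works in the checking direction, including on the term from the original question:

?- list([a,b,c]).
true.

?- list([[a,b]|c]).
false.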
I cannot seem to find the corresponding notation using commas and brackets though.
There is no corresponding notation, since this is technically speaking not a real list.
Prolog has syntactic sugar for lists. A list in Prolog is, like a Lisp list, actually a linked list: it is either the empty list [], or a node .(H,T) with H the head and T the tail (itself a list). Lists are not "special" in Prolog in the sense that the interpreter handles them differently from any other term. Of course, a lot of Prolog libraries do list processing and use the convention defined above.
To make writing lists more convenient, syntactic sugar was invented: you can write a node .(H,T) as [H|T] as well. So in your [[a,b]|c] we have an outer node .(H,c), where H is itself a proper list with two nodes and an empty list: H = .(a,.(b,[])).
Technically speaking I would not consider this a "real" list, since the tail of a node should be either another node ./2 or the empty list; here the outer tail is the atom c.
You can however use this with variables, as in [[a,b]|C], in order to instantiate the tail C later. So here we have some sort of list with [a,b] as its first element (so a list containing a list) and with an open tail C. If we later ground C, for instance to C = [], then the list becomes [[a,b]].
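A small illustration of such an open (partial) list, with output roughly as SWI-Prolog prints it:

?- X = [[a,b]|C], C = [d,e].
X = [[a, b], d, e],
C = [d, e].

?- X = [[a,b]|C], C = [].
X = [[a, b]],
C = [].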
If I have a subset of logic programming which contains only one function symbol, am I able to do everything?
I think that I cannot, but I am not sure at all.
A programming language can do anything the user wants if it is Turing-complete. I was taught that this means it has to be able to execute if..then..else constructs and recursion, and that the natural numbers should be definable.
Any help and opinions would be appreciated!
In classical predicate logic, there is a distinction between the formula level and the term level. Since an n-ary function can be represented as an (n+1)-ary predicate, restricting only the number of function symbols does not lessen the expressivity.
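As a small made-up Prolog illustration of that point: the binary function + over the natural numbers, written with successor terms, becomes a ternary predicate (add/3 is just a name chosen here, not a built-in being described):

% add(X, Y, Z): Z is the sum of the naturals X and Y, written as s/1 terms.
add(0, Y, Y).
add(s(X), Y, s(Z)) :- add(X, Y, Z).

?- add(s(0), s(s(0)), Sum).
Sum = s(s(s(0))).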
In Prolog, there is no difference between the formula level and the term level. You might pick an n-ary symbol p and try to encode Turing machines or an equivalent notion (e.g. recursive functions) via nestings of p.
My intuition says this is not possible: you can basically describe n-ary trees with variables as leaves, but then you can always unify these trees. This means that every rule head will match during recursive derivations, and therefore you are unable to express any case distinction. Still, this is just an informal argument, not a proof.
P.S. you might also be interested in monadic logic, where only unary predicates are allowed. This fragment of first-order logic is decidable.
I want to write out all elements of a list in Prolog, but the problem is that I don't want to use recursion.
So I want to do this iteratively.
Strictly speaking, there is no way to avoid recursion when you want to express some relation about all (or sufficiently many) elements of a list. However, you might delegate the recursive part to some library predicate.
Take the maplist family as an example. Say maplist(=(_), Xs), which describes all lists whose elements are the same. To you, there is no longer any recursion. But behind it, there is a recursive definition:
maplist(_C_1, []).
maplist(C_1, [E|Es]) :-
    call(C_1, E),
    maplist(C_1, Es).
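For the concrete task in the question, printing every element, the recursion is thus hidden inside maplist/2 (writeln/1 is SWI-Prolog's convenience predicate):

?- maplist(writeln, [a,b,c]).
a
b
c
true.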
There are also other ways to realize shortcuts for recursive predicates, like do-loops and B-Prolog's loops. But ultimately, they all translate to some recursive definition (or call recursive predicates).
There is really no reason to worry about using recursion in Prolog. After all, systems are quite optimized to handle recursion efficiently. Often, recursive predicates "run like" simple loops in imperative programming languages. Also, memory allocation is very well targeted to clean up intermediary/volatile data.
With SWI-Prolog, I have been experimenting with the fact that findall/3 unifies its last (list) argument, i.e. it can "run in reverse" with input and output swapped, something like findall(_, _, [a,b,c]).
Then I came up with this:
Li=[a,b,c,d,e,f,g,h],findall(A, (append(A,B,Li),B=[C|_],writeln(C)), _).
A and B get instantiated to sublists.
Even this works!!
Li=[a,b,c,d,e,f,g,h],findall(_, (append(_,B,Li),B=[C|_],writeln(C)), _).
http://swish.swi-prolog.org/p/oMEAdQWk.pl
Well, I don't know how efficient the code is; probably it is not efficient. And you should first learn the recursive rules of Prolog :)
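For completeness, a more conventional way to get the same printing side effect without findall/3 is forall/2 with member/2 (a sketch; forall/2 and writeln/1 are SWI-Prolog built-ins, and member/2 still recurses behind the scenes):

?- Li = [a,b,c,d,e,f,g,h],
   forall(member(X, Li), writeln(X)).

This prints a through h, one per line, and then reports the binding of Li.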
Because Prolog uses chronological backtracking (from the Prolog Wikipedia page) even after an answer is found (in this example, where there can only be one solution), would this justify saying that Prolog uses eager evaluation?
mother_child(trude, sally).
father_child(tom, sally).
father_child(tom, erica).
father_child(mike, tom).
sibling(X, Y) :- parent_child(Z, X), parent_child(Z, Y).
parent_child(X, Y) :- father_child(X, Y).
parent_child(X, Y) :- mother_child(X, Y).
With the following output:
?- sibling(sally, erica).
true ;
false.
To summarize the discussion with @WillNess below, yes, Prolog is strict. However, Prolog's execution model and semantics are substantially different from the languages that are usually labelled strict or non-strict. For more about this, see below.
I'm not sure the question really applies to Prolog, because it doesn't really have the kind of implicit evaluation ordering that other languages have. Where this really comes into play is in a language like Haskell, where you might have an expression like:
f (g x) (h y)
In a strict language like ML, there is a defined evaluation order: g x will be evaluated, then h y, and f (g x) (h y) last. In a language like Haskell, g x and h y will only be evaluated as required ("non-strict" is more accurate than "lazy"). But in Prolog,
f(g(X), h(Y))
does not have the same meaning, because it isn't using a function notation. The query would be broken down into three parts, g(X, A), h(Y, B), and f(A,B,C), and those constituents can be placed in any order. The evaluation strategy is strict in the sense that what comes earlier in a sequence will be evaluated before what comes next, but it is non-strict in the sense that there is no requirement that variables be instantiated to ground terms before evaluation can proceed. Unification is perfectly content to complete without having given you values for every variable. I am bringing this up because you have to break down a complex, nested expression in another language into several expressions in Prolog.
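To make this concrete, here is a hypothetical sketch; g/2, h/2, f/3 and result/3 are made-up placeholder predicates, with the last argument playing the role of the "return value":

% placeholder "functions": they just wrap their inputs in a term
g(X, g_result(X)).
h(Y, h_result(Y)).
f(A, B, f_result(A, B)).

% the functional expression f(g(X), h(Y)) written relationally:
% every intermediate value gets its own variable
result(X, Y, C) :-
    g(X, A),
    h(Y, B),
    f(A, B, C).

?- result(1, 2, C).
C = f_result(g_result(1), h_result(2)).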
Backtracking has nothing to do with it, as far as I can tell. I don't think backtracking to the nearest choice point and resuming from there precludes a non-strict evaluation method, it just happens that Prolog's is strict.
That Prolog pauses after giving each of the several correct answers to a problem has nothing to do with laziness; it is a part of its user interaction protocol. Each answer is calculated eagerly.
Sometimes there will be only one answer but Prolog doesn't know that in advance, so it waits for us to press ; to continue search, in hopes of finding another solution. Sometimes it is able to deduce it in advance and will just stop right away, but only sometimes.
update:
Prolog does no evaluation on its own. All terms are unevaluated, as if "quoted" in Lisp.
Prolog will unfold your predicate definitions as written and is perfectly happy to keep your data structures full of unevaluated uninstantiated holes, if so entailed by your predicate definitions.
Haskell does not need any values, a user does, when requesting an output.
Similarly, Prolog produces solutions one-by-one, as per the user requests.
Prolog can even be seen as lazier than Haskell, where all arithmetic is strict, i.e. immediate, whereas in Prolog you have to explicitly request arithmetic evaluation with is/2.
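For example (queries shown roughly as SWI-Prolog prints them):

?- X = 1 + 2.
X = 1+2.        % the arithmetic term stays unevaluated

?- X is 1 + 2.
X = 3.          % evaluation happens only when requested via is/2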
So perhaps the question is ill-posed. Prolog's operations model is just too different. There are no "results" nor "functions", for one; but viewed from another angle, everything is a result, and predicates are "multi"-functions.
As it stands, the question is not correct in what it states. Chronological backtracking does not mean that Prolog will necessarily backtrack "in an example where there can be only one solution".
Consider this:
foo(a, 1).
foo(b, 2).
foo(c, 3).
?- foo(b, X).
X = 2.
?- foo(X, 2).
X = b.
So this is an example that does have only one solution and Prolog recognizes that, and does not attempt to backtrack. There are cases in which you can implement a solution to a problem in a way that Prolog will not recognize that there is only one logical solution, but this is due to the implementation and is not inherent to Prolog's execution model.
You should read up on Prolog's execution model. From the Wikipedia article which you seem to cite, "Operationally, Prolog's execution strategy can be thought of as a generalization of function calls in other languages, one difference being that multiple clause heads can match a given call. In that case, [emphasis mine] the system creates a choice-point, unifies the goal with the clause head of the first alternative, and continues with the goals of that first alternative." Read Sterling and Shapiro's "The Art of Prolog" for a far more complete discussion of the subject.
From Wikipedia I got:
In eager evaluation, an expression is evaluated as soon as it is bound to a variable.
Then I think there are two levels: at the user level (our predicates), Prolog is not eager.
But it is at the 'system' level, because variables are implemented as efficiently as possible.
Indeed, attributed variables are implemented to be lazy, and are rather 'orthogonal' to 'logic' Prolog variables.
I was wondering what sort of sentences you can't express in Prolog. I've been researching logic programming in general and have learned that first-order logic is more expressive than the definite clause logic (Horn clauses) that Prolog is based on. It's a tough subject for me to get my head around.
So, for instance, can the following sentence be expressed:
For all cars, there does not exist at least 1 car without an engine
If so, are there any other sentences that CAN'T be expressed? If not, why?
You can express your sentence straightforwardly in Prolog using negation (\+).
E.g.:
car(bmw).
car(honda).
...
car(toyota).
engine(bmw, dohv).
engine(toyota, wenkel).
no_car_without_engine :-
    % note the space after \+ so it is read as the \+/1 control construct
    % applied to the whole conjunction, not as a 2-argument term
    \+ ( car(Car),
         \+ engine(Car, _)
       ).
Procedure no_car_without_engine/0 will succeed if every car has an engine, and fail otherwise.
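To see the double negation at work, queries would behave roughly like this (assuming only the facts literally shown above; engine(honda, v6) is a made-up fact used purely for illustration):

% honda has no engine/2 fact, so there is a car without an engine:
?- no_car_without_engine.
false.

% if engine(honda, v6). (and engines for any other engineless cars)
% were added to the knowledge base, the check would succeed:
?- no_car_without_engine.
true.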
The most problematic definitions in Prolog are those which are left-recursive.
Definitions like
g(X) :- g(A), r(A,X).
are most likely to send Prolog into an infinite loop, due to Prolog's search algorithm, which is plain depth-first search and will run to infinity and beyond.
The general limitation of Horn clauses, however, is that they are defined to have at most one positive literal. One can easily find a clause that does not fit this restriction, for example:
A ∨ B
As a consequence, facts like ∀ X: cat(X) ∨ dog(X) can't be expressed directly.
There are ways to work around those and there are ways to allow such statements (see below).
Reading material:
These slides (p. 3) give an example of a sentence you can't express using Prolog.
This work (p. 10) also explains Horn Clauses and their implications and introduces a method to allow 'invalid' Horn Clauses.
Prolog is a programming language, not a natural language interface.
The sentence you show is expressed in such a convoluted way that I had a hard time trying to understand it. Effectively, I must thank gusbro, who took the pains to express it in an understandable way. But he entirely glossed over the knowledge representation problems that any programming language poses when applied to natural language, or even simply to negation in first-order logic. These problems are so pressing that the particular language selected is often perceived as 'unimportant'.
Relating to programming, Prolog lacks the ability to access any element of a linear data structure (i.e. an array) in O(1) (constant time). A quicksort, for instance, which requires O(1) access to array elements, therefore can't be implemented efficiently.
But it is nevertheless a Turing-complete language, for what it's worth. So there are no statements that can't be expressed in Prolog.
So you are looking for sentences that can't be expressed in clausal logic that can be expressed in first order logic.
Strictly speaking, there are many, simply because clausal logic is a restriction of FOL. So that's true by definition.
What you can do though is you can rewrite any set of FOL sentences into a logic program that is not equivalent but with good properties. So for example if you want to know if p is a consequence of your theory, you can use equivalently the transformed logic program.
A few notes on the other answers:
Negation in Prolog (\+) is negation as failure, not first-order logic negation.
Prolog is a programming language; as correctly pointed out, we should be talking about clausal logic instead.
Left recursion is not a problem: you can easily use a different selection rule, or some other inference mechanism.