Implementing often-occurring determinism patterns in Prolog - coding-style

When programming in Prolog I often write predicates whose behavior should be semi-deterministic when called with all arguments instantiated (and whose behavior should be non-deterministic otherwise).
A concrete use case for this is my predicate walk/3, which implements graph walks. Since multiple paths can exist between two vertices, the instantiation (+,+) leaves multiple choicepoints after the first true. These are, however, quite useless, and calling code must explicitly use once/1 for performance reasons.
%! walk(+Graph:ugraph, +StartVertex, +EndVertex) is semidet.
%! walk(+Graph:ugraph, -StartVertex, +EndVertex) is nondet.
%! walk(+Graph:ugraph, +StartVertex, -EndVertex) is nondet.
%! walk(+Graph:ugraph, -StartVertex, -EndVertex) is nondet.
Semi-determinism can be forced by the use of once/1 in the calling context, but I want to implement semi-determinism as a property of the predicate walk/3, and not as something that has to be treated specially every time it is called.
In addition to concerns over code aesthetics, the calling context need not always know whether its call to walk/3 is semi-deterministic or not. For example:
%! cycle(+Graph:ugraph, +Vertex) is semidet.
%! cycle(+Graph:ugraph, -Vertex) is nondet.
cycle(Graph, Vertex):-
    walk(Graph, Vertex, Vertex).
I have come up with the following solution, which does produce the correct behavior.
walk_wrapper(Graph, Start, End):-
    call_ground_as_semidet(walk(Graph, Start, End)).

:- meta_predicate(call_ground_as_semidet(0)).

call_ground_as_semidet(Goal):-
    ground(Goal), !,
    Goal, !.
call_ground_as_semidet(Goal):-
    Goal.
However, this solution has deficiencies:
It is not generic enough; e.g., sometimes the check should be nonvar/1 instead of ground/1 (a sketch of this follows below).
It is stylistically awkward, requiring an extra wrapper predicate every time it is used.
It may also be slightly inefficient.
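For illustration, the first point could be addressed by parameterising the wrapper over the instantiation check; a minimal sketch (the predicate name is arbitrary):
:- meta_predicate(call_as_semidet(1, 0)).

% Check is any instantiation test, e.g. ground/1 or nonvar/1.
call_as_semidet(Check, Goal):-
    call(Check, Goal), !,
    Goal, !.
call_as_semidet(_, Goal):-
    Goal.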
My question is: are there other ways in which often-occurring patterns of (non-)determinism, like the one described here, can be generically/efficiently/stylistically programmed in Prolog?

You should experiment with double negation as failure. Yes, a ground goal can only be true or false, so it should not leave any choice points. Let's assume we have an acyclic graph, to make matters simple:
If I use this code:
edge(a, b). edge(a, c).
edge(a, d). edge(b, c).
edge(c, d). edge(c, e).
edge(d, e).
path(X,X).
path(X,Y) :- edge(X,Z), path(Z,Y).
The Prolog system will now leave choice points for closed queries:
?- path(a, e).
true ;
true ;
true ;
true ;
true ;
false.
In my opinion the recommended approach, to eliminate these choice points and nevertheless have a multi-moded predicate, is to use so-called meta-programming in Prolog.
Meta-programming is also sometimes derogatorily called non-logical programming, since it is based on non-logical predicates such as ground/1, !/0 or (\+)/1. But let's call it meta-programming as long as declarativity is not impacted.
You could write a wrapper smart/1 as follows, doing the
same as your call_ground_as_semidet/1, but with a small nuance:
smart(G) :- ground(G), !, \+ \+ G.
smart(G) :- G.
The Prolog system will no longer leave a choice point for closed queries:
?- smart(path(a,e)).
true.
The advantage of \+ \+ over once/1 is that the former not only leaves no choice points, but also removes the trail. It is sometimes called the garbage collection meta-predicate of Prolog.
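For non-ground queries, smart/1 simply falls through to the second clause, so enumeration still works as before; for example, the first answers of
?- smart(path(a, X)).
X = a ;
X = b ;
...
continue on backtracking (possibly with repeated solutions, since several paths can reach the same vertex).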

Not an answer but too long for a comment. Keep in mind I am not sure I understand exactly, so I want to re-state your question first.
To take your graph example. You want to be able to ask the following questions using the same call of the same predicate.
Given a graph,
Question 1: is vertex B reachable from vertex A (somehow)? - yes or no
Question 2: which vertices are reachable from A? - enumerate by backtracking
Question 3: from which vertices is B reachable? - enumerate by backtracking
Question 4: which A and B exist for which B is reachable from A? - enumerate by backtracking
And I might be wrong here, but it seems that answering Question 1 and Question 2 might employ a different search strategy than answering Question 3?
More generally, you want to have a way of saying: if I have a yes-or-no question, succeed or fail. Otherwise, enumerate answers.
Here comes my trouble: what are you going to do with the two different types of answers? And what are the situations in which you don't know in advance which type of answer you need? (If you do know in advance, you can use once(goal), as you said yourself.)
PS:
There is obviously setof/3, which will fail if there are no answers, or collect all answers. Are there situations in which you want to know some of the answers but you don't want to collect all of them? Is this an efficiency concern because of the size and number of the answers?
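For instance, with the edge/path program from the first answer above, setof/3 collects the reachable vertices without duplicates:
?- setof(V, path(a, V), Vs).
Vs = [a, b, c, d, e].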

Not an answer but some advice.
Maybe I misunderstood your question. I think you are trying to address performance issues by forcing a predicate to be deterministic. That question is pointless: if p(X) is non-deterministic (multiple solutions), then p(X), ! is deterministic (first solution only).
You should not address performance issues by altering program logic or predicate reversibility. I suggest a different approach:
First, take advantage of Prolog indexing. For example:
cycle(+Graph:ugraph, +Vertex)
is NOT the same (in terms of performance) as:
cycle(+Vertex, +Graph:ugraph)
You should find documentation on Prolog indexing (and its performance impact) on the Web.
Second, write multiple implementations for the same problem. Each one will optimize performance for a different case. Then, write a predicate that chooses the best implementation for each case.
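A sketch of that dispatching idea, assuming two hypothetical implementations walk_check/3 (tuned for the fully instantiated case) and walk_enum/3 (tuned for enumeration):
walk(Graph, Start, End):-
    (   ground(Start-End)
    ->  walk_check(Graph, Start, End)   % (+,+): a single yes/no check
    ;   walk_enum(Graph, Start, End)    % otherwise: enumerate solutions
    ).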

Related

Does the Prolog symbol :- mean Implies, Entails or Proves?

In Prolog we can write very simple programs like this:
mammal(dog).
mammal(cat).
animal(X) :- mammal(X).
The last line uses the symbol :-, which informally lets us read the final rule as: if X is a mammal then it is also an animal.
I am beginning to learn Prolog and trying to establish which of the following is meant by the symbol :-
Implies (⇒)
Entails (⊨)
Provable (⊢)
In addition, I am not clear on the difference between these three. I am trying to read threads like this one, but the discussion is at a level above my capability, https://math.stackexchange.com/questions/286077/implies-rightarrow-vs-entails-models-vs-provable-vdash.
My thinking:
Prolog works by pattern-matching symbols (unification and search) and so we might be tempted to say the symbol :- means 'syntactic entailment'. However this would only be true of queries that are proven to be true as a result of that syntactic process.
The symbol :- is used to create a database of facts, and therefore is semantic in nature. That means it could be one of Implies (⇒) or Entails (⊨) but I don't know which.
Neither. Or rather, if anything, it is the implication. The other symbols belong to the meta-level, that is, to the meta-language. The Mathematics Stack Exchange answers explain this quite nicely.
To see why :- is not quite an implication, consider:
p :- p.
In logic, both truth values make this a valid sentence. But in Prolog we stick to the minimal model. So p is false. Prolog uses a subset of predicate logic such that there actually is only one minimal model. And worse, Prolog's actual default execution strategy makes this an infinite loop.
Nevertheless, the most intuitive way to read LHS :- RHS. is to see it as a way to generate new knowledge. Provided RHS is true, it follows that LHS is also true. This way one avoids all the paradoxes related to implication.
The right-to-left direction is a bit counterintuitive. This direction is motivated by Prolog's actual execution strategy (which goes left-to-right in this representation).
:- is usually read as if, so something like:
a :- b, c .
reads as
| a is true if b and c are true.
In formal logic, the above would be written as
| a ← b ∧ c
Or
| b and c imply a
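A tiny worked example of that reading:
a :- b, c.
b.
c.
?- a.
true.
Here a is provable because both b and c are provable.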

Prolog and limitations of backtracking

This is probably the most trivial implementation of a function that returns the length of a list in Prolog
count([], 0).
count([_|B], T) :- count(B, U), T is U + 1.
One thing about Prolog that I still cannot wrap my head around is the flexibility of using variables as parameters.
So for example I can run count([a, b, c], 3). and get true. I can also run count([a, b], X). and get an answer X = 2. What is odd (at least for me) is that I can also run count(X, 3). and get at least one result, which looks something like X = [_G4337877, _G4337880, _G4337883] ; before the interpreter disappears into an infinite loop. I can even run something truly "flexible" like count(X, A). and get X = [], A = 0 ; X = [_G4369400], A = 1., which is obviously incomplete but somehow really nice.
Therefore my multifaceted question. Can I somehow explain to Prolog not to look beyond first result when executing count(X, 3).? Can I somehow make Prolog generate any number of solutions for count(X, A).? Is there a limitation of what kind of solutions I can generate? What is it about this specific predicate, that prevents me from generating all solutions for all possible kinds of queries?
This is probably the most trivial implementation
Depends on the viewpoint: consider
count(L,C) :- length(L,C).
Shorter and functional. And this one also works for your use case.
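For instance (the variable names printed will differ between systems):
?- count(X, 3).
X = [_A, _B, _C].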
edit
library CLP(FD) allows for
:- use_module(library(clpfd)).
count([], 0).
count([_|B], T) :- U #>= 0, T #= U + 1, count(B, U).
?- count(X,3).
X = [_G2327, _G2498, _G2669] ;
false.
(further) answering comments
It was clearly sarcasm
No, sorry for giving this impression. It was an attempt to give you a synthetic answer to your question. Every detail of the implementation of length/2 - indeed much longer than your code - has been carefully weighed to give us a general and efficient building block.
There must be some general concept
I would call (full) Prolog such general concept. From the very start, Prolog requires us to solve computational tasks describing relations among predicate arguments. Once we have described our relations, we can query our 'knowledge database', and Prolog attempts to enumerate all answers, in a specific order.
High level concepts like unification and depth first search (backtracking) are keys in this model.
Now, I think you're looking for second-order constructs like var/1 that allow us to reason about our predicates. Such constructs cannot be written in (pure) Prolog, and a growing school of thought recommends avoiding them, because they are rather difficult to use. So I posted an alternative using CLP(FD), which effectively shields us in some situations. In this question's specific context, it actually gives us a simple and elegant solution.
I am not trying to re-implement length
Well, I'm aware of this, but since count/2 aliases length/2, why not study the reference model? (See the source on the SWI-Prolog site.)
The answer you get for the query count(X,3) is actually not odd at all: you are asking which lists have a length of 3, and you get a list with 3 elements. The infinite loop appears because the variables B and U in the first goal of your recursive rule are unbound. Nothing before that goal can fail, so it is always possible to follow the recursion. In CapelliC's version there are two goals in the second rule before the recursion that fail if the second argument is smaller than 1. Maybe it becomes clearer if you consider this slightly altered version:
:- use_module(library(clpfd)).

count([], 0).
count([_|B], T) :-
    T #> 0,
    U #= T - 1,
    count(B, U).
Your query
?- count(X,3).
will not match the first rule but the second one and continue recursively until the second argument is 0. At that point the first rule will match and yield the result:
X = [_A,_B,_C] ?
The head of the second rule will also match but its first goal will fail because T=0:
X = [_A,_B,_C] ? ;
no
In your version above, however, Prolog will try the recursive goal of the second rule because of the unbound variables B and U, and hence loop infinitely.
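A similar effect can be had without CLP(FD), at the price of losing the fully general mode: the sketch below (renamed to avoid clashing with the definitions above) requires the second argument to be an integer, but it terminates for count_n(X, 3).
count_n([], 0).
count_n([_|B], T) :-
    T > 0,          % fails before the recursion once T reaches 0
    U is T - 1,
    count_n(B, U).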

Prolog programming language and proof trees

Recall this meta-circular proof-tree interpreter:
solve(true, true).
solve([], []).
solve([A|B], [ProofA|ProofB]) :-
    solve(A, ProofA),
    solve(B, ProofB).
solve(A, node(A, Proof)) :-
    rule(A, B),
    solve(B, Proof).
Assume that the third rule of the interpreter is altered, while the other rules of the interpreter are unchanged, as follows:
% Signature: solve(Exp, Proof)/2
solve(true, true).
solve([], []).
solve([A|B], [ProofA|ProofB]) :-
    solve(B, ProofB), %3
    solve(A, ProofA).
solve(A, node(A, Proof)) :-
    rule(A, B),
    solve(B, Proof).
Consider the proof tree that will be created for some query in both versions. Can any variable substitution be achieved in one version only? Explain. Can any true leaf move to the other side of the leftmost infinite branch? Explain. In both questions, give an example if the answer is positive. How will this influence the proof?
please help me ! tx
(I have a lot of reservations against your meta-interpreter. But first I will answer the question you had)
In this meta-interpreter you are reifying (~ implementing) conjunction, and you implement it with Prolog's conjunction. Now you have two different versions of how to interpret a conjunction: one says prove A first, then B; the other says the opposite.
Think of
p :- p, false.
and
p :- false, p.
The second version will produce a finite failure branch, whereas the first will produce an infinite failure branch. So that will be the effect of using one or the other meta-interpreter. Note that this "error" might again be mitigated by interpreting the meta-interpreter itself!
See also this answer which might clarify the notions a bit.
There are also other ways to implement conjunction (via binarization), such that the next level of meta-interpreter will no longer be able to compensate.
Finally a comment on the style of your meta-interpreter: you are mixing lists and other terms. In fact, [true|true] will be true. Avoid such a representation by all means. Either stick with the traditional "vanilla" representation, which operates on the syntax tree of Prolog rules (that is, conjunction is represented as (',')/2), or stick to lists. But do not mix lists and other representations.
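For reference, a minimal sketch of that traditional "vanilla" style, representing conjunction with (',')/2 (mi/2 is an arbitrary name):
mi(true, true).
mi((A, B), (ProofA, ProofB)) :-
    mi(A, ProofA),
    mi(B, ProofB).
mi(A, node(A, Proof)) :-
    rule(A, B),
    mi(B, Proof).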

Does Prolog use Eager Evaluation?

Because Prolog uses chronological backtracking (from the Prolog Wikipedia page) even after an answer is found (in this example, where there can only be one solution), would this justify saying that Prolog uses eager evaluation?
mother_child(trude, sally).
father_child(tom, sally).
father_child(tom, erica).
father_child(mike, tom).
sibling(X, Y) :- parent_child(Z, X), parent_child(Z, Y).
parent_child(X, Y) :- father_child(X, Y).
parent_child(X, Y) :- mother_child(X, Y).
With the following output:
?- sibling(sally, erica).
true ;
false.
To summarize the discussion with @WillNess below: yes, Prolog is strict. However, Prolog's execution model and semantics are substantially different from the languages that are usually labelled strict or non-strict. For more about this, see below.
I'm not sure the question really applies to Prolog, because it doesn't really have the kind of implicit evaluation ordering that other languages have. Where this really comes into play in a language like Haskell, you might have an expression like:
f (g x) (h y)
In a strict language like ML, there is a defined evaluation order: g x will be evaluated, then h y, and f (g x) (h y) last. In a language like Haskell, g x and h y will only be evaluated as required ("non-strict" is more accurate than "lazy"). But in Prolog,
f(g(X), h(Y))
does not have the same meaning, because it isn't using a function notation. The query would be broken down into three parts, g(X, A), h(Y, B), and f(A,B,C), and those constituents can be placed in any order. The evaluation strategy is strict in the sense that what comes earlier in a sequence will be evaluated before what comes next, but it is non-strict in the sense that there is no requirement that variables be instantiated to ground terms before evaluation can proceed. Unification is perfectly content to complete without having given you values for every variable. I am bringing this up because you have to break down a complex, nested expression in another language into several expressions in Prolog.
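A concrete illustration of that flattening, with made-up relations standing in for f, g and h:
double(X, Y) :- Y is 2 * X.    % plays the role of g
inc(X, Y) :- Y is X + 1.       % plays the role of h
sum(A, B, C) :- C is A + B.    % plays the role of f
?- double(3, A), inc(4, B), sum(A, B, C).
A = 6, B = 5, C = 11.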
Backtracking has nothing to do with it, as far as I can tell. I don't think backtracking to the nearest choice point and resuming from there precludes a non-strict evaluation method, it just happens that Prolog's is strict.
That Prolog pauses after giving each of the several correct answers to a problem has nothing to do with laziness; it is a part of its user interaction protocol. Each answer is calculated eagerly.
Sometimes there will be only one answer but Prolog doesn't know that in advance, so it waits for us to press ; to continue search, in hopes of finding another solution. Sometimes it is able to deduce it in advance and will just stop right away, but only sometimes.
update:
Prolog does no evaluation on its own. All terms are unevaluated, as if "quoted" in Lisp.
Prolog will unfold your predicate definitions as written and is perfectly happy to keep your data structures full of unevaluated uninstantiated holes, if so entailed by your predicate definitions.
Haskell does not need any values, a user does, when requesting an output.
Similarly, Prolog produces solutions one-by-one, as per the user requests.
Prolog can even be seen to be lazier than Haskell where all arithmetic is strict, i.e. immediate, whereas in Prolog you have to explicitly request the arithmetic evaluation, with is/2.
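For example, at the toplevel:
?- X = 1 + 2.
X = 1+2.
?- X is 1 + 2.
X = 3.
The term 1 + 2 stays unevaluated until is/2 explicitly requests arithmetic evaluation.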
So perhaps the question is ill-posed. Prolog's operations model is just too different. There are no "results" nor "functions", for one; but viewed from another angle, everything is a result, and predicates are "multi"-functions.
As it stands, the question is not correct in what it states. Chronological backtracking does not mean that Prolog will necessarily backtrack "in an example where there can be only one solution".
Consider this:
foo(a, 1).
foo(b, 2).
foo(c, 3).
?- foo(b, X).
X = 2.
?- foo(X, 2).
X = b.
So this is an example that does have only one solution and Prolog recognizes that, and does not attempt to backtrack. There are cases in which you can implement a solution to a problem in a way that Prolog will not recognize that there is only one logical solution, but this is due to the implementation and is not inherent to Prolog's execution model.
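For example, a formulation that hides the discriminating constant in the clause bodies usually defeats argument indexing, so a useless choice point is left (an illustrative variation of foo/2; the exact behaviour depends on the system):
bar(X, 1) :- X = a.
bar(X, 2) :- X = b.
bar(X, 3) :- X = c.
?- bar(b, Y).
Y = 2 ;
false.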
You should read up on Prolog's execution model. From the Wikipedia article which you seem to cite, "Operationally, Prolog's execution strategy can be thought of as a generalization of function calls in other languages, one difference being that multiple clause heads can match a given call. In that case, [emphasis mine] the system creates a choice-point, unifies the goal with the clause head of the first alternative, and continues with the goals of that first alternative." Read Sterling and Shapiro's "The Art of Prolog" for a far more complete discussion of the subject.
From Wikipedia I got:
In eager evaluation, an expression is evaluated as soon as it is bound to a variable.
Then I think there are two levels: at the user level (our predicates) Prolog is not eager.
But it is at the 'system' level, because variables are implemented as efficiently as possible.
Indeed, attributed variables are implemented to be lazy, and are rather 'orthogonal' to 'logic' Prolog variables.

Why double negation doesn't bind in Prolog

Say I have the following theory:
a(X) :- \+ b(X).
b(X) :- \+ c(X).
c(a).
It simply says true, which is of course correct: a(X) is true because there is no b(X) (with negation as finite failure). Since there is only a b(X) if there is no c(X), and we have c(a), one can state this is true. I was wondering, however, why Prolog does not provide the answer X = a? Say for instance I introduce some semantics:
noOrphan(X) :- \+ orphan(X).
orphan(X) :- \+ parent(_,X).
parent(david,michael).
Of course if I query noOrphan(michael), this will result in true, and noOrphan(david) in false (since I didn't define a parent for david), but I was wondering why there is no proactive way of detecting which persons (michael, david, ...) belong to the noOrphan/1 relation?
This probably is a result of the backtracking mechanism of Prolog, but Prolog could maintain a state which tracks whether one is searching in the positive way (0, 2, 4, ... negations deep) or the negative way (1, 3, 5, ... negations deep).
Let's start with something simpler. Say \+ X = Y. Here, the negated goal is a predefined built-in predicate. So things are even clearer: X and Y should be different. However, \+ X = Y fails, because X = Y succeeds. So no trace is left under which precise condition the goal failed.
Thus, \+ \+ X = Y does produce an empty answer, and not the expected X = Y. See this answer for more.
Given that such simple queries already show problems, you cannot expect too much of user defined goals such as yours.
In the general case, you would have to first reconsider what you actually mean by negation. The answer is much more complex than it seems at first glance. Think of the program p :- \+ p. Should p succeed or fail? Should p be true or not? There are actually two models here, which no longer fit into Prolog's view of going with the minimal model. Considerations like these opened new branches of Logic Programming, such as Answer Set Programming (ASP).
But let's stick to Prolog. Negation can only be used in very restricted contexts, such as when the goal is sufficiently instantiated and the definition is stratified. Unfortunately, there are no generally accepted criteria for the safe execution of a negated goal. We could wait until the goal is variable free (ground), but this means quite often that we have to wait way too long - in jargon: the negated goal flounders.
So effectively, general negation does not go very well together with pure Prolog programs. The heart of Prolog really is the pure, monotonic subset of the language. Within the constraint part of Prolog (or its respective extensions) negation might work quite well, though.
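A minimal illustration of why instantiation matters here, using just the parent/2 fact from the question:
?- \+ parent(_, X).
false.
The negated goal runs while X is still unbound; since parent(_, X) has some solution, the whole negation fails, and we learn nothing about which particular X have no parent. Only with X sufficiently instantiated, as in \+ parent(_, david), does the negation behave as intended.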
I might be misunderstanding the question, and I don't understand the last paragraph.
Anyway, there is a perfectly valid way of detecting which people are not orphans. In your example, you have forgotten to tell the computer something that you know, namely:
person(michael).
person(david).
% and a few more
person(anna).
person(emilia).
not_orphan(X) :- person(X), \+ orphan(X).
orphan(X) :- person(X), \+ parent(_, X).
parent(david, michael).
parent(anna, david).
?- orphan(X).
X = anna ;
X = emilia.
?- not_orphan(X).
X = michael ;
X = david ;
false.
I don't know how exactly you want to define an "orphan", as this definition is definitely a bit weird, but that's not the point.
In conclusion: you can't expect Prolog to know that michael and david and all others are people unless you state it explicitly. You also need to state explicitly that orphan or not_orphan are relationships that only apply to people. The world you are modeling could also have:
furniture(red_sofa).
furniture(kitchen_table).
abstract_concept(love).
emotion(disbelief).
and you need a way of leaving those out of your family affairs.
I hope that helps.
