When does Prolog return fail?

Here is my tutor's question.
"In your own words, what possible conclusions can be drawn when prolog returns fail for a query?"
I have never experienced Prolog returning fail. I can only assume that it might return fail when an error is encountered during backtracking, perhaps?

When Prolog "returns" an indication of failure for a query, it indicates that it failed to prove the query. Example: 2 == 3.
When you make a query in Prolog, Prolog tries to satisfy it for you. There are two possible outcomes: either it succeeds, or it fails to satisfy the query.
When it succeeds, it reports the substitutions for the variables in the query with which it succeeded. If there are several ways to satisfy a query, Prolog will show the substitutions for each of them, if so requested.
If the query succeeds without any substitution to its variables (i.e. there are no variables), the success will be indicated in some way, by printing Yes, true or whatever, depending on the specific implementation.
Similarly, a failure will be indicated in some way too, e.g. by saying No, false, or whatever.
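For instance, here is what this looks like at a typical toplevel (the exact wording of the output varies between implementations):
?- 2 == 3.     % cannot be proved, so the query fails
false.
?- X = 3.      % succeeds, reporting a substitution for X
X = 3.
?- 2 =:= 2.    % succeeds, and there are no variables to report
true.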
(that's really basic stuff. You should read some good books on Prolog, or talk to your tutor, a lot. Exercise, exercise, exercise ...) :)

Related

What is a "well behaved predicate" in Prolog?

The SWI documentation mentions on several occasions "for well behaved predicates, leave no choicepoints." Can I take that to mean that, for "well behaved predicates" that are either deterministic or semideterministic, there should be no choicepoints left after an answer has been found? What is the definition of well behaved predicate? It's not in the glossary.
I expect it to mean "works as it is expected to work", but I haven't found a clear well-defined definition.
For clarification:
This is the usage in the SWI-documentation:
Deterministic predicates are predicates that must succeed exactly
once and, for well behaved predicates, leave no choicepoints.
And this is the definition of deterministic predicates:
Deterministic predicates are predicates that must succeed exactly once and leave no choicepoints.
The phrase "for well behaved predicates" is clearly intended to change the meaning of the definition somehow; why else add it?
PROBABLE ANSWER:
As @DanielLyons points out, the "well behaved" part likely means "works as expected", and in plunit this means that you have to pass flags such as [nondet, fail] to indicate how the tested predicate should behave. The predicate can work functionally, but give multiple solutions where a single one is expected (and vice versa), which then no longer matches the flagged, expected behavior and generates warnings.
All of the occurrences of this construction I see are in the plunit documentation, and refer to deterministic or semi-deterministic (single solution or 0/1 solutions) predicates. The implication here seems to be that you could call a predicate deterministic if it produces a single solution and leaves a choice-point (so you get exactly one successful unification but possibly more attempts that will definitely fail). It's the same story with semi-deterministic predicates (but probably only in the case where they have found their single success).
I don't think this is a well-defined term. It is always preferable that predicates which produce a single result should not leave choice points around unnecessarily, but perhaps plunit depends on this behavior for some reason and it's simply warning you of it. Prolog has no way of really knowing or keeping track of whether your predicate is deterministic. Other languages, especially Mercury, can. But the distinction here seems to be something plunit cares about, probably to avoid producing a spurious error message about a failed test or something.
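To make the difference concrete, here is a small sketch (the predicate names are made up): both predicates below succeed exactly once on the queries shown, but only the second is "well behaved" in the sense of leaving no choicepoint.
maybe_member(X, [X|_]).
maybe_member(X, [_|T]) :- maybe_member(X, T).

first_of([X|_], X).

?- maybe_member(a, [a,b,c]).   % succeeds, but a choicepoint remains;
true ;                         % the toplevel offers to search further
false.

?- first_of([a,b,c], X).       % succeeds deterministically
X = a.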

Does a rule without passing a variable against the philosophy of declarative programming or prolog?

cancer():-
    pain(strong),
    mood(depressed),
    fever(mild),
    bowel(bloody),
    miscellaneous(giddy).
diagnose():-
    nl,
    cancer() -> write("has cancer").
For example, dog(X) says that X is a dog, but my cancer statement just checks whether the listed conditions are met. Is there a better way to do that?
In pure Prolog, a predicate without any arguments can only succeed or fail (or not terminate at all).
Thus, it can encode only very little information. A predicate that always succeeds is already available: true/0, having zero arguments. A predicate that always fails is also already available: false/0, also having zero arguments. A predicate that never terminates can be easily constructed.
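For instance, such a never-terminating predicate is just:
loop :- loop.
?- loop.     % never terminates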
So, in this sense, you do not need more predicates with zero arguments, and I think you are perfectly justified in being suspicious about such predicates.
Predicates with zero arguments are of limited use since they are so specific. They may however be used, for example, to describe a fixed set of tests, or be useful only for their side-effects. This is also what you are doing here, by emitting output on the terminal in case the predicate succeeds.
This means that you are leaving the pure subset of Prolog, and now relying on features that are beyond pure logic.
This is typically a very bad idea, because it:
prevents or at least complicates many forms of reasoning about your program
makes it much harder to test your predicates
is not thread safe in general
etc.
Therefore, suppose you write your program as follows:
cancer(Patient) :-
    patient_pain(Patient, strong),
    patient_mood(Patient, depressed),
    patient_fever(Patient, mild),
    patient_bowel(Patient, bloody),
    patient_miscellaneous(Patient, giddy).
This predicate is now parametrized by a patient, and thus significantly more general than what you have posted.
It can now be used to reason about several patients, it can be used to reason in parallel about different patients, you can use a Prolog query to test the predicate etc.
You can further generalize the predicate by defining for example patient_diagnosis/2, keeping everything completely pure and benefiting from the above advantages. Note that a patient may have several illnesses, which can be emitted on backtracking.
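A minimal sketch of what patient_diagnosis/2 could look like; the second diagnosis and its symptoms are invented purely for illustration:
patient_diagnosis(Patient, cancer) :-
    patient_pain(Patient, strong),
    patient_mood(Patient, depressed),
    patient_fever(Patient, mild),
    patient_bowel(Patient, bloody),
    patient_miscellaneous(Patient, giddy).
patient_diagnosis(Patient, flu) :-
    patient_fever(Patient, high),
    patient_pain(Patient, mild).
Given suitable facts about a patient, a query such as ?- patient_diagnosis(bob, Diagnosis). then enumerates all matching diagnoses on backtracking.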
Thus: Yes, a rule without arguments is at least suspicious and atypical if it arises in your actual code. Leaving aside scenarios such as "test case" and "consistency check", it can only be useful for its side-effects, and I recommend you avoid side-effects if you can.
For more information about this topic, see logical-purity.
cancer() isn't legal syntax, but the idea's perfectly fine.
Just do the call as
cancer
and define it as a fact or rule.
cancer. % fact
cancer :- blah blah %rule
In fact, you already use a system predicate with no arguments in your program:
nl is a predicate that always succeeds, and prints a newline.
There are many reasons to have a predicate with no arguments. Suppose you have a server that runs in a slightly different configuration in production than in development. Developer access API is off in production.
my_handler(Request) :-
    development,
    blah blah
development only succeeds if we're in the development environment.
or you might have a side effect set off, or be using state.
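For instance, here is a hypothetical sketch of such a zero-argument "mode" predicate, assuming the environment is recorded as a simple fact (all names here are invented):
:- dynamic environment/1.
environment(development).      % change to production when deploying

development :- environment(development).

my_handler(Request) :-
    development,
    serve_dev_api(Request).    % hypothetical handler, only reachable in development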

What is meant by "logical purity" in Prolog?

What is meant by "logical purity" (in the context of Prolog programming)? The logical-purity tag info says "programs using only Horn clauses", but then, how would predicates like if_/3 qualify, using as much as it does the cut, and the various meta-logical (what's the proper terminology? var/1 and such) predicates, i.e. the low-level stuff.
I get it that it achieves some "pure" effect, but what does this mean, precisely?
For a more concrete illustration, please explain how does if_/3 qualify as logically pure, seen in use e.g. in this answer?
Let us first get used to a declarative reading of logic programs.
Declaratively, a Prolog program states what is true.
For example
natural_number(0).
natural_number(s(X)) :-
    natural_number(X).
The first clause states: 0 is a natural number.
The second clause states: If X is a natural number, then s(X) is a natural number.
Let us now consider the effect of changes to this program. For example, what changes when we change the order of these two clauses?
natural_number(s(X)) :-
    natural_number(X).
natural_number(0).
Declaratively, exchanging the order of clauses does not change the intended meaning of the program in any way (disjunction is commutative).
Operationally, that is, taking into account the actual execution strategy of Prolog, different clause orders clearly often make a significant difference.
However, one extremely nice property of pure Prolog code is preserved regardless of chosen clause ordering:
If a query Q succeeds with respect to a clause ordering O1, then
Q does not fail with a different ordering O2.
Note that I am not saying that Q always also succeeds with a different ordering: This is because the query may also loop or yield an error with different orderings.
For two queries Q1 and Q2, we say that Q1 is more general iff it subsumes Q2 with respect to syntactic unification. For example, the query ?- parent_child(P, C). is more general than the query ?- parent_child(0, s(0)).
Now, with pure Prolog programs, another extremely nice property holds:
If a query Q1 succeeds, then every more general query Q2 does not
fail.
Note, again, that Q2 may loop instead of succeeding.
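For example, with the natural_number/1 program above:
?- natural_number(s(0)).       % a specific query succeeds
true.
?- natural_number(X).          % the more general query does not fail;
X = 0 ;                        % it enumerates solutions on backtracking
X = s(0) ;
X = s(s(0)) ;
...
With the reordered clauses, the more general query loops instead of enumerating answers, but it still never answers false.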
Consider now the case of var/1 which you mention, and think of the related predicate nonvar/1. Suppose we have:
my_pred(V) :-
    nonvar(V).
When does this hold? Clearly, it holds iff the argument is not a variable.
As expected, we get:
?- my_pred(a).
true.
However, for the more general query ?- my_pred(X)., we get:
?- my_pred(X).
false.
Such a predicate is called non-monotonic, and you cannot treat it as a true relation due to this property: This is because the answer false above logically means that there are no solutions whatsoever, yet in the immediately preceding example, we see that there is a solution. So, illogically, a more specific query, built by adding a constraint, makes the query succeed:
?- X = a, my_pred(X).
true.
Thus, reasoning about such predicates is extremely complicated, to the point that it is no fun at all to program with them. It makes declarative debugging impossible, and hard to state any properties that are preserved. For instance, just swapping the order of subgoals in the above conjunctive query will make it fail:
?- my_pred(X), X = a.
false.
Hence, I strongly suggest to stay within the pure monotonic subset of Prolog, which allows the declarative reasoning along the lines outlined above.
CLP(FD) constraints, dif/2 etc. are all pure in this sense: You cannot trick these predicates into giving logically invalid answers, no matter the modes, orders etc. in which you use them. if_/3 also satisfies this property. On the other hand, var/1, nonvar/1, integer/1, !/0, predicates with side-effects etc. are all extra-logically referencing something outside the declarative world that is being described, and can thus not be considered pure.
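For contrast, a quick check with dif/2 (available in SWI-Prolog and several other systems) shows the order independence that characterizes pure predicates:
?- dif(X, a), X = a.
false.
?- X = a, dif(X, a).
false.
Both goal orders agree, unlike the nonvar/1 example above, where swapping the subgoals turned success into failure.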
EDIT: To clarify: The nice properties I mention here are in no way exhaustive. Pure Prolog code exhibits many other extremely valuable properties through which you can perceive the glory of logic programming. For example, in pure Prolog code, adding a clause can at most extend, never narrow, the set of solutions; adding a goal can at most narrow, never extend, it etc.
Using a single extra-logical primitive may, and typically will, already destroy many of these properties. Therefore, for example, every time you use !/0, consider it a cut right into the heart of purity, and try to feel regret and shame for wounding these properties.
A good Prolog book will at least begin to introduce or contain many hints to encourage such a declarative view, guide you to think about more general queries, properties that are preserved etc. Bad Prolog books will not say much about this and typically end up using exactly those impure language elements that destroy the language's most valuable and beautiful properties.
An awesome Prolog teaching environment that makes extensive use of these properties to implement declarative debugging is called GUPU; I highly recommend checking out these ideas. Ulrich Neumerkel has generously made one core idea used in his environment partly available as library(diadem). See the source file for a good example of how to declaratively debug a goal that fails unexpectedly: the library systematically builds generalizations of the query that still fail. This reasoning of course works perfectly with pure code.

First order logic in practice, how to deal with undecidablity?

I am very new to these things. Hope this is not a very naive question.
I tried the following formula in Prolog: A ⇒ B
and given that B is true, I evaluate A and it says FALSE.
My question is: why FALSE (and why not TRUE)? Given the current information, we don't know anything about A. Does Prolog work based on the assumption that, for anything unknown, it outputs FALSE?
If this is an assumption, how common is this?
Another thing that comes to mind is that it is finding an assignment for the conjunction of the input query and the axioms (basically SAT solving). Since the resulting output is TRUE regardless of whatever value A has, it just chooses one randomly (or zero by default?).
Based on the properties of first-order logic, it is semidecidable: if a sentence A logically implies a sentence B, then this can be discovered, but not the other way around. So how is the latter case handled in practice, when there is no proof of truth?
PS1. A little explanation of how Prolog works might also be useful. Does it use SAT solvers as a black box, or greedy search algorithms?
Does Prolog work based on the assumption that for anything unknown, it outputs FALSE?
Yes, it certainly does. This behavior reflects the Closed-World Assumption (CWA) in that if a fact isn't explicitly stated, it is considered false.
If this is an assumption, how common is this?
Very common -- most databases use this assumption.
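A minimal illustration of the closed-world assumption in the question's scenario; a and b are just placeholder propositions, and a/0 is declared dynamic so that querying it fails instead of raising an existence error:
:- dynamic a/0.
b :- a.     % encodes A => B
b.          % B is additionally asserted to hold

?- b.
true.
?- a.       % a cannot be proved from the program, so it is reported false
false.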
It may help you to learn about Prolog's method of inference: SLD Resolution.

Theorem Proof Using Prolog

How can I write theorem proofs using Prolog?
I have tried to write it like this:
parallel(X, Y) :-
    perpendicular(X, Z),
    perpendicular(Y, Z),
    X \== Y,
    !.

perpendicular(X, Y) :-
    perpendicular(X, Z),
    parallel(Z, Y),
    !.
Can you help me?
I was reluctant to post an Answer because this Question is poorly framed. Thanks to theJollySin for adding clean formatting! Something omitted in the rewrite, indicative of what Aman had in mind, was "I inter in Loop" (sic).
We don't know what query was entered that resulted in this looping, so speculation is required. The two rules suggest that Goal involved either the parallel/2 or the perpendicular/2 predicate.
With practice it's not hard to understand what the Prolog engine will do when a query is posed, especially a single goal query. Prolog uses a pretty simple "follow your nose" strategy in attempting to satisfy a goal. Look for the rules for whichever predicate is invoked. Then see if any of those rules, starting with the first and going down in the list of them, can be applied.
There are three topics that beginning Prolog programmers will typically struggle with. One is the recursive nature of the search the Prolog engine makes. Here the only rule for parallel/2 has a right-hand side that invokes two subgoals for perpendicular/2, while the only rule for perpendicular/2 invokes both a subgoal for itself and another subgoal for parallel/2. One should expect that trying to satisfy either kind of query inevitably leads to a Hydra-like struggle with bifurcating heads.
The second topic we see in this example is the use of free variables. If we are to gain knowledge about perpendicularity or parallelism of two specific lines (geometry), then somehow the query or the rules need to provide "binding" of variables to "ground" terms. Again without the actual Goal being queried, it's hard to guess how Aman expected that to work. Perhaps there should have been "facts" supplied about specific lines that are perpendicular or parallel. Lines could be represented merely as atoms (perhaps lowercase letters), but Prolog variables are names that begin with an uppercase letter (as in the two given rules) or with an underscore (_) character.
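For instance, here is a hedged sketch of how such facts might ground the parallel/2 rule; the line names a, b, c are invented, and the recursive perpendicular/2 clause is deliberately left out, since it is exactly what causes the looping described in the question:
perpendicular(a, b).     % line a is perpendicular to line b
perpendicular(c, b).     % line c is perpendicular to line b

parallel(X, Y) :-
    perpendicular(X, Z),
    perpendicular(Y, Z),
    X \== Y.

?- parallel(a, c).
true.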
Finally, the third topic that can be quite confusing is how Prolog handles negation. There's only a touch of that in these rules, the place where X \== Y is invoked. But even that brief subgoal requires careful understanding. Prolog implements "negation as failure", so that X \== Y succeeds if and only if X == Y does not succeed. This latter goal is also subtle, because it asks whether X and Y are the same without trying to do any unification. Thus if these are different variables, both free, then X == Y fails (and X \== Y succeeds). On the other hand, the only way for X == Y to succeed (and thus for X \== Y to fail) would be if both variables were bound to the same ground term. As discussed above, the two rules as stated don't provide a way for that to be the case, though something might have taken care of this in the query Goal.
The homework assignment for Aman is to learn about these Prolog topics:
recursion
free and bound variables
negation
Perhaps more concrete suggestions can then be made about Prolog doing geometry proofs!
Added: PTTP (Prolog Technology Theorem Prover) was written by M.E. Stickel in the late 1980s, and this 2006 web page describes it and links to a download.
It also summarizes succinctly why Prolog alone is not "a full general-purpose theorem-proving system." Pointers to later, more capable theorem provers can be followed there as well.

Resources