Prolog, fail and do not backtrack

Is there any built-in predicate in SWI-Prolog that will always fail AND prevent the machine from backtracking, i.e. stop the program from executing immediately (this is not what fail/0 does)?
I could use cuts, but I don't like them.
Doing something like !, fail is not a problem for me, but in order to accomplish what I want, I would have to use cuts in more locations and this is what I don't like.

You could use exceptions: throw/1 abandons the current computation immediately, and catch/3 intercepts the thrown term wherever you choose. Based on your question, that should do what you need.
See the SWI-Prolog documentation on exception handling.
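For instance, a minimal sketch using only standard built-ins (first_gt_one/1 is an invented name): the throw abandons all pending choicepoints at once, and the answer travels inside the thrown term.

% Succeed with the first X > 1 and stop searching immediately:
% throw/1 discards the member/2 choicepoints, catch/3 recovers the term.
first_gt_one(X) :-
    catch(( member(X, [1, 2, 3]),
            X > 1,
            throw(found(X))
          ),
          found(X),
          true).

?- first_gt_one(X).
X = 2.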

You could use the mechanism explicitly designed to help you accomplish something, but you don't like it?
You can always use \+/1 (traditionally written not/1), which is essentially syntactic sugar for cut-fail: it calls its goal once and fails if the goal succeeds.
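A quick illustration: \+/1 runs its goal at most up to the first solution, so no choicepoints of that goal survive it.

% \+ Goal behaves like ( Goal -> fail ; true ):
% the goal is tried once and the result is inverted.
?- \+ member(4, [1, 2, 3]).
true.

?- \+ member(2, [1, 2, 3]).
false.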

Two alternatives come to mind:
Pass a backtrack(true) or backtrack(false) term through the code you want to control, and interpret it in the predicates you're writing so that they fail quickly when it is set to backtrack(false) and continue when it is backtrack(true). Note that this won't actually prevent backtracking; it just enables fast failure. Even if your proof tree is deep, this provides a fast way of preventing the execution of certain code on backtracking (see the sketch at the end of this answer).
Use exceptions, as suggested by @Xonix (+1). Throwing an exception terminates the proof-tree construction immediately, and you can pass any term data through the exception up to the handler, bypassing any further execution. It will probably be faster than the first option, but may not be as portable.
Personally I've used both methods before - the first where I've anticipated the need before writing the code, the latter where I haven't.
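A minimal sketch of the first alternative (candidate/1 and viable/1 stand in for your real code):

% When the flag is backtrack(false), refuse to search at all, so
% backtracking through this predicate costs next to nothing.
search(backtrack(false), _) :- !, fail.
search(backtrack(true), X) :-
    candidate(X),
    viable(X).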

Too bad, that's what cuts are for.


What is a "well behaved predicate" in Prolog?

The SWI documentation mentions on several occasions "for well behaved predicates, leave no choicepoints." Can I take that to mean that, for "well behaved predicates" that are either deterministic or semideterministic, there should be no choicepoints left after an answer has been found? What is the definition of well behaved predicate? It's not in the glossary.
I expect it to mean "works as it is expected to work", but I haven't found a clear well-defined definition.
For clarification:
This is the usage in the SWI-documentation:
Deterministic predicates are predicates that must succeed exactly once and, for well behaved predicates, leave no choicepoints.
And this is the definition of deterministic predicates:
Deterministic predicates are predicates that must succeed exactly once and leave no choicepoints.
The phrase "for well behaved predicates" is clearly intended to change the meaning of the definition somehow; why else add it?
PROBABLE ANSWER:
As @DanielLyons points out, the "well behaved" part likely means "works as expected"; in plunit this means you have to pass flags such as [nondet, fail] to indicate how the tested predicate should behave. A predicate can work functionally yet give multiple solutions where a single one is expected, or vice versa, which then no longer matches the flagged, expected behavior and generates warnings.
All of the occurrences of this construction I see are in the plunit documentation, and they refer to deterministic (exactly one solution) or semi-deterministic (zero or one solutions) predicates. The implication seems to be that you could still call a predicate deterministic if it produces a single solution but leaves a choicepoint: you get exactly one successful unification, plus possibly more attempts that are bound to fail. It's the same story with semi-deterministic predicates (though presumably only in the case where they have found their single success).
I don't think this is a well-defined term. It is always preferable that predicates which produce a single result should not leave choice points around unnecessarily, but perhaps plunit depends on this behavior for some reason and it's simply warning you of it. Prolog has no way of really knowing or keeping track of whether your predicate is deterministic. Other languages, especially Mercury, can. But the distinction here seems to be something plunit cares about, probably to avoid producing a spurious error message about a failed test or something.
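To make the distinction concrete (max_of/3 and max_of_det/3 are invented examples): both versions succeed exactly once on ground queries, but only the second is "well behaved" in the leaves-no-choicepoints sense.

% Succeeds exactly once, but leaves a choicepoint behind: the toplevel
% prompts with ';' and the forced retry can only fail.
max_of(X, Y, X) :- X >= Y.
max_of(X, Y, Y) :- Y > X.

% Well behaved: if-then-else commits, so no choicepoint is left.
max_of_det(X, Y, M) :- ( X >= Y -> M = X ; M = Y ).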

Does a rule without passing a variable go against the philosophy of declarative programming or Prolog?

cancer():-
    pain(strong),
    mood(depressed),
    fever(mild),
    bowel(bloody),
    miscellaneous(giddy).

diagnose():-
    nl,
    cancer() -> write("has cancer").
For example, dog(X) says that X is a dog, but my cancer rule just checks whether the listed conditions are met. Is there a better way to do that?
In pure Prolog, a predicate without any arguments can only succeed or fail (or not terminate at all).
Thus, it can encode only very little information. A predicate that always succeeds is already available: true/0, having zero arguments. A predicate that always fails is also already available: false/0, also having zero arguments. A predicate that never terminates can be easily constructed.
So, in this sense, you do not need more predicates with zero arguments, and I think you are perfectly justified in being suspicious about such predicates.
Predicates with zero arguments are of limited use since they are so specific. They may however be used for example to describe a fixed set of tests, or be useful only for their side-effects. This is also what you are using, by emitting output on the terminal in case the predicate succeeds.
This means that you are leaving the pure subset of Prolog, and now relying on features that are beyond pure logic.
This is typically a very bad idea, because it:
prevents or at least complicates many forms of reasoning about your program
makes it much harder to test your predicates
is not thread safe in general
etc.
Therefore, suppose you write your program as follows:
cancer(Patient) :-
    patient_pain(Patient, strong),
    patient_mood(Patient, depressed),
    patient_fever(Patient, mild),
    patient_bowel(Patient, bloody),
    patient_miscellaneous(Patient, giddy).
This predicate is now parametrized by a patient, and thus significantly more general than what you have posted.
It can now be used to reason about several patients, it can be used to reason in parallel about different patients, you can use a Prolog query to test the predicate etc.
You can further generalize the predicate by defining for example patient_diagnosis/2, keeping everything completely pure and benefiting from the above advantages. Note that a patient may have several illnesses, which can be emitted on backtracking.
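For instance, a minimal sketch of patient_diagnosis/2, reusing cancer/1 from above; the second illness and its symptoms are invented for illustration:

% patient_diagnosis(Patient, Illness): Illness is a plausible diagnosis
% for Patient. Several illnesses can be reported on backtracking.
patient_diagnosis(Patient, cancer) :-
    cancer(Patient).
patient_diagnosis(Patient, influenza) :-
    patient_fever(Patient, high),
    patient_pain(Patient, moderate).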
Thus: Yes, a rule without arguments is at least suspicious and atypical if it arises in your actual code. Leaving aside scenarios such as "test case" and "consistency check", it can only be useful for its side-effects, and I recommend you avoid side-effects if you can.
For more information about this topic, see logical-purity.
cancer() isn't legal syntax, but the idea's perfectly fine.
Just do the call as
cancer
and define it as a fact or rule.
cancer. % fact
cancer :- blah, blah. % rule
In fact, you already use a system predicate with no arguments in your program:
nl is a predicate that always succeeds and prints a newline.
There are many reasons to have a predicate with no arguments. Suppose you have a server that runs in a slightly different configuration in production than in development, with the developer-access API switched off in production.
my_handler(Request) :-
    development,
    ... % developer-only handling

Here development/0 only succeeds if we're in the development environment. Or you might have a side effect set off, or be using state.
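One possible definition of development/0, assuming (purely for illustration) that the environment is signalled through an APP_ENV environment variable, read with SWI-Prolog's getenv/2:

% Succeeds only when the process was started with APP_ENV=development.
development :-
    getenv('APP_ENV', development).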

Python Recursion Understanding Issue

I'm a freshman in cs, and I am having a little issue understanding the content of recursion in python.
I have 3 questions that I want to ask, or to confirm if I am understanding the content correctly.
1: I'm not sure about the purpose of using a base case in recursion. Is the base case there to terminate the recursion somewhere in my program?
2: Suppose I have my recursive call above all the other code in my program. Will the program run the recursion first and then run the code that comes after it?
3: How do I trace a recursion properly and reliably? I personally find it really hard to trace recursion, and I can barely find any instruction on it online.
Thanks for answering my questions.
Yes. The idea is that the recursive call will continue to call itself (with different inputs) until it calls the base case (or one of many base cases). The base case is usually a case of your problem where the solution is trivial and doesn't rely on any other parts of the problem. It then walks through the calls backwards, building on the answer it got for the simplest version of the question.
Yes. Interacting with recursive functions from the outside is exactly the same as interacting with any other function.
Depends on what you're writing, and how comfortable you are with debugging tools. If you're trying to track what values are getting passed back and forth between recursive calls, printing all the parameters at the start of the function and the return value at the end can take you pretty far.
The easiest way to wrap your head around this stuff is to write some of your own. Start with some basic examples (the classic is the factorial function) and ask for help if you get stuck.
Edit: If you're more math-oriented, you might look up mathematical induction (you will learn it anyway as part of cs education). It's the exact same concept, just taught a little differently.

Dealing with complicated prolog loops

I am using Prolog to encode some fairly complicated rules in a project of mine. There is a lot of recursion, including mutual recursion. Part of the rules look something like this:
pred1(X) :- ...
pred1(X) :- someguard(X), pred2(X).
pred2(X) :- ...
pred2(X) :- othercondition(X), pred1(X).
There is a fairly obvious infinite loop between pred1 and pred2. Unfortunately, the interaction between these predicates is very complicated and difficult to isolate. I was able to eliminate the infinite loop in this instance by passing around a list of objects that have been passed to pred1, but this is extremely unwieldy! In fact, it largely defeats the purpose of using Prolog in this application.
How can I make Prolog avoid infinite loops? For example, if in the course of proving pred1(foo) it tries to prove pred1(foo) as a sub-goal, fail and backtrack.
Is it possible to do this with meta-interpreters?
Yes, you can use meta-interpreters for this purpose, as mat suggests. But for the normal use case, that is going far beyond the regular effort.
What you may consider instead is to separate the looping functionality from your actual logic using higher-order predicates. That is a very safe way to go; SWI even checks that all uses have a corresponding definition (the check runs when you type make. or check.).
As an example, consider closure0/3 and path/4, which both handle loop checks "once and forever".
One feature that is available in some Prolog systems and that may help you to solve such issues is called tabling. See for example the related question and prolog-tabling.
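In SWI-Prolog, for instance, the mutual recursion from the question can be made to terminate with a single declaration; this is a sketch, with the guards kept as the placeholders from the question:

% Tabling caches calls and answers: a repeated call to pred1(foo)
% inside its own derivation is suspended instead of looping forever.
:- table pred1/1, pred2/1.

pred1(X) :- someguard(X), pred2(X).
pred2(X) :- othercondition(X), pred1(X).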
If tabling is not available, then yes, meta-interpreters can definitely help a lot with this. For example, you can change the execution strategy, among other things, with a meta-interpreter.
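A minimal sketch of such a meta-interpreter with an ancestor check (mi/1 and mi/2 are invented names, and this handles only user-defined predicates whose clauses are visible to clause/2): a goal that is a variant of one of its own ancestors fails instead of looping.

% mi(Goal): prove Goal against the program's clauses, failing whenever
% a subgoal is a variant (=@=) of one of its ancestor goals.
mi(Goal) :- mi(Goal, []).

mi(true, _) :- !.
mi((A, B), As) :- !, mi(A, As), mi(B, As).
mi(G, As) :-
    \+ ( member(A0, As), G =@= A0 ),   % loop check against ancestors
    clause(G, Body),
    mi(Body, [G|As]).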
In SWI-Prolog, also check out call_with_inference_limit/3 to robustly limit the execution, independent of CPU type and system load.
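For example, a sketch using pred1/1 from the question:

% Run the goal, but give up after at most a million logical inferences;
% Result is unified with inference_limit_exceeded if the limit is hit.
?- call_with_inference_limit(pred1(foo), 1000000, Result).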
Related and also useful are termination analyzers like cTI: They allow you to statically derive termination conditions.

Why is determining if a function is pure difficult?

I was at the StackOverflow Dev Days convention yesterday, and one of the speakers was talking about Python. He showed a Memoize function, and I asked if there was any way to keep it from being used on a non-pure function. He said no, that's basically impossible, and if someone could figure out a way to do it it would make a great PhD thesis.
That sort of confused me, because it doesn't seem all that difficult for a compiler/interpreter to solve recursively. In pseudocode:
function isPure(functionMetadata): boolean;
begin
    result := true;
    for each variable in functionMetadata.variablesModified
        result := result and variable.isLocalToThisFunction;
    for each dependency in functionMetadata.functionsCalled
        result := result and isPure(dependency);
    isPure := result;
end;
That's the basic idea. Obviously you'd need some sort of check to prevent infinite recursion on mutually-dependent functions, but that's not too difficult to set up.
Higher-order functions that take function pointers might be problematic, since they can't be verified statically, but my original question presupposes that the compiler has some sort of language constraint to designate that only a pure function pointer can be passed to a certain parameter. If one existed, that could be used to satisfy the condition.
Obviously this would be easier in a compiled language than an interpreted one, since all this number-crunching would be done before the program is executed and so not slow anything down, but I don't really see any fundamental problems that would make it impossible to evaluate.
Does anyone with a bit more knowledge in this area know what I'm missing?
You also need to annotate every system call, every FFI, ...
And furthermore the tiniest 'leak' tends to leak into the whole code base.
It is not a theoretically intractable problem, but in practice it is very, very difficult to do in a way that doesn't leave the whole system feeling brittle.
As an aside, I don't think this makes a good PhD thesis; Haskell effectively already has (a version of) this, with the IO monad.
And I am sure lots of people continue to look at this 'in practice'. (wild speculation) In 20 years we may have this.
It is particularly hard in Python. Since anObject.aFunc can be changed arbitrarily at runtime, you cannot determine at compile time which function anObject.aFunc() will call, or even whether it will be a function at all.
In addition to the other excellent answers here: Your pseudocode looks only at whether a function modifies variables. But that's not really what "pure" means. "Pure" typically means something closer to "referentially transparent." In other words, the output is completely dependent on the input. So something as simple as reading the current time and making that a factor in the result (or reading from input, or reading the state of the machine, or...) makes the function non-pure without modifying any variables.
Also, you could write a "pure" function that did modify variables.
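The same point in Prolog terms: a predicate can be impure without modifying anything, simply by reading hidden state. A minimal sketch using SWI-Prolog's get_time/1 (stamped_greeting/2 is an invented name):

% Not pure: the result depends on the wall clock, not just the arguments,
% even though nothing visible is modified.
stamped_greeting(Name, Greeting) :-
    get_time(T),
    format(atom(Greeting), "hello ~w at ~w", [Name, T]).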
Here's the first thing that popped into my mind when I read your question.
Class Hierarchies
Determining if a variable is modified includes digging through every single method called on that variable to determine whether it mutates. This is ... somewhat straightforward for a sealed type with a non-virtual method.
But consider virtual methods. You must find every single derived type and verify that every single override of that method does not mutate state. Determining this is simply not possible in any language or framework that allows dynamic code generation or is simply dynamic (and if it is possible, it's extremely difficult), because the set of derived types is not fixed: a new one can be generated at runtime.
Take C# as an example. There is nothing stopping me from generating a derived class at runtime which overrides that virtual method and modifies state. A static verifier would not be able to detect this type of modification and hence could not validate whether the method was pure.
I think the main problem would be doing it efficiently.
The D language has pure functions, but you have to mark them yourself so the compiler knows to check them. I think if you specify them manually, the check becomes much easier to do.
Deciding whether a given function is pure is, in general, as hard as deciding whether a given program will halt, and it is well known that the Halting Problem is undecidable, not merely expensive to solve.
Note that the complexity depends on the language, too. For the more dynamic languages, it's possible to redefine anything at any time. For example, in Tcl
proc myproc {a b} {
    if { $a > $b } {
        return $a
    } else {
        return $b
    }
}
Every single piece of that could be modified at any time. For example:
the "if" command could be rewritten to use and update global variables
the "return" command, along the same lines, could do the same thing
there could be an execution trace on the if command such that, when "if" is used, the return command is redefined based on the inputs to the if command
Admittedly, Tcl is an extreme case; one of the most dynamic languages there is. That being said, it highlights the problem that it can be difficult to determine the purity of a function even once you've entered it.
