Herbrand in Prolog

I have been looking at topics such as fuzzy logic and Horn clauses, and I saw some simple example applications of them in Prolog.
The reason for this question is that these topics also come up around Herbrand's theorem, which I find a little more complicated than the others, at least for me, and I had difficulty finding an application example related to Prolog.
That's why I would like some application examples in Prolog that are specifically about Herbrand, and not so basic (the rules that generate the Herbrand model according to the definition are basic, and those are the only examples I find when searching about Herbrand). Thanks.
Here is an example program in Prolog:
p(f(X)):- q(g(X)).
p(f(X)):- p(X).
p(a).
q(b).

A set of clauses has a model iff it has a Herbrand model.
To prove that clause C is a consequence of clauses Cs, simply show that Cs ∪ {¬C} is unsatisfiable.
This is, in abstract terms, what Prolog does, via a special case of resolution: You can regard the execution of a (pure—what else) Prolog program as the Prolog engine trying to find a resolution refutation of the negated query.
The form of resolution that Prolog implements, SLD resolution with depth-first search, does not guarantee, though, that every unsatisfiable set of clauses is refuted: it is incomplete.
In Prolog, procedural properties may impact the derivation of consequences. For example, with your program:
?- p(X).
(nontermination: the query loops without ever reaching p(a))
Whereas if we simply reorder the clauses as:
q(b).
p(a).
p(f(X)):- q(g(X)).
p(f(X)):- p(X).
we get:
?- p(X).
X = a ;
X = f(a) ;
X = f(f(a)) .
Note though that many important declarative properties are indeed preserved in the pure and monotonic subset of Prolog. See logical-purity for more information.
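As a small illustration of the Herbrand machinery itself, here is a sketch (my own addition, not part of the program above) that enumerates the Herbrand universe of the signature {a, b, f/1, g/1} used in the question, up to a given nesting depth:
% herbrand_term(MaxDepth, Term): Term is a ground term built from the
% constants a, b and the function symbols f/1, g/1, with nesting depth
% at most MaxDepth.
herbrand_term(_, a).
herbrand_term(_, b).
herbrand_term(D, f(T)) :- D > 0, D1 is D - 1, herbrand_term(D1, T).
herbrand_term(D, g(T)) :- D > 0, D1 is D - 1, herbrand_term(D1, T).
For example:
?- herbrand_term(1, T).
T = a ;
T = b ;
T = f(a) ;
T = f(b) ;
T = g(a) ;
T = g(b) ;
false.
The Herbrand base and the least Herbrand model of the program are then built from atoms over such ground terms.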

Related

Does the Prolog symbol :- mean Implies, Entails or Proves?

In Prolog we can write very simple programs like this:
mammal(dog).
mammal(cat).
animal(X) :- mammal(X).
The last line uses the symbol :- which informally lets us read the final fact as: if X is a mammal then it is also an animal.
I am beginning to learn Prolog and trying to establish which of the following is meant by the symbol :-
Implies (⇒)
Entails (⊨)
Provable (⊢)
In addition, I am not clear on the difference between these three. I am trying to read threads like this one, but the discussion is at a level above my capability, https://math.stackexchange.com/questions/286077/implies-rightarrow-vs-entails-models-vs-provable-vdash.
My thinking:
Prolog works by pattern-matching symbols (unification and search) and so we might be tempted to say the symbol :- means 'syntactic entailment'. However this would only be true of queries that are proven to be true as a result of that syntactic process.
The symbol :- is used to create a database of facts, and therefore is semantic in nature. That means it could be one of Implies (⇒) or Entails (⊨) but I don't know which.
Neither. Or rather, if anything, it is the implication. The other two symbols belong to the meta-language, one level above; the Mathematics Stack Exchange answers explain this quite nicely.
To see why :- is not quite an implication, consider:
p :- p.
In classical logic, this sentence is true under both truth values of p, i.e. it is valid. But in Prolog we stick to the minimal model, so p is false. Prolog uses a subset of predicate logic for which there is exactly one minimal model. And worse, Prolog's actual default execution strategy turns the query ?- p. into an infinite loop.
Nevertheless, the most intuitive way to read LHS :- RHS. is to see it as a way to generate new knowledge: provided RHS is true, it follows that LHS is also true. This way one avoids all the paradoxes related to implication.
The right-to-left direction is a bit counterintuitive. It is motivated by Prolog's actual execution strategy (which goes left-to-right in this representation).
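As a small sketch of my own to make the minimal-model point concrete, consider the clauses:
q :- q.
r.
The minimal model contains only r, and that is what Prolog computes for queries that terminate:
?- r.
true.
whereas ?- q. loops under the default depth-first strategy, even though q is false in the minimal model.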
:- is usually read as if, so something like:
a :- b, c .
reads as
| a is true if b and c are true.
In formal logic, the above would be written as
| a ← b ∧ c
Or
| b and c imply a
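For example (a minimal sketch of my own): with the facts b and c in the database, the rule lets Prolog derive a:
b.
c.
a :- b, c.

?- a.
true.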

What are the requirements a computer function must meet to be considered "monotonic"?

What are the requirements a computer function/procedure/predicate must meet to be considered "monotonic"?
Let A be some thing ,
Let B be some thing ,
Let R be a monotonic relationship between A and B ,
Let R_ be a non-monotonic relationship between A and B ,
Let R become false if R_ is true ,
Let R_ become true if R is false ,
Let C be a constraint in consideration of R ,
Let C become false if R_ is true ,
Let D be the collection of constraints C (upon relationship R) .
What is D?
I have reviewed some literature, e.g. a Wikipedia article "monotonic function".
I am most interested in a practical set of criteria I can apply when pragmatically involved with computer programming.
What are some tips and best practices I should follow when creating and designing my functions, such that they are more likely to be "monotonic"?
In logic programming, and also in logic, the classification "monotonic" almost invariably refers to monotonicity of entailment.
This fundamental property is encountered for example in classical first-order logic: when you can derive a consequence from a set of clauses, then you can also derive it from any extension of that set. Conversely, removing a clause cannot make something a consequence that was not a consequence before.
In the pure subset of Prolog, this property also holds, from a declarative perspective. We therefore sometimes call it the pure monotonic subset of Prolog, since impurities do not completely coincide with constructs that destroy monotonicity.
Monotonicity of entailment is the basis and sometimes even necessary condition for several approaches that reason over logic programs, notably declarative debugging.
Note that Prolog has several language constructs that may prevent such reasoning in general. Consider for example the following Prolog program:
f(a).
f(b).
f(c).
And the following query:
?- setof(X, f(X), [_,_]).
false.
Now I remove one of the facts from the program; I indicate the removal here by turning the fact into a clause that can never succeed:
f(a) :- false.
f(b).
f(c).
If Prolog programs were monotonic, then every query that previously failed would definitely now fail all the more, since I have removed something that was previously the case.
However, we now have:
?- setof(X, f(X), [_,_]).
true.
So, setof/3 is an example of a predicate that violates monotonicity!
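For contrast, here is a small sketch of my own showing how a pure goal behaves under the same change (output as a typical toplevel would report it). With the original three facts:
?- f(X).
X = a ;
X = b ;
X = c.
After the removal of f(a), the pure query simply loses one answer:
?- f(X).
X = b ;
X = c.
No pure query that previously failed starts to succeed, which is exactly monotonicity of entailment.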

Does Prolog use Eager Evaluation?

Because Prolog uses chronological backtracking (according to the Prolog Wikipedia page) even after an answer is found (in this example, where there can only be one solution), would this justify saying that Prolog uses eager evaluation?
mother_child(trude, sally).
father_child(tom, sally).
father_child(tom, erica).
father_child(mike, tom).
sibling(X, Y) :- parent_child(Z, X), parent_child(Z, Y).
parent_child(X, Y) :- father_child(X, Y).
parent_child(X, Y) :- mother_child(X, Y).
With the following output:
?- sibling(sally, erica).
true ;
false.
To summarize the discussion with @WillNess below, yes, Prolog is strict. However, Prolog's execution model and semantics are substantially different from the languages that are usually labelled strict or non-strict. For more about this, see below.
I'm not sure the question really applies to Prolog, because it doesn't really have the kind of implicit evaluation ordering that other languages have. Where this really comes into play is in a language like Haskell, where you might have an expression like:
f (g x) (h y)
In a strict language like ML, there is a defined evaluation order: g x will be evaluated, then h y, and f (g x) (h y) last. In a language like Haskell, g x and h y will only be evaluated as required ("non-strict" is more accurate than "lazy"). But in Prolog,
f(g(X), h(Y))
does not have the same meaning, because it isn't using a function notation. The query would be broken down into three parts, g(X, A), h(Y, B), and f(A,B,C), and those constituents can be placed in any order. The evaluation strategy is strict in the sense that what comes earlier in a sequence will be evaluated before what comes next, but it is non-strict in the sense that there is no requirement that variables be instantiated to ground terms before evaluation can proceed. Unification is perfectly content to complete without having given you values for every variable. I am bringing this up because you have to break down a complex, nested expression in another language into several expressions in Prolog.
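To make that decomposition concrete, here is a minimal sketch of my own (g/2, h/2 and f/3 are hypothetical relational counterparts of the functions g, h and f):
g(1, a).
h(2, b).
f(a, b, result).

?- g(1, A), h(2, B), f(A, B, C).
A = a,
B = b,
C = result.

?- f(A, B, C), g(1, A), h(2, B).
A = a,
B = b,
C = result.
Both orderings describe the same relation; only the search order differs.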
Backtracking has nothing to do with it, as far as I can tell. I don't think backtracking to the nearest choice point and resuming from there precludes a non-strict evaluation method, it just happens that Prolog's is strict.
That Prolog pauses after giving each of the several correct answers to a problem has nothing to do with laziness; it is a part of its user interaction protocol. Each answer is calculated eagerly.
Sometimes there will be only one answer but Prolog doesn't know that in advance, so it waits for us to press ; to continue search, in hopes of finding another solution. Sometimes it is able to deduce it in advance and will just stop right away, but only sometimes.
update:
Prolog does no evaluation on its own. All terms are unevaluated, as if "quoted" in Lisp.
Prolog will unfold your predicate definitions as written and is perfectly happy to keep your data structures full of unevaluated uninstantiated holes, if so entailed by your predicate definitions.
Haskell itself does not need any values; the user does, when requesting output.
Similarly, Prolog produces solutions one-by-one, as per the user requests.
Prolog can even be seen as lazier than Haskell, where all arithmetic is strict, i.e. immediate, whereas in Prolog you have to explicitly request arithmetic evaluation with is/2.
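For instance (a small sketch of my own), an arithmetic expression stays an unevaluated term until is/2 is used:
?- X = 1 + 2.
X = 1+2.

?- X is 1 + 2.
X = 3.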
So perhaps the question is ill-posed. Prolog's operations model is just too different. There are no "results" nor "functions", for one; but viewed from another angle, everything is a result, and predicates are "multi"-functions.
As it stands, the question is not correct in what it states. Chronological backtracking does not mean that Prolog will necessarily backtrack "in an example where there can be only one solution".
Consider this:
foo(a, 1).
foo(b, 2).
foo(c, 3).
?- foo(b, X).
X = 2.
?- foo(X, 2).
X = b.
So this is an example that does have only one solution and Prolog recognizes that, and does not attempt to backtrack. There are cases in which you can implement a solution to a problem in a way that Prolog will not recognize that there is only one logical solution, but this is due to the implementation and is not inherent to Prolog's execution model.
You should read up on Prolog's execution model. From the Wikipedia article which you seem to cite, "Operationally, Prolog's execution strategy can be thought of as a generalization of function calls in other languages, one difference being that multiple clause heads can match a given call. In that case, [emphasis mine] the system creates a choice-point, unifies the goal with the clause head of the first alternative, and continues with the goals of that first alternative." Read Sterling and Shapiro's "The Art of Prolog" for a far more complete discussion of the subject.
From Wikipedia I got:
In eager evaluation, an expression is evaluated as soon as it is bound to a variable.
Then I think there are two levels: at the user level (our predicates) Prolog is not eager.
But it is at 'system' level, because variables are implemented as efficiently as possible.
Indeed, attributed variables are implemented to be lazy, and are rather 'orthogonal' to 'logic' Prolog variables.

Why double negation doesn't bind in Prolog

Say I have the following theory:
a(X) :- \+ b(X).
b(X) :- \+ c(X).
c(a).
It simply says true, which is of course correct: a(X) holds because there is no b(X) (with negation as finite failure). Since there is only a b(X) if there is no c(X), and we have c(a), one can state this is true. I was wondering, however, why Prolog does not provide the answer X = a? Say for instance I introduce some semantics:
noOrphan(X) :- \+ orphan(X).
orphan(X) :- \+ parent(_,X).
parent(david,michael).
Of course, if I query noOrphan(michael), this results in true, and noOrphan(david) in false (since I didn't define a parent for david). But I was wondering why there is no proactive way of detecting which persons (michael, david, ...) belong to the noOrphan/1 relation?
This is probably a result of Prolog's backtracking mechanism, but Prolog could maintain a state which tracks whether one is searching the positive way (0, 2, 4, ... negations deep) or the negative way (1, 3, 5, ... negations deep).
Let's start with something simpler. Say \+ X = Y. Here, the negated goal is a predefined built-in predicate. So things are even clearer: X and Y should be different. However, \+ X = Y fails, because X = Y succeeds. So no trace is left of the precise condition under which the goal failed.
Thus, \+ \+ X = Y does produce an empty answer, and not the expected X = Y. See this answer for more.
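In a typical toplevel, this looks as follows (a sketch; the exact output format varies between systems):
?- \+ X = Y.
false.

?- \+ \+ X = Y.
true.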
Given that such simple queries already show problems, you cannot expect too much of user defined goals such as yours.
In the general case, you would have to first reconsider what you actually mean by negation. The answer is much more complex than it seems at first glance. Think of the program p :- \+ p. Should p succeed or fail? Should p be true or not? There are actually two models here, which no longer fits into Prolog's view of going with the minimal model. Considerations such as these opened new branches of logic programming, like Answer Set Programming (ASP).
But let's stick to Prolog. Negation can only be used in very restricted contexts, such as when the goal is sufficiently instantiated and the definition is stratified. Unfortunately, there are no generally accepted criteria for the safe execution of a negated goal. We could wait until the goal is variable free (ground), but this means quite often that we have to wait way too long - in jargon: the negated goal flounders.
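To illustrate the instantiation issue with a small sketch of my own: the same conjunction succeeds or fails depending on whether the negated goal is executed while its arguments are still variables:
?- X = a, Y = b, \+ X = Y.
X = a,
Y = b.

?- \+ X = Y, X = a, Y = b.
false.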
So effectively, general negation does not go very well together with pure Prolog programs. The heart of Prolog really is the pure, monotonic subset of the language. Within the constraint part of Prolog (or its respective extensions) negation might work quite well, though.
I might be misunderstanding the question, and I don't understand the last paragraph.
Anyway, there is a perfectly valid way of detecting which people are not orphans. In your example, you have forgotten to tell the computer something that you know, namely:
person(michael).
person(david).
% and a few more
person(anna).
person(emilia).
not_orphan(X) :- person(X), \+ orphan(X).
orphan(X) :- person(X), \+ parent(_, X).
parent(david, michael).
parent(anna, david).
?- orphan(X).
X = anna ;
X = emilia.
?- not_orphan(X).
X = michael ;
X = david ;
false.
I don't know how exactly you want to define an "orphan", as this definition is definitely a bit weird, but that's not the point.
In conclusion: you can't expect Prolog to know that michael and david and all others are people unless you state it explicitly. You also need to state explicitly that orphan or not_orphan are relationships that only apply to people. The world you are modeling could also have:
furniture(red_sofa).
furniture(kitchen_table).
abstract_concept(love).
emotion(disbelief).
and you need a way of leaving those out of your family affairs.
I hope that helps.

PROLOG predicate order

I've got a very large number of equations which I am trying to use PROLOG to solve. However, I've come a minor cropper in that they are not specified in any sort of useful order - that is, some, if not many, variables are used before they are defined. These are all specified within the same predicate. Can PROLOG cope with the predicates being specified in a random order?
Absolutely... ni (Italian for "yes and no").
That is, ideally Prolog requires that you specify what must be computed, not how, writing down the equations controlling the solution in a fairly general logical form, Horn clauses.
But this ideal is far from being reached, and this is the point where we, as programmers, play a role. You should try to topologically sort the formulas if you want Prolog to just apply arithmetic/algorithms.
But at that point, Prolog is no more useful than any other procedural language. It just makes such a topological sort easier, in the sense that formulas can be read in as terms (the built-in reader is a full Prolog parser!), variables can be identified and quantified easily, terms transformed, evaluated, and the like (metalanguage features, a strong point of Prolog).
The situation changes if you can use CLP(FD). Just one example, a bidirectional factorial (cool, isn't it?), from the documentation of the shining implementation that Markus Triska developed for SWI-Prolog:
You can also use CLP(FD) constraints as a more declarative alternative for ordinary integer arithmetic with is/2, >/2 etc. For example:
:- use_module(library(clpfd)).
n_factorial(0, 1).
n_factorial(N, F) :- N #> 0, N1 #= N - 1, F #= N * F1, n_factorial(N1, F1).
This predicate can be used in all directions. For example:
?- n_factorial(47, F).
F = 258623241511168180642964355153611979969197632389120000000000 ;
false.
?- n_factorial(N, 1).
N = 0 ;
N = 1 ;
false.
?- n_factorial(N, 3).
false.
To make the predicate terminate if any argument is instantiated, add the (implied) constraint F #\= 0 before the recursive call. Otherwise, the query n_factorial(N, 0) is the only non-terminating case of this kind.
Thus, if you write your equations in CLP(FD), you have a much better chance of getting your 'equation system' solved as is. SWI-Prolog has dedicated debugging support for the low-level details used to solve CLP(FD) constraints.
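As a small sketch of what this buys you for the original problem (my own example, again assuming library(clpfd)): the constraints of an 'equation system' can be stated in any order, even 'using' a variable before the equation that determines it:
:- use_module(library(clpfd)).

% Y is constrained in terms of X before X is determined.
solve(X, Y) :-
    Y #= X + 3,
    X #= 2 * 2.

?- solve(X, Y).
X = 4,
Y = 7.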
HTH
