Prolog: Coming up with Herbrand Universe and Herbrand Base

I have serious problems understanding the concept of a Prolog program and the corresponding Herbrand universe, Herbrand base and so on.
For example, if I have a Prolog program:
p(X,Y) :- q(X,Y), q(Y,X).
s(a).
s(b).
s(c).
q(a,b).
q(b,a).
q(a,c).
I mean - I know what the Prolog program is supposed to do, but I cannot come up with the corresponding Herbrand Universe or Herbrand Base.
For the Herbrand universe, I have to collect all variable-free constructor terms.
What are constructors in this context?
I would simply guess that HU = {a,b,c, s(a), s(b), s(c), q(a,a), q(a,b), q(b,a)... p(a,a), p(a,b), ... s(s(a))....}
How do I come up with the Herbrand base?
I'm sorry for all the questions, but I think I'm mixing so many different "Herbrand" things up :-(.
Can anyone help me and explain things to me?
Thank you.

Wikipedia says, "a Herbrand universe ... is defined starting from the set of constants and function symbols in a set of clauses."
So if we follow this definition, here it would consist of the atoms a, b, c and of compound terms with functors s/1, q/2 and p/2 whose arguments are themselves in the Herbrand universe:
hu(a).
hu(b).
hu(c).
hu(s(Y)):- hu(Y).
hu(q(Y,Z)):- hu(Y), hu(Z).
hu(p(Y,Z)):- hu(Y), hu(Z).
"The set of all ground atoms that can be formed from predicate symbols from S and terms from H is called the Herbrand base":
hb(s(Y)):- hu(Y).
hb(q(Y,Z)):- hu(Y), hu(Z).
hb(p(Y,Z)):- hu(Y), hu(Z).
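To see the definitions in action, you can query these predicates directly. The transcript below is roughly what I would expect from SWI-Prolog, so treat it as a sketch: note that the depth-first enumeration is unfair, i.e. backtracking keeps descending into s/1 and never reaches the q/2 and p/2 cases, although testing a concrete ground term still works:
?- hu(T).
T = a ;
T = b ;
T = c ;
T = s(a) ;
T = s(b) ;
T = s(c) ;
T = s(s(a)) .

?- hu(q(a, s(b))).
true.

?- hb(p(s(a), c)).
true.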

Related

Are PROLOG facts bilateral?

So I have just started programming in PROLOG (SWI distribution). I have a good background in logic and I am familiar with facts, rules, quantification and all that vocabulary.
As far as I know, you can define a fact such as:
married(a,b).
And I know that if you make a query like:
?- married(X,b).
The answer will be X = a. My question is: if I make a rule that uses the previously declared fact, it will consider that "a is married to b", but will it also consider that "b is married to a", or do I have to declare another fact like:
married(b,a).
for it to work? Same for any kind of bilateral relationship that can be represented as a fact.
Do you mean whether the relation is automatically symmetric? Then no: if you had a directed graph with edge(a,b), you would not want the other direction edge(b,a) to be inferred. Also, what about relations of arity greater than 2?
You can always create the symmetric closure of a predicate as:
r_sym(X,Y) :-
    r(X,Y).
r_sym(X,Y) :-
    r(Y,X).
Using a new predicate name prevents infinite derivation chains r(X,Y) -> r(Y,X) -> r(X,Y) -> .....
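Applied to the married/2 example from the question, that looks like this (a sketch; married_sym is just a name I am introducing here):
married(a,b).

married_sym(X,Y) :-
    married(X,Y).
married_sym(X,Y) :-
    married(Y,X).

?- married_sym(b,X).
X = a.
Defining married(X,Y) :- married(Y,X). directly, without the new predicate name, would give exactly the infinite derivation chain mentioned above.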

What is the difference between Well-formed formula and a preposition in propositional logic

What is the exact difference between Well-formed formula and a proposition in propositional logic?
There's really not much given about Wff in my book.
My book says: "Propositions are also called sentences or statements. Another term formulae or well-formed formulae also refer to the same. That is, we may also call Well formed formula to refer to a preposition". Does that mean they both are the exact same thing?
Proposition: A statement which is true or false; easy for people to read, but hard to manipulate using logical equivalences.
WFF: A precise logical statement which is true or false; there should be an official rigorous definition in your textbook. There are 4 rules a wff must follow. Harder for humans to read, but much more precise and easier to manipulate.
Example:
Proposition: All men are mortal.
WFF: Let P be the set of people, let M(x) denote "x is a man" and let S(x) denote "x is mortal". Then: for all x in P, M(x) -> S(x).
It is most likely that there is a typo in the book. In the quote "Propositions are also called sentences or statements. Another term formulae or well-formed formulae also refer to the same. That is, we may also call Well formed formula to refer to a preposition", the word "preposition" should be "proposition".
Proposition: A statement which is either true or false, but not both.
Propositional form (necessary to understand what a well-formed formula is): An assertion which contains at least one propositional variable.
Well-formed formula: A propositional form satisfying the following rules; any wff (well-formed formula) can be derived using these rules:
If P is a propositional variable, then it is a wff.
If P is a wff, then ~P is a wff.
If P and Q are two wffs, then (P and Q), (P or Q), (P implies Q), (P is equivalent to Q) are all wffs.
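To make the inductive definition concrete, here is a small Prolog sketch of a wff checker (my own encoding, not from any textbook): formulas are terms built with not/1, and/2, or/2, implies/2 and equiv/2, and the propositional variables are assumed to be the atoms p, q and r.
% Propositional variables (assumed for this sketch).
prop_var(p).
prop_var(q).
prop_var(r).

% Rule 1: a propositional variable is a wff.
wff(P) :- prop_var(P).
% Rule 2: if P is a wff, then ~P is a wff.
wff(not(P)) :- wff(P).
% Rule 3: if P and Q are wffs, so are their binary combinations.
wff(and(P,Q))     :- wff(P), wff(Q).
wff(or(P,Q))      :- wff(P), wff(Q).
wff(implies(P,Q)) :- wff(P), wff(Q).
wff(equiv(P,Q))   :- wff(P), wff(Q).
A query like ?- wff(implies(and(p,q),r)). should succeed, while ?- wff(implies(p)). should fail, because implies/1 matches none of the formation rules.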

Herbrand in Prolog

I was reading about topics such as fuzzy logic and Horn clauses, and I saw some simple example applications of them in Prolog.
The reason for this question is that these topics are also related to Herbrand's theorem, which I consider a little more complicated than the others, at least for me, and I had difficulty finding an application example related to Prolog.
That's why I would like some application examples in Prolog that are not so basic (the examples that generate the Herbrand model according to the definition are the basic ones, and those are what I always find when searching about Herbrand), focused specifically on Herbrand. Thanks
This is an example program in Prolog:
p(f(X)):- q(g(X)).
p(f(X)):- p(X).
p(a).
q(b).
A set of clauses has a model iff it has a Herbrand model.
To prove that clause C is a consequence of clauses Cs, simply show that Cs∪~C is unsatisfiable.
This is, in abstract terms, what Prolog does, via a special case of resolution: You can regard the execution of a (pure—what else) Prolog program as the Prolog engine trying to find a resolution refutation of the negated query.
The form of resolution that Prolog implements, SLD resolution with depth-first search, does not guarantee, though, that a refutation is found for every unsatisfiable set of clauses: it is incomplete.
In Prolog, procedural properties may impact the derivation of consequences. For example, with your program:
?- p(X).
(waiting forever: the query does not terminate)
Whereas if we simply reorder the clauses as:
q(b).
p(a).
p(f(X)):- q(g(X)).
p(f(X)):- p(X).
we get:
?- p(X).
X = a ;
X = f(a) ;
X = f(f(a)) .
Note though that many important declarative properties are indeed preserved in the pure and monotonic subset of Prolog. See logical-purity for more information.

Why is the order of quantifiers important? How is the order determined?

I want to know why the order of quantifiers is important in a logic formula.
When I read books about logic programming, such points are mentioned, but they do not say why.
Could anyone explain this with some examples?
Also, how can we determine the order of quantifiers from a given logic formula?
Thanks in advance!
You would be well advised to read a book about first-order logic before the books about
logic programming.
Consider the true statement:
1. Everybody has a mother
Let's formalize it in FOL. To keep it simple, we'll say
that the universe of discourse is the set of people, i.e.
our individual variables x, y, z... range over people. Then
1 becomes:
1F. (x)(Ey)Mother(y,x)
which we can read as: For every person x there exists
some person y such that y is the mother of x.
Now let's swap the order of the universal quantifier (x) and existential
quantifier (Ey):
2F. (Ey)(x)Mother(y,x)
That reads: There is some person y such that for every person x,
y is the mother of x. Or in plain English:
2. There is somebody who is the mother of everybody
You see that swapping the quantifiers changes the meaning of the statement,
taking us from the true statement 1 to the false statement 2. Indeed, to the absurdly false statement
2, which entails that somebody is their own mother.
That's why the order of quantifiers matters.
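If you want to see the difference concretely in Prolog, you can check both readings against the same finite set of facts. The family below is made up, and person/1, mother/2 and the two test predicates are names I am introducing here, so take this as a sketch:
person(alice).
person(bob).
person(carol).

mother(carol, alice).   % carol is the mother of alice
mother(carol, bob).
mother(alice, carol).

% 1F: for every person X there is some person Y such that Y is the mother of X.
everybody_has_a_mother :-
    forall(person(X), (person(Y), mother(Y, X))).

% 2F: there is some person Y such that Y is the mother of every person X.
somebody_mothers_everybody :-
    person(Y),
    forall(person(X), mother(Y, X)).

?- everybody_has_a_mother.
true.

?- somebody_mothers_everybody.
false.
Swapping the quantifiers corresponds exactly to moving the goal person(Y) inside or outside the forall/2.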
How can we determine the order of quantifiers from a given logic formula?
Well, in 1F and 2F, for example, all the variables are already bound by quantifiers,
so there's nothing to determine. The order of the quantifiers is what you see,
left to right.
Suppose one of the variables was free (not bound), e.g.
3F. (Ey)Mother(y,x)
You might read that as: there is someone who is the mother of x, where x is a free variable ranging over persons.
But this formula really doesn't express any statement. It expresses a unary predicate of persons, the predicate "someone is the mother of x". If you free up the remaining variable:
4F. Mother(x,y)
then you have the binary predicate, or relation: x is the mother of y.
A formula with 1,2,...,n free variables expresses a unary, binary,...,n-ary predicate.
Given a predicate, you can make a statement by binding free variables with quantifiers and/or substituting individual constants for the free variables. From 4F you can make:
(x)(y)Mother(x,y) (Everybody is everybody's mother)
(Ex)(y)Mother(x,y) (Somebody is everybody's mother)
(Ex)(Ey)Mother(x,y) (Somebody is somebody's mother)
(x)Mother(x,Arnold) (Everybody is the mother of Arnold)
(x)Mother(Bernice,x) (Bernice is the mother of everybody)
Mother(Arnold,Bernice) (Arnold is the mother of Bernice)
...
...
and so on ad nauseam.
What this should make clear is that if a formula has free variables, and therefore expresses
a predicate, the formula as such does not imply any particular way of quantifying
the free variables, or that they should be quantified at all.

Herbrand universe and least Herbrand model

I read the question asked in "Herbrand universe, Herbrand Base and Herbrand Model of binary tree (prolog)" and the answers given, but I have a slightly different question, more like a request for confirmation, and hopefully my confusion will be clarified.
Let P be a program such that we have the following facts and rule:
q(a, g(b)).
q(b, g(b)).
q(X, g(X)) :- q(X, g(g(g(X)))).
From the above program, the Herbrand universe is
Up = {a, b, g(a), g(b), q(a, g(a)), q(a, g(b)), q(b, g(a)), q(b, g(b)), g(g(a)), g(g(b)), ... etc.}
Herbrand base:
Bp = { q(s, t) | s, t ∈ Up }
Now to my question (forgive me for my ignorance): I included q(a, g(a)) as an element of my Herbrand universe, but the facts only state q(a, g(b)). Does that mean that q(a, g(a)) is not supposed to be there?
Also, since the Herbrand models are subsets of the Herbrand base, how do I determine the least Herbrand model by induction?
Note: I have done a lot of research on this, and some parts are quite clear to me, but I still have this doubt, which is why I want to seek the community's opinion. Thank you.
From having the fact q(a,g(b)) you cannot conclude whether or not q(a,g(a)) is in the model. You will have to generate the model first.
For determining the model, start with the facts {q(a,g(b)), q(b,g(b))} and now try to apply your rules to extend it. In your case, however, there is no way to match the right-hand side (the body) of the rule q(X,g(X)) :- q(X,g(g(g(X)))) to the above facts. Therefore, you are done.
Now imagine the rule
q(a,g(Y)) :- q(b,Y).
This rule could be used to extend our set. In fact, the instance
q(a,g(g(b))) :- q(b,g(b)).
is used: If q(b,g(b)) is present, conclude q(a,g(g(b))). Note that we are using here the rule right-to-left. So we obtain
{q(a,g(b)), q(b,g(b)), q(a,g(g(b)))}
thereby reaching a fixpoint.
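For what it's worth, this immediate-consequence step can itself be written down in Prolog. The sketch below is my own encoding (clause_/2, tp_step/2 and lfp/2 are names I am making up here), restricted to clauses with at most one body atom, which is all this example needs:
:- use_module(library(lists)).    % member/2
:- use_module(library(ordsets)).  % ord_union/3

% The program above, represented as head-body pairs ('true' marks a fact).
clause_(q(a, g(b)), true).
clause_(q(b, g(b)), true).
clause_(q(a, g(Y)), q(b, Y)).

% One application of the T_P operator to a set M of ground atoms.
tp_step(M, M1) :-
    findall(H,
            ( clause_(H, B),
              ( B == true -> true ; member(B, M) ),
              ground(H)
            ),
            Hs),
    sort(Hs, Heads),              % order and remove duplicates
    ord_union(M, Heads, M1).

% Iterate T_P from a given set until nothing new is added.
lfp(M0, Fix) :-
    tp_step(M0, M1),
    ( M1 == M0 -> Fix = M0 ; lfp(M1, Fix) ).
Starting from the empty set, ?- lfp([], Model). should iterate twice and then stop, giving exactly the fixpoint {q(a,g(b)), q(b,g(b)), q(a,g(g(b)))} from above, as a sorted list.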
Now take as another example the rule you suggested:
q(X, g(g(g(X)))) :- q(X, g(X)).
which permits us (I will no longer show the instantiated rule) to generate in one step:
{q(a,g(b)), q(b,g(b)), q(a,g(g(g(b)))), q(b, g(g(g(b))))}
But this is not the end, since, again, the rule can be applied to produce even more! In fact, you now have an infinite model!
{q(a, g^(n+1)(b)), q(b, g^(n+1)(b))}
This right-to-left reading is often very helpful when you are trying to understand recursive rules in Prolog. The top-down reading (left-to-right) is often quite difficult, in particular, since you have to take into account backtracking and general unification.
Concerning your question:
"Also since the Herbrand models are subset of the Herbrand base, how do i determine the least Herbrand model by induction?"
If you have a set P of Horn clauses, the definite program, then you can define
a program operator:
T_P(M) := { H S | S is a ground substitution, (H :- B) in P and every atom of B S is in M }
The least model is:
inf(P) := intersect { M | M |= P }
Please note that not all models of a definite program are fixpoints of the
program operator. For example, the full Herbrand interpretation (the entire Herbrand base) is always a model of
the program P, which shows that definite programs are always consistent, but
it is not necessarily a fixpoint.
On the other hand each fixpoint of the program operator is a model of the
definite program. Namely if you have T_P(M) = M, then one can conclude
M |= P. So that after some further mathematical reasoning(*) one finds that
the least fixpoint is also the least model:
lfp(T_P) = inf(P)
But we need some further considerations before we can say that we can determine
the least model by a kind of computation. Namely, one easily observes that the
program operator is continuous, i.e. it preserves unions of infinite chains, since
Horn clauses do not have universal quantifiers in their bodies:
union_i T_P(M_i) = T_P(union_i M_i)
So that again, after some further mathematical reasoning(*), one finds that we can
compute the least fixpoint via iteration, which can be used for simple
induction. Every element of the least model has a simple derivation of finite
depth:
union_i T_P^i({}) = lfp(T_P)
Bye
(*)
Most likely you find further hints on the exact mathematical reasoning
needed in this book, but unfortunately I can't recall which sections
are relevant:
Foundations of Logic Programming, John Wylie Lloyd, 1984
http://www.amazon.de/Foundations-Programming-Computation-Artificial-Intelligence/dp/3642968287
