Are PROLOG facts bilateral?

So I have just started programming in Prolog (the SWI distribution). I have a good background in logic and I am familiar with facts, rules, quantification and all that vocabulary.
As far as I know, you can define a fact such as:
married(a,b).
And I know that if you make a query like:
?- married(X,b).
The answer will be X = a. My question is: if I make a rule that uses the fact declared above, it will consider that "a is married to b", but will it also consider that "b is married to a", or do I have to declare another fact like:
married(b,a).
for it to work? Same for any kind of bilateral relationship that can be represented as a fact.

Do you mean if the relation is automatically symmetric? Then no - suppose you have a directed graph with edge(a,b), then you would not want the other direction edge(b,a) to be inferred. Also, what about relations of arity greater than 2?
You can always create the symmetric closure of a predicate as:
r_sym(X,Y) :-
    r(X,Y).
r_sym(X,Y) :-
    r(Y,X).
Using a new predicate name prevents infinite derivation chains r(X,Y) -> r(Y,X) -> r(X,Y) -> ...
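Applied to the question's example, a minimal sketch (the predicate name spouse/2 is an assumption for illustration):

```prolog
married(a, b).

% Symmetric closure under a new predicate name:
% spouse/2 holds in both directions without looping.
spouse(X, Y) :- married(X, Y).
spouse(X, Y) :- married(Y, X).

% ?- spouse(b, W).
% W = a.
```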


Does the Prolog symbol :- mean Implies, Entails or Proves?

In Prolog we can write very simple programs like this:
mammal(dog).
mammal(cat).
animal(X) :- mammal(X).
The last line uses the symbol :- which informally lets us read the final rule as: if X is a mammal then it is also an animal.
I am beginning to learn Prolog and trying to establish which of the following is meant by the symbol :-
Implies (⇒)
Entails (⊨)
Provable (⊢)
In addition, I am not clear on the difference between these three. I am trying to read threads like this one, but the discussion is at a level above my capability, https://math.stackexchange.com/questions/286077/implies-rightarrow-vs-entails-models-vs-provable-vdash.
My thinking:
Prolog works by pattern-matching symbols (unification and search) and so we might be tempted to say the symbol :- means 'syntactic entailment'. However this would only be true of queries that are proven to be true as a result of that syntactic process.
The symbol :- is used to create a database of facts, and therefore is semantic in nature. That means it could be one of Implies (⇒) or Entails (⊨) but I don't know which.
Neither. Or, rather, if at all, then it is implication. The other two symbols belong to the meta-language. The Mathematics Stack Exchange answers explain this quite nicely.
To see why :- is not quite implication, consider:
p :- p.
In logic, both truth values make this a valid sentence. But in Prolog we stick to the minimal model. So p is false. Prolog uses a subset of predicate logic such that there actually is only one minimal model. And worse, Prolog's actual default execution strategy makes this an infinite loop.
Nevertheless, the most intuitive way to read LHS :- RHS. is as a way to generate new knowledge: provided RHS is true, it follows that LHS is also true. This way one avoids all the paradoxes related to implication.
The right-to-left direction is a bit counterintuitive. It is motivated by Prolog's actual execution strategy (which goes left-to-right in this representation).
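A minimal sketch showing both points at once, using the single-clause program from above:

```prolog
% p is false in the minimal model: nothing ever makes the
% body succeed. Yet querying p loops forever under Prolog's
% default depth-first execution strategy.
p :- p.

% ?- p.
% ... does not terminate ...
```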
:- is usually read as if, so something like:
a :- b, c.
reads as
| a is true if b and c are true.
In formal logic, the above would be written as
| a ← b ∧ c
Or
| b and c imply a
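A concrete instance of the a :- b, c. pattern (the predicate names below are assumptions for illustration):

```prolog
rich(ann).
healthy(ann).

% happy(X) is true if X is rich and X is healthy.
happy(X) :- rich(X), healthy(X).

% ?- happy(ann).
% true.
```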

Why does this expression not unify

I have defined the following knowledge base:
leaf(_).
tree(X) :- leaf(X).
and was expecting the query:
leaf(X) = tree(X).
to return true, because any leaf should, per definition, be a tree.
Unfortunately, activating trace doesn't yield any useful results.
Here is a link to this minimal example if you'd like to play around with it.
Short answer: here you check whether the term leaf(X) can be unified with the term tree(X). Since these are terms with different functors, this fails.
The tree/1 and leaf/1 in your statement leaf(X) = tree(X) are not the predicates. What you have basically written here is:
=(leaf(X), tree(X))
So you call the (=)/2 predicate with the terms leaf(X) and tree(X).
Now in Prolog two terms unify if:
at least one of them is an unbound variable (which is then bound to the other term); or
they are the same atom; or
they are compound terms with the same functor and arity, and the arguments are elementwise unifiable.
Since the functor leaf/1 differs from the functor tree/1, leaf(X) and tree(X) cannot be unified.
Even if we defined a predicate with the intent of checking whether two predicates are semantically the same, this would fail in general. You are basically aiming to solve the equivalence problem, which is undecidable: in general, one cannot construct an algorithm that verifies whether two Turing machines decide the same language. Prolog is a Turing-complete language, so we can translate any predicate into a Turing machine and vice versa. That means we cannot compute whether two predicates accept the same input.
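To contrast term unification with actually calling the predicates, a minimal sketch based on the program above:

```prolog
leaf(_).
tree(X) :- leaf(X).

% Unification compares syntactic structure only:
% ?- leaf(X) = tree(X).
% false.                 % different functors: leaf/1 vs tree/1
% ?- leaf(X) = leaf(Y).
% X = Y.                 % same functor/arity, arguments unify

% To check the intended relationship, call the predicate:
% ?- tree(anything).
% true.
```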

What are the requirements a computer function must meet to be considered "monotonic"?

What are the requirements a computer function/procedure/predicate must meet to be considered "monotonic"?
Let A be something,
Let B be something,
Let R be a monotonic relationship between A and B,
Let R_ be a non-monotonic relationship between A and B,
Let R become false if R_ is true,
Let R_ become true if R is false,
Let C be a constraint in consideration of R,
Let C become false if R_ is true,
Let D be the collection of constraints C (upon relationship R).
What is D?
I have reviewed some literature, e.g. the Wikipedia article "Monotonic function".
I am most interested in a practical set of criteria I can apply when
pragmatically involved with computer programming.
What are some tips and best practices I should follow when creating and designing my functions, such that they are more likely to be "monotonic"?
In logic programming, and also in logic, the classification "monotonic" almost invariably refers to monotonicity of entailment.
This fundamental property is encountered, for example, in classical first-order logic: when you can derive a consequence from a set of clauses, you can also derive it from any extension of that set. Conversely, removing a clause never yields consequences that did not hold before.
In the pure subset of Prolog, this property also holds, from a declarative perspective. We therefore sometimes call it the pure monotonic subset of Prolog, since impurities do not completely coincide with constructs that destroy monotonicity.
Monotonicity of entailment is the basis and sometimes even necessary condition for several approaches that reason over logic programs, notably declarative debugging.
Note that Prolog has several language constructs that may prevent such reasoning in general. Consider for example the following Prolog program:
f(a).
f(b).
f(c).
And the following query:
?- setof(X, f(X), [_,_]).
false.
Now I remove one of the facts from the program, indicated here by replacing it with a clause that can never succeed:
f(a) :- false.
f(b).
f(c).
If Prolog programs were monotonic, then every query that previously failed would definitely now fail all the more, since I have removed something that was previously the case.
However, we now have:
?- setof(X, f(X), [_,_]).
true.
So, setof/3 is an example of a predicate that violates monotonicity!
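To see why the query flips, note that the list pattern [_,_] in setof(X, f(X), [_,_]) demands exactly two solutions; a minimal sketch:

```prolog
f(a).
f(b).
f(c).

% ?- setof(X, f(X), Fs).
% Fs = [a, b, c].
%
% The pattern [_,_] requires a two-element solution list, so
% setof(X, f(X), [_,_]) fails with three facts and succeeds
% once f(a) is removed: the opposite of monotonic behaviour.
```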

Herbrand in Prolog

I have been looking at topics such as fuzzy logic and Horn clauses, and I saw some simple example applications of them in Prolog.
The reason for this question is that these topics also fall under Herbrand's theorem, which I consider a little more complicated than the others, at least for me, and I had difficulty finding an application example related to Prolog.
That is why I would like some applied examples using Prolog that are not so basic (the examples that generate the Herbrand model are, by definition, basic rules, and I always find that kind of example when I search about Herbrand), specifically about Herbrand. Thanks.
This is an example of applicative code in Prolog:
p(f(X)):- q(g(X)).
p(f(X)):- p(X).
p(a).
q(b).
A set of clauses has a model iff it has a Herbrand model.
To prove that clause C is a consequence of clauses Cs, simply show that Cs ∪ {¬C} is unsatisfiable.
This is, in abstract terms, what Prolog does, via a special case of resolution: You can regard the execution of a (pure—what else) Prolog program as the Prolog engine trying to find a resolution refutation of the negated query.
The form of resolution that Prolog implements, SLD resolution with depth-first search, does not guarantee, though, that every unsatisfiable set of clauses is refuted: it is incomplete.
In Prolog, procedural properties may impact the derivation of consequences. For example, with your program:
?- p(X).
(no answer: the query loops forever)
Whereas if we simply reorder the clauses as:
q(b).
p(a).
p(f(X)):- q(g(X)).
p(f(X)):- p(X).
we get:
?- p(X).
X = a ;
X = f(a) ;
X = f(f(a)) .
Note though that many important declarative properties are indeed preserved in the pure and monotonic subset of Prolog. See logical-purity for more information.

Herbrand universe and Least herbrand Model

I read the question asked in Herbrand universe, Herbrand Base and Herbrand Model of binary tree (prolog) and the answers given, but I have a slightly different question, more like a confirmation, and hopefully my confusion will be clarified.
Let P be a program such that we have the following facts and rule:
q(a, g(b)).
q(b, g(b)).
q(X, g(X)) :- q(X, g(g(g(X)))).
From the above program, the Herbrand Universe
Up = {a, b, g(a), g(b), q(a, g(a)), q(a, g(b)), q(b, g(a)), q(b, g(b)), g(g(a)), g(g(b)), ...}
Herbrand base:
Bp = {q(s, t) | s, t ∈ Up}
Now comes my question (forgive me for my ignorance): I included q(a, g(a)) as an element in my Herbrand universe, but the fact states q(a, g(b)). Does that mean that q(a, g(a)) is not supposed to be there?
Also, since Herbrand models are subsets of the Herbrand base, how do I determine the least Herbrand model by induction?
Note: I have done a lot of research on this, and some parts are clear to me, but I still have this doubt; that's why I want to seek the community's opinion. Thank you.
From having the fact q(a,g(b)) you cannot conclude whether or not q(a,g(a)) is in the model. You will have to generate the model first.
For determining the model, start with the facts {q(a,g(b)), q(b,g(b))} and try to apply your rules to extend this set. In your case, however, there is no way to match the body of the rule q(X,g(X)) :- q(X,g(g(g(X)))). against the above facts. Therefore, you are done.
Now imagine the rule
q(a,g(Y)) :- q(b,Y).
This rule could be used to extend our set. In fact, the instance
q(a,g(g(b))) :- q(b,g(b)).
is used: if q(b,g(b)) is present, conclude q(a,g(g(b))). Note that here we are using the rule right-to-left. So we obtain
{q(a,g(b)), q(b,g(b)), q(a,g(g(b)))}
thereby reaching a fixpoint.
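The iteration of adding all derivable heads until nothing changes can be sketched in Prolog itself. This is a minimal sketch, assuming the program is represented as clause_(Head, BodyList) facts; clause_/2, tp/2, all_in/2 and lfp/2 are illustrative names, not standard predicates:

```prolog
% Program P: q(a,g(b)).  q(b,g(b)).  q(a,g(Y)) :- q(b,Y).
clause_(q(a, g(b)), []).
clause_(q(b, g(b)), []).
clause_(q(a, g(Y)), [q(b, Y)]).

% tp(M, M1): one application of the T_P operator to the set M.
tp(M, M1) :-
    findall(H, (clause_(H, B), all_in(B, M)), Hs),
    append(M, Hs, M0),
    sort(M0, M1).                 % sort/2 also removes duplicates

all_in([], _).
all_in([G|Gs], M) :- member(G, M), all_in(Gs, M).

% lfp(M, Fix): iterate tp/2 from M until a fixpoint is reached.
lfp(M, Fix) :-
    tp(M, M1),
    (  M1 == M
    -> Fix = M
    ;  lfp(M1, Fix)
    ).

% ?- lfp([], M).
% M = [q(a,g(b)), q(a,g(g(b))), q(b,g(b))].
```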
Now take, as another example, the rule you suggested:
q(X, g(g(g(X)))) :- q(X, g(X)).
which permits us (I will no longer show the instantiated rule) to generate in one step:
{q(a,g(b)), q(b,g(b)), q(a,g(g(g(b)))), q(b, g(g(g(b))))}
But this is not the end since, again, the rule can be applied to produce even more. In fact, you now have an infinite model!
{q(a, g^(n+1)(b)), q(b, g^(n+1)(b))}
This right-to-left reading is often very helpful when you are trying to understand recursive rules in Prolog. The top-down reading (left-to-right) is often quite difficult, in particular, since you have to take into account backtracking and general unification.
Concerning your question:
"Also since the Herbrand models are subset of the Herbrand base, how do i determine the least Herbrand model by induction?"
If you have a set P of horn clauses, the definite program, then you can define
a program operator:
T_P(M) := { Hσ | σ is a ground substitution, (H :- B) ∈ P and Bσ ⊆ M }
The least model is:
inf(P) := ⋂ { M | M ⊨ P }
Please note that not all models of a definite program are fixpoints of the
program operator. For example, the full Herbrand base is always a model of
the program P, which shows that definite programs are always consistent, but
it is not necessarily a fixpoint.
On the other hand, each fixpoint of the program operator is a model of the
definite program. Namely, if you have T_P(M) = M, then one can conclude
M ⊨ P. So, after some further mathematical reasoning (*), one finds that
the least fixpoint is also the least model:
lfp(T_P) = inf(P)
But we need some further considerations before we can say that we can determine
the least model by a kind of computation. Namely, one easily observes that the
program operator is continuous, i.e. it preserves infinite unions of chains, since
Horn clauses do not have universal quantifiers in their bodies:
union_i T_P(M_i) = T_P(union_i M_i)
So, again after some further mathematical reasoning (*), one finds that we can
compute the least fixpoint via iteration, which can be used for simple
induction. Every element of the least model has a simple derivation of finite
depth:
union_i T_P^i({}) = lfp(T_P)
Bye
(*)
Most likely you find further hints on the exact mathematical reasoning
needed in this book, but unfortunately I can't recall which sections
are relevant:
Foundations of Logic Programming, John Wylie Lloyd, 1984
http://www.amazon.de/Foundations-Programming-Computation-Artificial-Intelligence/dp/3642968287
