Can you give me an example of a transitive closure of a relation that is not an equivalence relation?

I am having trouble finding examples of transitive closures of relations that are not equivalence relations.

Any transitive relation is its own transitive closure, so just think of small transitive relations to find a counterexample. Let your set be {a, b, c} with the relation {(a,b), (b,c), (a,c)}. This relation is transitive, but because pairs like (a,a) are missing, it is not reflexive and hence not an equivalence relation.
Even more trivially: take any nonempty set and define the empty relation on it. That relation is vacuously transitive, and even vacuously symmetric, but it is not an equivalence relation because it is missing the pairs that would make it reflexive.
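Both examples can be checked mechanically. A minimal Python sketch (the helper names are my own, with a relation represented as a set of pairs):

```python
# Check the three equivalence-relation properties for a relation
# given as a set of pairs.
def is_reflexive(rel, universe):
    return all((x, x) in rel for x in universe)

def is_symmetric(rel):
    return all((y, x) in rel for (x, y) in rel)

def is_transitive(rel):
    return all((x, z) in rel
               for (x, y) in rel
               for (y2, z) in rel
               if y == y2)

A = {"a", "b", "c"}
R = {("a", "b"), ("b", "c"), ("a", "c")}

print(is_transitive(R))      # True: R is its own transitive closure
print(is_reflexive(R, A))    # False: (a,a), (b,b), (c,c) are missing
print(is_symmetric(set()))   # True: the empty relation, vacuously
print(is_transitive(set()))  # True, vacuously
```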

Related

Resolution refutation in propositional logic with 2 clashing literals

I have applied the resolution rule by process of elimination and ended up with the set
{ {p, q}, {¬p}, {¬q} }.
According to my textbook: Lemma: if two clauses clash on more than one literal, their resolvent is a trivial clause ... it then goes on to say that it is not strictly incorrect to perform resolution on such clauses, but since trivial clauses contribute nothing to the satisfiability or unsatisfiability of a set of clauses, we agree to delete them.
But elsewhere I have read not to remove them, since there is no guarantee that both of those clauses can be true at once.
So would the above clauses leave me with the empty set {}, making my final answer that the set is unsatisfiable? Or do I leave them as my final answer? The problem said to prove that the set IS satisfiable, so I'm guessing I should leave the clauses in so that it is, but the textbook says to remove them.
In your example there aren't any two clauses that clash in two literals. Two such clauses would be {p, r, q} and {¬p, ¬r, s}: resolving them gives {r, ¬r, q, s}, which is always satisfiable (a tautology), so you would remove it as useless.
In your example you end up with the empty clause after two resolution steps on the three clauses: resolving {p, q} with {¬p} yields {q}, and resolving {q} with {¬q} yields the empty clause. So the set of clauses is unsatisfiable.
If the task was to prove that something is satisfiable, there must be something wrong earlier in the derivation.
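The two steps can be replayed mechanically. A small Python sketch (my own encoding: clauses are frozensets of string literals, with "~" marking negation):

```python
# One resolution step on clauses represented as frozensets of literals.
def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2, lit):
    """Resolve c1 and c2 on the clashing pair lit / neg(lit)."""
    assert lit in c1 and neg(lit) in c2
    return (c1 - {lit}) | (c2 - {neg(lit)})

c1 = frozenset({"p", "q"})
c2 = frozenset({"~p"})
c3 = frozenset({"~q"})

step1 = resolve(c1, c2, "p")     # {q}
step2 = resolve(step1, c3, "q")  # frozenset(): the empty clause
print(step1, step2)
```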

Are PROLOG facts bilateral?

So I have just started programming in Prolog (the SWI distribution). I have a good background in logic and I am familiar with facts, rules, quantification and all that vocabulary.
As far as I know, you can define a fact such as:
married(a,b).
And I know that if you make a query like:
?- married(X,b).
the answer will be X = a. My question is: if I make a rule that uses the previously declared fact, it will consider that "a is married to b", but will it also consider that "b is married to a", or do I have to declare another fact like:
married(b,a).
for it to work? Same for any kind of bilateral relationship that can be represented as a fact.
Do you mean whether the relation is automatically symmetric? Then no: suppose you have a directed graph with edge(a,b); you would not want the other direction edge(b,a) to be inferred. Also, what about relations of arity greater than 2?
You can always create the symmetric closure of a predicate as:
r_sym(X, Y) :-
    r(X, Y).
r_sym(X, Y) :-
    r(Y, X).
Using a new predicate name prevents infinite derivation chains r(X,Y) -> r(Y,X) -> r(X,Y) -> ...
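The same idea can be sketched in Python as an ordinary set operation (my own encoding of a relation as a set of pairs), mirroring how r_sym/2 is a new predicate rather than extra r/2 clauses:

```python
# Symmetric closure of a relation: keep every pair and add its mirror.
def symmetric_closure(r):
    return set(r) | {(y, x) for (x, y) in r}

married = {("a", "b")}
print(("b", "a") in symmetric_closure(married))  # True
```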

Herbrand universe and least Herbrand model

I read the question asked in Herbrand universe, Herbrand Base and Herbrand Model of binary tree (prolog) and the answers given, but I have a slightly different question more like a confirmation and hopefully my confusion will be clarified.
Let P be a program such that we have the following facts and rule:
q(a, g(b)).
q(b, g(b)).
q(X, g(X)) :- q(X, g(g(g(X)))).
From the above program, the Herbrand Universe
Up = {a, b, g(a), g(b), q(a, g(a)), q(a, g(b)), q(b, g(a)), q(b, g(b)), g(g(a)), g(g(b)), ...}
Herbrand base:
Bp = {q(s, t) | s, t ∈ Up}
Now to my question (forgive me my ignorance): I included q(a, g(a)) as an element of my Herbrand universe, but the fact states q(a, g(b)). Does that mean that q(a, g(a)) is not supposed to be there?
Also, since the Herbrand models are subsets of the Herbrand base, how do I determine the least Herbrand model by induction?
Note: I have done a lot of research on this, and some parts are clear to me, but I still have this doubt, which is why I want to seek the community's opinion. Thank you.
First, note that atoms like q(a, g(a)) belong to the Herbrand base, not to the Herbrand universe; the universe contains only the ground terms a, b, g(a), g(b), g(g(a)), .... As for the model: from having the fact q(a, g(b)) you cannot conclude whether or not q(a, g(a)) is in the model. You have to generate the model first.
To determine the model, start with the facts {q(a, g(b)), q(b, g(b))} and try to apply your rules to extend this set. In your case, however, there is no way to match the body of the rule q(X, g(X)) :- q(X, g(g(g(X)))) to the above facts. Therefore, you are done.
Now imagine the rule
q(a,g(Y)) :- q(b,Y).
This rule could be used to extend our set. In fact, the instance
q(a,g(g(b))) :- q(b,g(b)).
is used: if q(b, g(b)) is present, conclude q(a, g(g(b))). Note that we are here using the rule right-to-left. So we obtain
{q(a,g(b)), q(b,g(b)), q(a,g(g(b)))}
thereby reaching a fixpoint.
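This bottom-up step can be replayed in a short Python sketch (my own encoding: the atom q(s, t) is the tuple ("q", s, t) and the term g(t) is the tuple ("g", t)):

```python
# Replaying the bottom-up step for the rule q(a, g(Y)) :- q(b, Y).
def g(t):
    return ("g", t)

def T_P(m):
    # One application of the immediate-consequence operator for this
    # single rule: for every q(b, Y) in m, add q(a, g(Y)).
    return m | {("q", "a", g(y)) for (_, x, y) in m if x == "b"}

facts = {("q", "a", g("b")), ("q", "b", g("b"))}
m = T_P(facts)
print(("q", "a", g(g("b"))) in m)  # True: q(a, g(g(b))) was derived
print(T_P(m) == m)                 # True: the fixpoint is reached
```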
Now take, as another example, a variant of your rule, reversed and with a fresh variable Y in the body so that it can keep firing:
q(X, g(g(g(Y)))) :- q(X, g(Y)).
This permits us (I will no longer show the instantiated rules) to generate in one step:
{q(a,g(b)), q(b,g(b)), q(a,g(g(g(b)))), q(b, g(g(g(b))))}
But this is not the end, since, again, the rule can be applied to produce even more. In fact, you now have an infinite model!
{q(a, g^(2n+1)(b)), q(b, g^(2n+1)(b)) | n ≥ 0}
This right-to-left reading is often very helpful when you are trying to understand recursive rules in Prolog. The top-down reading (left-to-right) is often quite difficult, in particular, since you have to take into account backtracking and general unification.
Concerning your question:
"Also, since the Herbrand models are subsets of the Herbrand base, how do I determine the least Herbrand model by induction?"
If you have a set P of Horn clauses, a definite program, then you can define a program operator:
T_P(M) := { Hθ | θ is a ground substitution, (H :- B) in P and Bθ ⊆ M }
The least model is:
inf(P) := ⋂ { M | M |= P }
Please note that not all models of a definite program are fixpoints of the program operator. For example, the full Herbrand base is always a model of the program P, which shows that definite programs are always consistent, but it is not necessarily a fixpoint.
On the other hand, each fixpoint of the program operator is a model of the definite program: if T_P(M) = M, then one can conclude M |= P. So, after some further mathematical reasoning(*), one finds that the least fixpoint is also the least model:
lfp(T_P) = inf(P)
But we need some further considerations before we can say that the least model can be determined by a kind of computation. One easily observes that the program operator is continuous, i.e. it preserves unions of chains, since Horn clauses do not have universal quantifiers in their bodies:
union_i T_P(M_i) = T_P(union_i M_i)
So, again after some further mathematical reasoning(*), one finds that the least fixpoint can be computed by iteration, which can then be used for simple induction: every element of the least model has a derivation of finite depth:
union_i T_P^i({}) = lfp(T_P)
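This iteration can be sketched generically in Python (a toy program of my own, with an artificial depth cap so the loop terminates; a real least fixpoint may be infinite, as in the example further up):

```python
# Iterate a monotone operator t from the empty set until a fixpoint
# is reached: union_i T_P^i({}) when the lfp happens to be finite.
def lfp(t):
    m = frozenset()
    while True:
        m2 = t(m)
        if m2 == m:
            return m
        m = m2

def depth(t):
    # Nesting depth of a term encoded as nested tuples: 0, ("s", 0), ...
    return 0 if not isinstance(t, tuple) else 1 + depth(t[1])

def step(m):
    # Toy definite program: fact p(0); rule p(s(X)) :- p(X),
    # capped at depth 3 so the iteration terminates.
    new = {("p", 0)}
    new |= {("p", ("s", x)) for (_, x) in m if depth(x) < 3}
    return frozenset(m | new)

model = lfp(step)
print(len(model))  # 4: p(0), p(s(0)), p(s(s(0))), p(s(s(s(0))))
```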
Bye
(*)
Most likely you will find further hints on the exact mathematical reasoning needed in this book, but unfortunately I can't recall which sections are relevant:
Foundations of Logic Programming, John Wylie Lloyd, 1984
http://www.amazon.de/Foundations-Programming-Computation-Artificial-Intelligence/dp/3642968287

make relation transitive

A = set of actors
M ⊆ A × A
M = {(x, y) | x and y appear in the same movie}
M is reflexive
M is symmetric
M is NOT transitive
My problem is to turn the relation M into an equivalence relation, that is, to make it transitive.
It seems that you want to compute the transitive closure of a binary relation. The standard solution is the Floyd–Warshall algorithm.
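A sketch of the Floyd–Warshall approach in Python (actor names and the function name are illustrative; the reflexive and symmetric pairs of M are omitted for brevity):

```python
# Warshall-style transitive closure of a relation on a finite set.
def transitive_closure(nodes, rel):
    closure = {(x, y): ((x, y) in rel) for x in nodes for y in nodes}
    for k in nodes:          # allow k as an intermediate element
        for i in nodes:
            for j in nodes:
                if closure[(i, k)] and closure[(k, j)]:
                    closure[(i, j)] = True
    return {pair for pair, reachable in closure.items() if reachable}

actors = {"ann", "bob", "cid"}
M = {("ann", "bob"), ("bob", "cid")}
print(transitive_closure(actors, M))  # adds ("ann", "cid")
```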

Transitivity on relations

I have a question concerning proving properties of Relations.
The question is this:
How would I go about proving that, if R and S (R and S being two different relations) are transitive, then R union S is transitive?
The answer is actually FALSE, and then a counter example is given as a solution in the book.
I understand how the counterexample works as explained in the book, but what I don't understand is how exactly they arrive at the conclusion that the statement is actually false.
Basically, I can see myself giving the following proof: for all x, y, z, if (x,y) is in R and (y,z) is in R, then (x,z) is in R, since R is transitive. And if (x,y) is in S and (y,z) is in S, then (x,z) is in S, since S is transitive. Since (x,z) is then in both R and S, transitivity holds for the intersection. But why wouldn't it hold for the union of R and S as well?
Is it because the proof cannot end with "since (x,z) is in both R and S, (x,z) can be in R or S"? Basically, that a proof can't end with an OR statement?
"I understand how the counterexample works as explained in the book, but what I don't understand is how exactly they arrive at the conclusion that the statement is actually false."
Given that there's a (presumably valid) counterexample, the statement has to be false. Trying to apply your proof to the counterexample can help reveal the error.
That's not to say that it's never the case that the union of two transitive relations is itself transitive. Indeed, there are obvious examples such as the union of a transitive relation with itself or the union of less-than and less-than-or-equal-to (which is equal to less-than-or-equal-to for any reasonable definition). But the original statement asserts that this is the case for any two transitive relations. A single counterexample disproves it. If you could provide a (valid) proof of the statement, you'd have discovered a paradox. This usually causes mathematicians to reevaluate the system's axioms to remove the paradox. In this case there is no paradox.
Let T be the union of R and S (for the sake of simplicity, let's assume domain equals range and that it's the same set for both). What you are trying to prove is that if xTy and yTz, then it must be the case that xTz. As part of your proof outline, you state the following:
"if (x,y) is in R and (y,z) is in R, (x,z) is in R since R is transitive"
This is clearly true as it's just the definition of transitivity. As you point out, it can be used to prove the transitivity of the intersection of two transitive relations. However, since T is the union, there's no reason to assume that xRy; it might be that xSy only. Since you can't prove the antecedent (that xRy and yRz), the consequent (that xRz) is irrelevant. Similarly, you can't show that xSz. If you can't show that xRz or xSz, there's no reason to believe that xTz.
This suggests the sort of situation that gives a counterexample to the statement: one half of the transitive pair comes only from R and the other only from S. As a simple, contrived example, define the relations over the set {1,2,3}:
R={(1,2)}
S={(2,3)}
Clearly, both R and S are transitive (as there are no x, y, z such that xRy and yRz or xSy and ySz). On the other hand,
T={(1,2),(2,3)}
is not transitive. While 1T2 (since 1R2) and 2T3 (since 2S3), it is not the case that 1T3. Your textbook probably gives a more natural counterexample, but I feel that this gives a good understanding of what can cause the assertion to fail.
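The counterexample can be checked mechanically; a small Python sketch (the helper name is my own):

```python
# Transitivity check for a relation given as a set of pairs.
def is_transitive(rel):
    return all((x, z) in rel
               for (x, y) in rel
               for (y2, z) in rel
               if y == y2)

R = {(1, 2)}
S = {(2, 3)}
T = R | S

print(is_transitive(R), is_transitive(S))  # True True (vacuously)
print(is_transitive(T))                    # False: (1, 3) is missing
```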
