A = set of actors
M ⊆ A × A
M = {(x, y) | x and y appear in the same movie}
M is reflexive
M is symmetric
M is NOT transitive
My problem is to turn the relation M into an equivalence relation, i.e., to make it transitive.
It seems that you want to compute the transitive closure of a binary relation. The standard solution is the Floyd–Warshall algorithm.
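For illustration, here is a minimal sketch of Warshall-style transitive closure in Python (the actor names and the pair list are made up):

def transitive_closure(pairs):
    """Warshall-style transitive closure of a relation given as a set of pairs."""
    closure = set(pairs)
    nodes = {x for pair in pairs for x in pair}
    # For every intermediate node k, connect i -> j whenever i -> k and k -> j.
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if (i, k) in closure and (k, j) in closure:
                    closure.add((i, j))
    return closure

# Hypothetical co-appearance pairs between actors:
M = {("alice", "bob"), ("bob", "alice"), ("bob", "carol"), ("carol", "bob")}
print(transitive_closure(M))  # now also contains ("alice", "carol"), ("carol", "alice") and the reflexive pairs

Since M is reflexive and symmetric, its transitive closure is exactly the equivalence relation whose classes are the connected components of the co-appearance graph.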
To start, I am not quite sure how to describe this problem, and as a result I have no idea how to search for it. It is as follows:
I have a large number of equations. Each equation can be solved for any one of its variables, given the other unknowns, or several equations can be solved simultaneously for multiple unknowns. For example, given a and b and the equations f(x, y) = a and g(x, y) = b, one can solve the two simultaneously to get x and y.
I need an algorithm that takes the known values and the equations and returns the order in which to solve them so as to obtain the desired value.
Example equations:
eq1: f(a, b) = 0
eq2: g(b, c) = 0
Find c given a -> use eq1 to find b given a, then use eq2 to find c given b
Example 2:
eq1: f(x, y, a) = 0
eq2: g(x, y, b) = 0
Find x given a, b -> solve for x and y simultaneously using eq1 and eq2
I have attempted a simpler form of the problem using a graph, where the nodes are variables and the edges are equations that connect them. However, this does not account for equations with more than one unknown, and it does not consider simultaneous solving.
There are a number of steps:
1. Match equations to variables as a standard bipartite matching, with edges between each equation and the variables occurring in it (if the maximum matching is not perfect, you have problems): https://cs.stackexchange.com/questions/50410/perfect-matching-in-a-graph-and-complete-matching-in-bipartite-graph
2. Find the minimal sets of equations that must be solved simultaneously using strongly connected components: https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm
3. Those sets can then be solved in various ways; tearing is a common technique for reducing the size of each system even further; see, e.g., https://people.inf.ethz.ch/~fcellier/Lect/MMPS/Refs/tearing.pdf
A rough sketch of steps 1 and 2 follows below.
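To make steps 1 and 2 concrete, here is a rough Python sketch under simplifying assumptions (every equation can be solved for any one of its unknowns, and each matched equation determines exactly one unknown); the function name, input encoding, and error handling are invented for illustration:

def solve_order(equations, known):
    """Return blocks of equations in a valid solve order; each block is a set
    of equations that must be solved simultaneously.
    equations: {name: set of variables}; known: set of already-known variables."""
    unknowns = {v for vs in equations.values() for v in vs} - known

    # Step 1: maximum bipartite matching (Kuhn's augmenting-path algorithm)
    # between equations and the unknowns that occur in them.
    match = {}  # unknown variable -> equation chosen to determine it

    def try_assign(eq, seen):
        for v in equations[eq] & unknowns:
            if v not in seen:
                seen.add(v)
                if v not in match or try_assign(match[v], seen):
                    match[v] = eq
                    return True
        return False

    for eq in equations:
        if not try_assign(eq, set()):
            raise ValueError(f"no perfect matching: {eq} has no unknown left to determine")

    solves = {eq: v for v, eq in match.items()}  # equation -> its matched unknown

    # Step 2: eq depends on whichever equation determines each *other* unknown in eq.
    deps = {eq: {match[v] for v in equations[eq] & unknowns if v != solves[eq]}
            for eq in equations}

    # Tarjan's SCC algorithm. Each SCC is emitted only after every SCC it can
    # reach (its dependencies) has been emitted, so the output is a solve order.
    index, low, on_stack, stack, blocks = {}, {}, set(), [], []

    def connect(eq):
        index[eq] = low[eq] = len(index)
        stack.append(eq)
        on_stack.add(eq)
        for d in deps[eq]:
            if d not in index:
                connect(d)
                low[eq] = min(low[eq], low[d])
            elif d in on_stack:
                low[eq] = min(low[eq], index[d])
        if low[eq] == index[eq]:
            block = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                block.add(w)
                if w == eq:
                    break
            blocks.append(block)

    for eq in equations:
        if eq not in index:
            connect(eq)
    return blocks

# Example 1 from the question: eq1: f(a, b) = 0, eq2: g(b, c) = 0, with a known.
print(solve_order({"eq1": {"a", "b"}, "eq2": {"b", "c"}}, known={"a"}))
# -> [{'eq1'}, {'eq2'}]

# Example 2: eq1: f(x, y, a) = 0, eq2: g(x, y, b) = 0, with a and b known.
print(solve_order({"eq1": {"x", "y", "a"}, "eq2": {"x", "y", "b"}}, known={"a", "b"}))
# -> [{'eq1', 'eq2'}]  (one block: solve simultaneously)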
I am new to Prolog, and I have to implement a path consistency algorithm for the RCC8 calculus / Allen's interval algebra in Prolog.
I am struggling with basic problems. Here is a small example of my background knowledge. It is all about a given transitivity table and computing the resulting constraints between regions:
tpp(a,b).
dc(b,c).
% Composition from the transitivity table: tpp followed by dc yields dc.
dc(A,C) :- tpp(A,B), dc(B,C).
So the constraint dc(a,c) should be generated. The whole algorithm:
Input: A network T
Output: A path-consistent network
repeat
    S <- T
    for k := 1 to n do
        for i, j := 1 to n do
            Cij <- Cij INTERSECTION (Cik COMPOSITION Ckj)
until S = T
where Cij is the constraint between the nodes i and j, e.g. tpp(a,b) in the background knowledge above. So my question is: I am struggling to write a rule that computes the resulting constraint from the transitivity table. I want to start with something like this:
comp([tpp(a,b), dc(b,c)], X).
Which should store dc(a,c) in X, so that I get X = [dc(a,c)] back.
I don't know if this is the right approach. I want a rule to which I give a list of constraints, and it computes all resulting constraints. I am used to programming in Python and Java, and I am struggling to get started with this problem.
Thank you very much! :D
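For reference, since I am used to Python, here is how I understand the pseudocode (the composition table below is only a made-up two-entry fragment, not the full RCC8 table):

ALL = {"dc", "ec", "po", "tpp", "ntpp", "tppi", "ntppi", "eq"}  # RCC8 base relations

# Fragment of a composition table: maps a pair of base relations to the set of
# possible compositions. Only two entries are shown for illustration.
COMP = {
    ("tpp", "dc"): {"dc"},
    ("tpp", "tpp"): {"tpp", "ntpp"},
}

def compose(cs1, cs2):
    """Compose two constraints (sets of base relations) via the table.
    Pairs missing from the fragment are treated as unconstrained."""
    out = set()
    for r1 in cs1:
        for r2 in cs2:
            out |= COMP.get((r1, r2), ALL)
    return out

def path_consistency(nodes, c):
    """Refine the network c[(i, j)] until a fixpoint is reached, mirroring
    the pseudocode: Cij <- Cij INTERSECTION (Cik COMPOSITION Ckj)."""
    changed = True
    while changed:
        changed = False
        for k in nodes:
            for i in nodes:
                for j in nodes:
                    refined = c[(i, j)] & compose(c[(i, k)], c[(k, j)])
                    if refined != c[(i, j)]:
                        c[(i, j)] = refined
                        changed = True
    return c

# The network from above: tpp(a,b), dc(b,c); everything else unconstrained.
# (A real network would also fix c[(i, i)] = {"eq"}.)
nodes = ["a", "b", "c"]
c = {(i, j): set(ALL) for i in nodes for j in nodes}
c[("a", "b")] = {"tpp"}
c[("b", "c")] = {"dc"}
print(path_consistency(nodes, c)[("a", "c")])  # {'dc'}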
I have Boolean formulas A and B and want to check whether "A -> B" (A implies B) is valid, i.e. true under every assignment, in polynomial time.
For fully general formulas A and B, this is co-NP-complete, because "A -> B is valid" is the same as "not (A -> B) is not satisfiable".
My goal is to find useful restrictions such that polynomial time verification is possible. I would also be interested in finding O(n) or O(n log n) restrictions (n is some kind of length |A| or |B|). I would prefer to restrict B rather than A.
In general, I know of the following classes of "easier" boolean formulas:
(Renamable) Horn formulas can be solved in linear time (they are CNF formulas with at most one positive literal per clause); see the unit-propagation sketch after this list.
Formulas in DNF are trivial to check for satisfiability.
2-SAT instances are CNF formulas with at most 2 literals per clause, solvable in linear time.
XOR-SAT instances are CNF-like formulas with XOR instead of OR. They can be solved via Gaussian elimination in O(n^3).
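To illustrate the first class, here is a rough unit-propagation sketch for Horn formulas in Python (a naive fixpoint loop for brevity; the counter-based variant of the same propagation is the linear-time one, and the clause encoding is invented for illustration):

def horn_sat(clauses):
    """Horn satisfiability via unit propagation. Each clause is (pos, neg):
    pos is a list with at most one variable (the positive literal, possibly empty),
    neg is the list of negated variables."""
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for pos, neg in clauses:
            if set(neg) <= true_vars:      # all negative literals are falsified...
                if not pos:                # ...and there is no positive literal: conflict
                    return None            # unsatisfiable
                if pos[0] not in true_vars:
                    true_vars.add(pos[0])  # the clause forces its positive variable
                    changed = True
    return true_vars  # minimal model; all other variables can be set to false

# x, (x -> y), (y and z -> w)  ==  x, (not x or y), (not y or not z or w)
print(sorted(horn_sat([(["x"], []), (["y"], ["x"]), (["w"], ["y", "z"])])))  # ['x', 'y']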
The main problem is that I have the formula "A -> B", i.e. "(not A) or B", which quickly becomes neither CNF nor DNF for non-trivial A and B.
If I understand the Tseytin transformation correctly, I can transform any formula X into an equisatisfiable CNF formula Y with |Y| = O(|X|), so I can assume, if I want, that my formula is in CNF.
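As a rough sketch of how I understand Tseytin on AND/OR/NOT formula trees (the encoding below is invented for illustration; each gate introduces one fresh variable and O(1) clauses, which is what keeps the CNF linear in |X|):

import itertools

def tseytin(formula):
    """Return (root_literal, clauses): a CNF equisatisfiable with the input.
    Formulas are nested tuples: ('var', name), ('not', f), ('and', f, g), ('or', f, g).
    A literal is (name, polarity); a clause is a list of literals."""
    clauses = []
    fresh = (f"_t{i}" for i in itertools.count())

    def neg(lit):
        return (lit[0], not lit[1])

    def walk(f):
        if f[0] == "var":
            return (f[1], True)
        if f[0] == "not":                    # fold negation into the literal
            return neg(walk(f[1]))
        a, b, g = walk(f[1]), walk(f[2]), (next(fresh), True)
        if f[0] == "and":                    # g <-> (a and b)
            clauses.extend([[neg(g), a], [neg(g), b], [g, neg(a), neg(b)]])
        else:                                # g <-> (a or b)
            clauses.extend([[neg(g), a, b], [g, neg(a)], [g, neg(b)]])
        return g

    root = walk(formula)
    clauses.append([root])                   # assert the formula itself
    return root, clauses

# (x and not y) or z
root, cnf = tseytin(("or", ("and", ("var", "x"), ("not", ("var", "y"))), ("var", "z")))
print(cnf)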
There is some low-hanging fruit:
if |B| is constant and small, I could enumerate all falsifying assignments of B and check whether any of them makes A true.
similarly, if |A| is constant and small, I could enumerate all solutions of A and check whether any of them makes B false.
More interestingly:
if B is in DNF, then I can convert A to CNF, which makes "(not A) or B" a DNF formula, and that can be checked in linear time.
For general B, if |B| is in O(log |A|), I could convert B to DNF and handle it the same way.
However, I'm not sure how I can use the other easier classes or if it is possible at all.
Due to distributivity, for A or B in CNF, bringing "(not A) or B" back into CNF will almost certainly blow up exponentially - if I'm not mistaken.
Note: my use case probably has more complex/longer A than B formulas.
So my question boils down to: are there useful classes of Boolean formulas A and B such that "A -> B" can be proven in polynomial (preferably linear) time, apart from the four cases that I already mentioned?
EDIT: A different take on this: under what conditions on A and B is "A -> B" in one of the following classes:
in DNF
in CNF and a Horn formula (Horn-SAT)
in CNF with at most two literals per clause (2-SAT)
in CNF made of XOR clauses (XOR-SAT)
I am having trouble finding examples of transitive closures of relations that are not equivalence relations.
Any transitive relation is its own transitive closure, so just think of small transitive relations to find a counterexample. Let your set be {a, b, c} with the relation {(a,b), (b,c), (a,c)}. This relation is transitive, but because pairs like (a,a) are missing, it is not reflexive and hence not an equivalence relation.
Even more trivially: start with any nonempty set and define the empty relation on it. That relation is vacuously transitive, and even vacuously symmetric, but it is not an equivalence relation because it is missing the pairs that would make it reflexive.
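If you want to sanity-check such examples mechanically, a throwaway brute-force test in Python does the job (the helper names are invented):

def is_transitive(rel):
    return all((x, z) in rel for x, y1 in rel for y2, z in rel if y1 == y2)

def is_reflexive(rel, elems):
    return all((x, x) in rel for x in elems)

R = {("a", "b"), ("b", "c"), ("a", "c")}
print(is_transitive(R), is_reflexive(R, "abc"))          # True False
print(is_transitive(set()), is_reflexive(set(), "abc"))  # True False (empty relation)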
I have a question concerning proving properties of relations.
The question is this:
How would I go about proving that if R and S (two different relations on the same set) are both transitive, then R union S is transitive?
The answer is actually FALSE, and a counterexample is given as the solution in the book.
I understand how the counterexample works as explained in the book, but what I don't understand is, how exactly they arrive to the conclusion that the statement is actually false.
Basically, I can see myself giving a proof along the following lines: for all x, y, z, if (x,y) is in R and (y,z) is in R, then (x,z) is in R, since R is transitive. And if (x,y) is in S and (y,z) is in S, then (x,z) is in S, since S is transitive. Since (x,z) is in both R and S, the statement holds for the intersection. But why wouldn't it hold for the union of R and S as well?
Is it because the proof cannot end with "since (x,z) is in both R and S, (x,z) is in R or in S"? Basically, that a proof can't end with an OR statement?
I understand how the counterexample works as explained in the book, but what I don't understand is, how exactly they arrive to the conclusion that the statement is actually false.
Given that there's a (presumably valid) counterexample, the statement has to be false. Trying to apply your proof to the counterexample can help reveal the error.
That's not to say that it's never the case that the union of two transitive relations is itself transitive. Indeed, there are obvious examples such as the union of a transitive relation with itself or the union of less-than and less-than-or-equal-to (which is equal to less-than-or-equal-to for any reasonable definition). But the original statement asserts that this is the case for any two transitive relations. A single counterexample disproves it. If you could provide a (valid) proof of the statement, you'd have discovered a paradox. This usually causes mathematicians to reevaluate the system's axioms to remove the paradox. In this case there is no paradox.
Let T be the union of R and S (for the sake of simplicity, let's assume domain equals range and that it's the same set for both). What you are trying to prove is that if xTy and yTz, then it must be the case that xTz. As part of your proof outline, you state the following:
if (x,y) is in R and (y,z) is in R, (x, z) is in R since R is transitive
This is clearly true as it's just the definition of transitivity. As you point out, it can be used to prove the transitivity of the intersection of two transitive relations. However, since T is the union, there's no reason to assume that xRy; it might be that xSy only. Since you can't prove the antecedent (that xRy and yRz), the consequent (that xRz) is irrelevant. Similarly, you can't show that xSz. If you can't show that xRz or xSz, there's no reason to believe that xTz.
This suggests the sort of situation that gives a counterexample to the statement: one where the first half of the transitive pair comes only from R and the other half comes only from S. As a simple, contrived example, define the following relations over the set {1,2,3}:
R={(1,2)}
S={(2,3)}
Clearly, both R and S are transitive (as there are no x, y, z such that xRy and yRz or xSy and ySz). On the other hand,
T={(1,2),(2,3)}
is not transitive. While 1T2 (since 1R2) and 2T3 (since 2S3), it is not the case that 1T3. Your textbook probably gives a more natural counterexample, but I feel that this gives a good understanding of what can cause the assertion to fail.
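If it helps, the counterexample can also be verified mechanically; a throwaway check in Python (the helper name is invented):

def is_transitive(rel):
    return all((x, z) in rel for x, y1 in rel for y2, z in rel if y1 == y2)

R, S = {(1, 2)}, {(2, 3)}
T = R | S
print(is_transitive(R), is_transitive(S))  # True True (vacuously)
print(is_transitive(T))                    # False: 1T2 and 2T3 but not 1T3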