Can anyone explain the algorithm for the 2-satisfiability problem, or provide links to a good explanation? I could not find good resources to understand it.
If you have n variables and m clauses:
Create a graph with 2n vertices: intuitively, each vertex represents a true or negated literal for each variable. For each clause (a v b), where a and b are literals, create an edge from !a to b and from !b to a. These edges mean that if a is not true, then b must be true, and vice versa.
Once this digraph is created, go through the graph and see if there is a cycle that contains both a and !a for some variable a. If there is, then the 2SAT instance is not satisfiable (because a implies !a and vice versa). Otherwise, it is satisfiable, and this can even give you a satisfying assignment (pick some literal a such that a doesn't imply !a, force all implications from there, repeat). You can do this part with any standard reachability algorithm, such as breadth-first search or Floyd-Warshall, or in linear time with a strongly-connected-components algorithm (Tarjan's or Kosaraju's), depending on how sensitive you are to the time complexity of your algorithm.
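As a concrete illustration, here is a minimal Python sketch of this construction (the clause encoding and names are my own; repeated DFS reachability is used for clarity, where a real implementation would run Tarjan's or Kosaraju's SCC algorithm once for linear time):

def two_sat_satisfiable(n_vars, clauses):
    # Literal encoding: variable x_i (1-based) is +i, its negation is -i.
    # Node for literal l: 2*(abs(l)-1), plus 1 if the literal is negative.
    def node(l):
        return 2 * (abs(l) - 1) + (1 if l < 0 else 0)

    # Implication graph: clause (a v b) adds edges !a -> b and !b -> a.
    adj = [[] for _ in range(2 * n_vars)]
    for a, b in clauses:
        adj[node(-a)].append(node(b))
        adj[node(-b)].append(node(a))

    def reaches(src, dst):
        # Plain iterative DFS; one SCC pass would answer all queries in O(n+m).
        seen, stack = {src}, [src]
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return False

    # Unsatisfiable iff some x and !x imply each other.
    return not any(reaches(2 * i, 2 * i + 1) and reaches(2 * i + 1, 2 * i)
                   for i in range(n_vars))

print(two_sat_satisfiable(2, [(1, 2), (-1, 2), (-2, 1), (-1, -2)]))  # False
print(two_sat_satisfiable(2, [(1, 2), (-1, 2)]))                     # True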
You can solve it with a greedy approach, or using graph theory. Here is a link which explains the solution using graph theory:
http://www.cs.tau.ac.il/~safra/Complexity/2SAT.ppt
Here is the Wikipedia page on the subject, which describes a polynomial-time algorithm: https://en.wikipedia.org/wiki/2-satisfiability. (The brute-force algorithm of just trying all the different truth assignments is exponential time.) Maybe a bit of further explanation will help.
The expression "if P then Q" is only false when P is true and Q is false. So the expression has the same truth table values as "Q or not P". It is also equivalent to its contrapositive, "if not Q then not P", and that in turn is equivalent to "not P or Q" (the same as the other one).
So the algorithm involves replacing every expression of the form "A or B" with the two expressions "if not A then B" and "if not B then A". (Putting it another way, A and B can't both be false.)
Next, construct a graph representing these implications. Create nodes for each "A" and "not A", and add links for each of the implications obtained above.
The last step is to make sure that none of the variables is equivalent to its own negation. That is, for each variable A (or not A), follow the links to discover all the nodes that can be reached from it, taking care to detect loops.
If one of the variables, A, can reach "not A", and "not A" can also reach A, then the original expression is not satisfiable. (It is a paradox.) If none of the variables do this, then it is satisfiable.
(It's okay if A implies "not A", but not the other way around. That just means that A must be negated to satisfy the expression.)
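For example, take "(A or B) and (not A or B)". The clauses give the implications "if not A then B", "if not B then A", "if A then B", and "if not B then not A". In the resulting graph, "not B" reaches B (via A), but B reaches nothing, so no variable reaches its own negation in both directions: the expression is satisfiable, and the one-way implication from "not B" to B tells you B must be set true.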
2-satisfiability: if x and !x are strongly connected (each reachable from the other), then from !x we can reach x, and from x we can reach !x. So, in the case of x, we have only two options:
1. taking x, which leads to !x
2. rejecting x (taking !x), which leads to x
Both choices lead to the paradox of taking and rejecting a choice at the same time, so satisfiability is impossible.
I have boolean formulas A and B and want to check if "A -> B" (A implies B) is true in polynomial time.
For fully general formulas A and B, this is co-NP-complete, because "A -> B holds for every assignment" is the same as "not (A -> B) is unsatisfiable".
My goal is to find useful restrictions such that polynomial time verification is possible. I would also be interested in finding O(n) or O(n log n) restrictions (n is some kind of length |A| or |B|). I would prefer to restrict B rather than A.
In general, I know of the following classes of "easier" boolean formulas:
(renamable) Horn formulas can be solved in linear time (they are CNF formulas with at most one positive literal per clause).
All formulas in DNF are trivial to check for satisfiability (see the sketch after this list).
2-SAT formulas are CNF formulas with at most 2 literals per clause, solvable in linear time.
XOR-SAT formulas are CNF formulas with XOR instead of OR. They can be solved via Gaussian elimination in O(n^3).
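For the DNF case, the check is just scanning each term for a complementary pair of literals; a minimal Python sketch (the encoding and names are mine):

def dnf_satisfiable(terms):
    # terms: list of terms; each term is a list of integer literals (+i / -i).
    # A DNF formula is satisfiable iff some term has no complementary pair.
    for term in terms:
        lits = set(term)
        if not any(-l in lits for l in lits):
            return True  # this term describes a satisfying partial assignment
    return False

print(dnf_satisfiable([[1, -2], [1, -1]]))  # True, via the first term
print(dnf_satisfiable([[1, -1]]))           # False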
The main problem is that I have the formula "A -> B" aka "(not A) or B", which quickly becomes non-CNF and non-DNF for non-trivial A/B.
If I understand the Tseytin transformation correctly, then I can transform any formula X into an equisatisfiable CNF formula Y with |Y| = O(|X|), thus I can assume - if I want - that I have my formula in CNF.
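For example, for X = (p and q) or r, Tseytin introduces t1 for (p and q) and t2 for (t1 or r), producing the clauses (not t1 or p), (not t1 or q), (t1 or not p or not q), (not t2 or t1 or r), (t2 or not t1), (t2 or not r), plus the unit clause (t2) - linear in |X| and equisatisfiable with it.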
There is some low-hanging fruit:
if |B| is constant and small, I could enumerate all assignments that falsify B and check that each of them also falsifies A.
similarly, if |A| is constant and small, I could enumerate all assignments that satisfy A and check that each of them also satisfies B.
More interestingly:
if B is in DNF, then I can convert A to CNF, which makes "(not A) or B" a DNF formula, which is solvable in linear time.
For general B, if |B| is in O(log |A|), I could convert B to DNF and solve it that way
However, I'm not sure how I can use the other easier classes or if it is possible at all.
Due to distributivity, an A or B in CNF will almost certainly blow up exponentially when trying to bring "(not A) or B" back to CNF - if I'm not mistaken.
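For instance, distributing "(a and b) or (c and d)" into CNF already yields four clauses: (a or c), (a or d), (b or c), (b or d); with k two-literal disjuncts, distribution yields 2^k clauses.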
Note: my use case probably has more complex/longer A than B formulas.
So my question boils down to: are there useful classes of boolean formulas A and B such that "A -> B" can be proven in polynomial (preferably linear) time - apart from the 4 cases that I already mentioned?
EDIT: A different take on this: under what conditions on A and B is "A -> B" in one of the following classes:
in DNF
in CNF and a Horn formula (Horn-SAT)
in CNF and a binary formula (2-SAT)
in CNF and an affine formula (a CNF of XOR clauses, i.e. XOR-SAT)
I have a question regarding Knights and Knaves and propositional logic. I want to solve a puzzle in which there are two kinds of citizens: Knights, who always tell the truth, and Knaves, who always lie. On the basis of utterances from some citizens, I must decide what kind they are.
There are three citizens, a, b and c, who are talking about themselves:
a says: ”All of us are knaves.”
b says: ”Exactly one of us is a knight.”
To solve the puzzle I should determine: What kinds of citizens are a, b and c? I should solve the puzzle by modelling the two utterances above using propositional logic, and I assume that I can use p to describe a knight and ¬p to describe a knave. How would I go about doing that? Any hint for someone who hasn't done any noticeable discrete mathematics in college?
A and C are Knaves. B is a Knight.
If A is a Knight, "All of us are knaves" is true. So, A would also be a Knave. This is a contradiction. Hence, A is a Knave.
If B is a Knave, then "Exactly one of us is a knight" is false, meaning that either none of them or two or more of them are Knights. Two or more is impossible, since A and B would both be Knaves, leaving C as the only candidate. None at all is impossible too: if all three were Knaves, A's statement "All of us are knaves" would be true, contradicting A being a Knave. So, B is a Knight.
We just showed that B is a Knight. So, he himself is the only Knight he is talking about. So, C is a Knave.
Now, I don't think you can model this argument in propositional logic. For one thing, notice the universal and existential quantifiers ("All" and "Exactly one") in the statements "All of us are knaves" and "Exactly one of us is a knight". For another, notice that A and B are talking about themselves. Modeling situations like this is one of the hardest problems in history (not kidding!). Look at the following links for more info:
https://en.wikipedia.org/wiki/Liar_paradox
https://en.wikipedia.org/wiki/Self-reference
You can create a truth table. Even at first glance, A must be a Knave and B a Knight: if A were a Knight, he couldn't say he's a Knave (that would be a lie), and he also can't be right that all of them are Knaves (a Knight can't utter a falsehood), so A is a Knave. If B were a Knave, his statement would have to be false, but it is in fact true, so B is a Knight. That leaves C a Knave.
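Since there are only 2^3 = 8 possible assignments, the truth table is easy to generate mechanically. A minimal Python sketch (the encoding True = knight, False = knave and the variable names are my own):

from itertools import product

for a, b, c in product([True, False], repeat=3):
    says_a = (not a) and (not b) and (not c)  # "All of us are knaves."
    says_b = [a, b, c].count(True) == 1       # "Exactly one of us is a knight."
    # A speaker is a knight exactly when his statement is true:
    if a == says_a and b == says_b:
        print("a:", "knight" if a else "knave",
              "b:", "knight" if b else "knave",
              "c:", "knight" if c else "knave")

This prints the single consistent assignment: a and c are knaves, b is a knight.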
I have a relation f defined as f: A -> B × C. How can I write a first-order formula that constrains this relation to be a bijective function from A to B × C?
To be more precise, I would like the first-order counterpart of the following formulae (actually the conjunction of the three):
∀a: A, ∃! bc : B × C, f(a)=bc -- f is function
∀a1,a2: A, f(a1)=f(a2) → a1=a2 -- f is injective
∀bc : B × C, ∃ a : A, f(a)=bc -- f is surjective
As you see the above formulae are in Higher Order Logic as I quantified over the relations. What is the first-order logic equivalent of these formulae if it is ever possible?
PS:
This is a more general (math) question rather than being specific to any theorem prover, but to get help from these communities --as I think there is a mature understanding of mathematics in them-- I put the theorem-provers tag on this question.
(Update: Someone's unhappy with my answer, and SO gets me fired up in general, so I say what I want here, and will probably delete it later, I suppose.
I understand that SO is not a place for debates and soapboxes. On the other hand, the OP, qartal, whom I assume is the unhappy one, wants to apply the answer from math.stackexchange.com, where ZFC sets dominates, to a question here which is tagged, at this moment, with isabelle and logic.
First, notation is important, and sloppy notation can result in a question that's ambiguous to the point of being meaningless.
Second, having a B.S. in math, I have full appreciation for the logic of ZFC sets, so I have full appreciation for math.stackexchange.com.
I make the argument here that the answer given on math.stackexchange.com, linked to below, is wrong in the context of Isabelle/HOL. (First hmmm, me making claims under ill-defined circumstances can be annoying to people.)
If I'm wrong, and someone teaches me something, the situation here will be redeemed.)
The answerer says this:
First of all in logic B x C is just another set.
There's not just one logic. My immediate reaction when I see the symbol × is to think of a type, not a set. Consider this, which kind of looks like your f: A -> B×C:
definition foo :: "nat => int × real" where "foo x = (x,x)"
I guess I should be proficient in going back and forth between sets and types, and in reading minds, but I did learn something by entering this term:
term "B × C" (* shows it's of type "('a × 'b) set" *)
Feeling paranoid, I did this to see if I had fallen into a major gotcha:
term "f : A -> B × C"
It gives a syntax error. Here I am, getting all pedantic, and our discussion is ill-defined because the notation is ill-defined.
The crux: the formula in the other answer is not first-order in this context
(Another hmmm, after writing what I say below, I'm full circle. Saying things about stuff when the context of the stuff is ill-defined.)
Context is everything. The context of the other site is generally ZFC sets. Here, it's HOL. That answerer says to assume these for his formula, which I give below:
Ax is true iff x∈A
Bx is true iff x∈B×C
Rxy is true iff f(x)=y
Syntax. No one has defined it here, but the tag here is isabelle, so I take it to mean that I can substitute the left-hand side of the iff for the right-hand side.
Also, the expression x ∈ A is what would be in the formula in a typical set theory textbook, not Rxy. Therefore, for the answerer's formula to have meaning, I can rightfully insert f(x) = y into it.
This then is why I did a lot of hedging in my first answer. The variable f cannot be in the formula. If it's in the formula, then it's a free variable which is implicitly quantified. Here's the formula in Isar syntax:
term "∀x. (Ax --> (∃y. By ∧ Rxy ∧ (∀z. (Bz ∧ Rxz) --> y = z)))"
Here it is with the substitutions:
∀x. (x∈A --> (∃y. y∈B×C ∧ f(x)=y ∧ (∀z. (z∈B×C ∧ f(x)=z) --> y = z)))
In HOL, f(x) = f x, and so f is implicitly, universally quantified. If this is the case, then it's not first-order.
Really, I should dig deep to recall what I was taught, that f(x)=y means:
(x,f(x)) = (x,y), which means we have to have (x,y) ∈ A × (B×C)
which finally gets me:
∀x. (x∈A -->
(∃y. y∈B×C ∧ (x,y)∈A×(B×C) ∧ (∀z. (z∈B×C ∧ (x,z)∈A×(B×C)) --> y = z)))
Finally, I guess it turns out that in the context of math.stackexchange.com, it's 100% on.
Am I the only one who feels compulsive about questioning what this means in the context of Isabelle/HOL? I don't accept that everything here is defined well enough to show that it's first order.
Really, qartal, your notation should be specific to a particular logic.
First answer
With Isabelle, I answer the question based on my interpretation of your f: A -> B x C, which I take as a ZFC set, in particular a subset of the Cartesian product A x (B x C). You're sort of mixing notation from the two logics, that of ZFC sets and that of HOL. Consequently, I might be off on what I think you're asking.
You don't define your relation, so I keep things simple. I define a simple ZFC function and prove the first part of your first condition, that f is a function. The second part would be proving uniqueness. It can be seen that f satisfies that, so once a formula for uniqueness is stated correctly, auto might easily prove it.
Please notice that the theorem is a first-order formula. The characters ! and ? are ASCII equivalents for \<forall> and \<exists>. (Clarifications must abound when working with HOL. It's first-order logic if the variables are atomic. In this case, the type of the variables is numeral. The basic concept is there. That I'm wrong in some detail is highly likely.)
definition "A = {1,2}"
definition "B = A"
definition "C = A"
definition "f = {(1,(1,1)), (2,(1,1))}"
theorem
"!a. a \<in> A --> (? z. z \<in> (B × C) & (a,z) \<in> f)"
by(auto simp add: A_def B_def C_def f_def)
(To completely give you an example of what you asked for, I would have to redefine my function so it's bijective. Little examples can take a ton of work.)
That's the basic idea, and the rest of proving that f is a function will follow that basic pattern.
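For instance, the uniqueness half might be stated in the same ASCII style as the sketch below (my own guess at the formula; I haven't verified that auto closes it exactly as written):

theorem
"!a. a \<in> A --> (? z. z \<in> (B × C) & (a,z) \<in> f & (! w. (a,w) \<in> f --> w = z))"
by(auto simp add: A_def B_def C_def f_def)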
If there's a problem, it's that your f is a ZFC set function/relation, and the logical infrastructure of Isabelle/HOL is set up for functions as a type. Functions as ordered pairs, ZFC style, can be formalized in Isabelle/HOL, but it hasn't been done in a reasonably complete way.
Generalizing it all is where the work would be. For a particular relation, as I defined above, I can limit myself to first-order formulas, if I ignore that the foundation, Isabelle/HOL, is, of course, higher-order logic.
I have a question concerning proving properties of Relations.
The question is this:
How would I go about proving that, if R and S (R and S being two different relations) are transitive, then R union S is transitive?
The answer is actually FALSE, and then a counter example is given as a solution in the book.
I understand how the counterexample works as explained in the book, but what I don't understand is how exactly they arrive at the conclusion that the statement is actually false.
Basically, I can see myself giving a proof that, for all values x, y, z: if (x,y) is in R and (y,z) is in R, then (x,z) is in R, since R is transitive; and if (x,y) is in S and (y,z) is in S, then (x,z) is in S, since S is transitive. Since (x,z) is in both R and S, the claim holds for the intersection. But why wouldn't it hold for the union of R and S as well?
Is it because the proof cannot be ended with "since (x,z) is in both R and S, (x,z) can be in R or S"? Basically, that a proof can't be ended with an OR statement at the end?
I understand how the counterexample works as explained in the book, but what I don't understand is how exactly they arrive at the conclusion that the statement is actually false.
Given that there's a (presumably valid) counterexample, the statement has to be false. Trying to apply your proof to the counterexample can help reveal the error.
That's not to say that it's never the case that the union of two transitive relations is itself transitive. Indeed, there are obvious examples such as the union of a transitive relation with itself or the union of less-than and less-than-or-equal-to (which is equal to less-than-or-equal-to for any reasonable definition). But the original statement asserts that this is the case for any two transitive relations. A single counterexample disproves it. If you could provide a (valid) proof of the statement, you'd have discovered a paradox. This usually causes mathematicians to reevaluate the system's axioms to remove the paradox. In this case there is no paradox.
Let T be the union of R and S (for the sake of simplicity, let's assume domain equals range and that it's the same set for both). What you are trying to prove is that if xTy and yTz, then it must be the case that xTz. As part of your proof outline, you state the following:
if (x,y) is in R and (y,z) is in R, (x, z) is in R since R is transitive
This is clearly true as it's just the definition of transitivity. As you point out, it can be used to prove the transitivity of the intersection of two transitive relations. However, since T is the union, there's no reason to assume that xRy; it might be that xSy only. Since you can't prove the antecedent (that xRy and yRz), the consequent (that xRz) is irrelevant. Similarly, you can't show that xSz. If you can't show that xRz or xSz, there's no reason to believe that xTz.
This suggests the sort of situation that gives a counterexample to the statement: when one half of the transitive chain comes only from R and the other half comes only from S. As a simple, contrived example, define the relations over the set {1,2,3}:
R={(1,2)}
S={(2,3)}
Clearly, both R and S are transitive (as there are no x, y, z such that xRy and yRz or xSy and ySz). On the other hand,
T={(1,2),(2,3)}
is not transitive. While 1T2 (since 1R2) and 2T3 (since 2S3), it is not the case that 1T3. Your textbook probably gives a more natural counterexample, but I feel that this gives a good understanding of what can cause the assertion to fail.
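If you want to check such examples mechanically, here is a minimal Python sketch (the helper name is mine); it tests the definition of transitivity directly:

def is_transitive(rel):
    # rel is a set of ordered pairs; check the definition directly
    return all((x, z) in rel
               for (x, y1) in rel
               for (y2, z) in rel
               if y1 == y2)

R = {(1, 2)}
S = {(2, 3)}
print(is_transitive(R), is_transitive(S), is_transitive(R | S))
# prints: True True False -- the union loses transitivity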
I want to use the destruct tactic to prove a statement by cases. I have read a couple of examples online and I'm confused. Could someone explain it better?
Here is a small example (there are other ways to solve it but try using destruct):
Inductive three := zero
| one
| two.
Lemma has2b2: forall a:three, a<>zero /\ a<>one -> a=two.
Now some examples online suggest doing the following:
intros. destruct a.
In which case I get:
3 subgoals
H : zero <> zero /\ zero <> one
______________________________________(1/3)
zero = two
______________________________________(2/3)
one = two
______________________________________(3/3)
two = two
So, I want to prove that the first two cases are impossible. But the machine lists them as subgoals and wants me to PROVE them... which is impossible.
Summary:
How exactly do I discard the impossible cases?
I have seen some examples using inversion but I don't understand the procedure.
Finally, what happens if my lemma depends on several inductive types and I still want to cover ALL cases?
How to discard impossible cases? Well, it's true that the first two conclusions are impossible to prove on their own, but note that they come with contradictory assumptions (zero <> zero and one <> one, respectively). So you will be able to prove those goals with tauto (there are also more primitive tactics that will do the trick, if you are interested).
inversion is a more advanced version of destruct. In addition to 'destructing' the inductive value, it will sometimes generate some equalities (that you may need). It is in turn a simpler version of induction, which will additionally generate an induction hypothesis for you.
If you have several inductive types in your goal, you can destruct/invert them one by one.
More detailed walk-through:
Inductive three := zero | one | two .
Lemma test : forall a, a <> zero /\ a <> one -> a = two.
Proof.
intros a H.
destruct H. (* to get two parts of conjunction *)
destruct a. (* case analysis on 'a' *)
(* low-level proof *)
compute in H. (* to see through the '<>' notation *)
elimtype False. (* meaning: assumptions are contradictory, I can prove False from them *)
apply H.
reflexivity.
(* can as well be handled with more high-level tactics *)
firstorder.
(* the "proper" case *)
reflexivity.
Qed.
If you see an impossible goal, there are two possibilities: either you made a mistake in your proof strategy (perhaps your lemma is wrong), or the hypotheses are contradictory.
If you think the hypotheses are contradictory, you can set the goal to False, to get a little complexity out of the way. elimtype False achieves this. Often, you prove False by proving a proposition P and its negation ~P; the tactic absurd P deduces any goal from P and ~P. If there's a particular hypothesis which is contradictory, contradict H will set the goal to ~H, or if the hypothesis is a negation ~A then the goal will be A (stronger than ~ ~A but usually more convenient). If one particular hypothesis is obviously contradictory, contradiction H or just contradiction will prove any goal.
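As a tiny illustration of contradict (my own toy example, not from the question):

Goal forall n : nat, n <> n -> n = 0.
Proof.
  intros n H.
  contradict H. (* H : n <> n is a negation, so the goal becomes n = n *)
  reflexivity.
Qed.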
There are many tactics involving hypotheses of inductive types. Figuring out which one to use is mostly a matter of experience. Here are the main ones (but you will run into cases not covered here soon):
destruct simply breaks down the hypothesis into several parts. It loses information about dependencies and recursion. A typical example is destruct H where H is a conjunction H : A /\ B, which splits H into two independent hypotheses of types A and B; or dually destruct H where H is a disjunction H : A \/ B, which splits the proof into two different subproofs, one with the hypothesis A and one with the hypothesis B.
case_eq is similar to destruct, but retains the connections that the hypothesis has with other hypotheses. For example, destruct n where n : nat breaks the proof into two subproofs, one for n = 0 and one for n = S m. If n is used in other hypotheses (i.e. you have a H : P n), you may need to remember that the n you've destructed is the same n used in these hypotheses: case_eq n does this.
inversion performs a case analysis on the type of a hypothesis. It is particularly useful when there are dependencies in the type of the hypothesis that destruct would forget. You would typically use case_eq on hypotheses in Set (where equality is relevant) and inversion on hypotheses in Prop (which have very dependent types). The inversion tactic leaves a lot of equalities behind, and it's often followed by subst to simplify the hypotheses. The inversion_clear tactic is a simple alternative to inversion; subst but loses a little information.
induction means that you are going to prove the goal by induction (= recursion) on the given hypothesis. For example, induction n where n : nat means that you'll perform integer induction and prove the base case (n replaced by 0) and the inductive case (n replaced by m+1).
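For instance, a standard use of induction (my own example, to contrast with the destruct examples above):

Goal forall n : nat, n + 0 = n.
Proof.
  induction n.
  - reflexivity. (* base case: 0 + 0 = 0 *)
  - simpl. rewrite IHn. reflexivity. (* inductive case uses the hypothesis IHn *)
Qed.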
Your example is simple enough that you can prove it as “obvious by case analysis on a”.
Lemma has2b2: forall a:three, a<>zero/\a<>one ->a=two.
Proof. destruct a; tauto. Qed.
But let's look at the cases generated by the destruct tactic, i.e. after just intros; destruct a.. (The case where a is one is symmetric; the last case, where a is two, is obvious by reflexivity.)
H : zero <> zero /\ zero <> one
============================
zero = two
The goal looks impossible. We can tell this to Coq, and here it can spot the contradiction automatically (zero=zero is obvious, and the rest is a propositional tautology handled by the tauto tactic).
elimtype False. tauto.
In fact, tauto works even if you don't start by telling Coq not to worry about the goal, i.e. if you write tauto without the elimtype False first (IIRC it didn't in older versions of Coq). You can see what Coq is doing with the tauto tactic by writing info tauto: Coq will tell you what proof script the tauto tactic generated. It's not very easy to follow, so let's look at a manual proof of this case. First, let's split the hypothesis (which is a conjunction) into two.
destruct H as [H0 H1].
We now have two hypotheses, one of which is zero <> zero. This is clearly false, because it's the negation of zero = zero which is clearly true.
contradiction H0. reflexivity.
We can look in even more detail at what the contradiction tactic does. (info contradiction would reveal what happens behind the scenes, but again it's not novice-friendly.) We claim that the goal is true because the hypotheses are contradictory, so we can prove anything. So let's set the intermediate goal to False.
assert (F : False).
Run red in H0. to see that zero <> zero is really notation for ~(zero=zero) which in turn is defined as meaning zero=zero -> False. So False is the conclusion of H0:
apply H0.
And now we need to prove that zero=zero, which is
reflexivity.
Now we've proved our assertion of False. What remains is to prove that False implies our goal. Well, False implies any goal; that's its definition (False is defined as an inductive type with zero constructors).
destruct F.