When attempting to solve logic problems on a computer, it is usual to first convert them to CNF, because the best solving algorithms expect CNF as input.
For propositional logic, the textbook rules for this conversion are simple, but if you apply them as is, the result is one of the very rare cases where a program encounters double exponential resource consumption without being specifically constructed to do so:
a <=> (b <=> (c <=> ...))
with N variables, generates 2^(2^N) clauses: one exponential blowup in the conversion of equivalence to AND/OR, and another in the distribution of OR over AND.
The solution to this is to rename subterms. If we rewrite the above as something like
r <=> (c <=> ...)
a <=> (b <=> r)
where r is a fresh symbol defined to be equal to the renamed subterm (in general, we may need O(N) such symbols), the exponential blowups can be avoided.
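To make the renaming concrete in the propositional case, here is a minimal Python sketch; it is my own illustration, so the helper names (equiv_clauses, chain_to_cnf) and the DIMACS-style integer literals are assumptions rather than any particular solver's API. It uses the fact that p <=> (q <=> s) is just the parity constraint p XOR q XOR s, which needs only four clauses, so a chain of nested equivalences yields a linear number of clauses.
def equiv_clauses(p, q, s):
    # CNF of p <=> (q <=> s), i.e. the parity constraint p XOR q XOR s.
    # Literals are integers: +v is the variable v, -v its negation.
    return [[p, q, s], [p, -q, -s], [-p, q, -s], [-p, -q, s]]

def chain_to_cnf(vs):
    # vs = [v0, v1, ..., vn] encodes v0 <=> (v1 <=> (... <=> vn)), with n >= 2.
    # Each nesting level gets one fresh symbol and four clauses,
    # so the clause count is linear in n instead of doubly exponential.
    vs = list(vs)
    clauses = []
    fresh = max(vs) + 1
    while len(vs) > 3:
        inner, outer = vs.pop(), vs.pop()               # innermost pair
        clauses += equiv_clauses(fresh, outer, inner)   # fresh <=> (outer <=> inner)
        vs.append(fresh)
        fresh += 1
    clauses += equiv_clauses(vs[0], vs[1], vs[2])
    return clauses

print(len(chain_to_cnf([1, 2, 3, 4, 5])))   # 12 clauses for a <=> (b <=> (c <=> (d <=> e)))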
Unfortunately, this runs into a problem when we try to extend it to first-order logic. Using TPTP notation where ? means 'there exists' and variables begin with capital letters, consider
a <=> ?[X]:p(X)
Admittedly this case is simple enough that there is no actual need to rename the subterm, but it's necessary to use a simple case for illustration, so suppose we are using an algorithm that just automatically renames arguments of the equivalence operator; the point generalizes to more complex cases.
If we try the above trick and rename the ? subterm, we get
r <=> ?[X]:p(X)
Existential variables are converted to Skolem symbols, so that ends up as
r <=> p(s)
The original formula then expands to
(~a | r) & (a | ~r)
Which is by construction equivalent to
(~a | p(s)) & (a | ~p(s))
But this is not correct! If we had not done the renaming, but had just expanded the original formula as it was, we would get
(~a | ?[X]:p(X)) & (a | ~?[X]:p(X))
(~a | ?[X]:p(X)) & (a | ![X]:~p(X))
(~a | p(s)) & (a | ~p(X))
which is critically different from the version we got with the renaming.
The problem is that equivalence needs both the positive and negative versions of each argument, but applying negation to terms that contain universal or existential quantifiers structurally changes those terms; you cannot just encapsulate them in a definition and then apply the negation to the defined symbol.
The upshot of this is that when you have equivalence and the arguments may contain such quantifiers, you actually need to recurse through each argument twice, once for the positive version and once for the negative. This suffices to bring back the exponential blowup we hoped to avoid by doing the renaming. As far as I can see, this problem is not caused by the way a particular algorithm works, but by the nature of the task.
So my question:
Given an input formula that may contain arbitrary nesting of equivalence and quantifiers, is there any algorithm that will correctly turn this to CNF with a polynomial rather than exponential number of clauses?
As you observed, an existential such as ∃X.p(X) is not in fact equivalent to a Skolemized expression p(S). Its negation ¬∃X.p(X) is not equivalent to ¬p(S), but to ∀Y.¬p(Y).
Possible approaches that avoid the exponential blow-up:
Convert existentials such as ∃X.p(X) into negated universals such as ¬∀Y.¬p(Y), or vice versa, so you have a canonical form. Skolemize at a later step.
Remember when you convert that your p(S) is a Skolemized existential, and that its negation is ∀Y.¬p(Y).
Define terms equivalent to universals and existentials, such that a represents ∀Y.p(Y) and ¬a then represents ¬∀Y.p(Y), or equivalently, ∃X.¬p(X).
Use the symmetry of Boolean duals, so that the same transformations apply with AND and OR swapped (De Morgan's Laws) and with existentials rewritten as negated universals, to restore the symmetry between the expansions of r and ~r. The negations introduced by the quantifier conversion and by De Morgan's Laws cancel each other out, and the duality of swapping AND and OR means you can mechanically reuse the expansion of r to generate the expansion of ~r.
Given that you need to support ALL and NOT ALL statements anyway, this should not create any new problems. Just canonicalize and use the same approach you would for a universal.
If you’re solving by converting to SAT, your terms can represent universals, too. So, in your example, you’re trying to replace a with r, but you can still use ~a, equivalent to the negative universal.
In your expressions, you'd still use (~a | r) & (a | ~r), but expand ~r to its correct rather than the incorrect value. That example is trivial, since that's just ~a, but you'd normally define r as equivalent to a more complex transformation, and in that case you need to remember what both r and ~r represent. It is not really a simple mechanical transformation of the Skolemized expression.
In this example, I’m not sure why it’s a problem that (~a | r) & (a | ~r) is equivalent to (~a | r) & (a | ~a), which simplifies to (~a | r). That’s not going to give you exponential blow-up? When you translate back to first-order predicate logic, make the correct translation.
Update
Thanks for clarifying what the problem was in chat. As I currently think I understand it, what you have is an equivalence with a left and a right side, which contains other nested equivalences, and you want to expand both the equivalence and its negation. The problem is that, because the negation does not have symmetrical form, you need to recurse twice for each nested equivalence in the tree, once when expanding the equivalence and once when expanding its negation?
You should define a transformation that generates the negative expansion from the positive expansion in linear time, and divide-and-conquer the expressions containing nested equivalences using that. This seems to be what you were after with the ~p(S) transformation.
To do this, you recall that ¬∃X.p(X) is equivalent to ∀X.¬p(X), and vice versa. Then, if you've expanded p(X) into normal form as conjunctions and disjunctions, De Morgan's Laws let you turn an expression like ¬(a ∨ ¬b) into ¬a ∧ b. The inner ¬ from the quantifier transformation and the outer ¬ from the De Morgan transformation cancel each other out. Finally, the dual of any Boolean equivalence remains valid when you replace each ∨ and ∧ with the other and any atom a or ¬a with its inverse.
So, while I might be making an error, especially at 1 AM, it looks to me like what you want is the dual transformation that substitutes:
An outer ∃ for ∀ and vice versa
∧ for ∨ and vice versa
Each term t with ¬t and vice versa
Apply this to the expansion of the positive equivalence to generate the negative dual in time proportional to its length, without further recursion.
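As a sketch of that transformation, assuming a simple tuple-based formula representation of my own choosing (in Python, not the asker's data structures), something like the following walks the positive expansion once and produces the negative expansion in time proportional to its size:
def dual(f):
    # f is an atom (a string), a negated atom ('not', atom), or one of
    # ('and', ...), ('or', ...), ('forall', var, body), ('exists', var, body).
    # The result is the negation of f, obtained by swapping AND/OR,
    # swapping the quantifiers, and flipping each literal.
    if isinstance(f, str):
        return ('not', f)                    # t   ->  ~t
    tag = f[0]
    if tag == 'not':
        return f[1]                          # ~t  ->  t
    if tag == 'and':
        return ('or',) + tuple(dual(g) for g in f[1:])
    if tag == 'or':
        return ('and',) + tuple(dual(g) for g in f[1:])
    if tag == 'forall':
        return ('exists', f[1], dual(f[2]))
    if tag == 'exists':
        return ('forall', f[1], dual(f[2]))
    raise ValueError(f)

# ~(~a | ?[X]:p(X)) becomes a & ![X]:~p(X)
print(dual(('or', ('not', 'a'), ('exists', 'X', 'p(X)'))))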
Related
In the car industry you have thousands of different variants of components to choose from when you buy a car. Not every component can be combined with every other, so for each car there are a lot of rules, expressed in propositional logic. In my case each car has between 2000 and 4000 rules.
They look like this:
A → B ∨ C ∨ D
C → ¬F
F ∧ G → D
...
where "∧" = "and" / "∨" = "or" / "¬" = "not" / "→" = "implication".
With the tool "limboole" (http://fmv.jku.at/limboole/) I am able to convert the propositional logic expressions into conjunctive normal form (CNF). This is needed in case I have to use a SAT solver.
Now, I would like to check the buildability (feasibility) of specific components within the rule set. For example, for each of the following expressions or combinations, I would like to check whether they are feasible within the rule set.
(A) ∧ (B)
(A) ∧ (C ∨ F)
(B ∨ G)
...
My question is how to solve this problem. I asked a similar question before (Tool to solve propositional logic / boolean expressions (SAT Solver?)), but with a different focus, and now I am stuck again. Or I just do not understand it.
One option is to calculate all solutions of the rule set with an AllSAT approach. Then I could check whether each combination is part of some solution. If yes, I can conclude that this specific combination is feasible.
Another option would be to add the combination to the rule set and then run a normal SAT solver. But I would have to do that for each expression I want to check.
What do you think is the most elegant or rather easiest way to solve this problem?
The best method known to me is the "incremental solving under assumptions" technique. It was motivated by exactly the problem you have: multiple SAT instances (CNF formulae) that share some common subformulae.
Formally, you have a core Boolean formula C in CNF, and a set of assumptions {A_i}, i = 1..n, where each A_i is also a Boolean formula in CNF.
At step 0 you give the solver your core formula C. It tries to solve it, reports the result to you, and saves its state (let's call this state the core state). If formula C is satisfiable, then at step i you provide assumption A_i to the solver and it continues its execution from the core state. Effectively, it tries to solve the formula C ∧ A_i, but not from the beginning.
You can easily find plenty of papers on this topic with much more information. Also, check whether your favorite SAT solver supports this technique.
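To illustrate, here is a minimal sketch using the python-sat (pysat) package; the variable numbering (1=A, 2=B, 3=C, 4=D, 5=F, 6=G) and the selector-variable trick for clause-shaped queries are my own assumptions, not something your rule set prescribes.
from pysat.solvers import Minisat22

# Core rule set C in CNF (1=A, 2=B, 3=C, 4=D, 5=F, 6=G):
core = [
    [-1, 2, 3, 4],   # A -> B v C v D
    [-3, -5],        # C -> ~F
    [-5, -6, 4],     # F & G -> D
    [-7, 3, 5],      # selector 7 switches on the query clause (C v F)
    [-8, 2, 6],      # selector 8 switches on the query clause (B v G)
]

with Minisat22(bootstrap_with=core) as solver:
    print(solver.solve(assumptions=[1, 2]))   # is (A) & (B) feasible?
    print(solver.solve(assumptions=[1, 7]))   # is (A) & (C v F) feasible?
    print(solver.solve(assumptions=[8]))      # is (B v G) feasible?
Each query reuses the state the solver built for the core formula, which is exactly the saving described above.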
I have a relation f defined as f: A -> B × C. I would like to write a first-order formula to constrain this relation to be a bijective function from A to B × C.
To be more precise, I would like the first-order counterpart of the following formulae (actually the conjunction of the three):
∀a: A, ∃! bc : B × C, f(a)=bc -- f is function
∀a1,a2: A, f(a1)=f(a2) → a1=a2 -- f is injective
∀(b, c) : B × C, ∃ a : A, f(a)=(b, c) -- f is surjective
As you can see, the above formulae are in higher-order logic, as I quantified over the relations. What is the first-order logic equivalent of these formulae, if it is possible at all?
PS:
This is a more general (math) question rather than one specific to any theorem prover, but I put the theorem-provers tag on it to get help from these communities, as I think there is a mature understanding of mathematics here.
(Update: Someone's unhappy with my answer, and SO gets me fired up in general, so I say what I want here, and will probably delete it later, I suppose.
I understand that SO is not a place for debates and soapboxes. On the other hand, the OP, qartal, who I assume is the unhappy one, wants to apply the answer from math.stackexchange.com, where ZFC sets dominate, to a question here which is tagged, at this moment, with isabelle and logic.
First, notation is important, and sloppy notation can result in a question that's ambiguous to the point of being meaningless.
Second, having a B.S. in math, I have full appreciation for the logic of ZFC sets, so I have full appreciation for math.stackexchange.com.
I make the argument here that the answer given on math.stackexchange.com, linked to below, is wrong in the context of Isabelle/HOL. (First hmmm, me making claims under ill-defined circumstances can be annoying to people.)
If I'm wrong, and someone teaches me something, the situation here will be redeemed.
The answerer says this:
First of all in logic B x C is just another set.
There's not just one logic. My immediate reaction when I see the symbol × is to think of a type, not a set. Consider this, which kind of looks like your f: A -> BxC:
definition foo :: "nat => int × real" where "foo x = (x,x)"
I guess I should be proficient in going back and forth between sets and types, and in reading minds, but I did learn something by entering this term:
term "B × C" (* shows it's of type "('a × 'b) set" *)
Feeling paranoid, I did this to see if I had fallen into a major gotcha:
term "f : A -> B × C"
It gives a syntax error. Here I am, getting all pedantic, and our discussion is ill-defined because the notation is ill-defined.
The crux: the formula in the other answer is not first-order in this context
(Another hmmm, after writing what I say below, I'm full circle. Saying things about stuff when the context of the stuff is ill-defined.)
Context is everything. The context of the other site is generally ZFC sets. Here, it's HOL. That answerer says to assume these for his formula, which I give below:
Ax is true iff x∈A
Bx is true iff x∈B×C
Rxy is true iff f(x)=y
Syntax. No one has defined it here, but the tag here is isabelle, so I take it to mean that I can substitute the left-hand side of the iff for the right-hand side.
Also, the expression x ∈ A is what would be in the formula in a typical set theory textbook, not Rxy. Therefore, for the answerer's formula to have meaning, I can rightfully insert f(x) = y into it.
This then is why I did a lot of hedging in my first answer. The variable f cannot be in the formula. If it's in the formula, then it's a free variable which is implicitly quantified. Here's the formula in Isar syntax:
term "∀x. (Ax --> (∃y. By ∧ Rxy ∧ (∀z. (Bz ∧ Rxz) --> y = z)))"
Here it is with the substitutions:
∀x. (x∈A --> (∃y. y∈B×C ∧ f(x)=y ∧ (∀z. (z∈B×C ∧ f(x)=z) --> y = z)))
In HOL, f(x) = f x, and so f is implicitly universally quantified. If this is the case, then it's not first-order.
Really, I should dig deep to recall what I was taught, that f(x)=y means:
(x,f(x)) = (x,y) which means we have to have (x,y)∈(A, B×C)
which finally gets me:
∀x. (x∈A -->
(∃y. y∈B×C ∧ (x,y)∈(A,B×C) ∧ (∀z. (z∈B×C ∧ (x,z)∈(A,B×C)) --> y = z)))
Finally, I guess it turns out that in the context of math.stackexchange.com, it's 100% on.
Am I the only one who feels compulsive about questioning what this means in the context of Isabelle/HOL? I don't accept that everything here is defined well enough to show that it's first order.
Really, qartal, your notation should be specific to a particular logic.
First answer
With Isabelle, I answer the question based on my interpretation of your f: A -> B x C, which I take as a ZFC set, in particular a subset of the Cartesian product A x (B x C). You're sort of mixing notation from the two logics, that of ZFC sets and that of HOL. Consequently, I might be off on what I think you're asking.
You don't define your relation, so I keep things simple. I define a simple ZFC function and prove the first part of your first condition, that f is a function. The second part would be proving uniqueness. It can be seen that f satisfies that, so once a formula for uniqueness is stated correctly, auto might easily prove it.
Please notice that the theorem is a first-order formula. The characters ! and ? are ASCII equivalents for \<forall> and \<exists>. (Clarifications must abound when working with HOL. It's first-order logic if the variables are atomic. In this case, the type of the variables is numeral. The basic concept is there. That I'm wrong in some detail is highly likely.)
definition "A = {1,2}"
definition "B = A"
definition "C = A"
definition "f = {(1,(1,1)), (2,(1,1))}"
theorem
"!a. a \<in> A --> (? z. z \<in> (B × C) & (a,z) \<in> f)"
by(auto simp add: A_def B_def C_def f_def)
(To completely give you an example of what you asked for, I would have to redefine my function so it's bijective. Little examples can take a ton of work.)
That's the basic idea, and the rest of proving that f is a function will follow that basic pattern.
If there's a problem, it's that your f is a ZFC set function/relation, and the logical infrastructure of Isabelle/HOL is set up for functions as a type. Functions as ordered pairs, ZFC style, can be formalized in Isabelle/HOL, but it hasn't been done in a reasonably complete way.
Generalizing it all is where the work would be. For a particular relation, as I defined above, I can limit myself to first-order formulas, if I ignore that the foundation, Isabelle/HOL, is, of course, higher-order logic.
I want to use a universal quantifier in the body of a predicate rule, i.e., something like
A(x,y) <- ∀B(x,a), C(y,a).
It means that A(x,y) is true only if, for every a such that C(y,a) holds, B(x,a) holds as well.
Since in Datalog every variable bound in a rule body is existentially quantified by default, the a would be existentially quantified too. What should I do to express a universal quantifier in the body of a predicate rule?
Thank you.
P.S. The Datalog engine I am using is logicblox.
The basic idea is to use the logical axiom
∀x φ(x) ⇔ ¬∃x ¬φ(x)
to put your rules in a form where only existential quantifiers are required (along with negation). Intuitively, this usually means computing the complement of your answer first, and then computing its complement to produce the final answer.
For example, suppose you are given a graph G(V,E) and you want to find the vertices which are adjacent to all others in the graph. If universal quantification were allowed in a Datalog rule body, you might write something like
Q(x) <- ∀y E(x,y).
To write this without the universal quantifier, you first compute the vertices which are not adjacent to all others
NQ(x) <- V(x), V(y), !E(x,y).
then return its complement as the answer
Q(x) <- V(x), !NQ(x).
The same kind of trick can be used in SQL, which also lacks universal quantifiers.
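For what it's worth, the complement trick is easy to see in executable form. Here is a tiny Python sketch with toy data of my own, mirroring the NQ/Q rules above (note that, as written, the rules also require a vertex to be adjacent to itself, unless you add a y != x guard):
V = {1, 2, 3}
E = {(1, 1), (1, 2), (1, 3), (2, 1), (3, 1)}

NQ = {x for x in V for y in V if (x, y) not in E}   # NQ(x) <- V(x), V(y), !E(x,y).
Q = V - NQ                                          # Q(x)  <- V(x), !NQ(x).
print(Q)   # {1}: vertex 1 is adjacent to every vertex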
I want to use the destruct tactic to prove a statement by cases. I have read a couple of examples online and I'm confused. Could someone explain it better?
Here is a small example (there are other ways to solve it but try using destruct):
Inductive three := zero
| one
| two.
Lemma has2b2: forall a:three, a<>zero /\ a<>one -> a=two.
Now some examples online suggest doing the following:
intros. destruct a.
In which case I get:
3 subgoals
H : zero <> zero /\ zero <> one
______________________________________(1/3)
zero = two
______________________________________(2/3)
one = two
______________________________________(3/3)
two = two
So, I want to prove that the first two cases are impossible. But the machine lists them as subgoals and wants me to PROVE them... which is impossible.
Summary:
How exactly do I discard the impossible cases?
I have seen some examples using inversion but I don't understand the procedure.
Finally, what happens if my lemma depends on several inductive types and I still want to cover ALL cases?
How to discard the impossible cases? Well, it's true that the first two obligations are impossible to prove, but note they have contradictory assumptions (zero <> zero and one <> one, respectively). So you will be able to prove those goals with tauto (there are also more primitive tactics that will do the trick, if you are interested).
inversion is a more advanced version of destruct. In addition to 'destructing' the inductive value, it will sometimes generate some equalities (that you may need). It is itself a simpler version of induction, which will additionally generate an induction hypothesis for you.
If you have several inductive types in your goal, you can destruct/invert them one by one.
More detailed walk-through:
Inductive three := zero | one | two .
Lemma test : forall a, a <> zero /\ a <> one -> a = two.
Proof.
intros a H.
destruct H. (* to get two parts of conjunction *)
destruct a. (* case analysis on 'a' *)
(* low-level proof *)
compute in H. (* to see through the '<>' notation *)
elimtype False. (* meaning: assumptions are contradictory, I can prove False from them *)
apply H.
reflexivity.
(* can as well be handled with more high-level tactics *)
firstorder.
(* the "proper" case *)
reflexivity.
Qed.
If you see an impossible goal, there are two possibilities: either you made a mistake in your proof strategy (perhaps your lemma is wrong), or the hypotheses are contradictory.
If you think the hypotheses are contradictory, you can set the goal to False, to get a little complexity out of the way. elimtype False achieves this. Often, you prove False by proving a proposition P and its negation ~P; the tactic absurd P deduces any goal from P and ~P. If there's a particular hypothesis which is contradictory, contradict H will set the goal to ~H, or if the hypothesis is a negation ~A then the goal will be A (stronger than ~ ~A but usually more convenient). If one particular hypothesis is obviously contradictory, contradiction H or just contradiction will prove any goal.
There are many tactics involving hypotheses of inductive types. Figuring out which one to use is mostly a matter of experience. Here are the main ones (but you will run into cases not covered here soon):
destruct simply breaks down the hypothesis into several parts. It loses information about dependencies and recursion. A typical example is destruct H where H is a conjunction H : A /\ B, which splits H into two independent hypotheses of types A and B; or dually destruct H where H is a disjunction H : A \/ B, which splits the proof into two different subproofs, one with the hypothesis A and one with the hypothesis B.
case_eq is similar to destruct, but retains the connections that the hypothesis has with other hypotheses. For example, destruct n where n : nat breaks the proof into two subproofs, one for n = 0 and one for n = S m. If n is used in other hypotheses (i.e. you have a H : P n), you may need to remember that the n you've destructed is the same n used in these hypotheses: case_eq n does this.
inversion performs a case analysis on the type of a hypothesis. It is particularly useful when there are dependencies in the type of the hypothesis that destruct would forget. You would typically use case_eq on hypotheses in Set (where equality is relevant) and inversion on hypotheses in Prop (which have very dependent types). The inversion tactic leaves a lot of equalities behind, and it's often followed by subst to simplify the hypotheses. The inversion_clear tactic is a simple alternative to inversion; subst but loses a little information.
induction means that you are going to prove the goal by induction (= recursion) on the given hypothesis. For example, induction n where n : nat means that you'll perform integer induction and prove the base case (n replaced by 0) and the inductive case (n replaced by m+1).
Your example is simple enough that you can prove it as “obvious by case analysis on a”.
Lemma has2b2: forall a:three, a<>zero/\a<>one ->a=two.
Proof. destruct a; tauto. Qed.
But let's look at the cases generated by the destruct tactic, i.e. after just intros. destruct a. (The case where a is one is symmetric; the last case, where a is two, is obvious by reflexivity.)
H : zero <> zero /\ zero <> one
============================
zero = two
The goal looks impossible. We can tell this to Coq, and here it can spot the contradiction automatically (zero=zero is obvious, and the rest is a first-order tautology handled by the tauto tactic).
elimtype False. tauto.
In fact tauto works even if you don't start by telling Coq not to worry about the goal and write tauto without the elimtype False first (IIRC it didn't in older versions of Coq). You can see what Coq is doing with the tauto tactic by writing info tauto. Coq will tell you what proof script the tauto tactic generated. It's not very easy to follow, so let's look at a manual proof of this case. First, let's split the hypothesis (which is a conjunction) into two.
destruct H as [H0 H1].
We now have two hypotheses, one of which is zero <> zero. This is clearly false, because it's the negation of zero = zero which is clearly true.
contradiction H0. reflexivity.
We can look in even more detail at what the contradiction tactic does. (info contradiction would reveal what happens behind the scenes, but again it's not novice-friendly.) We claim that the goal is true because the hypotheses are contradictory, so we can prove anything. So let's set the intermediate goal to False.
assert (F : False).
Run red in H0. to see that zero <> zero is really notation for ~(zero=zero) which in turn is defined as meaning zero=zero -> False. So False is the conclusion of H0:
apply H0.
And now we need to prove that zero=zero, which is
reflexivity.
Now we've proved our assertion of False. What remains is to prove that False implies our goal. Well, False implies any goal; that's its definition (False is defined as an inductive type with zero constructors).
destruct F.
I'm still very new to prolog, and am trying to wrap my head around why math constraints don't seem to work the same way logical ones do.
It seems like there's enough information to solve this:
f(A, B) :- A = (B xor 2).
But when I try f(C, 3), I get back C = 3 xor 2, which isn't very helpful. Even less useful is the fact that it simply can't find a solution if the inputs are reversed. Using is instead of = makes the example input return the correct answer, but the reverse refuses to even attempt anything.
From my earlier experimentation, it seems that I could write a function that did this logically using the binary without trouble, and it would in fact go both ways. What makes the math different?
For reference, my first attempt at solving my problem looks like this:
f(Input, Output) :-
A is Input xor (Input >> 11),
B is A xor ((A >> 7) /\ 2636928640),
C is B xor ((B << 15) /\ 4022730752),
Output is C xor (C >> 18).
This works fine going from input to output, but not the other way around. If I switch the is to =, it produces a long logical sequence with values substituted but can't find a numerical solution.
I'm using SWI-Prolog, which has xor built in, but it could just as easily be defined. I was hoping to be able to use Prolog to work this function in both directions, and really don't want to have to implement the logical behaviors by hand. Any suggestions about how I might reformulate the problem are welcome.
Pure Prolog is not supposed to handle math. The basic algorithm that drives Prolog, unify and backtrack on failure, doesn't mention arithmetic operators. Most Prolog implementations add arithmetic as an ugly hack in their bytecode.
The reason for this is that arithmetic functions do not act the same way as functors. They cannot be unified in the same way. Not every function is guaranteed to work for each combination of ground and unground arguments. For example, the algorithm for raising X to the power of Y is not symmetric to finding the Yth root of X. If all arithmetic functions were symmetric, encryption and cryptography wouldn't work!
That said, here are the missing facts about Prolog operators:
First, '=' is not "equals" in Prolog, but "unify". The goal X = Y op Z, where op is an operator, unifies X with the compound term 'op'(Y,Z). It has nothing to do with arithmetic equality or assignment.
Second, is, the ugly math hack, is not guaranteed to be reversible. The goal X is Expr, where Expr is an arithmetic expression, first evaluates the expression and then tries to unify the result with X. It won't always work for every combination of numbers and variables; check your Prolog library documentation.
To summarize:
Writing reversible mathematical functions requires the mathematical knowledge and an algorithm that makes the function reversible. Prolog won't do the magic for you in this case (see the sketch after this list).
If you're looking for smart equation solving, you might want to check Prolog constraint-solving libraries for finite and continuous domains. Not the same thing as reversible math, but somewhat smarter than Prolog's naive arithmetic operators.
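To make the first point concrete, here is a small Python sketch (deliberately not Prolog) of the asker's first tempering step together with a hand-written inverse: the inverse exists, but you have to supply the algorithm yourself, since neither =/2 nor is/2 will derive it for you.
def temper(x):
    # the asker's first step: A is Input xor (Input >> 11)
    return x ^ (x >> 11)

def untemper(y, shift=11, width=32):
    # Invert y = x ^ (x >> shift) for a width-bit x: the top `shift` bits of
    # y already equal those of x, and re-applying the xor peels off the
    # remaining blocks until a fixed point is reached.
    x = y
    for _ in range(width // shift + 1):
        x = y ^ (x >> shift)
    return x

x = 123456789
assert untemper(temper(x)) == x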
If you want to compare the results of evaluating expressions, you should use the operator (=:=)/2, or, when checking for apartness, the operator (=\=)/2.
The operator also works for bitwise operations, since bitwise operations work on integers, and integers are numbers. The operator is part of the ISO core standard. For the following clause:
f(A, B) :- A =:= (B xor 2).
I get the following runs in SWI-Prolog, Jekejeke Prolog, etc.:
Welcome to SWI-Prolog (Multi-threaded, 64 bits, Version 7.3.31)
Copyright (c) 1990-2016 University of Amsterdam, VU Amsterdam
?- f(100, 102).
true.
?- f(102, 100).
true.
?- f(100, 101).
false.
If you want a more declarative way of handling bits, you can use a SAT solver integrated into Prolog. A good SAT solver should also support limited or unlimited bit vectors, but I can't currently tell what's available here and what the restrictions would be.
See for example this question here:
Prolog SAT Solver