Can the negation introduction rule of inference "a, b=>¬a / ¬b" be used instead of the usual "b=>a, b=>¬a / ¬b"?

I find the negation introduction rule I learned at university a bit confusing to reason about, and I think that "a, b=>¬a / ¬b" makes more sense: it says that if b implies something that is not true, then b itself is not true. I can't seem to find an example where the usual rule is more useful than the one I would like to use. Is there a reason why "b=>a, b=>¬a / ¬b" is used as a rule?

OK, I think I have a pretty rigorous argument which validates said replacement.
Let's say that we need to introduce a negation on P. So using the usual inference rule, we prove
P => Q
P => ¬Q
and thereby prove ¬P.
Suppose that Q and ¬Q cannot both be derived unless P is assumed. Then under the assumption P we can derive Q /\ ¬Q, which allows us to derive anything, including the negation of a tautology.
So we can prove ¬P using the proposed rule by doing something like this:
1.  | P              Assumed
... | ...
10. | Q
... | ...
20. | ¬Q
21. | Q /\ ¬Q        /\ introduction on lines 10 and 20
22. | ¬(A => A)      Derived from line 21 using the contradiction lemma
23. P => ¬(A => A)   => introduction on lines 1-22
24. A => A           Anything implies itself (a tautology)
25. ¬P               ¬ introduction (the proposed rule) on lines 23 and 24
So, with the help of a tautology, we can always fall back on the proposed rule of inference.
In other words, if you can use the usual rule of inference to introduce a negation, you can use the proposed rule of inference too.
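As an aside (not part of the original question), both rules can also be checked semantically by brute-forcing the truth values. The following is a minimal Python sketch of such a check, with helper names of my own choosing; it only confirms truth-table soundness, not derivability in any particular proof system.

from itertools import product

def implies(x, y):
    # Material implication: x => y is false only when x is true and y is false.
    return (not x) or y

for a, b in product([False, True], repeat=2):
    # Usual rule: from b => a and b => ¬a, conclude ¬b.
    if implies(b, a) and implies(b, not a):
        assert not b
    # Proposed rule: from a and b => ¬a, conclude ¬b.
    if a and implies(b, not a):
        assert not b

print("Both negation introduction rules are semantically sound.")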

Related

Expanding all definitions in Isabelle lemma

How can I tell Isabelle to expand all my definitions, please, because that way the proof is trivial? Unfortunately, no default expansion or simplification happens, and I basically get back the original expression as the subgoal.
Example:
theory Test
imports Main
begin
definition b0 :: "nat⇒nat"
where "b0 n ≡ (n mod 2)"
definition b1 :: "nat⇒nat"
where "b1 n ≡ (n div 2)"
lemma "(a::nat)≤3 ∧ (b::nat)≤3 ⟶
2*(b1 a)+(b0 a)+2*(b1 b)+(b0 b) = a+b"
apply auto
oops
end
Response before oops:
proof (prove)
goal (1 subgoal):
1. a ≤ 3 ⟹
b ≤ 3 ⟹ 2 * b1 a + b0 a + 2 * b1 b + b0 b = a + b
My recommendation: unfolding
There is a special keyword unfolding for unpacking definitions at the start of proofs. For your example this would read:
unfolding b0_def b1_def by simp
I consider unfolding the most elegant way. It also helps while writing the proofs. Internally, this is (mostly?) equivalent to using the unfold-method:
apply (unfold b0_def b1_def) by simp
This will recursively (!) use the set of equalities you supply to rewrite the proof goal. (Due to the recursion, you should rather not supply a set of equalities that could generate cycles...)
Alternative: Using the simplifier
In cases with possible loops, the simplifier might be able to reach a nice unfolding without running into these cycles, maybe by interleaving with other simplifications. In such cases, by (simp add: b0_def b1_def), as you've suggested, is great!
Alternative definition: Maybe it's just an abbreviation (and no definition)?
If you find yourself unfolding a definition in every single instance, you could consider using abbreviation instead of definition. Then some Isabelle magic will do the packing/unpacking for you without further hints. abbreviation only affects how the user communicates with Isabelle. It does not introduce new symbols at the object level, and consequently there would be no b1_def facts and the like.
abbreviation b0 :: "nat⇒nat"
where "b0 n ≡ (n mod 2)"
Usually not recommended: Building something like an abbreviation using the simplifier
If you (for whatever reason) want to have a defined name at the object level, but unfold it in almost every instance, you can also feed the defining equality directly into the simplifier.
definition b0 :: "nat⇒nat"
where [simp]: "b0 n ≡ (n mod 2)"
(Usually there should be little reason for the last option.)
Yes, I keep forgetting that definitions are not used in simplifications by default.
Adding the definitions explicitly to the simplification rules solves this problem:
lemma "(a::nat)≤3 ∧ (b::nat)≤3 ⟶
2*(b1 a)+(b0 a)+2*(b1 b)+(b0 b) = a+b"
by (simp add: b0_def b1_def)
This way the definitions (b0, b1) are correctly used.

Explanation of conversion of CNF to imperative normal form in 2-SAT problem?

So this question might seem dumb to many of you, but I'm finding it somewhat hard to grasp the conversion of a CNF clause to an INF one.
I was going through this article where it states:
First we need to convert the problem to a different form, the so-called implicative normal form. Note that the expression a∨b is equivalent to (¬a⇒b) ∧ (¬b⇒a) (if one of the two variables is false, then the other one must be true).
Can somebody explain how we get to this result / how this conversion makes sense? I have no idea what that "=>" sign means, either. Thanks in advance!
Update 1: If in doubt with different logical symbols, refer to this wiki.
=> is implication, with the truth table:
A B | A => B
----+-------
F F | T
F T | T
T F | F
T T | T
In fact, you can show that a => b is equivalent to ~a \/ b. (Just fill out the truth tables.)
Now, we have:
~a => b
= ~(~a) \/ b
= a \/ b
So it's even stronger: a \/ b is equivalent to ~a => b alone. You can similarly show it is also equivalent to ~b => a; so taking the conjunction is perhaps redundant, but it doesn't change the equivalence.
If in doubt, always draw the full truth tables; with 4-5 variables this is very educational. If you have more variables, use a SAT/SMT solver to prove equivalence. That's what they are good for.
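For example, here is a minimal Python sketch (not from the article; the helper names are mine) that brute-forces the equivalence and shows how each 2-CNF clause contributes two edges to the implication graph:

from itertools import product

def implies(x, y):
    # Material implication: x => y is false only when x is true and y is false.
    return (not x) or y

# Brute-force check: a \/ b is equivalent to ~a => b, and also to ~b => a.
for a, b in product([False, True], repeat=2):
    assert (a or b) == implies(not a, b) == implies(not b, a)

# Hence, in the 2-SAT implication graph, the clause (a \/ b) contributes the
# two edges ~a -> b and ~b -> a. A literal is encoded here as (variable, sign).
def clause_to_edges(a, b):
    negate = lambda lit: (lit[0], not lit[1])
    return [(negate(a), b), (negate(b), a)]

print(clause_to_edges(("x", True), ("y", False)))
# prints [(('x', False), ('y', False)), (('y', True), ('x', True))]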

Can you prove Excluded Middle is wrong in Coq if I do not import classical logic

I know excluded middle is impossible in the calculus of constructions. However, I am stuck when I try to show it in Coq.
Theorem em: forall P : Prop, ~P \/ P -> False.
My approach is:
intros P H.
unfold not in H.
intuition.
The system says following:
2 subgoals
P : Prop
H0 : P -> False
______________________________________(1/2)
False
______________________________________(2/2)
False
How should I proceed?
Thanks
What you are trying to construct is not the negation of LEM, which would say "there exists some P such that EM doesn't hold", but the claim that no proposition is decidable, which of course leads to a trivial inconsistency:
Axiom not_lem : forall (P : Prop), ~ (P \/ ~ P).
Goal False.
now apply (not_lem True); left.
No need to use the fancy double-negation lemma, as this axiom is obviously inconsistent [imagine it held!].
The "classical" negation of LEM is indeed:
Axiom not_lem : exists (P : Prop), ~ (P \/ ~ P).
and it is not provable [otherwise EM wouldn't be admissible]. Note, however, that it cannot be assumed safely either: since ¬¬(P ∨ ¬P) is provable for every P (see the lemma in the next answer), this existential statement is also refutable in Coq, so it won't be of any utility for you.
One cannot refute the law of excluded middle (LEM) in Coq.
Let's suppose you proved your refutation of LEM. We model this kind of situation by postulating it as an axiom:
Axiom not_lem : forall (P : Prop), ~ (P \/ ~ P).
But then we also have a weaker version (double-negated) of LEM:
Lemma not_not_lem (P : Prop) :
~ ~ (P \/ ~ P).
Proof.
intros nlem. apply nlem.
right. intros p. apply nlem.
left. exact p.
Qed.
These two facts together would make Coq's logic inconsistent:
Lemma Coq_would_be_inconsistent :
False.
Proof.
apply (not_not_lem True).
apply not_lem.
Qed.
I'm coming from MathOverflow, but I don't have permission to comment on @Anton Trunov's answer. I think his answer is unfair, or at least incomplete: it hides the following "folklore":
Coq + Impredicative Set + Weak Excluded-middle -> False
This folklore is a variation of the following facts:
proof irrelevance + large elimination -> false
And Coq + Impredicative Set enjoys canonicity, soundness, and strong normalization, so it is consistent.
Coq + Impredicative Set is essentially the old version of Coq. I think this at least shows that the defense of LEM based on double-negation translation is not that convincing.
If you want more information about the solutions, you can find it here: https://github.com/FStarLang/FStar/issues/360
On the other hand, you may be interested in the story of how Coq-HoTT + UA conflicts with LEM∞...
=====================================================
OK, let's have some solutions.
Use the command-line flag -impredicative-set, or install an old version (< 8.0) of Coq. Then:
excluded-middle -> proof-irrelevance
proof-irrelevance -> False
Or you can work with standard Coq + Coq-HoTT.
Install coq-hott. Then:
Univalence + Global Excluded-middle (LEM∞) -> False
It is not recommended that you jump straight to the code in question without grasping the underlying concepts.
I have skipped a lot of meta-theoretic detail, such as the fact that Univalence does not compute in Coq-HoTT but only in Agda-CuTT, and the consistency proofs for Coq + Impredicative Set and for Coq-HoTT.
However, metatheoretical considerations are important. If we just want an anti-LEM model and don't care about the metatheory, then we can use "Boolean-valued forcing" in Coq to validate statements that LEM rules out, such as "every function on the reals is continuous", Dedekind infinite...
But this answer ends here.

Intro rule for "∀r>0" in Isabelle

When I have a goal such as "∀x. P x" in Isabelle, I know that I can write
show "∀x. P x"
proof (rule allI)
However, when the goal is "∀x>0. P x", I cannot do that. Is there a similar rule/method that I can use after proof in order to simplify my goal? I would also be interested in one for the situation where you have a goal of the form "∃x>0. P x".
I'm looking for an Isar proof that uses the proof (rule something) style.
Universal quantifier
To expand on Lars's answer: ∀x>0. P x is just syntactic sugar for ∀x. x > 0 ⟶ P x. As a consequence, if you want to prove a statement like this, you first have to strip away the universal quantifier with allI and then strip away the implication with impI. You can do something like this:
lemma "∀x>0. P x"
proof (rule allI, rule impI)
Or using intro, which is more or less the same as applying rule until it is not possible anymore:
lemma "∀x>0. P x"
proof (intro allI impI)
Or you can use safe, which eagerly applies all introduction rules that are declared as ‘safe’, such as allI and impI:
lemma "∀x>0. P x"
proof safe
In any case, your new proof state is then
proof (state)
goal (1 subgoal):
1. ⋀x. 0 < x ⟹ P x
And you can proceed like this:
lemma "∀x>0. P (x :: nat)"
proof safe
fix x :: nat assume "x > 0"
show "P x"
Note that I added an annotation; I didn't know what type your P has, so I just used nat. When you fix a variable in Isar and the type is not clear from the assumptions, you will get a warning that a new free type variable was introduced, which is not what you want. When you get that warning, you should add a type annotation to the fix like I did above.
Existential quantifier
For an existential quantifier, safe will not work because the intro rule exI is not always safe due to technical reasons. The typical proof pattern for an ∃x>0. P x would be something like:
lemma "∃x>0. P (x :: nat)"
proof -
have "42 > (0 :: nat)" by simp
moreover have "P 42" sorry
ultimately show ?thesis by blast
qed
Or a little more explicitly:
lemma "∃x>0. P (x :: nat)"
proof -
have "42 > 0 ∧ P 42" sorry
thus ?thesis by (rule exI)
qed
In cases when the existential witness (i.e. the 42 in this example) does not depend on any variables that you got out of an obtain command, you can also do it more directly:
lemma "∃x>0. P (x :: nat)"
proof (intro exI conjI)
This leaves you with the goals ?x > 0 and P ?x. Note that ?x is a schematic variable for which you can plug in anything. So you can complete the proof like this:
lemma "∃x>0. P (x :: nat)"
proof (intro exI conjI)
show "42 > (0::nat)" by simp
show "P 42" sorry
qed
As I said, this does not work if your existential witness depends on some variable that you got from obtain due to technical restrictions. In that case, you have to fall back to the other solution I mentioned.
The following works in Isabelle2016-1-RC2:
lemma "∀ x>0. P x"
apply (rule allI)
In general, you can also just use apply rule, which will select the default introduction rule. Same is true for the existential quantifier.

Natural deduction: is this a sound proof?

I have attempted to solve the following, but I have no means to check it... or does Wolfram do this? I do not know if my handling of the operators (scope) is correct... do you know?
for all x: the upside-down A operator (universal quantification)
there exists an x: the backwards E operator (existential quantification)
The premises for all x (P(x) -> R(x)), for all x (P(x) v not Q(x)) and there exists an x (Q(x)) should entail: there exists an x (R(x))
proof:
The structure of your deduction is reasonable, but there are steps missing to take you from the quantified statements to a particular instance and then back to a quantified statement.
It is not correct to say that P-->Q is "equivalent" to the first premise: that misrepresents a predicate statement as a propositional statement. What you can say is that if the first premise holds for all x, then it is certainly true for one specific x. So universal instantiation of the first premise can give you P(a)-->R(a). Similarly, since the third premise tells us that there is at least one x such that Q(x), we can say: let's call one of those x's "a", and so claim Q(a).
Once you get to the point where you have proved R(a) you can then use existential generalisation to get your final conclusion.
I disagree with @MattClarke that the structure of your reasoning is reasonable. It does not adhere to the rules of natural deduction. For example, your boxed proof assumes Q and ~Q (I am using ~ for negation) and concludes P. But there is no natural deduction rule that allows you to use more than one assumption inside a box (and even if there were, and such a rule could be justified, the result of the boxed proof would not just be P as you seem to claim, but rather the implication (Q /\ ~Q) --> P, which is trivial, since there is already a natural deduction rule that allows us to deduce anything from a contradiction).
From the OP it is not really clear to me what exactly you want to prove. I am just assuming that from the three premises ALL x. (P(x) --> R(x)), ALL x. (P(x) \/ ~Q(x)), and EX x. Q(x) you want to prove EX x. R(x).
Since the formula you want to prove starts with an existential quantifier it will be obtained by exists-introduction. But first we start with the premises:
1 ALL x. (P(x) --> R(x)) premise
2 ALL x. (P(x) \/ ~Q(x)) premise
3 EX x. Q(x) premise
The rule for exists-elimination opens a box (boxes will be indicated by braces { and }) and allows us to conclude a formula that is provable under the assumption that there is a witness for the existential formula to which the rule is applied, i.e.,
4 { for an arbitrary but fixed y that is not used outside this box
5 Q(y) assumption
6 P(y) \/ ~Q(y) ALL-e 2
at this point we apply a disjunction-elimination which amounts to the case analysis whether P(y) holds or ~Q(y) holds (at least one of which has to be true since we have P(y) \/ ~Q(y)). Each case gets its own box
7 {
8 P(y) assumption
9 P(y) --> R(y) ALL-e 1
10 R(y) -->-e 9, 8
11 }
12 {
13 ~Q(y) assumption
14 bottom bottom-i 5, 13
15 R(y) bottom-e 14
16 }
17 R(y) \/-e 6, 7-11, 12-16
18 EX x. R(x) EX-i 17
19 }
20 EX x. R(x) EX-e 3, 4-19
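As a sanity check on the entailment itself (separate from the natural deduction proof above), one can also brute-force all interpretations over small finite domains. The following Python sketch, with names of my own choosing, does exactly that; it is only a finite-model check, not a proof.

from itertools import product

# Check: whenever ALL x. (P(x) --> R(x)), ALL x. (P(x) \/ ~Q(x)) and EX x. Q(x)
# hold in a finite model, EX x. R(x) holds as well.
for size in range(1, 4):
    dom = range(size)
    # P, Q, R are interpreted as tuples of booleans indexed by domain elements.
    for P, Q, R in product(product([False, True], repeat=size), repeat=3):
        prem1 = all((not P[x]) or R[x] for x in dom)
        prem2 = all(P[x] or (not Q[x]) for x in dom)
        prem3 = any(Q[x] for x in dom)
        if prem1 and prem2 and prem3:
            assert any(R[x] for x in dom)

print("The entailment holds in every model with at most 3 elements.")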

Resources