I am fairly new to Coq and am trying out sample lemmas from Huth and Ryan. The proof using natural deduction is pretty trivial, and this is what I want to prove using Coq:
assume p -> q.
assume ~q.
assume p.
q.
False.
therefore ~p.
therefore ~q -> ~p.
therefore (p -> q) -> ~q -> ~p.
I am stuck at line 3, assume p.
Can someone please tell me if there is a one-to-one mapping from natural deduction to Coq keywords?
NNPP is useless!
Theorem easy : forall p q:Prop, (p->q)->(~q->~p).
Proof. intros. intro. apply H0. apply H. exact H1. Qed.
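As for a dictionary between the two: it is not quite one-to-one, but intro/intros corresponds to "assume", apply corresponds to using an implication, and exact or assumption discharges the goal with a hypothesis. Here is the same proof again with explicit names and comments along those lines (easy' is just a renamed copy):
Theorem easy' : forall p q : Prop, (p -> q) -> (~q -> ~p).
Proof.
intros p q H H0. (* assume p -> q (H) and assume ~q (H0) *)
intro H1.        (* assume p (H1); ~p unfolds to p -> False, so the goal becomes False *)
apply H0.        (* by ~q, it suffices to prove q *)
apply H.         (* by p -> q, it suffices to prove p *)
exact H1.        (* p is exactly assumption H1 *)
Qed.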
You can start your proof like this:
Section CONTRA.
Variables P Q : Prop.
Hypothesis PimpQ : P -> Q.
Hypothesis notQ : ~Q.
Hypothesis Ptrue : P.
Theorem contra : False.
Proof.
Here is the environment at that point:
1 subgoal
P : Prop
Q : Prop
PimpQ : P -> Q
notQ : ~ Q
Ptrue : P
============================
False
You should be able to continue the proof. It will be a bit more verbose than your proof (on line 4 you just wrote q; here you will have to prove it by combining PimpQ and Ptrue). It should be fairly trivial though... :)
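For example, one way to finish it from here (a minimal sketch):
apply notQ.    (* to obtain False, it suffices to prove Q, since notQ : ~Q *)
apply PimpQ.   (* to prove Q, it suffices to prove P *)
exact Ptrue.
Qed.
End CONTRA.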
Not that difficult, actually.
Just played around, introduced a double negation, and things fell into place automatically. This is what the proof looks like (note that T1 is the converse implication, (~q -> ~p) -> (p -> q), which genuinely needs classical reasoning, hence NNPP):
Require Import Classical.

Theorem T1 : forall p q : Prop, (~q -> ~p) -> (p -> q).
Proof.
intros p q H H0.
apply NNPP.
intro H1.
apply H in H1.
contradiction.
Qed.
Ta daaaa!
So here is a grammar R and a language L; I want to prove that R generates exactly L.
R = {S → abS | ε}, L = {(ab)^n | n ≥ 0}
So I thought I would prove that L(G) ⊆ L and L(G) ⊇ L both hold.
For L(G) ⊆ L: I show by induction on the number i of derivation steps that after every derivation step u → w, in which w results from u according to the rules of R, w = v1v2 or w = v1v2S with |v2| = |v1|, v1 ∈ {a}* and v2 ∈ {b}*.
And for the base case: at i = 0, w is ε, and at i = 1, w is in {ε, abS}.
Is that right so far?
So here is a grammar R and a language L; I want to prove that R generates exactly L.
Probably what you want to do is show that the language L(R) of some grammar R is the same as some other language L specified another way (in your case, set-builder notation with a regular expression).
So I thought I would prove that L(G) ⊆ L and L(G) ⊇ L both hold.
Given the above assumption, you are correct in thinking this is the right way to proceed with the proof.
For L(G) ⊆ L: I show by induction on the number i of derivation steps that after every derivation step u → w, in which w results from u according to the rules of R, w = v1v2 or w = v1v2S with |v2| = |v1|, v1 ∈ {a}* and v2 ∈ {b}*. And for the base case: at i = 0, w is ε, and at i = 1, w is in {ε, abS}.
This is hard for me to follow. That's not to say it's wrong. Let me write it down in my own words and perhaps you or others can judge whether we are saying the same thing.
We want to show that L(R) is a subset of L. That is, any string generated by the grammar R is contained in the language L. We can prove this by mathematical induction on the number of steps in the derivation of strings generated by the grammar.

We start with the base case of one derivation step: S -> e produces the empty word, which is a string in the language L by choosing n = 0.

Now that we have established the base case, we can state the induction hypothesis: assume that for all strings derived from the grammar in a number of steps up to and including k, those strings are also in L. Now we must prove the induction step: that any string derived in k+1 steps from the grammar is also in L.

Let w be any string derived from the grammar in k+1 steps. From the grammar it is clear that the derivation of w must be S -> abS -> ababS -> ... -> abab...abS -> abab...abe = abab...ab. But this derivation is the same as the derivation of a string from the grammar in k steps, except that there was one extra application of S -> abS before the application of S -> e. By the induction hypothesis we know that the string w' derived in k steps is of the form (ab)^m for some m at least zero, and adding an extra application of S -> abS to the derivation adds ab. Because (ab)^m(ab) = (ab)^(m+1), we can choose n = m+1. So, all strings derived from the grammar in k+1 steps are also in the language, as required.
To prove that all strings in the language can be derived in the grammar, consider the following construction: to derive the string (ab)^n in the grammar, apply the production S -> abS a number of times equal to n, and the production S -> e exactly once. The first step gives an intermediate form (ab)^nS and the second step gives a closed form string (ab)^n.
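If you ever want to machine-check this argument, here is a minimal Coq sketch under my own encoding (the names gen, ab_pow, and the lemma names are all mine): the grammar becomes an inductive predicate, the language becomes an n-fold repetition function, and the two inclusions become two lemmas proved by exactly the inductions described above.
Require Import List.
Import ListNotations.

(* Terminal alphabet. *)
Inductive sym : Type := a | b.

(* The grammar R = { S -> abS | e } as an inductive predicate:
   gen w holds iff S derives the terminal string w. *)
Inductive gen : list sym -> Prop :=
| gen_eps : gen []                                 (* S -> e *)
| gen_ab : forall w, gen w -> gen (a :: b :: w).   (* S -> abS *)

(* The language L = { (ab)^n | n >= 0 }. *)
Fixpoint ab_pow (n : nat) : list sym :=
  match n with
  | 0 => []
  | S k => a :: b :: ab_pow k
  end.

(* L(R) is a subset of L: induction on the derivation, mirroring
   the induction on the number of derivation steps above. *)
Lemma gen_subset_L : forall w, gen w -> exists n, w = ab_pow n.
Proof.
intros w H.
induction H as [| w' H' IH].
- exists 0. reflexivity.
- destruct IH as [n IH]. exists (S n). simpl. rewrite IH. reflexivity.
Qed.

(* L is a subset of L(R): apply S -> abS exactly n times, then S -> e. *)
Lemma L_subset_gen : forall n, gen (ab_pow n).
Proof.
induction n as [| n IH]; simpl.
- apply gen_eps.
- apply gen_ab. exact IH.
Qed.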
I am stuck with a theorem and I think that it's unprovable.
Theorem double_negation : forall A : Prop, ~~A -> A.
Can you prove it or explain why it is unprovable?
Is it due to Gödel's incompleteness theorems?
Double negation elimination is not provable in constructive logic, which underpins Coq. Attempting to prove it, we quickly get stuck:
Theorem double_negation_elim : forall A : Prop, ~~A -> A.
Proof.
unfold not.
intros A H.
(* stuck because no way to reach A with H : (A -> False) -> False *)
Abort.
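That said, if you are willing to assume classical axioms, the standard library provides exactly this statement as NNPP, and your theorem goes through:
Require Import Classical.

Theorem double_negation : forall A : Prop, ~~A -> A.
Proof.
exact NNPP.
Qed.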
We can show that if double negation elimination were provable, then the Law of Excluded Middle would hold, that is, (forall A : Prop, ~~A -> A) -> forall A : Prop, A \/ ~A.
First we prove the intermediate result ~~(A \/ ~A):
Lemma not_not_lem: forall A: Prop, ~ ~(A \/ ~A).
Proof.
intros A H.
unfold not in H.
apply H.
right.
intro a.
destruct H.
left.
apply a.
Qed.
Therefore:
Theorem not_not_lem_implies_lem:
(forall (A : Prop) , (~~A -> A)) -> forall A : Prop, A \/ ~A.
Proof.
intros H A.
apply H.
apply not_not_lem.
Qed.
Since LEM is not provable in constructive logic, double negation elimination cannot be provable either: if it were, not_not_lem_implies_lem would hand us a proof of LEM.
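For completeness, the converse direction (LEM implies double negation elimination) is provable constructively, so the two principles are indeed equivalent:
Theorem lem_implies_dne :
(forall A : Prop, A \/ ~A) -> forall A : Prop, ~~A -> A.
Proof.
intros lem A nnA.
destruct (lem A) as [a | na].
- exact a.
- contradiction.
Qed.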
I was trying to prove the following simple theorem from an online course that excluded middle is irrefutable, but got stuck pretty much at step 1:
Theorem excluded_middle_irrefutable: forall (P:Prop), ~~(P \/ ~ P).
Proof.
intros P. unfold not. intros H.
Now I get:
1 subgoals
P : Prop
H : P \/ (P -> False) -> False
______________________________________(1/1)
False
If I apply H, then the goal would be P \/ ~P, which is excluded middle and can't be proven constructively. But other than apply, I don't know what can be done with the hypothesis P \/ (P -> False) -> False: the implication -> is primitive, and I don't know how to destruct or decompose it. And this is the only hypothesis.
My question is, how can this be proven using only primitive tactics (as characterized here, i.e. no mysterious autos)?
Thanks.
I'm not an expert on this subject, but it was recently discussed on the Coq mailing list. I'll summarize the conclusions from that thread. If you want to understand these kinds of problems more thoroughly, you should look at the double-negation translation.
The problem falls within intuitionistic propositional calculus and can thus be decided by tauto.
Theorem excluded_middle_irrefutable: forall (P:Prop), ~~(P \/ ~ P).
tauto.
Qed.
The thread also provides a more elaborate proof. I'll attempt to explain how I would have come up with this proof. Note that it's usually easier for me to deal with the programming language interpretation of lemmas, so that's what I'll do:
Theorem excluded_middle_irrefutable: forall (P:Prop), ~~(P \/ ~ P).
unfold not.
intros P f.
We are asked to write a function that takes the function f and produces a value of type False. The only way to get to False at this point is to invoke the function f.
apply f.
Consequently, we are asked to provide the argument to the function f. We have two choices: either pass P or P -> False. I don't see a way to construct a P, so I'm choosing the second option.
right.
intro p.
We are back at square one, except that we now have a p to work with!
So we apply f because that's the only thing we can do.
apply f.
And again, we are asked to provide the argument to f. This is easy now though, because we have a p to work with.
left.
apply p.
Qed.
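Since we have been reading the lemma as a program, it is worth seeing the finished program as a single term. The following transcription (emi is my name for it) should typecheck and match what Print excluded_middle_irrefutable shows, up to names:
Definition emi : forall P : Prop, ~~(P \/ ~P) :=
fun (P : Prop) (f : ~(P \/ ~P)) => f (or_intror (fun p : P => f (or_introl p))).
The two applications of f are the two apply f steps, and or_intror/or_introl correspond to right and left.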
The thread also mentions a proof that is based on some easier lemmas. The first lemma is ~(P /\ ~P).
Lemma lma (P:Prop) : ~(P /\ ~P).
unfold not.
intros H.
destruct H as [p H].
apply H.
apply p.
Qed.
The second lemma is ~(P \/ Q) -> ~P /\ ~Q:
Lemma lma' (P Q:Prop) : ~(P \/ Q) -> ~P /\ ~Q.
unfold not.
intros H.
constructor.
- intro p.
apply H.
left.
apply p.
- intro q.
apply H.
right.
apply q.
Qed.
These lemmas suffice to show the theorem:
Theorem excluded_middle_irrefutable: forall (P:Prop), ~~(P \/ ~ P).
intros P H.
apply lma' in H.
apply lma in H.
apply H.
Qed.
How does one prove (R -> P) in Coq? I'm a beginner and don't know much about the tool. This is what I wrote:
Require Import Classical.
Theorem intro_neg : forall P Q : Prop,(P -> Q /\ ~Q) -> ~P.
Proof.
intros P Q H.
intro HP.
apply H in HP.
inversion HP.
apply H1.
assumption.
Qed.
Section Question1.
Variables P Q R: Prop.
Hypotheses H1 : R -> P \/ Q.
Hypotheses H2 : R -> ~Q.
Theorem trans : R -> P.
Proof.
intro HR.
apply NNPP.
apply intro_neg with (Q := Q).
intro HNP.
I can only get to this point.
The goal at this point is:
1 subgoals
P : Prop
Q : Prop
R : Prop
H1 : R -> P \/ Q
H2 : R -> ~ Q
HR : R
HNP : ~ P
______________________________________(1/1)
Q /\ ~ Q
You can use tauto to prove it automatically:
Section Question1.
Variables P Q R: Prop.
Hypotheses H1 : R -> P \/ Q.
Hypotheses H2 : R -> ~Q.
Theorem trans : R -> P.
Proof.
intro HR.
tauto.
Qed.
If you want to prove it manually: H1 says that given R, either P or Q is true. So if you destruct H1, you get three goals: one to prove the premise (R), one to prove the goal (P) using the left disjunct (P), and one to prove the goal (P) using the right disjunct (Q).
Theorem trans' : R -> P.
Proof.
intro HR.
destruct H1.
- (* Prove the premise, R *)
assumption.
- (* Prove that P is true given that P is true *)
assumption.
- (* Prove that P is true given that Q is false *)
contradiction H2.
Qed.
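And if you would rather finish your original NNPP attempt, the remaining goal Q /\ ~Q can be closed like this (a sketch picking up right after your intro HNP):
split.
- destruct (H1 HR) as [HP | HQ]. (* H1 HR : P \/ Q *)
  + contradiction. (* HP : P contradicts HNP : ~P *)
  + assumption.    (* HQ : Q *)
- exact (H2 HR).   (* H2 HR : ~Q *)
Qed.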
End Question1.
Unless I'm mistaken, there is no proof for
∀ {A : Set} → ¬ (¬ A) → A
in Agda.
This means you cannot use proofs by contradiction.
Many maths textbooks use those kinds of proofs, so I was wondering: is it always possible to find an alternative constructive proof? Could you write, e.g., an algebra textbook using only constructive logic?
In case the answer is no: does this mean constructive logic is in some sense less powerful than classical logic?
Indeed, double negation elimination (and the other statements which are logically equivalent to it) cannot be proven in Agda.
-- Law of excluded middle
lem : ∀ {p} {P : Set p} → P ⊎ ¬ P
-- Double negation elimination
dne : ∀ {p} {P : Set p} → ¬ ¬ P → P
-- Peirce's law
peirce : ∀ {p q} {P : Set p} {Q : Set q} →
((P → Q) → P) → P
(If you want, you can show that these are indeed logically equivalent; it's an interesting exercise.) But this is a consequence we cannot avoid: one of the important things about constructive logic is that proofs have computational content. However, assuming the law of excluded middle basically kills any computational content.
Consider for example the following proposition:
end-state? : Turing → Set
end-state? t = ...
simulate_for_steps : Turing → ℕ → Turing
simulate t for n steps = ...
Terminates : Turing → Set
Terminates machine = Σ ℕ λ n →
end-state? (simulate machine for n steps)
So, a Turing machine terminates if there exists a number n such that after n steps, the machine is in an end state. Sounds reasonable, right? What happens when we add excluded middle into the mix?
terminates? : Turing → Bool
terminates? t with lem {P = Terminates t}
... | inj₁ _ = true
... | inj₂ _ = false
If we have excluded middle, then any proposition is decidable. That also means we can decide whether a Turing machine terminates or not, i.e. we have solved the halting problem. So we can have either computability or classical logic, but not both! While excluded middle and other equivalent statements help us with proofs, they come at the cost of the computational meaning of the program.
So yes, in this sense, constructive logic is less powerful than classical logic. However, we can simulate classical logic via the double-negation translation. Notice that the doubly negated versions of the previous principles hold in Agda:
¬¬dne : ∀ {p} {P : Set p} → ¬ ¬ (¬ ¬ P → P)
¬¬dne f = f λ g → ⊥-elim (g (f ∘ const))
¬¬lem : ∀ {p} {P : Set p} → ¬ ¬ (P ⊎ ¬ P)
¬¬lem f = f (inj₂ (f ∘ inj₁))
If we were in classical logic, you would then use double negation elimination to get the original statements. There's even a monad dedicated to this transformation: take a look at the double-negation monad in the Relation.Nullary.Negation module of the standard library.
What this means is that we can use classical logic selectively. From a certain point of view, constructive logic is more powerful than classical logic precisely because of this: in classical logic you cannot opt out of these statements, they are just there. Constructive logic, on the other hand, doesn't force you to use them, but if you need them, you can "enable" them in this way.
Another statement which cannot be proven in Agda is function extensionality. But unlike the classical statements above, this one is desirable in constructive logics.
ext : ∀ {a b} {A : Set a} {B : A → Set b}
(f g : ∀ x → B x) → (∀ x → f x ≡ g x) → f ≡ g
However, this doesn't mean that it doesn't hold in constructive logic; it's just a property of the theory Agda is based on (mostly intensional type theory with axiom K). There are other flavors of type theory where this statement holds, for example the usual formulations of extensional type theory, or Conor McBride and Thorsten Altenkirch's observational type theory.