P is undecidable and not semidecidable, Q is undecidable and semidecidable and P ⊂ Q - formal-languages

My problem: Define two sets P and Q of words (that is, two problems) such that:
P is undecidable and not semidecidable, Q is undecidable and semidecidable and P ⊂ Q

One example of such sets:
Define Q as the set of Turing machines that halt on the empty input. Define P as the set of Turing machines that halt on every input.
Clearly P ⊂ Q and P is undecidable and not semidecidable, but Q is undecidable and semidecidable.
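To see why Q is semidecidable, note that a recognizer only has to accept when the machine halts on the empty input; it may loop otherwise. Below is a minimal sketch in Python standing in for a Turing-machine simulator; the dict-based encoding of the machine is an illustrative assumption of mine, not a standard one.
def semidecide_halts_on_empty(delta, start, halt_states, blank="_"):
    """Semidecider for Q: accepts iff the encoded machine halts on empty input.
    delta maps (state, symbol) to (new_state, written_symbol, move), move in {-1, +1}.
    If the machine never halts, the loop below runs forever, so this procedure
    recognises Q without deciding it."""
    tape, head, state = {}, 0, start
    while state not in halt_states:
        symbol = tape.get(head, blank)
        state, written, move = delta[(state, symbol)]
        tape[head] = written
        head += move
    return True  # reached only if the simulation halted

# Example: a one-rule machine that halts immediately on the blank symbol.
example = {("s", "_"): ("h", "_", 1)}
print(semidecide_halts_on_empty(example, "s", {"h"}))  # prints True
Intuitively, no analogous procedure works for P: accepting a machine would require evidence about its behaviour on infinitely many inputs, and indeed P is not even semidecidable.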

Related

Simple Cardinality Proof

So I'm trying to perform a simple proof using cardinalities. It looks like:
⟦(A::nat set) ∩ B = {}⟧ ⟹ (card (A ∪ B) = card A + card B)
Which seems to make sense, but for some reason blast hangs, the rest of the provers fail to apply, and sledgehammer times out. Is there a gap in what I think I know about cardinalities? If not, how can I prove this lemma?
Thanks in advance!
I believe that the lemma you are trying to prove does not appropriately consider the case of infinite sets.
In Isabelle/HOL, the cardinality of an infinite set is defined to be zero, as we can see from the following lemma:
lemma "¬(finite A) ⟹ card A = 0"
by simp
Consider the case of an infinite set A and a one-element set B, and assume that the intersection A ∩ B is empty.
We are left with:
card (A ∪ B) = 0 as their union will also be infinite.
card A = 0
card B = 1
So we can see that in this case, the lemma does not hold.
The lemma can be corrected by additionally assuming that both sets are finite:
lemma
"⟦finite A; finite B; ((A::nat set) ∩ B) = {}⟧ ⟹ (card (A ∪ B) = card A + card B)"
by (simp add: card_Un_disjoint)
This is essentially the same as the library lemma card_Un_disjoint used in the proof:
lemma card_Un_disjoint: "finite A ⟹ finite B ⟹ A ∩ B = {} ⟹ card (A ∪ B) = card A + card B"
using card_Un_Int [of A B] by simp
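As a quick plausibility check outside Isabelle, here is a Python sketch of my own; it only covers the finite case that the corrected lemma talks about, since the infinite counterexample above cannot be exercised this way.
# Disjoint finite sets: the cardinality of the union is the sum of the cardinalities.
A = {1, 2, 3}
B = {10, 20}
assert A & B == set()
assert len(A | B) == len(A) + len(B)
print(len(A | B), len(A) + len(B))  # prints 5 5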

Proving that a grammar generates a language and vice versa

So here we have a grammar R and a language L, and I want to prove that R generates L.
R = {S → abS | ε}, L = {(ab)^n | n ≥ 0}
So I thought I would prove that L(G) ⊆ L and L(G) ⊇ L both hold.
For L(G) ⊆ L: I show by induction on the number i of derivation steps that after every derivation step u → w, by which w results from u according to the rules of R, w = v1v2 or w = v1v2S with |v2| = |v1|, v1 ∈ {a}* and v2 ∈ {b}*.
And for the induction start: at i = 0 we get that w is ε, and at i = 1, w is in {ε, abS}.
Is that right so far?
So here we have a grammar R and a language L, and I want to prove that R generates L.
Probably what you want to do is show that the language L(R) of some grammar R is the same as some other language L specified another way (in your case, set-builder notation with a regular expression).
So I thought I would prove that L(G) ⊆ L and L(G) ⊇ L both hold.
Given the above assumption, you are correct in thinking this is the right way to proceed with the proof.
For L(G) ⊆ L: I show by induction on the number i of derivation steps that after every derivation step u → w, by which w results from u according to the rules of R, w = v1v2 or w = v1v2S with |v2| = |v1|, v1 ∈ {a}* and v2 ∈ {b}*. And for the induction start: at i = 0 we get that w is ε, and at i = 1, w is in {ε, abS}.
This is hard for me to follow. That's not to say it's wrong. Let me write it down in my own words and perhaps you or others can judge whether we are saying the same thing.
We want to show that L(R) is a subset of L. That is, any string generated by the grammar R is contained in the language L. We can prove this by mathematical induction on the number of steps in the derivation of strings generated by the grammar.
We start with the base case of one derivation step: S -> e produces the empty word, which is a string in the language L by choosing n = 0. Now that we have established the base case, we can state the induction hypothesis: assume that for all strings derived from the grammar in a number of steps up to and including k, those strings are also in L.
Now we must prove the induction step: that any string derived in k+1 steps from the grammar is also in L. Let w be any string derived from the grammar in k+1 steps. From the grammar it is clear that the derivation of w must be S -> abS -> ababS -> ... -> abab...abS -> abab...abe = abab...ab. But this derivation is the same as the derivation of a string from the grammar in k steps, except that there was one extra application of S -> abS before the application of S -> e. By the induction hypothesis we know that the string w' derived in k steps is of the form (ab)^m for some m at least zero, and adding an extra application of S -> abS to the derivation adds ab. Because (ab)^m(ab) = (ab)^(m+1) we can choose n = m+1. So, all strings derived from the grammar in k+1 steps are also in the language, as required.
To prove that all strings in the language can be derived in the grammar, consider the following construction: to derive the string (ab)^n in the grammar, apply the production S -> abS exactly n times, then apply the production S -> e once. The first n steps give the sentential form (ab)^nS and the final step gives the terminal string (ab)^n.
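A small sanity check of both directions, not a proof: the Python sketch below generates derivations of the grammar and compares them against the regular expression (ab)*, which describes L. The helper name derive is my own.
import re

def derive(n):
    """Apply S -> abS exactly n times, then S -> epsilon once."""
    sentential = "S"
    for _ in range(n):
        sentential = sentential.replace("S", "abS")  # one application of S -> abS
    return sentential.replace("S", "")               # the final S -> epsilon step

in_L = re.compile(r"(ab)*")  # L = { (ab)^n | n >= 0 }
for n in range(5):
    w = derive(n)
    assert in_L.fullmatch(w) is not None  # derived strings lie in L
    assert w == "ab" * n                  # and (ab)^n is derivable for each n
    print(n, repr(w))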

Case distinction for propositional logic

I would like to prove
P ==> P
by case distinction, to understand the latter.
lemma "P ⟹ P"
proof (cases P)
goal (2 subgoals):
1. P ⟹ P ⟹ P
2. P ⟹ ¬ P ⟹ P
I am not quite sure if I want these. I wanted to assume that P is true and then show P is true by assumption, then assume not P and prove not P by assumption. Like in a truth table.
The not P in the second subgoal seems strange; is that provable at all?
assume P then show P by assumption
Successful attempt to solve goal by exported rule:
(P) ⟹ P
next
goal (1 subgoal):
1. P ⟹ ¬ P ⟹ P
assume P assume "¬P" then show "¬P" by (rule HOL.FalseE)
This went completely wrong.
How can I take P and not P as the cases?
P and not P already are your cases. If you write "cases P", Isabelle copies the current goal and adds P to the assumptions of the first new subgoal and ¬ P to the assumptions of the second. The right-hand side of the goals is not affected by the cases method when it is used in this way.
In your case, you don't have to prove the ¬ P in the second subgoal; rather, you may use it as an additional assumption introduced by the case distinction.
Obviously you can't prove P from ¬ P alone in the second subgoal. Luckily, the global assumption P is still there, so both cases still prove P from P, which is as trivial as it already was without the case distinction ;).
If you want to prove something by having Isabelle insert the possible values of a variable directly, you could try:
lemma "P ⟹ P"
proof (induct P)
goal (2 subgoals):
1. True ⟹ True
2. False ⟹ False
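Outside Isabelle, this is just the two-row truth table the question alluded to; a throwaway Python check of the same case split:
# Evaluate P --> P for both truth values of P; both rows come out True,
# mirroring the subgoals True ==> True and False ==> False.
for P in (True, False):
    print(P, (not P) or P)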

Why is P ⊆ co-NP?

I've seen several places that have simply stated that it's known that P is a subset of the intersection of NP and co-NP. Proofs that show that P is a subset of NP are not hard to find. So to show that it's a subset of the intersection, all that's left to be done is show that P is a subset of co-NP. What might a proof of this be like? Thank you much!
The class P is closed under complementation: if L is a language in P, then the complement of L is also in P. You can see this by taking any polynomial-time decider for L and switching the accept and reject states; this new machine now decides the complement of L and does so in polynomial time.
A language L is in co-NP iff its complement is in NP. So consider any language L ∈ P. The complement of L is also in P, so the complement of L is therefore in NP (because P ⊆ NP). Therefore, L is in co-NP. Consequently, P ⊆ co-NP.
Hope this helps!
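The accept/reject-swapping argument can be sketched concretely; here is a toy Python illustration, where "binary strings ending in 0" is just a stand-in I chose for an arbitrary language in P.
def decider(x: str) -> bool:
    """A toy polynomial-time decider for a language L in P
    (here: binary strings ending in 0, standing in for any L in P)."""
    return x.endswith("0")

def complement_decider(x: str) -> bool:
    """The same machine with accept and reject swapped: it decides the
    complement of L within the same time bound, so the complement is in P too."""
    return not decider(x)

print(decider("110"), complement_decider("110"))  # prints True False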
Think of it this way. Consider the class co-P. Since P is closed under complement, P = co-P.
It should also be clear that co-P is a subset of co-NP because P is contained in NP. Since P = co-P, it follows that P is contained in co-NP.

Proving (p->q)->(~q->~p) using Coq Proof Assistant

I am fairly new to Coq and am trying out sample lemmas from Huth and Ryan. The proof using natural deduction is pretty trivial, and this is what I want to prove using Coq.
assume p -> q.
assume ~q.
assume p.
q.
False.
therefore ~p.
therefore ~q -> ~p.
therefore (p -> q) => ~q => ~p.
I am stuck at line 3, assume p.
Can someone please tell me if there is a one-to-one mapping from natural deduction to Coq keywords?
NNPP is not needed here!
Theorem easy : forall p q:Prop, (p->q)->(~q->~p).
Proof. intros. intro. apply H0. apply H. exact H1. Qed.
You can start your proof like this:
Section CONTRA.
Variables P Q : Prop.
Hypothesis PimpQ : P -> Q.
Hypothesis notQ : ~Q.
Hypothesis Ptrue : P.
Theorem contra : False.
Proof.
Here is the environment at that point:
1 subgoal
P : Prop
Q : Prop
PimpQ : P -> Q
notQ : ~ Q
Ptrue : P
============================
False
You should be able to continue the proof. It will be a bit more verbose than your proof (on line 4 you just wrote q; here you will have to prove it by combining PimpQ and Ptrue). It should be fairly trivial though... :)
Not that difficult, actually.
I just played around, introduced a double negation, and things fell into place automatically. This is what the proof looks like.
Require Import Classical.   (* provides NNPP *)
Theorem T1 : forall p q : Prop, (~q -> ~p) -> (p -> q).
Proof.
intros.
apply NNPP.
intro.
apply H in H1.
contradiction.
Qed.
Ta daaaa!
