So I'm trying to perform a simple proof using cardinalities. It looks like:
⟦(A::nat set) ∩ B = {}⟧ ⟹ (card (A ∪ B) = card A + card B)
Which seems to make sense, but for some reason blast hangs, the other provers fail to apply, and sledgehammer times out. Is there a gap in what I think I know about cardinalities? If not, how can I prove this lemma?
Thanks in advance!
I believe that the lemma you are trying to prove does not appropriately consider the case of infinite sets.
In Isabelle/HOL, the cardinality of an infinite set is defined to be zero, as we can see from the following lemma:
lemma "¬(finite A) ⟹ card A = 0"
by simp
Consider an infinite set A and a one-element set B, chosen so that the intersection A ∩ B is empty.
We are left with:
card (A ∪ B) = 0, since the union of the two sets is also infinite
card A = 0
card B = 1
So we can see that in this case, the lemma does not hold.
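The same reasoning can be replayed in Isabelle (a minimal sketch of my own; both facts follow by simp from the convention shown above):

lemma
  assumes "¬ finite (A :: nat set)"
  shows "card A = 0" and "card (A ∪ B) = 0"
  using assms by simp_all

Together with card B = 1 for a one-element B, the left-hand side of the original equation is 0 while the right-hand side is 1.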
The lemma can be corrected by asserting both sets are finite:
lemma
"⟦finite A; finite B; ((A::nat set) ∩ B) = {}⟧ ⟹ (card (A ∪ B) = card A + card B)"
by (simp add: card_Un_disjoint)
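As a quick sanity check on concrete (hence finite) sets, the value command can evaluate an instance of the corrected statement (the particular sets here are my own example):

value "card ({1, 2} ∪ {3, 4::nat})" (* evaluates to 4 = card {1, 2} + card {3, 4} *)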
This is essentially card_Un_disjoint itself, which is what the proof uses:
lemma card_Un_disjoint: "finite A ⟹ finite B ⟹ A ∩ B = {} ⟹ card (A ∪ B) = card A + card B"
using card_Un_Int [of A B] by simp
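For reference, card_Un_Int relates the cardinalities of the union and the intersection; as I recall it from the Isabelle/HOL library, the statement is essentially:

lemma card_Un_Int: "finite A ⟹ finite B ⟹ card A + card B = card (A ∪ B) + card (A ∩ B)"

With A ∩ B = {}, the card (A ∩ B) term vanishes, which yields exactly card_Un_disjoint.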
So here is a grammar R and a language L; I want to prove that R generates L.
R = {S → abS | ε}, L = {(ab)^n | n ≥ 0}
So I thought I would prove that L(G) ⊆ L and L(G) ⊇ L both hold.
For L(G) ⊆ L: I show by induction on the number i of derivation steps that after every derivation step u → w, in which w results from u according to the rules of R, we have w = v1v2 or w = v1v2w with |v2| = |v1| and v1 ∈ {a}* and v2 ∈ {b}*.
And for the base of the induction: at i = 0 we get that w is ε, and at i = 1 w is in {ε, abS}.
Is that right so far?
So here is a grammar R and a language L; I want to prove that R generates L.
Probably what you want to do is show that the language L(R) of some grammar R is the same as some other language L specified another way (in your case, set-builder notation with a regular expression).
So I thought I would prove that L(G) ⊆ L and L(G) ⊇ L both hold.
Given the above assumption, you are correct in thinking this is the right way to proceed with the proof.
For L(G) ⊆ L: I show by induction on the number i of derivation steps that after every derivation step u → w, in which w results from u according to the rules of R, we have w = v1v2 or w = v1v2w with |v2| = |v1| and v1 ∈ {a}* and v2 ∈ {b}*. And for the base of the induction: at i = 0 we get that w is ε, and at i = 1 w is in {ε, abS}.
This is hard for me to follow. That's not to say it's wrong. Let me write it down in my own words and perhaps you or others can judge whether we are saying the same thing.
We want to show that L(R) is a subset of L. That is, any string generated by the grammar R is contained in the language L. We can prove this by mathematical induction on the number of steps in the derivation of strings generated by the grammar.

Base case: we start with one derivation step. S -> e produces the empty word, which is a string in the language L by choosing n = 0.

Induction hypothesis: assume that for all strings derived from the grammar in a number of steps up to and including k, those strings are also in L.

Induction step: we must prove that any string derived in k+1 steps from the grammar is also in L. Let w be any string derived from the grammar in k+1 steps. From the grammar it is clear that the derivation of w must be S -> abS -> ababS -> ... -> abab...abS -> abab...abe = abab...ab. But this derivation is the same as the derivation of a string from the grammar in k steps, except that there was one extra application of S -> abS before the application of S -> e. By the induction hypothesis we know that the string w' derived in k steps is of the form (ab)^m for some m at least zero, and adding an extra application of S -> abS to the derivation adds ab. Because (ab)^m(ab) = (ab)^(m+1) we can choose n = m+1.

So, all strings derived from the grammar in k+1 steps are also in the language, as required.
To prove that all strings in the language can be derived in the grammar, consider the following construction: to derive the string (ab)^n in the grammar, apply the production S -> abS a number of times equal to n, and the production S -> e exactly once. The first step gives an intermediate form (ab)^nS and the second step gives a closed form string (ab)^n.
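For concreteness, with n = 2 the construction gives the derivation S -> abS -> ababS -> abab, which is (ab)^2.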
I would like to prove
P ==> P
by case distinction, to understand the latter.
lemma "P ⟹ P"
proof (cases P)
goal (2 subgoals):
1. P ⟹ P ⟹ P
2. P ⟹ ¬ P ⟹ P
I am not quite sure if I want these. I wanted to assume that P is true and then show P is true by assumption, then assume not P and prove not P by assumption. Like in a truth table.
The ¬ P in the second subgoal seems strange; is that provable at all?
assume P then show P by assumption
Successful attempt to solve goal by exported rule:
(P) ⟹ P
next
goal (1 subgoal):
1. P ⟹ ¬ P ⟹ P
assume P assume "¬P" then show "¬P" by (rule HOL.FalseE)
This went completely wrong.
How can I take P and not P as the cases?
P and not P already are your cases. If you write "cases P", Isabelle copies the current goal and adds P to the assumptions of the first new subgoal and ¬ P to the assumptions of the second. The right-hand side of the goals is not affected by the cases method when it is used in this way.
In your case, you don't have to prove the ¬ P in the second subgoal; you may use it as an additional assumption introduced by the case distinction.
Obviously you can't prove P from ¬ P alone in the second subgoal. Luckily, the global assumption P is still there, so both cases still prove P from P, which is as trivial as it already is without the case distinction ;).
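Spelled out as a complete proof, mirroring your first attempt in both cases (a minimal sketch):

lemma "P ⟹ P"
proof (cases P)
  assume P then show P by assumption
next
  (* the case assumption ¬ P is simply left unused here *)
  assume P then show P by assumption
qed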
If you want to prove something by having Isabelle insert the possible values of a variable directly, you could try:
lemma "P ⟹ P"
proof (induct P)
goal (2 subgoals):
1. True ⟹ True
2. False ⟹ False
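Both of these subgoals are closed by simp, so the whole thing can be finished with something like:

lemma "P ⟹ P"
  by (induct P) simp_all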
The textbook I'm learning from (Lafore) presents Red-Black Trees first and does not include any pseudo-code, although the associated algorithms as presented seem fairly complex, with many unique cases.
Next he presents 2-3-4 Trees which seem to me much simpler to understand, and I would guess, to implement. He includes some actual Java code which is very clear. He seems to imply that 2-3-4 is easier to implement, and I would agree based on what I've seen so far.
Wikipedia, however, says the opposite, which I think may be incorrect:
http://en.wikipedia.org/wiki/2-3-4_tree
2-3-4 trees are an isometry of red-black trees, meaning that they are
equivalent data structures. In other words, for every 2-3-4 tree,
there exists at least one red-black tree with data elements in the
same order. Moreover, insertion and deletion operations on 2-3-4 trees
that cause node expansions, splits and merges are equivalent to the
color-flipping and rotations in red-black trees. Introductions to
red-black trees usually introduce 2-3-4 trees first, because they are
conceptually simpler. 2-3-4 trees, however, can be difficult to
implement in most programming languages because of the large number of
special cases involved in operations on the tree. Red-black trees are
simpler to implement, so tend to be used instead.
Specifically, the part about the "large number of special cases" seems quite backward to me (I think it is RB trees which have the large number of special cases, not the 2-3-4 trees). But I'm still learning (and honestly have not quite fully wrapped my head around Red-Black Trees), so I'd love to hear other opinions.
As a sidenote... while I do agree with most of what Lafore says, I think it's interesting he presented them in the opposite order compared to what Wikipedia says is common (2-3-4 before RB). I do think 2-3-4 first would make more sense since the RB is so much harder to conceptualize. Perhaps he chose that order because the RB was more closely related to BST's which he had just finished in the previous chapter.
the part about the "large number of special cases" seems quite backward to me (I think it is RB trees which have the large number of special cases, not the 2-3-4 trees)
RB trees can be implemented in a dozen or so lines, if you have pattern matching in your language:
data Color = R | B
data Tree a = E | T Color (Tree a) a (Tree a)

-- rebalance a black node that has a red child with a red child;
-- all four arrangements are rewritten to the same balanced shape
balance :: Color -> Tree a -> a -> Tree a -> Tree a
balance B (T R (T R a x b) y c) z d = T R (T B a x b) y (T B c z d)
balance B (T R a x (T R b y c)) z d = T R (T B a x b) y (T B c z d)
balance B a x (T R (T R b y c) z d) = T R (T B a x b) y (T B c z d)
balance B a x (T R b y (T R c z d)) = T R (T B a x b) y (T B c z d)
balance col a x b = T col a x b

insert :: Ord a => a -> Tree a -> Tree a
insert x s = T B a y b              -- recolor the root black
  where
    ins E = T R E x E               -- new elements start out red
    ins s@(T col a y b)
      | x < y     = balance col (ins a) y b
      | x > y     = balance col a y (ins b)
      | otherwise = s               -- x is already in the tree
    T _ a y b = ins s
This famous definition is from Okasaki's paper "Red-Black Trees in a Functional Setting" (Journal of Functional Programming, 1999).
I need help proving the following:
(a ∨ b) ∨ c = a ∨ (b ∨ c)
I don't want the answer... just a hint that will help me understand the process of proving this.
Thank you.
Why not just prove it by checking all possible values of a, b and c? There are only 2^3 = 8 different cases.
Here's a start, for a=T, b=F, c=T
(a ∨ b) ∨ c = a ∨ (b ∨ c)
(T ∨ F) ∨ T = T ∨ (F ∨ T)
T ∨ T = T ∨ T
T = T
(However, this isn't really a programming question...)
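If you do want to write out all eight cases, the complete table looks like this (T/F abbreviating True/False):

a b c | (a ∨ b) ∨ c | a ∨ (b ∨ c)
T T T |      T      |      T
T T F |      T      |      T
T F T |      T      |      T
T F F |      T      |      T
F T T |      T      |      T
F T F |      T      |      T
F F T |      T      |      T
F F F |      F      |      F

The two result columns agree in every row, which is exactly what the identity claims.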
What is your axiom set?
Not knowing the set, you could build a truth table.
In mathematical set theory we have
A={1,2,3}
B={4,5,6}
A ∪ B = B ∪ A = {1,2,3,4,5,6} = {6,5,2,3,4,1} // order does not matter
But in theory of computation we get
a u b is either a or b but not both
also for a* u b* we get aaa or bbb but not aaabbb or bbbaaa, since order does not matter in union.
why is that?
Thanks
Rahman
why?
No. In formal language theory, there is a correspondence between regular expressions and regular sets over an alphabet Σ. The function L maps a regular expression u to the corresponding regular set L(u); conversely, every regular set A corresponds to a regular expression in L⁻¹(A):
L(∅) = ∅
L(λ) = {λ}
L(a) = {a} (for all a ∈ Σ)
L(uv) = L(u)L(v) = {xy ∈ Σ* : x ∈ L(u) ∧ y ∈ L(v)}
L(u|v) = L(u) ∪ L(v) = {x ∈ Σ* : x ∈ L(u) ∨ x ∈ L(v)}
L(u*) = ∪[i ∈ ℕ] L(u)^i = ∪[i ∈ ℕ] {x1x2…xi ∈ Σ* : x1, x2, …, xi ∈ L(u)}
The union of regular expressions corresponds to the union of regular sets, which is the familiar union operation from set theory. A regular expression u matches a string x iff x is a member of the corresponding set L(u). Therefore, u|v matches x iff x is a member of L(u) ∪ L(v).
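Applied to the example from the question, with the alphabet {a, b}:

L(a*|b*) = L(a*) ∪ L(b*) = {a^n : n ≥ 0} ∪ {b^n : n ≥ 0}

So a*|b* matches aaa (a member of L(a*)) and bbb (a member of L(b*)), but it does not match aaabbb, because that string is in neither set. aaabbb is matched by the different expression a*b*.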
This probably still belongs on Math Overflow, and you haven't really supplied enough context for a definitive answer, so I'm going to make some assumptions.
Union Types in most languages might give you an expression like:
type C = B union A
So type C is the type/set of all values that belong to either type B or type A. For example, if B were the booleans and A the integers, then both true and 3 would be values of type C, and any value x of type B is also a value of type C.
And this is indeed the case for many languages. However, Stack Overflow targets the more concrete world of programming. Math Overflow will have more theorists who will be better able to answer your question.