I have been thinking about this logic question for a long time. It looks easy but turns out to be very hard. I don't know how to prove it using equivalences.
(A->(B->C))->((A->(C->D))->(A->(B->D)))
Use the definition of A->B (that is, ~A v B) and De Morgan's Law to come up with an equivalent expression built only from ands, ors, and negated variables. From there, repeatedly apply the fact that A v ~A is true to simplify the expression to True. (This may require the distribution laws A v (B ^ C) = (A v B) ^ (A v C) and A ^ (B v C) = (A ^ B) v (A ^ C); the first of these will be of more use here.)
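If you want to sanity-check that the formula really is a tautology before grinding through the equivalences, a brute-force truth table is enough; here is a minimal sketch in Python (my own check, not part of the equivalence proof):
from itertools import product

def implies(p, q):
    # material implication: p -> q is (not p) or q
    return (not p) or q

# check (A->(B->C)) -> ((A->(C->D)) -> (A->(B->D))) under all 16 assignments
for A, B, C, D in product([True, False], repeat=4):
    assert implies(implies(A, implies(B, C)),
                   implies(implies(A, implies(C, D)),
                           implies(A, implies(B, D)))), (A, B, C, D)
print("tautology: holds in all 16 cases")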
So I'm trying to perform a simple proof using cardinalities. It looks like:
⟦(A::nat set) ∩ B = {}⟧ ⟹ (card (A ∪ B) = card A + card B)
This seems to make sense, but for some reason blast hangs, the rest of the provers fail to apply, and sledgehammer times out. Is there a gap in what I think I know about cardinalities? If not, how can I prove this lemma?
Thanks in advance!
I believe that the lemma you are trying to prove does not appropriately consider the case of infinite sets.
In Isabelle/HOL, the cardinality of an infinite set is represented by zero, as we can see from the following lemma.
lemma "¬(finite A) ⟹ card A = 0"
by simp
Consider the case of an infinite set A and a one-element set B, and assume that their intersection A ∩ B is empty.
We are left with:
card (A ∪ B) = 0 as their union will also be infinite.
card A = 0
card B = 1
So we can see that in this case, the lemma does not hold.
The lemma can be corrected by asserting both sets are finite:
lemma
"⟦finite A; finite B; ((A::nat set) ∩ B) = {}⟧ ⟹ (card (A ∪ B) = card A + card B)"
by (simp add: card_Un_disjoint)
This is essentially the same as the lemma card_Un_disjoint used in the proof:
lemma card_Un_disjoint: "finite A ⟹ finite B ⟹ A ∩ B = {} ⟹ card (A ∪ B) = card A + card B"
using card_Un_Int [of A B] by simp
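For intuition, the finite version of the identity is easy to check concretely; here is a quick Python sanity check (my own illustration, unrelated to the Isabelle proof itself):
# disjoint finite sets: |A ∪ B| = |A| + |B|
A = {1, 2, 3}
B = {10, 20}
assert A & B == set()                    # the disjointness assumption
assert len(A | B) == len(A) + len(B)     # card_Un_disjoint, finitely

# without disjointness the overlap is counted twice and the identity fails
C = {3, 4}
assert len(A | C) != len(A) + len(C)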
I have some problems understanding the following rules for computing the FIRST sets of an LL(1) parser:
b) Else X1 is a nonterminal, so add First(X1) - ε to First(u).
a. If X1 is a nullable nonterminal, i.e., X1 =>* ε, add First(X2) - ε to First(u).
Furthermore, if X2 can also go to ε, then add First(X3) - ε and so on, through all Xn until the first nonnullable symbol is encountered.
b. If X1X2...Xn =>* ε, add ε to the first set.
How come, according to b), if X1 is a nonterminal, ε can't be added to First(u)? So if I have
S-> A / a
A-> b / ε
F(A) = {b,ε}
F(S) = {b,ε,a}
is that not correct? Also, the little points a and b are confusing.
All it says is which terminals you can expect in a sentential form when you replace S by AB in a leftmost derivation. So, if A derives ε, then in the leftmost derivation you can replace A by ε. Now you depend on B, and so on. Consider this sample grammar:
S -> AB
A -> ε
B -> h
So, if there is a string consisting of just one character/terminal "h", and you verify whether it is valid by checking whether some leftmost derivation in the above grammar derives it, then you can safely replace S by AB, because A will derive ε and B will derive h.
Note that the language recognized by the above grammar cannot contain the empty string ε. For ε to be in the language, B would also have to derive ε; then both non-terminals A and B derive ε, and therefore S derives ε.
That is, if there is some production S->ABCD and all the non-terminals A, B, C and D derive ε, only then can S also derive ε, and therefore ε will be in FIRST(S).
The FIRST sets given by you are correct. I think you are confused because the production S->A has only the single nonterminal A on its right-hand side, and this A derives ε. If you applied only rule b), you would get FIRST(S) = (FIRST(A) - ε) ∪ {a} = {b, a}, which is incorrect. Since the right-hand side consists of that one nullable nonterminal, there is the derivation S -> A -> ε, which means FIRST(S) also contains ε, i.e. S can derive the null string ε.
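If it helps to see the rules applied mechanically, here is a small Python sketch that iterates the rules above to a fixed point for your grammar (the encoding, with "ε" spelled out and an empty production standing for ε, is my own):
# Grammar: S -> A | a,  A -> b | ε   (an empty production stands for ε)
grammar = {
    "S": [["A"], ["a"]],
    "A": [["b"], []],
}
EPS = "ε"

first = {nt: set() for nt in grammar}

changed = True
while changed:
    changed = False
    for nt, productions in grammar.items():
        for prod in productions:
            before = len(first[nt])
            nullable_so_far = True
            for sym in prod:
                if sym in grammar:                 # nonterminal: add FIRST(sym) - ε
                    first[nt] |= first[sym] - {EPS}
                    if EPS not in first[sym]:
                        nullable_so_far = False
                        break
                else:                              # terminal: add it and stop
                    first[nt].add(sym)
                    nullable_so_far = False
                    break
            if nullable_so_far:                    # X1...Xn =>* ε, so add ε
                first[nt].add(EPS)
            if len(first[nt]) != before:
                changed = True

print(first)   # e.g. {'S': {'a', 'b', 'ε'}, 'A': {'b', 'ε'}}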
I'm trying to do a proof by contradiction, but don't quite understand how to write it down formally or how to come to an answer in this case. I'm doing a conditional statement.
The problem I'm trying to solve is: "Given the premises h ^ ~r and (h^n) --> r, show that you can conclude ~n using proof by contradiction."
I've taken the negation of both h ^ ~r and (h^n) --> r, but I'm unsure how to use these two to prove ~n.
So far I've written:
(i.)~((h^n) --> r)
(ii.)~(h ^ ~r)
therefore, ~n
The hardest part for me is that this isn't a concrete statement whose negation I can easily picture, so a step-by-step answer showing how to do one of these proofs would be really useful. Thanks!
Suppose
~(((h ^ ~r) ^ ((h^n) --> r)) --> ~n)
Then,
~(~((h ^ ~r) ^ ((h^n) --> r)) v ~n)
=> ~((~(h ^ ~r) v ~((h^n) --> r)) v ~n)
=> ~(((~h v r) v ~(~(h^n) v r)) v ~n)
=> ~(((~h v r) v ((h^n) ^ ~r)) v ~n)
=> ~(((~h v r) v (h ^ n ^ ~r)) v ~n)
=> ~(((~h v r v h) ^ (~h v r v n) ^ ((~h v r) v ~r)) v ~n)
=> ~(((true) ^ (~h v r v n) ^ (true)) v ~n)
=> ~((~h v r v n) v ~n)
=> ~(~h v r v n v ~n)
=> ~((~h v r) v (n v ~n))
=> ~((~h v r) v (true))
=> ~(true)
=> false //contradiction
Therefore,
((h ^ ~r) ^ ((h^n) --> r)) --> ~n
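You can also machine-check that the final implication is a tautology; here is a quick truth table in Python over h, n, r (just a sanity check, not a replacement for the derivation above):
from itertools import product

def implies(p, q):
    # material implication: p -> q is (not p) or q
    return (not p) or q

for h, n, r in product([True, False], repeat=3):
    premises = (h and not r) and implies(h and n, r)
    assert implies(premises, not n), (h, n, r)
print("((h ^ ~r) ^ ((h^n) --> r)) --> ~n holds in all 8 cases")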
Let's define:
p1 := h ^ ~r, p2 := (h ^ n) -> r and q := ~n
we want to prove that p1 ^ p2 -> q.
Assume by contradiction that q = false (while p1 and p2 hold). Then n = true. There are two cases: r = true and r = false.
Case r=true
Then p1 cannot be true because ~r=false. Contradiction.
Case r=false
From p2 we deduce that (h ^ n) must be false. And given that we have assumed n=true, it must be h=false, in contradiction with p1.
Direct proof
From p1 we get h=true and r=false. Now from p2 we deduce (h ^ n) = false. And since h=true, it must be n=false, or ~n=true.
I think the OP is probably asking about, or misinterpreting, the structure of a proof by contradiction rather than requesting a detailed proof for the specific example.
The structure goes like this ...
We've been told to assume a set of things A1, A2, ... An
Let's also assume the negation of what we eventually hope to prove, i.e. ~C
Do some logic that ends with any contradiction, by which we mean any statement of the form X & ~X
Now we ponder what that means. Since a contradiction can never be true, there must be something wrong with at least one of our n+1 assumptions. Could be that several or all of the assumptions are false. But if any n of the assumptions are true then the remaining one cannot be true. We cannot tell at this stage which one is the problem.
In this case we have been told ahead of time to accept A1, A2, ... An, and on that basis we can select the assumption of ~C as the one to be rejected.
As a final step we conclude that if A1, A2, ... An are true then C must be true.
I'm working with an exercise where I need to show that KB |= ~D.
And I know that the Knowledge Base is:
- (B v ¬C) => ¬A
- (¬A v D) => B
- A ∧ C
After converting to CNF:
A ∧ C ∧ (¬A v ¬B) ∧ (¬A v C) ∧ (A v B) ∧ (B v ¬D)
So now I have converted to CNF but from there, I don't know how to go any further. Would appreciate any help. Thanks!
The general resolution rule is that, for any two clauses
(that is, disjunctions of literals)
P_1 v ... v P_n
and
Q_1 v ... v Q_m
in your CNF such that there are i and j with P_i and Q_j being the negation of each other,
you can add a new clause
P_1 v ... v P_{i-1} v P_{i+1} ... v P_n v Q_1 v ... v Q_{j-1} v Q_{j+1} ... v Q_m
This is just a rigorous way to say that you can form a new clause by joining two of them, minus a literal with opposite "signs" in each.
For example
(A v ¬B)∧(B v ¬C)
is equivalent to
(A v ¬B)∧(B v ¬C)∧(A v ¬C),
by joining the two clauses while removing the opposites B and ¬B, obtaining A v ¬C.
Another example is
A∧(¬A v ¬C)
which is equivalent to
A∧(¬A v ¬C) ∧ ¬C.
since A counts as a clause with a single literal (A itself). So the two clauses are joined, while A and ¬A are removed, yielding a new clause ¬C.
Applying this to your problem, we can resolve A and ¬A v ¬B, obtaining ¬B.
We then resolve this new clause ¬B with B v ¬D, obtaining ¬D.
Because the CNF is a conjunction, the fact that it holds means that every clause in it holds. That is to say, the CNF implies all of its clauses. Since ¬D is one of its clauses, ¬D is implied by the CNF. Since the CNF is equivalent to the original KB, the KB implies ¬D.
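If you want to mechanize the two resolution steps, here is a minimal Python sketch (the clause representation as sets of string literals is my own, and this is not a full resolution prover):
def negate(lit):
    # "~X" is the negation of "X" and vice versa
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2, lit):
    # resolve clause c1 (containing lit) with clause c2 (containing negate(lit))
    assert lit in c1 and negate(lit) in c2
    return (c1 - {lit}) | (c2 - {negate(lit)})

# some clauses of the CNF: A, ¬A ∨ ¬B, B ∨ ¬D
clause_A = frozenset({"A"})
clause_notA_notB = frozenset({"~A", "~B"})
clause_B_notD = frozenset({"B", "~D"})

not_B = resolve(clause_A, clause_notA_notB, "A")   # resolve A with ¬A ∨ ¬B  ->  ¬B
not_D = resolve(not_B, clause_B_notD, "~B")        # resolve ¬B with B ∨ ¬D  ->  ¬D
print(not_B, not_D)   # frozenset({'~B'}) frozenset({'~D'})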
I need help proving the following:
(a ∨ b) ∨ c = a ∨ (b ∨ c)
I don't want the answer... just a hint that will help me understand the process of proving this.
Thank you.
Why not just prove it by checking all possible values of a, b and c (True or False)? There are only 2^3 = 8 different cases.
Here's a start, for a=T, b=F, c=T
(a ∨ b) ∨ c = a ∨ (b ∨ c)
(T ∨ F) ∨ T = T ∨ (F ∨ T)
T ∨ T = T ∨ T
T = T
(However, this isn't really a programming question...)
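If a brute-force check is acceptable, enumerating all 8 cases is a couple of lines in Python (just a sanity check, not the axiomatic proof your course may want):
from itertools import product

assert all(((a or b) or c) == (a or (b or c))
           for a, b, c in product([True, False], repeat=3))
print("(a v b) v c == a v (b v c) in all 8 cases")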
What is your axiom set?
Not knowing the set, you could build a truth table.