Trying to convert a logical expression into CNF

I'm trying to convert the following Boolean expressions into CNF:
(1) P ∨ (¬Q ∧ (R → ¬P))
(2) ¬(P → (Q → R)) → ((P → Q) → (Q → R))
I apply various laws until I get to:
(1) P ∨ (¬(Q ∨ R) ∨ ¬(Q ∨ P))
(2) (P ∧ (Q ∧ ¬R)) ∧ ((¬P ∨ Q) ∧ (Q ∧ ¬R))
These certainly don't fit the form of a CNF, but I'm not sure how to get there. Any help would be appreciated!
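One way to sanity-check each rewriting step (and the final CNF) mechanically is a small Python sketch using the sympy library, assuming it is installed; this is only a verification aid, not the hand derivation the exercise asks for:

# Mechanical sanity check of the steps above (assumes sympy is available).
from sympy import symbols, And, Or, Not, Implies, Equivalent
from sympy.logic.boolalg import to_cnf
from sympy.logic.inference import satisfiable

P, Q, R = symbols("P Q R")

f1 = Or(P, And(Not(Q), Implies(R, Not(P))))            # (1)
f2 = Implies(Not(Implies(P, Implies(Q, R))),
             Implies(Implies(P, Q), Implies(Q, R)))    # (2)

print(to_cnf(f1, simplify=True))  # a CNF for (1)
print(to_cnf(f2, simplify=True))  # a CNF for (2)

# Check whether an intermediate form is still equivalent to the original,
# e.g. the asker's rewriting of (1):
step1 = Or(P, Not(Or(Q, R)), Not(Or(Q, P)))
print(satisfiable(Not(Equivalent(f1, step1))) is False)  # True iff equivalent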

How to prove exists goal in this Isabelle/HOL lemma?

I have the following Isabelle/HOL theorem I'd like to prove:
lemma involution:
"∀P h. (∀x. ¬P x ⟶ P (h x)) ⟶ (∃x. P x ∧ P (h (h x)))"
but I have so far not found the correct inference rules to prove it. I believe it follows directly from inference rule applications, since metis can prove it trivially.
My proof script has only the following:
apply (rule allI; rename_tac P; rule allI; rename_tac h; rule impI)
apply (rule exI; rule conjI)
leaving me with the goal:
proof (prove)
goal (2 subgoals):
1. ⋀P h. ∀x. ¬ P x ⟶ P (h x) ⟹ P (?x17 P h)
2. ⋀P h. ∀x. ¬ P x ⟶ P (h x) ⟹ P (h (h (?x17 P h)))
after which I'm quite stumped as to how to proceed. I may need some invocation of the law of excluded middle; I tried both:
P x \/ ~ P x
(P x /\ ~ P (h x)) \/ ~(P x /\ ~P (h x))
to no avail.
I am more familiar with Coq than Isabelle/HOL but even there I could not prove it (even with the additional assumption that the argument type for P is inhabited, and the classic axiom).
Any clues would be much appreciated.
First of all, as you already mentioned, your lemma can be trivially proved by some of Isabelle's built-in proof methods, e.g., blast in Isabelle2021-1. However, since I guess you are looking for a more pedagogical answer, I'll elaborate a bit on it.
Before tackling the proof of a non-trivial result, it's often useful to have a pen-and-paper proof sketch first. Here's the one I got off the top of my head (perhaps there's a simpler one, but for illustration purposes I think it will suffice):
My proof is by case distinction. Let a be an arbitrary but fixed value. Then the following table shows all the cases to consider, together with an associated witness that satisfies the conclusion:
Case | P a   | P (h a) | P (h (h a)) | P (h (h (h a))) | P (h (h (h (h a)))) | Witness
-----|-------|---------|-------------|-----------------|---------------------|--------
 1   | T     | ?       | T           | ?               | ?                   | a
 2   | T     | F -->   | T           | ?               | ?                   | a
 3   | T     | T       | F -->       | T               | ?                   | h a
 4   | F --> | T       | ?           | T               | ?                   | h a
 5   | F --> | T       | F -->       | T               | ?                   | h a
 6   | F --> | T       | T           | F -->           | T                   | h (h a)
In the table above, T, F and ? stand for True, False and "don't care" respectively, and a dashed arrow represents an instantiation of the premise ∀x. ¬P x ⟶ P (h x) with a specific value of x (the result of that instantiation appears in the next column). This concludes our proof sketch.
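If you want to convince yourself of the sketch before formalizing it, a brute-force check over small finite models confirms that a witness always appears among a, h a and h (h a). This is plain Python, nothing Isabelle-specific, and purely illustrative:

# Brute-force check of the proof sketch on all finite models of size n:
# whenever the premise "for all x, not P x implies P (h x)" holds, one of
# a, h a, h (h a) is a witness for "exists x. P x and P (h (h x))",
# exactly as the case table predicts.
from itertools import product

def check(n=4):
    dom = range(n)
    for h in product(dom, repeat=n):                # h encoded as a tuple: h[x]
        for P in product([False, True], repeat=n):  # P encoded as a tuple: P[x]
            if not all(P[h[x]] for x in dom if not P[x]):
                continue                            # premise fails, nothing to show
            a = 0                                   # the arbitrary but fixed value
            assert any(P[x] and P[h[h[x]]] for x in (a, h[a], h[h[a]]))
    print("sketch confirmed on all models of size", n)

check()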
Regarding an Isabelle/HOL proof for your lemma, I think a few remarks are in order:
Since the outermost universal quantifiers are superfluous, I'll remove them from the lemma statement and use free variables instead.
*_tac tactics are considered obsolete nowadays, and Isabelle/Isar (i.e., structured) proofs are strongly preferred over apply-scripts (except for the experimental stages of a proof). Please refer to the Isabelle/Isar Reference Manual for further details.
Now, please find below a structured proof for your lemma, based on the proof sketch above. For pedagogical purposes, I tried to break the proof down into its most elementary steps and included inline comments in the code to aid the reader:
lemma involution:
assumes "∀x. ¬P x ⟶ P (h x)"
shows "∃x. P x ∧ P (h (h x))"
proof -
fix a (* an arbitrary but fixed value *)
show ?thesis
proof (cases "P a")
case True
then consider
("Case #1") "P (h (h a))"
| ("Case #2") "¬P (h a)"
| ("Case #3") "P (h a)" and "¬P (h (h a))"
by blast
then show ?thesis
proof cases
case "Case #1"
from ‹P a› and ‹P (h (h a))› have "P a ∧ P (h (h a))"
by (intro conjI)
then show ?thesis
by (intro exI) (* `exI` using `a` as witness *)
next
case "Case #2"
have "¬P (h a)"
by fact (* assumption of case #2 *)
moreover have "¬P (h a) ⟶ P (h (h a))"
by (rule assms [THEN spec]) (* instantiation of premise with `h a` *)
ultimately have "P (h (h a))"
by (rule rev_mp) (* modus ponens *)
(* NOTE: The three steps above can be replaced by `then have "P (h (h a))" using assms by simp` *)
from ‹P a› and ‹P (h (h a))› have "P a ∧ P (h (h a))"
by (intro conjI)
then show ?thesis
by (intro exI) (* `exI` using `a` as witness *)
next
case "Case #3"
then have "P (h (h (h a)))" (* use of shortcut explained above *)
using assms by simp (* instantiation of premise with `h (h a)` *)
from ‹P (h a)› and ‹P (h (h (h a)))› have "P (h a) ∧ P (h (h (h a)))"
by (intro conjI)
then show ?thesis
by (intro exI) (* `exI` using `h a` as witness *)
qed
next
case False (* i.e., `¬P a` *)
then have "P (h a)"
using assms by simp (* instantiation of premise with `a` *)
then consider
("Case #4") "P (h (h (h a)))"
| ("Case #5") "¬P (h (h a))"
| ("Case #6") "P (h (h a))" and "¬P (h (h (h a)))"
by blast
then show ?thesis
sorry (* proof omitted, similar to cases #1, #2, and #3 *)
qed
qed

Algebraic Logic, Boolean Algebra, proving an equation

Prove that:
((a∧b) ∨ (not a ∧ b)) ∧ ((not c ∧ not d) ∨ (not(c ∨ d))) is equal to b ∧ not c ∧ not d.
My suggestion would be to use De Morgan's law at the beginning:
((a∧b) ∨ (not a ∧ b)) ∧ ((not c ∧ not d) ∨ (not c ∧ not d)) is equal to b ∧ not c ∧ not d.
Could anyone help me out?
The expression has the form x ∧ y where
x = (a∧b) ∨ (not a ∧ b)
= (a ∨ not a) ∧ b ; distribution
= true ∧ b
= b
y = (not c ∧ not d) ∨ (not(c ∨ d))
= (not c ∧ not d) ∨ (not c ∧ not d) ; De Morgan
= not c ∧ not d ; A ∨ A = A
Therefore
x ∧ y = b ∧ not c ∧ not d
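As a quick cross-check of the derivation above, here is a small Python sketch that exhausts all 16 assignments (not part of the algebraic proof, just a verification aid):

# Exhaustive check of the identity over all assignments of a, b, c, d.
from itertools import product

for a, b, c, d in product([False, True], repeat=4):
    lhs = ((a and b) or (not a and b)) and ((not c and not d) or not (c or d))
    rhs = b and not c and not d
    assert lhs == rhs
print("equivalent for all 16 assignments")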

How would one prove ((p ⇒ q) ⇒ p) ⇒ p, using the Fitch system

FYI, the logic program I use cannot do contradiction introductions. This point is most likely irrelevant, for I highly doubt I would need to use any form of contradiction for this proof.
In my attempt to solve this, I started off by assuming (p ⇒ q) ⇒ p.
Is this correct?
If so, what next? Forgive me if the solution seems so obvious.
(p ⇒ q) ⇒ p
((p ⇒ q) ⇒ p) ∨ (p ⇒ p) ; (X ⇒ X) and Or introduction
((p ⇒ q) ∨ p) ⇒ p ; (X ⇒ Z) ∨ (Y ⇒ Z) |- (X ∨ Y ⇒ Z)
((¬p ∨ q) ∨ p) ⇒ p ; (p ⇒ q) ⇔ (¬p ∨ q)
((¬p ∨ p) ∨ q) ⇒ p ; (X ∨ Y) ∨ Z |- (X ∨ Z) ∨ Y
(true ∨ q) ⇒ p ; (¬X ∨ X) ⇔ true
true ⇒ p ; (true ∨ X) ⇔ true
p ; Implication elimination
((p ⇒ q) ⇒ p) ⇒ p ; Implication introduction
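For anyone who wants to double-check the conclusion independently of the Fitch derivation: the formula is Peirce's law, a classical tautology, which a short Python truth-table sketch confirms (x ⇒ y encoded as (not x) or y; purely illustrative):

# Truth-table check that ((p => q) => p) => p holds for every assignment.
from itertools import product

imp = lambda x, y: (not x) or y
assert all(imp(imp(imp(p, q), p), p) for p, q in product([False, True], repeat=2))
print("tautology")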

Propositional Logic Formal Proof

I'm trying to formally prove the following equivalence as practice ahead of my logic exam. However, I'm having a little difficulty working out the steps. Here are the rules that I'm using:
A ∧ A ≡ A, A ∨ A ≡ A idempotence
A ∧ B ≡ B ∧ A, A ∨ B ≡ B ∨ A commutativity
A ∧ (B ∧ C) ≡ (A ∧ B) ∧ C, A ∨ (B ∨ C) ≡ (A ∨ B) ∨ C associativity
A ∧ (A ∨ B) ≡ A, A ∨ (A ∧ B) ≡ A absorption
A ∧ (B ∨ C) ≡ (A ∧ B) ∨ (A ∧ C) distributivity
A ∨ (B ∧ C) ≡ (A ∨ B) ∧ (A ∨ C) distributivity
A ∧ (¬A) ≡ false, A ∨ (¬A) ≡ true negation
¬(¬A) ≡ A double negation
¬(A ∧ B) ≡ (¬A) ∨ (¬B), ¬(A ∨ B) ≡ (¬A) ∧ (¬B) de Morgan
A ⇒ B ≡ (¬A) ∨ B implication
A ⇔ B ≡ (A ⇒ B) ∧ (B ⇒ A) bi-implication
And this is the equivalence:
(p⇒r) ∧ (q⇒r) ≡ (p∨q) ⇒ r
I've figured out that I need implication, commutativity and distributivity, but I'm stuck at this point. I'd appreciate any help!
Here is a formal proof:
(p∨q) ⇒ r ≡ ¬(p∨q) ∨ r implication
          ≡ (¬p∧¬q) ∨ r de Morgan
          ≡ (¬p∨r) ∧ (¬q∨r) distributivity and commutativity
          ≡ (p⇒r) ∧ (q⇒r) implication
Note, however, that nobody actually thinks this way; the real exercise should consist in explaining why both expressions are equivalent.
EXPLANATION
Given that p ⇒ (p∨q), from (p∨q) ⇒ r, we get p ⇒ (p∨q) ⇒ r and therefore p ⇒ r. Since the same argument is valid for q we also get q ⇒ r. So, (p ⇒ r) ∧ (q ⇒ r).
Conversely, if (p ⇒ r) ∧ (q ⇒ r), then r must be true whenever either p or q happens to be true, in other words, whenever (p∨q) is true. Hence (p∨q) ⇒ r.
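If it helps to make the explanation concrete, here is a small Python sketch that prints both sides for all eight assignments; the two columns agree on every row (again, just a verification aid, not a formal proof):

# Truth table for (p => r) and (q => r) versus (p or q) => r.
from itertools import product

imp = lambda x, y: (not x) or y
for p, q, r in product([False, True], repeat=3):
    lhs = imp(p, r) and imp(q, r)
    rhs = imp(p or q, r)
    print(p, q, r, "|", lhs, rhs)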

SICP Types and Variables

This is from the MIT 6.001 Online Tutor; it's part of the third problem set.
Question: Indicate the type of each of the following expressions. If you need type variables, use A,B,C, etc., starting with A as the leftmost variable.
1. (lambda (x y) x) = A, B -> A
2. (lambda (p) (p 3))
3. (lambda (p x) (p x)) = (A -> B), A -> B
4. (lambda (x y comp) (if (comp x y) x y))
As you can see, I solved 1 and 3, but that was mainly out of luck. I'm still having trouble understanding the concept, and that is stopping me from solving 2 and 4.
Lecture slides can be found here (view the last few).
1. A, B -> A
2. (number -> A) -> A
3. (A -> B), A -> B
4. A, A, (A, A -> boolean) -> A
(the last assumes that x and y have the same type)
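If it helps to see the same signatures in a more familiar notation, here is a rough transliteration into Python type hints (purely illustrative; "number" is rendered as int here, and the function names f1 to f4 are just placeholders):

# The four Scheme expressions rewritten with Python type annotations.
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def f1(x: A, y: B) -> A:                                 # A, B -> A
    return x

def f2(p: Callable[[int], A]) -> A:                      # (number -> A) -> A
    return p(3)

def f3(p: Callable[[A], B], x: A) -> B:                  # (A -> B), A -> B
    return p(x)

def f4(x: A, y: A, comp: Callable[[A, A], bool]) -> A:   # A, A, (A, A -> boolean) -> A
    return x if comp(x, y) else y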