How to prove exists goal in this Isabelle/HOL lemma? - logic

I have the following Isabelle/HOL theorem I'd like to prove:
lemma involution:
"∀P h. (∀x. ¬P x ⟶ P (h x)) ⟶ (∃x. P x ∧ P (h (h x)))"
but I have so far not found the correct inference rules to prove it. I believe it follows directly from inference rule applications, since metis can prove it trivially.
My proof script has only the following:
apply (rule allI; rename_tac P; rule allI; rename_tac h; rule impI)
apply (rule exI; rule conjI)
leaving me with the goal:
proof (prove)
goal (2 subgoals):
 1. ⋀P h. ∀x. ¬ P x ⟶ P (h x) ⟹
          P (?x17 P h)
 2. ⋀P h. ∀x. ¬ P x ⟶ P (h x) ⟹
          P (h (h (?x17 P h)))
after which I'm quite stumped as to how to proceed. I may need some invocation of the law of excluded middle; I tried both:
P x \/ ~ P x
(P x /\ ~ P (h x)) \/ ~(P x /\ ~P (h x))
to no avail.
I am more familiar with Coq than Isabelle/HOL but even there I could not prove it (even with the additional assumption that the argument type for P is inhabited, and the classic axiom).
Any clues would be much appreciated.

First of all, as you already mentioned, your lemma can be trivially proved by some of Isabelle's built-in proof methods, e.g., blast in Isabelle2021-1. However, since I guess you are looking for a more pedagogical answer, I'll elaborate a bit on it.
Before tackling the proof of a non-trivial result, it's often useful to have a pen-and-paper proof sketch first. Here's the one I got off the top of my head (perhaps there's a simpler one, but for illustration purposes I think it will suffice):
My proof is by case distinction. Let a be an arbitrary but fixed value. Then, the following table shows all the cases to consider and an associated witness that satisfies the conclusion:
Case #   P a      P (h a)   P (h (h a))    P (h (h (h a)))    P (h (h (h (h a))))   Witness
--------------------------------------------------------------------------------------------
  1       T          ?           T                ?                    ?                a
  2       T          F  ----->   T                ?                    ?                a
  3       T          T           F  --------->    T                    ?               h a
  4       F  ->      T           ?                T                    ?               h a
  5       F  ->      T           F  --------->    T                    ?               h a
  6       F  ->      T           T                F  ------------->    T             h (h a)
In the table above, T, F and ? stand for True, False and "don't care" respectively, and a dashed arrow represents an instantiation of the premise ∀x. ¬P x ⟶ P (h x) with a specific value of x. This concludes our proof sketch.
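As an aside, the case analysis above can even be transcribed into a runnable Haskell sketch that computes the witness (assuming a decidable predicate p and a function h actually satisfying the premise ¬P x ⟶ P (h x); with a premise-violating h the search may diverge):

witness :: (a -> Bool) -> (a -> a) -> a -> a
witness p h a
  | not (p a)   = witness p h (h a) -- cases #4-#6: the premise gives P (h a)
  | p (h (h a)) = a                 -- cases #1 and #2: a itself is a witness
  | otherwise   = h a               -- case #3: P (h a) must hold here, and
                                    -- ¬P (h (h a)) forces P (h (h (h a)))

For example, witness even (+ 1) 3 == 4, and indeed both even 4 and even (4 + 2) hold.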
Regarding an Isabelle/HOL proof for your lemma, I think a few remarks are in order:
Since the outermost universal quantifiers are superfluous, I'll remove them from the lemma statement and use free variables instead.
*_tac tactics are considered obsolete nowadays, and Isabelle/Isar (i.e., structured) proofs are strongly preferred over apply-scripts (except for the experimental stages of a proof). Please refer to the Isabelle/Isar Reference Manual for further details.
Now, please find below a structured proof for your lemma, based on the proof sketch above. For pedagogical purposes, I tried to break down the proof into the most elementary steps and included inline comments in the code to aid the reader:
lemma involution:
  assumes "∀x. ¬P x ⟶ P (h x)"
  shows "∃x. P x ∧ P (h (h x))"
proof -
  fix a (* an arbitrary but fixed value *)
  show ?thesis
  proof (cases "P a")
    case True
    then consider
      ("Case #1") "P (h (h a))"
      | ("Case #2") "¬P (h a)"
      | ("Case #3") "P (h a)" and "¬P (h (h a))"
      by blast
    then show ?thesis
    proof cases
      case "Case #1"
      from ‹P a› and ‹P (h (h a))› have "P a ∧ P (h (h a))"
        by (intro conjI)
      then show ?thesis
        by (intro exI) (* `exI` using `a` as witness *)
    next
      case "Case #2"
      have "¬P (h a)"
        by fact (* assumption of case #2 *)
      moreover have "¬P (h a) ⟶ P (h (h a))"
        by (rule assms [THEN spec]) (* instantiation of premise with `h a` *)
      ultimately have "P (h (h a))"
        by (rule rev_mp) (* modus ponens *)
      (* NOTE: The three steps above can be replaced by `then have "P (h (h a))" using assms by simp` *)
      from ‹P a› and ‹P (h (h a))› have "P a ∧ P (h (h a))"
        by (intro conjI)
      then show ?thesis
        by (intro exI) (* `exI` using `a` as witness *)
    next
      case "Case #3"
      then have "P (h (h (h a)))" (* use of shortcut explained above *)
        using assms by simp (* instantiation of premise with `h (h a)` *)
      from ‹P (h a)› and ‹P (h (h (h a)))› have "P (h a) ∧ P (h (h (h a)))"
        by (intro conjI)
      then show ?thesis
        by (intro exI) (* `exI` using `h a` as witness *)
    qed
  next
    case False (* i.e., `¬P a` *)
    then have "P (h a)"
      using assms by simp (* instantiation of premise with `a` *)
    then consider
      ("Case #4") "P (h (h (h a)))"
      | ("Case #5") "¬P (h (h a))"
      | ("Case #6") "P (h (h a))" and "¬P (h (h (h a)))"
      by blast
    then show ?thesis
      sorry (* proof omitted, similar to cases #1, #2, and #3 *)
  qed
qed


How would one prove ((p ⇒ q) ⇒ p) ⇒ p, using the Fitch system

FYI, the logic program I use cannot do contradiction introductions. This point is most likely irrelevant, for I highly doubt I would need to use any form of contradiction for this proof.
In my attempt to solve this, I started off with assuming ((p ⇒ q) ⇒ p).
Is this correct?
If so, what next? Forgive me if the solution seems so obvious.
(p ⇒ q) ⇒ p
((p ⇒ q) ⇒ p) ∨ (p ⇒ p) ; (X ⇒ X) and Or introduction
((p ⇒ q) ∨ p) ⇒ p ; (X ⇒ Z) ∨ (Y ⇒ Z) |- (X ∨ Y ⇒ Z)
((¬p ∨ q) ∨ p) ⇒ p ; (p ⇒ q) ⇔ (¬p ∨ q)
((¬p ∨ p) ∨ q) ⇒ p ; (X ∨ Y) ∨ Z |- (X ∨ Z) ∨ Y
(true ∨ q) ⇒ p ; (¬X ∨ X) ⇔ true
true ⇒ p ; (true ∨ X) ⇔ true
p ; Implication elimination
((p ⇒ q) ⇒ p) ⇒ p ; Implication introduction
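As an aside grounded in the Curry-Howard correspondence, Peirce's law is exactly the type of call-with-current-continuation, which a short Haskell sketch makes concrete (callCC comes from Control.Monad.Cont; the name peirce is just a label of mine):

import Control.Monad.Cont (Cont, callCC)

-- ((p => q) => p) => p, read as a type: precisely the shape of callCC
peirce :: ((p -> Cont r q) -> Cont r p) -> Cont r p
peirce = callCC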

How do you convert to lambda syntax?

Part of a question I'm trying to understand involves this:
twice (twice) f x, where twice == lambda f x. f (f x)
I'm trying to understand how to make that substitution, and what it means.
My understanding is that (lambda x y . x + y) 2 3 == 2 + 3 == 5. I don't understand what twice (twice) means, or f (f x).
Two ways of looking at this.
Mechanical application of beta-reduction
You can solve this mechanically just by expanding any subterm of the form twice F X; doing this repeatedly will eventually eliminate all the occurrences of twice, although you need to take care that you really understand the syntax tree of the lambda calculus to avoid mistakes.
twice takes two arguments, so your expression twice (twice) f x is the redex twice (twice) f applied to x. (A redex is a subterm that you can reduce independently of the rest of the term).
Expand the definition of twice in the redex: twice (twice) f -> twice (twice f).
Substitute this into the original term to get twice (twice f) x, which is another redex we can expand twice in to get twice f (twice f x) (take care with the brackets in this step).
We have two twice redexes we can expand here, expanding the one inside the brackets is slightly simpler, giving twice f (f (f x)), which can again be expanded to give f (f (f (f x))).
Semantics of twice via abstraction
You can see what's going on at a more intuitive level by appealing to a higher-order combinator, the "○" infix combinator for function composition:
f ○ g = lambda x. f (g x)
It's easy to verify that twice f x and (f ○ f) x both expand to the same normal form, i.e., f (f x), so by extensionality, we have
twice f = f ○ f
Using this, we can expand very straightforwardly, first eliminating twice in favour of the composition combinator:
twice (twice) f x
= (twice ○ twice) f x
= (twice (twice f)) x /* expand out '○' */
= (twice (f ○ f)) x
= ((f ○ f) ○ (f ○ f)) x
and then expanding out '○':
= (f ○ f) ((f ○ f) x)
= (f ○ f) (f (f x))
= (f (f (f (f x))))
That's more expansion steps, because we first expand to terms containing the '○' operator and then expand those operators out, but the steps are simpler, more intuitive ones, where you are less likely to misunderstand what you are doing. The '○' is a widely used, standard operator in Haskell (written .) and is well worth getting used to.
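A quick way to convince yourself of the result is to transcribe twice into Haskell, where (.) plays the role of '○':

twice :: (a -> a) -> a -> a
twice f x = f (f x)

-- In GHCi:
--   twice twice (+ 1) 0                       evaluates to 4
--   (((+ 1) . (+ 1)) . ((+ 1) . (+ 1))) 0     likewise gives 4
-- i.e. f applied four times, matching f (f (f (f x)))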

SICP Types and Variables

This is from the MIT 6.001 Online Tutor, it's part of the third problem set.
Question: Indicate the type of each of the following expressions. If you need type variables, use A,B,C, etc., starting with A as the leftmost variable.
1. (lambda (x y) x) = A,B->A
2. (lambda (p) (p 3))
3. (lambda (p x) (p x)) = (A->B), A->B
4. (lambda (x y comp) (if (comp x y) x y))
As you can see I solved 1 and 3, but that was mainly out of luck. I still am having issues with understanding the concept and that is stopping me from solving 2 and 4.
Lecture slides can be found here (view the last few).
1. A, B -> A
2. (number -> A) -> A
3. (A -> B), A -> B
4. A, A, (A, A -> boolean) -> A
(the last assumes that x and y are the same type)
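You can replay the same exercise in Haskell, where GHCi's :type infers the analogous (curried) types; the names e1..e4 are just labels for the four expressions:

e1 = \x y -> x                              -- a -> b -> a
e2 = \p -> p (3 :: Integer)                 -- (Integer -> a) -> a
e3 = \p x -> p x                            -- (a -> b) -> a -> b
e4 = \x y comp -> if comp x y then x else y -- a -> a -> (a -> a -> Bool) -> a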

Lambda calculus predecessor function reduction steps

I am getting stuck with the Wikipedia description of the predecessor function in lambda calculus.
What Wikipedia says is the following:
PRED := λn.λf.λx. n (λg.λh. h (g f)) (λu.x) (λu.u)
Can someone explain reduction processes step-by-step?
Thanks.
Ok, so the idea of Church numerals is to encode "data" using functions, right? The way that works is by representing a value by some generic operation you'd perform with it. We can therefore go in the other direction as well, which can sometimes make things clearer.
Church numerals are a unary representation of the natural numbers. So, let's use Z to mean zero and Sn to represent the successor of n. Now we can count like this: Z, SZ, SSZ, SSSZ... The equivalent Church numeral takes two arguments--the first corresponding to S, and the second to Z--then uses them to construct the above pattern. So given arguments f and x, we can count like this: x, f x, f (f x), f (f (f x))...
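The same counting pattern, written as plain Haskell functions (ChurchN is just my label for the encoding):

type ChurchN a = (a -> a) -> a -> a

zero, one, two :: ChurchN a
zero f x = x       -- Z
one  f x = f x     -- SZ
two  f x = f (f x) -- SSZ

-- e.g. two succ (0 :: Int) == 2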
Let's look at what PRED does.
First, it creates a lambda taking three arguments--n is the Church numeral whose predecessor we want, of course, which means that f and x are the arguments to the resulting numeral, and thus that the body of that lambda will be f applied to x one time fewer than n would apply it.
Next, it applies n to three arguments. This is the tricky part.
The second argument, which corresponds to Z from earlier, is λu.x--a constant function that ignores one argument and returns x.
The first argument, which corresponds to S from earlier, is λgh.h (g f). We can rewrite this as λg. (λh.h (g f)) to reflect the fact that only the outermost lambda is being applied n times. What this function does is take the accumulated result so far as g and return a new function taking one argument, which applies that argument to g applied to f. Which is absolutely baffling, of course.
So... what's going on here? Consider the direct substitution with S and Z. In a non-zero number Sn, the n corresponds to the argument bound to g. So, remembering that f and x are bound in an outside scope, we can count like this: λu.x, λh. h ((λu.x) f), λh'. h' ((λh. h ((λu.x) f)) f) ... Performing the obvious reductions, we get this: λu.x, λh. h x, λh'. h' (f x) ... The pattern here is that a function is being passed "inward" one layer, at which point an S will apply it, while a Z will ignore it. So we get one application of f for each S except the outermost.
The third argument is simply the identity function, which is dutifully applied by the outermost S, returning the final result--f applied one fewer times than the number of S layers n corresponds to.
McCann's answer explains it pretty well. Let's take a concrete example for Pred 3 = 2:
Consider the expression n (λgh.h (g f)) (λu.x). Let K = (λgh.h (g f)).
For n = 0, we encode 0 = λfx.x, so beta-reducing (λfx.x)(λgh.h(gf)) means (λgh.h(gf)) is applied 0 times. After further beta-reduction we get:
λfx.(λu.x)(λu.u)
reduces to
λfx.x
where λfx.x = 0, as expected.
For n = 1, we apply K once:
(λgh.h (g f)) (λu.x)
=> λh. h((λu.x) f)
=> λh. h x
For n = 2, we apply K twice:
(λgh.h (g f)) (λh. h x)
=> λh. h ((λh. h x) f)
=> λh. h (f x)
For n = 3, we apply K three times:
(λgh.h (g f)) (λh. h (f x))
=> λh.h ((λh. h (f x)) f)
=> λh.h (f (f x))
Finally, we take this result and apply the identity function to it, getting
(λh.h (f (f x))) (λu.u)
=> (λu.u)(f (f x))
=> f (f x)
This is the definition of number 2.
The list-based implementation might be easier to understand, but it takes many intermediate steps. So it is not as nice as Church's original implementation, IMO.
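To double-check these reductions mechanically, here is a direct Haskell transcription (a sketch: the rank-2 Church wrapper and the names predC and toInt are mine):

{-# LANGUAGE RankNTypes #-}

newtype Church = Church (forall a. (a -> a) -> a -> a)

three :: Church
three = Church (\f x -> f (f (f x)))

-- PRED := λn.λf.λx. n (λg.λh. h (g f)) (λu.x) (λu.u)
predC :: Church -> Church
predC (Church n) = Church (\f x -> n (\g h -> h (g f)) (const x) id)

toInt :: Church -> Int
toInt (Church n) = n (+ 1) 0

-- toInt (predC three) == 2, matching the reduction above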
After reading the previous answers (good ones), I'd like to give my own vision of the matter in the hope it helps someone (corrections are welcome). I'll use an example.
First off, I'd like to add some parentheses to the definition that made everything clearer to me. Let's rewrite the given formula as:
PRED := λn λf λx.(n (λgλh.h (g f)) (λu.x)) (λu.u)
Let's also define four Church numerals that will help with the example:
Zero := λfλx.x
One := λfλx. f (Zero f x)
Two := λfλx. f (One f x)
Three := λfλx. f (Two f x)
In order to understand how this works, let's focus first on this part of the formula:
n (λgλh.h (g f)) (λu.x)
From here, we can draw these conclusions:
n is a Church numeral, the function to be applied is λgλh.h (g f), and the starting data is λu.x
With this in mind, let's try an example:
PRED Three := λf λx.(Three (λgλh.h (g f)) (λu.x)) (λu.u)
Let's focus first on the reduction of the numeral (the part we explained before):
Three (λgλh.h (g f)) (λu.x)
Which reduces to:
(λgλh.h (g f)) (Two (λgλh.h (g f)) (λu.x))
(λgλh.h (g f)) ((λgλh.h (g f)) (One (λgλh.h (g f)) (λu.x)))
(λgλh.h (g f)) ((λgλh.h (g f)) ((λgλh.h (g f)) (Zero (λgλh.h (g f)) (λu.x))))
(λgλh.h (g f)) ((λgλh.h (g f)) ((λgλh.h (g f)) ((λfλx.x) (λgλh.h (g f)) (λu.x)))) -- Here we lose one application of f
(λgλh.h (g f)) ((λgλh.h (g f)) ((λgλh.h (g f)) (λu.x)))
(λgλh.h (g f)) ((λgλh.h (g f)) (λh.h ((λu.x) f)))
(λgλh.h (g f)) ((λgλh.h (g f)) (λh.h x))
(λgλh.h (g f)) (λh.h ((λh.h x) f))
(λgλh.h (g f)) (λh.h (f x))
(λh.h ((λh.h (f x)) f))
Ending up with:
λh.h (f (f x))
So, we have:
PRED Three := λf λx.(λh.h (f (f x))) (λu.u)
Reducing again:
PRED Three := λf λx.((λu.u) (f (f x)))
PRED Three := λf λx.f (f x)
As you can see in the reductions, we end up applying the function one time less thanks to a clever way of using functions.
Using add1 as f and 0 as x, we get:
PRED Three add1 0 := add1 (add1 0) = 2
Hope this helps.
You can try to understand this definition of the predecessor function (not my favourite one) in terms of continuations.
To simplify the matter a bit, let us consider the following variant
PRED := λn.n (λgh.h (g S)) (λu.0) (λu.u)
then, you can replace S with f, and 0 with x.
The body of the function iterates a transformation M n times over an argument N. The argument N is a function of type (nat -> nat) -> nat that expects a continuation for nat and returns a nat. Initially, N = λu.0, that is, it ignores the continuation and just returns 0.
Let us call N the current computation.
The function M : ((nat -> nat) -> nat) -> ((nat -> nat) -> nat) modifies the computation g : (nat -> nat) -> nat as follows.
It takes as input a continuation h, and applies it to the result of continuing the current computation g with S.
Since the initial computation ignored the continuation, after one application of M we get the computation (λh.h 0), then (λh.h (S 0)), and so on.
At the end, we apply the computation to the identity continuation to extract the result.
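Concretely, taking nat to be Int, S to be succ and 0 to be 0, the whole construction fits in a few lines (m and predN are my names for M and the wrapper):

type Comp = (Int -> Int) -> Int    -- a computation expecting a continuation

m :: Comp -> Comp
m g = \h -> h (g succ)             -- continue g with S, hand the result to h

predN :: Int -> Int
predN n = (iterate m (const 0) !! n) id  -- n applications of M, then identity

-- predN 0 == 0, predN 1 == 0, predN 3 == 2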
I'll add my explanation to the above good ones, mostly for the sake of my own understanding. Here's the definition of PRED again:
PRED := λnfx. (n (λg.(λh.h (g f)))) (λu.x) (λu.u)
The stuff on the right side of the first dot is supposed to be the (n-1)-fold composition of f applied to x: f^(n-1)(x).
Let's see why this is the case by incrementally grokking the expression.
λu.x is the constant function valued at x. Let's just denote it const_x.
λu.u is the identity function. Let's call it id.
λg.(λh.h (g f)) is a weird function that we need to understand. Let's call it F.
Ok, so PRED tells us to evaluate the n-fold composition of F on the constant function and then to evaluate the result on the identity function.
PRED := λnfx. F^n const_x id
Let's take a closer look at F:
F := λg.(λh.h (g f))
F sends g to evaluation at g(f).
Let's denote evaluation at value y by ev_y.
That is, ev_y := λh.h y
So
F = λg. ev_{g(f)}
Now we figure out what F^n const_x is.
F const_x = ev_{const_x(f)} = ev_x
and
F^2 const_x = F ev_x = ev_{ev_x(f)} = ev_{f(x)}
Similarly,
F^3 const_x = F ev_{f(x)} = ev_{f^2(x)}
and so on:
F^n const_x = ev_{f^(n-1)(x)}
Now,
PRED = λnfx. F^n const_x id
= λnfx. ev_{f^(n-1)(x)} id
= λnfx. id(f^(n-1)(x))
= λnfx. f^(n-1)(x)
which is what we wanted.
Super goofy. The idea is to turn doing something n times into applying f n-1 times. The solution is to apply F n times to const_x to obtain
ev_{f^(n-1)(x)} and then to extract f^(n-1)(x) by evaluating at the identity function.
Split this definition
PRED := λn.λf.λx.n (λg.λh.h (g f)) (λu.x) (λu.u)
into 4 parts:
PRED := λn.λf.λx. | n | (λg.λh.h (g f)) | (λu.x) | (λu.u)
                    -   ---------------   ------   ------
                    A          B             C        D
For now, ignore D. By definition of Church numerals, A B C is B^n C: Apply n folds of B to C.
Now treat B like a machine that turns one input into one output. Its input g has form λh.h *, when appended by f, becomes (λh.h *) f = f *. This adds one more application of f to *. The result f * is then prepended by λh.h to become λh.h (f *).
You see the pattern: Each application of B turns λh.h * into λh.h (f *). If we had λh.h x as the begin term, we would have λh.h (f^n x) as the end term (after n applications of B).
However, the begin term is C = (λu.x), when appended by f, becomes (λu.x) f = x, then prepended by λh.h to become λh.h x. So we had λh.h x after, not before, the first application of B. This is why we have λh.h (f^(n-1) x) as the end term: The first application of f was ignored.
Finally, apply λh.h (f^(n-1) x) to D = (λu.u), which is identity, to get f^(n-1) x. That is:
PRED := λn.λf.λx.f^(n-1) x

Recursively modifying parts of a data structure in Haskell

Hello guys, I am new to Haskell. I would like to create a Haskell program that can apply De Morgan's laws to logic expressions. The problem is that I can't change the given expression into a new expression (after applying De Morgan's laws).
To be specific here is my data structure
data LogicalExpression = Var Char
                       | Neg LogicalExpression
                       | Conj LogicalExpression LogicalExpression
                       | Disj LogicalExpression LogicalExpression
                       | Impli LogicalExpression LogicalExpression
                       deriving (Show)
I would like to create a function that takes in a "LogicalExpression" and return a "LogicalExpression" after applying DeMorgan's laws.
For example, whenever I find this pattern: Neg (Conj (Var 'a') (Var 'b')) in a LogicalExpression, I need to convert it to Disj (Neg (Var 'a')) (Neg (Var 'b')).
The idea is simple, but it's hard to implement in Haskell. It's like trying to make a function (let's call it Z) that searches for x and converts it to y, so if Z is given "vx" it converts it to "vy"; only instead of strings it takes in the data structure LogicalExpression, and instead of x it takes the pattern I mentioned and spits out the whole LogicalExpression again, but with the pattern changed.
P.S.: I want the function to take any complex logic expression and simplify it using De Morgan's laws.
Any hints?
Thanks in advance.
Luke (luqui) has presented probably the most elegant way to think about the problem. However, his encoding requires you to manually get right large swathes of the traversal for each such rewrite rule you want to create.
Bjorn Bringert's compos from A Pattern for Almost Compositional Functions can make this easier, especially if you have multiple such normalization passes you need to write. It is usually written with Applicatives or rank-2 types, but to keep things simple here I'll defer that:
Given your data type
data LogicalExpression
  = Var Char
  | Neg LogicalExpression
  | Conj LogicalExpression LogicalExpression
  | Disj LogicalExpression LogicalExpression
  | Impl LogicalExpression LogicalExpression
  deriving (Show)
We can define a class used to hunt down non-top-level sub-expressions:
class Compos a where
  compos' :: (a -> a) -> a -> a

instance Compos LogicalExpression where
  compos' f (Neg e) = Neg (f e)
  compos' f (Conj a b) = Conj (f a) (f b)
  compos' f (Disj a b) = Disj (f a) (f b)
  compos' f (Impl a b) = Impl (f a) (f b)
  compos' _ t = t
For instance, we could eliminate all implications:
elimImpl :: LogicalExpression -> LogicalExpression
elimImpl (Impl a b) = Disj (Neg (elimImpl a)) (elimImpl b)
elimImpl t = compos' elimImpl t -- search deeper
Then we can apply it, as luqui does in his answer, hunting down negated conjunctions and disjunctions. And since, as Luke points out, it is probably better to do all your negation distribution in one pass, we'll also include normalization of negated implication and double negation elimination, yielding a formula in negation normal form (assuming that we've already eliminated implication):
nnf :: LogicalExpression -> LogicalExpression
nnf (Neg (Conj a b)) = Disj (nnf (Neg a)) (nnf (Neg b))
nnf (Neg (Disj a b)) = Conj (nnf (Neg a)) (nnf (Neg b))
nnf (Neg (Neg a)) = nnf a
nnf t = compos' nnf t -- search and replace
The key is the last line, which says that if none of the other cases above match, go hunt for subexpressions where you can apply this rule. Also, since we push the Neg into the terms, and then normalize those, you should only wind up with negated variables at the leaves, since all other cases where Neg precedes another constructor will be normalized away.
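For instance, a quick check in GHCi (assuming the definitions above are in scope):

*Main> nnf (Neg (Conj (Var 'a') (Disj (Var 'b') (Var 'c'))))
Disj (Neg (Var 'a')) (Conj (Neg (Var 'b')) (Neg (Var 'c')))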
The more advanced version would use
import Control.Applicative
import Control.Monad.Identity
class Compos a where
  compos :: Applicative f => (a -> f a) -> a -> f a

compos' :: Compos a => (a -> a) -> a -> a
compos' f = runIdentity . compos (Identity . f)
and
instance Compos LogicalExpression where
  compos f (Neg e) = Neg <$> f e
  compos f (Conj a b) = Conj <$> f a <*> f b
  compos f (Disj a b) = Disj <$> f a <*> f b
  compos f (Impl a b) = Impl <$> f a <*> f b
  compos _ t = pure t
This doesn't help in your particular case here, but is useful later if you need to return multiple rewritten results, perform IO, or otherwise engage in more complicated activities in your rewrite rule.
You might need to use this if, for instance, you wanted to try to apply the De Morgan laws in any subset of the locations where they apply, rather than pursue a normal form.
Notice that no matter what function you are rewriting, Applicative you are using, or even directionality of information flow during the traversal, the compos definition only has to be given once per data type.
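To illustrate what the Applicative version buys you, here is one possible use (a sketch): the same compos also supports queries, e.g. collecting every variable name in one traversal via the Const applicative:

import Data.Functor.Const (Const (..))

vars :: LogicalExpression -> [Char]
vars (Var c) = [c]
vars t = getConst (compos (Const . vars) t)

-- *Main> vars (Impl (Var 'p') (Conj (Var 'q') (Var 'r')))
-- "pqr"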
If I understand correctly, you want to apply De Morgan's laws to push the negation down into the tree as far as possible. You'll have to explicitly recurse down the tree many times:
-- no need to call self on the top-level structure,
-- since deMorgan is never applicable to its own result
deMorgan (Neg (a `Conj` b)) = (deMorgan $ Neg a) `Disj` (deMorgan $ Neg b)
deMorgan (Neg (a `Disj` b)) = (deMorgan $ Neg a) `Conj` (deMorgan $ Neg b)
deMorgan (Neg a) = Neg $ deMorgan a
deMorgan (a `Conj` b) = (deMorgan a) `Conj` (deMorgan b)
-- ... etc.
All of this would be much easier in a term-rewriting system, but that's not what Haskell is.
(Btw., life becomes a lot easier if you translate P -> Q into not P or Q in your formula parser and remove the Impli constructor. The number of cases in each function on formulas becomes smaller.)
Others have given good guidance. But I would phrase this as a negation eliminator, so that means you have:
deMorgan (Neg (Var x)) = Neg (Var x)
deMorgan (Neg (Neg a)) = deMorgan a
deMorgan (Neg (Conj a b)) = Disj (deMorgan (Neg a)) (deMorgan (Neg b))
-- ... etc. match Neg against every constructor
deMorgan (Conj a b) = Conj (deMorgan a) (deMorgan b)
-- ... etc. just apply deMorgan to subterms not starting with Neg
We can see by induction that in the result, Neg will only be applied to Var terms, and at most once.
I like to think of transformations like this as eliminators: i.e. things that try to "get rid" of a certain constructor at the top level by pushing them down. Match the constructor you are eliminating against every inner constructor (including itself), and then just forward the rest. For example, a lambda calculus evaluator is an Apply eliminator. An SKI converter is a Lambda eliminator.
The important point is the recursive application of deMorgan. It is quite different from (for example):
deMorgan' z@(Var x) = z
deMorgan' (Neg (Conj x y)) = Disj (Neg x) (Neg y)
deMorgan' (Neg (Disj x y)) = Conj (Neg x) (Neg y)
deMorgan' z@(Neg x) = z
deMorgan' (Conj x y) = Conj x y
deMorgan' (Disj x y) = Disj x y
which does not work:
*Main> let var = Conj (Disj (Var 'A') (Var 'B')) (Neg (Disj (Var 'D') (Var 'E')))
*Main> deMorgan' var
Conj (Disj (Var 'A') (Var 'B')) (Neg (Disj (Var 'D') (Var 'E')))
The problem here is that you do not apply the transformation to the subexpressions (the x's and y's).
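With the properly recursive deMorgan from the answers above (assuming the elided "etc." cases are filled in), the same input rewrites the inner negation as expected:

*Main> deMorgan var
Conj (Disj (Var 'A') (Var 'B')) (Conj (Neg (Var 'D')) (Neg (Var 'E')))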
