How to reduce TRUE and TRUE in lambda calculus correctly? - lambda-calculus

I am trying to understand the lambda calculus. However, I am a bit stuck on this expression: TRUE and TRUE. I can't figure out how you can get from
((\T F -> T) (\T F -> T))
to
(\F T F -> T)
, not
(\F -> (\T F -> T))
(Here \ stands for the λ symbol.)

(\F T F -> T)
and
(\F -> (\T F -> T))
are the same thing.
https://en.wikipedia.org/wiki/Lambda_calculus_definition#Notation:
Outermost parentheses are dropped: M N instead of (M N)
[...]
The body of an abstraction extends as far right as possible: λx. M N means λx. (M N) and not (λx. M) N
A sequence of abstractions is contracted: λx. λy. λz. N is abbreviated as λxyz. N
In particular,
(\F -> (\T F -> T))
can be written
(\F -> \T F -> T)
because we can drop redundant parentheses and the body of the outer lambda extends as far right as possible, which can then be written
(\F -> \T -> \F -> T)
or
(\F T F -> T)
by the last rule (contraction).
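Putting the three rules together, the full reduction of TRUE applied to TRUE goes like this (a worked trace, spelled out step by step):
(\T F -> T) (\T F -> T)
= (\T -> \F -> T) (\T F -> T)   -- undo the contraction of the outer TRUE
= \F -> (\T F -> T)             -- beta-reduce: substitute the argument for T
= \F -> \T -> \F -> T           -- the body extends as far right as possible
= \F T F -> T                   -- contract the sequence of abstractions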

Related

Prove that ¬P → ( P → ( P → Q)) is a tautology without using truth tables

I can't find a proper formula for this considering it's almost exclusively made up of implications. Can somebody help me?
EDIT: Sorry, I'm new to this site and still learning to use it. I've tried writing (P → Q) as (¬P ∨ Q) and then applying the distributive laws, but I feel like I've reached a dead end.
P → Q is the same as ¬P ∨ Q.
If you substitute in your expression:
P → (P → Q) is the same as ¬P ∨ (¬P ∨ Q)
¬P → (P → (P → Q)) is the same as ¬(¬P) ∨ (¬P ∨ (¬P ∨ Q))
which is the same as P ∨ ¬P ∨ ¬P ∨ Q, which is always true (because P ∨ ¬P is always true).
Simply:
!P -> (P -> (P -> Q)) apply implication
P v (!P v (P -> Q))   P v !P is T
T v (...) T v anything is T
T
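As a mechanical sanity check (my own sketch, separate from the algebraic proof, and using the brute-force enumeration the exercise asks you to avoid as a proof technique), the tautology can be confirmed in a few lines of Haskell:
-- enumerate all Boolean assignments of P and Q and check the formula
implies :: Bool -> Bool -> Bool
implies a b = not a || b

tautology :: Bool
tautology = and [ not p `implies` (p `implies` (p `implies` q))
                | p <- [False, True], q <- [False, True] ]
-- tautology == True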

Sorting a list using a "a -> a -> Maybe Ordering" function

Is there a variant of
sortBy :: (a -> a -> Ordering) -> [a] -> [a]
(in Data.List) that allows me to use a a -> a -> Maybe Ordering sorting function instead of a -> a -> Ordering?
What this variant would do is this:
sortBy' :: (a -> a -> Maybe Ordering) -> [a] -> Maybe [a]
If a -> a -> Maybe Ordering ever returns Nothing when it's called during the sort, sortBy' would return Nothing. Otherwise it would return the sorted list wrapped in Just.
If such a variant is not already available, can you please help me construct one? (Preferably one that is at least as efficient as sortBy.)
You can adapt quicksort:
quickSortBy :: (a -> a -> Maybe Ordering) -> [a] -> Maybe [a]
quickSortBy f [] = Just []
quickSortBy f (x:xs) = do
  comparisons <- fmap (zip xs) $ mapM (f x) xs
  sortLesser  <- quickSortBy f . map fst $ filter ((`elem` [GT, EQ]) . snd) comparisons
  sortUpper   <- quickSortBy f . map fst $ filter ((== LT) . snd) comparisons
  return $ sortLesser ++ [x] ++ sortUpper
At the least, assume that your sorting predicate f :: a -> a -> Maybe Ordering is antisymmetric: f x y == Just LT if and only if f y x == Just GT. Then, when quickSortBy f returns Just [x1,...,xn], I think you have this guarantee: for all i in [1..n-1], f xi x(i+1) is Just LT or Just EQ.
When, in particular, f is transitive (a partial order), then [x1,...,xn] is totally ordered.
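For instance (an illustrative predicate of my own, not part of the answer), a comparison that only orders values when both are present aborts the sort as soon as a Nothing is involved:
-- hypothetical usage sketch for quickSortBy above
cmpJust :: Maybe Int -> Maybe Int -> Maybe Ordering
cmpJust (Just a) (Just b) = Just (compare a b)
cmpJust _        _        = Nothing

-- quickSortBy cmpJust [Just 3, Just 1, Just 2] == Just [Just 1, Just 2, Just 3]
-- quickSortBy cmpJust [Just 3, Nothing]        == Nothing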

Performance of Foldable's default methods

I've been exploring the Foldable class and also the Monoid class.
Firstly, let's say I want to fold over a list of the Monoid First, like so:
x :: [First a]
fold? mappend mempty x
Then I assume in this case the most appropriate fold would be foldr, as mappend for First is lazy in its second argument.
Conversely, for Last we'd want foldl' (or just foldl; I'm not sure).
Now moving away from lists, I've defined a simple binary tree like so:
{-# LANGUAGE GADTs #-}
data BinaryTree a where
  BinaryTree :: BinaryTree a -> BinaryTree a -> BinaryTree a
  Leaf :: a -> BinaryTree a
And I've made it Foldable with the most straightforward definition:
instance Foldable BinaryTree where
  foldMap f (BinaryTree left right) =
    (foldMap f left) `mappend` (foldMap f right)
  foldMap f (Leaf x) = f x
As Foldable defines fold as simply foldMap id, we can now do:
x1 :: BinaryTree (First a)
fold x1
x2 :: BinaryTree (Last a)
fold x2
Assuming our BinaryTree is balanced and there aren't many Nothing values, these operations should take O(log(n)) time, I believe.
But Foldable also defines a whole lot of default methods like foldl, foldl', foldr and foldr' based on foldMap.
These default definitions seem to work by wrapping one function per element of the collection in the Endo monoid and then composing them all.
For the purpose of this discussion I am not modifying these default definitions.
So lets now consider:
x1 :: BinaryTree (First a)
foldr mappend mempty x1
x2 :: BinaryTree (Last a)
foldl mappend mempty x2
Does running these retain O(log(n)) performance of the ordinary fold? (I'm not worried about constant factors for the moment). Does laziness result in the tree not needing to be fully traversed? Or will the default definitions of foldl and foldr require an entire traversal of the tree?
I tried to go through the algorithm step by step (much like the Foldr Foldl Foldl' article does), but I ended up completely confusing myself, as this case is a bit more complex: it involves an interaction between Foldable, Monoid and Endo.
So what I'm looking for is an explanation of why (or why not) the default definition of, say, foldr would only take O(log(n)) time on a balanced binary tree like the one above. A step-by-step example like the ones in the Foldr Foldl Foldl' article would be really helpful, but I understand if that's too difficult, as I totally confused myself attempting it.
Yes, it has O(log(n)) best case performance.
Endo is a newtype wrapper around functions of type a -> a, with this Monoid instance:
instance Monoid (Endo a) where
  mempty = Endo id
  Endo f `mappend` Endo g = Endo (f . g)
And the default implementation of foldr in Data.Foldable:
foldr :: (a -> b -> b) -> b -> t a -> b
foldr f z t = appEndo (foldMap (Endo #. f) t) z
The definition of . (function composition), for reference:
(.) f g = \x -> f (g x)
Endo is defined with a newtype constructor, so the wrapper exists only at compile time, not at run time.
The #. operator changes the type of its second operand and discards the first.
The newtype constructor and #. operator guarantee that you can ignore the wrapper when considering performance issues.
So the default implementation of foldr can be reduced to:
-- mappend = (.), mempty = id from instance Monoid (Endo a)
foldr :: (a -> b -> b) -> b -> t a -> b
foldr f z t = foldMap f t z
For your Foldable BinaryTree:
foldr f z t
  = foldMap f t z
  = case t of
      Leaf a         -> f a z
      -- the case we care about:
      BinaryTree l r -> ((foldMap f l) . (foldMap f r)) z
The default lazy evaluation in Haskell is ultimately simple; there are just two rules:
function application first
evaluate the arguments from left to right, and only if their values matter
That makes it easy to trace the evaluation of the last line of the code above:
((foldMap f l) . (foldMap f r)) z
= (\z -> foldMap f l (foldMap f r z)) z
= foldMap f l (foldMap f r z)
-- let z' = foldMap f r z
= foldMap f l z' -- height 1
-- if the branch l is still not a Leaf node
= ((foldMap f ll) . (foldMap f lr)) z'
= (\z -> foldMap f ll (foldMap f lr z)) z'
= foldMap f ll (foldMap f lr z')
-- let z'' = foldMap f lr z'
= foldMap f ll z'' -- height 2
The right branch of the tree is never expanded before the left has been fully expanded, and the trace descends one level after an O(1) amount of function expansion and application; therefore, when it reaches the left-most Leaf node:
= foldMap f leaf@(Leaf a) z_h   -- where h = height of the left-most leaf
= f a z_h
Then f looks at the value a and decides to ignore its second argument (as mappend does for First values); the evaluation short-circuits, giving O(height of the left-most leaf), i.e. O(log(n)) performance when the tree is balanced.
foldl is much the same: it's just foldr with mappend flipped, i.e. O(log(n)) best-case performance with Last.
foldl' and foldr' are different.
foldl' :: (b -> a -> b) -> b -> t a -> b
foldl' f z0 xs = foldr f' id xs z0
  where f' x k z = k $! f z x
At every step of the reduction, the argument is evaluated before the function is applied, so the whole tree must be traversed, i.e. O(n) best-case performance.
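To make the contrast concrete, here is a small sketch of my own (it assumes the BinaryTree type and Foldable instance from the question; infTree and firstJust are names introduced just for this illustration). With a First-style combiner that is lazy in its second argument, foldr terminates even on an infinite tree, while foldl' must traverse every node:
-- an infinite tree whose left-most leaf already holds the answer
infTree :: BinaryTree (Maybe Int)
infTree = BinaryTree (Leaf (Just 1)) infTree

-- mappend for First, without the wrapper: lazy in its second argument
firstJust :: Maybe a -> Maybe a -> Maybe a
firstJust (Just x) _ = Just x
firstJust Nothing  y = y

-- foldr firstJust Nothing infTree  ==>  Just 1  (the right spine is never forced)
-- foldl' firstJust Nothing infTree would loop forever, forcing every node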

Haskell - foldl' in terms of foldr and performance issues

While studying fold in depth with A tutorial on the universality and expressiveness of fold
I found an amazing definition of foldl using foldr:
-- I used one lambda function inside another only to improve readability
foldl :: (b -> a -> b) -> b -> [a] -> b
foldl f z xs = foldr (\x g -> (\a -> g (f a x))) id xs z
After understanding what is going on, I thought I could even use foldr to define foldl', which would be like this:
foldl' :: (b -> a -> b) -> b -> [a] -> b
foldl' f z xs = foldr (\x g -> (\a -> let z' = a `f` x in z' `seq` g z')) id xs z
Which is parallel to this:
foldl' :: (b -> a -> b) -> b -> [a] -> b
foldl' f z (x:xs) = let z' = z `f` x
                    in seq z' $ foldl' f z' xs
foldl' _ z _ = z
It seems that both of them run in constant space (not creating thunks) in simple cases like this:
*Main> foldl' (+) 0 [1..1000000]
500000500000
May I consider both definitions of foldl' equivalent in terms of performance?
In GHC 7.10+, foldl and foldl' are both defined in terms of foldr. The reason they weren't before is that GHC didn't optimize the foldr-based definition well enough for it to participate in foldr/build fusion. But GHC 7.10 introduced a new optimization specifically to allow foldr/build fusion to succeed with foldl or foldl' defined that way.
The big win here is that an expression like foldl' (+) 0 [1..10] can be optimized down to never allocating a (:) constructor at all. And we all know that the absolute fastest garbage collection is when there's no garbage to collect.
See http://www.joachim-breitner.de/publications/CallArity-TFP.pdf for information on the new optimization in GHC 7.10, and why it was necessary.
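To give a feel for what the optimization buys (a rough sketch of my own, not the actual Core GHC emits): after foldr/build fusion and the Call Arity analysis, foldl' (+) 0 [1..n] boils down to a tight strict loop in which no list cell is ever allocated, roughly:
{-# LANGUAGE BangPatterns #-}

-- approximately the loop left over once [1..n] has fused away
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)

-- sumTo 1000000 == 500000500000, matching foldl' (+) 0 [1..1000000]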

Is it possible to make `foldrRanges` as fast as `foldrRange2D`?

This:
foldrRange :: (Int -> t -> t) -> t -> Int -> Int -> t
foldrRange cons nil a b = foldr cons nil [a..b-1]
Defines a function that folds over a list from a to b-1. This:
foldrRange :: (Int -> t -> t) -> t -> Int -> Int -> t
foldrRange cons nil a b = go (b-1) nil where
  go b !r | b < a     = r
          | otherwise = go (b-1) (cons b r)
{-# INLINE foldrRange #-}
is a ~50x faster version due to proper strictness usage (we know the last element, so we can roll like foldl').
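For instance (an illustrative call of my own, not from the question):
-- folds over [0..9] without materializing a list:
-- foldrRange (+) 0 0 10 == sum [0..9] == 45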
This:
foldrRange2D cons nil (ax,ay) (bx,by)
  = foldr cons nil $ do
      y <- [ay..by-1]
      x <- [ax..bx-1]
      return (x,y)
Is a 2D version of foldrRange, i.e., it works over 2D rectangles, so that foldrRange2D (:) [] (0,0) (2,2) == [(0,0),(1,0),(0,1),(1,1)]. This:
foldrRange2D :: ((Int,Int) -> t -> t) -> t -> (Int,Int) -> (Int,Int) -> t
foldrRange2D cons nil (ax,ay) (bx,by) = go (by-1) nil where
  go by !r
    | by < ay   = r
    | otherwise = go (by-1) (foldrRange (\ ax -> cons (ax,by)) r ax bx)
Is, again, an ~50x faster definition due to better strictness usage. Writing foldrRange3D, foldrRange4D, etc., would be cumbersome, so one can generalize it like so:
foldrRangeND :: forall t . ([Int] -> t -> t) -> t -> [Int] -> [Int] -> t
foldrRangeND cons nil as bs = foldr co ni (zip as bs) [] nil where
  co (a,b) tail lis = foldrRange (\ h t -> tail (h:lis) . t) id a b
  ni lis = cons lis
Unfortunately this definition is around 120 times slower than foldrRange2D, as one can verify with this test:
main = do
  let n = 2000
  print $ foldrRange2D (\ (a,b) c -> a+b+c) 0 (0,0) (n,n)
  print $ foldrRangeND (\ [a,b] c -> a+b+c) 0 [0,0] [n,n]
I could probably use ST to get a faster foldrRangeND, but is it possible to do so with recursion alone?
You have an efficient implementation of your algorithm which is inductive on the dimension of the input. Fortunately, you can do that in Haskell!
First, replace lists with vectors indexed by a type-level Nat. This gives us a type to do induction on (it could probably be done with lists... but this is much safer).
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

infixl 5 :<
data Vec (n :: Nat) (a :: *) where
  Nil  :: Vec Z a
  (:<) :: Vec n a -> a -> Vec (S n) a

instance Functor (Vec n) where
  fmap _ Nil       = Nil
  fmap f (xs :< x) = fmap f xs :< f x
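For example (my own illustration), a 2D point is then written with its length carried in the type:
point2 :: Vec (S (S Z)) Int
point2 = Nil :< 3 :< 4   -- (:<) is left-associative, so this is (Nil :< 3) :< 4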
Then your desired function is just the same as the 2D case - just generalize the recursive call:
{-# INLINE foldrRangeN #-}
foldrRangeN :: (Vec n Int -> t -> t) -> t -> Vec n Int -> Vec n Int -> t
foldrRangeN f x Nil Nil = f Nil x
foldrRangeN cons nil (ax :< ay) (bx :< by) = go (by-1) nil where
  go by !r
    | by < ay   = r
    | otherwise = go (by-1) (foldrRangeN (\ ax -> cons (ax :< by)) r ax bx)
Although when I tested the performance, I was disappointed to see it couldn't keep up with the 2D version. The trick seems to be more inlining: by putting the function in a type class, you can get it to inline at each 'dimension' (there must be a better way to do this...).
class FoldrRange n where
  foldrRangeN' :: (Vec n Int -> t -> t) -> t -> Vec n Int -> Vec n Int -> t

instance FoldrRange Z where
  {-# INLINE foldrRangeN' #-}
  foldrRangeN' f x Nil Nil = f Nil x

instance FoldrRange n => FoldrRange (S n) where
  {-# INLINE foldrRangeN' #-}
  foldrRangeN' cons nil (ax :< ay) (bx :< by) = go (by-1) nil where
    go by !r
      | by < ay   = r
      | otherwise = go (by-1) (foldrRangeN' (\ ax -> cons (ax :< by)) r ax bx)
Tested as follows:
import System.Environment (getArgs)

main = do
  i:n':_ <- getArgs
  let n  = read n' :: Int
      rs = [ foldrRange2D (\ (a,b) c -> a+b+c) 0 (0,0) (n,n)
           , foldrRangeND (\ [a,b] c -> a+b+c) 0 [0,0] [n,n]
           , foldrRangeN  (\ (Nil :< a :< b) c -> a+b+c) 0 (Nil :< 0 :< 0) (Nil :< n :< n)
           , foldrRangeN' (\ (Nil :< a :< b) c -> a+b+c) 0 (Nil :< 0 :< 0) (Nil :< n :< n)
           ]
  print $ rs !! read i
and the results on my system:
./test 0 4000 +RTS -s : 0.02s
./test 1 4000 +RTS -s : 7.63s
./test 2 4000 +RTS -s : 0.59s
./test 3 4000 +RTS -s : 0.03s
