edit distance algorithm in Haskell - performance tuning

I'm trying to implement the Levenshtein distance (or edit distance) in Haskell, but its performance decreases rapidly when the string length increases.
I'm still quite new to Haskell, so it would be nice if you could give me some advice on how I could improve the algorithm. I already tried to "precompute" values (the inits), but since it didn't change anything I reverted that change.
I know there's already an editDistance implementation on Hackage, but I need it to work on lists of arbitrary tokens, not necessarily strings. Also, I find it a bit complicated, at least compared to my version.
So, here's the code:
-- standard levenshtein distance between two lists
editDistance :: Eq a => [a] -> [a] -> Int
editDistance s1 s2 = editDistance' 1 1 1 s1 s2
-- weighted levenshtein distance
-- ins, sub and del are the costs for the various operations
editDistance' :: Eq a => Int -> Int -> Int -> [a] -> [a] -> Int
editDistance' _ _ ins s1 [] = ins * length s1
editDistance' _ _ ins [] s2 = ins * length s2
editDistance' del sub ins s1 s2
  | last s1 == last s2 = editDistance' del sub ins (init s1) (init s2)
  | otherwise = minimum [ editDistance' del sub ins s1 (init s2)        + del -- deletion
                        , editDistance' del sub ins (init s1) (init s2) + sub -- substitution
                        , editDistance' del sub ins (init s1) s2        + ins -- insertion
                        ]
It seems to be a correct implementation, at least it gives exactly the same results as this online tool.
Thanks in advance for your help! If you need any additional information, please let me know.
Greetings,
bzn

Ignoring that this is a bad algorithm (it should be memoizing, I'll get to that second)...
Use O(1) Primitives and not O(n)
One problem is you use a whole bunch of calls that are O(n) for lists (Haskell lists are singly linked lists). A better data structure would give you O(1) operations; I used Vector:
import qualified Data.Vector as V
-- standard levenshtein distance between two lists
editDistance :: Eq a => [a] -> [a] -> Int
editDistance s1 s2 = editDistance' 1 1 1 (V.fromList s1) (V.fromList s2)
-- weighted levenshtein distance
-- ins, sub and del are the costs for the various operations
editDistance' :: Eq a => Int -> Int -> Int -> V.Vector a -> V.Vector a -> Int
editDistance' del sub ins s1 s2
  | V.null s2 = ins * V.length s1
  | V.null s1 = ins * V.length s2
  | V.last s1 == V.last s2 = editDistance' del sub ins (V.init s1) (V.init s2)
  | otherwise = minimum [ editDistance' del sub ins s1 (V.init s2)          + del -- deletion
                        , editDistance' del sub ins (V.init s1) (V.init s2) + sub -- substitution
                        , editDistance' del sub ins (V.init s1) s2          + ins -- insertion
                        ]
The operations that are O(n) for lists include init, length, and last (though init is able to be lazy at least). All these operations are O(1) using Vector.
While real benchmarking should use Criterion, a quick and dirty benchmark:
str2 = replicate 15 'a' ++ replicate 25 'b'
str1 = replicate 20 'a' ++ replicate 20 'b'
main = print $ editDistance str1 str2
shows the vector version takes 0.09 seconds while strings take 1.6 seconds, so we saved about an order of magnitude without even looking at your editDistance algorithm.
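If you want proper statistics rather than eyeballing wall-clock time, a minimal Criterion harness (my own sketch, not part of the original post; it assumes editDistance, str1 and str2 from above are in scope) looks like:
import Criterion.Main

main :: IO ()
main = defaultMain
  [ bench "vector editDistance" $ whnf (editDistance str1) str2 ]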
Now what about memoizing results?
The bigger issue is obviously the need for memoization. I took this as an opportunity to learn the monad-memo package - my god is that awesome! For one extra constraint (you need Ord a), you get memoization basically for no effort. The code:
import qualified Data.Vector as V
import Control.Monad.Memo
-- standard levenshtein distance between two lists
editDistance :: (Eq a, Ord a) => [a] -> [a] -> Int
editDistance s1 s2 = startEvalMemo $ editDistance' (1, 1, 1, (V.fromList s1), (V.fromList s2))
-- weighted levenshtein distance
-- ins, sub and del are the costs for the various operations
editDistance' :: (MonadMemo (Int, Int, Int, V.Vector a, V.Vector a) Int m, Eq a) => (Int, Int, Int, V.Vector a, V.Vector a) -> m Int
editDistance' (del, sub, ins, s1, s2)
  | V.null s2 = return $ ins * V.length s1
  | V.null s1 = return $ ins * V.length s2
  | V.last s1 == V.last s2 = memo editDistance' (del, sub, ins, (V.init s1), (V.init s2))
  | otherwise = do
      r1 <- memo editDistance' (del, sub, ins, s1, (V.init s2))
      r2 <- memo editDistance' (del, sub, ins, (V.init s1), (V.init s2))
      r3 <- memo editDistance' (del, sub, ins, (V.init s1), s2)
      return $ minimum [ r1 + del -- deletion
                       , r2 + sub -- substitution
                       , r3 + ins -- insertion
                       ]
You see how the memoization needs a single "key" (see the MonadMemo class)? I packaged all the arguments as a big ugly tuple. It also needs one "value", which is your resulting Int. Then it's just plug and play using the "memo" function for the values you want to memoize.
For benchmarking I used a shorter, but larger-distance, string:
$ time ./so # the memoized vector version
12
real 0m0.003s
$ time ./so3 # the non-memoized vector version
12
real 1m33.122s
Don't even think about running the non-memoized string version, I figure it would take around 15 minutes at a minimum. As for me, I now love monad-memo - thanks for the package Eduard!
EDIT: The difference between String and Vector isn't as much in the memoized version, but still grows to a factor of 2 when the distance gets to around 200, so it's still worthwhile.
EDIT: Perhaps I should explain why the bigger issue is "obviously" memoizing results. Well, if you look at the heart of the original algorithm:
[ editDistance' ... s1 (V.init s2) + del
, editDistance' ... (V.init s1) (V.init s2) + sub
, editDistance' ... (V.init s1) s2 + ins]
It's quite clear a call of editDistance' s1 s2 results in 3 calls to editDistance'... each of which calls editDistance' three more times... and three more times... and AHHH! Exponential explosion! Luckily most of the calls are identical! For example (using --> for "calls" and eD for editDistance'):
eD s1 s2 --> eD s1 (init s2)                      -- The parent
           , eD (init s1) s2
           , eD (init s1) (init s2)

eD (init s1) s2 --> eD (init s1) (init s2)        -- The first "child"
                  , eD (init (init s1)) s2
                  , eD (init (init s1)) (init s2)

eD s1 (init s2) --> eD s1 (init (init s2))
                  , eD (init s1) (init s2)
                  , eD (init s1) (init (init s2))
Just by considering the parent and two immediate children we can see the call eD (init s1) (init s2) is made three times. The other children share calls with the parent too, and all children share many calls with each other (and their children, cue Monty Python skit).
It would be a fun, perhaps instructive, exercise to make a runMemo like function that returns the number of cached results used.
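As a standalone sketch of that idea (my own toy memoizer over Data.Map, not monad-memo's actual runMemo), one can thread a cache and a hit counter through State:
import qualified Data.Map as M
import Control.Monad.State

-- Memoize an open-recursive function and report how many cache hits occurred.
-- usage: memoCount (\rec n -> if n < 2 then return n
--                             else (+) <$> rec (n-1) <*> rec (n-2)) (30 :: Integer)
--        returns fib 30 together with the number of cache hits
memoCount :: Ord k
          => ((k -> State (M.Map k v, Int) v) -> k -> State (M.Map k v, Int) v)
          -> k -> (v, Int)
memoCount f k0 = (v, hits)
  where
    (v, (_, hits)) = runState (go k0) (M.empty, 0)
    go k = do
      (cache, n) <- get
      case M.lookup k cache of
        Just v' -> put (cache, n + 1) >> return v'
        Nothing -> do
          v' <- f go k
          modify (\(c, h) -> (M.insert k v' c, h))
          return v'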

You need to memoize editDistance'. There are many ways of doing this, e.g., a recursively defined array.
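For instance, here is a rough sketch of the "recursively defined array" approach for the plain, unweighted distance (my own illustration, not the asker's weighted variant): the array d is defined in terms of itself, and laziness fills it in on demand.
import Data.Array

editDistanceArr :: Eq a => [a] -> [a] -> Int
editDistanceArr xs ys = d ! (m, n)
  where
    m = length xs
    n = length ys
    xa = listArray (1, m) xs
    ya = listArray (1, n) ys
    -- d ! (i, j) is the distance between the first i tokens of xs and the first j of ys
    d = array ((0, 0), (m, n))
          [ ((i, j), cell i j) | i <- [0 .. m], j <- [0 .. n] ]
    cell i 0 = i
    cell 0 j = j
    cell i j = minimum
      [ d ! (i - 1, j)     + 1
      , d ! (i, j - 1)     + 1
      , d ! (i - 1, j - 1) + if xa ! i == ya ! j then 0 else 1
      ]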

As already mentioned, memoization is what you need. In addition, you are looking at the edit distance from right to left, which isn't very efficient with strings, and the edit distance is the same regardless of direction. That is: editDistance (reverse a) (reverse b) == editDistance a b
For solving the memoization part there are very many libraries that can help you. In my example below I chose MemoTrie since it is quite easy to use and performs well here.
import Data.MemoTrie(memo2)
editDistance' del sub ins = memf
  where
    memf = memo2 f
    f s1 [] = ins * length s1
    f [] s2 = ins * length s2
    f (x:xs) (y:ys)
      | x == y    = memf xs ys
      | otherwise = minimum [ del + memf xs (y:ys),
                              sub + memf (x:xs) ys,
                              ins + memf xs ys ]
As you can see, all you need is to add the memoization. The rest is the same, except that we start from the beginning of the list instead of the end.
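A quick sanity check (my own example, not from the answer):
ghci> editDistance' 1 1 1 "kitten" "sitting"
3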

I know there's already an editDistance implementation on Hackage, but I need it to work on lists of arbitrary tokens, not necessarily strings
Are there a finite number of tokens? I'd suggest you try simply devising a mapping from token to character. There are 10,646 characters at your disposal, after all.
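A minimal sketch of that idea (my own illustration, assuming the token set is finite and small enough to fit into Char): build an injective token-to-Char table and reuse any String-based edit distance.
import Data.List (nub)
import qualified Data.Map as M

tokenDistance :: Ord a => (String -> String -> Int) -> [a] -> [a] -> Int
tokenDistance strDist xs ys = strDist (map enc xs) (map enc ys)
  where
    -- each distinct token gets its own character
    table = M.fromList (zip (nub (xs ++ ys)) [toEnum 0 ..])
    enc t = table M.! t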

This version is much quicker than those memoized versions, but I would still love to have it even quicker. It works fine with strings hundreds of characters long.
It was written with other distances in mind (change the ini function and the cost) and uses the classical dynamic programming array trick.
The long line could be converted into a separate function under the top-level do, but I like it this way.
import Data.Array.IO
import System.IO.Unsafe
editDistance = dist ini med
dist :: (Int -> Int -> Int) -> (a -> a -> Int ) -> [a] -> [a] -> Int
dist i f a b = unsafePerformIO $ distM i f a b
-- easy to create other distances
ini i 0 = i
ini 0 j = j
ini _ _ = 0
med a b = if a == b then 0 else 2
distM :: (Int -> Int -> Int) -> (a -> a -> Int) -> [a] -> [a] -> IO Int
distM ini f a b = do
  let la = length a
  let lb = length b
  arr <- newListArray ((0,0),(la,lb)) [ini i j | i <- [0..la], j <- [0..lb]] :: IO (IOArray (Int,Int) Int)
  -- all on one line
  mapM_ (\(i,j) -> readArray arr (i-1,j-1) >>= \ld -> readArray arr (i-1,j) >>= \l -> readArray arr (i,j-1) >>= \d -> writeArray arr (i,j) $ minimum [l+1, d+1, ld + (f (a !! (i-1)) (b !! (j-1)))]) [(i,j) | i <- [1..la], j <- [1..lb]]
  readArray arr (la,lb)

People are recommending that you use generic memoization libraries, but for the simple task of defining the Levenshtein distance, plain dynamic programming is more than enough.
A very simple polymorphic list-based implementation:
distance s t =
    d !! length s !! length t
  where
    d = [ [ dist m n | n <- [0..length t] ] | m <- [0..length s] ]
    dist i 0 = i
    dist 0 j = j
    dist i j = minimum [ d !! (i-1) !! j + 1
                       , d !! i !! (j-1) + 1
                       , d !! (i-1) !! (j-1) + (if s !! (i-1) == t !! (j-1)
                                                  then 0 else 1)
                       ]
Or if you need real speed on long sequences, you can use a mutable array:
import Data.Array
import qualified Data.Array.Unboxed as UA
import Data.Array.ST
import Control.Monad.ST
-- Mutable unboxed and immutable boxed arrays
distance :: Eq a => [a] -> [a] -> Int
distance s t = d UA.! (ls , lt)
  where
    s' = array (0,ls) [ (i,x) | (i,x) <- zip [0..] s ]
    t' = array (0,lt) [ (i,x) | (i,x) <- zip [0..] t ]
    ls = length s
    lt = length t
    (l,h) = ((0,0),(length s,length t))
    d = runSTUArray $ do
          m <- newArray (l,h) 0
          for_ [0..ls] $ \i -> writeArray m (i,0) i
          for_ [0..lt] $ \j -> writeArray m (0,j) j
          for_ [1..lt] $ \j -> do
            for_ [1..ls] $ \i -> do
              let c = if s' ! (i-1) == t' ! (j-1)
                        then 0 else 1
              x <- readArray m (i-1,j)
              y <- readArray m (i,j-1)
              z <- readArray m (i-1,j-1)
              writeArray m (i,j) $ minimum [x+1, y+1, z+c]
          return m
    for_ xs f = mapM_ f xs
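A quick usage check of either version above (my own example, not from the answer):
ghci> distance "kitten" "sitting"
3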

Related

Haskell performance using dynamic programming

I am attempting to calculate the Levenshtein distance between two strings using dynamic programming. This is being done through Hackerrank, so I have timing constraints. I used a technique I saw in: How are Dynamic Programming algorithms implemented in idiomatic Haskell? and it seems to be working. Unfortunately, it is timing out in one test case. I do not have access to the specific test case, so I don't know the exact size of the input.
import Control.Monad
import Data.Array.IArray
import Data.Array.Unboxed
main = do
  n <- readLn
  replicateM_ n $ do
    s1 <- getLine
    s2 <- getLine
    print $ editDistance s1 s2

editDistance :: String -> String -> Int
editDistance s1 s2 = dynamic editDistance' (length s1, length s2)
  where
    s1' :: UArray Int Char
    s1' = listArray (1,length s1) s1
    s2' :: UArray Int Char
    s2' = listArray (1,length s2) s2
    editDistance' table (i,j)
      | min i j == 0 = max i j
      | otherwise = min' (table!((i-1),j) + 1) (table!(i,(j-1)) + 1) (table!((i-1),(j-1)) + cost)
      where
        cost = if s1'!i == s2'!j then 0 else 1
        min' a b = min (min a b)

dynamic :: (Array (Int,Int) Int -> (Int,Int) -> Int) -> (Int,Int) -> Int
dynamic compute (xBnd, yBnd) = table!(xBnd,yBnd)
  where
    table = newTable $ map (\coord -> (coord, compute table coord)) [(x,y) | x <- [0..xBnd], y <- [0..yBnd]]
    newTable xs = array ((0,0),fst (last xs)) xs
I've switched to using arrays, but that speed up was insufficient. I cannot use Unboxed arrays, because this code relies on laziness. Are there any glaring performance mistakes I have made? Or how else can I speed it up?
The backward equations for edit distance calculations are:
f(i, j) = minimum [
    1 + f(i + 1, j),                                  -- delete from the 1st string
    1 + f(i, j + 1),                                  -- delete from the 2nd string
    f(i + 1, j + 1) + if a(i) == b(j) then 0 else 1   -- substitute or match
  ]
So within each dimension, you need nothing more than the very next index: + 1. This is a sequential access pattern, not the kind of random access that would require arrays, and it can be implemented using lists and nested right folds:
editDistance :: Eq a => [a] -> [a] -> Int
editDistance a b = head . foldr loop [n, n - 1 .. 0] $ zip a [m, m - 1 ..]
  where
    (m, n) = (length a, length b)
    loop (s, l) lst = foldr go [l] $ zip3 b lst (tail lst)
      where
        go (t, i, j) acc@(k:_) = inc `seq` inc : acc
          where inc = minimum [i + 1, k + 1, if s == t then j else j + 1]
You may test this code in Hackerrank Edit Distance Problem as in:
import Control.Applicative ((<$>))
import Control.Monad (replicateM_)
import Text.Read (readMaybe)
editDistance :: Eq a => [a] -> [a] -> Int
editDistance a b = ... -- as implemented above
main :: IO ()
main = do
  Just n <- readMaybe <$> getLine
  replicateM_ n $ do
    a <- getLine
    b <- getLine
    print $ editDistance a b
which passes all tests with a decent performance.

Generic algorithm to enumerate sum and product types on Haskell?

Some time ago, I've asked how to map back and forth from Gödel numbers to terms of a context-free language. While the answer solved the issue specifically, I'm having trouble actually programming it generically. So, this question is more generic: given a recursive algebraic data type with terminals, sums and products - such as
data Term = Prod Term Term | SumL Term | SumR Term | AtomA | AtomB
what is an algorithm that will map a term of this type to its Gödel number, and its inverse?
Edit: for example:
data Foo = A | B Foo | C Foo deriving Show
to :: Foo -> Int
to A = 1
to (B x) = to x * 2
to (C x) = to x * 2 + 1
from :: Int -> Foo
from 1 = A
from n = case mod n 2 of
  0 -> B (from (div n 2))
  1 -> C (from (div n 2))
Here, to and from do what I want for Foo. I'm just asking for a systematic way to derive those functions for any datatype.
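For instance, a quick round-trip check of the code above:
ghci> to (C (B A))
5
ghci> from 5
C (B A)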
In order to avoid dealing with a particular Goedel numbering, let's define a class that'll abstract the necessary operations (with some imports we'll need later):
{-# LANGUAGE TypeOperators, DefaultSignatures, FlexibleContexts, DeriveGeneric #-}
import Control.Applicative
import GHC.Generics
import Test.QuickCheck
import Test.QuickCheck.Gen
class GodelNum a where
  fromInt :: Integer -> a
  toInt :: a -> Maybe Integer
  encode :: [a] -> a
  decode :: a -> [a]
So we can inject natural numbers and encode sequences. Let's further create a canonical instance of this class that we'll use throughout the code, which does no real Goedel encoding and just constructs a tree of terms.
data TermNum = Value Integer | Complex [TermNum]
  deriving (Show)

instance GodelNum TermNum where
  fromInt = Value
  toInt (Value x) = Just x
  toInt _ = Nothing
  encode = Complex
  decode (Complex xs) = xs
  decode _ = []
For real encoding we'd use another implementation that'd use just one Integer, something like newtype SomeGoedelNumbering = SGN Integer.
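A rough sketch of what such an instance could look like (my own, not from the original answer): encode lists into a single Integer with the Cantor pairing function, and use the low bit to tag whether a number is an injected integer or an encoded list. The floating-point isqrt is only adequate for small numbers; a real version should use an exact integer square root.
newtype SomeGoedelNumbering = SGN Integer deriving (Show)

cantorPair :: Integer -> Integer -> Integer
cantorPair m n = (m + n) * (m + n + 1) `div` 2 + m

cantorUnpair :: Integer -> (Integer, Integer)
cantorUnpair p = (m, n)
  where
    isqrt = floor . sqrt . fromIntegral
    base  = (isqrt (1 + 8 * p) - 1) `div` 2
    m     = p - base * (base + 1) `div` 2
    n     = base - m

listToInt :: [Integer] -> Integer
listToInt []     = 0
listToInt (x:xs) = 1 + cantorPair x (listToInt xs)

intToList :: Integer -> [Integer]
intToList 0 = []
intToList k = let (x, rest) = cantorUnpair (k - 1) in x : intToList rest

instance GodelNum SomeGoedelNumbering where
  fromInt n = SGN (2 * n)                               -- even numbers carry injected integers
  toInt (SGN k) | even k    = Just (k `div` 2)
                | otherwise = Nothing
  encode xs = SGN (2 * listToInt [k | SGN k <- xs] + 1) -- odd numbers carry encoded lists
  decode (SGN k) | odd k     = map SGN (intToList (k `div` 2))
                 | otherwise = []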
Let's further create a class for types that we can encode/decode:
class GNum a where
  gto :: (GodelNum g) => a -> g
  gfrom :: (GodelNum g) => g -> Maybe a
  default gto :: (Generic a, GodelNum g, GGNum (Rep a)) => a -> g
  gto = ggto . from
  default gfrom :: (Generic a, GodelNum g, GGNum (Rep a)) => g -> Maybe a
  gfrom = liftA to . ggfrom
The last four lines define a generic implementation of gto and gfrom using GHC Generics and DefaultSignatures. The class GGNum that they use is a helper class which we'll use to define encoding for the atomic ADT operations - products, sums, etc.:
class GGNum f where
  ggto :: (GodelNum g) => f a -> g
  ggfrom :: (GodelNum g) => g -> Maybe (f a)
-- no-arg constructors
instance GGNum U1 where
  ggto U1 = encode []
  ggfrom _ = Just U1

-- products
instance (GGNum a, GGNum b) => GGNum (a :*: b) where
  ggto (a :*: b) = encode [ggto a, ggto b]
  ggfrom e | [x, y] <- decode e = liftA2 (:*:) (ggfrom x) (ggfrom y)
           | otherwise          = Nothing

-- sums
instance (GGNum a, GGNum b) => GGNum (a :+: b) where
  ggto (L1 x) = encode [fromInt 0, ggto x]
  ggto (R1 y) = encode [fromInt 1, ggto y]
  ggfrom e | [n, x] <- decode e = case toInt n of
                                    Just 0 -> L1 <$> ggfrom x
                                    Just 1 -> R1 <$> ggfrom x
                                    _      -> Nothing

-- metadata
instance (GGNum a) => GGNum (M1 i c a) where
  ggto (M1 x) = ggto x
  ggfrom e = M1 <$> ggfrom e

-- constants and recursion of kind *
instance (GNum a) => GGNum (K1 i a) where
  ggto (K1 x) = gto x
  ggfrom e = K1 <$> gfrom e
Having that, we can then define a data type like yours and just declare its GNum instance; everything else will be automatically derived.
data Term = Prod Term Term | SumL Term | SumR Term | AtomA | AtomB
deriving (Eq, Show, Generic)
instance GNum Term where
And just to be sure we've done everything right, let's use QuickCheck to verify that our gfrom is an inverse of gto:
instance Arbitrary Term where
  arbitrary = oneof [ return AtomA
                    , return AtomB
                    , SumL <$> arbitrary
                    , SumR <$> arbitrary
                    , Prod <$> arbitrary <*> arbitrary
                    ]
prop_enc_dec :: Term -> Property
prop_enc_dec x = Just x === gfrom (gto x :: TermNum)
main :: IO ()
main = quickCheck prop_enc_dec
Notes:
The same thing could be accomplished using Scrap Your Boilerplate, perhaps more efficiently, as it allows somewhat higher-level access - enumerating constructors and records, etc.
See also the paper Efficient Bijective Gödel Numberings for Term Algebras (I haven't read the paper yet, but it seems related).
For fun, I decided to try the approach in the link you posted, and didn't get stuck anywhere. So here's my code, with no commentary (the explanation is the same as the last time). First, code stolen from the other answer:
{-# LANGUAGE TypeSynonymInstances #-}
import Control.Applicative
import Data.Universe.Helpers
type Nat = Integer
class Godel a where
  to :: a -> Nat
  from :: Nat -> a

instance Godel Nat where to = id; from = id

instance (Godel a, Godel b) => Godel (a, b) where
  to (m_, n_) = (m + n) * (m + n + 1) `quot` 2 + m where
    m = to m_
    n = to n_
  from p = (from m, from n) where
    isqrt = floor . sqrt . fromIntegral
    base = (isqrt (1 + 8 * p) - 1) `quot` 2
    triangle = base * (base + 1) `quot` 2
    m = p - triangle
    n = base - m
And the code specific to your new type:
data Term = Prod Term Term | SumL Term | SumR Term | AtomA | AtomB
deriving (Eq, Ord, Read, Show)
ts = AtomA : AtomB : interleave [uncurry Prod <$> ts +*+ ts, SumL <$> ts, SumR <$> ts]
instance Godel Term where
  to AtomA = 0
  to AtomB = 1
  to (Prod t1 t2) = 2 + 0 + 3 * to (t1, t2)
  to (SumL t) = 2 + 1 + 3 * to t
  to (SumR t) = 2 + 2 + 3 * to t
  from 0 = AtomA
  from 1 = AtomB
  from n = case quotRem (n-2) 3 of
    (q, 0) -> uncurry Prod (from q)
    (q, 1) -> SumL (from q)
    (q, 2) -> SumR (from q)
The same ghci test as last time:
*Main> take 30 (map from [0..]) == take 30 ts
True

Generality of `foldr` or other higher order function

Here's a simple function that takes a list and a number and works out whether the length of the list is less than, equal to, or greater than that number.
e.g.
compareLengthTo [1,2,3] 3 == EQ
compareLengthTo [1,2] 3 == LT
compareLengthTo [1,2,3,4] 3 == GT
compareLengthTo [1..] 3 == GT
Note that it has two properties:
It works for infinite lists.
It is tail recursive and uses constant space.
import Data.Ord
compareLengthTo :: [a] -> Int -> Ordering
compareLengthTo l n = f 0 l
  where
    f c [] = c `compare` n
    f c (l:ls) | c > n     = GT
               | otherwise = f (c + 1) ls
Is there a way to write compareLengthTo using foldr only?
Note, here's a version of compareLengthTo using drop:
compareLengthToDrop :: [a] -> Int -> Ordering
compareLengthToDrop l n = f (drop n (undefined:l))
  where
    f []  = LT
    f [_] = EQ
    f _   = GT
I guess another question is then, can you implement drop in terms of foldr?
Here ya go (note: I just changed one comparison, which makes it lazier):
compareLengthTo :: [a] -> Int -> Ordering
compareLengthTo l n = foldr f (`compare` n) l 0
  where
    f l cont c | c >= n    = GT
               | otherwise = cont $! c + 1
This uses exactly the same sort of technique used to implement foldl in terms of foldr. There's a classic article about the general technique called A tutorial on the universality and expressiveness of fold. You can also see a step-by-step explanation I wrote on the Haskell Wiki.
To get you started, note that foldr is being applied to four arguments here, rather than the usual three. This works out because the function being folded takes three arguments, and the "base case" is a function, (`compare` n).
Edit
If you want to use lazy Peano numerals as J. Abrahamson does, you can count down instead of counting up.
compareLengthTo :: [a] -> Nat -> Ordering
compareLengthTo l n = foldr f final l n
  where
    f _ _ Zero = GT
    f _ cont (Succ p) = cont p
    final Zero = EQ
    final _ = LT
By its very definition, foldr is not tail-recursive:
-- slightly simplified
foldr :: (a -> r -> r) -> r -> ([a] -> r)
foldr cons nil [] = nil
foldr cons nil (a:as) = cons a (foldr cons nil as)
so you cannot achieve that end. That said, there are some attractive components of foldr's semantics. In particular, it is "productive" which allows folds written with foldr to behave nicely with laziness.
We can see foldr as saying how to break down (catalyze) a list one "layer" at a time. If the cons argument can return without caring about any further layers of the list then it can terminate early and we avoid ever having to examine any more tails of the list---this is how foldr can act non-strictly at times.
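A tiny illustration of that early termination (my own example, not from the answer):
ghci> foldr (\x r -> x > 3 || r) False [1..]
True
The fold finishes on the infinite list because (||) never forces the rest of the fold once it reaches 4.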
Your function, to work on infinite lists, does something similar to the numeric argument. We'd like to operate on that argument "layer by layer". To make this more clear, let's define the naturals as follows
data Nat = Zero | Succ Nat
Now "layer by layer" more clearly means "counting down to zero". We can formalize that notion like so:
foldNat :: (r -> r) -> r -> (Nat -> r)
foldNat succ zero Zero = zero
foldNat succ zero (Succ n) = succ (foldNat succ zero n)
and now we can define something a bit like what we're looking for
compareLengthTo :: Nat -> [a] -> Ordering
compareLengthTo = foldNat succ zero where
  zero :: [a] -> Ordering
  zero [] = EQ -- we emptied the list and the nat at the same time
  zero _  = GT -- we're done with the nat, but more list remains
  succ :: ([a] -> Ordering) -> ([a] -> Ordering)
  succ continue []     = LT -- we ran out of list, but had more nat
  succ continue (_:as) = continue as -- keep going, both nat and list remain
It can take some time to study the above to see how it works. In particular, note that I instantiated r as a function, [a] -> Ordering. The form of the function above is "recursion on the natural numbers" and it allows it to accept infinite lists so long as the Nat argument isn't...
infinity :: Nat
infinity = Succ infinity
Now, the above function works on this strange type, Nat, which models the non-negative integers. We can translate the same concept to Int by replacing foldNat with foldInt, written similarly:
foldInt :: (r -> r) -> r -> (Int -> r)
foldInt succ zero 0 = zero
foldInt succ zero n = succ (foldInt succ zero (n - 1))
which you can verify embodies the exact same pattern as foldNat but avoids the use of the awkward Succ and Zero constructors. You can also verify that foldInt behaves pathologically if we give it negative integers... which is about what we'd expect.
I have to participate in this coding competition:
"Prelude":
import Test.QuickCheck
import Control.Applicative
compareLengthTo :: [a] -> Int -> Ordering
compareLengthTo l n = f 0 l
  where
    f c [] = c `compare` n
    f c (l:ls) | c > n     = GT
               | otherwise = f (c + 1) ls
My first attempt was to write this
compareLengthTo1 :: [a] -> Int -> Ordering
compareLengthTo1 l n = g $ foldr f (Just n) l
  where
    -- we go below zero
    f _ (Just 0) = Nothing
    f _ (Just n) = Just (n - 1)
    f _ Nothing  = Nothing
    g (Just 0) = EQ
    g (Just _) = LT
    g Nothing  = GT
And it works for finite arguments:
prop1 :: [()] -> NonNegative Int -> Property
prop1 l (NonNegative n) = compareLengthTo l n === compareLengthTo1 l n
-- >>> quickCheck prop1
-- +++ OK, passed 100 tests.
But it fails for infinite lists. Why?
Let's define a variant using Peano naturals:
data Nat = Zero | Succ Nat
foldNat :: (r -> r) -> r -> (Nat -> r)
foldNat succ zero Zero = zero
foldNat succ zero (Succ n) = succ (foldNat succ zero n)
natFromInteger :: Integer -> Nat
natFromInteger 0 = Zero
natFromInteger n = Succ (natFromInteger (n - 1))
natToIntegral :: Integral a => Nat -> a
natToIntegral = foldNat (1+) 0
instance Arbitrary Nat where
  arbitrary = natFromInteger . getNonNegative <$> arbitrary

instance Show Nat where
  show = show . (natToIntegral :: Nat -> Integer)
infinity :: Nat
infinity = Succ infinity
compareLengthTo2 :: [a] -> Nat -> Ordering
compareLengthTo2 l n = g $ foldr f (Just n) l
  where
    f _ (Just Zero)     = Nothing
    f _ (Just (Succ n)) = Just n
    f _ Nothing         = Nothing
    g (Just Zero) = EQ
    g (Just _)    = LT
    g Nothing     = GT
prop2 :: [()] -> Nat -> Property
prop2 l n = compareLengthTo l (natToIntegral n) === compareLengthTo2 l n
-- >>> compareLengthTo2 [] infinity
-- LT
After staring long enough we see that it works for infinite numbers, not infinite lists.
That's why J. Abrahamson used foldNat in his definition.
So if we fold the number argument, we will get a function which works on infinite lists, but only on finite numbers:
compareLengthTo3 :: [a] -> Nat -> Ordering
compareLengthTo3 l n = g $ foldNat f (Just l) n
  where
    f (Just [])     = Nothing
    f (Just (x:xs)) = Just xs
    f Nothing       = Nothing
    g (Just []) = EQ
    g (Just _)  = GT
    g Nothing   = LT
prop3 :: [()] -> Nat -> Property
prop3 l n = compareLengthTo l (natToIntegral n) === compareLengthTo3 l n
nats :: [Nat]
nats = iterate Succ Zero
-- >>> compareLengthTo3 nats (natFromInteger 10)
-- GT
foldr and foldNat are the kind of functions which generalise structural recursion on their argument (catamorphisms). They have the nice property that, given finite inputs and total functions as arguments, they are also total, i.e. they always terminate.
That's why we use foldNat in the last example. We assume that the Nat argument is finite, so compareLengthTo3 works on all [a] - even infinite ones.

Construct infinite sorted list without adding duplicates

I am relatively new to Haskell, but I am trying to learn both by reading and by trying to solve problems on Project Euler. I am currently trying to implement a function that takes an infinite list of integers and returns the ordered list of pairwise sums of elements in said list. I am really looking for solutions to the specific issue I am facing, rather than advice on different strategies or approaches, but those are welcome as well, as being a coder isn't just about knowing how to implement a strategy, but also about choosing the best strategy available.
My approach relies on traversing an infinite list of infinite generators and retrieving elements in order, with several mathematical properties that are useful in implementing my solution.
If I were trying to obtain the sequence of pairwise sums of the natural numbers, for example, this would be my code:
myList :: [Integer]
myList = [1..]
myGens :: [[Integer]]
myGens = gens myList
  where
    gens = \xs -> map (\x -> [x+y | y <- dropWhile (< x) xs]) xs
Regardless of the number set used, provided that it is sorted, the following conditions hold:
∀ i ≥ 0, head (gens xs !! i) == 2*(myList !! i)
∀ i,j,k ≥ 0, l > 0, (((gens xs) !! i) !! j) < (((gens xs) !! i+k) !! j+l)
Special cases for the second condition are:
∀ i,j ≥ 0, (((gens xs) !! i) !! j) < (((gens xs) !! i+1) !! j)
∀ i,j ≥ 0, k > 0, (((gens xs) !! i) !! j) < (((gens xs) !! i+k) !! j)
Here is the particular code I am trying to modify:
stride :: [Integer] -> [Int] -> [[Integer]] -> [Integer]
stride xs cs xss = x : stride xs counts streams
  where
    (x,i) = step xs cs xss
    counts = inc i cs
    streams = chop i xss

step :: [Integer] -> [Int] -> [[Integer]] -> (Integer,Int)
step xs cs xss = pace xs (defer cs xss)

pace :: [Integer] -> [(Integer,Int)] -> (Integer,Int)
pace hs xs@((x,i):xt) = minim (x,i) hs xt
  where
    minim :: (Integer,Int) -> [Integer] -> [(Integer,Int)] -> (Integer,Int)
    minim m _ [] = m
    minim m@(g,i) hs (y@(h,n):ynt) | g > h && 2*(hs !! n) > h = y
                                   | g > h = minim y hs ynt
                                   | 2*(hs !! n) > g = m
                                   | otherwise = minim m hs ynt

defer :: [Int] -> [[a]] -> [(a,Int)]
defer cs xss = (infer (zip cs (zip (map head xss) [0..])))

infer :: [(Int,(a,Int))] -> [(a,Int)]
infer [] = []
infer ((c,xi):xis) | c == 0 = xi:[]
                   | otherwise = xi:(infer (dropWhile (\(p,(q,r)) -> p >= c) xis))
The set in question I am using has the property that multiple distinct pairs produce an identical sum. I want an efficient method of handling all duplicate elements at once, in order to avoid an increased cost of computing all the pairwise sums up to N, as it requires M more tests if M is the number of duplicates.
Does anyone have any suggestions?
EDIT:
I made some changes to the code, independently of what was suggested, and would appreciate feedback on the relative efficiencies of my original code, my revised code, and the proposals so far.
stride :: [Integer] -> [Int] -> [[Integer]] -> [Integer]
stride xs cs xss = x : stride xs counts streams
  where
    (x,is) = step xs cs xss
    counts = foldr (\i -> inc i) cs is
    streams = foldr (\i -> chop i) xss is

step :: [Integer] -> [Int] -> [[Integer]] -> (Integer,[Int])
step xs cs xss = pace xs (defer cs xss)

pace :: [Integer] -> [(Integer,Int)] -> (Integer,[Int])
pace hs xs@((x,i):xt) = minim (x,(i:[])) hs xt
  where
    minim :: (Integer,[Int]) -> [Integer] -> [(Integer,Int)] -> (Integer,[Int])
    minim m _ [] = m
    minim m@(g,is@(i:_)) hs (y@(h,n):ynt) | g > h && 2*(hs !! n) > h = (h,[n])
                                          | g > h = minim (h,[n]) hs ynt
                                          | g == h && 2*(hs !! n) > h = (g,n:is)
                                          | g == h = minim (g,n:is) hs ynt
                                          | g < h && 2*(hs !! n) > g = m
                                          | g < h = minim m hs ynt
Also, I left out the code for inc and chop:
alter :: (a->a) -> Int -> [a] -> [a]
alter = \f -> \n -> \xs -> (take (n) xs) ++ [f (xs !! n)] ++ (drop (n+1) xs)
inc :: Int -> [Int] -> [Int]
inc = alter (1+)
chop :: Int -> [[a]] -> [[a]]
chop = alter (tail)
I'm going to present a solution that uses an infinite pairing heap. We'll have logarithmic overhead per element constructed, but no one knows how to do better (in a model with comparison-based methods and real numbers).
The first bit of code is just the standard pairing heap.
module Queue where
import Data.Maybe (fromMaybe)
data Queue k = E
             | T k [Queue k]
  deriving Show

fromOrderedList :: (Ord k) => [k] -> Queue k
fromOrderedList [] = E
fromOrderedList [k] = T k []
fromOrderedList (k1 : ks'@(k2 : _ks''))
  | k1 <= k2 = T k1 [fromOrderedList ks']

mergePairs :: (Ord k) => [Queue k] -> Queue k
mergePairs [] = E
mergePairs [q] = q
mergePairs (q1 : q2 : qs'') = merge (merge q1 q2) (mergePairs qs'')

merge :: (Ord k) => Queue k -> Queue k -> Queue k
merge (E) q2 = q2
merge q1 (E) = q1
merge q1@(T k1 q1's) q2@(T k2 q2's)
  = if k1 <= k2 then T k1 (q2 : q1's) else T k2 (q1 : q2's)

deleteMin :: (Ord k) => Queue k -> Maybe (k, Queue k)
deleteMin (E) = Nothing
deleteMin (T k q's) = Just (k, mergePairs q's)

toOrderedList :: (Ord k) => Queue k -> [k]
toOrderedList q
  = fromMaybe [] $
      do (k, q') <- deleteMin q
         return (k : toOrderedList q')
Note that fromOrderedList accepts infinite lists. I think that this can be justified theoretically by pretending as though the infinite list of descendants effectively is merged "just in time". This feels like the kind of thing that should be in the literature on purely functional data structures already, but I'm going to be lazy and not look right now.
The function mergeOrderedByMin takes this one step further and merges a potentially infinite list of queues, where the min element in each queue is nondecreasing. I don't think that we can reuse merge, since merge appears to be insufficiently lazy.
mergeOrderedByMin :: (Ord k) => [Queue k] -> Queue k
mergeOrderedByMin [] = E
mergeOrderedByMin (E : qs') = mergeOrderedByMin qs'
mergeOrderedByMin (T k q's : qs')
  = T k (mergeOrderedByMin qs' : q's)
The next function removes duplicates from a sorted list. It's in the library that m09 suggested, but for the sake of completeness, I'll define it here.
nubOrderedList :: (Ord k) => [k] -> [k]
nubOrderedList [] = []
nubOrderedList [k] = [k]
nubOrderedList (k1 : ks'@(k2 : _ks''))
  | k1 < k2  = k1 : nubOrderedList ks'
  | k1 == k2 = nubOrderedList ks'
Finally, we put it all together. I'll use the squares as an example.
squares :: [Integer]
squares = map (^ 2) [0 ..]
sumsOfTwoSquares :: [Integer]
sumsOfTwoSquares
  = nubOrderedList $ toOrderedList $
      mergeOrderedByMin
        [fromOrderedList (map (s +) squares) | s <- squares]
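A quick check of the first few values (my own example, not from the answer):
ghci> take 10 sumsOfTwoSquares
[0,1,2,4,5,8,9,10,13,16]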
If you don't want to modify your code that much, you can use the nub function of Data.List.Ordered (installable by cabal install data-ordlist) to filter duplicates out.
It runs in linear time, i.e. complexity-wise your algorithm won't change.
For your example [1..] the result is just [2..]. A "very smart compiler" could deduce this from the general solution with an implicit heap, which follows.
gens xs is better expressed as
gens xs = map (\t@(x:_) -> map (x+) t) $ tails xs    -- or should it be
       -- map (\(x:ys) -> map (x+) ys) $ tails xs    -- ?
Its resulting list of lists is easily merged without duplicates by tree-like folding[1] (pictured here), with
pairsums xs = foldi (\(x:l) r-> x : union l r) $ gens xs
This assumes the input list is ordered in increasing order. If it's merely in non-decreasing order (with only finite runs of equals in it, of course), you'll need to slap an orderedNub on top of that (as m09 mentions),
pairsums' = orderedNub . pairsums
Just by using foldi where foldr would work, we often get an algorithmic improvement in complexity from a factor of n to log n, a pretty significant speedup. I use it as a general tool all the time.
[1] The code, adjusted for infinite lists only:
foldi f (x:xs) = f x (foldi f (pairs f xs))
pairs f (x:y:t) = f x y : pairs f t
union (x:xs) (y:ys) = case compare x y of
  LT -> x : union xs (y:ys)
  EQ -> x : union xs ys
  GT -> y : union (x:xs) ys
See also:
mergesort as foldtree (by Heinrich Apfelmus)
infinite tree folding (by Dave Bayer)
Implicit Heap (by apfelmus)
I propose to build the pairs above the diagonal, that way a lot of duplicates are not even generated:
sums xs = zipWith (map . (+)) hs ts where
  (hs:ts) = tails xs
Now you have a list of lists, each containing sorted sums. Because they are sorted, it is possible to determine the next element of the sequence in a finite number of steps:
filtermerge :: (Ord a) => [[a]] -> [a]
filtermerge ((h:t):ts) = h : filtermerge (insert t ts) where
  insert [] ts = ts
  insert xs [] = [xs]
  insert h ([]:t) = insert h t
  insert (h:t) ts@((h1:t1):t2)
    | h < h1    = (h:t):ts
    | h == h1   = insert (h:t) $ insert t1 t2
    | otherwise = insert (h1:t1) $ insert (h:t) t2
filtermerge _ = []

Weight-Biased Leftist Heaps: advantages of top-down version of merge?

I am self-studying Okasaki's Purely Functional Data Structures, now on exercise 3.4, which asks to reason about and implement a weight-biased leftist heap. This is my basic implementation:
(* 3.4 (b) *)
functor WeightBiasedLeftistHeap (Element : Ordered) : Heap =
struct
  structure Elem = Element

  datatype Heap = E | T of int * Elem.T * Heap * Heap

  fun size E = 0
    | size (T (s, _, _, _)) = s

  fun makeT (x, a, b) =
    let
      val sizet = size a + size b + 1
    in
      if size a >= size b then T (sizet, x, a, b)
      else T (sizet, x, b, a)
    end

  val empty = E
  fun isEmpty E = true | isEmpty _ = false

  fun merge (h, E) = h
    | merge (E, h) = h
    | merge (h1 as T (_, x, a1, b1), h2 as T (_, y, a2, b2)) =
        if Elem.leq (x, y) then makeT (x, a1, merge (b1, h2))
        else makeT (y, a2, merge (h1, b2))

  fun insert (x, h) = merge (T (1, x, E, E), h)

  fun findMin E = raise Empty
    | findMin (T (_, x, a, b)) = x

  fun deleteMin E = raise Empty
    | deleteMin (T (_, x, a, b)) = merge (a, b)
end
Now, in 3.4 (c) & (d), it asks:
Currently, merge operates in two passes: a top-down pass consisting of calls to merge, and a bottom-up pass consisting of calls to the helper function, makeT. Modify merge to operate in a single, top-down pass. What advantages would the top-down version of merge have in a lazy environment? In a concurrent environment?
I changed the merge function by simply inlining makeT, but I fail to see any advantages, so I think I haven't grasped the spirit of these parts of the exercise. What am I missing?
fun merge (h, E) = h
  | merge (E, h) = h
  | merge (h1 as T (s1, x, a1, b1), h2 as T (s2, y, a2, b2)) =
      let
        val st = s1 + s2
        val (v, a, b) =
          if Elem.leq (x, y) then (x, a1, merge (b1, h2))
          else (y, a2, merge (h1, b2))
      in
        if size a >= size b then T (st, v, a, b)
        else T (st, v, b, a)
      end
I think I've figured out one point with regards to lazy evaluation. If I don't use the recursive merge to calculate the size, then the recursive call won't need to be evaluated until the child is needed:
fun merge (h, E) = h
  | merge (E, h) = h
  | merge (h1 as T (s1, x, a1, b1), h2 as T (s2, y, a2, b2)) =
      let
        val st = s1 + s2
        val (v, ma, mb1, mb2) =
          if Elem.leq (x, y) then (x, a1, b1, h2)
          else (y, a2, h1, b2)
      in
        if size ma >= size mb1 + size mb2
        then T (st, v, ma, merge (mb1, mb2))
        else T (st, v, merge (mb1, mb2), ma)
      end
Is that all? I am not sure about concurrency though.
I think you've essentially got it as far as the lazy evaluation goes -- it's not very helpful to use lazy evaluation if you are going to have to end up traversing the whole data structure to find out anything every time you do a merge...
As to the concurrency, I expect the issue is that if, while one thread is evaluating the merge, another comes along and wants to look something up, it will not be able to get anything useful done at least until the first thread completes the merge. (And it might even take longer than that.)
There isn't any benefit to the WMERGE-3-4C function in a lazy environment. It still does all the work that the original bottom-up merge did. I'm pretty sure it would not be any easier for the language system to memoize.
There is no benefit to the WMERGE-3-4C function in a concurrent environment either. Each call to WMERGE-3-4C does all its work before passing the buck to another instance of WMERGE-3-4C. In fact, if we eliminated the recursion by hand, WMERGE-3-4C could be implemented as a single loop that does all the work while accumulating a stack, then a second loop that does the REDUCE work on the stack. The first loop would not be naturally parallelizable, though perhaps the REDUCE could operate by calling the function on pairs, in parallel, until only one element remained in the list.
