Is there a good strategy for proving the given theorem? - sorting

What strategy should be used to prove this result (stated at the end, with Admitted)? Thanks in advance for any hints. 🙂
Hopefully it is a true theorem. I have already been burned once when my intuition was wrong and a counterexample was found.
Require Import Permutation List Lia FunInd Recdef.
Set Implicit Arguments.
Inductive value := x0 | x1 | x2 | x3 | x4 | x5 | x6 | x7.
Inductive variable := aux | num: value -> variable.
Definition variable_eq_dec (x y: variable): {x=y} + {x<>y}.
Proof.
  destruct x, y.
  + left; auto.
  + right; abstract congruence.
  + right; abstract congruence.
  + destruct v, v0; try (left; abstract congruence); (right; abstract congruence).
Defined.
Inductive assignment := assign: variable -> variable -> assignment.
Inductive comparison := GT: forall (more less: value), comparison.
Inductive step :=
| assignments: forall (L: list assignment), step
| conditional: forall (c: comparison) (positive negative: step), step.
Definition algorithm := list step.
Definition instantation := variable -> nat.
Definition list_of_values (i: instantation) :=
  i (num x0) :: i (num x1) :: i (num x2) :: i (num x3) :: i (num x4) :: i (num x5) :: i (num x6) :: i (num x7) :: nil.
Definition is_permutation (i1 i2: instantation) := Permutation (list_of_values i1) (list_of_values i2).
Definition run_assignment (a: assignment) (i: instantation): instantation :=
  match a with
  | assign v1 v2 => fun x => if variable_eq_dec x v1 then i v2 else i x
  end.

Fixpoint run_assignment_list (L: list assignment): instantation -> instantation :=
  match L with
  | nil => fun i => i
  | a :: l => fun i => run_assignment_list l (run_assignment a i)
  end.

Fixpoint run_step (s: step) (i: instantation): instantation :=
  match s with
  | assignments L => run_assignment_list L i
  | conditional (GT more less) pos neg =>
      if Compare_dec.gt_dec (i (num more)) (i (num less)) then run_step pos i else run_step neg i
  end.

Fixpoint run_algorithm (a: algorithm): instantation -> instantation :=
  match a with
  | nil => fun i => i
  | s :: t => fun i => run_algorithm t (run_step s i)
  end.
Definition permuting_step (s: step) := forall (i: instantation), is_permutation i (run_step s i).
Definition permuting_algorithm (a: algorithm) := forall (i: instantation), is_permutation i (run_algorithm a i).
Theorem permuting_algorithm_aux00 (a: algorithm) (s: step):
permuting_algorithm (s :: a) -> permuting_algorithm a /\ permuting_step s.
Proof.
Admitted.
Edit: Based on a counterexample found by Yves, one should add at least two more conditions.
Fixpoint compact_assignments (a: algorithm): Prop :=
  match a with
  | nil => True
  | assignments L :: assignments L0 :: t => False
  | x :: t => compact_assignments t
  end.

Fixpoint no_useless_comparisons_in_step (s: step): Prop :=
  match s with
  | assignments L => True
  | conditional (GT a b) pos neg =>
      a <> b /\ no_useless_comparisons_in_step pos /\ no_useless_comparisons_in_step neg
  end.
Definition no_useless_comparisons (a: algorithm) := forall x, In x a -> no_useless_comparisons_in_step x.
Definition compact_algorithm (a: algorithm) := compact_assignments a /\ no_useless_comparisons a.
Theorem permuting_algorithm_aux00 (a: algorithm) (s: step):
compact_algorithm (s :: a) -> permuting_algorithm (s :: a) -> permuting_algorithm a /\ permuting_step s.
Proof.
Admitted.
Even then there are counterexamples, for example:
assignments (assign aux (num x1) :: assign (num x1) (num x0) :: nil) ::
conditional (GT x0 x1)
  (assignments nil)
  (assignments (assign (num x0) aux :: nil)) :: nil.

This is more a question of mathematics than a Coq question.
There is probably a counterexample. Please investigate this: an assignment shuffles the values of the registers aux, x1, x2, ..., x7. However, when you look for permutations, you only look at the values of x1, x2, ..., x7.
Suppose you have one step that stores the value of x1 into aux, duplicates the value of x2 into x1 and x2, and leaves all other registers unchanged. When looking only at the list of values in x1, ..., x7, this step is not a permutation (because of the duplication). Let's call this step s1.
Then consider the step s2 that duplicates the value of aux into aux and x1 and leaves all other values unchanged. Again, when looking only at registers x1, ..., x7, this is not a permutation, because it introduces a value that was not in these registers before.
Now s1 :: s2 :: nil is the identity function on registers x1, ..., x7. It is a permutation. But neither s1 is a permuting step, nor is (s2 :: nil) a permuting algorithm.
For a Coq counterexample, it is enough to prove that s1 is not a permuting step. Here it is:
Definition la1 :=
  assign aux (num x1) ::
  assign (num x1) (num x2) :: nil.

Definition la2 :=
  assign (num x1) aux :: nil.

Definition s1 := assignments la1.
Definition s2 := assignments la2.
Lemma pa_all : permuting_algorithm (s1 :: s2 :: nil).
Proof.
  intros i.
  unfold s1, s2, is_permutation.
  unfold list_of_values; simpl.
  apply Permutation_refl.
Qed.

Lemma not_permuting_step_s1 : ~ permuting_step s1.
Proof.
  unfold s1, permuting_step, is_permutation.
  set (f := fun x => if variable_eq_dec x (num x1) then 0 else 1).
  intros abs.
  assert (abs1 := abs f).
  revert abs1.
  unfold list_of_values, f; simpl; intros abs1.
  absurd (In 0 (1::1::1::1::1::1::1::1::nil)).
  simpl; intuition easy.
  apply (Permutation_in 0 abs1); simpl; right; left; easy.
Qed.


Haskell performance using dynamic programming

I am attempting to calculate the Levenshtein distance between two strings using dynamic programming. This is being done through HackerRank, so I have timing constraints. I used a technique I saw in: How are Dynamic Programming algorithms implemented in idiomatic Haskell? and it seems to be working. Unfortunately, it is timing out in one test case. I do not have access to the specific test case, so I don't know the exact size of the input.
import Control.Monad
import Data.Array.IArray
import Data.Array.Unboxed

main = do
  n <- readLn
  replicateM_ n $ do
    s1 <- getLine
    s2 <- getLine
    print $ editDistance s1 s2
editDistance :: String -> String -> Int
editDistance s1 s2 = dynamic editDistance' (length s1, length s2)
  where
    s1' :: UArray Int Char
    s1' = listArray (1, length s1) s1
    s2' :: UArray Int Char
    s2' = listArray (1, length s2) s2
    editDistance' table (i,j)
      | min i j == 0 = max i j
      | otherwise = min' (table!((i-1),j) + 1) (table!(i,(j-1)) + 1) (table!((i-1),(j-1)) + cost)
      where
        cost = if s1'!i == s2'!j then 0 else 1
        min' a b = min (min a b)

dynamic :: (Array (Int,Int) Int -> (Int,Int) -> Int) -> (Int,Int) -> Int
dynamic compute (xBnd, yBnd) = table!(xBnd,yBnd)
  where
    table = newTable $ map (\coord -> (coord, compute table coord)) [(x,y) | x <- [0..xBnd], y <- [0..yBnd]]
    newTable xs = array ((0,0), fst (last xs)) xs
I've switched to using arrays, but that speed-up was insufficient. I cannot use unboxed arrays, because this code relies on laziness. Are there any glaring performance mistakes I have made? Or how else can I speed it up?
The backward equations for edit distance calculations are:
f(i, j) = minimum [
    1 + f(i + 1, j),    -- delete from the 1st string
    1 + f(i, j + 1),    -- delete from the 2nd string
    f(i + 1, j + 1) + if a(i) == b(j) then 0 else 1  -- substitute or match
  ]
So within each dimension, you need nothing more than the very next index: + 1. This is a sequential access pattern, not the random access that would require arrays, and it can be implemented using lists and nested right folds:
editDistance :: Eq a => [a] -> [a] -> Int
editDistance a b = head . foldr loop [n, n - 1..0] $ zip a [m, m - 1..]
  where
    (m, n) = (length a, length b)
    loop (s, l) lst = foldr go [l] $ zip3 b lst (tail lst)
      where
        go (t, i, j) acc@(k:_) = inc `seq` inc : acc
          where inc = minimum [i + 1, k + 1, if s == t then j else j + 1]
You may test this code on the HackerRank Edit Distance problem, as in:
import Control.Applicative ((<$>))
import Control.Monad (replicateM_)
import Text.Read (readMaybe)

editDistance :: Eq a => [a] -> [a] -> Int
editDistance a b = ... -- as implemented above

main :: IO ()
main = do
  Just n <- readMaybe <$> getLine
  replicateM_ n $ do
    a <- getLine
    b <- getLine
    print $ editDistance a b
which passes all tests with a decent performance.

Generic algorithm to enumerate sum and product types on Haskell?

Some time ago, I asked how to map back and forth from Gödel numbers to terms of a context-free language. While the answer solved the issue specifically, I'm having trouble actually programming it generically. So, this question is more generic: given a recursive algebraic data type with terminals, sums and products - such as
data Term = Prod Term Term | SumL Term | SumR Term | AtomA | AtomB
what is an algorithm that will map a term of this type to its Gödel number, and what is its inverse?
Edit: for example:
data Foo = A | B Foo | C Foo deriving Show

to :: Foo -> Int
to A = 1
to (B x) = to x * 2
to (C x) = to x * 2 + 1

from :: Int -> Foo
from 1 = A
from n = case mod n 2 of
  0 -> B (from (div n 2))
  1 -> C (from (div n 2))
Here, to and from do what I want for Foo. I'm just asking for a systematic way to derive those functions for any datatype.
In order to avoid dealing with a particular Goedel numbering, let's define a class that'll abstract the necessary operations (with some imports we'll need later):
{-# LANGUAGE TypeOperators, DefaultSignatures, FlexibleContexts, DeriveGeneric #-}
import Control.Applicative
import GHC.Generics
import Test.QuickCheck
import Test.QuickCheck.Gen

class GodelNum a where
  fromInt :: Integer -> a
  toInt :: a -> Maybe Integer
  encode :: [a] -> a
  decode :: a -> [a]
So we can inject natural numbers and encode sequences. Let's further create a canonical instance of this class that we'll use throughout the code; it does no real Goedel encoding, it just constructs a tree of terms.
data TermNum = Value Integer | Complex [TermNum]
  deriving (Show)

instance GodelNum TermNum where
  fromInt = Value
  toInt (Value x) = Just x
  toInt _ = Nothing
  encode = Complex
  decode (Complex xs) = xs
  decode _ = []
For real encoding we'd use another implementation that'd use just one Integer, something like newtype SomeGoedelNumbering = SGN Integer.
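As a concrete sketch (mine, not part of the original answer), such a single-Integer implementation could pack sequences with the Cantor pairing function - the same pairing that appears in the other answer below - so that encode and decode are exact inverses. Note that toInt can then no longer distinguish atoms from sequences, but the generic code never needs that:
newtype SGN = SGN Integer deriving (Show)

-- Cantor pairing and its inverse; assumes nonnegative inputs
pair :: Integer -> Integer -> Integer
pair m n = (m + n) * (m + n + 1) `div` 2 + m

unpair :: Integer -> (Integer, Integer)
unpair p = (m, n)
  where
    isqrt = floor . sqrt . (fromIntegral :: Integer -> Double)
    base = (isqrt (1 + 8 * p) - 1) `div` 2
    m = p - base * (base + 1) `div` 2
    n = base - m

instance GodelNum SGN where
  fromInt = SGN
  toInt (SGN n) = Just n
  -- nil is 0; a cons is 1 + pair head (encoded tail)
  encode [] = SGN 0
  encode (SGN x : xs) = case encode xs of SGN rest -> SGN (1 + pair x rest)
  decode (SGN 0) = []
  decode (SGN n) = case unpair (n - 1) of (x, rest) -> SGN x : decode (SGN rest)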
Let's further create a class for types that we can encode/decode:
class GNum a where
  gto :: (GodelNum g) => a -> g
  gfrom :: (GodelNum g) => g -> Maybe a
  default gto :: (Generic a, GodelNum g, GGNum (Rep a)) => a -> g
  gto = ggto . from
  default gfrom :: (Generic a, GodelNum g, GGNum (Rep a)) => g -> Maybe a
  gfrom = liftA to . ggfrom
The last four lines define a generic implementation of gto and gfrom using GHC Generics and DefaultSignatures. The class GGNum that they use is a helper class which we'll use to define encoding for the atomic ADT operations - products, sums, etc.:
class GGNum f where
  ggto :: (GodelNum g) => f a -> g
  ggfrom :: (GodelNum g) => g -> Maybe (f a)

-- no-arg constructors
instance GGNum U1 where
  ggto U1 = encode []
  ggfrom _ = Just U1

-- products
instance (GGNum a, GGNum b) => GGNum (a :*: b) where
  ggto (a :*: b) = encode [ggto a, ggto b]
  ggfrom e | [x, y] <- decode e = liftA2 (:*:) (ggfrom x) (ggfrom y)
           | otherwise = Nothing

-- sums
instance (GGNum a, GGNum b) => GGNum (a :+: b) where
  ggto (L1 x) = encode [fromInt 0, ggto x]
  ggto (R1 y) = encode [fromInt 1, ggto y]
  ggfrom e | [n, x] <- decode e = case toInt n of
               Just 0 -> L1 <$> ggfrom x
               Just 1 -> R1 <$> ggfrom x
               _ -> Nothing
           | otherwise = Nothing

-- metadata
instance (GGNum a) => GGNum (M1 i c a) where
  ggto (M1 x) = ggto x
  ggfrom e = M1 <$> ggfrom e

-- constants and recursion of kind *
instance (GNum a) => GGNum (K1 i a) where
  ggto (K1 x) = gto x
  ggfrom e = K1 <$> gfrom e
Having that, we can then define a data type like yours and just declare its GNum instance; everything else will be automatically derived.
data Term = Prod Term Term | SumL Term | SumR Term | AtomA | AtomB
  deriving (Eq, Show, Generic)

instance GNum Term where
And just to be sure we've done everything right, let's use QuickCheck to verify that our gfrom is an inverse of gto:
instance Arbitrary Term where
  arbitrary = oneof [ return AtomA
                    , return AtomB
                    , SumL <$> arbitrary
                    , SumR <$> arbitrary
                    , Prod <$> arbitrary <*> arbitrary
                    ]

prop_enc_dec :: Term -> Property
prop_enc_dec x = Just x === gfrom (gto x :: TermNum)

main :: IO ()
main = quickCheck prop_enc_dec
Notes:
The same thing could be accomplished using Scrap Your Boilerplate, perhaps more efficiently, as it allows somewhat higher-level access - enumerating constructors and records, etc.
See also the paper Efficient Bijective Gödel Numberings for Term Algebras (I haven't read the paper yet, but it seems related).
For fun, I decided to try the approach in the link you posted, and didn't get stuck anywhere. So here's my code, with no commentary (the explanation is the same as the last time). First, code stolen from the other answer:
{-# LANGUAGE TypeSynonymInstances #-}
import Control.Applicative
import Data.Universe.Helpers

type Nat = Integer

class Godel a where
  to :: a -> Nat
  from :: Nat -> a

instance Godel Nat where to = id; from = id

instance (Godel a, Godel b) => Godel (a, b) where
  to (m_, n_) = (m + n) * (m + n + 1) `quot` 2 + m where
    m = to m_
    n = to n_
  from p = (from m, from n) where
    isqrt = floor . sqrt . fromIntegral
    base = (isqrt (1 + 8 * p) - 1) `quot` 2
    triangle = base * (base + 1) `quot` 2
    m = p - triangle
    n = base - m
And the code specific to your new type:
data Term = Prod Term Term | SumL Term | SumR Term | AtomA | AtomB
  deriving (Eq, Ord, Read, Show)

ts = AtomA : AtomB : interleave [uncurry Prod <$> ts +*+ ts, SumL <$> ts, SumR <$> ts]

instance Godel Term where
  to AtomA = 0
  to AtomB = 1
  to (Prod t1 t2) = 2 + 0 + 3 * to (t1, t2)
  to (SumL t) = 2 + 1 + 3 * to t
  to (SumR t) = 2 + 2 + 3 * to t
  from 0 = AtomA
  from 1 = AtomB
  from n = case quotRem (n - 2) 3 of
    (q, 0) -> uncurry Prod (from q)
    (q, 1) -> SumL (from q)
    (q, 2) -> SumR (from q)
The same ghci test as last time:
*Main> take 30 (map from [0..]) == take 30 ts
True

Generality of `foldr` or other higher order function

Here's a simple function that takes a list and a number and works out how the length of the list compares to that number.
e.g.
compareLengthTo [1,2,3] 3 == EQ
compareLengthTo [1,2] 3 == LT
compareLengthTo [1,2,3,4] 3 == GT
compareLengthTo [1..] 3 == GT
Note that it has two properties:
It works for infinite lists.
It is tail recursive and uses constant space.
import Data.Ord

compareLengthTo :: [a] -> Int -> Ordering
compareLengthTo l n = f 0 l
  where
    f c [] = c `compare` n
    f c (l:ls) | c > n = GT
               | otherwise = f (c + 1) ls
Is there a way to write compareLengthTo using foldr only?
Note, here's a version of compareLengthTo using drop:
compareLengthToDrop :: [a] -> Int -> Ordering
compareLengthToDrop l n = f (drop n (undefined:l))
  where
    f [] = LT
    f [_] = EQ
    f _ = GT
I guess another question is then, can you implement drop in terms of foldr?
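For that side question, yes - a minimal sketch (mine, using the same trick as the answers below: folding into a function that takes an extra argument, here the count still to drop):
dropFoldr :: Int -> [a] -> [a]
dropFoldr n xs = foldr step (const []) xs n
  where
    -- each step receives "how many elements remain to be dropped"
    step x rest k
      | k <= 0 = x : rest 0
      | otherwise = rest (k - 1)

-- >>> dropFoldr 2 [1, 2, 3, 4]
-- [3,4]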
Here ya go (note: I just changed one comparison, which makes it lazier):
compareLengthTo :: [a] -> Int -> Ordering
compareLengthTo l n = foldr f (`compare` n) l 0
  where
    f l cont c | c >= n = GT
               | otherwise = cont $! c + 1
This uses exactly the same sort of technique used to implement foldl in terms of foldr. There's a classic article about the general technique called A tutorial on the universality and expressiveness of fold. You can also see a step-by-step explanation I wrote on the Haskell Wiki.
To get you started, note that foldr is being applied to four arguments here, rather than the usual three. This works out because the function being folded takes three arguments, and the "base case" is a function, (`compare` n).
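For reference, the standard foldl-via-foldr definition looks like this (a textbook sketch, shown for comparison):
-- foldr builds a chain of functions, each threading the accumulator onward
myFoldl :: (b -> a -> b) -> b -> [a] -> b
myFoldl f z xs = foldr (\x k acc -> k (f acc x)) id xs z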
Edit
If you want to use lazy Peano numerals as J. Abrahamson does, you can count down instead of counting up.
compareLengthTo :: [a] -> Nat -> Ordering
compareLengthTo l n = foldr f final l n
  where
    f _ _ Zero = GT
    f _ cont (Succ p) = cont p
    final Zero = EQ
    final _ = LT
By its very definition, foldr is not tail-recursive:
-- slightly simplified
foldr :: (a -> r -> r) -> r -> ([a] -> r)
foldr cons nil [] = nil
foldr cons nil (a:as) = cons a (foldr cons nil as)
so you cannot achieve that end. That said, there are some attractive components of foldr's semantics. In particular, it is "productive", which allows folds written with foldr to behave nicely with laziness.
We can see foldr as saying how to break down (catamorphize) a list one "layer" at a time. If the cons argument can return without caring about any further layers of the list, then it can terminate early and we avoid ever having to examine any more tails of the list - this is how foldr can act non-strictly at times.
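A tiny illustration of that early termination (my example, not from the original answer): since (||) ignores its second argument once the first one is True, the fold below stops without consuming the rest of the list.
anyR :: (a -> Bool) -> [a] -> Bool
anyR p = foldr (\x rest -> p x || rest) False

-- >>> anyR even [1 ..]
-- True, even though the list is infinite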
For your function to work on infinite lists, it must do something similar with the numeric argument. We'd like to operate on that argument "layer by layer". To make this clearer, let's define the naturals as follows
data Nat = Zero | Succ Nat
Now "layer by layer" more clearly means "counting down to zero". We can formalize that notion like so:
foldNat :: (r -> r) -> r -> (Nat -> r)
foldNat succ zero Zero = zero
foldNat succ zero (Succ n) = succ (foldNat succ zero n)
and now we can define something a bit like what we're looking for
compareLengthTo :: Nat -> [a] -> Ordering
compareLengthTo = foldNat succ zero where
  zero :: [a] -> Ordering
  zero [] = EQ -- we emptied the list and the nat at the same time
  zero _ = GT  -- we're done with the nat, but more list remains
  succ :: ([a] -> Ordering) -> ([a] -> Ordering)
  succ continue [] = LT -- we ran out of list, but had more nat
  succ continue (_:as) = continue as -- keep going, both nat and list remain
It can take some time to study the above to see how it works. In particular, note that I instantiated r as a function, [a] -> Ordering. The form of the function above is "recursion on the natural numbers" and it allows it to accept infinite lists so long as the Nat argument isn't...
infinity :: Nat
infinity = Succ infinity
Now, the above function works on this strange type, Nat, which models the non-negative integers. We can translate the same concept to Int by replacing foldNat with foldInt, written similarly:
foldInt :: (r -> r) -> r -> (Int -> r)
foldInt succ zero 0 = zero
foldInt succ zero n = succ (foldInt succ zero (n - 1))
which you can verify embodies the exact same pattern as foldNat but avoids the use of the awkward Succ and Zero constructors. You can also verify that foldInt behaves pathologically if we give it negative integers... which is about what we'd expect.
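As a small aside (my sketch, not part of the answer), making the base case a catch-all guard yields a variant that is total on all Ints:
foldIntSafe :: (r -> r) -> r -> (Int -> r)
foldIntSafe succ zero n
  | n <= 0 = zero                                -- treat negatives like zero
  | otherwise = succ (foldIntSafe succ zero (n - 1))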
I have to participate in this coding competition:
"Prelude":
import Test.QuickCheck
import Control.Applicative

compareLengthTo :: [a] -> Int -> Ordering
compareLengthTo l n = f 0 l
  where
    f c [] = c `compare` n
    f c (l:ls) | c > n = GT
               | otherwise = f (c + 1) ls
My first attempt was to write this
compareLengthTo1 :: [a] -> Int -> Ordering
compareLengthTo1 l n = g $ foldr f (Just n) l
  where
    -- we go below zero
    f _ (Just 0) = Nothing
    f _ (Just n) = Just (n - 1)
    f _ Nothing = Nothing
    g (Just 0) = EQ
    g (Just _) = LT
    g Nothing = GT
And it works for finite arguments:
prop1 :: [()] -> NonNegative Int -> Property
prop1 l (NonNegative n) = compareLengthTo l n === compareLengthTo1 l n
-- >>> quickCheck prop1
-- +++ OK, passed 100 tests.
But it fails for infinite lists. Why?
Let's define a variant using Peano naturals:
data Nat = Zero | Succ Nat

foldNat :: (r -> r) -> r -> (Nat -> r)
foldNat succ zero Zero = zero
foldNat succ zero (Succ n) = succ (foldNat succ zero n)

natFromInteger :: Integer -> Nat
natFromInteger 0 = Zero
natFromInteger n = Succ (natFromInteger (n - 1))

natToIntegral :: Integral a => Nat -> a
natToIntegral = foldNat (1+) 0

instance Arbitrary Nat where
  arbitrary = natFromInteger . getNonNegative <$> arbitrary

instance Show Nat where
  show = show . (natToIntegral :: Nat -> Integer)

infinity :: Nat
infinity = Succ infinity

compareLengthTo2 :: [a] -> Nat -> Ordering
compareLengthTo2 l n = g $ foldr f (Just n) l
  where
    f _ (Just Zero) = Nothing
    f _ (Just (Succ n)) = Just n
    f _ Nothing = Nothing
    g (Just Zero) = EQ
    g (Just _) = LT
    g Nothing = GT
prop2 :: [()] -> Nat -> Property
prop2 l n = compareLengthTo l (natToIntegral n) === compareLengthTo2 l n
-- >>> compareLengthTo2 [] infinity
-- LT
After staring at it long enough, we see that it works for infinite numbers, but not for infinite lists: f pattern-matches on its accumulator, so the foldr has to reach the end of the list before g can inspect the result.
That's why J. Abrahamson used foldNat in his definition.
So if we fold the number argument instead, we will get a function which works on infinite lists, but finite numbers:
compareLengthTo3 :: [a] -> Nat -> Ordering
compareLengthTo3 l n = g $ foldNat f (Just l) n
  where
    f (Just []) = Nothing
    f (Just (x:xs)) = Just xs
    f Nothing = Nothing
    g (Just []) = EQ
    g (Just _) = GT
    g Nothing = LT
prop3 :: [()] -> Nat -> Property
prop3 l n = compareLengthTo l (natToIntegral n) === compareLengthTo3 l n
nats :: [Nat]
nats = iterate Succ Zero
-- >>> compareLengthTo3 nats (natFromInteger 10)
-- GT
foldr and foldNat are the kind of functions which generalise structural recursion on their argument (catamorphisms). They have the nice property that, given finite inputs and total functions as arguments, they are also total, i.e. they always terminate.
That's why we use foldNat in the last example. We assume that the Nat argument is finite, so compareLengthTo3 works on all [a] - even infinite ones.

How to formalize the definition of likeness/similarity between relations in Coq?

I am reading the book Introduction to Mathematical Philosophy by B. Russell and trying to formalize the definitions. However, I got stuck on proving the equivalence between the two definitions of similarity posed in the book.
Here is the text quoted from the book (context):
1) Defining similarity directly:
We may define two relations P and Q as “similar,” or as having
“likeness,” when there is a one-one relation S whose domain is the
field of P and whose converse domain is the field of Q, and which is
such that, if one term has the relation P to another, the correlate of
the one has the relation Q to the correlate of the other, and vice
versa.
Here's my comprehension of the above text:
Inductive similar {X} (P : relation X) (Q : relation X) : Prop :=
| similar_intro : forall (S : relation X),
    one_one S ->
    (forall x, field P x <-> domain S x) ->
    (forall x y z w, P x y -> S x z -> S y w -> Q z w) ->
    (forall x y z w, Q z w -> S x z -> S y w -> P x y) ->
    similar P Q.
2) Defining similarity through the concept of 'correlator':
A relation S is said to be a “correlator” or an “ordinal correlator”
of two relations P and Q if S is one-one, has the field of Q for its
converse domain, and is such that P is the relative product of S and Q
and the converse of S.
Two relations P and Q are said to be “similar,” or to have “likeness,”
when there is at least one correlator of P and Q.
My definition for this is:
Inductive correlator {X} (P Q : relation X) : relation X -> Prop :=
| correlator_intro : forall (S : relation X),
    one_one S ->
    (forall x, field P x <-> domain S x) ->
    (forall x y, relative_product (relative_product S Q) (converse S) x y <-> P x y) ->
    correlator P Q S.

Inductive similar' {X} (P Q : relation X) : Prop :=
| similar'_intro : forall S, correlator P Q S -> similar' P Q.
But I couldn't prove that similar is equivalent to similar'. Where did I make the mistake? Thanks a lot.

edit distance algorithm in Haskell - performance tuning

I'm trying to implement the Levenshtein distance (or edit distance) in Haskell, but its performance decreases rapidly when the string length increases.
I'm still quite new to Haskell, so it would be nice if you could give me some advice on how I could improve the algorithm. I already tried to "precompute" values (the inits), but since it didn't change anything I reverted that change.
I know there's already an editDistance implementation on Hackage, but I need it to work on lists of arbitrary tokens, not necessarily strings. Also, I find it a bit complicated, at least compared to my version.
So, here's the code:
-- standard levenshtein distance between two lists
editDistance :: Eq a => [a] -> [a] -> Int
editDistance s1 s2 = editDistance' 1 1 1 s1 s2

-- weighted levenshtein distance
-- ins, sub and del are the costs for the various operations
editDistance' :: Eq a => Int -> Int -> Int -> [a] -> [a] -> Int
editDistance' _ _ ins s1 [] = ins * length s1
editDistance' _ _ ins [] s2 = ins * length s2
editDistance' del sub ins s1 s2
  | last s1 == last s2 = editDistance' del sub ins (init s1) (init s2)
  | otherwise = minimum [ editDistance' del sub ins s1 (init s2) + del        -- deletion
                        , editDistance' del sub ins (init s1) (init s2) + sub -- substitution
                        , editDistance' del sub ins (init s1) s2 + ins        -- insertion
                        ]
It seems to be a correct implementation, at least it gives exactly the same results as this online tool.
Thanks in advance for your help! If you need any additional information, please let me know.
Greetings,
bzn
Ignoring that this is a bad algorithm (it should be memoizing - I'll get to that in a second)...
Use O(1) Primitives and not O(n)
One problem is you use a whole bunch of calls that are O(n) for lists (Haskell lists are singly linked lists). A better data structure would give you O(1) operations; I used Vector:
import qualified Data.Vector as V

-- standard levenshtein distance between two lists
editDistance :: Eq a => [a] -> [a] -> Int
editDistance s1 s2 = editDistance' 1 1 1 (V.fromList s1) (V.fromList s2)

-- weighted levenshtein distance
-- ins, sub and del are the costs for the various operations
editDistance' :: Eq a => Int -> Int -> Int -> V.Vector a -> V.Vector a -> Int
editDistance' del sub ins s1 s2
  | V.null s2 = ins * V.length s1
  | V.null s1 = ins * V.length s2
  | V.last s1 == V.last s2 = editDistance' del sub ins (V.init s1) (V.init s2)
  | otherwise = minimum [ editDistance' del sub ins s1 (V.init s2) + del            -- deletion
                        , editDistance' del sub ins (V.init s1) (V.init s2) + sub   -- substitution
                        , editDistance' del sub ins (V.init s1) s2 + ins            -- insertion
                        ]
The operations that are O(n) for lists include init, length, and last (though init is able to be lazy at least). All these operations are O(1) using Vector.
While real benchmarking should use Criterion, a quick and dirty benchmark:
str2 = replicate 15 'a' ++ replicate 25 'b'
str1 = replicate 20 'a' ++ replicate 20 'b'
main = print $ editDistance str1 str2
shows the vector version takes 0.09 seconds while strings take 1.6 seconds, so we saved about an order of magnitude without even looking at your editDistance algorithm.
Now what about memoizing results?
The bigger issue is obviously the need for memoization. I took this as an opportunity to learn the monad-memo package - my god, is that awesome! For one extra constraint (you need Ord a), you get memoization basically for no effort. The code:
import qualified Data.Vector as V
import Control.Monad.Memo

-- standard levenshtein distance between two lists
editDistance :: (Eq a, Ord a) => [a] -> [a] -> Int
editDistance s1 s2 = startEvalMemo $ editDistance' (1, 1, 1, V.fromList s1, V.fromList s2)

-- weighted levenshtein distance
-- ins, sub and del are the costs for the various operations
editDistance' :: (MonadMemo (Int, Int, Int, V.Vector a, V.Vector a) Int m, Eq a) => (Int, Int, Int, V.Vector a, V.Vector a) -> m Int
editDistance' (del, sub, ins, s1, s2)
  | V.null s2 = return $ ins * V.length s1
  | V.null s1 = return $ ins * V.length s2
  | V.last s1 == V.last s2 = memo editDistance' (del, sub, ins, V.init s1, V.init s2)
  | otherwise = do
      r1 <- memo editDistance' (del, sub, ins, s1, V.init s2)
      r2 <- memo editDistance' (del, sub, ins, V.init s1, V.init s2)
      r3 <- memo editDistance' (del, sub, ins, V.init s1, s2)
      return $ minimum [ r1 + del -- deletion
                       , r2 + sub -- substitution
                       , r3 + ins -- insertion
                       ]
You see how the memoization needs a single "key" (see the MonadMemo class)? I packaged all the arguments as a big ugly tuple. It also needs one "value", which is your resulting Int. Then it's just plug and play using the "memo" function for the values you want to memoize.
For benchmarking I used a shorter, but larger-distance, string:
$ time ./so # the memoized vector version
12
real 0m0.003s
$ time ./so3 # the non-memoized vector version
12
real 1m33.122s
Don't even think about running the non-memoized string version; I figure it would take around 15 minutes at a minimum. As for me, I now love monad-memo - thanks for the package, Eduard!
EDIT: The difference between String and Vector isn't as large in the memoized version, but it still grows to a factor of 2 when the distance gets to around 200, so it is still worthwhile.
EDIT: Perhaps I should explain why the bigger issue is "obviously" memoizing results. Well, if you look at the heart of the original algorithm:
[ editDistance' ... s1 (V.init s2) + del
, editDistance' ... (V.init s1) (V.init s2) + sub
, editDistance' ... (V.init s1) s2 + ins]
It's quite clear a call of editDistance' s1 s2 results in 3 calls to editDistance'... each of which calls editDistance' three more times... and three more times... and AHHH! Exponential explosion! Luckily, most of the calls are identical! For example (using --> for "calls" and eD for editDistance'):
eD s1 s2        --> eD s1 (init s2)               -- The parent
                  , eD (init s1) s2
                  , eD (init s1) (init s2)
eD (init s1) s2 --> eD (init s1) (init s2)        -- The first "child"
                  , eD (init (init s1)) s2
                  , eD (init (init s1)) (init s2)
eD s1 (init s2) --> eD s1 (init (init s2))
                  , eD (init s1) (init s2)
                  , eD (init s1) (init (init s2))
Just by considering the parent and two immediate children, we can see the call eD (init s1) (init s2) is done three times. The other children share calls with the parent too, and all children share many calls with each other (and their children, cue Monty Python skit).
It would be a fun, perhaps instructive, exercise to make a runMemo-like function that returns the number of cached results used.
You need to memoize editDistance'. There are many ways of doing this, e.g., a recursively defined array.
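For instance, a minimal sketch of that recursively defined array approach (my code, with illustrative names):
import Data.Array

editDistanceArr :: Eq a => [a] -> [a] -> Int
editDistanceArr xs ys = table ! (m, n)
  where
    (m, n) = (length xs, length ys)
    xa = listArray (1, m) xs
    ya = listArray (1, n) ys
    -- the boxed array refers to itself; laziness fills each cell on demand
    table = listArray ((0, 0), (m, n)) [ cell i j | i <- [0 .. m], j <- [0 .. n] ]
    cell i 0 = i
    cell 0 j = j
    cell i j = minimum [ table ! (i - 1, j) + 1                                  -- deletion
                       , table ! (i, j - 1) + 1                                  -- insertion
                       , table ! (i - 1, j - 1)
                           + (if xa ! i == ya ! j then 0 else 1)                 -- substitution or match
                       ]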
As already mentioned, memoization is what you need. In addition, you are looking at edit distance from right to left, which isn't very efficient with strings, and edit distance is the same regardless of direction. That is: editDistance (reverse a) (reverse b) == editDistance a b.
For solving the memoization part there are many libraries that can help you. In my example below I chose MemoTrie, since it is quite easy to use and performs well here.
import Data.MemoTrie (memo2)

editDistance' del sub ins = memf
  where
    memf = memo2 f
    f s1 [] = ins * length s1
    f [] s2 = ins * length s2
    f (x:xs) (y:ys)
      | x == y = memf xs ys
      | otherwise = minimum [ del + memf xs (y:ys)
                            , sub + memf (x:xs) ys
                            , ins + memf xs ys
                            ]
As you can see, all you need is to add the memoization. The rest is the same, except that we start from the beginning of the list instead of the end.
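For completeness, a possible top-level wrapper (my sketch; it assumes the token type has a HasTrie instance, as Char does):
-- unit costs recover the standard Levenshtein distance
editDistanceMT :: String -> String -> Int
editDistanceMT = editDistance' 1 1 1

-- >>> editDistanceMT "kitten" "sitting"
-- 3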
I know there's already an editDistance implementation on Hackage, but I need it to work on lists of arbitrary tokens, not necessarily strings
Are there a finite number of tokens? I'd suggest you try simply devising a mapping from token to character. There are 10,646 characters at your disposal, after all.
This version is much quicker than the memoized versions, but I would still love to have it even quicker. It works fine with strings hundreds of characters long.
It was written with other distances in mind (change the init function and the cost function), and it uses the classical dynamic programming array trick.
The long line could be converted into a separate function with a top-level 'do', but I like it this way.
import Data.Array.IO
import System.IO.Unsafe

editDistance :: Eq a => [a] -> [a] -> Int
editDistance = dist ini med

dist :: (Int -> Int -> Int) -> (a -> a -> Int) -> [a] -> [a] -> Int
dist i f a b = unsafePerformIO $ distM i f a b

-- easy to create other distances
ini i 0 = i
ini 0 j = j
ini _ _ = 0

med a b = if a == b then 0 else 2

distM :: (Int -> Int -> Int) -> (a -> a -> Int) -> [a] -> [a] -> IO Int
distM ini f a b = do
  let la = length a
  let lb = length b
  arr <- newListArray ((0,0),(la,lb)) [ini i j | i <- [0..la], j <- [0..lb]] :: IO (IOArray (Int,Int) Int)
  -- all on one line
  mapM_ (\(i,j) -> readArray arr (i-1,j-1) >>= \ld -> readArray arr (i-1,j) >>= \l -> readArray arr (i,j-1) >>= \d -> writeArray arr (i,j) $ minimum [l+1, d+1, ld + f (a !! (i-1)) (b !! (j-1))]) [(i,j) | i <- [1..la], j <- [1..lb]]
  readArray arr (la,lb)
People are recommending generic memoization libraries, but for the simple task of defining the Levenshtein distance, plain dynamic programming is more than enough.
A very simple polymorphic list-based implementation:
distance s t =
  d !! length s !! length t
  where
    d = [ [ dist m n | n <- [0 .. length t] ] | m <- [0 .. length s] ]
    dist i 0 = i
    dist 0 j = j
    dist i j = minimum [ d !! (i-1) !! j + 1
                       , d !! i !! (j-1) + 1
                       , d !! (i-1) !! (j-1) + (if s !! (i-1) == t !! (j-1) then 0 else 1)
                       ]
Or if you need real speed on long sequences, you can use a mutable array:
import Data.Array
import qualified Data.Array.Unboxed as UA
import Data.Array.ST
import Control.Monad.ST

-- Mutable unboxed and immutable boxed arrays
distance :: Eq a => [a] -> [a] -> Int
distance s t = d UA.! (ls, lt)
  where
    s' = array (0, ls) [ (i, x) | (i, x) <- zip [0..] s ]
    t' = array (0, lt) [ (i, x) | (i, x) <- zip [0..] t ]
    ls = length s
    lt = length t
    (l, h) = ((0, 0), (length s, length t))
    d = runSTUArray $ do
      m <- newArray (l, h) 0
      for_ [0..ls] $ \i -> writeArray m (i, 0) i
      for_ [0..lt] $ \j -> writeArray m (0, j) j
      for_ [1..lt] $ \j -> do
        for_ [1..ls] $ \i -> do
          let c = if s' ! (i-1) == t' ! (j-1) then 0 else 1
          x <- readArray m (i-1, j)
          y <- readArray m (i, j-1)
          z <- readArray m (i-1, j-1)
          writeArray m (i, j) $ minimum [x+1, y+1, z+c]
      return m

for_ xs f = mapM_ f xs
