Here is a data type, written in Haskell, that proves that one list is a permutation of another:
data Belongs (x :: k) (ys :: [k]) (zs :: [k]) where
  BelongsHere :: Belongs x xs (x ': xs)
  BelongsThere :: Belongs x xs xys -> Belongs x (y ': xs) (y ': xys)

data Permutation (xs :: [k]) (ys :: [k]) where
  PermutationEmpty :: Permutation '[] '[]
  PermutationCons :: Belongs x ys xys -> Permutation xs ys -> Permutation (x ': xs) xys
With a Permutation, we can now permute a record:
data Rec :: (u -> *) -> [u] -> * where
  RNil :: Rec f '[]
  (:&) :: !(f r) -> !(Rec f rs) -> Rec f (r ': rs)
insertRecord :: Belongs x ys zs -> f x -> Rec f ys -> Rec f zs
insertRecord BelongsHere v rs = v :& rs
insertRecord (BelongsThere b) v (r :& rs) = r :& insertRecord b v rs
permute :: Permutation xs ys -> Rec f xs -> Rec f ys
permute PermutationEmpty RNil = RNil
permute (PermutationCons b pnext) (r :& rs) = insertRecord b r (permute pnext rs)
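For concreteness, here is a tiny usage sketch (the name swap01 and the use of Identity as the record functor are mine, not part of the original code):

import Data.Functor.Identity (Identity (..))

-- A proof that '[a, b] is a permutation of '[b, a]:
-- a is inserted at the second position, b at the first.
swap01 :: Permutation '[a, b] '[b, a]
swap01 = PermutationCons (BelongsThere BelongsHere)
       $ PermutationCons BelongsHere
       $ PermutationEmpty

-- permute swap01 (Identity 'x' :& Identity True :& RNil)
--   :: Rec Identity '[Bool, Char]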
This works fine. However, permute is O(n^2) where n is the length of the record. I'm wondering if there is a way to get it to be any faster by using a different data type to represent a permutation.
For comparison, in a mutable and untyped setting (which I know is a very different setting indeed), we could apply a permutation to a heterogeneous record like this in O(n) time. You represent the record as an array of values and the permutation as an array of new positions (no duplicates are allowed, and every position must be between 0 and n-1). Applying the permutation is just iterating over that array and indexing into the record's array at those positions.
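In Haskell terms, that untyped version might look like the following sketch with immutable arrays (applyPerm is my name, and the record is approximated by a homogeneous Array, since true heterogeneity is exactly what the typed version is for):

import Data.Array

-- The element at index i of the result is xs ! (positions ! i).
-- Each (!) lookup is O(1), so applying the permutation is O(n).
applyPerm :: Array Int Int -> Array Int a -> Array Int a
applyPerm positions xs =
    listArray (bounds xs) [ xs ! (positions ! i) | i <- indices xs ]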
I don't expect that an O(n) permutation is possible in a more rigorously typed setting. But it seems like O(n log n) might be possible. I appreciate any feedback, and let me know if I need to clarify anything. Also, answers can use Haskell, Agda, or Idris, depending on what feels easier to communicate with.
A simpler, faster solution is to compare the sorted forms of the two lists.
Given lists A and B, there exist the sorted lists
As = sort(A)
Bs = sort(B)
As is a permutation of A, and Bs is a permutation of B.
If As == Bs, then A is a permutation of B.
Thus the order of this algorithm is O(n log(n)) < O(n²).
And this leads to the optimal solution: using a different storage format for the permutations yields O(n).
Using the statements from above, we change the storage format of each permutation to hold both
the sorted data
the original, unsorted data
To determine whether one list is a permutation of another, a simple comparison of the sorted data is then all that is necessary -> O(n).
This answers the question correctly, but the effort is hidden in creating the doubled data storage, so whether this is a real advantage or not will depend on the use case.
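For what it's worth, the comparison described above is a one-liner in Haskell (isPermutationOf is my name for it):

import Data.List (sort)

-- O(n log n): two lists are permutations of each other
-- exactly when their sorted forms coincide.
isPermutationOf :: Ord a => [a] -> [a] -> Bool
isPermutationOf a b = sort a == sort b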
I am trying to prove in Coq that two definitions of list permutation are equivalent: an inductive relation perm and a predicate equiv based on counting occurrences. The two definitions are these:
Inductive perm : list nat -> list nat -> Prop :=
| perm_eq: forall l1, perm l1 l1
| perm_swap: forall x y l1, perm (x :: y :: l1) (y :: x :: l1)
| perm_hd: forall x l1 l2, perm l1 l2 -> perm (x :: l1) (x :: l2)
| perm_trans: forall l1 l2 l3, perm l1 l2 -> perm l2 l3 -> perm l1 l3.
Fixpoint num_oc (x: nat) (l: list nat): nat :=
match l with
| nil => 0
| h::tl =>
if (x =? h) then S (num_oc x tl) else num_oc x tl
end.
Definition equiv l l' := forall n:nat, num_oc n l = num_oc n l'.
The theorem that I'm trying to prove is this:
Theorem perm_equiv: forall l l', equiv l l' <-> perm l l'.
The perm -> equiv direction is ready, but the equiv -> perm direction isn't working. I tried this strategy:
- intro H. unfold equiv in H.
generalize dependent l'.
induction l.
+ intros l' H. admit.
+ intros l' H. simpl in H.
generalize dependent l'.
intro l'. induction l'.
* intro H. specialize (H a).
rewrite <- beq_nat_refl in H.
simpl in H. Search False.
inversion H.
destruct (a =? a0) eqn:Ha.
** simpl in H. inversion H.
** apply False_ind.
apply beq_nat_false in Ha.
apply Ha. reflexivity.
* destruct (x =? a).
I'm out of ideas for the first branch, so it's admitted for now, but the second one is crashing at the destruct tactic. How do I proceed with this proof?
You should attempt to write a proof on paper before attempting to encode it in Coq. Here is a possible strategy.
Nil case
When l = [], you know that every number in l' occurs zero times because of H. It should be possible to prove an auxiliary lemma that implies that l' = [] in this case. You can conclude with perm_eq.
Cons case
Suppose that l = x :: xs. Let n = num_oc x xs. We know that num_oc x l' = S n by H. You should be able to prove a lemma saying that l' is of the form ys1 ++ x :: ys2 where num_oc x ys1 = 0. This would allow you to show that equiv xs (ys1 ++ ys2). By the induction hypothesis, you find that perm xs (ys1 ++ ys2). Hence, by perm_hd, perm (x :: xs) (x :: ys1 ++ ys2).
You should be able to prove that perm is a transitive relation and that perm (x :: ys1 ++ ys2) (ys1 ++ x :: ys2) holds. Combined with the last assertion, this will yield perm l l'.
The main takeaway here is that attempting to write every proof with a single, direct induction is only going to work for the simplest results. You should start thinking about how to break your results down into simpler intermediate lemmas that you can combine to prove your final result.
So I have two implementations of the function tabulate, which, given a function f :: Int -> a and a number n, should produce the list [f 0, f 1, ..., f (n-1)]. I'm trying to guess which one is better in terms of work and span.
tabulate1 :: (Int -> a) -> Int -> [a]
tabulate1 f n = tab (\x -> f (n - x)) n
  where
    tab _ 0 = []
    tab g n = let (x, xs) = (g n) ||| (tab g (n-1))
              in (x:xs)
tabulate2 :: (Int -> a) -> Int -> [a]
tabulate2 f n = tab f 0 (n-1)
  where
    tab f n m
      | n > m     = []
      | n == m    = [f n]
      | otherwise = let i = (n + m) `div` 2
                        (l, r) = (tab f n i) ||| (tab f (i+1) m)
                    in (l ++ r)
While the first one avoids using (++), which has linear work and span, the second one computes the two sublists in parallel but uses (++).
So... which one is better?
Time and space complexity in Haskell is often non-trivial because it is a lazy language. This means that even if a function would take O(n!) time to evaluate fully, its result might never be needed and therefore never be evaluated. Or, as in this case, if your function returns a list and only the first 3 elements are needed by other functions, only those are evaluated.
Anyway, your function is just a particular case of map, and as such it could be coded in a much more readable way:
tabulate f n = map f [0 .. n-1]
map is implemented with a fold and is probably the most optimised version you could get. (Note the n-1: the specification asks for [f 0, ..., f (n-1)], so [0..n] would be off by one.)
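A quick check at the ghci prompt (assuming the definition above):
> tabulate (^ 2) 5
[0,1,4,9,16]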
In math, when I want to rearrange a list of length n, I'll act on the list with a permutation. For example:
(1 2) * (x, y, z) = (y, x, z)
(1 n 2) * (v[1], v[2], ..., v[n]) = (v[n], v[1], ..., v[2])
perm * (v[1], v[2], ..., v[n]) = ( v[perm(1)], v[perm(2)], ..., v[perm(n)] )
How would I do this in Haskell?
I would use the input permutation to build a map from old indices to new indices.
import Prelude hiding ((*))
import qualified Data.Map as M

infixr 5 * -- right-associative so we can compose permutations conveniently
(*) :: [Int] -> [a] -> [a]
perm * xs = zipWith (\i _ -> xs !! M.findWithDefault i i ixMap) [0..] xs
  where ixMap = M.fromList (zip perm (drop 1 perm ++ take 1 perm))
You can see it in action in the ghci prompt (though as usual in programming it uses 0-based rather than 1-based indexing):
> [0,1] * "xyz"
"yxz"
> [0,4,1] * "abcde"
"eacdb"
This costs O(n² + n·log m), where n is the length of xs and m is the length of perm: each of the n output positions does an O(log m) map lookup and an O(n) (!!) indexing. You can reduce this to O(n·log(nm)) by also switching from (!!) to a Map lookup for the indexing into xs.
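Here is a sketch of that variant (permuteFast, ixMap, and xsMap are my names): building xsMap costs O(n log n) once, and each output position then costs O(log n + log m) instead of O(n).

permuteFast :: [Int] -> [a] -> [a]
permuteFast perm xs =
    [ xsMap M.! M.findWithDefault i i ixMap | i <- [0 .. length xs - 1] ]
  where
    -- same cyclic index map as above
    ixMap = M.fromList (zip perm (drop 1 perm ++ take 1 perm))
    -- O(log n) random access into xs
    xsMap = M.fromList (zip [0..] xs)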
I have decided to solve the first programming assignment from the Stanford algorithms course https://class.coursera.org/algo-005 using Haskell. Although I am very new to the language, I implemented it much faster than I would have in C++. I have 6+ years of work experience in C++, so that impressed me a bit. But the performance is disappointing: 0.19 s (C++) vs 9.88 s (Haskell).
How can I improve the performance of the Haskell implementation so that it is comparable to C++?
Here is my code in Haskell:
data SortedList = SortedList
    { inversionCount :: Int
    , list :: [Int]
    } deriving (Show)

-- first list accumulator
packm :: Int -> SortedList -> Int -> SortedList
packm x (SortedList count xs) add = SortedList (count + add) (x:xs)

merge2 :: [Int] -> [Int] -> SortedList
merge2 [] xs = SortedList 0 xs
merge2 xs [] = SortedList 0 xs
merge2 xlist@(x:xs) ylist@(y:ys)
    | x < y = packm x (merge2 xs ylist) 0
    | otherwise = packm y (merge2 xlist ys) $ length xlist

countAndMerge :: SortedList -> SortedList -> SortedList
countAndMerge (SortedList lcount lxs) (SortedList rcount rxs) =
    let merged = merge2 lxs rxs
    in SortedList (lcount + rcount + inversionCount merged) $ list merged

mergesort :: [Int] -> SortedList
mergesort [] = SortedList 0 []
mergesort [x] = SortedList 0 [x]
mergesort xs =
    let leftsorted = mergesort $ take halfElements xs
        rightsorted = mergesort $ drop halfElements xs
    in countAndMerge leftsorted rightsorted
  where
    halfElements = length xs `div` 2

main = do
    contents <- getContents
    let intlist = [ read x :: Int | x <- lines contents ]
    print $ inversionCount $ mergesort intlist
The biggest problem is that the asymptotic performance isn't right to begin with; it's quadratic rather than the optimal O(n log n). The culprit is merge2:
| otherwise = packm y (merge2 xlist ys) $ length xlist
length xlist is O(n). Supposing a random input list, we need to compute length xlist on about half of the merge2 calls, thus making one level of merging O(n^2).
otherwise = packm y (merge2 xlist ys) $ length xlist
This computes length at every other step of the merge on the average. This makes the whole business quadratic.
If you track the length of the lists not by counting elements but by passing the count down from the top level, you restore the O(n log n) behaviour. For a list of 100000 elements, this means execution time goes down from 20 seconds to 0.45 seconds (on my machine, with -O2).
Scaling it up further without changing the algorithm is problematic, because it currently runs in linear stack space and cannot cope with 1 million elements under the default RTS options. If you change mergesort to a bottom-up version that merges adjacent pairs of runs, it is likely to run much better.
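For illustration, here is a minimal sketch of the length-passing idea (merge2' and mergesort' are my names; only the idea comes from the answer above): the caller already knows the length of each half, so the merge can maintain the length of the remaining left list instead of recomputing it.

-- The extra Int tracks the length of the first list argument.
merge2' :: Int -> [Int] -> [Int] -> SortedList
merge2' _ [] ys = SortedList 0 ys
merge2' _ xs [] = SortedList 0 xs
merge2' lenX xlist@(x:xs) ylist@(y:ys)
    | x < y     = packm x (merge2' (lenX - 1) xs ylist) 0
    | otherwise = packm y (merge2' lenX xlist ys) lenX

-- The extra Int is the length of the list argument.
mergesort' :: Int -> [Int] -> SortedList
mergesort' _ []  = SortedList 0 []
mergesort' _ [x] = SortedList 0 [x]
mergesort' len xs =
    let half = len `div` 2
        SortedList lc lxs = mergesort' half (take half xs)
        SortedList rc rxs = mergesort' (len - half) (drop half xs)
        merged = merge2' half lxs rxs
    in SortedList (lc + rc + inversionCount merged) (list merged)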
The following (suboptimal) code generates all the subsets of size N of a given set.
This code works but, as I said, it is highly suboptimal. Using an intermediate list to avoid the O(log n) cost of Set.insert doesn't seem to help, due to the large cost of later converting the list back to a Set.
Can anybody suggest how to optimize the code?
import qualified Data.Set as Set

subsetsOfSizeN :: Ord a => Int -> Set.Set a -> Set.Set (Set.Set a)
subsetsOfSizeN n s
    | Set.size s < n || n < 0 = error "subsetOfSizeN: wrong parameters"
    | otherwise = doSubsetsOfSizeN n s
  where
    doSubsetsOfSizeN n s
        | n == 0 = Set.singleton Set.empty
        | Set.size s == n = Set.singleton s
        | otherwise =
            case Set.minView s of
                Nothing -> Set.empty
                Just (firstS, restS) ->
                    let partialN n = doSubsetsOfSizeN n restS in
                    Set.map (Set.insert firstS) (partialN (n-1)) `Set.union` partialN n
This is inspired by Pascal's triangle.
choose :: [b] -> Int -> [[b]]
_ `choose` 0 = [[]]
[] `choose` _ = []
(x:xs) `choose` k = (x:) `fmap` (xs `choose` (k-1)) ++ xs `choose` k
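For example, at the ghci prompt:
> [1,2,3] `choose` 2
[[1,2],[1,3],[2,3]]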
This code works but, as I said, is highly suboptimal.
Doesn't seem so terribly bad to me. The number of subsets of size k of a set of size n is n `choose` k, which grows rather fast for k ~ n/2. So creating all the subsets must scale badly.
Using an intermediate list to avoid the O(log(n)) of Set.insert doesn't seem to help, due to the large cost of later reconverting the list to a Set.
Hmm, I found using lists to give better performance. Not asymptotically, I think, but by a non-negligible, more-or-less constant factor.
But first, there is an inefficiency in your code that is simple to repair:
Set.map (Set.insert firstS) (partialN (n-1))
Note that Set.map must rebuild a tree from scratch. But we know that firstS is always smaller than any element in any of the sets in partialN (n-1), so we can use Set.mapMonotonic, which can reuse the spine of the set.
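Concretely, the last line of doSubsetsOfSizeN becomes:

Set.mapMonotonic (Set.insert firstS) (partialN (n-1)) `Set.union` partialN n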
And that principle is also what makes lists attractive: the subsets are generated in lexicographic order, so instead of Set.fromList we can use the more efficient Set.fromDistinctAscList. Transcribing the algorithm to lists yields
onlyLists :: Ord a => Int -> Set.Set a -> Set.Set (Set.Set a)
onlyLists n s
    | n == 0 = Set.singleton Set.empty
    | Set.size s < n || n < 0 = error "onlyLists: out of range n"
    | Set.size s == n = Set.singleton s
    | otherwise = Set.fromDistinctAscList . map Set.fromDistinctAscList $
                  go n (Set.size s) (Set.toList s)
  where
    go 1 _ xs = map return xs
    go k l (x:xs)
        | k == l = [x:xs]
        | otherwise = map (x:) (go (k-1) (l-1) xs) ++ go k (l-1) xs
which in the few benchmarks I've run is between 1.5 and 2× faster than the amended algorithm using Sets.
And that is in turn, in my criterion benchmarks, nearly twice as fast as dave4420's.
subsets :: Int -> [a] -> [[a]]
subsets 0 _ = [[]]
subsets _ [] = []
subsets k (x:xs) = map (x:) (subsets (k - 1) xs) ++ subsets k xs
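For example, at the ghci prompt:
> subsets 2 "abc"
["ab","ac","bc"]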
First, use a better algorithm.
Look at your final line:
Set.map (Set.insert firstS) (partialN (n-1)) `Set.union` partialN n
Evaluating doSubsetsOfSizeN k (Set.fromList $ 1:2:xs) will involve evaluating doSubsetsOfSizeN (k-1) (Set.fromList xs) twice (once when inserting 1, and once when inserting 2). This duplication is wasteful.
Enter a better algorithm.
mine :: Ord a => Int -> Set.Set a -> Set.Set (Set.Set a)
mine n s | Set.size s < n || n < 0 = Set.empty
         | otherwise = Set.foldr cons nil s !! n
  where
    nil :: Ord a => [Set.Set (Set.Set a)]
    nil = Set.singleton Set.empty : repeat Set.empty
    cons :: Ord a => a -> [Set.Set (Set.Set a)] -> [Set.Set (Set.Set a)]
    cons x sets = zipWith Set.union sets
                          (Set.empty : map (Set.map $ Set.insert x) sets)
mine 9 (Data.Set.fromList [0..18]) `seq` () is faster than subsetsOfSizeN 9 (Data.Set.fromList [0..18]) `seq` () and should have better asymptotic performance.
I haven't tried optimising this any further. There may be a better algorithm still.
(If the cost of insert and fromList are issues, you should consider giving back a list of lists instead of a set of sets.)
I found this; maybe it can help you.
f [] = [[1]]
f l = [u] : l'
  where
    u  = succ (head (head l))
    l' = l ++ map (\x -> u : x) l

fix f n = if n == 0 then [] else f (fix f (n-1))
To test it:
> length (fix f 10)
1023 -- the empty set is never included; counting it as well would give 1024
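To see what f actually builds, here is a small trace at the ghci prompt; note that it produces all non-empty subsets of [1..n] and ignores the requested subset size:
> fix f 3
[[3],[2],[1],[2,1],[3,2],[3,1],[3,2,1]]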