Translate Haskell code into Standard ML (combinations with repetition) - algorithm

I am writing code for permutations with repetition: n elements drawn from a choice of k values, so the cardinality of the resulting set should be k^n. In Haskell, it is fairly easy. For example, one can just write:
import Control.Monad (replicateM)
main = mapM_ print (replicateM 2 [1,2,3])
and you will get a list like:
[1,1] [1,2] [1,3] [2,1] [2,2] [2,3] [3,1] [3,2] [3,3]
But in Standard ML, I don't know how to do this.
I tried this:
fun combs_with_rep (k, xxs) =
  case (k, xxs) of
      (0, _)     => [[]]
    | (_, [])    => []
    | (k, x::xs) =>
        List.map (fn ys => x::ys) (combs_with_rep (k-1, xxs)) @ combs_with_rep (k, xs)
But the list is not complete and I don't know why....
Is there an analogous way to write this in SML, as in the Haskell version? Or how should I fix my SML code?
Any help is appreciated!

Just transform the monadic code:
rep_comb n xs   -- n times choose 1 elem from xs, repetition allowed
  = replicateM n xs
  = sequence $ replicate n xs
  = foldr k (return []) $ replicate n xs
      where
        k m m' = do { x <- m; xs <- m'; return (x:xs) }
  = case n of 0 -> [[]]
              _ -> k xs (rep_comb (n-1) xs)
      where
        k m m' = m >>= (\x ->
                 m' >>= (\xs -> return (x:xs)))
  = case n of 0 -> [[]]
              _ -> xs >>= (\y ->
                   rep_comb (n-1) xs >>= (\ys -> [y:ys]))
  -- i.e.
  = case n of 0 -> [[]]
              _ -> [y:ys | y <- xs, ys <- rep_comb (n-1) xs]
  = case n of 0 -> [[]]
              _ -> concatMap (\y -> map (y:) (rep_comb (n-1) xs)) xs
  -- or, in a different order
  = case n of 0 -> [[]]
              _ -> [y:ys | ys <- rep_comb (n-1) xs, y <- xs]
  = case n of 0 -> [[]]
              _ -> concatMap (\ys -> map (:ys) xs) (rep_comb (n-1) xs)
Now you can translate this to ML.
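For instance, one possible Standard ML transcription of the final concatMap form — a minimal sketch, assuming the name rep_comb and only the Basis functions List.map and List.concat:
(* rep_comb n xs: all length-n lists with elements drawn from xs, repetition allowed *)
fun rep_comb 0 _  = [[]]
  | rep_comb n xs =
      List.concat
        (List.map (fn y => List.map (fn ys => y :: ys) (rep_comb (n - 1) xs)) xs);

(* rep_comb 2 [1, 2, 3];
   val it = [[1,1],[1,2],[1,3],[2,1],[2,2],[2,3],[3,1],[3,2],[3,3]] : int list list *)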

Related

Balanced Partition in Haskell

In Haskell, how can I generate a balanced partition of a set?
Assuming I have the set {1,3,4,6,9}, a balanced partition of that set would be s1 = {9,3} and s2 = {6,4,1}, since sum(s1) - sum(s2) = 1.
Well, for brute force, we can generate all partitions recursively by generating partitions for the tail and then putting the head on the left list or the right:
partitions :: [a] -> [([a], [a])]
partitions [] = [([], [])]
partitions (x : xs) = let ps = partitions xs in
  [(x : ys, zs) | (ys, zs) <- ps] ++ [(ys, x : zs) | (ys, zs) <- ps]
Then we need a way to compute the unbalance:
unbalance :: Num a => ([a], [a]) -> a
unbalance (ys, zs) = abs (sum ys - sum zs)
and then put it all together:
balancedPartition :: (Num a, Ord a) => [a] -> ([a], [a])
balancedPartition = minimumBy (comparing unbalance) . partitions
Here's the complete module:
module Balance where
import Data.List(minimumBy)
import Data.Ord(comparing)
partitions :: [a] -> [([a], [a])]
partitions [] = [([], [])]
partitions (x : xs) = let ps = partitions xs in
  [(x : ys, zs) | (ys, zs) <- ps] ++ [(ys, x : zs) | (ys, zs) <- ps]
unbalance :: Num a => ([a], [a]) -> a
unbalance (ys, zs) = abs (sum ys - sum zs)
balancedPartition :: (Num a, Ord a) => [a] -> ([a], [a])
balancedPartition = minimumBy (comparing unbalance) . partitions
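As a quick (hypothetical) GHCi check with the set from the question, the two halves' sums differ by 1; which of the two mirror-image splits comes out first depends on the order in which partitions enumerates them:
*Balance> balancedPartition [1,3,4,6,9]
([1,4,6],[3,9])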
Here's a solution that does a little better:
-- needs: Control.Arrow for (***), first, second; Data.Tuple for swap;
-- Data.List for sort; Data.Ord for comparing
import Control.Arrow (first, second, (***))
import Data.List (sort)
import Data.Ord (comparing)
import Data.Tuple (swap)

balancedPartition :: (Num a, Ord a) => [a] -> ([a], [a])
balancedPartition = snd . head . partitionsByBadness . sort
  where
    -- recursively builds a list of possible partitionings and their badness,
    -- sorted by their (increasing) badness
    partitionsByBadness [] = [(0, ([], []))]
    partitionsByBadness (x:xs) = let res = partitionsByBadness xs
                                     withX = map ((+x) *** first (x:)) res
                                     sansX = map (subtract x *** second (x:)) res
                                 in merge withX $ normalize sansX
    -- When items are added to the second list, the difference between the sums
    -- decreases - and might become negative.
    -- We take those cases and swap the lists, so that the first list always has
    -- the greater sum and the difference is always positive,
    -- so that we can sort the list again (with linear complexity).
    normalize xs = let (neg, pos) = span ((<0) . fst) xs
                   in merge pos $ reverse $ map (negate *** swap) neg
    -- merge two sorted lists (as known from mergesort, but
    -- omits "duplicates" with the same badness)
    merge :: Ord k => [(k, v)] -> [(k, v)] -> [(k, v)]
    merge [] zss = zss
    merge yss [] = yss
    merge yss@(y:ys) zss@(z:zs) = case comparing fst y z of
      LT -> y : merge ys zss
      EQ -> merge ys zss
      GT -> z : merge yss zs
Bin packing works pretty well:
% stack ghci --package Binpack
λ: import Data.BinPack
λ: let bins numberOfBins items = let totalSize = sum items; binSize = succ (totalSize `div` (max 1 numberOfBins)) in binpack WorstFit Decreasing id (replicate numberOfBins (emptyBin binSize)) items
λ: bins 2 [1,3,4,6,9]
([(0,[3,9]),(1,[1,4,6])],[])
If you know your input will fit into the bins you can extract out the partitions:
λ: map snd . fst . bins 2 $ [1,3,4,6,9]
[[3,9],[1,4,6]]

minimum difference between numbers

We have K different sets of numbers. We have to choose one number from each set, so that the difference between the highest and the lowest chosen numbers is minimal.
Any ideas?
Something like this (written in Haskell)?
import Data.List (minimum, maximum, minimumBy)
minDiff (x:xs) = comb (head x) (diff $ matches (head x)) x where
  lenxs = length xs
  diff m = maximum m - minimum m
  matches y = minimumBy (\a b -> compare (diff a) (diff b)) $ p [] 0 where
    md  = map (minimumBy (\a b -> compare (abs (a - y)) (abs (b - y)))) xs
    mds = [m | m <- foldl (\b a -> filter (\z -> abs (z - y) == abs (y - md!!a)) (xs!!a) : b) [] [0..lenxs - 1]]
    p result index
      | index == lenxs = [y:result]
      | otherwise = do
          p' <- mds!!index
          p (p':result) (index + 1)
  comb result difference [] = matches result
  comb result difference (z:zs) =
    let diff' = diff (matches z)
    in if diff' < difference
         then comb z diff' zs
         else comb result difference zs
OUTPUT:
*Main> minDiff [[1,3,5,9,10],[2,4,6,8],[7,11,12,13]]
[5,6,7]
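For comparison, here is a brute-force sketch of my own (not from the answer above): sequence enumerates one pick per list (the Cartesian product of the sets), and we minimize the spread. It is exponential in the number of sets, but it reproduces the output above on this input:
import Data.List (minimumBy)
import Data.Ord (comparing)

-- choose one element from each list so that (maximum - minimum) is smallest
minDiffBrute :: (Num a, Ord a) => [[a]] -> [a]
minDiffBrute = minimumBy (comparing spread) . sequence
  where spread ys = maximum ys - minimum ys

-- *Main> minDiffBrute [[1,3,5,9,10],[2,4,6,8],[7,11,12,13]]
-- [5,6,7]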

Simplifying selectionsort and mergesort

I have managed to implement insertionsort and quicksort in a couple of lines, but selectionsort and mergesort still give me headaches ;)
selectionsort [] = []
selectionsort (x:xs) =
  let (minimum, greater) = extractMinimum x [] xs
  in minimum : selectionsort greater

extractMinimum minimumSoFar greater [] = (minimumSoFar, greater)
extractMinimum minimumSoFar greater (x:xs)
  | x < minimumSoFar = extractMinimum x (minimumSoFar:greater) xs
  | otherwise        = extractMinimum minimumSoFar (x:greater) xs
Is something like the extractMinimum function available in the standard library? I tried hoogling for (a -> a -> Bool/Ordering) -> [a] -> (a, [a]) without any luck.
mergesort []  = []
mergesort [x] = [x]
mergesort xs =
  let (left, right) = splitAt (length xs `div` 2) xs
  in merge (mergesort left) (mergesort right)

merge xs [] = xs
merge [] ys = ys
merge xxs@(x:xs) yys@(y:ys)
  | x < y     = x : merge xs yys
  | otherwise = y : merge xxs ys
Again, do I have to write merge myself, or can I reuse existing components? Hoogle gave me no useful results for (a -> a -> Bool/Ordering) -> [a] -> [a] -> [a].
There's nothing in the standard libraries, but at least merge is provided by a package on hackage, although I'm not sure it's worth pulling in a dependency for such a simple function.
However,
merge xxs@(x:xs) yys@(y:ys)
  | x < y     = x : merge xs yys
  | otherwise = y : merge xxs ys
produces a non-stable sort; to get a stable sort, the condition to place x first should be x <= y.
For extractMinimum, I haven't found anything either, but I can offer an alternative definition,
import Data.List (foldl')

extractMinimum :: Ord a => a -> [a] -> (a, [a])
extractMinimum x = foldl' select (x, [])
  where
    select (mini, greater) y
      | y < mini  = (y, mini:greater)
      | otherwise = (mini, y:greater)
A nice definition of selectionSort would be
import Data.List -- for unfoldr

selectionSort :: Ord a => [a] -> [a]
selectionSort = unfoldr getMin
  where
    getMin []     = Nothing
    getMin (x:xs) = Just $ extractMinimum x xs
My suggestion for selection sort:
import Data.List
selectionsort xs = unfoldr f xs where
  f [] = Nothing
  f xs = Just $ extractMinimum xs

extractMinimum (x:xs) = foldl' f (x, []) xs where
  f (minimum, greater) x | x < minimum = (x, minimum : greater)
                         | otherwise   = (minimum, x : greater)
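A quick (hypothetical) GHCi check of this version; extractMinimum returns the minimum together with the remaining elements (the accumulator reverses their order), and the duplicate 1 survives the sort:
*Main> extractMinimum [3,1,4,1,5]
(1,[5,1,4,3])
*Main> selectionsort [3,1,4,1,5]
[1,1,3,4,5]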

Statement for checking only once? Haskell

I have two lists of unequal length. When I add both of them, I want the final list to have the length of the longer list.
addtwolists [0,0,221,2121] [0,0,0,99,323,99,32,2332,23,23]
>[0,0,221,2220,323,99,32,2332,23,23]
addtwolists [945,45,4,45,22,34,2] [0,34,2,34,2]
>[945,79,6,79,24,34,2]
zerolist :: Int -> [Integer]
zerolist x = take x (repeat 0)
addtwolists :: [Integer] -> [Integer] -> [Integer]
addtwolists x y = zipWith (+) (x ++ (zerolist ((length y)-(length x)))) (y ++ (zerolist ((length x)-(length y))))
This code is inefficient. So I tried:
addtwolist :: [Integer] -> [Integer] -> [Integer]
addtwolist x y = zipWith (+) (x ++ [head (zerolist ((length y)-(length x))) | (length y) > (length x)]) (y ++ [head (zerolist ((length x)-(length y))) | (length x) > (length y)])
Is there any other way to increase the efficiency? Could you check only once to see which list is bigger?
Your implementation is slow because it calls the length function on each list several times, and Haskell computes list length by walking the entire list and counting the number of elements it traverses.
The first speedy method that came to my mind was explicit recursion.
addLists :: [Integer] -> [Integer] -> [Integer]
addLists xs [] = xs
addLists [] ys = ys
addLists (x:xs) (y:ys) = x + y : addLists xs ys
I'm not aware of any standard Prelude functions that would fill your exact need, but if you wanted to generalize this to a higher-order function, you could do worse than this. The two extra values passed to the zip function are filler used in computing the remaining portion of the longer list after the shorter list has been exhausted.
zipWithExtend :: (a -> b -> c) -> [a] -> [b] -> a -> b -> [c]
zipWithExtend f [] [] a' b' = []
zipWithExtend f (a:as) [] a' b' = f a b' : zipWithExtend f as [] a' b'
zipWithExtend f [] (b:bs) a' b' = f a' b : zipWithExtend f [] bs a' b'
zipWithExtend f (a:as) (b:bs) a' b' = f a b : zipWithExtend f as bs a' b'
Usage:
> let as = [0,0,221,2121]
> let bs = [0,0,0,99,323,99,32,2332,23,23]
> zipWithExtend (+) as bs 0 0
[0,0,221,2220,323,99,32,2332,23,23]
This can be done in a single iteration, which should be a significant improvement for long lists. It's probably simplest with explicit recursion:
addTwoLists xs [] = xs
addTwoLists [] ys = ys
addTwoLists (x:xs) (y:ys) = x+y:addTwoLists xs ys
Just because I can't help bikeshedding, you might enjoy this function:
Prelude Data.Monoid Data.List> :t map mconcat . transpose
map mconcat . transpose :: Monoid b => [[b]] -> [b]
For example:
> map (getSum . mconcat) . transpose $ [map Sum [0..5], map Sum [10,20..100]]
[10,21,32,43,54,65,70,80,90,100]
Two suggestions:
addtwolists xs ys =
  let common = zipWith (+) xs ys
      len    = length common
  in common ++ drop len xs ++ drop len ys

addtwolists xs ys | length xs < length ys = zipWith (+) (xs ++ repeat 0) ys
                  | otherwise             = zipWith (+) xs (ys ++ repeat 0)
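Either version reproduces the expected result from the question (a hypothetical GHCi check):
> addtwolists [945,45,4,45,22,34,2] [0,34,2,34,2]
[945,79,6,79,24,34,2]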

Merge sorted inputs in Haskell?

I'm a newbie to Haskell, and I'm trying to write an elegant function to merge an arbitrary number of sorted lists into a single sorted list... Can anyone provide an elegant and efficient reference implementation?
Thanks!
Something like this should work:
merge2 pred xs [] = xs
merge2 pred [] ys = ys
merge2 pred (x:xs) (y:ys) =
  case pred x y of
    True  -> x : merge2 pred xs (y:ys)
    False -> y : merge2 pred (x:xs) ys

merge pred []     = []
merge pred (x:[]) = x
merge pred (x:xs) = merge2 pred x (merge pred xs)
Here, the function merge2 merges two lists, and the function merge merges a list of lists. pred is the predicate you use for sorting.
Example:
merge (<) [[1, 3, 9], [2, 3, 4], [7, 11, 15, 22]]
should return
[1,2,3,3,4,7,9,11,15,22]
Since I like taking advantage of infix operators and higher-order functions where it makes sense to, I would write
infixr 5 ##

(##) :: (Ord a) => [a] -> [a] -> [a]
-- if one side is empty, the merge can only go one way
[] ## ys = ys
xs ## [] = xs
-- otherwise, take the smaller of the two heads out, and continue with the rest
(x:xs) ## (y:ys) = case x `compare` y of
  LT -> x : xs ## (y:ys)
  EQ -> x : xs ## ys
  GT -> y : (x:xs) ## ys

-- an n-way merge can be implemented by a repeated 2-way merge
merge :: (Ord a) => [[a]] -> [a]
merge = foldr1 (##)
Here, xs ## ys merges two lists by their natural ordering (and drops duplicates), while merge [xs, ys, zs..] merges any number of lists.
This leads to the very natural definition of the Hamming numbers:
hamming :: (Num a, Ord a) => [a]
hamming = 1 : map (2*) hamming ## map (3*) hamming ## map (5*) hamming
hamming = 1 : merge [map (n*) hamming | n <- [2, 3, 5]] -- alternative
-- this generates, in order, all numbers of the form 2^i * 3^j * 5^k
-- hamming = [1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36,40,45,48,50,..]
Stealing yairchu's unimplemented idea:
{-# LANGUAGE ViewPatterns #-}
import qualified Data.Map as M
import Data.List (foldl', unfoldr)
import Data.Maybe (mapMaybe)

-- merge any number of ordered lists, dropping duplicate elements
merge :: (Ord a) => [[a]] -> [a]
-- create a map of {n: [tails of lists starting with n]}; then
-- repeatedly take the least n and re-insert the tails
merge = unfoldr ((=<<) step . M.minViewWithKey) . foldl' add M.empty where
  add m (x:xs) = M.insertWith' (++) x [xs] m; add m _ = m
  step ((x, xss), m) = Just (x, foldl' add m xss)

-- merge any number of ordered lists, preserving duplicate elements
mergeDup :: (Ord a) => [[a]] -> [a]
-- create a map of {(n, i): tail of list number i (which starts with n)}; then
-- repeatedly take the least n and re-insert the tail
-- the index i <- [0..] is used to prevent the map from losing duplicates
mergeDup = unfoldr step . M.fromList . mapMaybe swap . zip [0..] where
  swap (n, (x:xs)) = Just ((x, n), xs); swap _ = Nothing
  step (M.minViewWithKey -> Just (((x, n), xs), m)) =
    Just (x, case xs of y:ys -> M.insert (y, n) ys m; _ -> m)
  step _ = Nothing
Here merge, like my original, eliminates duplicates, while mergeDup preserves them (like Igor's answer).
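A quick (hypothetical) GHCi check on the example from Igor's answer shows the difference between the two:
*Main> merge [[1, 3, 9], [2, 3, 4], [7, 11, 15, 22]]
[1,2,3,4,7,9,11,15,22]
*Main> mergeDup [[1, 3, 9], [2, 3, 4], [7, 11, 15, 22]]
[1,2,3,3,4,7,9,11,15,22]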
If efficiency weren't a concern, I'd go with
merge = sort . concat
otherwise:
merge :: Ord a => [[a]] -> [a]
merge [] = []
merge lists =
  minVal : merge nextLists
  where
    heads = map head lists
    (minVal, minIdx) = minimum $ zip heads [0..]
    (pre, (_:nextOfMin):post) = splitAt minIdx lists
    nextLists =
      if null nextOfMin
        then pre ++ post
        else pre ++ nextOfMin : post
Note, however, that this implementation always searches linearly for the minimum (while for a large number of lists one may wish to maintain a heap, etc.).
Unlike the other posts, I would have merge :: [a] -> [a] -> [a]
type SortedList a = [a]
merge :: (Ord a) => SortedList a -> SortedList a -> SortedList a
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x < y     = x : merge xs (y : ys)
  | otherwise = y : merge (x : xs) ys
mergeAll :: (Ord a) => [SortedList a] -> SortedList a
mergeAll = foldr merge []
Just a quick note: if you want to have the optimal log n behavior when merging several lists (such as you'd get with a priority queue), you can do it very easily with a tweak to Igor's beautiful solution above. (I would have put this as a comment on his answer above, but I don't have enough reputation.) In particular, you do:
merge2 pred xs [] = xs
merge2 pred [] ys = ys
merge2 pred (x:xs) (y:ys) =
  case pred x y of
    True  -> x : merge2 pred xs (y:ys)
    False -> y : merge2 pred (x:xs) ys

everyother []         = []
everyother (e0:[])    = e0:[]
everyother (e0:e1:es) = e0 : everyother es

merge pred []     = []
merge pred (x:[]) = x
merge pred xs     = merge2 pred (merge pred . everyother $ xs)
                                (merge pred . everyother . tail $ xs)
Note that a real priority queue would be a bit faster/more space efficient, but that this solution is asymptotically just as good and, as I say, has the advantage that it's a very minor tweak to Igor's beautifully clear solution above.
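As a (hypothetical) sanity check, it agrees with Igor's merge on the earlier example:
*Main> merge (<) [[1, 3, 9], [2, 3, 4], [7, 11, 15, 22]]
[1,2,3,3,4,7,9,11,15,22]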
Comments?
