Finding a "Count Sequence" - algorithm

Given a list of integers xs, let:
count :: [Integer] -> Integer -> Integer
count xs n = fromIntegral . length . filter (== n) $ xs
count the number of times the integer n occurs in the list. (The fromIntegral is needed because length returns an Int.)
Now, given a "list" (some sort of sequence of integers; it can be something other than a List) of length n, write a function
countSequence :: [Integer] -> Integer -> Integer -> [Integer]
countSequence xs n m = [count xs x | x <- [0..m]]
that outputs the "list of counts" (the 0th index contains the number of times 0 occurs in the list, the 1st index the number of times 1 occurs, etc.) with time complexity o(m*n) (note the little-o).
The above implementation I've given has complexity O(m*n). In Python (which I'm more familiar with), it's easy to do this in O(m + n) time: iterate through the list, and for each element increment a counter in some other list, which is initialized to all zeros and has length m+1.
How could I get a better implementation in Haskell? I'd prefer it not be some trivial transcription of the Python solution (such as adding another input to the function to hold the "list of counts" and then iterating through it).
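For concreteness, a direct transcription of that Python idea into Haskell might look like the following sketch with a mutable unboxed array (exactly the kind of "trivial" translation I'd rather avoid; countsST is just a name made up for this sketch):
import Control.Monad (forM_, when)
import Data.Array.ST (newArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray, elems)

countsST :: Int -> [Int] -> [Int]
countsST m xs = elems table
  where
    table :: UArray Int Int
    table = runSTUArray $ do
      arr <- newArray (0, m) 0                       -- m+1 counters, all zero
      forM_ xs $ \x ->                               -- one pass over the list
        when (0 <= x && x <= m) $
          readArray arr x >>= writeArray arr x . (+ 1)
      return arr
For example, countsST 2 [1,2,3,1,2,-1] gives [0,2,2].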

In O(n+m) (sort of, I think, maybe):
import Data.List (foldl')
import Data.Ix (inRange)
import qualified Data.IntMap.Strict as IM

countSequence m =
    foldl' count IM.empty . filter (inRange (0,m))
  where count a b = IM.insertWith (+) b 1 a
gives
> countSequence 2 [1,2,3,1,2,-1]
fromList [(1,2),(2,2)]
I haven't used n because you also didn't use n and I'm not sure what it's supposed to be. I also moved the list to the last argument to put it in a position to be eta reduced.
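If you do want the dense "list of counts" (including zeros) back out of the map, a small helper can read it off; toCounts is a hypothetical name:
-- Hypothetical helper: recover the dense list of counts for 0..m.
toCounts :: Int -> IM.IntMap Int -> [Int]
toCounts m cm = [IM.findWithDefault 0 k cm | k <- [0..m]]
so that toCounts 2 (countSequence 2 [1,2,3,1,2,-1]) gives [0,2,2].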

I think you should use your Python intuition -- iterate through the one list and increment a counter in another list. Here's an implementation with O(n+m) runtime:
import Data.Array
countSequence xs m = accumArray (+) 0 (0,m) [(x, 1) | x <- xs, inRange (0,m) x]
(This use case is even the motivating example for the existence of accumArray in the documentation!) In ghci:
> countSequence ([1..5] ++ [1,3..5] ++ [1,4..5] ++ [1,5]) 3
array (0,3) [(0,0),(1,4),(2,1),(3,2)]

I guess using Data.IntMap would be as efficient as it gets for this job. One foldr pass establishes the IntMap (cm), and then a map constructs a new list holding the count of the element at each corresponding position.
import qualified Data.IntMap.Lazy as IM

countSequence :: [Int] -> [Int]
countSequence xs = map (\x -> IM.findWithDefault 0 x cm) xs
  where
    cm = foldr (\x m -> IM.alter (\mx -> if mx == Nothing then Just 1 else fmap (+1) mx) x m) IM.empty xs
*Main> countSequence [1,2,5,1,3,7,8,5,6,4,1,2,3,7,9,3,4,8]
[3,2,2,3,3,2,2,2,1,2,3,2,3,2,1,3,2,2]
*Main> countSequence [4,5,4]
[2,1,2]
*Main> countSequence [9,8,7,6,5]
[1,1,1,1,1]

Generalizing a combinatoric function?

I've been solving a few combinatoric problems in Haskell, so I wrote these two functions:
permutations :: (Eq a) => [a] -> [[a]]
permutations [] = [[]]
permutations list = do
  x <- list
  xs <- permutations (filter (/= x) list)
  return (x : xs)

combinations :: (Eq a, Ord a) => Int -> [a] -> [[a]]
combinations 0 _ = [[]]
combinations n list = do
  x <- list
  xs <- combinations (n-1) (filter (> x) list)
  return (x : xs)
Which works as follows:
*Main> permutations [1,2,3]
[[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]
*Main> combinations 2 [1,2,3,4]
[[1,2],[1,3],[1,4],[2,3],[2,4],[3,4]]
Those were uncomfortably similar, so I had to abstract it. I wrote the following abstraction:
combinatoric next [] = [[]]
combinatoric next list = do
  x <- list
  xs <- combinatoric next (next x list)
  return (x : xs)
Which receives a function that controls how to filter the elements of the list. It can be used to easily define permutations:
permutations :: (Eq a) => [a] -> [[a]]
permutations = combinatoric (\ x ls -> filter (/= x) ls)
But I couldn't define combinations this way, since it carries a state (n). I could extend combinatoric with an additional state argument, but that would become too clunky, and I remember such an approach wasn't necessary in a somewhat similar situation. Thus, I wonder: is it possible to define combinations using combinatoric? If not, what is a better abstraction that successfully subsumes both functions?
This isn't a direct answer to your question (sorry), but I don't think your code is correct. The Eq and Ord constraints tipped me off - they shouldn't be necessary - so I wrote a couple of QuickCheck properties.
import Test.QuickCheck

prop_numberOfPermutations xs = length (permutations xs) === factorial (length xs)
  where _ = (xs :: [Int]) -- force xs to be instantiated to [Int]

prop_numberOfCombinations (Positive n) (NonEmpty xs) = n <= length xs ==>
    length (combinations n xs) === choose (length xs) n
  where _ = (xs :: [Int])

factorial :: Int -> Int
factorial x = foldr (*) 1 [1..x]

choose :: Int -> Int -> Int
choose n 0 = 1
choose 0 r = 0
choose n r = choose (n-1) (r-1) * n `div` r
The first property checks that the number of permutations of a list of length n is n!. The second checks that the number of r-combinations of a list of length n is C(n, r). Both of these properties fail when I run them against your definitions:
ghci> quickCheck prop_numberOfPermutations
*** Failed! Falsifiable (after 5 tests and 4 shrinks):
[0,0,0]
3 /= 6
ghci> quickCheck prop_numberOfCombinations
*** Failed! Falsifiable (after 4 tests and 1 shrink):
Positive {getPositive = 2}
NonEmpty {getNonEmpty = [3,3]}
0 /= 1
It looks like your functions fail when the input list contains duplicate elements. Writing an abstraction for an incorrect implementation isn't a good idea - don't try and run before you can walk! You might find it helpful to read the source code for the standard library's definition of permutations, which does not have an Eq constraint.
First let's improve the original functions. You assume that all elements are distinct wrt their equality for permutations, and that they're distinct and have an ordering for combinations. These constraints aren't necessary and as described in the other answer, the code can produce wrong results. Following the robustness principle, let's accept just unconstrained lists. For this we'll need a helper function that produces all possible splits of a list:
split :: [a] -> [([a], a, [a])]
split = loop []
  where
    loop _ [] = []
    loop rs (x:xs) = (rs, x, xs) : loop (x:rs) xs
Note that the implementation causes prefixes returned by this function to be reversed, but it's nothing we require.
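For instance:
*Main> split [1,2,3]
[([],1,[2,3]),([1],2,[3]),([2,1],3,[])]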
This allows us to write generic permutations and combinations.
permutations :: [a] -> [[a]]
permutations [] = [[]]
permutations list = do
  (pre, x, post) <- split list
  -- reversing 'pre' isn't really necessary, but makes the output
  -- order natural
  xs <- permutations (reverse pre ++ post)
  return (x : xs)

combinations :: Int -> [a] -> [[a]]
combinations 0 _ = [[]]
combinations n list = do
  (_, x, post) <- split list
  xs <- combinations (n-1) post
  return (x : xs)
Now what they have in common:
At each step they pick an element to output,
update the list of elements to pick from and
stop after some condition is met.
The last point is a bit problematic, as for permutations we end once the list to choose from is empty, while for combinations we have a counter. This is probably the reason why it was difficult to generalize. We can work around this by realizing that for permutations the number of steps is equal to the length of the input list, so we can express the condition in the number of repetitions.
For such problems it's often very convenient to express them using StateT s [] monad, where s is the state we're working with. In our case it'll be the list of elements to choose from. The core of our combinatorial functions can be then expressed with StateT [a] [] a: pick an element from the state and update the state for the next step. Since the stateful computations all happen in the [] monad, we automatically branch all possibilities. With that, we can define a generic function:
import Control.Monad.State

combinatoric :: Int -> StateT [a] [] b -> [a] -> [[b]]
combinatoric n k = evalStateT $ replicateM n k
And then define permutations and combinations by specifying the appropriate number of repetitions and the core StateT [a] [] a function:
permutations' :: [a] -> [[a]]
permutations' xs = combinatoric (length xs) f xs
  where
    f = StateT $ map (\(pre, x, post) -> (x, reverse pre ++ post)) . split

combinations' :: Int -> [a] -> [[a]]
combinations' n xs = combinatoric n f xs
  where
    f = StateT $ map (\(_, x, post) -> (x, post)) . split
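A quick sanity check in GHCi (this output follows directly from the definitions above):
*Main> permutations' [1,2,3]
[[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]
*Main> combinations' 2 [1,2,3,4]
[[1,2],[1,3],[1,4],[2,3],[2,4],[3,4]]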

Haskell Recursion - finding largest difference between numbers in list

Here's the problem at hand: I need to find the largest difference between adjacent numbers in a list using recursion. Take the following list for example: [1,2,5,6,7,9]. The largest difference between two adjacent numbers is 3 (between 2 and 5).
I know that recursion may not be the best solution, but I'm trying to improve my ability to use recursion in Haskell.
Here's the code I currently have:
largestDiff (x:y:xs) = if (length (y:xs) > 1) then max((x-y), largestDiff (y:xs)) else 0
Basically - the list will keep getting shorter until it reaches 1 (i.e. no more numbers can be compared, then it returns 0). As 0 passes up the call stack, the max function is then used to implement a 'King of the Hill' type algorithm. Finally - at the end of the call stack, the largest number should be returned.
Trouble is, I'm getting an error in my code that I can't work around:
Occurs check: cannot construct the infinite type:
t1 = (t0, t1) -> (t0, t1)
In the return type of a call of `largestDiff'
Probable cause: `largestDiff' is applied to too few arguments
In the expression: largestDiff (y : xs)
In the first argument of `max', namely
`((x - y), largestDiff (y : xs))'
Anyone have some words of wisdom to share?
Thanks for your time!
EDIT: Thanks everyone for your time - I ended up independently discovering a much simpler way after much trial and error.
largestDiff [] = error "List too small"
largestDiff [x] = error "List too small"
largestDiff [x,y] = abs (x - y)
largestDiff (x:y:xs) = max (abs (x - y)) (largestDiff (y:xs))
Thanks again, all!
So the reason why your code is throwing an error is this expression:
max((x-y), largestDiff (y:xs))
In Haskell, you do not wrap the arguments in parentheses and separate them with commas; the correct syntax is
max (x - y) (largestDiff (y:xs))
The syntax you used is getting parsed as
max ((x - y), largestDiff (y:xs))
Which looks like you're passing a tuple to max!
However, this alone does not solve the problem: I still always got 0 back (the differences are computed as x - y, which is negative on an increasing list, so the max with the base case 0 always wins). Instead, I would recommend breaking the problem up into two functions. You want the maximum of the differences, so first write a function to calculate the differences and then a function to calculate the maximum of those:
diffs :: Num a => [a] -> [a]
diffs [] = [] -- No elements case
diffs [x] = [] -- One element case
diffs (x:y:xs) = y - x : diffs (y:xs) -- Two or more elements case
largestDiff :: (Ord a, Num a) => [a] -> a
largestDiff xs = maximum $ map abs $ diffs xs
Notice how I've pulled the recursion out into the simplest possible case. We didn't need to calculate the maximum as we traversed the list; it's possible, just more complex. Since Haskell has a handy built-in function for calculating the maximum of a list for us, we can also leverage that. Our recursive function is clean and simple, and it is then combined with maximum to implement the desired largestDiff. As an FYI, diffs is really just a function to compute the derivative of a list of numbers, it can be a very useful function for data processing.
EDIT: Needed Ord constraint on largestDiff and added in map abs before calculating maximum.
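Checking it against the example from the question:
> diffs [1,2,5,6,7,9]
[1,3,1,1,2]
> largestDiff [1,2,5,6,7,9]
3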
Here's my take at it.
First some helpers:
diff a b = abs(a-b)
pick a b = if a > b then a else b
Then the solution:
mdiff :: [Int] -> Int
mdiff [] = 0
mdiff [_] = 0
mdiff (a:b:xs) = pick (diff a b) (mdiff (b:xs))
You have to provide two base clauses: one for the empty list, and one for the singleton that every non-empty list eventually shrinks down to.
Another solution to this problem, which circumvents your error, can be obtained
by just transforming lists and folding/reducing them.
import Data.List (foldl')
diffs :: (Num a) => [a] -> [a]
diffs x = zipWith (-) x (drop 1 x)
absMax :: (Ord a, Num a) => [a] -> a
absMax x = foldl' max (fromInteger 0) (map abs x)
Now I admit this is a bit dense for a beginner, so I will explain the above.
The function zipWith transforms two given lists by using a binary function,
which is (-) in this case.
The second list we pass to zipWith is drop 1 x, which is just another way of
describing the tail of a list, but where tail [] results in an error,
drop 1 [] just yields the empty list. So drop 1 is the "safer" choice.
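Compare in GHCi:
> tail ([] :: [Int])
*** Exception: Prelude.tail: empty list
> drop 1 ([] :: [Int])
[]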
So the first function calculates the adjacent differences.
The name of the second function suggests that it calculates the maximum absolute
value of a given list, which is only partly true: it yields 0 if passed an
empty list.
But how does this happen? Reading from right to left, we see that map abs
transforms every list element to its absolute value, which requires
the Num a constraint. Then the foldl' function traverses the list and
accumulates the maximum of the previous accumulator and the current element of
the list traversal. Moreover, I'd like to mention that foldl' is the "strict"
sibling of the foldl function; the latter is rarely of use,
because it tends to build up a bunch of unevaluated expressions called thunks.
So let's quit all this blah blah and see it in action ;-)
> let a = diffs [1..3] :: [Int]
>>> zipWith (-) [1,2,3] (drop 1 [1,2,3])
<=> zipWith (-) [1,2,3] [2,3]
<=> [1-2,2-3] -- zipWith stops at the end of the SHORTER list
<=> [-1,-1]
> b = absMax a
>>> foldl' max (fromInteger 0) (map abs [-1,-1])
-- fromInteger 0 is in this case just 0; interesting stuff only happens
-- for other numerical types
<=> foldl' max 0 (map abs [-1,-1])
<=> foldl' max 0 [1,1]
<=> foldl' max (max 0 1) [1]
<=> foldl' max 1 [1]
<=> foldl' max (max 1 1) []
<=> foldl' max 1 [] -- foldl' _ acc [] returns just the accumulator
<=> 1

Finding unique (as in only occurring once) element haskell

I need a function which takes a list and returns the unique element if it exists, or [] if it doesn't. If many unique elements exist, it should return the first one (without wasting time finding the others).
Additionally I know that all elements in the list come from (small and known) set A.
For example this function does the job for Ints:
import Data.List (group, sort)

unique :: Ord a => [a] -> [a]
unique li = first $ filter ((==1).length) ((group.sort) li)
  where first [] = []
        first (x:xs) = x
ghci> unique [3,5,6,8,3,9,3,5,6,9,3,5,6,9,1,5,6,8,9,5,6,8,9]
[1]
This is however not good enough, because it involves sorting (O(n log n)) while it could be done in linear time (because A is small).
Additionally, it requires the list elements to be Ord, while all that should be needed is Eq. It would also be nice if the number of comparisons was as small as possible (i.e. if we traverse a list and encounter an element el twice, we don't test subsequent elements for equality with el).
This is why for example this: Counting unique elements in a list doesn't solve the problem - all answers involve either sorting or traversing the whole list to find count of all elements.
The question is: how to do it correctly and efficiently in Haskell ?
Okay, linear time, from a finite domain. The running time will be O((m + d) log d), where m is the size of the list and d is the size of the domain, which is linear when d is fixed. My plan is to use the elements of the set as the keys of a trie, with the counts as values, then look through the trie for elements with count 1.
import qualified Data.IntTrie as IntTrie
import Data.List (foldl')
import Control.Applicative
Count each of the elements. This traverses the list once, builds a trie with the results (O(m log d)), then returns a function which looks up the result in the trie (with running time O(log d)).
counts :: (Enum a) => [a] -> (a -> Int)
counts xs = IntTrie.apply (foldl' insert (pure 0) xs) . fromEnum
  where
    insert t x = IntTrie.modify' (fromEnum x) (+1) t
We use the Enum constraint to convert values of type a to integers in order to index them in the trie. An Enum instance is part of the witness of your assumption that a is a small, finite set (Bounded would be the other part, but see below).
And then look for ones that are unique.
uniques :: (Eq a, Enum a) => [a] -> [a] -> [a]
uniques dom xs = filter (\x -> cts x == 1) dom
  where
    cts = counts xs
This function takes as its first parameter an enumeration of the entire domain. We could have required a Bounded a constraint and used [minBound..maxBound] instead, which is semantically appealing to me since finite is essentially Enum+Bounded, but quite inflexible since now the domain needs to be known at compile time. So I would choose this slightly uglier but more flexible variant.
uniques traverses the domain once (lazily, so head . uniques dom will only traverse as far as it needs to in order to find the first unique element -- not in the list, but in dom), for each element running the lookup function, which we have established is O(log d); so the filter takes O(d log d), and building the table of counts takes O(m log d). So uniques runs in O((m + d) log d), which is linear when d is fixed. It will take at least Ω(m log d) to get any information out of it, because it has to traverse the whole list to build the table (you have to get all the way to the end of the list to see if an element was repeated, so you can't do better than this).
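For example, taking Char as the domain (and assuming the data-inttrie package is installed), this should give:
ghci> head (uniques ['a'..'z'] "abacabd")
'c'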
There really isn't any way to do this efficiently with just Eq. You'd need to use some much less efficient way to build the groups of equal elements, and you can't know that only one of a particular element exists without scanning the whole list.
Also, note that to avoid useless comparisons you'd need a way of checking to see if an element has been encountered before, and the only way to do that would be to have a list of elements known to have multiple occurrences, and the only way to check if the current element is in that list is... to compare it for equality with each.
If you want this to work faster than O(something really horrible) you need that Ord constraint.
Ok, based on the clarifications in comments, here's a quick and dirty example of what I think you're looking for:
import Data.List (delete)

unique [] _ _ = Nothing
unique _ [] [] = Nothing
unique _ (r:_) [] = Just r
unique candidates results (x:xs)
  | x `notElem` candidates = unique candidates results xs
  | x `elem` results       = unique (delete x candidates) (delete x results) xs
  | otherwise              = unique candidates (x:results) xs
The first argument is a list of candidates, which should initially be all possible elements. The second argument is the list of possible results, which should initially be empty. The third argument is the list to examine.
If it runs out of candidates, or reaches the end of the list with no results, it returns Nothing. If it reaches the end of the list with results, it returns the one at the front of the result list.
Otherwise, it examines the next input element: If it's not a candidate, it ignores it and continues. If it's in the result list we've seen it twice, so remove it from the result and candidate lists and continue. Otherwise, add it to the results and continue.
Unfortunately, this still has to scan the entire list for even a single result, since that's the only way to be sure it's actually unique.
First off, if your function is intended to return at most one element, you should almost certainly use Maybe a instead of [a] to return your result.
Second, at minimum, you have no choice but to traverse the entire list: you can't tell for sure if any given element is actually unique until you've looked at all the others.
If your elements are not Ordered, but can only be tested for Equality, you really have no better option than something like:
firstUnique (x:xs)
  | elem x xs = firstUnique (filter (/= x) xs)
  | otherwise = Just x
firstUnique [] = Nothing
Note that you don't need to filter out the duplicated elements if you don't want to -- the worst case is quadratic either way.
Edit:
The above misses the possibility of early exit due to the above-mentioned small/known set of possible elements. However, note that the worst case will still require traversing the entire list: all that is necessary is for at least one of these possible elements to be missing from the list...
However, an implementation that provides an early out in case of set exhaustion:
firstUnique = f [] [<small/known set of possible elements>] where
  f [] [] _ = Nothing -- early out
  f uniques noshows (x:xs)
    | elem x uniques = f (delete x uniques) noshows xs
    | elem x noshows = f (x:uniques) (delete x noshows) xs
    | otherwise      = f uniques noshows xs
  f [] _ [] = Nothing
  f (u:_) _ [] = Just u
Note that if your list has elements which shouldn't be there (because they aren't in the small/known set), they will be pointedly ignored by the above code...
As others have said, without any additional constraints, you can't do this in less than quadratic time, because without knowing something about the elements, you can't keep them in some reasonable data structure.
If we are able to compare elements, an obvious O(n log n) solution is to compute the count of each element first and then find the first one with a count equal to 1:
import Data.List (foldl', find)
import Data.Map (Map)
import qualified Data.Map as Map
import Data.Maybe (fromMaybe)
count :: (Ord a) => Map a Int -> a -> Int
count m x = fromMaybe 0 $ Map.lookup x m
add :: (Ord a) => Map a Int -> a -> Map a Int
add m x = Map.insertWith (+) x 1 m
uniq :: (Ord a) => [a] -> Maybe a
uniq xs = find (\x -> count cs x == 1) xs
  where
    cs = foldl' add Map.empty xs
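On the list from the question, this should give:
ghci> uniq [3,5,6,8,3,9,3,5,6,9,3,5,6,9,1,5,6,8,9,5,6,8,9]
Just 1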
Note that the log n factor comes from the fact that we need to operate on a Map of size n. If the list has only k unique elements then the size of our map will be at most k, so the overall complexity will be just O(n log k).
However, we can do even better - we can use a hash table instead of a map to get an O(n) solution. For this we'll need the ST monad to perform mutable operations on the hash map, and our elements will have to be Hashable. The solution is basically the same as before, just a little bit more complex due to working within the ST monad:
import Control.Monad
import Control.Monad.ST
import Data.Hashable
import qualified Data.HashTable.ST.Basic as HT
import Data.Maybe (fromMaybe)
count :: (Eq a, Hashable a) => HT.HashTable s a Int -> a -> ST s Int
count ht x = liftM (fromMaybe 0) (HT.lookup ht x)

add :: (Eq a, Hashable a) => HT.HashTable s a Int -> a -> ST s ()
add ht x = count ht x >>= HT.insert ht x . (+ 1)

uniq :: (Eq a, Hashable a) => [a] -> Maybe a
uniq xs = runST $ do
  -- Count all elements into a hash table:
  ht <- HT.newSized (length xs)
  forM_ xs (add ht)
  -- Find the first one with count 1
  first (\x -> liftM (== 1) (count ht x)) xs

-- Monadic variant of find which exits once an element is found.
first :: (Monad m) => (a -> m Bool) -> [a] -> m (Maybe a)
first p = f
  where
    f [] = return Nothing
    f (x:xs') = do
      b <- p x
      if b then return (Just x)
           else f xs'
Notes:
If you know that there will be only a small number of distinct elements in the list, you could use HT.new instead of HT.newSized (length xs). This will save you some memory and one pass over xs, but in the case of many distinct elements the hash table will have to be resized several times.
Here is a version that does the trick:
unique :: Eq a => [a] -> [a]
unique = select . collect []
  where
    collect acc [] = acc
    collect acc (x : xs) = collect (insert x acc) xs

    insert x [] = [[x]]
    insert x (ys@(y : _) : yss)
      | x == y    = (x : ys) : yss
      | otherwise = ys : insert x yss

    select [] = []
    select ([x] : _) = [x]
    select ((_ : _) : xss) = select xss
So, first we traverse the input list (collect) while maintaining a list of buckets of equal elements that we update with insert. Then we simply select the first element that appears in a singleton bucket (select).
The bad news is that this takes quadratic time: for every visited element in collect we need to go over the list of buckets. I am afraid that is the price you will have to pay for only being able to constrain the element type to be in Eq.
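On the example from the question, this should agree with the original:
ghci> unique [3,5,6,8,3,9,3,5,6,9,3,5,6,9,1,5,6,8,9,5,6,8,9]
[1]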
Something like this looks pretty good.
unique = fst . foldl' (\(a, b) c -> if c `elem` b
                                      then (a, b)
                                      else if c `elem` a
                                             then (delete c a, c:b)
                                             else (c:a, b)) ([],[])
The first element of the resulting tuple of the fold contains what you are expecting: a list of the unique elements. The second element of the tuple is the memory of the process, remembering whether an element has already been discarded.
About space performance.
By the design of the problem, every element of the list must be traversed at least once before a result can be produced. The algorithm must keep track of discarded values in addition to the good ones, but each discarded value appears only once. In the worst case the required memory is therefore proportional to the size of the input list, which sounds fine since you said the expected inputs are small.
About time performance.
Since the expected inputs are small and not sorted by default, sorting the list inside the algorithm (or before applying it) is useless. In fact, statistically speaking, the extra work of placing an element at its ordered position (in the sublists a and b of the tuple (a,b)) would cost about as much as checking whether the element appears in the list or not.
Below is a nicer and more explicit version of the foldl' one.
import Data.List (foldl', delete, elem)

unique :: Eq a => [a] -> [a]
unique = fst . foldl' algorithm ([], [])
  where
    algorithm (result0, memory0) current =
      if current `elem` memory0
        then (result0, memory0)
        else if current `elem` result0
               then (delete current result0, memory)
               else (result, memory0)
      where
        result = current : result0
        memory = current : memory0
In the nested if ... then ... else ... instruction, the list result is traversed twice in the worst case; this can be avoided using the following helper function.
unique' :: Eq a => [a] -> [a]
unique' = fst . foldl' algorithm ([], [])
  where
    algorithm (result, memory) current =
      if current `elem` memory
        then (result, memory)
        else helper current result memory []
      where
        -- Scan 'result' once: if 'current' is already there, drop it and
        -- remember it in 'memory'; otherwise keep 'current' as a candidate.
        helper current [] memory acc = (current : acc, memory)
        helper current (r:rs) memory acc
          | current == r = (acc ++ rs, current : memory)
          | otherwise    = helper current rs memory (r : acc)
But the helper can be rewritten using a fold, as follows, which is arguably nicer.
helper current result memory = finish (foldl' step ([], memory, False) result)
  where
    step (r, m, found) x
      | x == current = (r, current : m, True)  -- second occurrence: drop and remember it
      | otherwise    = (x : r, m, found)
    finish (r, m, True)  = (r, m)
    finish (r, m, False) = (current : r, m)    -- 'current' was new: keep it

Two simple codes to generate divisors of a number. Why is the recursive one faster?

While solving a problem, I had to calculate the divisors of a number. I have two implementations that produce all divisors > 1 for a given number.
The first is using simple recursion:
import Data.Int (Int64)

divisors :: Int64 -> [Int64]
divisors k = divisors' 2 k
  where
    divisors' n k
      | n*n > k        = [k]
      | n*n == k       = [n, k]
      | k `mod` n == 0 = n : (k `div` n) : result
      | otherwise      = result
      where result = divisors' (n+1) k
The second one uses list processing functions from the Prelude:
divisors2 :: Int64 -> [Int64]
divisors2 k = k : (concatMap (\x -> [x, k `div` x]) $!
                   filter (\x -> k `mod` x == 0) $!
                   takeWhile (\x -> x*x <= k) [2..])
I find that the first implementation is faster (I printed the whole list returned, so that no part of the result remains unevaluated due to laziness). The two implementations produce differently ordered divisors, but that is not a problem for me. (In fact, if k is a perfect square, the square root is output twice in the second implementation - again not a problem).
In general are such recursive implementations faster in Haskell? Also, I would appreciate any pointers to make either of these codes faster. Thanks!
EDIT:
Here is the code I am using to compare these two implementations for performance: https://gist.github.com/3414372
Here are my timing measurements:
Using divisors2 with strict evaluation ($!):
$ ghc --make -O2 div.hs
[1 of 1] Compiling Main ( div.hs, div.o )
Linking div ...
$ time ./div > /tmp/out1
real 0m7.651s
user 0m7.604s
sys 0m0.012s
Using divisors2 with lazy evaluation ($):
$ ghc --make -O2 div.hs
[1 of 1] Compiling Main ( div.hs, div.o )
Linking div ...
$ time ./div > /tmp/out1
real 0m7.461s
user 0m7.444s
sys 0m0.012s
Using function divisors
$ ghc --make -O2 div.hs
[1 of 1] Compiling Main ( div.hs, div.o )
Linking div ...
$ time ./div > /tmp/out1
real 0m7.058s
user 0m7.036s
sys 0m0.020s
The recursive version is not in general faster than the list-based version. This is because GHC employs list fusion optimizations when a computation follows a certain pattern; list generators and "list transformers" can then be fused into one big generator instead.
However, when you use $!, you basically tell the compiler "please produce the first cons of this list before performing the next step." This forces GHC to compute at least one intermediate list element, which disables the fusion optimization entirely.
So the second algorithm is slower because it produces intermediate lists that have to be constructed and destructed, while the recursive algorithm simply produces a single list straight away.
Since you asked: to make it faster, a different algorithm should be used. A simple and straightforward approach is to find the prime factorization first, then construct the divisors from it.
Standard prime factorization by trial division is:
factorize :: Integral a => a -> [a]
factorize n = go n (2:[3,5..])   -- or: `go n primes`
  where
    go n ds@(d:t)
      | d*d > n   = [n]
      | r == 0    = d : go q ds
      | otherwise = go n t
      where (q,r) = quotRem n d
-- factorize 12348 ==> [2,2,3,3,7,7,7]
Equal prime factors can be grouped and counted:
import Data.List (group)
primePowers :: Integral a => a -> [(a, Int)]
primePowers n = [(head x, length x) | x <- group $ factorize n]
-- primePowers = map (head &&& length) . group . factorize
-- primePowers 12348 ==> [(2,2),(3,2),(7,3)]
Divisors are usually constructed, though out of order, with:
divisors :: Integral a => a -> [a]
divisors n = map product $ sequence
[take (k+1) $ iterate (p*) 1 | (p,k) <- primePowers n]
Hence, we have
numDivisors :: Integral a => a -> Int
numDivisors n = product [ k+1 | (_,k) <- primePowers n]
The product here comes from the sequence in the definition above it: sequence :: Monad m => [m a] -> m [a] for the list monad (m ~ []) constructs lists of all possible combinations of elements, picked one from each member list, as in sequence_lists = foldr (\xs rs -> [x:r | x <- xs, r <- rs]) [[]]. Hence length . sequence_lists === product . map length, and of course length . take n === n for infinite argument lists.
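For example, for 12348 = 2^2 * 3^2 * 7^3 the lists fed to sequence are [1,2,4], [1,3,9] and [1,7,49,343], so:
-- numDivisors 12348 ==> (2+1)*(2+1)*(3+1) == 36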
In-order generation is possible, too:
ordDivisors :: Integral a => a -> [a]
ordDivisors n = foldr (\(p,k) -> foldi merge [] . take (k+1) . iterate (map (p*)))
                      [1] $ reverse $ primePowers n

foldi :: (a -> a -> a) -> a -> [a] -> a
foldi f z (x:xs) = f x (foldi f z (pairs xs))
  where pairs (x:y:xs) = f x y : pairs xs
        pairs xs       = xs
foldi f z [] = z

merge :: Ord a => [a] -> [a] -> [a]
merge (x:xs) (y:ys) = case compare y x of
  LT -> y : merge (x:xs) ys
  _  -> x : merge xs (y:ys)
merge xs [] = xs
merge [] ys = ys
{- ordDivisors 12348 ==>
[1,2,3,4,6,7,9,12,14,18,21,28,36,42,49,63,84,98,126,147,196,252,294,343,441,588,
686,882,1029,1372,1764,2058,3087,4116,6174,12348] -}
This definition is productive, too, i.e. it starts producing the divisors right away, without noticeable delay:
{- take 20 $ ordDivisors $ product $ concat $ replicate 5 $ take 11 primes
==> [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
(0.00 secs, 525068 bytes)
numDivisors $ product $ concat $ replicate 5 $ take 11 primes
==> 362797056 -}

Most efficient way to create the Data.Set of all pairs of elements in a Set?

Given an arbitrary set holding an arbitrary number of elements of arbitrary type, e.g.
mySet1 = Set.fromList [1,2,3,4]
or
mySet2 = Set.fromList ["a","b","c","d"]
or
mySet3 = Set.fromList [A, B, C, D]
for some data constructors A, B, C, D, ...
What is the computationally most efficient way to generate the set of all unordered pairs of elements in the given set? I.e.
setPairs mySet1 == Set.fromList [(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)]
or
setPairs mySet2 == fromList [ ("a","b")
, ("a","c")
, ("a","d")
, ("b","c")
, ("b","d")
, ("c","d") ]
or
setPairs mySet3 == fromList [ (A,B)
, (A,C)
, (A,D)
, (B,C)
, (B,D)
, (C,D) ]
My initial naive guess would be:
setPairs s = fst $ Set.fold
  (\e (pairAcc, elementsLeft) ->
    ( Set.fold
        (\e2 pairAcc2 ->
          Set.insert (e2, e) pairAcc2
        ) pairAcc $ Set.delete e elementsLeft
    , Set.delete e elementsLeft )
  ) (Set.empty, s) s
but surely that cannot be the best solution?
Benchmarking might prove me wrong, but my suspicion is that there's no win in staying in the set representation. You're going to need O(n^2) regardless, because that's the size of the output. The key advantage would be producing your list in ascending order, so that a call to S.fromDistinctAscList only costs O(n) to build the set itself.
The following is pretty clean, preserves a fair amount of sharing, and is generally the simplest, most straightforward and intuitive solution I can imagine.
import Data.List (tails)
import qualified Data.Set as S

pairs s = S.fromDistinctAscList . concat $ zipWith zip (map (cycle . take 1) ts) (drop 1 ts)
  where ts = tails $ S.toList s
Edit
Shorter/clearer (not sure performancewise, but probably as good/better):
pairs s = S.fromDistinctAscList [(x,y) | (x:xt) <- tails (S.toList s), y <- xt]
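Trying it on the set from the question:
> pairs (S.fromList [1,2,3,4])
fromList [(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)]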
First, you need to generate all length-2 lists of elements; replicateM from Control.Monad helps with that.
λ> replicateM 2 [1..4]
[[1,1],[1,2],[1,3],[1,4],[2,1],[2,2],[2,3],[2,4],[3,1],[3,2],[3,3],[3,4],[4,1],[4,2],[4,3],[4,4]]
Then you need to keep only the pairs where the second element is greater than the first:
λ> filter (\[x,y] -> x < y) $ replicateM 2 [1 .. 4]
[[1,2],[1,3],[1,4],[2,3],[2,4],[3,4]]
Finally, you need to convert every list into a tuple:
λ> map (\[x,y] -> (x,y)) $ filter (\[x,y] -> x < y) $ replicateM 2 [1 .. 4]
[(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)]
Then we can formulate it into function pairs:
import Data.Set
import Control.Monad
import Data.List
mySet = Data.Set.fromList [1,2,3,4]
--setOfPairs = Data.Set.fromList [(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)]
setOfPairs = Data.Set.fromList $ pairs mySet
pairs :: Ord a => Set a -> [(a,a)]
pairs x = Data.List.map (\[x,y] -> (x,y)) $ Data.List.filter (\[x,y] -> x < y) $ replicateM 2 $ toList x
So, if I got your question right, you can use pairs mySet, where pairs generates the list of all unordered pairs of mySet.
Is that what you want?
UPD:
A list comprehension may be a clearer and faster way to create such sublists, so here is another version of pairs:
pairs :: Ord a => Set a -> [(a,a)]
pairs set = [(x,y) | let list = toList set, x <- list, y <- list, x < y]
So here is a first stab at a solution using conversion back and forth to a list. Again, I am not sure this is the fastest way to do this, but I do know that iteration over sets is not terribly efficient.
import Data.List
import qualified Data.Set as S

pairs :: S.Set String -> S.Set (String,String)
pairs s = S.fromList $ foldl' (\st e -> zip l e ++ st) [] ls
  where (l:ls) = tails $ S.toList s
By folding zip over the tails, you get a nice and efficient way to create the set of unordered pairs. However, my instinct tells me there may be a monadic filterM or foldM solution that's even more elegant. I will keep thinking.
[EDIT]
So here is what should be [but is not on account of the size of the powerset] a faster solution that does not require a toList.
import Data.List
import qualified Data.Set as S
import qualified Data.Foldable as F
pairs :: (Ord a) => S.Set a -> S.Set (a,a)
pairs s = S.fromList $ foldl two [] $ F.foldlM (\st e -> [[e]++st, st]) [] s
  where two st (x:xa:[]) = (x,xa) : st
        two st _         = st
This uses the power-set solution over monadic lists to build the power set and then filters out the pairs. I can go into more detail if necessary.
