Haskell: FIFO queue algorithm complexity

This is my attempt at a FIFO queue:
type Queue a = [a] -> [a]
empty :: Queue a
empty = id
remove :: Int -> Queue a -> ([a], Queue a)
remove n queue = (take n (queue []), \x -> drop n (queue x))
add :: [a] -> Queue a -> Queue a
add elems queue = (\x -> queue (elems ++ x))
empty creates an empty queue, remove takes the first n elements of the queue and returns the rest of the queue as the second element of the tuple, and add adds the list elems to the queue.
Will this add/remove 1 element in O(1) time and n elements in O(n) time?

What you have implemented effectively amounts to difference lists. (See: dlist.)
Difference lists allow for cheap appends, but unfortunately your removal will take linear time. It becomes more clear if we rewrite your code slightly:
type Queue a = [a] -> [a]
empty :: Queue a
empty = id
toList :: Queue a -> [a]
toList q = q []
fromList :: [a] -> Queue a
fromList = (++)
remove :: Int -> Queue a -> ([a], Queue a)
remove n q = (xs, fromList ys)
  where
    (xs, ys) = splitAt n (toList q)
add :: [a] -> Queue a -> Queue a
add xs q = (++ xs) . q
Note that I have made the conversion to and from lists a bit more explicit than it was in your code. You clearly see that the core of your removal code gets bracketed between toList and fromList.
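For example, a quick GHCi check of these definitions (my own; note that Queue a is a function, so we display the remainder with toList):
ghci> let (front, rest) = remove 2 (add [3] (add [1,2] empty))
ghci> front
[1,2]
ghci> toList rest
[3]
Every remove walks the entire spine produced by toList q, which is where the linear cost comes from.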

Well, sidestepping your question somewhat, the classic purely functional implementation of a FIFO queue is as a pair of lists, one for the "front" and one for the "back." You enqueue elements by adding them as the head of the back list, and dequeue by taking the head of the front list; if the front list is empty, you "rotate" the queue by reversing the back list and swapping that with the empty front list. In code:
import Control.Monad
import Data.List
import Data.Maybe
data FIFO a = FIFO [a] [a]
    deriving Show
empty :: FIFO a
empty = FIFO [] []
isEmpty :: FIFO a -> Bool
isEmpty (FIFO [] []) = True
isEmpty _ = False
enqueue :: a -> FIFO a -> FIFO a
enqueue x (FIFO front back) = FIFO front (x:back)
-- | Remove the head off the queue. My type's different from yours
-- because I use Maybe to handle the case where somebody tries to
-- dequeue off an empty FIFO.
dequeue :: FIFO a -> Maybe (a, FIFO a)
dequeue queue = case queue of
    FIFO [] [] -> Nothing
    FIFO (x:f) b -> Just (x, FIFO f b)
    _ -> dequeue (rotate queue)
  where rotate (FIFO [] back) = FIFO (reverse back) []
-- | Elements exit the queue in the order they appear in the list.
fromList :: [a] -> FIFO a
fromList xs = FIFO xs []
-- | Elements appear in the result list in the order they exit the queue.
toList :: FIFO a -> [a]
toList = unfoldr dequeue
That's the classic implementation. Now your operations can be written in terms of that:
-- | Enqueue multiple elements. Elements exit the queue in the order
-- they appear in xs.
add :: [a] -> FIFO a -> FIFO a
add xs q = foldl' (flip enqueue) q xs
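A quick GHCi check (mine) that elements exit in the order they appear in xs, and that the back list holds them reversed until the first rotation:
ghci> add [1,2,3] empty
FIFO [] [3,2,1]
ghci> toList (add [1,2,3] empty)
[1,2,3]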
To write remove in terms of dequeue, you need to handle all of those intermediate FIFOs from the (a, FIFO a) result of dequeue. One way to do that is to use the State monad:
import Control.Monad.State
-- | Remove n elements from the queue. My result type is different
-- from yours, again, because I handle the empty FIFO case. If you
-- try to remove too many elements, you get a bunch of Nothings at
-- the end of your list.
remove :: Int -> FIFO a -> ([Maybe a], FIFO a)
remove n q = runState (removeM n) q
-- | State monad action to dequeue n elements from the state queue.
removeM :: Int -> State (FIFO a) [Maybe a]
removeM n = replicateM n dequeueM
-- | State monad action to dequeue an element from the state queue.
dequeueM :: State (FIFO a) (Maybe a)
dequeueM = do
    q <- get
    case dequeue q of
        Just (x, q') -> put q' >> return (Just x)
        Nothing -> return Nothing
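For instance (my own check; the second call shows the Nothing padding when the queue runs dry):
ghci> remove 2 (fromList [1,2,3])
([Just 1,Just 2],FIFO [3] [])
ghci> remove 2 (fromList [1])
([Just 1,Nothing],FIFO [] [])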

I was looking for a FIFO queue that's faster than taking a list and reversing it. Stefan's add isn't performant (O(n)), so here's what's worked for me after benchmarking:
add :: a -> Queue a -> Queue a
add x f = f . (x:)

Related

Haskell: Sort an almost-sorted array

I've been learning Haskell in my spare time, working through LYAH. I'd like to improve my Haskell (and functional programming) skills by solving some problems from the imperative world. One of the problems from EPI is to print an "almost sorted array" in sorted fashion, where it is guaranteed that no element in the array is more than k away from its correct position. The input is a stream of elements, and the requirement is to do this in O(n log k) time complexity and O(k) space complexity.
I've attempted to re-implement the imperative solution in Haskell as follows:
import qualified Data.Heap as Heap
-- print the k-sorted list in a sorted fashion
ksorted :: (Ord a, Show a) => [a] -> Int -> IO ()
ksorted [] _ = return ()
ksorted xs k = do
    heap <- ksorted' xs Heap.empty
    mapM_ print (Heap.toAscList heap) -- print the remaining elements in the heap.
  where
    ksorted' :: (Ord a, Show a) => [a] -> Heap.MinHeap a -> IO (Heap.MinHeap a)
    ksorted' [] h = return h
    ksorted' (x:xs) h =
        let (m, h') = getMinAndBuildHeap h x
        in printMin m >> ksorted' xs h'

    printMin :: (Show a) => Maybe a -> IO ()
    printMin m = case m of
        Nothing -> return ()
        Just item -> print item

    getMinAndBuildHeap :: (Ord a, Show a) => Heap.MinHeap a -> a -> (Maybe a, Heap.MinHeap a)
    getMinAndBuildHeap h item =
        if Heap.size h > k
            then (Heap.viewHead h, Heap.insert item (Heap.drop 1 h))
            else (Nothing, Heap.insert item h)
I would like to know a better way of solving this in Haskell. Any inputs would be appreciated.
[Edit 1]: The input is a stream, but for now I assumed a list instead (with only a forward iterator/input iterator, in some sense).
[Edit 2]: added Data.Heap import to the code.
Thanks.
I think the main improvement is to separate the production of the sorted list from the printing of the sorted list. So:
import Data.Heap (MinHeap)
import qualified Data.Heap as Heap
ksort :: Ord a => Int -> [a] -> [a]
ksort k xs = go (Heap.fromList b) e where
    (b, e) = splitAt (k-1) xs
    go :: Ord a => MinHeap a -> [a] -> [a]
    go heap [] = Heap.toAscList heap
    go heap (x:xs) = x' : go heap' xs where
        Just (x', heap') = Heap.view (Heap.insert x heap)
printKSorted :: (Ord a, Show a) => Int -> [a] -> IO ()
printKSorted k xs = mapM_ print (ksort k xs)
If I were feeling extra-special-fancy, I might try to turn go into a foldr or perhaps a mapAccumR, but in this case I think the explicit recursion is relatively readable, too.
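For the curious, here is one way that foldr version might look (my own sketch, not from the answer; ksort' is a hypothetical name). The trick is to make the accumulator a function awaiting the heap, which leaves the structure of go unchanged:
ksort' :: Ord a => Int -> [a] -> [a]
ksort' k xs = foldr step Heap.toAscList e (Heap.fromList b) where
    (b, e) = splitAt (k-1) xs
    -- each element pushes into the heap and pops the minimum;
    -- the base case drains whatever heap it finally receives
    step x rest heap = x' : rest heap'
      where Just (x', heap') = Heap.view (Heap.insert x heap)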

Breaking after finding the kth element of an inorder traversal using a higher order traversal function

I have the following code to do an inorder traversal of a Binary Tree:
data BinaryTree a =
    Node a (BinaryTree a) (BinaryTree a)
  | Leaf
  deriving (Show)
inorder :: (a -> b -> b) -> b -> BinaryTree a -> b
inorder f acc tree = go tree acc
  where go Leaf z = z
        go (Node v l r) z = (go r . f v . go l) z
Using the inorder function above, I'd like to get the kth element without having to traverse the entire tree.
The traversal is a little like a fold given that you pass it a function and a starting value. I was thinking that I could solve it by passing k as the starting value, and a function that'll decrement k until it reaches 0 and at that point returns the value inside the current node.
The problem I have is that I'm not quite sure how to break out of the recursion of inorder traversal short of modifying the whole function, but I feel like having to modify the higher order function ruins the point of using a higher order function in the first place.
Is there a way to break after k iterations?
I observe that the results of the recursive call to go on the left and right subtrees are not available to f; hence no matter what f does, it cannot choose to ignore the results of recursive calls. Therefore I believe that inorder as written will always walk over the entire tree. (edit: On review, this statement may be a bit strong; it seems f may have a chance to ignore left subtrees. But the point basically stands; there is no reason to elevate left subtrees over right subtrees in this way.)
A better choice is to give the recursive calls to f. For example:
anyOldOrder :: (a -> b -> b -> b) -> b -> BinaryTree a -> b
anyOldOrder f z = go where
    go Leaf = z
    go (Node v l r) = f v (go l) (go r)
Now when we write
flatten = anyOldOrder (\v ls rs -> ls ++ [v] ++ rs) []
we will find that flatten is sufficiently lazy:
> take 3 (flatten (Node 'c' (Node 'b' (Node 'a' Leaf Leaf) Leaf) undefined))
"abc"
(The undefined is used to provide evidence that this part of the tree is never inspected during the traversal.) Hence we may write
findK k = take 1 . reverse . take k . flatten
which will correctly short-circuit. You can make flatten slightly more efficient with the standard difference list technique:
flatten' t = anyOldOrder (\v l r -> l . (v:) . r) id t []
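It is just as lazy as the (++) version (my own check):
ghci> take 3 (flatten' (Node 'c' (Node 'b' (Node 'a' Leaf Leaf) Leaf) undefined))
"abc"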
Just for fun, I also want to show how to implement this function without using an accumulator list. Instead, we will produce a stateful computation which walks over the "interesting" part of the tree, stopping when it reaches the kth element. The stateful computation looks like this:
import Control.Applicative
import Control.Monad.State
import Control.Monad.Trans.Maybe
kthElem k v l r = l <|> do
    i <- get
    if i == k
        then return v
        else put (i+1) >> r
Looks pretty simple, hey? Now our findK function will farm out to kthElem, then do some newtype unwrapping:
findK' k = (`evalState` 1) . runMaybeT . anyOldOrder (kthElem k) empty
We can verify that it is still as lazy as desired:
> findK' 3 $ Node 'c' (Node 'b' (Node 'a' Leaf Leaf) Leaf) undefined
Just 'c'
There are (at least?) two important generalizations of the notion of folding a list. The first, more powerful, notion is that of a catamorphism. The anyOldOrder of Daniel Wagner's answer follows this pattern.
But for your particular problem, the catamorphism notion offers a bit more power than you need. The second, weaker, notion is that of a Foldable container. Foldable expresses the idea of a container whose elements can all be mashed together using the operation of an arbitrary Monoid. Here's a cute trick:
{-# LANGUAGE DeriveFoldable #-}

import Data.Foldable (toList)

-- Note that for this trick only I've
-- switched the order of the Node fields.
data BinaryTree a =
    Node (BinaryTree a) a (BinaryTree a)
  | Leaf
  deriving (Show, Foldable)
index :: [a] -> Int -> Maybe a
[] `index` _ = Nothing
(x : _) `index` 0 = Just x
(_ : xs) `index` i = xs `index` (i - 1)
(!?) :: Foldable f => f a -> Int -> Maybe a
xs !? i = toList xs `index` i
Then you can just use !? to index into your tree!
That trick is cute, and in fact deriving Foldable is a tremendous convenience, but it won't help you understand anything. I'll start by showing how you can define treeToList fairly directly and efficiently, without using Foldable.
treeToList :: BinaryTree a -> [a]
treeToList t = treeToListThen t []
The magic is in the treeToListThen function. treeToListThen t more converts t to a list and appends the list more to the end of the result. This slight generalization turns out to be all that's required to make conversion to a list efficient.
treeToListThen :: BinaryTree a -> [a] -> [a]
treeToListThen Leaf more = more
treeToListThen (Node v l r) more =
    treeToListThen l $ v : treeToListThen r more
Instead of producing an inorder traversal of the left subtree and then appending everything else, we tell the left traversal what to stick on the end when it's done! This avoids the potentially serious inefficiency of repeated list concatenation that can turn things O(n^2) in bad cases.
Getting back to the Foldable notion, turning things into lists is a special case of foldr:
toList = foldr (:) []
So how can we implement foldr for trees? It ends up being somewhat similar to what we did with toList:
foldrTree :: (a -> b -> b) -> b -> BinaryTree a -> b
foldrTree _ n Leaf = n
foldrTree c n (Node v l r) = foldrTree c rest l
  where
    rest = v `c` foldrTree c n r
That is, when we go down the left side, we tell it that when it's done, it should deal with the current node and its right child.
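As a quick sanity check (my addition, with a hypothetical name), treeToList is recovered by instantiating c with (:) and n with []:
treeToList' :: BinaryTree a -> [a]
treeToList' = foldrTree (:) []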
Now foldr isn't quite the most fundamental operation of Foldable; that is actually
foldMap :: (Foldable f, Monoid m)
=> (a -> m) -> f a -> m
It is possible to implement foldr using foldMap, in a somewhat tricky fashion using a peculiar Monoid. I don't want to overload you with details of that right now, unless you ask (but you should look at the default definition of foldr in Data.Foldable). Instead, I'll show how foldMap can be defined using Daniel Wagner's anyOldOrder:
instance Foldable BinaryTree where
    foldMap f = anyOldOrder bin mempty where
        bin lres v rres = lres <> f v <> rres

Generalizing a combinatoric function?

I've been solving a few combinatoric problems on Haskell, so I wrote down those 2 functions:
permutations :: (Eq a) => [a] -> [[a]]
permutations [] = [[]]
permutations list = do
    x <- list
    xs <- permutations (filter (/= x) list)
    return (x : xs)

combinations :: (Eq a, Ord a) => Int -> [a] -> [[a]]
combinations 0 _ = [[]]
combinations n list = do
    x <- list
    xs <- combinations (n-1) (filter (> x) list)
    return (x : xs)
Which works as follows:
*Main> permutations [1,2,3]
[[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]
*Main> combinations 2 [1,2,3,4]
[[1,2],[1,3],[1,4],[2,3],[2,4],[3,4]]
Those were uncomfortably similar, so I had to abstract it. I wrote the following abstraction:
combinatoric next [] = [[]]
combinatoric next list = do
    x <- list
    xs <- combinatoric next (next x list)
    return (x : xs)
Which receives a function that controls how to filter the elements of the list. It can be used to easily define permutations:
permutations :: (Eq a) => [a] -> [[a]]
permutations = combinatoric (\ x ls -> filter (/= x) ls)
But I couldn't define combinations this way, since it carries state (the counter n). I could extend combinatoric with an additional state argument, but that would become clunky, and I remember that such an approach was not necessary in a somewhat similar situation. Thus, I wonder: is it possible to define combinations using combinatoric? If not, what is a better abstraction that successfully subsumes both functions?
This isn't a direct answer to your question (sorry), but I don't think your code is correct. The Eq and Ord constraints tipped me off - they shouldn't be necessary - so I wrote a couple of QuickCheck properties.
import Test.QuickCheck

prop_numberOfPermutations xs = length (permutations xs) === factorial (length xs)
  where _ = (xs :: [Int]) -- force xs to be instantiated to [Int]

prop_numberOfCombinations (Positive n) (NonEmpty xs) = n <= length xs ==>
    length (combinations n xs) === choose (length xs) n
  where _ = (xs :: [Int])
factorial :: Int -> Int
factorial x = foldr (*) 1 [1..x]
choose :: Int -> Int -> Int
choose n 0 = 1
choose 0 r = 0
choose n r = choose (n-1) (r-1) * n `div` r
The first property checks that the number of permutations of a list of length n is n!. The second checks that the number of r-combinations of a list of length n is C(n, r). Both of these properties fail when I run them against your definitions:
ghci> quickCheck prop_numberOfPermutations
*** Failed! Falsifiable (after 5 tests and 4 shrinks):
[0,0,0]
3 /= 6
ghci> quickCheck prop_numberOfCombinations
*** Failed! Falsifiable (after 4 tests and 1 shrink):
Positive {getPositive = 2}
NonEmpty {getNonEmpty = [3,3]}
0 /= 1
It looks like your functions fail when the input list contains duplicate elements. Writing an abstraction for an incorrect implementation isn't a good idea - don't try and run before you can walk! You might find it helpful to read the source code for the standard library's definition of permutations, which does not have an Eq constraint.
First let's improve the original functions. You assume that all elements are distinct wrt their equality for permutations, and that they're distinct and have an ordering for combinations. These constraints aren't necessary and as described in the other answer, the code can produce wrong results. Following the robustness principle, let's accept just unconstrained lists. For this we'll need a helper function that produces all possible splits of a list:
split :: [a] -> [([a], a, [a])]
split = loop []
  where
    loop _ [] = []
    loop rs (x:xs) = (rs, x, xs) : loop (x:rs) xs
Note that the implementation causes prefixes returned by this function to be reversed, but it's nothing we require.
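For instance, showing the reversed prefixes:
ghci> split [1,2,3]
[([],1,[2,3]),([1],2,[3]),([2,1],3,[])]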
This allows us to write generic permutations and combinations.
permutations :: [a] -> [[a]]
permutations [] = [[]]
permutations list = do
    (pre, x, post) <- split list
    -- reversing 'pre' isn't really necessary, but makes the output
    -- order natural
    xs <- permutations (reverse pre ++ post)
    return (x : xs)

combinations :: Int -> [a] -> [[a]]
combinations 0 _ = [[]]
combinations n list = do
    (_, x, post) <- split list
    xs <- combinations (n-1) post
    return (x : xs)
Now what they have in common:
At each step they pick an element to output,
update the list of elements to pick from and
stop after some condition is met.
The last point is a bit problematic, as for permutations we end once the list to choose from is empty, while for combinations we have a counter. This is probably the reason why it was difficult to generalize. We can work around this by realizing that for permutations the number of steps is equal to the length of the input list, so we can express the condition in the number of repetitions.
For such problems it's often very convenient to express them using the StateT s [] monad, where s is the state we're working with. In our case it'll be the list of elements to choose from. The core of our combinatorial functions can then be expressed with StateT [a] [] a: pick an element from the state and update the state for the next step. Since the stateful computations all happen in the [] monad, we automatically branch over all possibilities. With that, we can define a generic function:
import Control.Monad.State
combinatoric :: Int -> StateT [a] [] b -> [a] -> [[b]]
combinatoric n k = evalStateT $ replicateM n k
And then define permutations and combinations by specifying the appropriate number of repetitions and the core StateT [a] [] a function:
permutations' :: [a] -> [[a]]
permutations' xs = combinatoric (length xs) f xs
  where
    f = StateT $ map (\(pre, x, post) -> (x, reverse pre ++ post)) . split

combinations' :: Int -> [a] -> [[a]]
combinations' n xs = combinatoric n f xs
  where
    f = StateT $ map (\(_, x, post) -> (x, post)) . split
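A quick GHCi check (mine) that these agree with the direct definitions above:
ghci> permutations' [1,2,3]
[[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]
ghci> combinations' 2 [1,2,3,4]
[[1,2],[1,3],[1,4],[2,3],[2,4],[3,4]]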

Finding unique (as in only occurring once) element haskell

I need a function which takes a list and returns the unique element if one exists, or [] if it doesn't. If many unique elements exist, it should return the first one (without wasting time finding the others).
Additionally I know that all elements in the list come from (small and known) set A.
For example this function does the job for Ints:
import Data.List (group, sort)

unique :: Ord a => [a] -> [a]
unique li = first $ filter ((==1).length) ((group.sort) li)
  where first [] = []
        first (x:xs) = x
ghci> unique [3,5,6,8,3,9,3,5,6,9,3,5,6,9,1,5,6,8,9,5,6,8,9]
[1]
This is, however, not good enough, because it involves sorting (O(n log n)) while it could be done in linear time (because A is small).
Additionally, it requires the type of the list elements to be Ord, while all that should be needed is Eq. It would also be nice if the number of comparisons were as small as possible (i.e., if we traverse a list and encounter an element el twice, we don't test subsequent elements for equality with el).
This is why for example this: Counting unique elements in a list doesn't solve the problem - all answers involve either sorting or traversing the whole list to find count of all elements.
The question is: how to do it correctly and efficiently in Haskell ?
Okay, linear time, from a finite domain. The running time will be O((m + d) log d), where m is the size of the list and d is the size of the domain, which is linear when d is fixed. My plan is to use the elements of the set as the keys of a trie, with the counts as values, then look through the trie for elements with count 1.
import qualified Data.IntTrie as IntTrie
import Data.List (foldl')
import Control.Applicative
Count each of the elements. This traverses the list once, builds a trie with the results (O(m log d)), then returns a function which looks up the result in the trie (with running time O(log d)).
counts :: (Enum a) => [a] -> (a -> Int)
counts xs = IntTrie.apply (foldl' insert (pure 0) xs) . fromEnum
  where
    insert t x = IntTrie.modify' (fromEnum x) (+1) t
We use the Enum constraint to convert values of type a to integers in order to index them in the trie. An Enum instance is part of the witness of your assumption that a is a small, finite set (Bounded would be the other part, but see below).
And then look for ones that are unique.
uniques :: (Eq a, Enum a) => [a] -> [a] -> [a]
uniques dom xs = filter (\x -> cts x == 1) dom
  where
    cts = counts xs
This function takes as its first parameter an enumeration of the entire domain. We could have required a Bounded a constraint and used [minBound..maxBound] instead, which is semantically appealing to me since finite is essentially Enum+Bounded, but quite inflexible since now the domain needs to be known at compile time. So I would choose this slightly uglier but more flexible variant.
uniques traverses the domain once (lazily, so head . uniques dom will only traverse as far as it needs to in order to find the first unique element -- not in the list, but in dom), running for each element the lookup function, which we have established is O(log d); so the filter takes O(d log d), and building the table of counts takes O(m log d). Thus uniques runs in O((m + d) log d), which is linear when d is fixed. It will take at least Ω(m log d) to get any information out of it, because it has to traverse the whole list to build the table (you have to get all the way to the end of the list to see whether an element was repeated, so you can't do better than this).
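For example (my own check, using Char as the small domain):
ghci> uniques "abc" "abca"
"bc"
head (uniques "abc" "abca") then gives 'b', the first unique element in domain order.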
There really isn't any way to do this efficiently with just Eq. You'd need to use some much less efficient way to build the groups of equal elements, and you can't know that only one of a particular element exists without scanning the whole list.
Also, note that to avoid useless comparisons you'd need a way of checking to see if an element has been encountered before, and the only way to do that would be to have a list of elements known to have multiple occurrences, and the only way to check if the current element is in that list is... to compare it for equality with each.
If you want this to work faster than O(something really horrible) you need that Ord constraint.
Ok, based on the clarifications in comments, here's a quick and dirty example of what I think you're looking for:
import Data.List (delete)

unique [] _ _ = Nothing
unique _ [] [] = Nothing
unique _ (r:_) [] = Just r
unique candidates results (x:xs)
    | x `notElem` candidates = unique candidates results xs
    | x `elem` results = unique (delete x candidates) (delete x results) xs
    | otherwise = unique candidates (x:results) xs
The first argument is a list of candidates, which should initially be all possible elements. The second argument is the list of possible results, which should initially be empty. The third argument is the list to examine.
If it runs out of candidates, or reaches the end of the list with no results, it returns Nothing. If it reaches the end of the list with results, it returns the one at the front of the result list.
Otherwise, it examines the next input element: If it's not a candidate, it ignores it and continues. If it's in the result list we've seen it twice, so remove it from the result and candidate lists and continue. Otherwise, add it to the results and continue.
Unfortunately, this still has to scan the entire list for even a single result, since that's the only way to be sure it's actually unique.
First off, if your function is intended to return at most one element, you should almost certainly use Maybe a instead of [a] to return your result.
Second, at minimum, you have no choice but to traverse the entire list: you can't tell for sure if any given element is actually unique until you've looked at all the others.
If your elements are not Ordered, but can only be tested for Equality, you really have no better option than something like:
firstUnique (x:xs)
    | elem x xs = firstUnique (filter (/= x) xs)
    | otherwise = Just x
firstUnique [] = Nothing
Note that you don't need to filter out the duplicated elements if you don't want to -- the worst case is quadratic either way.
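For instance, on the question's example list:
ghci> firstUnique [3,5,6,8,3,9,3,5,6,9,3,5,6,9,1,5,6,8,9,5,6,8,9]
Just 1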
Edit:
The above misses the possibility of early exit due to the above-mentioned small/known set of possible elements. However, note that the worst case will still require traversing the entire list: all that is necessary is for at least one of these possible elements to be missing from the list...
However, an implementation that provides an early out in case of set exhaustion:
firstUnique = f [] [<small/known set of possible elements>] where
    f [] [] _ = Nothing -- early out
    f uniques noshows (x:xs)
        | elem x uniques = f (delete x uniques) noshows xs
        | elem x noshows = f (x:uniques) (delete x noshows) xs
        | otherwise = f uniques noshows xs
    f [] _ [] = Nothing
    f (u:_) _ [] = Just u
Note that if your list has elements which shouldn't be there (because they aren't in the small/known set), they will be pointedly ignored by the above code...
As others have said, without any additional constraints, you can't do this in less than quadratic time, because without knowing something about the elements, you can't keep them in some reasonable data structure.
If we are able to compare elements, an obvious O(n log n) solution is to compute the count of each element first and then find the first one with count equal to 1:
import Data.List (foldl', find)
import Data.Map (Map)
import qualified Data.Map as Map
import Data.Maybe (fromMaybe)
count :: (Ord a) => Map a Int -> a -> Int
count m x = fromMaybe 0 $ Map.lookup x m
add :: (Ord a) => Map a Int -> a -> Map a Int
add m x = Map.insertWith (+) x 1 m
uniq :: (Ord a) => [a] -> Maybe a
uniq xs = find (\x -> count cs x == 1) xs
  where
    cs = foldl' add Map.empty xs
Note that the log n factor comes from the fact that we need to operate on a Map of size n. If the list has only k unique elements then the size of our map will be at most k, so the overall complexity will be just O(n log k).
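A quick check against the question's example (mine):
ghci> uniq [3,5,6,8,3,9,3,5,6,9,3,5,6,9,1,5,6,8,9,5,6,8,9]
Just 1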
However, we can do even better - we can use a hash table instead of a map to get an O(n) solution. For this we'll need the ST monad to perform mutable operations on the hash map, and our elements will have to be Hashable. The solution is basically the same as before, just a little bit more complex due to working within the ST monad:
import Control.Monad
import Control.Monad.ST
import Data.Hashable
import qualified Data.HashTable.ST.Basic as HT
import Data.Maybe (fromMaybe)
count :: (Eq a, Hashable a) => HT.HashTable s a Int -> a -> ST s Int
count ht x = liftM (fromMaybe 0) (HT.lookup ht x)
add :: (Eq a, Hashable a) => HT.HashTable s a Int -> a -> ST s ()
add ht x = count ht x >>= HT.insert ht x . (+ 1)
uniq :: (Eq a, Hashable a) => [a] -> Maybe a
uniq xs = runST $ do
    -- Count all elements into a hash table:
    ht <- HT.newSized (length xs)
    forM_ xs (add ht)
    -- Find the first one with count 1
    first (\x -> liftM (== 1) (count ht x)) xs

-- Monadic variant of find which exits once an element is found.
first :: (Monad m) => (a -> m Bool) -> [a] -> m (Maybe a)
first p = f
  where
    f [] = return Nothing
    f (x:xs') = do
        b <- p x
        if b then return (Just x)
             else f xs'
Notes:
If you know that there will be only a small number of distinct elements in the list, you can use HT.new instead of HT.newSized (length xs). This will save you some memory and one pass over xs, but in the case of many distinct elements the hash table will have to be resized several times.
Here is a version that does the trick:
unique :: Eq a => [a] -> [a]
unique = select . collect []
  where
    collect acc [] = acc
    collect acc (x : xs) = collect (insert x acc) xs

    insert x [] = [[x]]
    insert x (ys@(y : _) : yss)
        | x == y = (x : ys) : yss
        | otherwise = ys : insert x yss

    select [] = []
    select ([x] : _) = [x]
    select ((_ : _) : xss) = select xss
So, first we traverse the input list (collect) while maintaining a list of buckets of equal elements that we update with insert. Then we simply select the first element that appears in a singleton bucket (select).
The bad news is that this takes quadratic time: for every visited element in collect we need to go over the list of buckets. I am afraid that is the price you will have to pay for only being able to constrain the element type to be in Eq.
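For what it's worth, it agrees with the question's example (my own check):
ghci> unique [3,5,6,8,3,9,3,5,6,9,3,5,6,9,1,5,6,8,9,5,6,8,9]
[1]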
Something like this looks pretty good:
unique = fst . foldl' (\(a, b) c -> if c `elem` b
                                      then (a, b)
                                      else if c `elem` a
                                             then (delete c a, c:b)
                                             else (c:a, b)) ([],[])
The first element of the resulting tuple of the fold contains what you are expecting: a list containing the unique element. The second element of the tuple is the memory of the process, remembering whether an element has already been discarded.
About space performance.
As your problem is designed, all the elements of the list must be traversed at least once before a result can be displayed. In addition, the internal algorithm must keep track of discarded values as well as the good ones, but each discarded value appears only once. So in the worst case the required amount of memory is equal to the size of the input list. This sounds good, as you said that the expected inputs are small.
About time performance.
As the expected inputs are small and not sorted by default, trying to sort the list inside the algorithm, or before applying it, is useless. In fact, statistically we can almost say that the extra operation of placing an element at its ordered position (in the sublists a and b of the tuple (a,b)) would cost the same amount of time as checking whether this element appears in the list or not.
Below is a nicer and more explicit version of the foldl' one.
import Data.List (foldl', delete, elem)

unique :: Eq a => [a] -> [a]
unique = fst . foldl' algorithm ([], [])
  where
    algorithm (result0, memory0) current =
        if current `elem` memory0
            then (result0, memory0)
            else if current `elem` result0
                then (delete current result0, memory)
                else (result, memory0)
      where
        result = current : result0
        memory = current : memory0
In the nested if ... then ... else ... expression, the list result is traversed twice in the worst case; this can be avoided using the following helper function.
unique' :: Eq a => [a] -> [a]
unique' = fst . foldl' algorithm ([], [])
  where
    algorithm (result, memory) current =
        if current `elem` memory
            then (result, memory)
            else helper current result memory []
      where
        -- current was not found in result: it is new, so add it
        helper current [] memory acc = (current : acc, memory)
        helper current (r:rs) memory acc
            | current == r = (acc ++ rs, current:memory)
            | otherwise = helper current rs memory (r:acc)
But the helper can be rewritten using a fold as follows, which is definitely nicer.
helper current result memory
    | found = (kept, current : memory)
    | otherwise = (current : kept, memory)
  where
    -- fold over result, dropping current if present and remembering
    -- (via the flag) whether it was found
    (kept, found) =
        foldl' (\(rs, f) x -> if x == current
                                then (rs, True)
                                else (x : rs, f)) ([], False) result

Functional O(1) append and O(n) iteration from first element list data structure

I'm looking for a functional data structure that supports the following operations:
Append, O(1)
In order iteration, O(n)
A normal functional linked list only supports O(n) append. While I could use a normal linked list and then reverse it, the reverse operation would be O(n), which (partially) negates the benefit of the O(1) cons operation.
You can use John Hughes's constant-time append lists, which seem nowadays to be called DList. The representation is a function from lists to lists: the empty list is the identity function; append is composition, and singleton is cons (partially applied). In this representation every enumeration will cost you n allocations, so that may not be so good.
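In Haskell the whole representation fits in a few lines (a minimal sketch with my own names, not the dlist package's API):
type DList a = [a] -> [a]

emptyD :: DList a
emptyD = id                -- the empty list is the identity function

singletonD :: a -> DList a
singletonD = (:)           -- singleton is cons, partially applied

appendD :: DList a -> DList a -> DList a
appendD = (.)              -- append is composition

toListD :: DList a -> [a]
toListD f = f []           -- enumeration pays for the n conses here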
The alternative is to make the same algebra as a data structure:
type 'a seq = Empty | Single of 'a | Append of 'a seq * 'a seq
Enumeration is a tree walk, which will either cost some stack space or will require some kind of zipper representation. Here's a tree walk that converts to list but uses stack space:
let to_list t =
  let rec walk t xs = match t with
    | Empty -> xs
    | Single x -> x :: xs
    | Append (t1, t2) -> walk t1 (walk t2 xs)
  in walk t []
Here's the same, but using constant stack space:
let to_list' t =
  let rec walk lefts t xs = match t with
    | Empty -> finish lefts xs
    | Single x -> finish lefts (x :: xs)
    | Append (t1, t2) -> walk (t1 :: lefts) t2 xs
  and finish lefts xs = match lefts with
    | [] -> xs
    | t :: ts -> walk ts t xs
  in walk [] t []
You can write a fold function that visits the same elements but doesn't actually reify the list; just replace cons and nil with something more general:
val fold : ('a * 'b -> 'b) -> 'b -> 'a seq -> 'b
let fold f z t =
  let rec walk lefts t xs = match t with
    | Empty -> finish lefts xs
    | Single x -> finish lefts (f (x, xs))
    | Append (t1, t2) -> walk (t1 :: lefts) t2 xs
  and finish lefts xs = match lefts with
    | [] -> xs
    | t :: ts -> walk ts t xs
  in walk [] t z
That's your linear-time, constant-stack enumeration. Have fun!
I believe you can just use a standard functional linked list:
To append an element, you can use cons (which is O(1)).
To iterate over the elements in the order in which they were inserted, you can first reverse the list (which is O(N)) and then traverse it, which is also O(N) (and 2×O(N) is still just O(N)).
How about a difference list?
type 'a DList = DList of ('a list -> 'a list)

module DList =
    let append (DList f) (DList g) = DList (f << g)
    let cons x (DList f) = DList (fun l -> x :: (f l))
    let snoc (DList f) x = DList (fun l -> f (x :: l))
    let empty = DList id
    let ofList = List.fold snoc empty
    let toList (DList f) = f []
You could create a functional deque, which provides O(1) adding to either end and O(N) iteration in either direction. Eric Lippert wrote about an interesting version of an immutable deque on his blog; note that if you look around you will find the other parts of the series, but that post is the explanation of the final product. Note also that with a bit of tweaking it can be modified to use F# discriminated unions and pattern matching (although that is up to you).
Another interesting property of this version is O(1) peek, removal, and add from either end (i.e., alternating dequeueLeft, dequeueRight, dequeueLeft, dequeueRight, etc. is still O(N), versus O(N*N) with a double-list method).
What about a circularly-linked list? It supports O(1) appends and O(n) iteration.