I am trying to write a function in SML that takes a list as its first argument and a number as its second. The result should be: greaterT [3,5,2,4,7] 3; val it = [5,4,7] : int list
This is my work so far, but it doesn't work yet.
fun greaterT ([],k) = []
| greaterT (a::x,k) =
if a <= k
then x = []
else greaterT(x,k);
You are getting into problems because the then branch of the if expression is trying to do something which doesn't make sense:
x = []
You can't re-assign values to already bound identifiers in Standard ML. You can achieve mutability using refs, but they're not needed in this case.
In your case, what the required function conceptually does is look at the first element of the list, decide whether to keep it in the final result by comparing it with k, and then recurse on the rest of the list:
fun greaterT ([], k) = []
| greaterT (a :: rest, k) =
if a <= k then
  greaterT (rest, k)
else
  a :: greaterT (rest, k)
The above isn't a good solution, though, because the recursive call under the cons (a :: ...) isn't in a tail position, so the compiler can't optimize the generated code (for reasons I won't discuss here; there are plenty of questions about tail-call optimization on StackOverflow).
So, a better version would use an extra parameter in which it accumulates the elements that are greater than k.
fun greaterTailRec ([], k, result) = List.rev result
| greaterTailRec (a :: rest, k, result) =
if a <= k then
greaterTailRec (rest, k, result)
else
greaterTailRec (rest, k, a :: result)
fun greaterT (list, k) = greaterTailRec (list, k, [])
We can go a step further and generalize greaterTailRec by replacing the specific part, which in this case is the <= comparison, with a more general call to a function that takes an element of the list as argument and returns a bool. Thus, we end up with a generally useful function called filter:
fun filter predicate list =
let
fun recur ([], result) = List.rev result
| recur (a :: rest, result) =
if predicate a then
recur (rest, a :: result)
else
recur (rest, result)
in
recur (list, [])
end
fun greaterT (list, k) =
filter (fn a => a > k) list
The filter helper function is already defined on the List structure, so your initial function can be more concisely expressed as:
fun greaterT (list, k) =
List.filter (fn a => a > k) list
Both of these functions are implementations of a dictionary lookup, picked out from learnyouahaskell:
findKeyPatMatch :: (Eq k) => k -> [(k,v)] -> Maybe v
findKeyPatMatch key [] = Nothing
findKeyPatMatch key ((k,v):xs)
| key == k = Just v
| otherwise = findKeyPatMatch key xs
findKeyFold :: (Eq k) => k -> [(k,v)] -> Maybe v
findKeyFold key = foldr (\(k,v) acc -> if k == key then Just v else acc) Nothing
Since the pattern in the first function is the exact idiom foldr is made for, I expected both of these functions to have the same performance. However, the pattern matching implementation is significantly faster.
foldr short-circuits, so I can't think of any particular reason for this, except for maybe a function call overhead, at a stretch.
testKey = 10^7
testDict = [(i,i) | i <- [1..]]
findKeyPatMatch testKey testDict -- takes ~3.40 s
findKeyFold testKey testDict -- takes ~4.10 s
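Incidentally, the short-circuiting is easy to confirm with a quick sanity check of my own (separate from the benchmark): because the step function only uses the accumulator in the otherwise branch, findKeyFold terminates even on an infinite association list once the key has been found.
-- Sanity check that findKeyFold stops as soon as the key is found:
-- the list is infinite, so this could only terminate by short-circuiting.
shortCircuitCheck :: Maybe Integer
shortCircuitCheck = findKeyFold 3 [(i, i) | i <- [1 ..]]  -- Just 3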
So my question is: what's the reason for the performance difference, and is there any way to optimize the foldr expression?
I'm trying to implement the "stock span problem" in Haskell. This is the solution I've come up with, and I would like to see if there is a more idiomatic way to do it. This is an O(n^2) algorithm (I think), and using a stack we can make it O(n). Any pointers to other higher-order functions that could be used are appreciated.
import Data.List (inits)
type Quote = Int
type Quotes = [Quote]
type Span = [Int]
stockSpanQuad :: Quotes -> Span
stockSpanQuad [] = []
stockSpanQuad xs = map spanQ (map splitfunc (tail $ inits xs))
where
spanQ (qs, q) = 1 + (length $ takeWhile (\a -> a <= q) (reverse qs))
splitfunc xs = (init xs, last xs)
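For reference, here is an example run (my own illustration, on a price series commonly used for this problem; the spans were checked by hand):
-- stockSpanQuad applied to the usual example series.
exampleSpans :: Span
exampleSpans = stockSpanQuad [100, 80, 60, 70, 60, 75, 85]
-- exampleSpans == [1, 1, 1, 2, 1, 4, 6]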
The link that you've provided contains a solution which uses a stack data structure. The examples in every language mutate the stack, use arrays with indices, and access array elements by index. None of these operations are very common in Haskell.
Let's consider the following solution:
type Quote = Int
type Quotes = [Quote]
type Span = [Int]
stockSpanWithStack :: Quotes -> Span
stockSpanWithStack quotes = calculateSpan quotesWithIndexes []
where
quotesWithIndexes = zip quotes [0..]
calculateSpan [] _ = []
calculateSpan ((x, index):xs) stack =
let
newStack = dropWhile (\(y, _) -> y <= x) stack
stockValue [] = index + 1
stockValue ((_, x):_) = index - x
in
(stockValue newStack) : (calculateSpan xs ((x, index):newStack))
And let's compare it with the Python solution:
# Calculate span values for rest of the elements
for i in range(1, n):
# Pop elements from stack while stack is not
# empty and top of stack is smaller than price[i]
while( len(st) > 0 and price[st[-1]] <= price[i]):
st.pop()
# If stack becomes empty, then price[i] is greater
# than all elements on left of it, i.e. price[0],
# price[1], ..price[i-1]. Else the price[i] is
# greater than elements after top of stack
S[i] = i+1 if len(st) <= 0 else (i - st[-1])
# Push this element to stack
st.append(i)
For the stack solution, we need the elements paired with their indices. That can be obtained like this:
quotesWithIndexes = zip quotes [0..]
The list of quotes is traversed recursively, and instead of modifying a stack in every loop iteration, we call the function with the modified value:
calculateSpan ((x, index):xs) stack =
The following lines in Python (popping the elements of the stack that are not greater than the current value):
while( len(st) > 0 and price[st[-1]] <= price[i]):
st.pop()
Can be rewritten in Haskell as:
newStack = dropWhile (\(y, _) -> y <= x) stack
And the calculation of a stock span value:
S[i] = i+1 if len(st) <= 0 else (i - st[-1])
Can be interpreted as:
stockValue [] = index + 1
stockValue ((_, x):_) = index - x
As mentioned above, it's not idiomatic in Haskell to modify the state of variables, as in S[i] = ... or st.append(i). Instead, we recursively call the function with the new stack value and prepend the current result to it:
(stockValue newStack) : (calculateSpan xs ((x, index):newStack))
Technically, we push onto the front of the list and drop elements from the front of the list, because that's the idiomatic and more performant way of working with lists in Haskell.
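As a quick check (my own comparison, not from the original answer), the stack-based version produces the same spans as the quadratic version from the question on the example series used above:
-- stockSpanWithStack agrees with the quadratic version on the example series.
checkSpans :: Bool
checkSpans = stockSpanWithStack [100, 80, 60, 70, 60, 75, 85] == [1, 1, 1, 2, 1, 4, 6]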
I've come up with the following; however, I'm not sure if it's "idiomatic". I think this is similar to @Igor's answer. Please comment.
{-# LANGUAGE BangPatterns #-}
stockSpanLinear :: Quotes -> Span
stockSpanLinear = reverse.snd.(foldl func ([],[]))
type Stack = [(Quote, Int)]
func :: (Stack, Span)-> Quote -> (Stack, Span)
func ([], []) q = ([(q, 0)], [1])  -- store a 0-based index so the empty-stack case (index+1) below is correct
func p@((_, !i):pis, span) q = go p q (i+1)
where
go :: (Stack, Span) -> Quote -> Int -> (Stack, Span)
go (stack,span) q index = let ys = dropWhile (\(p, _) -> p <= q) stack
in case ys of
[] -> ((q, index):ys, index+1:span)
(_,i):_ -> ((q, index):ys, index-i:span)
I have to adapt some existing code in Standard ML and I'm facing a problem right now.
I've got a function which returns a list of elements (let's call them Nodes, or Worlds):
val worlds = listNodes()
And for each world I've got a function fun listSuccessors w r which returns the successors of that world.
My question is: how do I obtain the total number of successors globally?
In pseudocode, I would like something like this:
worlds <-- listNodes ()
solution <-- 0
foreach w in worlds do
solution <-- solution + (length (listSuccessors w))
end foreach
return solution
But I unfortunately have no idea how to do this :/
Here are the existing functions:
fun listSuccessors w r =
let
val succs =
case Dynarraydict.get (nodes, Node.getId w)
of ROOT (_, _, succs) => succs
| _ => Exn.unexpArg "Nodestore.listSuccessors.succs"
in
List.mapPartial (fn (n, r', _) => if r = r' then SOME (getNode n) else NONE) succs
end
fun listNodes () = Dynarraydict.foldr (fn (_, ROOT item, ys) => (#1 item)::ys | (_, _, ys) => ys) nil nodes
Thanks in advance for your help. I'm a total beginner in Standard ML, unfortunately, so it's quite complicated for me to understand how to manipulate lists of lists. :/
Some hints: sum the sizes of the lists using List.foldl and List.length.
In a single line you should be able to implement your pseudocode.
It should be something like:
List.foldl (fn (x, acc) => acc + List.length x) 0 your_list_of_lists
Hmm, this looks like homework in spite of the comment, because it is a typical exercise and an FP idiom.
A standard solution in ML (sic) is a recursive function where some parameters act as storage to sum something up or collect a list, while the other parameter(s) (the input data) shrink at every step and, when empty, terminate the recursion.
This idiom serves the purpose to minimize the use of mutable state in the program.
Hope this helps.
I need a function which takes a list and returns a unique element if one exists, or [] if it doesn't. If many unique elements exist, it should return the first one (without wasting time finding the others).
Additionally, I know that all elements in the list come from a (small and known) set A.
For example, this function does the job for Ints:
import Data.List (group, sort)
unique :: Ord a => [a] -> [a]
unique li = first $ filter ((==1).length) ((group.sort) li)
where first [] = []
first (x:xs) = x
ghci> unique [3,5,6,8,3,9,3,5,6,9,3,5,6,9,1,5,6,8,9,5,6,8,9]
[1]
This is, however, not good enough, because it involves sorting (n log n) while it could be done in linear time (because A is small).
Additionally, it requires the type of the list elements to be Ord, while all that should be needed is Eq. It would also be nice if the number of comparisons were as small as possible (i.e. if we traverse a list and encounter an element el twice, we don't test subsequent elements for equality with el).
This is why, for example, "Counting unique elements in a list" doesn't solve the problem: all the answers involve either sorting or traversing the whole list to find the count of all elements.
The question is: how can this be done correctly and efficiently in Haskell?
Okay, linear time, from a finite domain. The running time will be O((m + d) log d), where m is the size of the list and d is the size of the domain, which is linear when d is fixed. My plan is to use the elements of the set as the keys of a trie, with the counts as values, then look through the trie for elements with count 1.
import qualified Data.IntTrie as IntTrie
import Data.List (foldl')
import Control.Applicative
Count each of the elements. This traverses the list once, builds a trie with the results (O(m log d)), then returns a function which looks up the result in the trie (with running time O(log d)).
counts :: (Enum a) => [a] -> (a -> Int)
counts xs = IntTrie.apply (foldl' insert (pure 0) xs) . fromEnum
where
insert t x = IntTrie.modify' (fromEnum x) (+1) t
We use the Enum constraint to convert values of type a to integers in order to index them in the trie. An Enum instance is part of the witness of your assumption that a is a small, finite set (Bounded would be the other part, but see below).
And then look for ones that are unique.
uniques :: (Eq a, Enum a) => [a] -> [a] -> [a]
uniques dom xs = filter (\x -> cts x == 1) dom
where
cts = counts xs
This function takes as its first parameter an enumeration of the entire domain. We could have required a Bounded a constraint and used [minBound..maxBound] instead, which is semantically appealing to me since finite is essentially Enum+Bounded, but quite inflexible since now the domain needs to be known at compile time. So I would choose this slightly uglier but more flexible variant.
uniques traverses the domain once (lazily, so head . uniques dom will only traverse as far as it needs to to find the first unique element -- not in the list, but in dom), for each element running the lookup function which we have established is O(log d), so the filter takes O(d log d), and building the table of counts takes O(m log d). So uniques runs in O((m + d) log d), which is linear when d is fixed. It will take at least Ω(m log d) to get any information from it, because it has to traverse the whole list to build the table (you have to get all the way to the end of the list to see if an element was repeated, so you can't do better than this).
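For example (my own usage sketch, assuming the domain is [1..9] and reusing the sample list from the question; it relies on the counts/uniques definitions above and the data-inttrie package):
-- The only domain element that occurs exactly once in the sample list is 1.
sampleUniques :: [Int]
sampleUniques = uniques [1 .. 9] [3,5,6,8,3,9,3,5,6,9,3,5,6,9,1,5,6,8,9,5,6,8,9]
-- sampleUniques == [1]; head sampleUniques gives the first unique element.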
There really isn't any way to do this efficiently with just Eq. You'd need to use some much less efficient way to build the groups of equal elements, and you can't know that only one of a particular element exists without scanning the whole list.
Also, note that to avoid useless comparisons you'd need a way of checking to see if an element has been encountered before, and the only way to do that would be to have a list of elements known to have multiple occurrences, and the only way to check if the current element is in that list is... to compare it for equality with each.
If you want this to work faster than O(something really horrible) you need that Ord constraint.
Ok, based on the clarifications in comments, here's a quick and dirty example of what I think you're looking for:
import Data.List (delete)
unique [] _ _ = Nothing
unique _ [] [] = Nothing
unique _ (r:_) [] = Just r
unique candidates results (x:xs)
| x `notElem` candidates = unique candidates results xs
| x `elem` results = unique (delete x candidates) (delete x results) xs
| otherwise = unique candidates (x:results) xs
The first argument is a list of candidates, which should initially be all possible elements. The second argument is the list of possible results, which should initially be empty. The third argument is the list to examine.
If it runs out of candidates, or reaches the end of the list with no results, it returns Nothing. If it reaches the end of the list with results, it returns the one at the front of the result list.
Otherwise, it examines the next input element: If it's not a candidate, it ignores it and continues. If it's in the result list we've seen it twice, so remove it from the result and candidate lists and continue. Otherwise, add it to the results and continue.
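For instance (my own sketch, taking the candidate set to be [1..9] and reusing the question's sample list):
-- Initial candidates are the whole domain; the result list starts empty.
sampleResult :: Maybe Int
sampleResult = unique [1 .. 9] [] [3,5,6,8,3,9,3,5,6,9,3,5,6,9,1,5,6,8,9,5,6,8,9]
-- sampleResult == Just 1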
Unfortunately, this still has to scan the entire list for even a single result, since that's the only way to be sure it's actually unique.
First off, if your function is intended to return at most one element, you should almost certainly use Maybe a instead of [a] to return your result.
Second, at minimum, you have no choice but to traverse the entire list: you can't tell for sure if any given element is actually unique until you've looked at all the others.
If your elements are not Ordered, but can only be tested for Equality, you really have no better option than something like:
firstUnique (x:xs)
| elem x xs = firstUnique (filter (/= x) xs)
| otherwise = Just x
firstUnique [] = Nothing
Note that you don't need to filter out the duplicated elements if you don't want to -- the worst case is quadratic either way.
Edit:
The above misses the possibility of early exit due to the above-mentioned small/known set of possible elements. However, note that the worst case will still require traversing the entire list: all that is necessary is for at least one of these possible elements to be missing from the list...
However, an implementation that provides an early out in case of set exhaustion:
firstUnique = f [] [<small/known set of possible elements>] where
f [] [] _ = Nothing -- early out
f uniques noshows (x:xs)
| elem x uniques = f (delete x uniques) noshows xs
| elem x noshows = f (x:uniques) (delete x noshows) xs
| otherwise = f uniques noshows xs
f [] _ [] = Nothing
f (u:_) _ [] = Just u
Note that if your list has elements which shouldn't be there (because they aren't in the small/known set), they will be pointedly ignored by the above code...
As others have said, without any additional constraints, you can't do this in less than quadratic time, because without knowing something about the elements, you can't keep them in some reasonable data structure.
If we are able to compare elements, an obvious O(n log n) solution is to compute the count of each element first and then find the first one with count equal to 1:
import Data.List (foldl', find)
import Data.Map (Map)
import qualified Data.Map as Map
import Data.Maybe (fromMaybe)
count :: (Ord a) => Map a Int -> a -> Int
count m x = fromMaybe 0 $ Map.lookup x m
add :: (Ord a) => Map a Int -> a -> Map a Int
add m x = Map.insertWith (+) x 1 m
uniq :: (Ord a) => [a] -> Maybe a
uniq xs = find (\x -> count cs x == 1) xs
where
cs = foldl' add Map.empty xs
Note that the log n factor comes from the fact that we need to operate on a Map of size n. If the list has only k unique elements then the size of our map will be at most k, so the overall complexity will be just O(n log k).
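As a quick usage check (my own example, reusing the sample list from the question):
-- 1 is the only element that occurs exactly once, so uniq returns Just 1.
sampleUniq :: Maybe Int
sampleUniq = uniq [3,5,6,8,3,9,3,5,6,9,3,5,6,9,1,5,6,8,9,5,6,8,9]
-- sampleUniq == Just 1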
However, we can do even better - we can use a hash table instead of a map to get an O(n) solution. For this we'll need the ST monad to perform mutable operations on the hash map, and our elements will have to be Hashable. The solution is basically the same as before, just a little bit more complex due to working within the ST monad:
import Control.Monad
import Control.Monad.ST
import Data.Hashable
import qualified Data.HashTable.ST.Basic as HT
import Data.Maybe (fromMaybe)
count :: (Eq a, Hashable a) => HT.HashTable s a Int -> a -> ST s Int
count ht x = liftM (fromMaybe 0) (HT.lookup ht x)
add :: (Eq a, Hashable a) => HT.HashTable s a Int -> a -> ST s ()
add ht x = count ht x >>= HT.insert ht x . (+ 1)
uniq :: (Eq a, Hashable a) => [a] -> Maybe a
uniq xs = runST $ do
-- Count all elements into a hash table:
ht <- HT.newSized (length xs)
forM_ xs (add ht)
-- Find the first one with count 1
first (\x -> liftM (== 1) (count ht x)) xs
-- Monadic variant of find which exits once an element is found.
first :: (Monad m) => (a -> m Bool) -> [a] -> m (Maybe a)
first p = f
where
f [] = return Nothing
f (x:xs') = do
b <- p x
if b then return (Just x)
else f xs'
Notes:
If you know that there will be only a small number of distinct elements in the list, you could use HT.new instead of HT.newSized (length xs). This will save you some memory and one pass over xs, but in the case of many distinct elements the hash table will have to be resized several times.
Here is a version that does the trick:
unique :: Eq a => [a] -> [a]
unique = select . collect []
where
collect acc [] = acc
collect acc (x : xs) = collect (insert x acc) xs
insert x [] = [[x]]
insert x (ys@(y : _) : yss)
| x == y = (x : ys) : yss
| otherwise = ys : insert x yss
select [] = []
select ([x] : _) = [x]
select ((_ : _) : xss) = select xss
So, first we traverse the input list (collect) while maintaining a list of buckets of equal elements that we update with insert. Then we simply select the first element that appears in a singleton bucket (select).
The bad news is that this takes quadratic time: for every visited element in collect we need to go over the list of buckets. I am afraid that is the price you will have to pay for only being able to constrain the element type to be in Eq.
Something like this looks pretty good:
import Data.List (delete, foldl')
unique = fst . foldl' (\(a, b) c -> if (c `elem` b)
then (a, b)
else if (c `elem` a)
then (delete c a, c:b)
else (c:a, b)) ([],[])
The first element of the tuple resulting from the fold contains what you are expecting: a list of the unique elements. The second element of the tuple is the memory of the process, recording whether an element has already been discarded or not.
About space performance.
As the problem is designed, every element of the list has to be traversed at least once before a result can be produced. The algorithm must keep track of discarded values in addition to the good ones, but each discarded value appears there only once, so in the worst case the required amount of memory is proportional to the size of the input list. This sounds fine, as you said the expected inputs are small.
About time performance.
Since the expected inputs are small and not sorted by default, sorting the list inside the algorithm, or before applying it, is not worthwhile. Statistically, the extra work of keeping an element at its ordered place (in the sublists a and b of the tuple (a, b)) would cost about as much as checking whether the element already appears in one of the lists.
Below is a nicer and more explicit version of the foldl' one.
import Data.List (foldl', delete, elem)
unique :: Eq a => [a] -> [a]
unique = fst . foldl' algorithm ([], [])
where
algorithm (result0, memory0) current =
if (current `elem` memory0)
then (result0, memory0)
else if (current`elem` result0)
then (delete current result0, memory)
else (result, memory0)
where
result = current : result0
memory = current : memory0
In the nested if ... then ... else ... expression, the result list is traversed twice in the worst case; this can be avoided using the following helper function.
unique' :: Eq a => [a] -> [a]
unique' = fst . foldl' algorithm ([], [])
where
algorithm (result, memory) current =
if (current `elem` memory)
then (result, memory)
else helper current result memory []
where
helper current [] memory acc = (current : acc, memory)
helper current (r:rs) memory acc
| current == r = (acc ++ rs, current:memory)
| otherwise = helper current rs memory (r:acc)
But the helper can be rewritten using a fold as follows, which is definitely nicer.
helper current result memory =
    -- One pass over result: drop current if it is already there (and remember it
    -- in memory), otherwise add current to the result.
    let step (r, m, seen) x
            | x == current = (r, current : m, True)
            | otherwise    = (x : r, m, seen)
        (r', m', found) = foldl' step ([], memory, False) result
    in if found then (r', m') else (current : r', m')
Inspired by this question and answer, how do I create a generic permutations algorithm in F#? Google doesn't give any useful answers to this.
EDIT: I provide my best answer below, but I suspect that Tomas's is better (certainly shorter!)
You can also write something like this:
let rec permutations list taken =
seq { if Set.count taken = List.length list then yield [] else
for l in list do
if not (Set.contains l taken) then
for perm in permutations list (Set.add l taken) do
yield l::perm }
The 'list' argument contains all the numbers that you want to permute and 'taken' is a set that contains the numbers already used. The function returns an empty list when all numbers are taken.
Otherwise, it iterates over all numbers that are still available, gets all possible permutations of the remaining numbers (recursively using 'permutations') and appends the current number to each of them before returning (l::perm).
To run this, you'll give it an empty set, because no numbers are used at the beginning:
permutations [1;2;3] Set.empty;;
I like this implementation (but can't remember the source of it):
let rec insertions x = function
| [] -> [[x]]
| (y :: ys) as l -> (x::l)::(List.map (fun x -> y::x) (insertions x ys))
let rec permutations = function
| [] -> seq [ [] ]
| x :: xs -> Seq.concat (Seq.map (insertions x) (permutations xs))
Tomas' solution is quite elegant: it's short, purely functional, and lazy. I think it may even be tail-recursive. Also, it produces permutations lexicographically. However, we can improve performance two-fold using an imperative solution internally while still exposing a functional interface externally.
The function permutations takes a generic sequence e as well as a generic comparison function f : ('a -> 'a -> int) and lazily yields immutable permutations lexicographically. The comparison functional allows us to generate permutations of elements which are not necessarily comparable as well as easily specify reverse or custom orderings.
The inner function permute is the imperative implementation of the algorithm described here. The conversion function let comparer f = { new System.Collections.Generic.IComparer<'a> with member self.Compare(x,y) = f x y } allows us to use the System.Array.Sort overload which does in-place sub-range custom sorts using an IComparer.
let permutations f e =
///Advances (mutating) perm to the next lexical permutation.
let permute (perm:'a[]) (f: 'a->'a->int) (comparer:System.Collections.Generic.IComparer<'a>) : bool =
try
//Find the longest "tail" that is ordered in decreasing order ((s+1)..perm.Length-1).
//will throw an index out of bounds exception if perm is the last permutation,
//but will not corrupt perm.
let rec find i =
if (f perm.[i] perm.[i-1]) >= 0 then i-1
else find (i-1)
let s = find (perm.Length-1)
let s' = perm.[s]
//Change the number just before the tail (s') to the smallest number bigger than it in the tail (perm.[t]).
let rec find i imin =
if i = perm.Length then imin
elif (f perm.[i] s') > 0 && (f perm.[i] perm.[imin]) < 0 then find (i+1) i
else find (i+1) imin
let t = find (s+1) (s+1)
perm.[s] <- perm.[t]
perm.[t] <- s'
//Sort the tail in increasing order.
System.Array.Sort(perm, s+1, perm.Length - s - 1, comparer)
true
with
| _ -> false
//permutation sequence expression
let c = f |> comparer
let freeze arr = arr |> Array.copy |> Seq.readonly
seq { let e' = Seq.toArray e
yield freeze e'
while permute e' f c do
yield freeze e' }
Now for convenience we have the following where let flip f x y = f y x:
let permutationsAsc e = permutations compare e
let permutationsDesc e = permutations (flip compare) e
My latest best answer
//mini-extension to List for removing 1 element from a list
module List =
let remove n lst = List.filter (fun x -> x <> n) lst
//Node type declared outside permutations function allows us to define a pruning filter
type Node<'a> =
| Branch of ('a * Node<'a> seq)
| Leaf of 'a
let permutations treefilter lst =
//Builds a tree representing all possible permutations
let rec nodeBuilder lst x = //x is the next element to use
match lst with //lst is all the remaining elements to be permuted
| [x] -> seq { yield Leaf(x) } //only x left in list -> we are at a leaf
| h -> //anything else left -> we are at a branch, recurse
let ilst = List.remove x lst //get new list without i, use this to build subnodes of branch
seq { yield Branch(x, Seq.collect (nodeBuilder ilst) ilst) }
//converts a tree to a list for each leafpath
let rec pathBuilder pth n = // pth is the accumulated path, n is the current node
match n with
| Leaf(i) -> seq { yield List.rev (i :: pth) } //path list is constructed from root to leaf, so have to reverse it
| Branch(i, nodes) -> Seq.collect (pathBuilder (i :: pth)) nodes
let nodes =
lst //using input list
|> Seq.collect (nodeBuilder lst) //build permutations tree
|> Seq.choose treefilter //prune tree if necessary
|> Seq.collect (pathBuilder []) //convert to seq of path lists
nodes
The permutations function works by constructing an n-ary tree representing all possible permutations of the list of 'things' passed in, then traversing the tree to construct a list of lists. Using 'Seq' dramatically improves performance as it makes everything lazy.
The second parameter of the permutations function allows the caller to define a filter for 'pruning' the tree before generating the paths (see my example below, where I don't want any leading zeros).
Some example usage: Node<'a> is generic, so we can do permutations of 'anything':
let myfilter n = Some(n) //i.e., don't filter
permutations myfilter ['A';'B';'C';'D']
//in this case, I want to 'prune' leading zeros from my list before generating paths
let noLeadingZero n =
match n with
| Branch(0, _) -> None
| n -> Some(n)
//Curry myself an int-list permutations function with no leading zeros
let noLZperm = permutations noLeadingZero
noLZperm [0..9]
(Special thanks to Tomas Petricek, any comments welcome)
If you need distinct permutations (when the original set has duplicates), you can use this:
let rec insertions pre c post =
seq {
if List.length post = 0 then
yield pre @ [c]
else
if List.forall (fun x->x<>c) post then
yield pre @ [c] @ post
yield! insertions (pre @ [post.Head]) c post.Tail
}
let rec permutations l =
seq {
if List.length l = 1 then
yield l
else
let subperms = permutations l.Tail
for sub in subperms do
yield! insertions [] l.Head sub
}
This is a straight-forward translation from this C# code. I am open to suggestions for a more functional look-and-feel.
Take a look at this one:
http://fsharpcode.blogspot.com/2010/04/permutations.html
let length = Seq.length
let take = Seq.take
let skip = Seq.skip
let (++) = Seq.append
let concat = Seq.concat
let map = Seq.map
let (|Empty|Cons|) (xs:seq<'a>) : Choice<Unit, 'a * seq<'a>> =
if (Seq.isEmpty xs) then Empty else Cons(Seq.head xs, Seq.skip 1 xs)
let interleave x ys =
seq { for i in [0..length ys] ->
(take i ys) ++ seq [x] ++ (skip i ys) }
let rec permutations xs =
match xs with
| Empty -> seq [seq []]
| Cons(x,xs) -> concat(map (interleave x) (permutations xs))
If you need permutations with repetitions, this is the "by the book" approach using List.indexed instead of element comparison to filter out elements while constructing a permutation.
let permutations s =
let rec perm perms carry rem =
match rem with
| [] -> carry::perms
| l ->
let li = List.indexed l
let permutations =
seq { for ci in li ->
let (i, c) = ci
(perm
perms
(c::carry)
(li |> List.filter (fun (index, _) -> i <> index) |> List.map (fun (_, char) -> char))) }
permutations |> Seq.fold List.append []
perm [] [] s