I have a list of lists, where each inner list has the same length, and I need to transpose it (that is, I need the dimensions flipped).
In other words: take the first item of each sublist and put those in a new list, take the second item of each sublist and put those in a new list, etc. Then return a list of all these new lists.
Example: if input is:
let ls = [[1;2;3];[4;5;6];[7;8;9];[0;0;0]];;
Then output is supposed to be:
val it : int list list = [[1; 4; 7; 0]; [2; 5; 8; 0]; [3; 6; 9; 0]]
I have working code, but it doesn't feel right. It traverses the lists multiple times, needs to call List.rev multiple times, and has to check the inner lists for emptiness:
let rec getInnerHeads acc skipped lst =
    match lst with
    | [] -> List.rev acc, List.rev skipped
    | item::rest ->
        match item with
        | [] -> [], skipped
        | innerHead::skip1 ->
            getInnerHeads (innerHead::acc) (skip1::skipped) rest

let rec flipDimensions acc lst =
    match lst with
    | [] -> acc |> List.rev
    | z when (z |> List.forall List.isEmpty) -> acc |> List.rev
    | rest ->
        let (elem, skip1Elems) = getInnerHeads [] [] rest
        flipDimensions (elem::acc) skip1Elems
The only upside of the above code is that it is tail-recursive (at least I think it is).
Does anybody have a more efficient or more succinct algorithm (or both)? I checked F# Snippets and SO; I figured this would have been asked before, since it seems so common, but I didn't find any examples.
Maybe something with List.unfold?
let transpose matrix =
    let rec loop acc = function
        | (_::_)::_ as m -> loop (List.map List.head m :: acc) (List.map List.tail m)
        | _ -> List.rev acc
    loop [] matrix
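Regarding the List.unfold idea from the question: it works too, as long as the input is rectangular (every row the same length), because the generator calls List.head on every row. A rough sketch (the name transposeU is mine), not the answer above:

let transposeU matrix =
    matrix
    |> List.unfold (fun rows ->
        // stop once every row has been consumed
        if rows |> List.forall List.isEmpty then None
        // otherwise emit the column of heads and continue with the tails
        else Some (List.map List.head rows, List.map List.tail rows))

transposeU [[1;2;3];[4;5;6];[7;8;9];[0;0;0]]
// [[1; 4; 7; 0]; [2; 5; 8; 0]; [3; 6; 9; 0]]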
I have defined a data type called FS the following way:
type Name = String

data Ext where { Txt :: Ext; Mp3 :: Ext; Jar :: Ext; Doc :: Ext; Hs :: Ext }
    deriving (Eq, Show)

data FS where { A :: (Name, Ext) -> FS;
                Dir :: Name -> [FS] -> FS }
    deriving (Eq, Show)
(A stands for file and Dir for directory)
And I'm trying to write a function that, given an FS (a directory), returns the same FS but ordered alphabetically at all levels. My attempt so far is the following:
orderFS :: FS-> FS
orderFS (Dir x y) = Dir x (map orderFS (sort y));
orderFS (A (x,y)) = A (x,y);
The only piece I'm missing is a function called "sort" that takes a [FS] and returns it ordered alphabetically by the Name field.
I read that there are functions like sort from Data.List that could help, but I have to do this without using anything other than the Prelude.
So how should I implement such a function? Thanks in advance.
I do not believe that there are any sorting functions in Prelude but not in a module like Data.List. Note that Data.List is in the base package that is part of GHC, so in basically any situation where Prelude is available, I would imagine that Data.List would be as well---you shouldn't need to download/include any other packages in order to use it.
That said, if you do want to write your own sorting function, you are probably best off taking an existing simple sorting algorithm and using it. There are very neat/simple ways of writing quicksorts and merge sorts in Haskell, although the obvious implementations sometimes don't have the same exact performance characteristics as you would expect. Merge sort, for example, has roughly the same asymptotics, but partitioning the list into two actually takes some time, since the list is singly-linked and you therefore have to walk through half of it in order to split it. But, it can be a very nice short function that looks a lot like the algorithm, and is probably worth doing as a learning exercise.
Also, I noticed that you are defining your Ext and FS types as GADTs; I'm not sure what the motivation for that is here, so I would suggest using the non-GADT syntax, which is much simpler for this example:
type Name = String
data Ext = Txt | Mp3 | Jar | Doc | Hs deriving (Eq, Show)
data FS = A Name Ext | Dir Name [FS] deriving (Eq, Show)
In order to sort them by name, it would probably be worth writing a simple accessor function that can get the name of an FS:
name :: FS -> Name
name (A n _) = n
name (Dir n _) = n
Another approach would be to factor out the thing (Name) that is common to both cases:
data FS = NamedFS { name :: Name, fs :: UnnamedFS }
data UnnamedFS = A Ext | Dir [FS]
The first entry here uses record syntax, which, among other things, will automatically make a name :: FS -> Name accessor, as well as an fs :: FS -> UnnamedFS.
For the actual sort, it looks a lot like the algorithmic description of merge sort. To start with, let's write a function to divide a list in two:
split :: [a] -> ([a], [a])
split xs = splitAt (length xs `div` 2) xs
We also need a function to merge two sorted lists:
merge :: Ord a => [a] -> [a] -> [a]
merge [] x = x
merge x [] = x
merge (x:xs) (y:ys) | x < y     = x : merge xs (y:ys)
                    | otherwise = y : merge (x:xs) ys
Actually, this is not what we want, because it always uses the < from the Ord instance; instead, we want something that takes in a comparison function. In this case, we assume that if the comparison function returns true when called with x and y, x is conceptually less than y.
merge :: (a -> a -> Bool) -> [a] -> [a] -> [a]
merge _ [] x = x
merge _ x [] = x
merge le (x:xs) (y:ys) | x `le` y  = x : merge le xs (y:ys)
                       | otherwise = y : merge le (x:xs) ys
Now, we can implement mergesort like usual:
mergeSort :: (a -> a -> Bool) -> [a] -> [a]
mergeSort _ [] = []
mergeSort _ [x] = [x]
mergeSort f l = merge f (mergeSort f left) (mergeSort f right)
  where (left, right) = split l
And just call it like:
-- fss is some [FS]
mergeSort (\x y -> name x < name y) fss
If we could use Data.Ord, this could be further reduced to:
mergeSort (comparing name) fss
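Since this collection mixes Haskell and F# questions, here is a rough F# rendering of the same top-down merge sort, for comparison; the names (merge, mergeSortBy) are mine and not part of the answer above:

// Merge two lists that are already sorted according to `le` ("less or equal").
let rec merge le xs ys =
    match xs, ys with
    | [], ys -> ys
    | xs, [] -> xs
    | x :: xs', y :: ys' ->
        if le x y then x :: merge le xs' ys
        else y :: merge le xs ys'

// Split in half, sort each half recursively, then merge the results.
let rec mergeSortBy le lst =
    match lst with
    | [] | [_] -> lst
    | _ ->
        let left, right = List.splitAt (List.length lst / 2) lst
        merge le (mergeSortBy le left) (mergeSortBy le right)

// e.g. mergeSortBy (fun a b -> name a <= name b) fss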
The function in Data.List that could help you is sortOn; together with a function getName :: FS -> Name, it would allow you to sort by comparing the Names.
If you cannot use functions from Data.List you will have to implement a sorting algorithm yourself (of which there are many to choose from). One example is "QuickSort" as implemented in the Learn you a Haskell book:
quicksort :: [FS] -> [FS]
quicksort [] = []
quicksort (x:xs) =
    let smallerSorted = quicksort [a | a <- xs, getName a <= getName x]
        biggerSorted  = quicksort [a | a <- xs, getName a > getName x]
    in  smallerSorted ++ [x] ++ biggerSorted
Note that I changed the comparisons to compare the getNames instead of the whole nodes.
Another thing: you are using GADT syntax to define your datatypes and I cannot see any reason for you to do so. Here is how I would write them instead using ordinary datatype declarations:
data Ext
    = Txt
    | Mp3
    | Jar
    | Doc
    | Hs
    deriving (Eq, Show)

data FS
    = File Name Ext
    | Dir Name [FS]
    deriving (Eq, Show)
In my opinion, the most natural list-sorting function in Haskell is the bottom-up mergesort (Peter Amidon's answer gives a top-down mergesort). Suppose you're starting with the list
[25,1,22,2,10,8,6,20,13,28,5,3,11]
The first step is to turn the list into a list of lists. The simplest way is with map (:[]), which yields
[[25],[1],[22],[2],[10],[8],[6],[20],[13],[28],[5],[3],[11]]
Next, you merge lists pairwise:
[[1,25],[2,22],[8,10],[6,20],[13,28],[3,5],[11]]
Repeat!
[[1,2,22,25],[6,8,10,20],[3,5,13,28],[11]]
And again!
[[1,2,6,8,10,20,22,25],[3,5,11,13,28]]
And once more:
[[1,2,3,5,6,8,10,11,13,20,22,25,28]]
As we now have just one list, we extract it.
To implement this, I think it's best to use a custom type for the lists of lists in this case, to express the appropriate strictness:
infixr 5 :::
data LL a = ![a] ::: LL a | Nil
Implementation proceeds step by step. While breaking the list up into singletons is the simplest way to start off the process, it's a bit wasteful and will slow down access to the first few elements. So let's go by pairs there too:
breakUp :: (a -> a -> Ordering) -> [a] -> LL a
breakUp _cmp [] = Nil
breakUp _cmp xs@[a] = xs ::: Nil
breakUp cmp (x1 : x2 : xs)
    | GT <- cmp x1 x2 = [x2,x1] ::: breakUp cmp xs
    | otherwise       = [x1,x2] ::: breakUp cmp xs
Now we need to write a merge function:
merge :: (a -> a -> Ordering)
      -> [a] -> [a] -> [a]
merge _cmp [] ys = ys
merge _cmp xs [] = xs
merge cmp xss@(x:xs) yss@(y:ys)
    | GT <- cmp x y = y : merge cmp xss ys
    | otherwise     = x : merge cmp xs yss
Next we need to be able to merge a list of lists pairwise:
mergePairwise :: (a -> a -> Ordering)
              -> LL a -> LL a
mergePairwise _cmp Nil = Nil
mergePairwise _cmp xs@(_ ::: Nil) = xs
mergePairwise cmp (xs1 ::: xs2 ::: xss)
    = merge cmp xs1 xs2 ::: mergePairwise cmp xss
Then we must merge up until done:
mergeAll :: (a -> a -> Ordering)
         -> LL a -> [a]
mergeAll _cmp Nil = []
mergeAll _cmp (xs ::: Nil) = xs
mergeAll cmp xss = mergeAll cmp (mergePairwise cmp xss)
Now we're cooking with gas!
sortBy :: (a -> a -> Ordering) -> [a] -> [a]
sortBy cmp = mergeAll cmp . breakUp cmp
sort :: Ord a => [a] -> [a]
sort = sortBy compare
There's a bit of room to improve the above implementation by recognizing that the lists stored in the lists of lists are never empty. So we might get a small performance improvement using
data NonEmpty a = a :| [a]
data LL a = (:::) {-# UNPACK #-} !(NonEmpty a) (LL a) | Nil
I am relatively new to Haskell, but I am trying to learn both by reading and by trying to solve problems on Project Euler. I am currently trying to implement a function that takes an infinite list of integers and returns the ordered list of pairwise sums of elements in that list. I am really looking for solutions to the specific issue I am facing rather than advice on different strategies or approaches, but those are welcome as well, since being a coder isn't just about knowing how to implement a strategy but also about choosing the best strategy available.
My approach relies on traversing an infinite list of infinite generators and retrieving elements in order, with several mathematical properties that are useful in implementing my solution.
If I were trying to obtain the sequence of pairwise sums of the natural numbers, for example, this would be my code:
myList :: [Integer]
myList = [1..]

myGens :: [[Integer]]
myGens = gens myList
  where
    gens = \xs -> map (\x -> [x+y | y <- dropWhile (<x) xs]) xs
Regardless of the number set used, provided that it is sorted, the following conditions hold:
∀ i ≥ 0, head (gens xs !! i) == 2*(myList !! i)
∀ i,j,k ≥ 0, l > 0, (((gens xs) !! i) !! j) < (((gens xs) !! i+k) !! j+l)
Special cases for the second condition are:
∀ i,j ≥ 0, (((gens xs) !! i) !! j) < (((gens xs) !! i+1) !! j)
∀ i,j ≥ 0, k > 0, (((gens xs) !! i) !! j) < (((gens xs) !! i+k) !! j)
Here is the particular code I am trying to modify:
stride :: [Integer] -> [Int] -> [[Integer]] -> [Integer]
stride xs cs xss = x : stride xs counts streams
  where
    (x,i) = step xs cs xss
    counts = inc i cs
    streams = chop i xss
step :: [Integer] -> [Int] -> [[Integer]] -> (Integer,Int)
step xs cs xss = pace xs (defer cs xss)
pace :: [Integer] -> [(Integer,Int)] -> (Integer,Int)
pace hs xs@((x,i):xt) = minim (x,i) hs xt
  where
    minim :: (Integer,Int) -> [Integer] -> [(Integer,Int)] -> (Integer,Int)
    minim m _ [] = m
    minim m@(g,i) hs (y@(h,n):ynt) | g > h && 2*(hs !! n) > h = y
                                   | g > h = minim y hs ynt
                                   | 2*(hs !! n) > g = m
                                   | otherwise = minim m hs ynt
defer :: [Int] -> [[a]] -> [(a,Int)]
defer cs xss = (infer (zip cs (zip (map head xss) [0..])))
infer :: [(Int,(a,Int))] -> [(a,Int)]
infer [] = []
infer ((c,xi):xis) | c == 0 = xi:[]
                   | otherwise = xi : infer (dropWhile (\(p,(q,r)) -> p >= c) xis)
The number set I am actually using has the property that multiple distinct pairs produce the same sum. I want an efficient way of handling all duplicate elements at once, in order to avoid an increased cost of computing all the pairwise sums up to N: if M is the number of duplicates, the current approach requires M additional tests.
Does anyone have any suggestions?
EDIT:
I made some changes to the code, independently of what was suggested, and would appreciate feedback on the relative efficiencies of my original code, my revised code, and the proposals so far.
stride :: [Integer] -> [Int] -> [[Integer]] -> [Integer]
stride xs cs xss = x : stride xs counts streams
  where
    (x,is) = step xs cs xss
    counts = foldr (\i -> inc i) cs is
    streams = foldr (\i -> chop i) xss is
step :: [Integer] -> [Int] -> [[Integer]] -> (Integer,[Int])
step xs cs xss = pace xs (defer cs xss)
pace :: [Integer] -> [(Integer,Int)] -> (Integer,[Int])
pace hs xs@((x,i):xt) = minim (x,(i:[])) hs xt
  where
    minim :: (Integer,[Int]) -> [Integer] -> [(Integer,Int)] -> (Integer,[Int])
    minim m _ [] = m
    minim m@(g,is@(i:_)) hs (y@(h,n):ynt)
        | g > h && 2*(hs !! n) > h  = (h,[n])
        | g > h                     = minim (h,[n]) hs ynt
        | g == h && 2*(hs !! n) > h = (g,n:is)
        | g == h                    = minim (g,n:is) hs ynt
        | g < h && 2*(hs !! n) > g  = m
        | g < h                     = minim m hs ynt
Also, here is the code for inc and chop, which I left out above:
alter :: (a->a) -> Int -> [a] -> [a]
alter = \f -> \n -> \xs -> (take (n) xs) ++ [f (xs !! n)] ++ (drop (n+1) xs)
inc :: Int -> [Int] -> [Int]
inc = alter (1+)
chop :: Int -> [[a]] -> [[a]]
chop = alter (tail)
I'm going to present a solution that uses an infinite pairing heap. We'll have logarithmic overhead per element constructed, but no one knows how to do better (in a model with comparison-based methods and real numbers).
The first bit of code is just the standard pairing heap.
module Queue where

import Data.Maybe (fromMaybe)

data Queue k = E
             | T k [Queue k]
  deriving Show
fromOrderedList :: (Ord k) => [k] -> Queue k
fromOrderedList [] = E
fromOrderedList [k] = T k []
fromOrderedList (k1 : ks'@(k2 : _ks''))
    | k1 <= k2 = T k1 [fromOrderedList ks']
mergePairs :: (Ord k) => [Queue k] -> Queue k
mergePairs [] = E
mergePairs [q] = q
mergePairs (q1 : q2 : qs'') = merge (merge q1 q2) (mergePairs qs'')
merge :: (Ord k) => Queue k -> Queue k -> Queue k
merge (E) q2 = q2
merge q1 (E) = q1
merge q1@(T k1 q1's) q2@(T k2 q2's)
    = if k1 <= k2 then T k1 (q2 : q1's) else T k2 (q1 : q2's)
deleteMin :: (Ord k) => Queue k -> Maybe (k, Queue k)
deleteMin (E) = Nothing
deleteMin (T k q's) = Just (k, mergePairs q's)
toOrderedList :: (Ord k) => Queue k -> [k]
toOrderedList q
    = fromMaybe [] $
      do (k, q') <- deleteMin q
         return (k : toOrderedList q')
Note that fromOrderedList accepts infinite lists. I think this can be justified theoretically by pretending that the infinite list of descendants is effectively merged "just in time". This feels like the kind of thing that should be in the literature on purely functional data structures already, but I'm going to be lazy and not look right now.
The function mergeOrderedByMin takes this one step further and merges a potentially infinite list of queues, where the min element in each queue is nondecreasing. I don't think that we can reuse merge, since merge appears to be insufficiently lazy.
mergeOrderedByMin :: (Ord k) => [Queue k] -> Queue k
mergeOrderedByMin [] = E
mergeOrderedByMin (E : qs') = mergeOrderedByMin qs'
mergeOrderedByMin (T k q's : qs')
    = T k (mergeOrderedByMin qs' : q's)
The next function removes duplicates from a sorted list. It's in the library that m09 suggested, but for the sake of completeness, I'll define it here.
nubOrderedList :: (Ord k) => [k] -> [k]
nubOrderedList [] = []
nubOrderedList [k] = [k]
nubOrderedList (k1 : ks'@(k2 : _ks''))
    | k1 < k2  = k1 : nubOrderedList ks'
    | k1 == k2 = nubOrderedList ks'
Finally, we put it all together. I'll use the squares as an example.
squares :: [Integer]
squares = map (^ 2) [0 ..]
sumsOfTwoSquares :: [Integer]
sumsOfTwoSquares
    = nubOrderedList $ toOrderedList $
      mergeOrderedByMin
        [fromOrderedList (map (s +) squares) | s <- squares]
If you don't want to modify your code that much, you can use the nub function of Data.List.Ordered (installable by cabal install data-ordlist) to filter duplicates out.
It runs in linear time, i.e. complexity-wise your algorithm won't change.
For your example [1..] the result is just [2..]. A "very smart compiler" could deduce this from the general solution with implicit heap that follows.
gens xs is better expressed as
gens xs = map (\t@(x:_) -> map (x+) t) $ tails xs -- or should it be
-- map (\(x:ys) -> map (x+) ys) $ tails xs -- ?
Its resulting list of lists is easily merged without duplicates by tree-like folding [1] (pictured here), with
pairsums xs = foldi (\(x:l) r-> x : union l r) $ gens xs
This assumes the input list is ordered in increasing order. If it's merely in non-decreasing order (with only finite runs of equals in it, of course), you'll need to slap an orderedNub on top of that (as m09 mentions),
pairsums' = orderedNub . pairsums
Just by using foldi where foldr would work, we often get an algorithmic improvement in complexity from a factor of n to log n, a pretty significant speedup. I use it as a general tool all the time.
[1] The code, adjusted for infinite lists only:
foldi f (x:xs) = f x (foldi f (pairs f xs))
pairs f (x:y:t) = f x y : pairs f t
union (x:xs) (y:ys) = case compare x y of
    LT -> x : union xs (y:ys)
    EQ -> x : union xs ys
    GT -> y : union (x:xs) ys
See also:
mergesort as foldtree (by Heinrich Apfelmus)
infinite tree folding (by Dave Bayer)
Implicit Heap (by apfelmus)
I propose to build the pairs above the diagonal, that way a lot of duplicates are not even generated:
sums xs = zipWith (map . (+)) hs ts where
    (hs:ts) = tails xs
Now you have a list of lists, each containing sorted sums. Because they are sorted, it is possible to determine the next element of the sequence in a finite number of steps:
filtermerge :: (Ord a) => [[a]] -> [a]
filtermerge ((h:t):ts) = h : filtermerge (insert t ts) where
    insert [] ts = ts
    insert xs [] = [xs]
    insert h ([]:t) = insert h t
    insert (h:t) ts@((h1:t1):t2)
        | h < h1    = (h:t):ts
        | h == h1   = insert (h:t) $ insert t1 t2
        | otherwise = insert (h1:t1) $ insert (h:t) t2
filtermerge _ = []
I have been struggling with something that looks like a simple algorithm, but can't find a clean way to express it in a functional style so far. Here is an outline of the problem: suppose I have 2 arrays X and Y,
X = [| 1; 2; 2; 3; 3 |]
Y = [| 5; 4; 4; 3; 2; 2 |]
What I want is to retrieve the elements that match, and the unmatched elements, like:
matched = [| 2; 2; 3 |]
unmatched = [| 1; 3 |], [| 4; 4; 5 |]
In pseudo-code, this is how I would think of approaching the problem:
let rec match matches x y =
    let m = find first match from x in y
    if no match, (matches, x, y)
    else
        let x' = remove m from x
        let y' = remove m from y
        let matches' = add m to matches
        match matches' x' y'
The problem I run into is the "remove m from x" part - I can't find a clean way to do this (I have working code, but it's ugly as hell). Is there a nice, idiomatic functional way to approach that problem, either the removal part, or a different way to write the algorithm itself?
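For the specific "remove m from x" step, one clean option is a small helper that removes only the first occurrence; this is my own sketch, separate from the answers that follow (which avoid the issue by sorting first):

// Remove the first occurrence of `m` from a list; return the list unchanged if absent.
let removeFirst m lst =
    let rec go acc = function
        | [] -> List.rev acc                   // m not found
        | x :: rest when x = m -> List.rev acc @ rest
        | x :: rest -> go (x :: acc) rest
    go [] lst

removeFirst 2 [1; 2; 2; 3]   // [1; 2; 3]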
This could be solved easily using the right data structures, but in case you wanted to do it manually, here's how I would do it in Haskell. I don't know F# well enough to translate this, but I hope it is similar enough. So, here goes, in (semi-)literate Haskell.
overlap xs ys =
I start by sorting the two sequences to get away from the problem of having to know about previous values.
    go (sort xs) (sort ys)
  where
The two base cases for the recursion are easy enough to handle -- if either list is empty, the result includes the other list in the list of elements that are not overlapping.
    go xs [] = ([], (xs, []))
    go [] ys = ([], ([], ys))
I then inspect the first elements in each list. If they match, I can be sure that the lists overlap on that element, so I add that to the included elements, and I let the excluded elements be. I continue the search for the rest of the list by recursing on the tails of the lists.
    go (x:xs) (y:ys)
      | x == y = let (included, excluded) = go xs ys
                 in (x:included, excluded)
Then comes the interesting part! What I essentially want to know is if the first element of one of the lists does not exist in the second list – in that case I should add it to the excluded lists and then continue the search.
      | x < y = let (included, (xex, yex)) = go xs (y:ys)
                in (included, (x:xex, yex))
      | y < x = let (included, (xex, yex)) = go (x:xs) ys
                in (included, (xex, y:yex))
And this is actually it. It seems to work for at least the example you gave.
> let (matched, unmatched) = overlap x y
> matched
[2,2,3]
> unmatched
([1,3],[4,4,5])
It seems that you're describing multiset (bag) and its operations.
If you use the appropriate data structures, operations are very easy to implement:
// Assume that X, Y are initialized bags
let matches = X.IntersectWith(Y)
let x = X.Difference(Y)
let y = Y.Difference(X)
There's no built-in Bag collection in the .NET framework. You could use the Power Collections library, which includes a Bag class; the function signatures above are taken from it.
UPDATE:
You can represent a bag by a weakly ascending list. Here is an improved version of @kqr's answer in F# syntax:
let overlap xs ys =
    let rec loop (matches, ins, outs) xs ys =
        match xs, ys with
        // found a match
        | x::xs', y::ys' when x = y -> loop (x::matches, ins, outs) xs' ys'
        // `x` is smaller than every element in `ys`, put `x` into `ins`
        | x::xs', y::ys' when x < y -> loop (matches, x::ins, outs) xs' ys
        // `y` is smaller than every element in `xs`, put `y` into `outs`
        | x::xs', y::ys' -> loop (matches, ins, y::outs) xs ys'
        // copy remaining elements in `xs` to `ins`
        | x::xs', [] -> loop (matches, x::ins, outs) xs' ys
        // copy remaining elements in `ys` to `outs`
        | [], y::ys' -> loop (matches, ins, y::outs) xs ys'
        | [], [] -> (List.rev matches, List.rev ins, List.rev outs)
    loop ([], [], []) (List.sort xs) (List.sort ys)
After two calls to List.sort, which are O(n log n), finding the matches is linear in the sum of the lengths of the two lists.
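As a quick check, running the sketch above on the question's data (as lists) gives the expected split:

let matched, onlyInX, onlyInY =
    overlap [1; 2; 2; 3; 3] [5; 4; 4; 3; 2; 2]
// matched  = [2; 2; 3]
// onlyInX  = [1; 3]
// onlyInY  = [4; 4; 5]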
If you need a quick-and-dirty bag module, I would suggest a module signature like this:
type Bag<'T> = Bag of 'T list

module Bag =
    val count : 'T -> Bag<'T> -> int
    val insert : 'T -> Bag<'T> -> Bag<'T>
    val intersect : Bag<'T> -> Bag<'T> -> Bag<'T>
    val union : Bag<'T> -> Bag<'T> -> Bag<'T>
    val difference : Bag<'T> -> Bag<'T> -> Bag<'T>
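To make that signature concrete, here is a deliberately naive, list-backed sketch of a couple of the operations. It is only an illustration of the idea, not the Power Collections implementation:

type Bag<'T> = Bag of 'T list

module Bag =
    // number of occurrences of x in the bag
    let count x (Bag xs) =
        xs |> List.filter ((=) x) |> List.length

    let insert x (Bag xs) = Bag (x :: xs)

    // multiset intersection: keep each element min(countA, countB) times
    let intersect (Bag xs) (Bag ys) =
        let rec go acc xs ys =
            match xs, ys with
            | x :: xs', y :: _  when x < y -> go acc xs' ys     // x cannot be in ys
            | x :: _,  y :: ys' when y < x -> go acc xs ys'     // y cannot be in xs
            | x :: xs', _ :: ys'           -> go (x :: acc) xs' ys'
            | _ -> List.rev acc
        Bag (go [] (List.sort xs) (List.sort ys))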
Inspired by this question and answer, how do I create a generic permutations algorithm in F#? Google doesn't give any useful answers to this.
EDIT: I provide my best answer below, but I suspect that Tomas's is better (certainly shorter!)
you can also write something like this:
let rec permutations list taken =
    seq { if Set.count taken = List.length list then yield [] else
          for l in list do
              if not (Set.contains l taken) then
                  for perm in permutations list (Set.add l taken) do
                      yield l::perm }
The 'list' argument contains all the numbers that you want to permute and 'taken' is a set that contains the numbers already used. The function returns the empty list when all numbers are taken.
Otherwise, it iterates over all numbers that are still available, gets all possible permutations of the remaining numbers (recursively using 'permutations') and appends the current number to each of them before returning (l::perm).
To run this, you'll give it an empty set, because no numbers are used at the beginning:
permutations [1;2;3] Set.empty;;
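A small convenience wrapper (my own addition) hides the accumulator and shows the output order; note that because 'taken' is a Set, this particular version assumes the input elements are distinct:

let allPermutations lst = permutations lst Set.empty

allPermutations [1; 2; 3] |> Seq.toList
// [[1; 2; 3]; [1; 3; 2]; [2; 1; 3]; [2; 3; 1]; [3; 1; 2]; [3; 2; 1]]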
I like this implementation (but can't remember the source of it):
let rec insertions x = function
    | []             -> [[x]]
    | (y :: ys) as l -> (x::l) :: (List.map (fun x -> y::x) (insertions x ys))

let rec permutations = function
    | []      -> seq [ [] ]
    | x :: xs -> Seq.concat (Seq.map (insertions x) (permutations xs))
Tomas' solution is quite elegant: it's short, purely functional, and lazy. I think it may even be tail-recursive. Also, it produces permutations lexicographically. However, we can improve performance two-fold using an imperative solution internally while still exposing a functional interface externally.
The function permutations takes a generic sequence e as well as a generic comparison function f : ('a -> 'a -> int) and lazily yields immutable permutations lexicographically. The comparison function allows us to generate permutations of elements which are not necessarily comparable, as well as to easily specify reverse or custom orderings.
The inner function permute is the imperative implementation of the algorithm described here. The conversion function let comparer f = { new System.Collections.Generic.IComparer<'a> with member self.Compare(x,y) = f x y } allows us to use the System.Array.Sort overload which does in-place sub-range custom sorts using an IComparer.
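For readability, here is the conversion function mentioned in the previous sentence written out on its own (it is used as `f |> comparer` in the code below):

let comparer f =
    { new System.Collections.Generic.IComparer<'a> with
        member self.Compare(x, y) = f x y }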
let permutations f e =
    ///Advances (mutating) perm to the next lexical permutation.
    let permute (perm:'a[]) (f: 'a->'a->int) (comparer:System.Collections.Generic.IComparer<'a>) : bool =
        try
            //Find the longest "tail" that is ordered in decreasing order ((s+1)..perm.Length-1).
            //This will throw an index out of bounds exception if perm is the last permutation,
            //but will not corrupt perm.
            let rec find i =
                if (f perm.[i] perm.[i-1]) >= 0 then i-1
                else find (i-1)
            let s = find (perm.Length-1)
            let s' = perm.[s]
            //Change the number just before the tail (s') to the smallest number bigger than it in the tail (perm.[t]).
            let rec find i imin =
                if i = perm.Length then imin
                elif (f perm.[i] s') > 0 && (f perm.[i] perm.[imin]) < 0 then find (i+1) i
                else find (i+1) imin
            let t = find (s+1) (s+1)
            perm.[s] <- perm.[t]
            perm.[t] <- s'
            //Sort the tail in increasing order.
            System.Array.Sort(perm, s+1, perm.Length - s - 1, comparer)
            true
        with
        | _ -> false

    //permutation sequence expression
    let c = f |> comparer
    let freeze arr = arr |> Array.copy |> Seq.readonly
    seq { let e' = Seq.toArray e
          yield freeze e'
          while permute e' f c do
              yield freeze e' }
Now for convenience we have the following where let flip f x y = f y x:
let permutationsAsc e = permutations compare e
let permutationsDesc e = permutations (flip compare) e
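Assuming comparer and flip are defined as described above, a quick usage check (each yielded permutation is a read-only snapshot of the working array, so it is converted to a list here for display):

permutationsAsc [1; 2; 3]
|> Seq.map Seq.toList
|> Seq.toList
// [[1; 2; 3]; [1; 3; 2]; [2; 1; 3]; [2; 3; 1]; [3; 1; 2]; [3; 2; 1]]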
My latest best answer
//mini-extension to List for removing 1 element from a list
module List =
    let remove n lst = List.filter (fun x -> x <> n) lst

//Node type declared outside permutations function allows us to define a pruning filter
type Node<'a> =
    | Branch of ('a * Node<'a> seq)
    | Leaf of 'a

let permutations treefilter lst =
    //Builds a tree representing all possible permutations
    let rec nodeBuilder lst x = //x is the next element to use
        match lst with //lst is all the remaining elements to be permuted
        | [x] -> seq { yield Leaf(x) } //only x left in list -> we are at a leaf
        | h -> //anything else left -> we are at a branch, recurse
            let ilst = List.remove x lst //get new list without x, use this to build subnodes of branch
            seq { yield Branch(x, Seq.map_concat (nodeBuilder ilst) ilst) }

    //converts a tree to a list for each leafpath
    let rec pathBuilder pth n = // pth is the accumulated path, n is the current node
        match n with
        | Leaf(i) -> seq { yield List.rev (i :: pth) } //path list is constructed from root to leaf, so have to reverse it
        | Branch(i, nodes) -> Seq.map_concat (pathBuilder (i :: pth)) nodes

    let nodes =
        lst                                  //using input list
        |> Seq.map_concat (nodeBuilder lst)  //build permutations tree
        |> Seq.choose treefilter             //prune tree if necessary
        |> Seq.map_concat (pathBuilder [])   //convert to seq of path lists
    nodes
The permutations function works by constructing an n-ary tree representing all possible permutations of the list of 'things' passed in, then traversing the tree to construct a list of lists. Using 'Seq' dramatically improves performance as it makes everything lazy. (Seq.map_concat is the old name for what is now Seq.collect.)
The second parameter of the permutations function allows the caller to define a filter for 'pruning' the tree before generating the paths (see my example below, where I don't want any leading zeros).
Some example usage: Node<'a> is generic, so we can do permutations of 'anything':
let myfilter n = Some(n) //i.e., don't filter
permutations myfilter ['A';'B';'C';'D']
//in this case, I want to 'prune' leading zeros from my list before generating paths
let noLeadingZero n =
    match n with
    | Branch(0, _) -> None
    | n -> Some(n)
//Curry myself an int-list permutations function with no leading zeros
let noLZperm = permutations noLeadingZero
noLZperm [0..9]
(Special thanks to Tomas Petricek, any comments welcome)
If you need distinct permutations (when the original set has duplicates), you can use this:
let rec insertions pre c post =
    seq {
        if List.length post = 0 then
            yield pre @ [c]
        else
            if List.forall (fun x -> x <> c) post then
                yield pre @ [c] @ post
            yield! insertions (pre @ [post.Head]) c post.Tail
    }

let rec permutations l =
    seq {
        if List.length l = 1 then
            yield l
        else
            let subperms = permutations l.Tail
            for sub in subperms do
                yield! insertions [] l.Head sub
    }
This is a straight-forward translation from this C# code. I am open to suggestions for a more functional look-and-feel.
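A quick check that duplicate orderings are collapsed, unlike with a plain permutation generator:

// Each distinct ordering appears exactly once:
permutations [1; 1; 2] |> Seq.toList
// [[1; 1; 2]; [1; 2; 1]; [2; 1; 1]]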
Take a look at this one:
http://fsharpcode.blogspot.com/2010/04/permutations.html
let length = Seq.length
let take = Seq.take
let skip = Seq.skip
let (++) = Seq.append
let concat = Seq.concat
let map = Seq.map
let (|Empty|Cons|) (xs:seq<'a>) : Choice<Unit, 'a * seq<'a>> =
    if (Seq.isEmpty xs) then Empty else Cons(Seq.head xs, Seq.skip 1 xs)

let interleave x ys =
    seq { for i in [0..length ys] ->
            (take i ys) ++ seq [x] ++ (skip i ys) }

let rec permutations xs =
    match xs with
    | Empty -> seq [seq []]
    | Cons(x,xs) -> concat (map (interleave x) (permutations xs))
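Since everything here is seq-based, both the input and the results are sequences; converting to lists for display (a usage note of mine, not part of the original snippet):

permutations (seq [1; 2; 3])
|> Seq.map Seq.toList
|> Seq.toList
// [[1; 2; 3]; [2; 1; 3]; [2; 3; 1]; [1; 3; 2]; [3; 1; 2]; [3; 2; 1]]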
If you need permutations with repetitions, this is the "by the book" approach using List.indexed instead of element comparison to filter out elements while constructing a permutation.
let permutations s =
    let rec perm perms carry rem =
        match rem with
        | [] -> carry::perms
        | l ->
            let li = List.indexed l
            let permutations =
                seq { for ci in li ->
                        let (i, c) = ci
                        perm
                            perms
                            (c::carry)
                            (li |> List.filter (fun (index, _) -> i <> index) |> List.map (fun (_, char) -> char)) }
            permutations |> Seq.fold List.append []
    perm [] [] s
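A small usage check; because elements are tracked by index rather than compared, repeated input elements yield repeated orderings, which is the point of this variant:

permutations [1; 2; 3] |> List.length   // 6
permutations [1; 1; 2] |> List.length   // 6, duplicates are kept since positions are distinguished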