A Haskell function to test if an integer appears after another integer - algorithm

I'm writing a function called after which takes a list of integers and two integers as parameters. after list num1 num2 should return True if num1 occurs in the list and num2 occurs in the list after num1 (not necessarily immediately after).
after :: [Int] -> Int -> Int -> Bool
after [] _ _ = False
after [x:xs] b c
  | x == b && c `elem` xs = True
  | x /= b && b `elem` xs && b `elem` xs = True
This is what I have so far; my biggest problem is that I don't know how to force num2 to be after num1.

There are a few different ways to approach this one; while it's tempting to go straight for recursion, it's nice to avoid using recursion explicitly if there's another option.
Here's a simple version using some list utilities. Note that it's a Haskell idiom that the object we're operating over is usually the last argument. In this case switching the arguments lets us write it as a pipeline with its third argument (the list) passed implicitly:
after :: Int -> Int -> [Int] -> Bool
after a b = elem b . dropWhile (/= a)
Hopefully this is pretty easy to understand: we drop elements of the list until we hit an a; assuming we find one, we check whether there's a b in the remaining list. If there was no a, the remaining list is [] and obviously there's no b there, so it returns False as expected.
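For example, a quick check at the prompt (assuming the definition above):
> after 3 5 [1,3,2,5]
True
> after 5 3 [1,3,2,5]
False
> after 4 5 [1,2,3]
False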
You haven't specified what happens if 'a' and 'b' are equal, so I'll leave it up to you to adapt it for that case. HINT: add a tail somewhere ;)
Here are a couple of other approaches if you're interested:
This is pretty easily handled using a fold.
We have three states to model: either we're looking for the first elem, we're looking for
the second elem, or we've found them (in the right order).
data State = FindA | FindB | Found
  deriving Eq
Then we can 'fold' (aka reduce) the list down to the result of whether it matches or not.
after :: Int -> Int -> [Int] -> Bool
after a b xs = foldl go FindA xs == Found
  where
    go FindA x = if x == a then FindB else FindA
    go FindB x = if x == b then Found else FindB
    go Found _ = Found
You can also do it recursively if you like:
after :: Int -> Int -> [Int] -> Bool
after _ _ [] = False
after a b (x:xs)
  | x == a    = b `elem` xs
  | otherwise = after a b xs
Cheers!

You can split it into two parts: first, find the first occurrence of num1. After that, drop everything up to and including it and check that num2 is in the remaining part of the list.
There's a standard function elemIndex for the first part. The second one is just elem.
import Data.List (elemIndex)

after xs x y =
  case x `elemIndex` xs of
    Just i  -> y `elem` (drop (i + 1) xs)
    Nothing -> False
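A quick check at the prompt (note that the list is the first argument in this version):
> after [1,3,2,5] 3 5
True
> after [1,3,2,5] 4 5
False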

If you'd like to implement it without elem or elemIndex, you could include a subroutine. Something like:
after xs b c = go xs False
  where
    go [] _ = False   -- base case added so an empty input list is handled
    go (x:xs) bFound
      | x == b && not (null xs) = go xs True
      | bFound && x == c        = True
      | null xs                 = False
      | otherwise               = go xs bFound
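A quick sanity check (with the empty-list case included as above):
> after [1,3,2,5] 3 5
True
> after [1,5,2,3] 3 5
False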

Haskell: shared letters between two words

I just started learning Haskell. I am trying to get the list of all common letters between two words, for example, for "hello" and "llama" that would be [ 'l', 'l' ], for "happy" and "pay", [ 'a', 'p', 'y' ].
I tried using intersect but I have trouble with duplicates, "happy" and "pay" result in [ 'a', 'p', 'p', 'y' ]. I can't just remove duplicates cause they can exist, as in the first example.
I would be grateful for any suggestions. Thanks!
You can use the multiset package:
Data.MultiSet> fromList "hello" `intersection` fromList "llama"
fromOccurList [('l',2)]
Data.MultiSet> fromList "happy" `intersection` fromList "pay"
fromOccurList [('a',1),('p',1),('y',1)]
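If you want the result back as a plain list of characters, toList (from the same Data.MultiSet module in the multiset package, assuming the setup above) gives the expected form:
Data.MultiSet> toList (fromList "happy" `intersection` fromList "pay")
"apy"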
The data-ordlist package also offers this functionality:
Data.List Data.List.Ordered> sort "hello" `isect` sort "llama"
"ll"
Data.List Data.List.Ordered> sort "happy" `isect` sort "pay"
"apy"
Here's a nice technique that's worth learning. Suppose you have two sorted lists:
[1,1,5,10,15,15,18]
[2,5,8,10,15,20]
and you want to merge them together into a single sorted list. In Haskell, there's a very elegant way to write this algorithm using pattern matching and guards:
merge (x:xs) (y:ys) | x < y     = x : merge xs (y:ys)
                    | otherwise = y : merge (x:xs) ys
merge xs [] = xs
merge [] ys = ys
so that:
> merge [1,1,5,10,15,15,18] [2,5,8,10,15,20]
[1,1,2,5,5,8,10,10,15,15,15,18,20]
>
In a nutshell, when both lists are non-empty, it compares the heads of both lists and outputs the smallest head; then it uses recursion to output "the rest".
It could also have been written with the three cases (less, greater, and equal) all made explicit:
merge (x:xs) (y:ys) | x < y     = x : merge xs (y:ys)
                    | x > y     = y : merge (x:xs) ys
                    | otherwise = y : merge (x:xs) ys
merge xs [] = xs
merge [] ys = ys
and this general template can be used to implement a number of interesting algorithms on sorted lists. Here's one that removes common elements, for example:
uncommon (x:xs) (y:ys) | x < y     = x : uncommon xs (y:ys)
                       | x > y     = y : uncommon (x:xs) ys
                       | otherwise = uncommon xs ys
uncommon xs [] = xs
uncommon [] ys = ys
so that:
> uncommon [1,1,5,10,15,15,18] [2,5,8,10,15,20]
[1,1,2,8,15,18,20]
>
You might want to try modifying the uncommon function to create a diff function that outputs the result of removing the elements of the second list from the first. It will require modifying one of the first three guarded cases, and you'll also need to adjust one of the two "empty list" pattern matches:
> diff [1,1,5,10,15,15,18] [2,5,8,10,15,20]
[1,1,15,18]
>
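If you get stuck, here is one possible diff following the same template (a sketch):
diff (x:xs) (y:ys) | x < y     = x : diff xs (y:ys)
                   | x > y     = diff (x:xs) ys
                   | otherwise = diff xs ys
diff xs [] = xs
diff [] _  = []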
Once you've figured this out, you'll find it easy to create a common function that outputs the shared elements of the two sorted lists to give:
> common [1,1,5,10,15,15,18] [2,5,8,10,15,20]
[5,10,15]
>
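And one possible common, again a sketch on the same template:
common (x:xs) (y:ys) | x < y     = common xs (y:ys)
                     | x > y     = common (x:xs) ys
                     | otherwise = x : common xs ys
common _ _ = []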
Since strings are just lists of characters, this would work for your problem, too, using sort from Data.List to pre-sort the lists:
> import Data.List
> common (sort "hello") (sort "llama")
"ll"
> common (sort "happy") (sort "pay")
"apy"
>
I think this is an ideal case to use Data.Map. I would implement this as follows:
import qualified Data.Map.Lazy as M

sharedLetters :: String -> String -> String
sharedLetters s1 s2 =
  let cm = foldr (checkMap (\(x,y) -> (x,y+1))) charMap s2
             where
               checkMap f c m = if M.member c m then M.adjust f c m
                                                else M.insert c (f (0,0)) m
               charMap = foldr (checkMap (\(x,y) -> (x+1,y))) M.empty s1
  in M.foldlWithKey (\r k (v1,v2) -> r ++ replicate (minimum [v1,v2]) k) "" cm
main :: IO String
main = do
  putStr "Enter first string :"
  s1 <- getLine
  putStr "Enter second string :"
  s2 <- getLine
  return $ sharedLetters s1 s2
Enter first string :happy
Enter second string :pay
"apy"
Enter first string :pay
Enter second string :happy
"apy"
Enter first string :hello
Enter second string :llama
"ll"
Enter first string :llama
Enter second string :hello
"ll"
How about exploiting the fact that every letter shared between the words (allowing duplicates) shows up as a pair of that letter in the combined collection of both words' letters? You can find such pairs by sorting that combined list and picking out adjacent duplicates:
let find_dups ([]) = []; find_dups (x:y:xs) | x == y = x:find_dups(xs); find_dups (x:xs) = find_dups(xs)
let common_letters word1 word2 = find_dups (sort (word1 ++ word2))
> common_letters "hello" "fellows"
"ello"

Recursion confusion in Haskell again - subsets with an inclusion test

I'm testing a simple program to generate subsets with an inclusion test. For example, given
*Main Data.List> factorsets 7
[([2],2),([2,3],1),([3],1),([5],1),([7],1)]
calling chooseP 3 (factorsets 7), I would like to get (read from right to left, a la cons)
[[([5],1),([3],1),([2],2)]
,[([7],1),([3],1),([2],2)]
,[([7],1),([5],1),([2],2)]
,[([7],1),([5],1),([2,3],1)]
,[([7],1),([5],1),([3],1)]]
But my program is returning an extra [([7],1),([5],1),([3],1)] (and missing a [([7],1),([5],1),([2],2)]):
[[([5],1),([3],1),([2],2)]
,[([7],1),([3],1),([2],2)]
,[([7],1),([5],1),([3],1)]
,[([7],1),([5],1),([2,3],1)]
,[([7],1),([5],1),([3],1)]]
The inclusion test is: the first components of the members' tuples must have pairwise null intersections.
Once tested as working, the plan is to sum the internal products of each subset's snds, rather than accumulate them.
Since I've asked a similar question before, I imagine that an extra branch is generated because, when the recursion splits at [2,3], the second branch runs over the same possibilities once it passes the skipped section. Any pointers on how to resolve that would be appreciated; and if you'd like to share ideas about how to enumerate and sum such product combinations more efficiently, that would be great, too.
Haskell code:
chooseP k xs = chooseP' xs [] 0 where
  chooseP' [] product count = if count == k then [product] else []
  chooseP' yys product count
    | count == k = [product]
    | null yys   = []
    | otherwise  = f ++ g
    where
      (y:ys) = yys
      (factorsY,numY) = y
      f = let zzs = dropWhile (\(fs,ns) -> not . and . map (null . intersect fs . fst) $ product) yys
          in if null zzs
               then chooseP' [] product count
               else let (z:zs) = zzs in chooseP' zs (z:product) (count + 1)
      g = if and . map (null . intersect factorsY . fst) $ product
            then chooseP' ys product count
            else chooseP' ys [] 0
Your code is complicated enough that I might recommend starting over. Here's how I would proceed.
Write a specification. Let it be as stupidly inefficient as necessary -- for example, the spec I choose below will build all combinations of k elements from the list, then filter out the bad ones. Even the filter will be stupidly slow.
import Control.Monad (liftM2, replicateM)
import Data.List (intersect, nub, sort, tails)

sorted xs = sort xs == xs
unique xs = nub xs == xs
disjoint xs = and $ liftM2 go xs xs where
  go x1 x2 = x1 == x2 || null (intersect x1 x2)
-- check that x is valid according to all the validation functions in fs
-- (there are other fun ways to spell this, but this is particularly
-- readable and clearly correct -- just what we want from a spec)
allFuns fs x = all ($x) fs
choosePSpec k = filter good . replicateM k where
  good pairs = allFuns [unique, disjoint, sorted] (map fst pairs)
Just to make sure it's right, we can test it at the prompt:
*Main> mapM_ print $ choosePSpec 3 [([2],2),([2,3],1),([3],1),([5],1),([7],1)]
[([2],2),([3],1),([5],1)]
[([2],2),([3],1),([7],1)]
[([2],2),([5],1),([7],1)]
[([2,3],1),([5],1),([7],1)]
[([3],1),([5],1),([7],1)]
Looks good.
Now that we have a spec, we can try to improve the speed one refactoring at a time, always checking that it matches the spec. The first thing I'd want to do is notice that we can ensure uniqueness and sortedness just by sorting the input and picking things "in an increasing way". To do this, we can define a function which chooses subsequences of a given length. It piggy-backs on the tails function, which you can think of as nondeterministically choosing a place to split its input list.
subseq 0 xs = [[]]
subseq n xs = do
  x':xt <- tails xs
  xs' <- subseq (n-1) xt
  return (x':xs')
Here's an example of this function in action:
*Main> subseq 3 [1..4]
[[1,2,3],[1,2,4],[1,3,4],[2,3,4]]
Now we can write a slightly faster chooseP by replacing replicateM with subseq. Recall that we're assuming the inputs are already sorted and unique, though.
choosePSlow k = filter good . subseq k where
  good pairs = disjoint $ map fst pairs
We can sanity-check that it's working by running it on the particular input we have from above:
*Main> let i = [([2],2),([2,3],1),([3],1),([5],1),([7],1)]
*Main> choosePSlow 3 i == choosePSpec 3 i
True
Or, better yet, we can stress-test it with QuickCheck. We'll need a tiny bit more code. The condition k < 5 is just because the spec is so hopelessly slow that bigger values of k take forever.
import Test.QuickCheck

propSlowMatchesSpec :: NonNegative Int -> OrderedList ([Int], Int) -> Property
propSlowMatchesSpec (NonNegative k) (Ordered xs)
  = k < 5 && unique (map fst xs)
    ==> choosePSlow k xs == choosePSpec k xs
*Main> quickCheck propSlowMatchesSpec
+++ OK, passed 100 tests.
There are several more opportunities to make things faster. For instance, the disjoint test could be sped up using choose 2 instead of liftM2; or we might be able to ensure disjointness during element selection and prune the search even earlier; etc. How you want to improve it from here I leave to you -- but the basic technique (start with stupid and slow, then make it smarter, testing as you go) should be helpful to you.
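For instance, the "choose 2" idea might look something like the following sketch (pairs and disjoint' are illustrative names introduced here, not taken from the answer above):
import Data.List (intersect, tails)

-- Enumerate each unordered pair of elements exactly once, so we never compare
-- an element with itself and never compare the same pair twice.
pairs :: [a] -> [(a, a)]
pairs xs = [ (x, y) | (x:ys) <- tails xs, y <- ys ]

-- Disjointness checked over those pairs only.
disjoint' :: Eq a => [[a]] -> Bool
disjoint' xss = all (\(a, b) -> null (a `intersect` b)) (pairs xss)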

Removing elements in a functional style

I have been struggling with something that looks like a simple algorithm, but can't find a clean way to express it in a functional style so far. Here is an outline of the problem: suppose I have 2 arrays X and Y,
X = [| 1; 2; 2; 3; 3 |]
Y = [| 5; 4; 4; 3; 2; 2 |]
What I want is to retrieve the elements that match, and the unmatched elements, like:
matched = [| 2; 2; 3 |]
unmatched = [| 1; 3 |], [| 4; 4; 5 |]
In pseudo-code, this is how I would think of approaching the problem:
let rec match matches x y =
    let m = find first match from x in y
    if no match, (matches, x, y)
    else
        let x' = remove m from x
        let y' = remove m from y
        let matches' = add m to matches
        match matches' x' y'
The problem I run into is the "remove m from x" part - I can't find a clean way to do this (I have working code, but it's ugly as hell). Is there a nice, idiomatic functional way to approach that problem, either the removal part, or a different way to write the algorithm itself?
This could be solved easily using the right data structures, but in case you wanted to do it manually, here's how I would do it in Haskell. I don't know F# well enough to translate this, but I hope it is similar enough. So, here goes, in (semi-)literate Haskell.
overlap xs ys =
I start by sorting the two sequences to get away from the problem of having to know about previous values.
go (sort xs) (sort ys)
where
The two base cases for the recursion are easy enough to handle -- if either list is empty, the result includes the other list in the list of elements that are not overlapping.
go xs [] = ([], (xs, []))
go [] ys = ([], ([], ys))
I then inspect the first elements in each list. If they match, I can be sure that the lists overlap on that element, so I add that to the included elements, and I let the excluded elements be. I continue the search for the rest of the list by recursing on the tails of the lists.
go (x:xs) (y:ys)
  | x == y = let (included, excluded) = go xs ys
              in (x:included, excluded)
Then comes the interesting part! What I essentially want to know is if the first element of one of the lists does not exist in the second list – in that case I should add it to the excluded lists and then continue the search.
  | x < y = let (included, (xex, yex)) = go xs (y:ys)
             in (included, (x:xex, yex))
  | y < x = let (included, (xex, yex)) = go (x:xs) ys
             in (included, (xex, y:yex))
And this is actually it. It seems to work for at least the example you gave.
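Assuming the question's example arrays are first bound as Haskell lists:
> let x = [1,2,2,3,3]
> let y = [5,4,4,3,2,2]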
> let (matched, unmatched) = overlap x y
> matched
[2,2,3]
> unmatched
([1,3],[4,4,5])
It seems that you're describing a multiset (bag) and its operations.
If you use the appropriate data structures, operations are very easy to implement:
// Assume that X, Y are initialized bags
let matches = X.IntersectWith(Y)
let x = X.Difference(Y)
let y = Y.Difference(X)
There's no built-in Bag collection in the .NET Framework. You could use the Power Collections library, which includes the Bag class from which the method names above are taken.
UPDATE:
You can represent a bag by a weakly ascending list. Here is an improved version of @kqr's answer in F# syntax:
let overlap xs ys =
    let rec loop (matches, ins, outs) xs ys =
        match xs, ys with
        // found a match
        | x::xs', y::ys' when x = y -> loop (x::matches, ins, outs) xs' ys'
        // `x` is smaller than every element in `ys`, put `x` into `ins`
        | x::xs', y::ys' when x < y -> loop (matches, x::ins, outs) xs' ys
        // `y` is smaller than every element in `xs`, put `y` into `outs`
        | x::xs', y::ys' -> loop (matches, ins, y::outs) xs ys'
        // copy remaining elements in `xs` to `ins`
        | x::xs', [] -> loop (matches, x::ins, outs) xs' ys
        // copy remaining elements in `ys` to `outs`
        | [], y::ys' -> loop (matches, ins, y::outs) xs ys'
        | [], [] -> (List.rev matches, List.rev ins, List.rev outs)
    loop ([], [], []) (List.sort xs) (List.sort ys)
After the two calls to List.sort, which are probably O(n log n), finding the matches is linear in the sum of the lengths of the two lists.
If you need a quick-and-dirty bag module, I would suggest a module signature like this:
type Bag<'T> = Bag of 'T list

module Bag =
    val count : 'T -> Bag<'T> -> int
    val insert : 'T -> Bag<'T> -> Bag<'T>
    val intersect : Bag<'T> -> Bag<'T> -> Bag<'T>
    val union : Bag<'T> -> Bag<'T> -> Bag<'T>
    val difference : Bag<'T> -> Bag<'T> -> Bag<'T>

Haskell: any way to improve this code?

Hey, I have implemented this code segment as a move ordering system for an alpha-beta pruning function. It does speed up my code a little, but when I profiled my code I saw it was very clunky.
move_ord [] (primary_ord, secondary_ord) = primary_ord ++ secondary_ord
move_ord (y:ys) (primary_ord, secondary_ord) = case no_of_pebbles state y of
  0  -> move_ord ys (primary_ord, secondary_ord)
  13 -> move_ord ys (y : primary_ord, secondary_ord)
  x | 7 - y == x -> move_ord ys (y : primary_ord, secondary_ord)
    | otherwise  -> move_ord ys (primary_ord, y : secondary_ord)
It is meant to place moves with specific pebble values (13 and 7 - y == x) at the front of the list, while also filtering out illegal moves of 0 pebbles.
Pebble counts are stored as Int. y is an Int.
Thank you in advance.
Does the order in which the elements of primary_ord appear matter?
No it does not. I am ordering branches to check first for alpha-beta cutoffs. The cases I outlined have a higher probability of triggering a pruning on the next branch evaluated. Though since I have no other information they can be in any order as long as they appear in front of the other cases.
In that case, you should deliver the good ones as soon as you find them, and only defer delivering the bad ones.
If move_ord is - except in the recursive calls - only called with ([],[]) as the second argument, I'd recommend
move_ord = go []
  where
    go acc (y:ys) = case no_of_pebbles state y of
      0  -> go acc ys
      13 -> y : go acc ys
      x | x == 7-y  -> y : go acc ys
        | otherwise -> go (y:acc) ys
    go acc _ = acc
Thus a) you can run in smaller space (unless the consumer accumulates the entire result) and b) the consumer need not wait for the entire list to be traversed before it can start working.
Of course, if there are only very few or even no "good" ys, it may not make a difference, and the same is true if the consumer needs the entire list before it can do anything. But usually, that should improve matters somewhat. Otherwise, there is not much that can be done in this function; no_of_pebbles is what uses the most resources here.
If move_ord can be called with non-empty primary_ord or secondary_ord, use a wrapper
move_ord xs (primary, secondary) = primary ++ go secondary xs
  where
    go acc ... -- as above
I'm assuming that move_ord starts out being called as move_ord ys ([], []). We then have a streaming filter pattern on Either.
import Data.Either
sorter :: (a -> Bool) -> [a] -> [Either a a]
sorter p = map go where go x = if p x then Left x else Right x
then,
uncurry (++)
  . partitionEithers
  . sorter (\x -> no_of_pebbles x == 13 || 7 - x == no_of_pebbles x)
  . filter (\x -> no_of_pebbles x /= 0)
Which is still a little ugly because we keep computing no_of_pebbles in various places. This might be alright for documentation purposes, but we could also precompute no_of_pebbles.
uncurry (++)
  . partitionEithers
  . sorter (\(x, num) -> num == 13 || 7 - x == num)
  . filter ((/= 0) . snd)
  . map (\x -> (x, no_of_pebbles x))
Specialization.
Since you use literal constants, the compiler will infer the default Integer type rather than Int.
You should therefore specialize the type signature of your function, like so:
move_ord :: [Int] -> ([Int], [Int]) -> ([Int], [Int])
Memoization.
Your input list can contain duplicate elements, so two strategies are possible.
Either memoize your calls to no_of_pebbles, which will save you extra computation, or sort your input list and remove the duplicates before processing it.
Return a tuple.
You accumulate the result as a tuple, so maybe you should return it as is.
Merging its two elements inside this function seems to be out of scope; that can be managed later in your code. It's also good to know that a pair of lists accumulated like this is a common pattern, related to the dlist (difference list) data type.

Calculating permutations in F#

Inspired by this question and answer, how do I create a generic permutations algorithm in F#? Google doesn't give any useful answers to this.
EDIT: I provide my best answer below, but I suspect that Tomas's is better (certainly shorter!)
you can also write something like this:
let rec permutations list taken =
  seq { if Set.count taken = List.length list then yield [] else
        for l in list do
          if not (Set.contains l taken) then
            for perm in permutations list (Set.add l taken) do
              yield l::perm }
The 'list' argument contains all the numbers that you want to permute and 'taken' is a set that contains the numbers already used. The function returns the empty list when all numbers are taken.
Otherwise, it iterates over all numbers that are still available, gets all possible permutations of the remaining numbers (recursively using 'permutations') and prepends the current number to each of them before returning (l::perm).
To run this, you'll give it an empty set, because no numbers are used at the beginning:
permutations [1;2;3] Set.empty;;
I like this implementation (but can't remember the source of it):
let rec insertions x = function
    | []             -> [[x]]
    | (y :: ys) as l -> (x::l)::(List.map (fun x -> y::x) (insertions x ys))

let rec permutations = function
    | []      -> seq [ [] ]
    | x :: xs -> Seq.concat (Seq.map (insertions x) (permutations xs))
Tomas' solution is quite elegant: it's short, purely functional, and lazy. I think it may even be tail-recursive. Also, it produces permutations lexicographically. However, we can improve performance two-fold using an imperative solution internally while still exposing a functional interface externally.
The function permutations takes a generic sequence e as well as a generic comparison function f : ('a -> 'a -> int) and lazily yields immutable permutations lexicographically. The comparison functional allows us to generate permutations of elements which are not necessarily comparable as well as easily specify reverse or custom orderings.
The inner function permute is the imperative implementation of the algorithm described here. The conversion function let comparer f = { new System.Collections.Generic.IComparer<'a> with member self.Compare(x,y) = f x y } allows us to use the System.Array.Sort overload which does in-place sub-range custom sorts using an IComparer.
let permutations f e =
    ///Advances (mutating) perm to the next lexical permutation.
    let permute (perm:'a[]) (f: 'a->'a->int) (comparer:System.Collections.Generic.IComparer<'a>) : bool =
        try
            //Find the longest "tail" that is ordered in decreasing order ((s+1)..perm.Length-1).
            //This will throw an index-out-of-bounds exception if perm is the last permutation,
            //but will not corrupt perm.
            let rec find i =
                if (f perm.[i] perm.[i-1]) >= 0 then i-1
                else find (i-1)
            let s = find (perm.Length-1)
            let s' = perm.[s]
            //Change the number just before the tail (s') to the smallest number bigger than it in the tail (perm.[t]).
            let rec find i imin =
                if i = perm.Length then imin
                elif (f perm.[i] s') > 0 && (f perm.[i] perm.[imin]) < 0 then find (i+1) i
                else find (i+1) imin
            let t = find (s+1) (s+1)
            perm.[s] <- perm.[t]
            perm.[t] <- s'
            //Sort the tail in increasing order.
            System.Array.Sort(perm, s+1, perm.Length - s - 1, comparer)
            true
        with
        | _ -> false

    //permutation sequence expression
    let c = f |> comparer
    let freeze arr = arr |> Array.copy |> Seq.readonly
    seq { let e' = Seq.toArray e
          yield freeze e'
          while permute e' f c do
              yield freeze e' }
Now for convenience we have the following where let flip f x y = f y x:
let permutationsAsc e = permutations compare e
let permutationsDesc e = permutations (flip compare) e
My latest best answer
//mini-extension to List for removing 1 element from a list
module List =
    let remove n lst = List.filter (fun x -> x <> n) lst

//Node type declared outside permutations function allows us to define a pruning filter
type Node<'a> =
    | Branch of ('a * Node<'a> seq)
    | Leaf of 'a

let permutations treefilter lst =
    //Builds a tree representing all possible permutations
    let rec nodeBuilder lst x = //x is the next element to use
        match lst with //lst is all the remaining elements to be permuted
        | [x] -> seq { yield Leaf(x) } //only x left in list -> we are at a leaf
        | h -> //anything else left -> we are at a branch, recurse
            let ilst = List.remove x lst //get new list without i, use this to build subnodes of branch
            seq { yield Branch(x, Seq.map_concat (nodeBuilder ilst) ilst) }

    //converts a tree to a list for each leafpath
    let rec pathBuilder pth n = // pth is the accumulated path, n is the current node
        match n with
        | Leaf(i) -> seq { yield List.rev (i :: pth) } //path list is constructed from root to leaf, so have to reverse it
        | Branch(i, nodes) -> Seq.map_concat (pathBuilder (i :: pth)) nodes

    let nodes =
        lst //using input list
        |> Seq.map_concat (nodeBuilder lst) //build permutations tree
        |> Seq.choose treefilter //prune tree if necessary
        |> Seq.map_concat (pathBuilder []) //convert to seq of path lists

    nodes
The permutations function works by constructing an n-ary tree representing all possible permutations of the list of 'things' passed in, then traversing the tree to construct a list of lists. Using 'Seq' dramatically improves performance as it makes everything lazy.
The second parameter of the permutations function allows the caller to define a filter for 'pruning' the tree before generating the paths (see my example below, where I don't want any leading zeros).
Some example usage: Node<'a> is generic, so we can do permutations of 'anything':
let myfilter n = Some(n) //i.e., don't filter
permutations myfilter ['A';'B';'C';'D']
//in this case, I want to 'prune' leading zeros from my list before generating paths
let noLeadingZero n =
    match n with
    | Branch(0, _) -> None
    | n -> Some(n)
//Curry myself an int-list permutations function with no leading zeros
let noLZperm = permutations noLeadingZero
noLZperm [0..9]
(Special thanks to Tomas Petricek, any comments welcome)
If you need distinct permutations (when the original set has duplicates), you can use this:
let rec insertions pre c post =
    seq {
        if List.length post = 0 then
            yield pre @ [c]
        else
            if List.forall (fun x -> x <> c) post then
                yield pre @ [c] @ post
            yield! insertions (pre @ [post.Head]) c post.Tail
    }

let rec permutations l =
    seq {
        if List.length l = 1 then
            yield l
        else
            let subperms = permutations l.Tail
            for sub in subperms do
                yield! insertions [] l.Head sub
    }
This is a straight-forward translation from this C# code. I am open to suggestions for a more functional look-and-feel.
Take a look at this one:
http://fsharpcode.blogspot.com/2010/04/permutations.html
let length = Seq.length
let take = Seq.take
let skip = Seq.skip
let (++) = Seq.append
let concat = Seq.concat
let map = Seq.map
let (|Empty|Cons|) (xs:seq<'a>) : Choice<Unit, 'a * seq<'a>> =
    if (Seq.isEmpty xs) then Empty else Cons(Seq.head xs, Seq.skip 1 xs)

let interleave x ys =
    seq { for i in [0..length ys] ->
            (take i ys) ++ seq [x] ++ (skip i ys) }

let rec permutations xs =
    match xs with
    | Empty -> seq [seq []]
    | Cons(x,xs) -> concat (map (interleave x) (permutations xs))
If you need permutations with repetitions, this is the "by the book" approach using List.indexed instead of element comparison to filter out elements while constructing a permutation.
let permutations s =
    let rec perm perms carry rem =
        match rem with
        | [] -> carry::perms
        | l ->
            let li = List.indexed l
            let permutations =
                seq { for ci in li ->
                        let (i, c) = ci
                        (perm
                            perms
                            (c::carry)
                            (li |> List.filter (fun (index, _) -> i <> index) |> List.map (fun (_, char) -> char))) }
            permutations |> Seq.fold List.append []
    perm [] [] s
