How to implement a union-find (disjoint set) data structure in Coq?

I am quite new to Coq, but for my project I have to use a union-find data structure in Coq. Are there any implementations of the union-find (disjoint set) data structure in Coq?
If not, can someone provide an implementation or some ideas? It doesn't have to be very efficient (no need for path compression or the other fancy optimizations); I just need a data structure that can hold an arbitrary data type (or nat if that's too hard) and support two operations: union and find.
Thanks in advance

If all you need is a mathematical model, with no concern for actual performance, I would go for the most straightforward one: a functional map (finite partial function) in which each element optionally links to another element with which it has been merged.
If an element links to nothing, then its canonical representative is itself.
If an element links to another element, then its canonical representative is the canonical representative of that other element.
Note: in the remainder of this answer, as is standard with union-find, I will assume that elements are simply natural numbers. If you want another type of elements, simply have another map that binds all elements to unique numbers.
Then you would define a function find : UnionFind → nat → nat that returns the canonical representative of a given element, by following links as long as you can. Notice that the function would use recursion, whose termination argument is not trivial. To make it happen, I think that the easiest way is to maintain the invariant that a number only links to a lesser number (i.e. if i links to j, then i > j). Then the recursion terminates because, when following links, the current element is a decreasing natural number.
Defining the function union : UnionFind → nat → nat → UnionFind is easier: union m i j simply returns an updated map with max i' j' linking to min i' j', where i' = find m i and j' = find m j.
[Side note on performance: maintaining the invariant means that you cannot adequately choose which of a pair of partitions to merge into the other, based on their ranks; however you can still implement path compression if you want!]
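For concreteness, here is a minimal sketch of that model, written in Haskell only to keep it short (in Coq, find would additionally need the termination argument discussed above; all names are mine, not from any library):

import qualified Data.Map as Map

-- A union-find as a finite partial map: i maps to j only when i > j,
-- which is the invariant that makes find terminate.
type UnionFind = Map.Map Int Int

find :: UnionFind -> Int -> Int
find m i = case Map.lookup i m of
  Nothing -> i          -- no link: i is its own canonical representative
  Just j  -> find m j   -- follow the link; j < i, so this terminates

union :: UnionFind -> Int -> Int -> UnionFind
union m i j
  | i' == j'  = m       -- already in the same partition
  | otherwise = Map.insert (max i' j') (min i' j') m
  where
    i' = find m i
    j' = find m j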
As for which data structure exactly to use for the map: there are several available.
The standard library (look under the title FSets) has several implementations (FMapList, FMapPositive and so on) satisfying the interface FMapInterface.
The stdpp library has gmap.
Again if performance is not a concern, just pick the simplest encoding or, more importantly, the one that makes your proofs the simplest. I am thinking of just a list of natural numbers.
The positions of the list are the elements in reverse order.
The values of the list are offsets, i.e. the number of positions to skip forward in order to reach the target of the link.
For an element i linking to j (i > j), the offset is i − j.
For a canonical representative, the offset is zero.
With my best pseudo-ASCII-art skills, here is a map where the links are { 6↦2, 4↦2, 3↦0, 2↦1 } and the canonical representatives are { 5, 1, 0 }:
  6   5   4   3   2   1   0      element
  ↓   ↓   ↓   ↓   ↓   ↓   ↓
              /‾‾‾‾‾‾‾‾‾‾‾↘
[ 4 ; 0 ; 2 ; 3 ; 1 ; 0 ; 0 ]    map
   \       \_____↗↗\_↗
    \_____________/
The motivation is that the invariant discussed above is then enforced structurally. Hence, there is hope that find could actually be defined by structural induction (on the structure of the list), and have termination for free.
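To make the encoding concrete, here is a hedged sketch (again in Haskell, with hypothetical names) of what such a find could look like. The current element is always the head of the suffix being examined, so each recursive call is on a strict suffix of the list:

-- find over the offset-list encoding: the element number of the head
-- of a suffix xs is length xs - 1.
findOffsets :: [Int] -> Int -> Int
findOffsets m i = go (drop (length m - 1 - i) m)
  where
    go []       = error "element out of range"
    go xs@(k:_)
      | k == 0    = length xs - 1   -- offset 0: canonical representative
      | otherwise = go (drop k xs)  -- skip k positions forward (k >= 1)

On the example above, findOffsets [4,0,2,3,1,0,0] 6 follows 6 ↦ 2 ↦ 1 and returns 1.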
A related paper is: Sylvain Conchon and Jean-Christophe Filliâtre. A Persistent Union-Find Data Structure. In ACM SIGPLAN Workshop on ML, 2007.
It describes the implementation of an efficient union-find data structure in ML, that is persistent from the user perspective, but uses mutation internally. What may be more interesting for you, is that they prove it correct in Coq, which implies that they have a Coq model for union-find. However, this model reflects the memory store for the imperative program that they seek to prove correct. I’m not sure how applicable it is to your problem.

Maëlan has a good answer, but for an even simpler and more inefficient disjoint set data structure, you can just use functions to nat to represent them. This avoids any termination stickiness. In essence, the preimages of any total function form disjoint sets over the domain. Another way of looking at this is as representing any disjoint set G as the curried application find_root G : nat -> nat since find_root is the essential interface that disjoint sets provide.
This is also analogous to using functions to represent Maps in Coq like in Software Foundations. https://softwarefoundations.cis.upenn.edu/lf-current/Maps.html
Require Import Arith.
Search eq_nat_decide.
(* disjoint set *)
Definition ds := nat -> nat.
Definition init_ds : ds := fun x => x.
Definition find_root (g : ds) x := g x.
Definition in_same_set (g : ds) x y :=
  eq_nat_decide (g x) (g y).
Definition union (g : ds) x y : ds :=
  fun z =>
    if in_same_set g x z
    then find_root g y
    else find_root g z.
You can also make it generic over the type held in the disjoint set like so
Definition ds (a : Type) := a -> nat.
Definition find_root {a} (g : ds a) x := g x.
Definition in_same_set {a} (g : ds a) x y :=
  eq_nat_decide (g x) (g y).
Definition union {a} (g : ds a) x y : ds a :=
  fun z =>
    if in_same_set g x z
    then find_root g y
    else find_root g z.
To initialize the disjoint set for a particular a, you basically need an Enum instance for your type a.
Definition init_bool_ds : ds bool := fun x => if x then 0 else 1.
You may want to trade out eq_nat_decide for eqb or some other roughly equivalent thing depending on your proof style and needs.
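For comparison, the same trick transcribed to Haskell (a hedged sketch; the names mirror the Coq ones above and are not from any library):

-- A disjoint-set structure is just a function assigning each element
-- a numeric root; union redirects x's class to y's root.
type DS a = a -> Int

findRoot :: DS a -> a -> Int
findRoot g = g

inSameSet :: DS a -> a -> a -> Bool
inSameSet g x y = g x == g y

union :: DS a -> a -> a -> DS a
union g x y z = if inSameSet g x z then findRoot g y else findRoot g z

initBoolDS :: DS Bool
initBoolDS b = if b then 0 else 1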

Related

How can I subtract a multiset from a set with a given multiset?

So I'm trying to define a function apply_C :: "('a multiset ⇒ 'a option) ⇒ 'a multiset ⇒ 'a multiset"
It takes in a function C that may convert an 'a multiset into a single element of type 'a. Here we assume that the elements in the domain of C are pairwise mutually exclusive and not the empty multiset (I already have another function that checks these things). apply_C will also take another multiset inp. What I'd like the function to do is check if there is at least one element in the domain of C that is completely contained in inp. If this is the case, then perform the set difference inp - s, where s is the element in the domain of C, and add the element the (C s) into this resulting multiset. Afterwards, keep running the function until there are no more elements in the domain of C that are completely contained in the given inp multiset.
What I tried was the following:
fun apply_C :: "('a multiset ⇒ 'a option) ⇒ 'a multiset ⇒ 'a multiset" where
  "apply_C C inp = (if ∃s ∈ (domain C). s ⊆# inp then apply_C C (add_mset (the (C s)) (inp - s)) else inp)"
However, I get this error:
Variable "s" occurs on right hand side only:
⋀C inp s.
apply_C C inp =
(if ∃s∈domain C. s ⊆# inp
then apply_C C
(add_mset (the (C s)) (inp - s))
else inp)
I have been thinking about this problem for days now, and I haven't been able to find a way to implement this functionality in Isabelle. Could I please have some help?
After thinking more about it, I don't believe there is a simple solution for that in Isabelle.
Do you need that?
You have not said why you want that. Maybe you can reduce your assumptions? Do you really need a function to calculate the result?
How to express the definition?
I would use an inductive predicate that expresses one step of rewriting and prove that the solution is unique. Something along the lines of:
context
  fixes C :: ‹'a multiset ⇒ 'a option›
begin

inductive apply_CI where
  ‹apply_CI (M + M') (add_mset (the (C M)) M')›
    if ‹M ∈ dom C›

context
  assumes
    distinct: ‹⋀a b. a ∈ dom C ⟹ b ∈ dom C ⟹ a ≠ b ⟹ a ∩# b = {#}› and
    strictly_smaller: ‹⋀a b. a ∈ dom C ⟹ size a > 1›
begin

lemma apply_CI_determ:
  assumes
    ‹apply_CI⇧*⇧* M M⇩1› and
    ‹apply_CI⇧*⇧* M M⇩2› and
    ‹⋀M⇩3. ¬apply_CI M⇩1 M⇩3›
    ‹⋀M⇩3. ¬apply_CI M⇩2 M⇩3›
  shows ‹M⇩1 = M⇩2›
  sorry

lemma apply_CI_smaller:
  ‹apply_CI M M' ⟹ size M' ≤ size M›
  apply (induction rule: apply_CI.induct)
  subgoal for M M'
    using strictly_smaller[of M]
    by auto
  done

lemma wf_apply_CI:
  ‹wf {(x, y). apply_CI y x}›
  (*trivial but very annoying because not enough useful lemmas on wf*)
  sorry

end

end
I have no clue how to prove apply_CI_determ (no idea if the conditions I wrote down are sufficient or not), but I did not spend much time thinking about it.
After that you can define your definitions with:
definition apply_C where
  ‹apply_C M = (SOME M'. apply_CI⇧*⇧* M M' ∧ (∀M⇩3. ¬apply_CI M' M⇩3))›
and prove the property in your definition.
How to execute it
I don't see how to write an executable function on multisets directly. The problem you face is that one step of apply_C is nondeterministic.
If you can use lists instead of multisets, you get an order on the elements for free and you can use subseqs that gives you all possible subsets. Rewrite using the first element in subseqs that is in the domain of C. Iterate as long as there is any possible rewriting.
Link that to the inductive predicate to prove termination and that it calculates the right thing.
Remark that in general you cannot extract a list out of a multiset, but it is possible to do so in some cases (e.g., if you have a linorder over 'a).
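If it helps to see the list-based idea, here is a hedged Haskell sketch of the rewriting loop (all names hypothetical; multisets are modelled as sorted lists and C as an association list). Termination relies on the strictly_smaller assumption above, i.e. every multiset in the domain of C has size > 1, so each step strictly shrinks the input:

import Data.List (sort, (\\))

type MSet a = [a]  -- a multiset, kept as a sorted list

-- s is contained in inp (with multiplicity) iff removing inp from s leaves nothing
contained :: Ord a => MSet a -> MSet a -> Bool
contained s inp = null (s \\ inp)

-- rewrite with the first applicable rule until none applies
applyC :: Ord a => [(MSet a, a)] -> MSet a -> MSet a
applyC c inp =
  case [ (s, r) | (s, r) <- c, contained s inp ] of
    []         -> inp
    (s, r) : _ -> applyC c (sort (r : (inp \\ s)))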

Should I avoid constructing in Haskell?

While reading a snippet from Learn You a Haskell for Great Good! I found the following situation:
treeInsert :: (Ord a) => a -> Tree a -> Tree a
treeInsert x EmptyTree = singleton x
treeInsert x (Node a left right)
  | x == a = Node x left right
  | x < a  = Node a (treeInsert x left) right
  | x > a  = Node a left (treeInsert x right)
Wouldn't it be better for performance if we just reused the given Tree when x == a?
treeInsert :: (Ord a) => a -> Tree a -> Tree a
treeInsert x EmptyTree = singleton x
treeInsert x all@(Node a left right)
  | x == a = all
  | x < a  = Node a (treeInsert x left) right
  | otherwise = Node a left (treeInsert x right)
In real life coding, what should I do? Are there any drawbacks when returning the same thing?
Let's look at the core! (Without optimisations here)
$ ghc-7.8.2 -ddump-simpl wtmpf-file13495.hs
The relevant difference is that the first version (without all@(...)) has
case GHC.Classes.> @ a_aUH $dOrd_aUV eta_B2 a1_aBQ
of _ [Occ=Dead] {
  GHC.Types.False ->
    Control.Exception.Base.patError
      @ (TreeInsert.Tree a_aUH)
      "wtmpf-file13495.hs:(9,1)-(13,45)|function treeInsert"#;
  GHC.Types.True ->
    TreeInsert.Node
      @ a_aUH
      a1_aBQ
      left_aBR
      (TreeInsert.treeInsert @ a_aUH $dOrd_aUV eta_B2 right_aBS)
where reusing the node with that as-pattern does just
TreeInsert.Node
  @ a_aUI
  a1_aBR
  left_aBS
  (TreeInsert.treeInsert @ a_aUI $dOrd_aUW eta_B2 right_aBT);
This is an avoided check that may well make a significant performance difference.
However, this difference actually has nothing to do with the as-pattern. It's just because your first snippet uses an x > a guard, which is not trivial. The second uses otherwise, which is optimised away.
If you change the first snippet to
treeInsert :: (Ord a) => a -> Tree a -> Tree a
treeInsert x EmptyTree = singleton x
treeInsert x (Node a left right)
  | x == a = Node x left right
  | x < a  = Node a (treeInsert x left) right
  | otherwise = Node a left (treeInsert x right)
then the difference boils down to
GHC.Types.True -> TreeInsert.Node @ a_aUH a1_aBQ left_aBR right_aBS
vs
GHC.Types.True -> wild_Xa
Which is indeed just the difference of Node x left right vs all.
...without optimisations, that is. The versions diverge further when I turn on -O2. But I can't really make out how the performance would differ, there.
In real life coding, what should I do? Are there any drawbacks when returning the same thing?
a == b does not guarantee that f a == f b for all functions f. So, you may have to return a new object even if the two compare equal.
In other words, it may not be feasible to change Node x left right to Node a left right or all when a == x regardless of performance gains.
For example you may have types which carry metadata. When you compare them for equality, you may only care about the values and ignore the metadata. But if you replace one with the other just because they compare equal, then you will lose the metadata.
newtype ValMeta a b = ValMeta (a, b) -- value, along with metadata
  deriving (Show)

instance Eq a => Eq (ValMeta a b) where
  -- equality only compares values, ignores metadata
  ValMeta (a, b) == ValMeta (a', b') = a == a'
The point is Eq type-class only says that you may compare values for equality. It does not guarantee anything beyond that.
A real-world example where a == b does not guarantee that f a == f b is when you maintain a Set of unique values within a self-balancing tree. A self-balancing tree (such as Red-Black tree) has some guarantees about the structure of tree but the actual depth and structure depends on the order that the data were added to or removed from the set.
Now when you compare 2 sets for equality, you want to compare that values within the set are equal, not that the underlying trees have the same exact structure. But if you have a function such as depth which exposes the depth of underlying tree maintaining the set then you cannot guarantee that the depths are equal even if the sets compare equal.
Here is a video of the great Philip Wadler realizing live and on stage that many useful relations do not preserve equality (starting at 42min).
Edit: Example from ghc where a == b does not imply f a == f b:
λ> import Data.Set
λ> let a = fromList [1, 2, 3, 4, 5, 10, 9, 8, 7, 6]
λ> let b = fromList [1..10]
λ> let f = showTree
λ> a == b
True
λ> f a == f b
False
Another real-world example is the hash table. Two hash tables are equal if and only if their key-value pairs match. However, the capacity of a hash table, i.e. the number of keys you may add before having to re-allocate and rehash, depends on the order of inserts/deletes.
So if you have a function which returns the capacity of a hash table, it may return different values for hash tables a and b even though a == b.
My two cents... perhaps not even about the original question:
Instead of writing guards with x < a and x == a, I would match compare a b against LT, EQ and GT, e.g.:
treeInsert x all@(Node a left right) =
  case compare x a of
    EQ -> ...
    LT -> ...
    GT -> ...
I would do this especially if x and a can be complex data structures, since a test like x < a could be expensive.
This answer seems to be wrong; I just leave it here for reference...
With your second function you avoid creating a new node, because the compiler cannot really understand equality (== is just some function.) If you change the first version to
-- version C
treeInsert :: (Ord a) => a -> Tree a -> Tree a
treeInsert x EmptyTree = singleton x
treeInsert x (Node a left right)
| x == a = Node a left right -- Difference here! Changed x to a.
| x < a = Node a (treeInsert x left) right
| x > a = Node a left (treeInsert x right)
the compiler will probably be able to do common subexpression elimination, because the optimizer can see that the Node a left right being constructed is the same as the Node a left right it just matched on.
On the other hand, I doubt that the compiler can deduce from a == x that Node a left right is the same as Node x left right.
So, I'm pretty sure that under -O2, version B (the as-pattern one) and version C are the same, but version A is probably slower because it allocates a new node in the a == x case.
Well, if the first case had used a instead of x as follows, then there's at least the chance that GHC would eliminate the allocation of a new node through common subexpression elimination.
treeInsert x (Node a left right)
| x == a = Node a left right
However, this is all but irrelevant in any non-trivial use case, because the path down the tree to the node is going to be duplicated even when the element already exists. And this path is going to be significantly longer than a single node unless your use case is trivial.
In the world of ML, the fairly idiomatic way to avoid this is to throw a KeyAlreadyExists exception, and then catch that exception at the top-level insertion function and return the original tree. This would cause the stack to be unwound instead of allocating any of the Nodes on the heap.
A direct implementation of the ML idiom is basically a no-no in Haskell, for good reasons. If avoiding this duplication matters, the simplest and possibly best thing to do is to check if the tree contains the key before you insert it.
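A minimal sketch of that check-before-insert approach (hedged: member and insertIfAbsent are hypothetical helpers, assuming the Tree type and treeInsert from the question):

member :: Ord a => a -> Tree a -> Bool
member _ EmptyTree = False
member x (Node a left right) =
  case compare x a of
    LT -> member x left
    EQ -> True
    GT -> member x right

-- Share the original tree when the key is already present,
-- at the cost of traversing the path twice on actual inserts.
insertIfAbsent :: Ord a => a -> Tree a -> Tree a
insertIfAbsent x t = if member x t then t else treeInsert x t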
The downside of this approach, compared to a direct Haskell insert or the ML idiom, is that it involves two traversals of the path instead of one. Now, here is a non-duplicating, single-pass insert you can implement in Haskell:
treeInsert :: Ord a => a -> Tree a -> Tree a
treeInsert x original_tree = result_tree
  where
    -- Tie the knot: new_tree is the fully rebuilt tree, and the
    -- EmptyTree case below refers back to it as the overall result.
    (result_tree, new_tree) = loop x original_tree
    -- loop returns (answer to hand back, rebuilt tree along this path)
    loop x EmptyTree = (new_tree, singleton x)
    loop x (Node a left right) =
      case compare x a of
        LT -> let (res, new_left) = loop x left
              in (res, Node a new_left right)
        -- key found: hand back the original tree; the rebuilt tree is
        -- then a thunk that is never forced, hence "unreachable"
        EQ -> (original_tree, error "unreachable")
        GT -> let (res, new_right) = loop x right
              in (res, Node a left new_right)
However, older versions of GHC (roughly 7-10 years ago) don't handle this sort of recursion through lazy pairs of results very efficiently, and in my experience check-before-insert is likely to perform better. I'd be slightly surprised if this observation has really changed in the context of more recent GHC versions.
One can certainly imagine a function that directly constructs (but does not return) a new path for the tree, and decides to return the new path or the original path once it's known whether the element exists already. (The new path would immediately become garbage if it is not returned.) This conforms to the basic principles of the GHC runtime, but isn't really expressible in the source language.
Of course, any completely non-duplicating insertion function on a lazy data structure is going to have different strictness properties than a simple, duplicating insert. So no matter the implementation technique, they are different functions if laziness matters.
But of course, whether or not the path is duplicated may not matter that much. The cases where it would matter the most would be when you are using the tree persistently, because in linear use cases the old path would become garbage immediately after each insertion. And of course, this only matters when you are inserting a significant number of duplicates.

A concise way to access an element of a set in pseudocode

Let networklocs be a set of elements of the form (n, t, l), where n is a node in the network, t is a clock tick and l is the location of n at time t. How can I get (in a concise way, in pseudo-code) the element of networklocs where node and time are given?
I know I can write a function like
getElement(ni, t)
    for all (nj, t', l') in networklocs
        if nj = ni and t' = t then return (nj, t', l')
But is there a more concise way to access an element of the set networklocs in the pseudo-code?
Note that I would like to keep networklocs as a set, so solutions with maps or arrays do not fit.
Note that returning ni and t is useless because they're already known. In notation and practice, you'd want
Let M be a map: <N, T> -> L
The operation you want is just a map lookup:
l <- M <n, t>
Although the map is the most likely notation, a predicate logic expression can also be used:
Let get(n,t) be x = <n, t, l> | x \in networklocs
The loop you provided as an example is neither correct nor pseudocode. It's a concrete implementation of a map, and it doesn't say what to do when the key is not found.
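As a concrete illustration of the map formulation (a hedged sketch in Haskell; the type names are hypothetical stand-ins for node, tick, and location):

import qualified Data.Map as Map

type N = Int; type T = Int; type L = String  -- hypothetical node/tick/location types

-- The set of (n, t, l) triples, viewed functionally: a map <N, T> -> L
locationAt :: Map.Map (N, T) L -> N -> T -> Maybe L
locationAt m n t = Map.lookup (n, t) m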

Deciding inhabitation?

Consider the basic system of simple types usually known as TAλ. One can prove (as a consequence of the so-called Subject Reduction Property and the fact that any typable term is strongly β-normalising) that
If τ has an inhabitant, then it has one in β-normal form.
It follows that given an inhabitation problem Γ ⊢ X : τ we can effectively construct an algorithm that nondeterministically guesses step by step the shape of a normal solution: either (i) X is xY_1...Y_n or (ii) X is λz.Y:
(i) If for some n ≥ 0 there is a judgment x : σ_1 → ... → σ_n → τ in Γ, then nondeterministically select it, set X = xY_1...Y_n and (only if n > 0) consider the parallel problems
Γ ⊢ Y_1 : σ_1,...,Γ ⊢ Y_n : σ_n
(ii) If τ is τ_1 → τ_2, then for a fresh variable z, set X = λz.Y and consider the problem
Γ, z : τ_1 ⊢ Y : τ_2.
Furthermore, since all types in the constraints at each step of the algorithm are proper subtypes of the original input, the number of steps of the algorithm is at most polynomial in the size of τ. Therefore, the algorithm above is a decision procedure for the inhabitation problem.
My question is the following: what's wrong in the above reasoning? I've been searching all day for a decision procedure for the inhabitation problem for simple types, but all the proofs I can find are rather long and use complicated machinery (e.g. long normal forms, Curry-Howard isomorphism, etc...). There must be something that I don't see.
Sorry, I'm not used to unicode and SO doesn't support LaTeX. I also asked the same question on MO https://mathoverflow.net/questions/140045/is-there-an-easy-decision-algorithm-for-the-inhabitation-problem-for-simple-type, but the lambda calculus group doesn't seem too active there.

Can I always convert mutable-only algorithms to single-assignment and still be efficient?

The Context
The context of this question is that I want to play around with Gene Expression Programming (GEP), a form of evolutionary algorithm, using Erlang. GEP makes use of a string-based DSL called 'Karva notation'. Karva notation is easily translated into expression parse trees, but the translation algorithm assumes an implementation having mutable objects: incomplete sub-expressions are created early on in the translation process and their own sub-expressions are filled in later on with values that were not known at the time they were created.
The purpose of Karva notation is that it guarantees syntactically correct expressions are created without any expensive encoding techniques or corrections of genetic code. The problem is that with a single-assignment programming language like Erlang, I have to recreate the expression tree continually as each sub expression gets filled in. This takes an inexpensive - O(n)? - update operation and converts it into one that would complete in exponential time (unless I'm mistaken). If I can't find an efficient functional algorithm to convert K-expressions into expression trees, then one of the compelling features of GEP is lost.
The Question
I appreciate that the K-expression translation problem is pretty obscure, so what I want is advice on how to convert an inherently-non-functional algorithm (alg that exploits mutable data structures) into one that does not. How do pure functional programming languages adapt many of the algorithms and data structures that were produced in the early days of computer science that depend on mutability to get the performance characteristics they need?
Carefully designed immutability avoids unnecessary updating
Immutable data structures are only an efficiency problem if they're constantly changing, or you build them up the wrong way. For example, continually appending more to the end of a growing list is quadratic, whereas concatenating a list of lists is linear. If you think carefully, you can usually build up your structure in a sensible way, and lazy evaluation is your friend - hand out a promise to work it out and stop worrying.
Blindly trying to replicate an imperative algorithm can be inefficient, but you're mistaken in your assertion that functional programming has to be asymptotically bad here.
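As a hedged illustration of that append-versus-concatenate point (the function names are made up):

-- Quadratic: every (++) re-copies the ever-growing accumulator.
buildByAppending :: [[a]] -> [a]
buildByAppending = foldl (++) []

-- Linear: each cons cell of the result is built exactly once.
buildByConcat :: [[a]] -> [a]
buildByConcat = concat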
Case study: pure functional GEP: Karva notation in linear time
I'll stick with your case study of parsing Karva notation for GEP. (I've played with this solution more fully in this answer.)
Here's a fairly clean pure functional solution to the problem. I'll take the opportunity to name drop some good general recursion schemes along the way.
Code
(Importing Data.Tree supplies data Tree a = Node {rootLabel :: a, subForest :: Forest a} where type Forest a = [Tree a].)
import Data.Tree
import Data.Tree.Pretty -- from the pretty-tree package for visualising trees
arity :: Char -> Int
arity c
  | c `elem` "+*-/" = 2
  | c `elem` "Q"    = 1
  | otherwise       = 0
A hylomorphism is the composition of an anamorphism (build up, unfoldr) and a catamorphism (combine, foldr).
These terms were introduced to the FP community in the seminal paper Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire.
We're going to pull the levels out (ana/unfold) and combine them back together (cata/fold).
hylomorphism :: b -> (a -> b -> b) -> (c -> (a, c)) -> (c -> Bool) -> c -> b
hylomorphism base combine pullout stop seed = hylo seed where
  hylo s | stop s    = base
         | otherwise = combine new (hylo s')
    where (new, s') = pullout s
To pull out a level, we use the total arity from the previous level to find where to split off this new level, and pass on the total arity for this one ready for next time:
pullLevel :: (Int, String) -> (String, (Int, String))
pullLevel (n, cs) = (level, (total, cs')) where
  (level, cs') = splitAt n cs
  total        = sum $ map arity level
To combine a level (as a String) with the level below (that's already a Forest), we just pull off the number of trees that each character needs.
combineLevel :: String -> Forest Char -> Forest Char
combineLevel ""     []         = []
combineLevel (c:cs) levelBelow = Node c subforest : combineLevel cs theRest
  where (subforest, theRest) = splitAt (arity c) levelBelow
Now we can parse the Karva using a hylomorphism. Note that we seed it with a total arity from outside the string of 1, since there's only one node at the root level. Correspondingly we apply head to the result to get this singleton back out after the hylomorphism.
karvaToTree :: String -> Tree Char
karvaToTree cs = let
    zero (n, _) = n == 0
  in head $ hylomorphism [] combineLevel pullLevel zero (1, cs)
Linear Time
There's no exponential blowup, nor repeated O(log(n)) lookups or expensive modifications, so we shouldn't be in too much trouble.
arity is O(1)
splitAt part is O(part)
pullLevel (part, cs) is O(part): O(part) for the splitAt that grabs level, plus O(part) for map arity level
combineLevel (c:cs) is O(arity c) for the splitAt, and O(sum $ map arity cs) for the recursive call
hylomorphism [] combineLevel pullLevel zero (1,cs)
makes a pullLevel call for each level, so the total pullLevel cost is O(sum parts) = O(n)
makes a combineLevel call for each level, so the total combineLevel cost is O(sum $ map arity levels) = O(n), since the total arity of the entire input is bound by n for valid strings.
makes O(#levels) calls to zero (which is O(1)), and #levels is bound by n, so that's below O(n) too
Hence karvaToTree is linear in the length of the input.
I think that puts to rest the assertion that you needed to use mutability to get a linear algorithm here.
Demo
Let's have a draw of the results (because Tree is so full of syntax it's hard to read the output!). You have to cabal install pretty-tree to get Data.Tree.Pretty.
see :: Tree Char -> IO ()
see = putStrLn.drawVerticalTree.fmap (:"")
ghci> karvaToTree "Q/a*+b-cbabaccbac"
Node {rootLabel = 'Q', subForest = [Node {rootLabel = '/', subForest = [Node {rootLabel = 'a', subForest = []},Node {rootLabel = '*', subForest = [Node {rootLabel = '+', subForest = [Node {rootLabel = '-', subForest = [Node {rootLabel = 'b', subForest = []},Node {rootLabel = 'a', subForest = []}]},Node {rootLabel = 'c', subForest = []}]},Node {rootLabel = 'b', subForest = []}]}]}]}
ghci> see $ karvaToTree "Q/a*+b-cbabaccbac"
          Q
          |
          /
          |
       ------
      /      \
     a        *
              |
            -----
           /     \
          +       b
          |
         ----
        /    \
       -      c
       |
       --
      /  \
     b    a
which matches the output expected from the tutorial where I found the example.
There isn't a single way to do this; it really has to be attempted case-by-case. I typically try to break them down into simpler operations using fold and unfold and then optimize from there. The Karva decoding case is a breadth-first tree unfold, as others have noted, so I started with unfoldTreeM_BF. Perhaps there are similar functions in Erlang.
If the decoding operation is unreasonably expensive, you could memoize the decoding and share/reuse subtrees... though it probably wouldn't fit into a generic tree unfolder and you'd need to write a specialized function to do so. If the fitness function is slow enough, it may be fine to use a naive decoder like the one I have listed below. It will fully rebuild the tree each invocation.
import Control.Monad.State.Lazy
import Data.Tree

type MaxArity = Int
type NodeType = Char

treeify :: MaxArity -> [Char] -> Tree NodeType
treeify maxArity (x:xs) = evalState (unfoldTreeM_BF (step maxArity) x) xs
treeify _        []     = error "empty list"

step :: MaxArity -> NodeType -> State [Char] (NodeType, [NodeType])
step maxArity node = do
  xs <- get
  -- figure out the actual child node count and use it instead of maxArity
  let (children, ys) = splitAt maxArity xs
  put ys
  return (node, children)

main :: IO ()
main = do
  let x = treeify 3 "0138513580135135135"
  putStr $ drawTree . fmap (:[]) $ x
  return ()
There are a couple of solutions when mutable state in functional programming is required.
Use a different algorithm that solves the same problem. E.g. quicksort is usually formulated with in-place mutation and may therefore be less useful in a functional setting, but mergesort is generally better suited for a functional setting. I can't tell if this option is possible or makes sense in your case.
Even functional programming languages usually provide some way to mutate state. (This blog post seems to show how to do it in Erlang.) For some algorithms and data structures this is indeed the only available option (there's active research on the topic, I think); for example hash tables in functional programming languages are generally implemented with mutable state.
In your case, I'm not so sure immutability really leads to a performance bottleneck. You are right, the (sub)tree will be recreated on update, but the Erlang implementation will probably reuse all the subtrees that haven't changed, leading to O(log n) complexity per update instead of O(1) with mutable state. Also, the nodes of the trees won't be copied but instead the references to the nodes, which should be relatively efficient. You can read about tree updates in a functional setting in e.g. the thesis from Okasaki or in his book "Purely Functional Data Structures" based on the thesis. I'd try implementing the algorithm with an immutable data structure and switch to a mutable one if you have a performance problem.
Also see some relevant SO questions here and here.
I think I figured out how to solve your particular problem with the K trees (the general problem is too hard :P). My solution is presented in some horrible sort of hybrid Python-like pseudocode (I am very slow on my FP today) but it doesn't change a node after you create it (the trick is building the tree bottom-up).
First, we need to find which nodes belong to which level:
levels currsize nodes =
    this_level, rest = take currsize from nodes, what's left
    next_size = sum of the arities of the nodes in this_level
    return [this_level | levels next_size rest]

(initial currsize is 1)
So in the +/*abcd example, this should give you [+, /*, abcd]. Now you can convert this into a tree bottom-up:
curr_trees = last level
for level in reverse(levels except the last):
    next_trees = []
    for root in level:
        n = arity of root
        trees, curr_trees = take n from curr_trees, what's left
        next_trees.append( Node(root, trees) )
    curr_trees = next_trees
curr_trees should be a list with the single root node now.
I am pretty sure we can convert this into single assignment Erlang/Haskell very easily now.
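As a sanity check, here is one hedged Haskell rendering of that pseudocode (karvaLevels, wrapLevel and karvaToTree' are made-up names; arity is the same function as in the hylomorphism answer above, and the stop-at-zero case plays the role of that answer's zero test):

import Data.Tree

arity :: Char -> Int
arity c
  | c `elem` "+*-/" = 2
  | c `elem` "Q"    = 1
  | otherwise       = 0

-- Split the string into levels, threading the total arity of each level.
karvaLevels :: Int -> String -> [String]
karvaLevels _ "" = []
karvaLevels 0 _  = []   -- arity exhausted: remaining (tail) characters are ignored
karvaLevels n cs = level : karvaLevels (sum (map arity level)) rest
  where (level, rest) = splitAt n cs

-- One bottom-up pass: wrap a level of labels around the forest below it.
wrapLevel :: String -> Forest Char -> Forest Char
wrapLevel ""     _     = []
wrapLevel (c:cs) below = Node c sub : wrapLevel cs rest
  where (sub, rest) = splitAt (arity c) below

-- foldr processes the last level first, exactly like the reversed loop.
karvaToTree' :: String -> Tree Char
karvaToTree' cs = head (foldr wrapLevel [] (karvaLevels 1 cs))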
