Leftist heap: two versions of the create implementation - data-structures

Recently I have been reading the book Purely Functional Data Structures. When I came to "Exercise 3.2 Define insert directly rather than via a call to merge" for leftist trees, I implemented my own version of insert.
let rec insert x t =
  try
    match t with
    | E -> T (1, x, E, E)
    | T (_, y, left, right) ->
      match (Elem.compare x y) with
      | n when n < 0 -> makeT x left (insert y right)
      | 0 -> raise Same_elem
      | _ -> makeT y left (insert x right)
  with Same_elem -> t
To verify that it works, I tested it alongside the merge function offered by the book.
let rec merge m n = match (m, n) with
  | (h, E) -> h
  | (E, h) -> h
  | (T (_, x, a1, b1) as h1, (T (_, y, a2, b2) as h2)) ->
    if (Elem.compare x y) < 0
    then makeT x a1 (merge b1 h2)
    else makeT y a2 (merge b2 h1)
Then I found an interesting thing.
I used the list ["a";"b";"d";"g";"z";"e";"c"] as input to create a tree, and the two results are different.
With the merge method I got a tree like this:
and the insert method I implemented gives me a tree like this:
I think there are some subtle differences between the two methods, even though I followed the implementation of merge when designing the insert version. But then I tried the reversed list ["c";"e";"z";"g";"d";"b";"a"], and both methods gave me the leftist-tree-by-insert tree. That confused me so much that I don't know whether my insert method is wrong or right. So now I have two questions:
Is my insert method wrong?
Are leftist-tree-by-merge and leftist-tree-by-insert the same structure? I mean, this result gives me the illusion that they are equal in some sense.
The whole code:
module type Comparable = sig
  type t
  val compare : t -> t -> int
end

module LeftistHeap (Elem : Comparable) = struct
  exception Empty
  exception Same_elem

  type heap = E | T of int * Elem.t * heap * heap

  let rank = function
    | E -> 0
    | T (r, _, _, _) -> r

  let makeT x a b =
    if rank a >= rank b
    then T (rank b + 1, x, a, b)
    else T (rank a + 1, x, b, a)

  let rec merge m n = match (m, n) with
    | (h, E) -> h
    | (E, h) -> h
    | (T (_, x, a1, b1) as h1, (T (_, y, a2, b2) as h2)) ->
      if (Elem.compare x y) < 0
      then makeT x a1 (merge b1 h2)
      else makeT y a2 (merge b2 h1)

  let insert_merge x h = merge (T (1, x, E, E)) h

  let rec insert x t =
    try
      match t with
      | E -> T (1, x, E, E)
      | T (_, y, left, right) ->
        match (Elem.compare x y) with
        | n when n < 0 -> makeT x left (insert y right)
        | 0 -> raise Same_elem
        | _ -> makeT y left (insert x right)
    with Same_elem -> t

  let rec creat_l_heap f = function
    | [] -> E
    | h :: t -> f h (creat_l_heap f t)

  let create_merge l = creat_l_heap insert_merge l
  let create_insert l = creat_l_heap insert l
end;;
module IntLeftTree = LeftistHeap(String);;
open IntLeftTree;;
let l = ["a";"b";"d";"g";"z";"e";"c"];;
let lh = create_merge l;;
let li = create_insert l;;
let h = ["c";"e";"z";"g";"d";"b";"a"];;
let hh = create_merge h;;
let hi = create_insert h;;
Update, 16 Oct 2015:
By observing the two implementations more closely, it is easy to find that the difference lies in merging a base tree T (1, x, E, E) versus inserting an element x. I used a graph, which can express it more clearly.
So I found that my insert version always does more work and doesn't utilize the leftist tree's advantage; it always operates in the worst situation, even though the tree structure is exactly "leftist".
And if I change one small part, the two versions obtain the same result:
let rec insert x t =
  try
    match t with
    | E -> T (1, x, E, E)
    | T (_, y, left, right) ->
      match (Elem.compare x y) with
      | n when n < 0 -> makeT x E t
      | 0 -> raise Same_elem
      | _ -> makeT y left (insert x right)
  with Same_elem -> t
So for my first question: I think the answer is "not exactly". It can truly construct a leftist tree, but it always works in the bad situation.
The second question is a little meaningless (I'm not sure), but the situation is still interesting. Even though the merge version works more efficiently, when constructing a tree from a list whose insert order doesn't matter (like ["a";"b";"d";"g";"z";"e";"c"] and ["c";"e";"z";"g";"d";"b";"a"]: if the order isn't important, to me they are the same set), the merge function can't choose the better solution. (I think the tree structure for ["a";"b";"d";"g";"z";"e";"c"] is better than the one for ["c";"e";"z";"g";"d";"b";"a"].)
So now my questions are:
Is a tree structure in which each sub-right spine is empty a good structure?
If yes, can we always construct it in any input order?

A tree with each sub-right spine empty is just a list. As such, a simple list is a better structure for that job: the runtime properties will be the same as a list's, meaning insertion, for example, will take O(n) time instead of the desired O(log n) time.
For a tree you usually want a balanced tree, one where all children of a node are ideally the same size. In your code each node has a rank, and the goal would be to have the same rank on the left and right side of each node. If you don't have exactly 2^n - 1 entries in the tree this isn't possible, and you have to allow some imbalance in the tree. Usually a difference in rank of 1 or 2 is allowed. Insertion should insert the element on the side with the smaller rank, and removal has to rebalance any node that exceeds the allowed rank difference. This keeps the tree reasonably balanced, ensuring the desired runtime properties are preserved.
Check your textbook for what difference in rank is allowed in your case.
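As a concrete check (a sketch of mine, reusing the question's module with IntLeftTree opened): in a leftist tree the stored rank is exactly the length of the right spine, and it never exceeds log2(size + 1). That bound is what makes insert-via-merge O(log n), while a tree whose right spines are all empty is a list in disguise, so walking it still costs O(n).
(* Sketch: verify the logarithmic bound on the right spine.
   [rank], [E], [T] and [create_merge] come from the question's module. *)
let rec size = function
  | E -> 0
  | T (_, _, l, r) -> 1 + size l + size r

(* rank h <= log2 (size h + 1) holds for every leftist tree *)
let spine_is_logarithmic h =
  float_of_int (rank h) <= log (float_of_int (size h + 1)) /. log 2.0

let () =
  assert (spine_is_logarithmic (create_merge ["a"; "b"; "d"; "g"; "z"; "e"; "c"]))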

Related

Generate all unique directed graphs with 2 inputs to each node

I'm trying to generate all unique digraphs that fit a spec:
each node must have exactly 2 inputs
and is allowed arbitrarily many outputs to other nodes in the graph
My current solution is slow. E.g. for 6 nodes, the algorithm has taken 1.5 days to get to where I think it's complete, but it'll probably be checking for a few more days still.
My algorithm for a graph with n nodes:
generate all length-n strings of 0s where exactly one symbol is a 1, e.g. for n=3: [[0,0,1], [0,1,0], [1,0,0]]. These can be thought of as rows from an identity matrix.
generate all possible n * n matrices where each row is the sum of two rows from step 1 (step 1 + step 1).
This is the connectivity matrix, where each cell represents a connection from column-index to row-index.
So, for n=3, these are possible:
[0,1,0] + [1,0,0] = [1,1,0]
[1,0,0] + [1,0,0] = [2,0,0]
These represent the inputs to a node, and because each row is the sum of two step-1 rows, a row always represents exactly 2 inputs.
For ex:
    A  B  C
A' [[0,1,1],
B'  [0,2,0],
C'  [1,1,0]]
So B and C connect to A once each: B -> A', C -> A',
And B connects to itself twice: B => B'
I only want unique ones, so for each connectivity matrix generated, I can only keep it if it is not isomorphic to an already-seen graph.
This step is expensive. I need to convert the graph to a "canonical form" by running through each permutation of isomorphic graphs, sorting them, and considering the first one as the "canonical form".
If anyone dives into testing any of this out, here are the count of unique graphs for n nodes:
2 - 6
3 - 44
4 - 475
5 - 6874
6 - 109,934 (I think, it's not done running yet but I haven't found a new graph in >24 hrs.)
7 - I really wanna know!
Possible optimizations:
since I get to generate the graphs to test, is there a way of ruling them out, without testing, as being isomorphic to already-seen ones?
is there a faster graph-isomorphism algorithm? I think this one is related to "Nauty", and there are others I've read of in papers, but I haven't had the expertise (or bandwidth) to implement them yet.
Here's a demonstrable connectivity matrix that can be plotted at graphonline.ru for fun, showing self connections and 2 connections to the same node:
1, 0, 0, 0, 0, 1,
1, 0, 0, 0, 1, 0,
0, 1, 0, 1, 0, 0,
0, 1, 2, 0, 0, 0,
0, 0, 0, 1, 0, 1,
0, 0, 0, 0, 1, 0,
Here's the code in Haskell if you want to play with it, but I'm more concerned about getting the algorithm right (e.g. pruning down the search space) than about the implementation:
-- | generate all permutations of length n given symbols from xs
npermutations :: [a] -> Int -> [[a]]
npermutations xs size = mapM (const xs) [1..size]

identity :: Int -> [[Int]]
identity size = scanl
  (\xs _ -> take size $ 0 : xs)        -- keep shifting right
  (1 : take (size - 1) (repeat 0))     -- initial, [1,0,0,...]
  [1 .. size-1]                        -- correct size

-- | return all possible pairings of [Column]
columnPairs :: [[a]] -> [([a], [a])]
columnPairs xs = map (\x y -> (x, y)) xs <*> xs
-- | remove duplicates
rmdups :: Ord a => [a] -> [a]
rmdups = rmdups' Set.empty
  where
    rmdups' _ [] = []
    rmdups' a (b : c) = if Set.member b a
                        then rmdups' a c
                        else b : rmdups' (Set.insert b a) c
-- | all possible patterns for inputting 2 things into one node.
--   eg [0,1,1] means cells B and C project into some node
--      [0,2,0] means cell B projects twice into one node
binaryInputs :: Int -> [[Int]]
binaryInputs size = rmdups $ map    -- rmdups because [1,0]+[0,1] is same as flipped
  (\(x,y) -> zipWith (+) x y)
  (columnPairs $ identity size)
transposeAdjMat :: [[Int]] -> [[Int]]
transposeAdjMat ([]:_) = []
transposeAdjMat m = (map head m) : transposeAdjMat (map tail m)
-- | AdjMap [(name, inbounds)]
data AdjMap a = AdjMap [(a, [a])] deriving (Show, Eq)

addAdjColToMap :: Int     -- index
               -> [Int]   -- inbound
               -> AdjMap Int
               -> AdjMap Int
addAdjColToMap ix col (AdjMap xs) =
  let conns = foldl (\c (cnt, i) -> case cnt of
                       1 -> i : c
                       2 -> i : i : c
                       _ -> c)
                    []
                    (zip col [0..])
  in AdjMap ((ix, conns) : xs)

adjMatToMap :: [[Int]] -> AdjMap Int
adjMatToMap cols = foldl
  (\adjMap@(AdjMap nodes) col -> addAdjColToMap (length nodes) col adjMap)
  (AdjMap [])
  cols
-- | a graph's canonical form : http://mfukar.github.io/2015/09/30/haskellxiii.html
--   very expensive algo, of course
canon :: (Ord a, Enum a, Show a) => AdjMap a -> String
canon (AdjMap g) = minimum $ map f $ Data.List.permutations [1..(length g)]
  where
    -- Graph vertices:
    vs = map fst g
    -- Find, via brute force on all possible orderings (permutations) of vs,
    -- a mapping of vs to [1..(length g)] which is minimal.
    -- For example, map [1, 5, 6, 7] to [1, 2, 3, 4].
    -- Minimal is defined lexicographically, since `f` returns strings:
    f p = let n = zip vs p
          in show [ (snd x,
                     sort id $ map (\x -> snd $ head $ snd $ break ((==) x . fst) n)
                             $ snd $ take_edge g x)
                  | x <- sort snd n ]
    -- Sort elements of N in ascending order of (map f N):
    sort f n = foldr (\x xs -> let (lt, gt) = break ((<) (f x) . f) xs
                               in lt ++ [x] ++ gt) [] n
    -- Get the first entry from the adjacency list G that starts from the given node X
    -- (actually, the vertex is the first entry of the pair, hence `(fst x)`):
    take_edge g x = head $ dropWhile ((/=) (fst x) . fst) g
-- | all possible matrixes where each node has 2 inputs and arbitrary outs
binaryMatrixes :: Int -> [[[Int]]]
binaryMatrixes size =
  let columns = binaryInputs size
      unfiltered = mapM (const columns) [1..size]
  in fst $ foldl'
       (\(keep, seen) x -> let can = canon . adjMatToMap $ x in
          (if Set.member can seen
             then keep
             else id $! x : keep
          , Set.insert can seen))
       ([], Set.fromList [])
       unfiltered
There are a number of approaches you could try. One thing that I do note is that having loops with multi-edges (colored loops?) is a little unusual, but probably just needs a refinement of existing techniques.
Filter the output of another program
The obvious candidate here is of course nauty/Traces (http://pallini.di.uniroma1.it/) or similar (saucy, bliss, etc). Depending on how you want to do this, it could be as simple as running nauty (for example), outputting to a file, then reading in the list and filtering as you go.
For larger values of n this could start to be a problem if you are generating huge files. I'm not sure whether you start to run out of space before you run out of time, but still. What might be better is to generate and test them as you go, throwing away candidates. For your purposes, there may be an existing library for generation - I found this one but I have no idea how good it is.
Use graph invariants
A very easy first step to more efficient listing of graphs is to filter using graph invariants. An obvious one would be degree sequence (the ordered list of degrees of the graph). Others include the number of cycles, the girth, and so on. For your purposes, there might be some indegree/outdegree sequence you could use.
The basic idea is to use the invariant as a filter to avoid expensive isomorphism checks. You can store the (list of) invariants for already generated graphs, and check the new one against the list first. The canonical form of a structure is itself a kind of invariant.
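For instance, here is a rough sketch of that filtering idea (in OCaml; the matrix representation and the name canon are placeholders, not the question's Haskell code). Since every node here has in-degree 2 by construction, row sums carry no information, but the sorted out-degree sequence still varies between graphs and makes a cheap bucketing key; only graphs that collide on the key ever reach the expensive canonical-form comparison.
(* Cheap invariant: sorted out-degree sequence (column sums).
   In-degrees are all 2 by construction, so row sums would be useless. *)
let out_degree_sequence (m : int array array) : int list =
  let n = Array.length m in
  List.sort compare
    (List.init n (fun j -> Array.fold_left (fun s row -> s + row.(j)) 0 m))

(* [seen] maps an invariant to the canonical forms already recorded
   under it; [canon] stands for the expensive canonicalizer. *)
let is_new canon seen m =
  let key = out_degree_sequence m in
  let forms = Option.value (Hashtbl.find_opt seen key) ~default:[] in
  let c = canon m in
  if List.mem c forms then false
  else begin
    Hashtbl.replace seen key (c :: forms);
    true
  end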
Implement an algorithm
There are lots of GI algorithms, including the ones used by nauty and friends. However, they do tend to be quite hard! The description given in this answer is an excellent overview, but the devil is in the details, of course.
Also note that the description is for general graphs, while you have a specific subclass of graph that might be easier to generate. There may be papers out there for digraph listing (generating) but I have not checked.

How would you implement a Grid in a functional language?

I am interested in different ways of implementing a constant grid in a functional language. A perfect solution should provide traversal in pessimistic constant time per step and not use imperative constructs (laziness is ok). Solutions not quite fulfilling those requirements are still welcome.
My proposal is based on four-way linked nodes like so
A fundamental operation would be to construct a grid of given size. It seems that this operation will determine the type, i.e. which directions will be lazy (obviously this data structure cannot be achieved without laziness). So I propose (in OCaml)
type 'a grid =
  | GNil
  | GNode of 'a * 'a grid Lazy.t * 'a grid Lazy.t * 'a grid * 'a grid
With references ordered: left, up, right, down. Left and up are suspended. I then build the grid diagonal-wise
Here is a make_grid function that constructs a grid of a given size with the coordinate tuples as node values. Please note that the gl, gu, gr, gd functions allow walking on a grid in all directions and, if given GNil, will return GNil.
let make_grid w h =
  let lgnil = Lazy.from_val GNil in
  let rec build_ur x y ls dls = match ls with
    | l :: ((u :: _) as ls') ->
      if x = w && y = h then
        GNode ((x, y), l, u, GNil, GNil)
      else if x < w && 1 < y then
        let rec n = lazy (
          let ur = build_ur (x + 1) (y - 1) ls' (n :: dls) in
          let r = gd ur in
          let d = gl (gd r) in
          GNode ((x, y), l, u, r, d))
        in Lazy.force n
      else if x = w then
        let rec n = lazy (
          let d = build_dl x (y + 1) (n :: dls) [lgnil] in
          GNode ((x, y), l, u, GNil, d))
        in Lazy.force n
      else
        let rec n = lazy (
          let r = build_dl (x + 1) y (lgnil :: n :: dls) [lgnil] in
          let d = gl (gd r) in
          GNode ((x, y), l, u, r, d))
        in Lazy.force n
    | _ -> failwith "make_grid: Internal error"
  and build_dl x y us urs = match us with
    | u :: ((l :: _) as us') ->
      if x = w && y = h then
        GNode ((x, y), l, u, GNil, GNil)
      else if 1 < x && y < h then
        let rec n = lazy (
          let dl = build_dl (x - 1) (y + 1) us' (n :: urs) in
          let d = gr dl in
          let r = gu (gr d) in
          GNode ((x, y), l, u, r, d))
        in Lazy.force n
      else if y = h then
        let rec n = lazy (
          let r = build_ur (x + 1) y (n :: urs) [lgnil] in
          GNode ((x, y), l, u, r, GNil))
        in Lazy.force n
      else (* x = 1 *)
        let rec n = lazy (
          let d = build_ur x (y + 1) (lgnil :: n :: urs) [lgnil] in
          let r = gu (gr d) in
          GNode ((x, y), l, u, r, d))
        in Lazy.force n
    | _ -> failwith "make_grid: Internal error"
  in build_ur 1 1 [lgnil; lgnil] [lgnil]
It looks pretty complicated, as it has to separately handle the case when we're going up and the case when we're going down – the build_ur and build_dl auxiliary functions respectively. The build_ur function is of type
build_ur : int -> int ->
           (int * int) grid Lazy.t list ->
           (int * int) grid Lazy.t list -> (int * int) grid
It constructs a node given the current position x and y, the list ls of suspended elements of the previous diagonal, and the list urs of suspended previous elements of the current diagonal. The name ls comes from the fact that the first element of ls is the left neighbour of the current node. The urs list is needed for the construction of the next diagonal.
The build_ur function proceeds with building the next node on the up-right diagonal, passing the current node in a suspension. The left and up neighbours are taken from ls, and the right and down neighbours can be accessed through the next node on the diagonal.
Note that I put a bunch of GNils on the urs and ls lists. This is done to ensure that build_ur and build_dl can always consume at least two elements from those lists.
The build_dl function works analogously.
This implementation seems overly complicated for such a simple data structure. In fact, I'm surprised it works; I was driven by faith when writing it and am unable to completely comprehend why it works. Therefore I would like to know a simpler solution.
I was considering building the grid row-wise. This approach has fewer border cases, but I can't eliminate the need to build subsequent rows in different directions. That's because when I reach the end of a row and would like to start building the next from the beginning, I would have to somehow know the down node of the first node in the current row, which I seemingly can't know until I return from the current function call. And if I can't eliminate bi-directionality, I would need two inner node constructors: one with suspended left and top, and the other with suspended right and top.
Also, here is a gist of this implementation along with omitted functions: https://gist.github.com/mkacz91/0e63aaa2a67f8e67e56f
The data structure you are looking for, if you want a functional solution, is a zipper. I've written the rest of the code in Haskell because I find it more to my taste, but it's easily ported to OCaml. Here's a gist without the interleaved comments.
{-# LANGUAGE RecordWildCards #-}
module Grid where
import Data.Maybe
We can start by understanding the data structure for just lists: you can think of a zipper as a pointer deep inside a list. You have whatever is on the left of the element you point at, then the element you point at, and finally whatever is on the right.
type ListZipper a = ([a], a, [a])
Given a list and an integer n, you can focus on the element which is at position n. Of course, if n is greater than the length of the list, then you just fail. One important thing to notice is that the left part of the list is stored backwards: moving the focus to the left will therefore be possible in constant time, as will moving to the right.
focusListAt :: Int -> [a] -> Maybe (ListZipper a)
focusListAt = go []
  where
    go _ _ [] = Nothing
    go acc 0 (hd : tl) = Just (acc, hd, tl)
    go acc n (hd : tl) = go (hd : acc) (n - 1) tl
Let's move on to Grids now. A Grid will just be a list of rows (lists).
newtype Grid a = Grid { unGrid :: [[a]] }
A zipper for a Grid is now given by a grid representing everything above the current focus, another representing everything below it, and a list zipper (advanced level: notice that this looks a bit like nested list zippers & could be reformulated in more generic terms).
data GridZipper a =
  GridZipper { above :: Grid a
             , below :: Grid a
             , left  :: [a]
             , right :: [a]
             , focus :: a }
By focusing on the right row first, and then the right element we may focus a Grid at some coordinates x and y.
focusGridAt :: Int -> Int -> Grid a -> Maybe (GridZipper a)
focusGridAt x y g = do
  (before, line , after) <- focusListAt x $ unGrid g
  (left  , focus, right) <- focusListAt y line
  let above = Grid before
  let below = Grid after
  return GridZipper{..}
Once we have a zipper, we can move around easily. The code for going either left or right is, unsurprisingly, rather similar:
goLeft :: GridZipper a -> Maybe (GridZipper a)
goLeft g@GridZipper{..} =
  case left of
    []      -> Nothing
    (hd:tl) -> Just $ g { focus = hd, left = tl, right = focus : right }

goRight :: GridZipper a -> Maybe (GridZipper a)
goRight g@GridZipper{..} =
  case right of
    []      -> Nothing
    (hd:tl) -> Just $ g { focus = hd, left = focus : left, right = tl }
When going up or down, we have to be a bit careful because we need to focus on the spot right above (or below) the one we left in the new row. We also have to reassemble the previous row we were focused onto into a good old list (by appending the reversed left to focus : right).
goUp :: GridZipper a -> Maybe (GridZipper a)
goUp GridZipper{..} = do
  let (line : above') = unGrid above
  let below' = (reverse left ++ focus : right) : unGrid below
  (left', focus', right') <- focusListAt (length left) line
  return $ GridZipper { above = Grid above'
                      , below = Grid below'
                      , left  = left'
                      , right = right'
                      , focus = focus' }

goDown :: GridZipper a -> Maybe (GridZipper a)
goDown GridZipper{..} = do
  let (line : below') = unGrid below
  let above' = (reverse left ++ focus : right) : unGrid above
  (left', focus', right') <- focusListAt (length left) line
  return $ GridZipper { above = Grid above'
                      , below = Grid below'
                      , left  = left'
                      , right = right'
                      , focus = focus' }
Finally, I've also added a couple of helper functions to generate grids (with every cell containing a pair of its coordinates) and instances to be able to display grids and zippers in a terminal.
mkGrid :: Int -> Int -> Grid (Int, Int)
mkGrid m n = Grid $ [ zip (repeat i) [0..n-1] | i <- [0..m-1] ]
instance Show a => Show (Grid a) where
  show = concatMap (('\n' :) . concatMap show) . unGrid

instance Show a => Show (GridZipper a) where
  show GridZipper{..} =
    concat [ show above, "\n"
           , concatMap show (reverse left)
           , "\x1B[33m[\x1B[0m", show focus, "\x1B[33m]\x1B[0m"
           , concatMap show right
           , show below ]
main creates a small grid of size 5*10, focuses on the element at coordinates (2,3) and moves around a bit.
main :: IO ()
main = do
  let grid1 = mkGrid 5 10
  print grid1
  let grid2 = fromJust $ focusGridAt 2 3 grid1
  print grid2
  print $ goLeft =<< goLeft =<< goDown =<< goDown grid2
A simple solution for implementing infinite grids consists in using a hash table indexed by the coordinate pairs.
The following is a sample implementation that doesn't check for integer overflow:
type 'a cell = {
  x: int;      (* position on the horizontal axis *)
  y: int;      (* position on the vertical axis *)
  value: 'a;
}

type 'a grid = {
  cells: (int * int, 'a cell) Hashtbl.t;
  init_cell: int -> int -> 'a;
}

let create_grid init_cell = {
  cells = Hashtbl.create 10;
  init_cell;
}

let hashtbl_get tbl k =
  try Some (Hashtbl.find tbl k)
  with Not_found -> None

(* Check if we have a cell at the given relative position *)
let peek grid cell x_offset y_offset =
  hashtbl_get grid.cells (cell.x + x_offset, cell.y + y_offset)

(* Get the cell at the given relative position *)
let get grid cell x_offset y_offset =
  let x = cell.x + x_offset in
  let y = cell.y + y_offset in
  let k = (x, y) in
  match hashtbl_get grid.cells k with
  | Some c -> c
  | None ->
    let new_cell = {
      x; y;
      value = grid.init_cell x y
    } in
    Hashtbl.add grid.cells k new_cell;
    new_cell

let left grid cell = get grid cell (-1) 0
let right grid cell = get grid cell 1 0
let down grid cell = get grid cell 0 (-1)
(* etc. *)
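To make the interface concrete, here is a small usage sketch (my addition, using the definitions above; the dummy record is just a way to address the origin, since no dedicated accessor is exposed). Cells spring into existence as they are visited:
(* Walk an infinite grid whose cells are initialized with their coordinates. *)
let () =
  let grid = create_grid (fun x y -> (x, y)) in
  let origin = get grid { x = 0; y = 0; value = (0, 0) } 0 0 in
  let two_right = right grid (right grid origin) in
  assert (two_right.value = (2, 0))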

Can I insert data unsorted in Red-black tree?

While I'm still struggling to find a solution for this question, I have another one which may be easier. The following is the insert function of Okasaki's red-black tree implementation. What I want to do is keep the data unsorted as I insert it into the tree, so the data always goes to the leftmost/bottom-most leaf every time I insert. There is no need to compare for x < y, x > y or x == y. It seems pretty straightforward at first: just remove these guards and only do ins s@(T color a y b) = balance color (ins a) y b. The behaviour seems to be that the tree is kept balanced, but the coloring becomes a bit messed up, and eventually that affects future inserts. Any idea how this can be achieved? I think this could possibly be the first step towards my previous question. I just started playing with Haskell, so I am not getting it right straight away. Thanks a lot.
insertSet x s = T B a y b
  where ins E = T R E x E
        ins s@(T color a y b) =
          if x < y then balance color (ins a) y b
          else if x > y then balance color a y (ins b)
          else s
        T _ a y b = ins s
['d','a','s','f']        s
                        / \
                       a   f
                      /
                     d          (unsorted tree)
You can use my RBTree implementation in Haskell:
http://hackage.haskell.org/package/RBTree
Using the insert function:
insert :: (a -> a -> Ordering) -> RBTree a -> a -> RBTree a
feed it a (\_ _ -> LT) function, and you can always put the new element into the left-most place.
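The package above is a Haskell library, but the trick is easy to replay on any Okasaki-style implementation. Here is a rough OCaml transcription (my sketch, not the RBTree package): insert takes the comparator as a parameter, and a comparator that always answers "less than" sends every new element to the leftmost leaf, while balance keeps the color invariants intact; the rebalancing argument never relies on the ordering of keys.
(* Okasaki-style red-black insert, parameterized by a comparator. *)
type color = R | B
type 'a tree = E | T of color * 'a tree * 'a * 'a tree

(* The standard four-case rotation; note it never looks at keys. *)
let balance = function
  | B, T (R, T (R, a, x, b), y, c), z, d
  | B, T (R, a, x, T (R, b, y, c)), z, d
  | B, a, x, T (R, T (R, b, y, c), z, d)
  | B, a, x, T (R, b, y, T (R, c, z, d)) ->
      T (R, T (B, a, x, b), y, T (B, c, z, d))
  | color, a, x, b -> T (color, a, x, b)

let insert cmp x t =
  let rec ins = function
    | E -> T (R, E, x, E)
    | T (color, a, y, b) as s ->
        let c = cmp x y in
        if c < 0 then balance (color, ins a, y, b)
        else if c > 0 then balance (color, a, y, ins b)
        else s
  in
  match ins t with
  | T (_, a, y, b) -> T (B, a, y, b)
  | E -> assert false

(* "Unsorted" insertion: the constant comparator always goes left. *)
let push_leftmost x t = insert (fun _ _ -> -1) x t
let unsorted = List.fold_left (fun t c -> push_leftmost c t) E ['d'; 'a'; 's'; 'f']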

Data Structure Differentiation, Intuition Building

According to this paper, differentiation works on data structures.
According to this answer:
Differentiation, the derivative of a data type D (given as D′), is the type of D-structures with a single "hole", that is, a distinguished location not containing any data. It amazingly satisfies the same rules as differentiation in calculus.
The rules are:
1′ = 0
X′ = 1
(F + G)′ = F′ + G′
(F • G)′ = F • G′ + F′ • G
(F ◦ G)′ = (F′ ◦ G) • G′
The referenced paper is a bit too complex for me to get an intuition from.
What does this mean in practice? A concrete example would be fantastic.
What's a one hole context for an X in an X? There's no choice: it's (-), representable by the unit type.
What's a one hole context for an X in an X*X? It's something like (-,x2) or (x1,-), so it's representable by X+X (or 2*X, if you like).
What's a one hole context for an X in an X*X*X? It's something like (-,x2,x3) or (x1,-,x3) or (x1,x2,-), representable by X*X + X*X + X*X (or 3*X^2, if you like).
More generally, an F*G with a hole is either an F with a hole and a G intact, or an F intact and a G with a hole.
Recursive datatypes are often defined as fixpoints of polynomials.
data Tree = Leaf | Node Tree Tree
is really saying Tree = 1 + Tree*Tree. Differentiating the polynomial tells you the contexts for immediate subtrees: no subtrees in a Leaf; in a Node, it's either hole on the left, tree on the right, or tree on the left, hole on the right.
data Tree' = NodeLeft () Tree | NodeRight Tree ()
That's the polynomial differentiated and rendered as a type. A context for a subtree in a tree is thus a list of those Tree' steps.
type TreeCtxt = [Tree']
type TreeZipper = (Tree, TreeCtxt)
Here, for example, is a function (untried code) which searches a tree for subtrees passing a given test.
search :: (Tree -> Bool) -> Tree -> [TreeZipper]
search p t = go (t, []) where
  go :: TreeZipper -> [TreeZipper]
  go z = here z ++ below z

  here :: TreeZipper -> [TreeZipper]
  here z@(t, _) | p t       = [z]
                | otherwise = []

  below (Leaf, _) = []
  below (Node l r, cs) = go (l, NodeLeft () r : cs) ++ go (r, NodeRight l () : cs)
The role of below is to generate the inhabitants of Tree' relevant to the given Tree.
Differentiation of datatypes is a good way to make programs like search generic.
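To round out the picture, here is the inverse operation as a small OCaml sketch (my rendition of the Haskell types above, not from the answer): plugging the focused subtree back through the recorded context steps rebuilds the original tree, which is exactly the sense in which a TreeZipper encodes a location.
(* OCaml mirror of Tree, Tree' and TreeCtxt from above. *)
type tree = Leaf | Node of tree * tree
type tree' = NodeLeft of unit * tree | NodeRight of tree * unit
type tree_ctxt = tree' list

(* Rebuild the whole tree from a subtree and its context,
   undoing one derivative step per list element. *)
let rec plug (t : tree) : tree_ctxt -> tree = function
  | [] -> t
  | NodeLeft ((), r) :: cs -> plug (Node (t, r)) cs
  | NodeRight (l, ()) :: cs -> plug (Node (l, t)) cs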
My interpretation is that the derivative (zipper) of T is the type of all instances that resemble the "shape" of T, but with exactly one element replaced by a "hole".
For instance, a list is

List t = 1        []
       + t        [a]
       + t^2      [a,b]
       + t^3      [a,b,c]
       + t^4      [a,b,c,d]
       + ...      [a,b,c,d,...]

if we replace any of those 'a', 'b', 'c' etc. by a hole (represented as #), we'll get

List' t = 0       empty list doesn't have a hole
        + 1       [#]
        + 2*t     [#,b] or [a,#]
        + 3*t^2   [#,b,c] or [a,#,c] or [a,b,#]
        + 4*t^3   [#,b,c,d] or [a,#,c,d] or [a,b,#,d] or [a,b,c,#]
        + ...
Another example, a binary tree is
data Tree t = TEmpty | TNode t (Tree t) (Tree t)
-- Tree t = 1 + t (Tree t)^2
so adding a hole generates the type:
{-
Tree' t = 0                       empty tree doesn't have a hole
        + (Tree t)^2              the root is the hole, followed by 2 normal trees
        + t*(Tree' t)*(Tree t)    the left tree has a hole, the right is normal
        + t*(Tree t)*(Tree' t)    the left tree is normal, the right has a hole

       #      or      x       or      x
      / \            / \             / \
     a   b         #?   b           a   #?
    /\   /\        /\   /\         /\    /\
   c d  e f      #? #? e f        c d  #? #?
-}
data Tree' t = THit (Tree t) (Tree t)
             | TLeft t (Tree' t) (Tree t)
             | TRight t (Tree t) (Tree' t)
A third example which illustrates the chain rule is the rose tree (variadic tree):
data Rose t = RNode t [Rose t]
-- R t = t*List(R t)
the derivative says R' t = List(R t) + t * List'(R t) * R' t, which means
{-
R' t = List (R t)      the root is the hole
     + t               we have a normal root node,
     * List' (R t)     and a list that has a hole,
     * R' t            and we put a holed rose tree at the list's hole

            x
            |
  [a,b,c,...,p,#?,r,...]
                |
            [#?,...]
-}
data Rose' t = RHit [Rose t] | RChild t (List' (Rose t)) (Rose' t)
Note that data List' t = LHit [t] | LTail t (List' t).
(These may be different from the conventional types where a zipper is a list of "directions", but they are isomorphic.)
The derivative is a systematic way to encode a location in a structure; e.g., we can search with it (not quite optimized):
locateL :: (t -> Bool) -> [t] -> Maybe (t, List' t)
locateL _ [] = Nothing
locateL f (x:xs) | f x       = Just (x, LHit xs)
                 | otherwise = do
                     (el, ctx) <- locateL f xs
                     return (el, LTail x ctx)

locateR :: (t -> Bool) -> Rose t -> Maybe (t, Rose' t)
locateR f (RNode a child)
  | f a       = Just (a, RHit child)
  | otherwise = do
      (whichChild, listCtx) <- locateL (isJust . locateR f) child
      (el, ctx) <- locateR f whichChild
      return (el, RChild a listCtx ctx)
and mutate (plug in the hole) using the context info:
updateL :: t -> List' t -> [t]
updateL x (LHit xs) = x : xs
updateL x (LTail a ctx) = a : updateL x ctx

updateR :: t -> Rose' t -> Rose t
updateR x (RHit child) = RNode x child
updateR x (RChild a listCtx ctx) = RNode a (updateL (updateR x ctx) listCtx)

Weight-Biased Leftist Heaps: advantages of top-down version of merge?

I am self-studying Okasaki's Purely Functional Data Structures and am now on exercise 3.4, which asks the reader to reason about and implement a weight-biased leftist heap. This is my basic implementation:
(* 3.4 (b) *)
functor WeightBiasedLeftistHeap (Element : Ordered) : Heap =
struct
  structure Elem = Element

  datatype Heap = E | T of int * Elem.T * Heap * Heap

  fun size E = 0
    | size (T (s, _, _, _)) = s

  fun makeT (x, a, b) =
    let
      val sizet = size a + size b + 1
    in
      if size a >= size b then T (sizet, x, a, b)
      else T (sizet, x, b, a)
    end

  val empty = E
  fun isEmpty E = true | isEmpty _ = false

  fun merge (h, E) = h
    | merge (E, h) = h
    | merge (h1 as T (_, x, a1, b1), h2 as T (_, y, a2, b2)) =
        if Elem.leq (x, y) then makeT (x, a1, merge (b1, h2))
        else makeT (y, a2, merge (h1, b2))

  fun insert (x, h) = merge (T (1, x, E, E), h)

  fun findMin E = raise Empty
    | findMin (T (_, x, a, b)) = x

  fun deleteMin E = raise Empty
    | deleteMin (T (_, x, a, b)) = merge (a, b)
end
Now, in 3.4 (c) & (d), it asks:
Currently, merge operates in two passes: a top-down pass consisting of calls to merge, and a bottom-up pass consisting of calls to the helper function, makeT. Modify merge to operate in a single, top-down pass. What advantages would the top-down version of merge have in a lazy environment? In a concurrent environment?
I changed the merge function by simply inlining makeT, but I fail to see any advantages, so I think I haven't grasped the spirit of these parts of the exercise. What am I missing?
fun merge (h, E) = h
  | merge (E, h) = h
  | merge (h1 as T (s1, x, a1, b1), h2 as T (s2, y, a2, b2)) =
      let
        val st = s1 + s2
        val (v, a, b) =
          if Elem.leq (x, y) then (x, a1, merge (b1, h2))
          else (y, a2, merge (h1, b2))
      in
        if size a >= size b then T (st, v, a, b)
        else T (st, v, b, a)
      end
I think I've figured out one point with regards to lazy evaluation. If I don't use the recursive merge to calculate the size, then the recursive call won't need to be evaluated until the child is needed:
fun merge (h, E) = h
  | merge (E, h) = h
  | merge (h1 as T (s1, x, a1, b1), h2 as T (s2, y, a2, b2)) =
      let
        val st = s1 + s2
        val (v, ma, mb1, mb2) =
          if Elem.leq (x, y) then (x, a1, b1, h2)
          else (y, a2, h1, b2)
      in
        if size ma >= size mb1 + size mb2
        then T (st, v, ma, merge (mb1, mb2))
        else T (st, v, merge (mb1, mb2), ma)
      end
Is that all? I am not sure about concurrency though.
I think you've essentially got it as far as the lazy evaluation goes -- it's not very helpful to use lazy evaluation if you are going to have to end up traversing the whole data structure to find out anything every time you do a merge...
As to the concurrency, I expect the issue is that if, while one thread is evaluating the merge, another comes along and wants to look something up, it will not be able to get anything useful done at least until the first thread completes the merge. (And it might even take longer than that.)
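One way to see the lazy advantage concretely: if the single pass decides each node's orientation from sizes that are known up front, the recursive merge itself can be suspended. Here is a rough OCaml sketch (my own illustration, not Okasaki's code) that caches the left child's size in every node so no child has to be forced to pick the heavier side:
type 'a heap =
  | E
  | T of int * int * 'a * 'a heap Lazy.t * 'a heap Lazy.t
  (* T (size, size of left child, element, left, right) *)

let size = function E -> 0 | T (s, _, _, _, _) -> s

let rec merge h1 h2 = match h1, h2 with
  | h, E | E, h -> h
  | T (s1, l1, x, a1, b1), T (s2, l2, y, a2, b2) ->
    let s = s1 + s2 in
    if x <= y then                       (* polymorphic compare, for brevity *)
      (* the size of (merge b1 h2) is known without performing it *)
      let sm = s1 - 1 - l1 + s2 in
      let m = lazy (merge (Lazy.force b1) h2) in
      if l1 >= sm then T (s, l1, x, a1, m) else T (s, sm, x, m, a1)
    else
      let sm = s1 + s2 - 1 - l2 in
      let m = lazy (merge h1 (Lazy.force b2)) in
      if l2 >= sm then T (s, l2, y, a2, m) else T (s, sm, y, m, a2)

let insert x h = merge (T (1, 0, x, Lazy.from_val E, Lazy.from_val E)) h
Forcing the root of a merge now does O(1) work; the suspended merges underneath are paid for only when a consumer actually descends, which is also what gives another thread a chance to look at the root before the whole merge has completed.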
It doesn't bring any benefit to the WMERGE-3-4C function in a lazy environment. It still does all the work that the original down-up merge did; I'm pretty sure it would not be any easier for the language system to memoize.
There is no benefit to WMERGE-3-4C in a concurrent environment either. Each call to WMERGE-3-4C does all its work before passing the buck to another instance of WMERGE-3-4C. In fact, if we eliminated the recursion by hand, WMERGE-3-4C could be implemented as a single loop that does all the work while accumulating a stack, then a second loop that does the REDUCE work on the stack. The first loop would not be naturally parallelizable, though perhaps the REDUCE could operate by calling the function on pairs, in parallel, until only one element remained in the list.
