As I've figured out, variables in Haskell are immutable (so they are not really "variables").
If that is the case, when we have a big, complex data structure like a red-black tree, how are we supposed to implement operations that actually change it?
Create a copy of the tree each time an element is inserted or deleted?
Yes: the solution is to return a new data structure that represents the modified value. However, there is no need to copy the entire structure in your example (a red-black tree), since you can just copy the nodes on the path from the root to the inserted node; everything else is shared. This keeps the insert operation at the same asymptotic complexity as the imperative version.
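For instance, here is a minimal sketch (in Haskell, ignoring the red-black rebalancing, which doesn't change the sharing pattern) of inserting into a binary search tree; only the nodes on the root-to-leaf path are rebuilt, and every untouched subtree is shared with the old tree:

data Tree = Empty | Branch Tree Integer Tree

insert :: Integer -> Tree -> Tree
insert x Empty = Branch Empty x Empty
insert x t@(Branch l v r)
  | x < v     = Branch (insert x l) v r   -- rebuild this node; share the right subtree
  | x > v     = Branch l v (insert x r)   -- rebuild this node; share the left subtree
  | otherwise = t                         -- value already present; share the whole tree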
Chris Okasaki's Purely Functional Data Structures contains a number of implementations of immutable data structures; the book is a modified version of his PhD thesis, which you can find here.
Your intuition about copying is exactly the right direction to be thinking in. However, the 'copying' is implemented intelligently by the Haskell runtime. For example, given
data Tree = Empty | Branch Tree Integer Tree
You could implement a function to replace the left branch:
replaceLeft :: Tree -> Tree -> Tree
replaceLeft (Branch _ i r) newLeft = Branch newLeft i r
This doesn't create an entire copy of the tree, because that isn't necessary: the value i and the trees r and newLeft are unchanged, so their contents don't have to be copied into fresh memory.
The new Branch that is created (the only real allocation in this example: the structure holding the left/right trees and the Integer) still references the exact same values as the old branch; no copying needed!
Since the data structures are immutable, there's no harm in doing this.
Edited after Alex Taggart's remark below.
I am using a zipper to easily traverse and edit a tree which can grow to many thousands of nodes. Each node is incomplete when it is first created. Data is going to be added/removed all the time in random positions, leaf nodes are going to be replaced by branches, etc.
The tree can be very unbalanced.
Fast random access to a node is also important.
One implementation would be to traverse the tree using a zipper and create a hash table of the nodes indexed by key. Needless to say, this would be very inefficient, as:
two copies of each node need to be created
any changes need to be consistently mirrored between the two data structures (tree and hash map).
In short, is there a time/space-efficient way to combine the ease of traversal/updating with a zipper and the fast access of a hash table in Clojure?
Clojure's data structures are persistent and use structural sharing. This means that operations like adding, removing or accumulating are not as inefficient as you describe. The memory cost will be minimal since you are not duplicating what's already there.
By default, Clojure's data structures are immutable, so the nodes in your tree-like structure will not update themselves unless you use some sort of reference type (like a Var). I don't know enough about your specific use case to advise on the best way to access nodes. One way to access nodes in a nested structure is the get-in function, where you supply the path to the node whose value you want.
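The question is about Clojure (where clojure.zip provides zippers), but the zipper idea itself is language-agnostic. As a hedged illustration, here is a minimal binary-tree zipper sketch in Haskell, the language used earlier in this thread, with partial pattern matches left unhandled for brevity:

data Tree a = Leaf | Node (Tree a) a (Tree a)

-- A breadcrumb records what we passed on the way down, so the tree
-- can be rebuilt (with any edits) on the way back up.
data Crumb a = LeftCrumb a (Tree a)   -- we went left; remember the value and right sibling
             | RightCrumb a (Tree a)  -- we went right; remember the value and left sibling

type Zipper a = (Tree a, [Crumb a])   -- focused subtree plus the path back to the root

goLeft, goRight, goUp :: Zipper a -> Zipper a
goLeft  (Node l v r, bs) = (l, LeftCrumb v r : bs)
goRight (Node l v r, bs) = (r, RightCrumb v l : bs)
goUp (t, LeftCrumb v r : bs)  = (Node t v r, bs)
goUp (t, RightCrumb v l : bs) = (Node l v t, bs)

-- Edit the focused node; only the path back up is rebuilt, the rest is shared.
modify :: (a -> a) -> Zipper a -> Zipper a
modify f (Node l v r, bs) = (Node l (f v) r, bs)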
Hope this helps solve your problem.
I was just looking at Eric Lippert's simple implementation of an immutable binary tree, and I have a question about it. After showing the implementation, Eric states that
Note that another nice feature of immutable data structures is that it is impossible to accidentally (or deliberately!) create a tree which contains a cycle.
It seems that this feature of Eric's implementation does not come from the immutability alone, but also from the fact that the tree is built up from the leaves. This naturally prevents a node from having any of its ancestors as children. It seems that if you built the tree in the other direction, you'd introduce the possibility of cycles.
Am I right in my thinking, or does the impossibility of cycles in this case come from the immutability alone? Considering the source, I wonder whether I'm missing something.
EDIT: After thinking it over a bit more, it seems that building up from the leaves might be the only way to create an immutable tree. Am I right?
If you're using an immutable data structure in a strict (as opposed to lazy) language, it's impossible to create a cycle: you must create the elements in some order, and once an element is created, you cannot mutate it to point at an element created later. So if you created node n, and then created node m which points at n (perhaps indirectly), you can never complete the cycle by making n point at m, since you are not allowed to mutate n, nor anything that n already points to.
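(The "strict" qualifier matters: in a lazy language like Haskell you can "tie the knot" and build a cyclic structure out of immutable values, for example:)

-- An immutable yet cyclic list: the tail of `ones` is `ones` itself.
-- This definition is only usable because Haskell evaluates it lazily.
ones :: [Integer]
ones = 1 : ones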
Yes, you are correct that you can only ever create an immutable tree by building up from the leaves; if you started from the root, you would have to modify the root to point at its children as you create them. Only by starting from the leaves, and creating each node to point to its children, can you construct a tree from immutable nodes.
If you really want to try hard at it, you could create an immutable "tree" with cycles in it. For example, you could define an immutable graph class and then say:
Graph g = Graph.Empty
.AddNode("A")
.AddNode("B")
.AddNode("C")
.AddEdge("A", "B")
.AddEdge("B", "C")
.AddEdge("C", "A");
And hey, you've got a "tree" with "cycles" in it - because of course you haven't got a tree in the first place, you've got a directed graph.
But with a data type that actually uses a traditional "left and right sub trees" implementation of a binary tree then there is no way to make a cyclic tree (modulo of course sneaky tricks like using reflection or unsafe code.)
When you say "built up from the leaves", I guess you're including the fact that the constructor takes children but never takes a parent.
It seems that if you built the tree in the other direction, you'd introduce the possibility of cycles.
No, because then you'd have the opposite constraint: the constructor would have to take a parent but never a child. Therefore you can never create a descendant until all its ancestors are created. Therefore no cycles are possible.
After thinking it over a bit more, it seems that building up from the leaves might be the only way to create an immutable tree. Am I right?
No... see my comments to Brian and ergosys.
For many applications, a tree whose child nodes point to their parents is not very useful. I grant that. If you need to traverse the tree in an order determined by its hierarchy, an upward-pointing tree makes that hard.
However for other applications, that sort of tree is exactly the sort we want. For example, we have a database of articles. Each article can have one or more translations. Each translation can have translations. We create this data structure as a relational database table, where each record has a "foreign key" (pointer) to its parent. None of these records need ever change its pointer to its parent. When a new article or translation is added, the record is created with a pointer to the appropriate parent.
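As a hedged sketch of that upward-pointing structure (in Haskell rather than SQL, with made-up names): because each constructor takes a parent and never a child, a node can only reference nodes created strictly before it, so no cycle can ever form.

data Article = Original String
             | Translation String Article  -- points at its parent, never at a child

root :: Article
root = Original "some article"

fr, frCa :: Article
fr   = Translation "French translation" root
frCa = Translation "Canadian French re-translation" fr  -- a translation of a translation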
A common use case is to query the table of translations, looking for translations for a particular article, or translations in a particular language. Ah, you say, the table of translations is a mutable data structure.
Sure it is. But it's separate from the tree. We use the (immutable) tree to record the hierarchical relationships, and the mutable table for iteration over the items. In a non-database situation, you could have a hash table pointing to the nodes of the tree. Either way, the tree itself (i.e., the nodes) never gets modified.
Here's another example of this data structure, including how to usefully access the nodes.
My point is that the answer to the OP's question is "yes": I agree with the rest of you that the prevention of cycles does come from immutability alone. While you can build a tree in the other direction (top-down), if you do, and it's immutable, it still cannot have cycles.
When you're talking about powerful theoretical guarantees like
another nice feature of immutable data structures is that it is impossible to accidentally (or deliberately!) create a tree which contains a cycle [emphasis in original]
"such a tree wouldn't be very useful" pales in comparison -- even if it were true.
People create un-useful data structures by accident all the time, let alone creating supposedly-useless ones on purpose. The putative uselessness doesn't protect the program from the pitfalls of cycles in your data structures. A theoretical guarantee does (assuming you really meet the criteria it states).
P.S. one nice feature of upward-pointing trees is that you can guarantee one aspect of the definition of trees that downward-pointing tree data structures (like Eric Lippert's) don't: that every node has at most one parent. (See David's comment and my response.)
You can't build it from the root; that would require you to mutate nodes you've already added.
As a programmer, when should I consider using an RB tree, a B-tree, or an AVL tree?
What are the key points that need to be considered before deciding on the choice?
Can someone explain, with a scenario for each tree structure, why it would be chosen over the others, with reference to those key points?
Take this with a pinch of salt:
B-tree when you're managing many thousands of items and you're paging them from a disk or some other slow storage medium.
RB tree when you're doing fairly frequent inserts, deletes and retrievals on the tree.
AVL tree when your inserts and deletes are infrequent relative to your retrievals.
I think B+ trees are a good general-purpose ordered container data structure, even in main memory. Even when virtual memory isn't an issue, cache-friendliness often is, and B+ trees are particularly good for sequential access - the same asymptotic performance as a linked list, but with cache-friendliness close to a simple array. All this and O(log n) search, insert and delete.
B+ trees do have problems, though - such as the items moving around within nodes when you do inserts/deletes, invalidating pointers to those items. I have a container library that does "cursor maintenance" - cursors attach themselves to the leaf node they currently reference in a linked list, so they can be fixed or invalidated automatically. Since there's rarely more than one or two cursors, it works well - but it's an extra bit of work all the same.
Another thing is that a B+ tree is essentially just that - there's less room to repurpose it. I guess you can strip off or recreate the non-leaf nodes depending on whether you need them or not, but binary tree nodes give you a lot more flexibility. A binary tree can be converted to a linked list and back without copying nodes - you just change the pointers, then remember that you're treating it as a different data structure now. Among other things, this means you get fairly easy O(n) merging of trees - convert both trees to lists, merge them, then convert back to a tree.
Yet another thing is memory allocation and freeing. In a binary tree, this can be separated out from the algorithms - the user can create a node then call the insert algorithm, and deletes can extract nodes (detach them from the tree, but don't free the memory). In a B-tree or B+-tree, that obviously doesn't work - the data will live in a multi-item node. Writing insert methods that "plan" the operation without modifying nodes until they know how many new nodes are needed, and that they can be allocated, is a challenge.
Red black vs. AVL? I'm not sure it makes any big difference. My own library has a policy-based "tool" class to manipulate nodes, with methods for double-linked lists, simple binary trees, splay trees, red-black trees and treaps, including various conversions. Some of those methods were only implemented because I was bored at one time or another. I'm not sure I've even tested the treap methods. The reason I chose red-black trees rather than AVL is because I personally understand the algorithms better - which doesn't mean they're simpler, it's just a fluke of history that I'm more familiar with them.
One last thing - I only originally developed my B+ tree containers as an experiment. It's one of those experiments that never ended really, but it's not something I'd encourage others to repeat. If all you need is an ordered container, the best answer is to use the one that your existing library provides - e.g. std::map etc in C++. My library evolved over years, it took quite a while to get it stable, and I just relatively recently discovered it's technically non-portable (dependent on a bit of undefined behaviour WRT offsetof).
In memory, a B-tree has the advantage when the number of items is more than 32,000... Look at speedtest.pdf from stx-btree.
When choosing data structures you are trading off factors such as
speed of retrieval vs. speed of update
how well the structure copes with worst case operations, for example insertion of records that arrive in a sorted order
space wasted
I would start by reading the Wikipedia articles referenced by Robert Harvey.
Pragmatically, when working in languages such as Java the average programmer tends to use the collection classes provided. If in a performance tuning activity one discovers that the collection performance is problematic then one can seek alternative implementations. It's rarely the first thing a business-led development has to consider. It's extremely rare that one needs to implement such data structures by hand, there are usually libraries that can be used.
Can someone give me a real-life example (in programming, C#) of needing to use a binary tree or even just an ordinary tree?
I understand the principle of a binary tree and how it works, but I'm trying to find some real-life examples of its usage.
Tony
In C#, Java, Python, C++ (using the STL) and other high-level languages, most of the time you will use one of the built-in/library-included types to store your data, at least the data you work on at the moment, so most of the time you won't be using a binary tree or another kind of tree explicitly.
That being said, some of these built-in types are implemented as trees of one kind or another behind the scenes, and in some situations you will have to implement one yourself.
Also, a related thing you HAVE to know is binary search. This is mostly done with binary trees (binary search trees :P), but the idea can be extrapolated to a lot of problems, even without trees involved, so try to understand it well.
Edit: a classic real-life example:
Imagine that you want to find the phone number of a particular person in the phone book of a big city. All things being equal, you open it roughly at the middle, look at the names on that page, and see whether your "target" comes before or after them, thus cutting the data in half. Then you repeat the operation in the half where you know your "target" is, and again and again until you find it. As each time you are looking at half the data you had before, you need a total of log (base 2) of n operations to reach your "target", where n is the total size of the data.
So in a phone book with 1 million entries, you find your target in log (base 2) of 1 million = about 20 comparisons, instead of comparing entries one by one as in a linear search (which takes 1 million comparisons in the worst case).
Note that this only works on already-sorted data.
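As a hedged sketch of that idea in Haskell (using Data.Vector for O(1) indexing; the function name is illustrative):

import qualified Data.Vector as V

-- Binary search over a sorted vector: halve the search space each step.
bsearch :: Ord a => a -> V.Vector a -> Maybe Int
bsearch x v = go 0 (V.length v - 1)
  where
    go lo hi
      | lo > hi   = Nothing                       -- ran out of data: not present
      | otherwise = case compare x (v V.! mid) of
          LT -> go lo (mid - 1)                   -- target is in the lower half
          GT -> go (mid + 1) hi                   -- target is in the upper half
          EQ -> Just mid                          -- found it
      where mid = (lo + hi) `div` 2

-- e.g. bsearch 42 (V.fromList [1, 5, 42, 99]) == Just 2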
Balanced binary trees, storing data maintained in sorted order, are used to achieve O(log(n)) lookup, delete, and insert times. "Balanced" just means there is a bounded limit between the depth of the shallowest and deepest leaves, counting empty left/right nodes as leaves. (optimally the depth of left and right subtrees differs at most by one, some implementations relax this to make the algorithms simpler)
You can use an array, rather than a tree, in sorted order with binary search to achieve O(log(n)) lookup time, but then the insert/delete times are O(n).
Some trees (notably B-trees for databases) use more than 2 branches per node, to widen the tree and reduce the maximum depth (which determines search times).
I can't think of a reason to use binary trees that are not maintained in sorted order (a point that has not been mentioned in most of the answers here), but maybe there's some application for this. Besides the sorted binary balanced tree, anything with hierarchy (as other answerers have mentioned, XML or directory structures) is a good application for trees, whether binary or not.
edit: re: unsorted binary trees: I just remembered that LISP and Scheme both make heavy use of unbalanced binary trees. The cons function takes two arguments (e.g. (define c (cons a b))) and returns a tree node whose branches are the two arguments. The car function takes such a tree node and returns the first argument given to cons. The cdr function is similar, but returns the second argument to cons. Finally, nil represents a null object. These are the primitives used to make all data structures in LISP and Scheme. Lists are implemented as extremely unbalanced binary trees. The list containing the literal elements 'Alabama, 'Alaska, 'Arizona, and 'Arkansas can be constructed explicitly as
(cons 'Alabama (cons 'Alaska (cons 'Arizona (cons 'Arkansas nil))))
and can be traversed using car and cdr (where car is used to get the head of the list and cdr is used to get the sublist excluding the list head). This is how Scheme works; I think LISP is the same or very similar. More complicated data structures, like binary trees (which need three members per node: two to hold the left and right nodes, and a third to hold the node value) or trees containing more than two branches per node, can be constructed using a list to implement each node.
How about the directory structure in Unix? For instance, the du (disk usage) command does a post-order traversal (traversal order: left child -> right child -> root node) of a tree representing the directory structure, in order to compute the disk space used by each directory.
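As a hedged sketch of that traversal in Haskell (a rose tree of directory entries; all names are made up):

-- A directory has a name, its own size, and any number of subdirectories.
data Dir = Dir { dirName :: String, ownSize :: Integer, subdirs :: [Dir] }

-- du-style post-order fold: total the children first, then add this node.
diskUsage :: Dir -> Integer
diskUsage d = sum (map diskUsage (subdirs d)) + ownSize d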
The following slides should help.
http://www.cse.unt.edu/~rada/CSCE3110/Lectures/Trees.ppt
cheers
In Java, trees are used to implement certain sorted data structures, such as the TreeSet:
http://java.sun.com/j2se/1.5.0/docs/api/java/util/TreeSet.html
They are used for data structures where you want the order to be based on some property of the elements, rather than on insertion order.
Here are some examples:
The in-memory representation of a parsed program or expression is a tree. In the case of expressions (excluding ternary operators) the tree will be binary (see the sketch after this list).
The components of a GUI are organized as a tree.
Any "containment" hierarchy can be represented as a tree. (HTML, XML and SGML are examples.
And of course, binary (and n-ary) trees can be used to represent indexes, maps, sets and other "generic" data structures.
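For the first item, a hedged Haskell sketch: the expression (1 + 2) * 3 represented as a binary tree, with an evaluator that walks it:

data Expr = Lit Integer | Add Expr Expr | Mul Expr Expr

-- Evaluate by recursively walking the tree.
eval :: Expr -> Integer
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- (1 + 2) * 3 as a tree; eval example == 9
example :: Expr
example = Mul (Add (Lit 1) (Lit 2)) (Lit 3)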
An easy example is searching. If you store your list data in a tree, for example, you get O(log(n)) lookup times. A standard array implementation of a list would achieve O(n) lookup time.
XML, HTML (and SGML) documents are trees.
I'm looking for a good data structure to build equivalence classes on nodes of a tree. In an ideal structure, the following operations should be fast (O(1)/O(n) as appropriate) and easy (no paragraphs of mystery code):
(A) Walk the tree from the root; on each node --> child transition enumerate all the equivalent versions of the child node
(B) Merge two equivalence classes
(C) Create new nodes from a list of existing nodes (the children) and other data
(D) Find any nodes structurally equivalent to node (i.e. they have the same number of children, corresponding children belong to the same equivalence class, and their "other data" is equal) so that new (or newly modified) nodes may be put in the right equivalence class (via a merge)
So far I've considered (some of these could be used in combination):
A parfait, where the children are references to collections of nodes instead of to nodes. (A) is fast, (B) requires walking the tree and updating nodes to point to the merged collection, (C) requires finding the collection containing each child of the new node, (D) requires walking the tree
Maintaining a hash of nodes by their characteristics. This makes (D) much faster but (B) slower (since the hash would have to be updated when equivalence classes were merged)
String the nodes together into a circular linked list. (A) is fast; (B) would be fast but for the fact that "merging" part of a circular list with itself actually splits the list; (C) would be fast; (D) would require walking the tree
Like above, but with an additional "up" pointer in each node, which could be used to find a canonical member of the circular list.
Am I missing a sweet alternative?
You seem to have two forms of equivalence to deal with: plain equivalence (A), tracked as equivalence classes which are kept up to date, and structural equivalence (D), for which you occasionally build a single equivalence class and then throw it away.
It sounds to me like the problem would be conceptually simpler if you maintain equivalence classes for both plain and structural equivalence. If that introduces too much churn for the structural equivalence, you could maintain equivalence classes for some aspects of structural equivalence. Then you could find a balance where you can afford the maintenance of those equivalence classes but still greatly reduce the number of nodes to examine when building a list of structurally equivalent nodes.
I don't think any one structure is going to solve your problems, but you might take a look at the Disjoint-set data structure. An equivalence class, after all, is the same thing as a partitioning of a set. It should be able to handle some of those operations speedily.
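To make that concrete, here is a hedged sketch of a tiny persistent union-find in Haskell (no union-by-rank or path compression, so finds can degrade to linear time; a real disjoint-set implementation would add both):

import qualified Data.Map as M

-- Each element maps to its parent; an element absent from the map
-- (or mapped to itself) is the representative of its class.
type UF a = M.Map a a

-- Follow parent links up to the class representative.
find :: Ord a => UF a -> a -> a
find uf x = case M.lookup x uf of
              Just p | p /= x -> find uf p
              _               -> x

-- Merge two classes by pointing one representative at the other.
union :: Ord a => a -> a -> UF a -> UF a
union x y uf = M.insert (find uf x) (find uf y) uf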
Stepping back for a moment, I'd suggest against using a tree at all. The last time I had to confront a similar problem, I began with a tree but later moved to an array.
There were multiple reasons, but the number one reason was performance: my classes with up to 100 or so children would actually perform better when manipulated as an array than through the nodes of a tree, mostly because of hardware locality, CPU prefetch logic, and CPU pipelining.
So although, algorithmically, an array requires a larger number of operations than a tree, performing those dozens of operations is likely faster than chasing pointers across memory.