Traversing a ttk::treeview in Tk

Is there a simple way to traverse the items of a Tcl/Tk ttk::treeview as if they were items in a listbox? Example:
          A
  |       |-- B
visit     |    |-- C
order     |    |-- D      ----> A B C D E F G
  |       |-- E
  V       |-- F
          |-- G
I understand that this would correspond to traversing the tree in preorder and this is, in fact, my current solution. Since I do have a complete tree with maximum depth N, I can do something like:
foreach lev1 [.tree children {}] {
    do_stuff $lev1
    foreach lev2 [.tree children $lev1] {
        do_stuff $lev2
        foreach lev3 [.tree children $lev2] {
            do_stuff $lev3
            ...
        }
    }
}
but I am looking for an easier way to do it.
I have considered adding a tag (say mytag) to each node and using .tree tag has mytag to get the list of all the nodes. The problem is that, AFAIK, the resulting order is not guaranteed, so I may end up with a different type of visit.

Recursive traversal ought to do the trick for you. Something along the lines of
proc traverse {item} {
    do_stuff $item
    foreach child [.tree children $item] {
        traverse $child
    }
}
traverse {}
Feels fairly simple too.
(Disclaimer: I haven't actually tested this.)
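For readers outside Tcl, the same preorder recursion can be sketched in Python (the nested-dict tree model here is illustrative, not part of the question):

```python
# Preorder traversal sketch: visit a node, then recurse into its children.
# The tree below is a hypothetical stand-in for the treeview in the question.

def preorder(label, children_of):
    yield label
    for child in children_of.get(label, []):
        yield from preorder(child, children_of)

# A -> B E F G, B -> C D, matching the question's diagram
children_of = {"A": ["B", "E", "F", "G"], "B": ["C", "D"]}
print(list(preorder("A", children_of)))  # -> ['A', 'B', 'C', 'D', 'E', 'F', 'G']
```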

Related

In Ruby, can you store a pointer to a node in a hash for later replacement?

I have to assemble some massive JSON payloads, and I want to avoid massive duplicately nested leaves. What I'd like to do is something like this:
tree = {}
tree[0] = {}
tree[0][1] = "stub"
# now save this pointer for later
stub = &tree[0][1]
...
# now go get the leaf
leaf = {0 => ["a","b","c"]}
# now without having to search the entire tree, just use the old stub pointer
stub = leaf
Can this be done in Ruby?
Thanks for any help,
kevin
We don't have pointers (at least not at the Ruby level) but we do have references. So you could use a real hash instead of your "stub" string:
tree = {}
tree[0] = {}
tree[0][1] = {}
then stash that reference in leaf:
leaf = tree[0][1]
and modify the content of leaf without assigning anything new to leaf:
leaf[0] = %w[a b c]
That would leave you with tree[0][1] being {0 => ['a', 'b', 'c']} as desired. Of course, if you say leaf = {0 => %w[a b c]} then you'll have a new reference and you'll break the connection with tree[0][1].
Usually this goes in the other direction. When you need a new leaf, you create it:
leaf = {0 => %w[a b c]}
and then you put that leaf in the tree:
tree[0][1] = leaf
tree[0][6] = leaf # Possibly in multiple places
Then you could say leaf[11] = %w[x y z] and tree[0][1][11] and tree[0][6][11] would also be ['x', 'y', 'z'] because leaf, tree[0][1], and tree[0][6] would all refer to the same underlying hash.
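For comparison, Python dicts have the same reference semantics; a small illustrative sketch:

```python
# Keep a reference to a nested dict, then mutate it in place: the change is
# visible through every path that refers to the same underlying object.
tree = {0: {1: {}}}
leaf = tree[0][1]          # a reference, not a copy
leaf[0] = ["a", "b", "c"]  # mutate in place
print(tree[0][1])          # -> {0: ['a', 'b', 'c']}

leaf = {0: ["x", "y", "z"]}  # rebinding breaks the connection
print(tree[0][1])            # -> still {0: ['a', 'b', 'c']}
```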

Finding all paths between two nodes on a DAG

I have a DAG that has the following adjacency list
L | G, B, P
G | P, I
B | I
P | I
I | R
R | \
I want to find all paths from L to R. I know that I have to do some kind of DFS, and this is what I have so far. (Excuse the Javascript)
function dfs(G, start_vertex) {
  const fringe = []
  const visited = new Set()
  const output = []
  fringe.push(start_vertex)
  while (fringe.length != 0) {
    const vertex = fringe.pop()
    if (!visited.has(vertex)) {
      output.push(vertex)
      for (const neighbor of G[vertex].neighbors) {
        fringe.push(neighbor)
      }
      visited.add(vertex)
    }
  }
  return output
}
The output of dfs(G, "L") is [ 'L', 'P', 'I', 'R', 'B', 'G' ] which is indeed a depth first traversal of this graph, but not the result I'm looking for. After doing some searching, I realize there may be a recursive solution, but there were some comments about this problem being "NP-hard" and something about "exponential paths" which I don't understand.
The problem is indeed hard in the worst case: the number of possible paths between two nodes can be exponential in the number of nodes, so there is no way around a worst-case exponential runtime.
All paths that start with prefix head and end at vertex can be split into paths with prefixes head||v, where v is adjacent to the final vertex of head, unless the final vertex of head is already vertex (pseudo-JavaScript, may have syntax problems):
function print_all_rec(G, head, vertex) {
  const last = head[head.length - 1]
  if (last == vertex) {
    console.log(head)  // we're here
    return
  }
  for (const v of G[last].neighbors) {
    print_all_rec(G, head.concat(v), vertex)  // copy head so sibling calls don't share it
  }
}
function print_all_routes(G, from, to) {
  print_all_rec(G, [from], to)
}
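A runnable Python equivalent of the same recursion, using the adjacency list from the question (collecting the paths into a list instead of printing is my choice here):

```python
# Enumerate every path from start to goal in a DAG by extending the current
# prefix with each neighbor of its last vertex.

def all_paths(G, start, goal):
    paths = []
    def extend(head):
        last = head[-1]
        if last == goal:
            paths.append(list(head))
            return
        for v in G.get(last, []):
            extend(head + [v])
    extend([start])
    return paths

# adjacency list from the question
G = {"L": ["G", "B", "P"], "G": ["P", "I"], "B": ["I"], "P": ["I"], "I": ["R"], "R": []}
for p in all_paths(G, "L", "R"):
    print(" -> ".join(p))
```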

Concatenation of iterators

I saw this example in "Programming in Scala" chapter 24 "Collections in depth". This example shows two alternative ways to implement a tree:
by extending Traversable[Int] - here the complexity of def foreach[U](f: Int => U): Unit would be O(N).
by extending Iterable[Int] - here the complexity of def iterator: Iterator[Int] would be O(N log(N)).
This is to demonstrate why it would be helpful to have two separate traits, Traversable and Iterable.
sealed abstract class Tree
case class Branch(left: Tree, right: Tree) extends Tree
case class Node(elem: Int) extends Tree
sealed abstract class Tree extends Traversable[Int] {
def foreach[U](f: Int => U) = this match {
case Node(elem) => f(elem)
case Branch(l, r) => l foreach f; r foreach f
}
}
sealed abstract class Tree extends Iterable[Int] {
def iterator: Iterator[Int] = this match {
case Node(elem) => Iterator.single(elem)
case Branch(l, r) => l.iterator ++ r.iterator
}
}
Regarding the implementation of foreach they say:
traversing a balanced tree takes time proportional to the number of
elements in the tree. To see this, consider that for a balanced tree
with N leaves you will have N - 1 interior nodes of class Branch. So
the total number of steps to traverse the tree is N + N - 1.
That makes sense. :)
However, they mention that the concatenation of the two iterators in the iterator method has time complexity of log(N), so the total complexity of the method would be N log(N):
Every time an element is produced by a concatenated iterator such as
l.iterator ++ r.iterator, the computation needs to follow one
indirection to get at the right iterator (either l.iterator, or
r.iterator). Overall, that makes log(N) indirections to get at a leaf
of a balanced tree with N leaves. So the cost of visiting all elements of a tree went up from about 2N for the foreach traversal method to N log(N) for the traversal with iterator.
????
Why does the computation of the concatenated iterator need to get at a leaf of the left or right iterator?
The pun on "collections in depth" is apt. The depth of the data structure matters.
When you invoke top.iterator.next(), each interior Branch delegates to the iterator of the Branch or Node below it, a call chain which is log(N).
You incur that call chain on every next().
Using foreach, you visit each Branch or Node just once.
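The effect is easy to reproduce outside Scala. Here is an illustrative Python sketch (the Node/Branch classes are stand-ins for the book's tree): the generator version routes every produced leaf through one frame per tree level, which is the log(N) indirection, while direct recursion touches each node exactly once.

```python
# Two traversals of the same balanced tree: direct recursion (the foreach
# analogue, O(N)) versus nested generators (the l.iterator ++ r.iterator
# analogue, where producing a leaf passes through every level above it).

class Node:
    def __init__(self, elem):
        self.elem = elem

class Branch:
    def __init__(self, left, right):
        self.left, self.right = left, right

def foreach(tree, f):
    # Each Branch/Node is visited exactly once.
    if isinstance(tree, Node):
        f(tree.elem)
    else:
        foreach(tree.left, f)
        foreach(tree.right, f)

def iterator(tree):
    if isinstance(tree, Node):
        yield tree.elem
    else:
        yield from iterator(tree.left)   # each yielded element passes
        yield from iterator(tree.right)  # through this frame as well

tree = Branch(Branch(Node(1), Node(2)), Branch(Node(3), Node(4)))
out = []
foreach(tree, out.append)
print(out)                   # -> [1, 2, 3, 4]
print(list(iterator(tree)))  # -> [1, 2, 3, 4]
```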
Edit: Not sure if this helps, but here is an example of eagerly locating the leaves but lazily producing the values. It would overflow the stack or be slower in older versions of Scala, but the implementation of chained ++ was improved; now it's a flat chain that gets shorter as it's consumed.
sealed abstract class Tree extends Iterable[Int] {
def iterator: Iterator[Int] = {
def leafIterator(t: Tree): List[Iterator[Int]] = t match {
case Node(_) => t.iterator :: Nil
case Branch(left, right) => leafIterator(left) ::: leafIterator(right)
}
this match {
case n @ Node(_) => Iterator.fill(1)(n.value)
case Branch(left @ Node(_), right @ Node(_)) => left.iterator ++ right.iterator
case b @ Branch(_, _) =>
leafIterator(b).foldLeft(Iterator[Int]())((all, it) => all ++ it)
}
}
}
case class Branch(left: Tree, right: Tree) extends Tree {
override def toString = s"Branch($left, $right)"
}
case class Node(elem: Int) extends Tree {
def value = {
Console println "An expensive leaf calculation"
elem
}
override def toString = s"Node($elem)"
}
object Test extends App {
// many leaves
val n = 1024 * 1024
val ns: List[Tree] = (1 to n).map(Node(_)).toList
var b = ns
while (b.size > 1) {
b = b.grouped(2).map { case left :: right :: Nil => Branch(left, right) }.toList
}
Console println s"Head: ${b.head.iterator.take(3).toList}"
}
In this implementation, the topmost branch does NOT know how many elements its left and right sub-branches hold.
Therefore the iterator is built recursively, with the divide-and-conquer approach clearly represented in the iterator method: you reach each branch (case Branch), you produce the iterator of each single node (case Node), and then you join them.
Without visiting each and every node, it would not know which elements there are or how the tree is structured (odd branches allowed vs. not allowed, etc.).
EDIT:
Let's have a look inside the ++ method on Iterator.
def ++[B >: A](that: => GenTraversableOnce[B]): Iterator[B] = new Iterator.JoinIterator(self, that)
and then at Iterator.JoinIterator
private[scala] final class JoinIterator[+A](lhs: Iterator[A], that: => GenTraversableOnce[A]) extends Iterator[A] {
private[this] var state = 0 // 0: lhs not checked, 1: lhs has next, 2: switched to rhs
private[this] lazy val rhs: Iterator[A] = that.toIterator
def hasNext = state match {
case 0 =>
if (lhs.hasNext) {
state = 1
true
} else {
state = 2
rhs.hasNext
}
case 1 => true
case _ => rhs.hasNext
}
def next() = state match {
case 0 =>
if (lhs.hasNext) lhs.next()
else {
state = 2
rhs.next()
}
case 1 =>
state = 0
lhs.next()
case _ =>
rhs.next()
}
override def ++[B >: A](that: => GenTraversableOnce[B]) =
new ConcatIterator(this, Vector(() => that.toIterator))
}
From that we can see that joining iterators just creates a recursive structure in the rhs field. Let's look at it a bit more closely.
Consider an even tree with the structure: level 1 [A]; level 2 [B][C]; level 3 [D][E][F][G].
When you call JoinIterator on the iterator you preserve the existing lhs iterator, but you always call .toIterator on rhs, which means that for each subsequent level the rhs part is reconstructed. So for B ++ C you get something that looks like A.lhs (standing for B) and A.rhs (standing for C.toIterator), where C.toIterator in turn stands for C.lhs and C.rhs, and so on. Hence the added complexity.
I hope this answers your question.

Least Common Ancestor algorithm variation

        root
       /    \
      A      B
    / | \   / \
   C  D  E F   G
   |           |
   H           I
Given a tree and a list of types {C, D, E, F}, the summary is {A, F}
(as C, D, E together imply A).
If the list of types were {C, D, E, F, I}, the summary would be {root} (as C, D, E imply A; I implies G; G and F imply B; and A and B imply root).
At a high level, how would the algorithm for finding the summary work? (pseudocode only)
Pseudocode, a simple tree traversal:
String getSummary(Node node) {
    if (node is an element of the set)
        return node name;
    String result = "";
    for (Node child : node.getChildren()) {
        if (child's subtree contains an element of the set)
            result += getSummary(child);
    }
    if (result consists of the names of all of node's children)
        return node name;
    return result;
}
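A runnable Python sketch of the same idea (the summarize name and the dict tree model are mine, not from the answer): a node collapses to its own name when its label is in the set, or when every one of its children collapsed.

```python
# Post-order "summary": a node is covered if its own label is in the set,
# or if all of its children are covered; the summary is the list of the
# highest covered nodes.

def summarize(tree, labels, node="root"):
    # tree: dict mapping each node label to the list of its children
    if node in labels:
        return [node]
    children = tree.get(node, [])
    if not children:
        return []
    parts = [summarize(tree, labels, c) for c in children]
    # If every child collapsed to itself, the whole subtree is covered.
    if all(p == [c] for p, c in zip(parts, children)):
        return [node]
    return [n for p in parts for n in p]

# the tree from the question
tree = {
    "root": ["A", "B"],
    "A": ["C", "D", "E"],
    "B": ["F", "G"],
    "C": ["H"],
    "G": ["I"],
}
print(summarize(tree, {"C", "D", "E", "F"}))       # -> ['A', 'F']
print(summarize(tree, {"C", "D", "E", "F", "I"}))  # -> ['root']
```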

Pseudocode to compare two trees

This is a problem I've encountered a few times, and haven't been convinced that I've used the most efficient logic.
As an example, presume I have two trees: one is a folder structure, the other is an in-memory 'model' of that folder structure. I wish to compare the two trees, and produce a list of nodes that are present in one tree and not the other - and vice versa.
Is there an accepted algorithm to handle this?
Seems like you just want to do a pre-order traversal, essentially. Where "visiting" a node means checking for children that are in one version but not the other.
More precisely: start at the root. At each node, get a set of the items in each of the two versions of the node. The symmetric difference of the two sets contains the items in one but not the other; print/output those. The intersection contains the items common to both. For each item in the intersection (I assume you aren't going to look further into the items that are missing from one tree), call "visit" recursively on that node to check its contents. It's an O(n) operation, with a little recursion overhead.
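That visit can be sketched in Python, modeling each tree as nested dicts mapping entry name to subtree (an assumed representation):

```python
# Report entries present in one tree but not the other, recursing only into
# entries common to both.

def diff_trees(t1, t2, path=""):
    missing = []  # (path, which side it exists on)
    for name in sorted(t1.keys() - t2.keys()):
        missing.append((path + "/" + name, "only in first"))
    for name in sorted(t2.keys() - t1.keys()):
        missing.append((path + "/" + name, "only in second"))
    for name in sorted(t1.keys() & t2.keys()):
        missing.extend(diff_trees(t1[name], t2[name], path + "/" + name))
    return missing

# hypothetical folder structures
fs1 = {"src": {"a.c": {}, "b.c": {}}, "docs": {}}
fs2 = {"src": {"a.c": {}, "c.c": {}}}
print(diff_trees(fs1, fs2))
```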
public boolean compareTrees(TreeNode root1, TreeNode root2) {
if ((root1 == null && root2 != null) ||
(root1 != null && root2 == null)) {
return false;
}
if (root1 == null && root2 == null) {
return true;
}
if (root1.data != root2.data) {
return false;
}
return compareTrees(root1.left, root2.left) &&
compareTrees(root1.right, root2.right);
}
If you use a search tree, such as an AVL tree, you can also traverse it efficiently in-order. That will return your paths in sorted order from "low" to "high".
Then you can sort your directory array (e.g. using quicksort) with the same compare method you use in your tree algorithm.
Then compare the two side by side, advancing by traversing your tree in-order and checking the next item in your sorted directory array.
This should be efficient in practice, but only benchmarking can tell.
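That side-by-side walk over two sorted sequences can be sketched like this (Python, with illustrative names):

```python
# Merge-style comparison of two sorted sequences of paths: advance whichever
# side is smaller, reporting items missing from the other side.

def compare_sorted(tree_paths, dir_paths):
    only_tree, only_dir = [], []
    i = j = 0
    while i < len(tree_paths) and j < len(dir_paths):
        if tree_paths[i] == dir_paths[j]:
            i += 1
            j += 1
        elif tree_paths[i] < dir_paths[j]:
            only_tree.append(tree_paths[i])
            i += 1
        else:
            only_dir.append(dir_paths[j])
            j += 1
    only_tree.extend(tree_paths[i:])  # whatever remains exists on one side only
    only_dir.extend(dir_paths[j:])
    return only_tree, only_dir

print(compare_sorted(["a", "b", "d"], ["a", "c", "d"]))  # -> (['b'], ['c'])
```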
A simple example in Python:
class Node(object):
    def __init__(self, val):
        self.val = val
        self.child = {}

    def get_left(self):
        # returns None when the node has no left child
        return self.child.get('left')

    def get_right(self):
        # returns None when the node has no right child
        return self.child.get('right')

def traverse_tree(a):
    if a is not None:
        print('current_node : %s' % a.val)
        traverse_tree(a.get_left())
        traverse_tree(a.get_right())

def compare_tree(a, b):
    if (a is None) != (b is None):
        return False
    if a is None and b is None:
        return True
    return (a.val == b.val and
            compare_tree(a.get_left(), b.get_left()) and
            compare_tree(a.get_right(), b.get_right()))

# Example
a = Node(1)
b = Node(1)
a.child['left'] = Node(2)
a.child['right'] = Node(3)
a.child['left'].child['left'] = Node(4)
a.child['left'].child['right'] = Node(5)
a.child['right'].child['left'] = Node(6)
a.child['right'].child['right'] = Node(7)
b.child['left'] = Node(2)
b.child['right'] = Node(3)
b.child['left'].child['left'] = Node(4)
#b.child['left'].child['right'] = Node(5)
b.child['right'].child['left'] = Node(6)
b.child['right'].child['right'] = Node(7)
if compare_tree(a, b):
    print('trees are equal')
else:
    print('trees are unequal')
# DFS traversal
traverse_tree(a)
Also pasted an example that you can run.
You may also want to have a look at how git does it. Essentially whenever you do a git diff, under the hood a tree comparison is done.
