A binary tree T is semi-balanced if for every node m in T:
R(m)/2 <= L(m) <= 2*R(m),
where L(m) is the number of nodes in the left subtree of m and R(m) is the number of nodes in the right subtree of m.
(a) Write a recurrence relation to count the number of semi-balanced binary trees with N
nodes.
(b) Provide a Dynamic Programming algorithm for computing the recurrence in (a).
How do I go about making the recurrence relation for this?
Does the following qualify?
if (node == NULL)
    return;
if (given relation is true)
    count++;
else
    find for right tree;
find for left tree;
I guess he is asking more for a recurrence relation, i.e. something like a mathematical function?
Also, how do I go about doing the problem using dynamic programming? I guess I don't need to store anything if I apply the above suggested code snippet.
Kindly help.
Hint: Let C(n) be the number of semi-balanced trees with n nodes. If you know the values C(1), C(2), ..., C(n), then it is easy to calculate C(n+1): take a root node and divide the remaining n nodes between the left and right subtrees according to the stated condition.
The number of nodes in each subtree can range from n/3 to 2*n/3, since these are the values satisfying the condition R/2 <= L <= 2*R.
Update:
C(n) = sum over l from ceil((n-1)/3) to floor(2*(n-1)/3) of C(l) * C(n-1-l), with C(0) = 1
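For part (b), here is a bottom-up dynamic-programming sketch of that recurrence (the function name is illustrative, and taking C(0) = 1 for the empty tree is an assumption):

```python
def count_semibalanced(n):
    # C[m] = number of semi-balanced binary trees with m nodes.
    # The root uses one node; the remaining m-1 split into a left
    # subtree of l nodes and a right subtree of m-1-l nodes, where
    # R/2 <= L <= 2*R forces ceil((m-1)/3) <= l <= floor(2*(m-1)/3).
    C = [0] * (n + 1)
    C[0] = 1  # the empty tree
    for m in range(1, n + 1):
        rest = m - 1
        lo = -(-rest // 3)    # ceil(rest / 3)
        hi = (2 * rest) // 3  # floor(2 * rest / 3)
        C[m] = sum(C[l] * C[rest - l] for l in range(lo, hi + 1))
    return C[n]
```

Each entry is a sum over at most about n/3 earlier entries, so this runs in O(n^2) time and O(n) space.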
Here, isomorphism means that one full binary tree can be transformed into another by swapping the left and right subtrees at arbitrary nodes.
The answer is definitely not the Catalan numbers, because the Catalan numbers count isomorphic trees separately.
I assume by n nodes you mean n internal nodes. So it will have 2n+1 vertices, and 2n edges.
Next, we can put an order on binary trees as follows. A tree with more nodes is bigger. If two trees have the same number of nodes, compare the left side, and break ties by comparing the right. If two trees are equal in this order, it isn't hard to show by induction that they are the same tree.
For your problem, we can assume that for each isomorphism class we are only interested in the maximal tree in that isomorphism class. Note that this means that both the left and the right subtrees must also be maximal in their isomorphism classes, and the left subtree must be the same as or bigger than the right.
So suppose that f(n) is the number of non-isomorphic binary trees with n nodes. We can now go recursively. Here are our cases:
n=0: there is one, a single leaf.
n=1: there is one, a node with 2 leaves.
n > 1. Let us iterate over m, the number of nodes on the right. If 2m+1 < n then there are f(m) maximal trees on the right and f(n-m-1) on the left, and all of those combinations are maximal, giving f(m) * f(n-m-1). If 2m+1 = n then we want a maximal tree on the right with m nodes and a maximal tree on the left with m nodes, where the one on the right is smaller than or equal to the one on the left. There is a well-known formula for the number of such unordered pairs: f(m) * (f(m) + 1) / 2. And finally we can't have n < 2m+1, because in that case the tree would not be maximal.
Using this, you should be able to write a recursive function to calculate the answer.
UPDATE Here is such a function:
cache = {0: 1, 1: 1}

def count_nonisomorphic_full_trees(n):
    if n not in cache:
        answer = 0
        # m = number of internal nodes in the right subtree
        for m in range(n):
            c = count_nonisomorphic_full_trees(m)
            if n < m + m + 1:
                break  # the right subtree may not outweigh the left
            elif n == m + m + 1:
                # both subtrees have m nodes: count unordered pairs
                answer = answer + c * (c + 1) // 2
            else:
                d = count_nonisomorphic_full_trees(n - m - 1)
                answer = answer + c * d
        cache[n] = answer
    return cache[n]
Note that it starts off slower than the Catalan numbers but still grows exponentially.
In Tree based Implementation of Union Find operation, each element is stored in a node, which contains a pointer to a set name. A node v whose set pointer points back to v is also a set name. Each set is a tree, rooted at a node with a self-referencing set pointer.
To perform a union, we simply make the root of one tree point to the root of the other. To perform a find, we follow set name pointers from the starting node until reaching a node whose set name pointer refers back to itself.
In union by size -> When performing a union, we make the root of the smaller tree point to the root of the larger. This implies O(n log n) time for performing n union/find operations. Each time we follow a pointer, we are going to a subtree of size at least double the size of the previous subtree. Thus, we will follow at most O(log n) pointers for any find.
I do not understand why the find operation is always O(log n) in this case. Can someone please explain how the worst-case complexity is actually computed?
Let's assume for the moment that each tree of height h contains at least 2^h nodes. What happens if you join two such trees?
If they are of different heights, the height of the combined tree is the same as the height of the taller one, so the new tree still has at least 2^h nodes (same height but more nodes).
Now if they are the same height, the resulting tree will increase its height by one, and will contain at least 2^h + 2^h = 2^(h+1) nodes. So the condition will still hold.
The most basic trees (1 node, height 0) also fulfill the condition. It follows, that all trees that can be constructed by joining two trees together fulfill it as well.
Now the height is just the maximum number of steps to follow during a find. If a tree has n nodes and height h, then n >= 2^h immediately gives h <= log2(n), so any find follows at most log2(n) pointers.
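The doubling argument above can be sketched in code. Here is a minimal union-find with union by size (names are illustrative; no path compression):

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Follow parent pointers up to the root.
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # Attach the smaller tree under the larger one. A node's tree
        # at least doubles in size whenever its root changes, so no
        # node ends up more than log2(n) pointers from its root.
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
```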
You can do n union/find operations (with union by rank or size) in O(n lg* n) time, where lg* n is the iterated logarithm, by adding the path-compression optimization; the tight bound is O(n α(n)), where α is the inverse Ackermann function.
Note that O(n lg* n) is better than O(n log n).
In the question Why is the Ackermann function related to the amortized complexity of union-find algorithm used for disjoint sets? you can find details about this relation.
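For reference, path compression only changes the find: after locating the root, every node on the path is re-pointed directly at it. A minimal sketch over a plain parent array (names are illustrative):

```python
def find_compress(parent, x):
    # First pass: locate the root.
    root = x
    while parent[root] != root:
        root = parent[root]
    # Second pass: point every node on the path directly at the root.
    while parent[x] != root:
        parent[x], x = root, parent[x]
    return root
```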
We need to prove that the maximum height of the trees is log(N), where N is the number of items in the union-find structure (1).
In the base case, all trees have a height of 0, so (1) is trivially satisfied.
Now, assuming all the trees satisfy (1), we need to prove that joining any 2 trees with i and j nodes (i <= j) creates a new tree with maximum height log(i + j) (2):
Because the joining procedure attaches the root node of the smaller tree to the root node of the bigger one, the height of the new tree will be:
max(log(j), 1 + log(i)) = max(log(j), log(2i)) <= log(i + j) => (2) proved
log(j): the height of the new tree is still the height of the bigger tree
1 + log(i): when the heights of the 2 trees are the same
Ref: book Algorithms
I have written the following algorithm that, given a node x in a binary search tree T, sets the field s for all nodes in the subtree rooted at x, such that for each node, s is the sum of all odd keys in the subtree rooted at that node.
OddNodeSetter(T, x):
    if (x == NIL):
        return 0
    if (x.key mod 2 == 1):
        x.s = x.key + OddNodeSetter(T, x.left) + OddNodeSetter(T, x.right)
    else:
        x.s = OddNodeSetter(T, x.left) + OddNodeSetter(T, x.right)
    return x.s
I've thought of using the master theorem for this, with the recurrence
T(n) = T(k) + T(n-k-1) + 1 for 1 <= k < n
however, since the sizes of the two recursive calls can vary depending on k and n-k-1 (i.e. the number of nodes in the left and right subtrees of x), I can't quite figure out how to solve this recurrence. For example, in case the numbers of nodes in the left and right subtrees of x are equal, we can express the recurrence in the form
T(n) = 2T(n/2) + 1
which can be solved easily, but that doesn't prove the running time in all cases.
Is it possible to prove this algorithm runs in O(n) with the master theorem, and if not what other way is there to do this?
The algorithm visits every node in the tree exactly once, hence O(N).
Update:
And obviously, a visit takes constant time (not counting the recursive calls).
There is no need to use the Master theorem here.
Think of the problem this way: what is the maximum number of operations you have to do for each node in the tree? It is bounded by a constant. And what is the number of nodes in the tree? It is n.
Multiplying a constant by n still gives O(n).
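The constant-work-per-node argument can be made concrete with a runnable Python sketch of the algorithm (the Node class and names are illustrative):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right
        self.s = 0

def odd_node_setter(x):
    # One visit per node, O(1) work per visit besides the recursive
    # calls, hence O(n) overall for a subtree of n nodes.
    if x is None:
        return 0
    subtotal = odd_node_setter(x.left) + odd_node_setter(x.right)
    x.s = subtotal + (x.key if x.key % 2 == 1 else 0)
    return x.s
```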
We implement the disjoint-set data structure with trees. In this data structure, makeset() creates a set with one element, and merge(i, j) merges the trees of sets i and j in such a way that the tree with the lower height becomes a child of the root of the other tree. If we do n makeset() operations and n-1 merge() operations in a random order, and then do one find operation, what is the cost of this find operation in the worst case?
I) O(n)
II) O(1)
III) O(n log n)
IV) O(log n)
Answer: IV.
Could anyone give a tip as to how the author arrived at this solution?
The O(log n) find is only true when you use union by rank (also known as weighted union). With this optimisation, we always place the tree with lower rank under the root of the tree with higher rank. If both have the same rank, we choose arbitrarily but increase the rank of the resulting tree by one. This gives an O(log n) bound on the depth of the tree. We can prove this by showing that a node that is i levels below the root (equivalent to being in a tree of rank >= i) is in a tree of at least 2^i nodes (which is the same as showing that a tree of size n has depth at most log n). This is easily done with induction.
Induction hypothesis: tree size is >= 2^j for j < i.
Case i == 0: the node is the root, size is 1 = 2^0.
Case i + 1: the length of a path is i + 1 if it was i and the tree was then placed underneath another tree. By the induction hypothesis, it was in a tree of size >= 2^i at that time. It is being placed under another tree, which by our merge rules means that tree has at least rank i as well, and therefore also has >= 2^i nodes. The new tree therefore has >= 2^i + 2^i = 2^(i + 1) nodes.
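As a sanity check on the 2^i bound, here is a sketch that merges 2^k singletons pairwise under union by rank (equal-rank merges are the worst case for depth) and measures the deepest node; the function names are illustrative:

```python
def depth_after_pairwise_merges(k):
    # Build 2^k singletons, then merge them pairwise round by round,
    # so every union joins two trees of equal rank (the worst case).
    n = 2 ** k
    parent = list(range(n))
    rank = [0] * n

    def find(x):
        depth = 0
        while parent[x] != x:
            x, depth = parent[x], depth + 1
        return x, depth

    def union(a, b):
        ra, _ = find(a)
        rb, _ = find(b)
        if ra == rb:
            return
        if rank[ra] < rank[rb]:
            ra, rb = rb, ra
        parent[rb] = ra
        if rank[ra] == rank[rb]:
            rank[ra] += 1

    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            union(i, i + step)
        step *= 2
    # Return the depth of the deepest node in the final tree.
    return max(find(i)[1] for i in range(n))
```

Even in this worst case the deepest node sits exactly k = log2(n) levels below the root, matching the bound from the induction.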
A weight-balanced tree is a binary tree in which, for each node, the number of nodes in the left subtree is at least half and at most twice the number of nodes in the right subtree. How do I approach forming a recurrence for the height of such a weight-balanced binary tree?
Only full binary trees can be weight-balanced. Let H(n) be the maximum height of a weight-balanced binary tree with exactly n interior nodes (so, 2*n+1 nodes). The base case
H(0) = 0
is obvious. The recurrence
H(n) = 1 + max over l from ceiling((2*n/3 - 1)/2) to floor((4*n/3 - 1)/2) of max(H(l), H(n-1-l)), for all n > 0,
follows from the recurrence for height and the fact that, given n-1 interior nodes that are not the root, we must distribute them between the left (l) and the right (n-1-l) subtrees, subject to the weight-balance criterion. (There are 2*n interior and exterior nodes to distribute; the most unbalanced split possible is one-third/two-thirds; a full binary tree with k nodes has (k-1)/2 interior nodes.)
Let's conjecture that H(n) is nondecreasing and write a new recurrence
H'(0) = 0
H'(n) = 1 + H'(floor(2*n/3-1/2)) for all n > 0.
The point of the new recurrence is that, if it is in fact nondecreasing, then H(n) = H'(n), by a strong induction proof that involves simplifying the maxes in the other recurrence. In fact, we can prove by induction, without solving H'(n), that it is indeed nondecreasing, so this simplification is fine.
As for solving H'(n), I'll wave my hands and tell you that Akra--Bazzi applies (alternatively, the Master Theorem Case 2 if you want to play fast and loose with the floor), yielding the asymptotic bound H'(n) = Theta(log n).
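A memoized evaluation of H'(n) makes a quick sanity check on the Theta(log n) claim; note that floor(2*n/3 - 1/2) equals (4*n - 3) // 6 in exact integer arithmetic (the function name is illustrative):

```python
def h_prime(n, memo={0: 0}):
    # H'(0) = 0; H'(n) = 1 + H'(floor(2*n/3 - 1/2)) for n > 0,
    # where floor(2*n/3 - 1/2) == (4*n - 3) // 6 for n > 0.
    if n not in memo:
        memo[n] = 1 + h_prime((4 * n - 3) // 6)
    return memo[n]
```

The argument shrinks by a factor of roughly 2/3 per step, so the recursion depth, and hence H'(n), grows logarithmically in n.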