Give a recursive algorithm btProd which takes a binary tree as input and outputs the product of the numbers contained in the tree. If the input is the null tree, the algorithm should return null.
Algorithm btProd(P)
Require: Input is a tree P
1: btProd(null) ← 0
2: btProd(leaf x) ← x
3: btProd(node L x R) ← btProd(L) + x + btProd(R)
That's the way I would do it, but I'm not sure if it's correct.
As mentioned in the comments, the product is commutative, so you can traverse the tree in any order you like (pre-, in-, or post-order). The recursion you sketched as pseudocode is correct, assuming that when you write + x + you mean btProd(L) times btProd(R). One more detail: the base case for null should return the multiplicative identity 1 rather than 0, since a 0 from a null child would zero out the entire product.
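If it helps, here is a minimal Python sketch of the corrected recursion. The tree encoding is my assumption for illustration (None for the null tree, a plain int for a leaf, and a (left, x, right) triple for an internal node):

def bt_prod(t):
    if t is None:
        return 1  # multiplicative identity, so null subtrees don't affect the product
    if isinstance(t, int):
        return t  # leaf: its own value
    left, x, right = t
    return bt_prod(left) * x * bt_prod(right)

print(bt_prod((3, 2, (4, 5, None))))  # 3 * 2 * 4 * 5 = 120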
This is a recursive algorithm that I came up with. I've seen examples of algorithms that are similar to this in books.
f(n)
    if n is an integer
        return n
    else
        l = left child of n
        r = right child of n
        return f(l) n f(r)
It can be used to evaluate expression trees like the one shown on the left in the above image in Θ(n) time. Assuming this is all correct, I want to extend it to evaluate expression trees like the one on the right, where common subexpressions are de-duplicated. I think the algorithm can evaluate these types of trees correctly, but I am not sure how long it would take. Perhaps some method of storing subtrees should be used? Such as:
f(n, A)
    if n is an integer
        return n
    else
        if n has more than 1 parent AND n is in A (A is a list of stored subtrees)
            return n from A
        else
            l = left child of n
            r = right child of n
            s = f(l, A) n f(r, A)
            add s to list A
            return s
Is this extension correct? It seems really messy. Also I have a feeling it would run in O(n²) time because the function would be called on n nodes and during each call it would have to iterate over a list of stored nodes that has an upper bound of n. Can this be done in better than quadratic time?
Processing a DAG should work fine if you store the result of a subgraph evaluation at the operator node upon the first visit. Any subsequent visit of that node then does not trigger a recursive call to evaluate the subexpression but simply returns the stored value. The technique is called "memoization". The run time is then proportional to the number of edges in the graph, assuming all operator evaluations are O(1).
Pseudocode:
f(n)
    if n is an integer
        return n
    else
        if property evalResult of n is defined
            return property evalResult of n
        else
            l = left successor of n
            r = right successor of n
            s = f(l) n f(r)
            set property evalResult of n to s
            return s
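Here is a runnable Python sketch of the same idea. The node representation is an assumption for illustration (plain ints for leaves, objects with op/left/right fields for operators); sharing a subexpression just means sharing the object:

import operator

class Node:
    def __init__(self, op, left, right):
        self.op = op              # e.g. operator.add, operator.mul
        self.left = left
        self.right = right
        self.eval_result = None   # memoized value, filled on first visit

def evaluate(n):
    if isinstance(n, int):
        return n
    if n.eval_result is None:     # first visit: recurse and store
        n.eval_result = n.op(evaluate(n.left), evaluate(n.right))
    return n.eval_result          # later visits: constant-time lookup

# (3 + 2) * (3 + 2), with the subexpression (3 + 2) shared, not duplicated
shared = Node(operator.add, 3, 2)
print(evaluate(Node(operator.mul, shared, shared)))  # 25; 'shared' is evaluated once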
I was trying to solve the problem of finding, given an array and two indexes, the minimum value between those two indexes in O(log n).
I saw the solution using a segment tree, but I couldn't understand why the time complexity of this solution is O(log n): if your range does not exactly match a node's range, you need to start splitting the search.
First proof:
The claim is that there are at most 2 nodes which are expanded at each level. We will prove this by contradiction.
Consider the segment tree given below.
Suppose there are 3 nodes that are expanded at some level. This means that the range extends from the leftmost colored node to the rightmost colored node. But notice that if the range extends to the rightmost node, then the full range of the middle node is covered, so that node immediately returns its value and is not expanded. This contradiction proves that at each level we expand at most 2 nodes, and since there are log n levels, the number of expanded nodes is 2⋅log n = Θ(log n).
Second proof:
There are four cases when querying the interval (x, y):
FIND(R, x, y)  // R is the node
    % Case 1
    if R.first = x and R.last = y
        return {R}
    % Case 2
    if y <= R.middle
        return FIND(R.leftChild, x, y)
    % Case 3
    if x >= R.middle + 1
        return FIND(R.rightChild, x, y)
    % Case 4
    P = FIND(R.leftChild, x, R.middle)
    Q = FIND(R.rightChild, R.middle + 1, y)
    return P union Q
Intuitively, each of the first three cases reduces the height of the remaining tree by 1. Since the tree has height log n, if only the first three cases occur, the running time is O(log n).
For the last case, FIND() divides the problem into two subproblems. However, we assert that this can happen at most once. After we call FIND(R.leftChild, x, R.middle), we are querying R.leftChild for the interval [x, R.middle]. Note that R.middle is the same as R.leftChild.last. If x > R.leftChild.middle, then it is Case 3; if x <= R.leftChild.middle, then we will call
FIND ( R.leftChild.leftChild, x, R.leftChild.middle );
FIND ( R.leftChild.rightChild, R.leftChild.middle + 1, R.leftChild.last );
However, the second FIND() is an exact-range query (Case 1): it immediately returns the value stored at R.leftChild.rightChild and therefore takes constant time, so the problem is not really split into two subproblems (strictly speaking it is, but one subproblem takes O(1) time to solve).
Since the same analysis holds for the rightChild of R, we conclude that after Case 4 happens for the first time, the running time T(h) (where h is the remaining height of the tree) satisfies
T(h) <= T(h-1) + c (c is a constant)
T(1) = c
which yields:
T(h) <= c * h = O(h) = O(log n) (since h is the height of the tree)
This completes the proof.
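To make the four cases concrete, here is a small Python sketch of a range-minimum segment tree. The array-based layout (node i has children 2i and 2i+1) and the power-of-two array length are simplifying assumptions:

def build(a):
    # Leaves live at tree[n..2n-1]; internal nodes are filled bottom-up.
    n = len(a)                          # assumed to be a power of two
    tree = [0] * (2 * n)
    tree[n:] = a
    for i in range(n - 1, 0, -1):
        tree[i] = min(tree[2 * i], tree[2 * i + 1])
    return tree

def find(tree, node, lo, hi, x, y):
    # Mirrors the four cases of FIND above; [lo, hi] is node's range.
    if lo == x and hi == y:                         # Case 1
        return tree[node]
    mid = (lo + hi) // 2
    if y <= mid:                                    # Case 2
        return find(tree, 2 * node, lo, mid, x, y)
    if x >= mid + 1:                                # Case 3
        return find(tree, 2 * node + 1, mid + 1, hi, x, y)
    return min(find(tree, 2 * node, lo, mid, x, mid),           # Case 4
               find(tree, 2 * node + 1, mid + 1, hi, mid + 1, y))

a = [5, 2, 7, 1, 9, 3, 8, 6]
t = build(a)
print(find(t, 1, 0, len(a) - 1, 2, 6))  # min of a[2..6] -> 1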
I have written the following algorithm that, given a node x in a binary search tree T, sets the field s for every node in the subtree rooted at x, such that for each node, s is the sum of all odd keys in the subtree rooted at that node.
OddNodeSetter(T, x):
    if (T.x == NIL):
        return 0
    if (T.x.key mod 2 == 1):
        T.x.s = T.x.key + OddNodeSetter(T, x.left) + OddNodeSetter(T, x.right)
    else:
        T.x.s = OddNodeSetter(T, x.left) + OddNodeSetter(T, x.right)
    return T.x.s
I've thought of using the master theorem for this, with the recurrence
T(n) = T(k) + T(n-k-1) + 1 for 1 <= k < n
However, since the sizes of the two recursive calls vary depending on k and n-k-1 (i.e., the numbers of nodes in the left and right subtrees of x), I can't quite figure out how to solve this recurrence. For example, when the left and right subtrees of x have equally many nodes, we can express the recurrence in the form
T(n) = 2T(n/2) + 1
which can be solved easily, but that doesn't prove the running time in all cases.
Is it possible to prove this algorithm runs in O(n) with the master theorem, and if not what other way is there to do this?
The algorithm visits every node in the tree exactly once, hence O(n).
Update:
And obviously, a visit takes constant time (not counting the recursive calls).
There is no need to use the Master theorem here.
Think of the problem this way: what is the maximum number of operations you have to do for each node in the tree? It is bounded by a constant. And what is the number of nodes in the tree? It is n.
A constant times n is still O(n).
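That said, if you want to solve the recurrence anyway, the substitution method works where the Master theorem does not. Guess T(n) <= 2n + 1 (with T(0) <= 1 for the NIL base case); then for any split k,

T(n) = T(k) + T(n-k-1) + 1
     <= (2k + 1) + (2(n-k-1) + 1) + 1
     = 2n + 1

so T(n) = O(n) no matter how unbalanced the tree is.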
My scenario is a perfectly balanced binary tree containing integers.
I've searched and found many explanations of best/worst case scenarios for binary trees: the best case is O(1) (target found at the root), and the worst is O(log(n)) (the height of the tree).
I have found little to no information on calculating the average complexity. The best answer I could find was O(log(n)) - 1, but I don't quite understand how that average case is calculated (assuming it is correct).
Also, would searching for an integer not in the tree yield the same complexity? I think it would, but any insight is appreciated.
Let's say we have a perfectly balanced binary tree containing n = 2ᵏ integers, so the depth is log₂(n) = k.
The best and worst case is, as you say, O(1) and O(log(n)).
Short way
Let's pick a random integer X (uniformly distributed) from the binary tree. The last row of the tree contains about the same number of integers as the first k-1 rows together. With probability 1/2, X is in the first k-1 rows, so we need at most O(k-1) = O(log(n)-1) steps to find it; also with probability 1/2, X is in the last row, where we need O(k) = O(log(n)) steps.
In total we get
E[X] ≤ P(row of X ≤ k-1)⋅O(log(n)-1) + P(row of X = k)⋅O(log(n))
= 1/2⋅O(log(n)-1) + 1/2⋅O(log(n))
= 1/2⋅O(log(n)-1) + 1/2⋅O(log(n)-1)
= O(log(n)-1)
Notice: this is a little ugly, but in O-notation O(x) and O(x ± c) are the same for any constant value c.
Long way
Now let's try to calculate the average case for a random (uniformly distributed) integer X contained in the tree, and let's call the set of integers in the i-th "row" of the tree Tᵢ. Tᵢ contains 2ⁱ elements; T₀ denotes the root.
The probability of picking an integer in the i-th row is P(X ∈ Tᵢ) = 2ⁱ/n = 2ⁱ⁻ᵏ.
To find an integer in row i takes O(2⋅i) = O(i) steps.
So the expected number of steps is
E[X] = Σᵢ₌₀,...,ₖ₋₁ O(i)⋅2ⁱ⁻ᵏ.
To simplify this we use
O(i)⋅2ⁱ⁻ᵏ + O(i+1)⋅2ⁱ⁺¹⁻ᵏ ≤ O(i)⋅2ⁱ⁺¹⁻ᵏ + O(i+1)⋅2ⁱ⁺¹⁻ᵏ ≤ O(i+1)⋅2ⁱ⁺²⁻ᵏ
This leads us to
E[X] = Σᵢ₌₀,...,ₖ₋₁ O(i)⋅2ⁱ⁻ᵏ ≤ O(k-1)⋅2⁰
Since k = log(n), we see that the average case is in O(log(n)-1) = O(log(n)).
Values not in the tree
If the value is not in the tree, you have to walk down a whole root-to-leaf path: after log(n) steps you reach a leaf. If that leaf equals your input, you have found what you searched for; if not, you know that the value you searched for is not contained in the tree. So searching for a value that is not in the tree also takes O(log(n)).
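As a quick sanity check, you can compute the exact average for a perfect tree directly: level i holds 2ⁱ of the keys and costs i+1 comparisons. A small Python sketch (using a tree with 2ᵏ-1 keys, slightly different bookkeeping from the answer above but the same asymptotics):

def avg_comparisons(k):
    # Perfect BST with n = 2**k - 1 keys; level i (root = level 0)
    # holds 2**i keys, each costing i + 1 comparisons to find.
    n = 2 ** k - 1
    return sum((i + 1) * 2 ** i for i in range(k)) / n

for k in (4, 8, 16, 20):
    print(k, round(avg_comparisons(k), 3))  # approaches k - 1, i.e. log2(n+1) - 1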
This is a homework problem and I'm having difficulty thinking it through. Please give me some ideas about recursive and DP solutions. Thanks a lot.
Generate and print all structurally distinct full binary
trees with n leaves in dotted-parentheses form, where
"full" means all internal (non-leaf) nodes have
exactly two children.
For example, there are 5 distinct full binary trees
with 4 leaves each.
In Python you could do this:
def gendistinct(n):
    leafnode = '(.)'
    dp = []
    newset = set()
    newset.add(leafnode)
    dp.append(newset)
    for i in range(1, n):
        newset = set()
        for j in range(i):
            for leftchild in dp[j]:
                for rightchild in dp[i-j-1]:
                    newset.add('(' + '.' + leftchild + rightchild + ')')
        dp.append(newset)
    return dp[-1]

alltrees = gendistinct(4)
for tree in alltrees:
    print(tree)
Another Python example with a different strategy.
This is recursive and uses generators. It is slower than the other implementation here but should use less memory since only one list should ever exist in memory at a time.
#!/usr/bin/env python
import itertools

def all_possible_trees(n):
    if n == 1:
        yield 'l'
    for split in range(1, n):
        gen_left = all_possible_trees(split)
        gen_right = all_possible_trees(n - split)
        for left, right in itertools.product(gen_left, gen_right):
            yield [left, right]

if __name__ == '__main__':
    import sys
    n = int(sys.argv[1])
    for thing in all_possible_trees(n):
        print(thing)
I don't see an obvious way to do it with recursion, but no doubt there is one.
Rather, I would try a dynamic programming approach.
Note that under your definition of full tree, a tree with n leaves has n-1 internal nodes. Also note that the trees can be generated from smaller trees by joining two smaller trees at a new root: a left subtree with j leaves and a right subtree with n-j leaves, for j = 1 to n-1.
Note also that the "trees" of various sizes can be stored as dotted-parenthesis strings. To build a new tree from these, concatenate (Left, Right).
So start with the single tree with 1 leaf (that is, a single node), and build the lists of trees of increasing size up to n. To build the list of k-leaf trees: for each j = 1 to k-1, for each tree with j leaves, for each tree with k-j leaves, concatenate to build a tree with k leaves and add it to the list.
As you build the n-leaf trees, you can print them out rather than store them.
There are 5*1 + 2*1 + 1*2 + 1*5 = 14 trees with 5 leaves.
There are 14*1 + 5*1 + 2*2 + 1*5 + 1*14 = 42 trees with 6 leaves.
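These counts are the Catalan numbers C(n-1), and the same DP recurrence can be used just to count the trees, which confirms the arithmetic above. A short sketch:

def count_distinct(n):
    # counts[k] = number of full binary trees with k leaves
    counts = [0] * (n + 1)
    counts[1] = 1
    for k in range(2, n + 1):
        counts[k] = sum(counts[j] * counts[k - j] for j in range(1, k))
    return counts[n]

print([count_distinct(n) for n in range(1, 7)])  # [1, 1, 2, 5, 14, 42]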
You can use recursion: at the i-th step, consider the i-th level of the tree and choose which nodes are present on that level, subject to these constraints:
- each node has a parent on the previous level
- no single children are present (by your definition of a "full" tree)
The recursion ends when you have exactly N nodes.