Recurrence relation for a tree

Suppose there is a tree where the number of nodes doubles at each level: 2, then 4, then 8, and so on. How can we write a recurrence relation for such a tree?

Using the substitution method: T(n) = 2T(n/2) + n
= 2[2T(n/2^2) + n/2] + n
= 2^2 T(n/2^2) + n + n
= 2^2 [2T(n/2^3) + n/2^2] + n + n
= 2^3 T(n/2^3) + n + n + n
Substituting k times, we get
T(n) = 2^k T(n/2^k) + nk
The recursion terminates when n/2^k = 1, i.e. k = log n base 2.
Thus T(n) = 2^(log2 n) T(1) + n log2 n = n + n log2 n (taking T(1) = 1),
so the tight bound for this recurrence is Θ(n log n), with the log base 2.
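To see that the closed form matches, here is a small check of the derivation (my own sketch, not part of the original answer; it assumes the base case T(1) = 1, which the derivation above also uses implicitly):

```python
import math

def T(n):
    """Evaluate T(n) = 2*T(n/2) + n directly, for n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

for k in range(1, 11):
    n = 2 ** k
    closed_form = n + n * math.log2(n)   # n + n*log2(n) from the derivation
    print(n, T(n), closed_form)          # the two columns match exactly
```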

This is the standard divide-and-conquer recurrence:
T(n) = 2T(n/2) + O(n) [the O(n) is for the Combine step]
T(1) = O(1)
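For intuition, here is a minimal merge-sort sketch (my own, not from the answer above) with the recurrence's terms marked in comments:

```python
def merge_sort(a):
    if len(a) <= 1:                   # T(1) = O(1): base case
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])        # T(n/2)
    right = merge_sort(a[mid:])       # T(n/2)
    return merge(left, right)         # Combine: O(n)

def merge(left, right):
    """Merge two sorted lists in O(n) time."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
```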


How do you solve this recurrence relation using the recursion tree?

I am having a difficult time understanding and solving recurrence relations. Can someone please help me with this particular one?
T(n)=T(n/3)+T(2n/3)+n
Draw the recursion tree for this recurrence and keep expanding it level by level.
This is the standard example "Recursion tree T(n) = T(n/3) + T(2n/3) + cn", which may help you further. It's almost exactly your question; just use c = 1.
The shortest path to a leaf occurs when we take the n/3 branch each time.
With k as the height along this path, n(1/3)^k ≤ 1, meaning k ≥ log n base 3.
The longest path to a leaf occurs when we take the 2n/3 branch each time.
With k as the height along this path, n(2/3)^k ≤ 1, meaning k ≥ log n base 3/2.
Also note that on any full level of the tree, the additive terms sum to n (for example, n/3 + 2n/3 = n on the second level).
Now, let's look at what we have.
If you pick the height to be
log3 n
(every level down to that depth is full), the result is:
(1) T(n) ≥ n log3 n, so T(n) = Ω(n log n)
If you pick the height to be
log3/2 n
(no level contributes more than n), the result is:
(2) T(n) ≤ n log3/2 n, so T(n) = O(n log n)
These two bounds (1 & 2) lead us to T(n) = Θ(n log n).
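To convince yourself numerically, here is a small check (my own sketch, not from the original answer) that evaluates the recurrence directly, taking T(n) = 1 for n < 2 as the base case:

```python
# Evaluate T(n) = T(n/3) + T(2n/3) + n directly on real-valued n.
# Base case T(n) = 1 for n < 2 is an assumption of this sketch.
import math

def T(n):
    if n < 2:
        return 1.0
    return T(n / 3) + T(2 * n / 3) + n

for n in [10, 100, 1000, 10000]:
    # The ratio stays bounded between constants, consistent with Theta(n log n).
    print(n, round(T(n) / (n * math.log2(n)), 3))
```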

How to solve this recurrence to find the time complexity

There is a version of merge sort where the array is divided into parts of size n/3 and 2n/3 each time (instead of n/2 and n/2 as in the original).
The recurrence here would be:
T(n)=T(n/3)+T(2n/3)+n
Now the problem is, how to solve this to get the time complexity of this implementation?
There is the Akra–Bazzi method to calculate the complexity of cases more complex than the Master Theorem is intended for.
In this example you get the same Θ(n log n) as for equal parts: (1/3)^p + (2/3)^p = 1 gives p = 1, and with g(u) = u the integral from 1 to n of u/u^2 du equals ln n, so T(n) = Θ(n^p (1 + ln n)) = Θ(n log n).
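If you want to check both Akra–Bazzi steps mechanically, here is a sketch using sympy (my own; it assumes sympy is installed):

```python
# Verify the Akra-Bazzi computation: solve (1/3)^p + (2/3)^p = 1 for p,
# then evaluate the integral of g(u)/u^(p+1) with g(u) = u and p = 1.
import sympy as sp

p, u, n = sp.symbols('p u n', positive=True)

# Numerically solve for the Akra-Bazzi exponent; the root is p = 1.
p_val = sp.nsolve(sp.Rational(1, 3)**p + sp.Rational(2, 3)**p - 1, p, 0.5)
print(p_val)  # 1.0

# With p = 1 and g(u) = u: g(u)/u^(p+1) = 1/u, integrated from 1 to n.
print(sp.integrate(1 / u, (u, 1, n)))  # log(n), so T(n) = Theta(n log n)
```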
T(n) denotes the total time taken by the algorithm.
We can calculate the time complexity of this recurrence relation with a recursion tree.
T(n) = T(n/3) + T(2n/3) + n ------- (1)
The root node of T(n) has value n, and it expands into two parts:
T(n/3) and T(2n/3)
In the next step we find the root-node values of T(n/3) and T(2n/3).
To compute T(n/3), substitute n/3 in place of n in equation (1):
T(n/3) = T(n/9) + T(2n/9) + n/3
To compute T(2n/3), substitute 2n/3 in place of n in equation (1):
T(2n/3) = T(2n/9) + T(4n/9) + 2n/3
The root node of T(n/3) has value n/3, and it expands into two parts:
T(n/9) and T(2n/9)
Keep expanding nodes until you reach individual elements, i.e. T(1).
Calculating the depth:
To find the depth, set n/(b^i) = 1.
Here b is 3 along the left branch and 3/2 along the right branch, so we get n/(3^i) or n/((3/2)^i).
If n = 9, then n/3 = 3 and 2n/3 = 6;
at the next level, n/9 = 1, 2n/9 = 2, 4n/9 = 4.
The right part of the recursion tree, n -> 2n/3 -> 4n/9, is the longest path taken while expanding the root node.
If we took the left part of the tree, we would use n/(3^i) to find the depth at which the tree stops.
Since we use the right part of the tree, set n/((3/2)^i) = 1:
n = (3/2)^i
log n = i log(3/2)
i = log n base 3/2
Now calculate the cost of each level.
Since the cost of each level is the same, namely n:
T(n) = cost per level * depth
T(n) = n * i
T(n) = n (log n base 3/2)
Equivalently, we can calculate T(n) = n + n + n + ... (i times), i.e. T(n) = n * i.
You can also find the time complexity using the Akra–Bazzi method.
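As a sanity check on the depth argument, here is a small sketch (my own) that measures the actual height of the recursion tree and compares it with log n base 3/2:

```python
# Measure the height of the recursion tree for T(n) = T(n/3) + T(2n/3) + n.
# Leaves are subproblems with n < 2 -- an assumption of this sketch.
import math

def height(n):
    if n < 2:
        return 0
    return 1 + max(height(n / 3), height(2 * n / 3))

for n in [10, 100, 1000, 10000]:
    # The measured height grows like log n base 3/2, differing from it
    # by at most a small additive constant.
    print(n, height(n), round(math.log(n, 3 / 2), 1))
```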

Is CLRS completely accurate to state that max-heapify running time is described by the recurrence `T(n) = T(2n/3) + O(1)`?

In CLRS on page 155, about max-heaps, the running time of max-heapify is described as T(n) = T(2n/3) + O(1).
I understand why the first recursive call is on a subproblem of size 2n/3 in the case where we have a nearly complete binary tree (always the case with heaps) whose deepest level of nodes is half full, and we recurse on the child that is the root of the subtree containing those deepest-level nodes.
What I don't understand is: after that first recursive call, the subtree is now a complete binary tree, so the next recursive calls will be on problems of size n/2.
So is it accurate to simply state that the running time of max-heapify is described by the recurrence T(n) = T(2n/3) + O(1)?
Converting my comment to an answer: if you assume that T(n), the time required to run max-heapify on a subtree with n nodes, is a nondecreasing function of n, then we know that T(m) ≤ T(n) for any m ≤ n. You're correct that 2n/3 is the worst-case ratio and that after the first level of the recursion it won't be reached, but under the above assumption you can safely conclude that T(n/2) ≤ T(2n/3), so we can upper-bound the recurrence as
T(n) ≤ T(2n / 3) + O(1)
even if strict equality doesn't hold. That then lets us use the master theorem to conclude that T(n) = O(log n).
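For concreteness, here is a minimal max-heapify sketch (my own, using a 0-indexed array; CLRS's pseudocode is 1-indexed):

```python
def max_heapify(a, i, heap_size):
    """Sift a[i] down until the max-heap property holds for the subtree
    rooted at i. Each level does O(1) work before recursing into one
    child's subtree, matching T(n) <= T(2n/3) + O(1)."""
    left, right = 2 * i + 1, 2 * i + 2
    largest = i
    if left < heap_size and a[left] > a[largest]:
        largest = left
    if right < heap_size and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size)

a = [1, 14, 10, 8, 7, 9, 3, 2, 4]
max_heapify(a, 0, len(a))
print(a)  # [14, 8, 10, 4, 7, 9, 3, 2, 1]: root sifted down
```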

Randomized selection complexity

After analyzing the algorithm's complexity, I have a few questions:
For the best-case complexity, the recurrence relation is T(n) = T(n/2) + dn, which implies that the complexity is Θ(n).
By the master theorem I can clearly see why this is true, but when I draw the algorithm's recursive calls as a tree I don't fully understand the final result. (It seems like I have one branch of height log(n), and at each level I perform a partition costing O(n), so it seems it should be n log(n).)
(Just as an aside: this is very similar to the best case of the merge sort algorithm, but here we ignore the unwanted subarray after partitioning.)
Thanks!
It is as Yves Daoust wrote. Imagine it with real numbers, e.g. n = 1024:
T(n) = T(n/2) + dn
T(1024) = T(512) + 1024
T(512) = T(256) + 512
....
T(2) = T(1) + 2 -> this would be the last operation
Therefore you get 1024 + 512 + 256 + ... + 1 <= 2048, which is 2n.
You might think that dn is always as big as n, but in the recurrence n is not a global variable; it is the local subproblem size in each call.
So there are log(n) calls, but they do not each take n time; they take less and less time.
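To make the "less and less time" point concrete, here is a tiny sketch (my own, taking d = 1 and T(1) = 1 as assumptions):

```python
def T(n):
    """T(n) = T(n/2) + n with T(1) = 1, for n a power of two."""
    if n == 1:
        return 1
    return T(n // 2) + n

for k in range(1, 11):
    n = 2 ** k
    print(n, T(n), 2 * n)   # T(n) = 2n - 1, always below the 2n bound
```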

Recurrence for the Worst-Case Running Time of Quicksort

Assume we construct a quicksort in which choosing the pivot value takes linear time. Find the recurrence for the worst-case running time.
My answer:
T(n) = T(n-1) + T(1) + Θ(n)
Worst case occurs when the subarrays are completely unbalanced.
There is 1 element in one subarray and (n-1) elements in the other subarray.
Θ(n) because it takes linear time to find the pivot.
Am I doing this correctly?
Your recurrence is mostly correct, but you don't actually have two recursive calls made. In the worst-case for quicksort, the pivot will be the largest or smallest element in the array, so you'll recur on one giant array of size n - 1. The other subarray has length 0, so no recursive calls are made. To top everything off, the total work done is Θ(n) per level, so the recurrence relation would more appropriately be
T(n) = T(n - 1) + Θ(n)
This in turn solves to Θ(n^2).
Hope this helps!
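To see the Θ(n^2) growth concretely, you can unroll the recurrence numerically (my own sketch, taking the Θ(n) term to be exactly n and T(1) = 1):

```python
def T(n):
    """T(n) = T(n-1) + n with T(1) = 1: the worst-case quicksort recurrence,
    unrolled iteratively to avoid deep recursion."""
    total = 1                      # T(1) = 1
    for k in range(2, n + 1):
        total += k                 # each level adds its subproblem size
    return total

for n in [10, 100, 1000]:
    print(n, T(n), n * (n + 1) // 2)  # matches the arithmetic series n(n+1)/2
```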
You cannot determine the behavior exactly in general, because the recurrence is T(n) = T(n-k) + T(k-1) + n, where k is the position of the pivot, and we cannot evaluate it until we have the value of k.
With a pivot that always splits the array in fixed proportions, this becomes
T(n) = T(an/(a+b)) + T(bn/(a+b)) + n
where a/(a+b) and b/(a+b) are the fractions of the array under consideration.
