Complexity of T(n) = 2T(n/2) + n/2 (without the Master Theorem)?

I am looking at the best-case running time for merge sort and have found the following recurrence relation: T(n) = 2T(n/2) + n/2. I am aware that merge sort is Θ(n log n) in all cases. In attempting to solve this recurrence relation, I use telescoping:
T(n) = 2*T(n/2) + n/2
T(n) = 2^2*T(n/4) + n/4 + n/2
T(n) = 2^k*T(1) + (n/2 + n/4 + ... + n/2^k)
2^k = n -> log_2(n) = k
T(n) = n + n(1/2 + 1/4 + ... + 1/n)
I am unsure how to solve the summation in the last part... I'm not even sure if that is correct. My thinking is that there would be log_2(n) total terms in the summation? How can I derive that 2T(n/2) + n/2 is Θ(n log n) without using the Master Theorem?

As pointed out in the comment, your calculation seems to be wrong.
T(n) = 2*T(n/2) + n/2
T(n) = 2*(2*T(n/4) + n/4) + n/2 = 4*T(n/4) + 2*(n/4) + n/2 = 4*T(n/4) + 2*(n/2)
T(n) = 4*(2*T(n/8) + n/8) + 2*(n/2) = 8*T(n/8) + (n/2) + 2*(n/2) = 8*T(n/8) + 3*(n/2)
...
T(n) = 2^k * T(n / 2^k) + k*(n/2), 2^k = n ---> k = log(n)
T(n) = n * T(1) + log(n) * (n/2)
T(n) = n + n*log(n)/2    (taking T(1) = 1)
Therefore the time complexity of merge sort is O(n*log(n)).
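
To convince yourself the corrected expansion is right, here is a minimal C sketch (the base case T(1) = 1 is an assumption) that evaluates the recurrence directly and compares it against the closed form n + (n/2)*log2(n) derived above:

#include <stdio.h>
#include <math.h>

/* T(n) = 2T(n/2) + n/2 with T(1) = 1 (assumed); n is kept a power
   of two so the halving is exact. */
static double T(long n) {
    if (n == 1) return 1.0;
    return 2.0 * T(n / 2) + n / 2.0;
}

int main(void) {
    for (long n = 2; n <= (1L << 20); n *= 4) {
        double closed = n + (n / 2.0) * log2((double)n);
        printf("n=%8ld  T(n)=%12.0f  n+(n/2)log2(n)=%12.0f\n", n, T(n), closed);
    }
    return 0;
}

The two columns agree exactly for powers of two, and the (n/2)*log(n) term dominates, matching Θ(n log n).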

Related

Solving recurrence using the iteration method

I need help solving T(n) = T(n/4) + T(n/3) + 2n using the iteration method (recursion tree). I am thinking it would be either Θ(2n) or Θ(n)?
It's straightforward. Since T is nondecreasing, we have the following two inequalities:
T(n) > 2T(n/4) + 2n
and
T(n) < 2T(n/3) + 2n
Now try to find an upper bound and a lower bound by expansion. From the two cases together you will find that T(n) = Θ(n).
For example, for T'(n) = 2T(n/3) + 2n we have the following expansion:
T'(n) = 2T'(n/3) + 2n = 2^2 T'(n/3^2) + (1 + 2/3) * 2n
By induction we can show that:
T'(n) = 2^(log_3(n)) T'(1) + (1 + 2/3 + (2/3)^2 + ...) * 2n
< n + 6n = 7n
because 2^(log_3(n)) < 2^(log_2(n)) = n (taking T'(1) = 1), and (1 + 2/3 + (2/3)^2 + ...) is a geometric series with ratio 2/3, so it is bounded by 1/(1 - 2/3) = 3.
You can do the same analysis for the lower bound of T(n).
Therefore, as c_1 * n <= T(n) <= c_2 * n, we can conclude that T(n) is in Θ(n).
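
As a sanity check on the Θ(n) claim, here is a small C sketch (with an assumed base case T(n) = 1 for n <= 1, and integer division standing in for the exact n/3 and n/4) showing T(n)/n settling at a constant:

#include <stdio.h>

/* T(n) = T(n/4) + T(n/3) + 2n, with T(n) = 1 for n <= 1 assumed. */
static double T(long n) {
    if (n <= 1) return 1.0;
    return T(n / 4) + T(n / 3) + 2.0 * n;
}

int main(void) {
    for (long n = 10; n <= 10000000; n *= 10)
        printf("n=%9ld  T(n)/n = %.4f\n", n, T(n) / n);
    return 0;
}

Ignoring rounding, the ratio heads toward the fixed point c = 2 / (1 - 1/4 - 1/3) = 4.8, which indeed sits between the lower-bound constant 4 and the upper-bound constant 6 obtained from the two inequalities.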

calculate the complexity of an algorithm using Back Substitution

I have the function:
int Extra(int n);   /* defined elsewhere */

int Ps(int n) {
    if (n == 1) return 1;
    return Extra(n) + Ps(n / 4) + Ps(n / 4);
}
Extra(n) is O(n)
I have tried to find T(n) for this function, which I wrote as T(n) = T(n) + 2T(n/4), and I calculated the complexity using the Master Theorem: it is O(n). But I don't know how to find its complexity using back substitution.
First, your recurrence is wrong: you left out Extra(n) when writing down the time complexity. The correct recurrence is T(n) = 2T(n/4) + n. This is easy to solve by back substitution:
T(n) = 2T(n/4) + n = 2 (2 T(n/16) + n/4) + n = 2^2 T(n/16) + n/2 + n =
2^2 (2 T(n/64) + n/16) + n/2 + n = 2^3 T(n/64) + n/4 + n/2 + n
Now, by mathematical induction, if we suppose n = 4^k, after k steps we reach T(1):
T(n) = 2^k T(1) + n (1 + 1/2 + 1/4 + ... + 1/2^(k-1)) <= sqrt(n) + 2n
The last step uses the fact that the sum is a geometric series with ratio 1/2, so it is at most 2, while the 2^k T(1) = sqrt(n) term is lower order. Hence T(n) is in Θ(n), and in particular O(n).
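
For n an exact power of four the substitution can be checked mechanically. The following C sketch (assuming T(1) = 1) evaluates the recurrence and compares it with the exact closed form 2n - sqrt(n) that the expansion above yields:

#include <stdio.h>
#include <math.h>

/* T(n) = 2T(n/4) + n with T(1) = 1 (assumed), n a power of four. */
static double T(long n) {
    if (n == 1) return 1.0;
    return 2.0 * T(n / 4) + (double)n;
}

int main(void) {
    for (long n = 4; n <= (1L << 24); n *= 4)
        printf("n=%10ld  T(n)=%12.0f  2n-sqrt(n)=%12.0f\n",
               n, T(n), 2.0 * n - sqrt((double)n));
    return 0;
}

Both columns match, and T(n) stays below the 2n bound from the geometric series.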

Is T(n)= T(n-1) + n always n(n+1)/2 or O(n^2)

I watched a video where they prove T(n) = T(n-1) + n is O(n^2).
I have the following expressions which are:
T(1) = 4
T(N) = T(N – 1) + N + 3, N > 1
My question is: is the expression above solved the same way, even though there is a +3 after the N?
The question is a bit messy, but I hope you get the point. If there are questions I will try to explain better.
In a word: is T(N) = T(N - 1) + N + 3 = O(N^2)?
T(n) = T(n-1) + (n-1) + 4 => the given equation, rewriting n + 3 as (n-1) + 4
T(n) = T(n-1) + n-1 + T(1) ...(1)
Now, T(1) is a constant, so we drop it for the asymptotic analysis.
Therefore, from eq(1),
T(n) = T(n-1) + (n-1) ...(2)
Eq(2) reduces to T(n) = T(n-k) + n*k - k*(k+1)/2 ...(3)
Upon substituting (n-k)=1 or k=(n-1) in eq(3),
we get,
T(n) = T(1) + n*(n-1) - (n-1)(n)/2
T(n) = n*(n-1)/2 + T(1) => O(n^2)
PS: If we don't neglect T(1) in eq(1), the final equation we get is T(n) = n*(n-1)/2 + T(1) + 4*k => T(n) = n*(n-1)/2 + 4 + 4*(n-1), which still gives O(n^2) as the final answer.
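
The closed form in the PS is easy to verify by just running the recurrence; a small C sketch:

#include <stdio.h>

int main(void) {
    long long t = 4;  /* T(1) = 4 */
    for (long long n = 2; n <= 100000; n++) {
        t += n + 3;   /* T(n) = T(n-1) + n + 3 */
        long long closed = n * (n - 1) / 2 + 4 + 4 * (n - 1);
        if (t != closed) {
            printf("mismatch at n = %lld\n", n);
            return 1;
        }
    }
    printf("T(n) = n(n-1)/2 + 4 + 4(n-1) holds up to n = 100000\n");
    return 0;
}

The quadratic term n(n-1)/2 dominates, so the extra +3 only shifts the linear part and the answer is still O(n^2).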

Time complexity using recursion tree method

I've been trying to solve the given problem using the recursion tree method, but my answer does not come out in the expected form.
T(n)=8T(n/2)+n^2
The expected answer is Θ(n^3).
Try to expand the equation:
T(n) = 8 T(n/2) + n^2
T(n) = 8(8T(n/4) + (n/2)^2) + n^2 = 8^2 T(n/4) + n^2 + 8(n/2)^2
T(n) = 8^3 T(n/8) + n^2 + 8(n/2)^2 + 8^2 (n/4)^2
Now you can generalize the above sum:
T(n) = sum 8^(i) (n/2^i)^2 for i from 0 to log(n)
Simplify:
T(n) = sum 2^(3i) n^2/2^(2i) for i from 0 to log(n)
T(n) = sum 2^i n^2 for i from 0 to log(n)
T(n) = n^2 (sum 2^i for i from 0 to log(n))
T(n) = n^2 * (2^(log(n)+1) - 1) = n^2 * (2n - 1) = Theta(n^3)
In the above, you should be aware that sum 2^i for i from 0 to log(n) is 1 + 2 + 2^2 + ... + 2^(log(n)) = 2^(log(n) + 1) - 1 = 2n - 1.
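
A quick numeric check (assuming T(1) = 1 and n a power of two) confirms both the closed form n^2 * (2n - 1) and the Θ(n^3) growth; in C:

#include <stdio.h>

/* T(n) = 8T(n/2) + n^2 with T(1) = 1 (assumed), n a power of two. */
static double T(long n) {
    if (n == 1) return 1.0;
    return 8.0 * T(n / 2) + (double)n * n;
}

int main(void) {
    for (long n = 2; n <= (1L << 12); n *= 2)
        printf("n=%6ld  T(n)=%14.0f  n^2*(2n-1)=%14.0f\n",
               n, T(n), (double)n * n * (2.0 * n - 1.0));
    return 0;
}

The two columns agree exactly, and T(n)/n^3 tends to 2.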

recursive big-o for modified cases from merge sort (T(n/2) + cn)

What would be the big-O for
T(n)= T(n/2) + cn
I know the merge sort case T(n) = 2T(n/2) + cn, i.e. linearithmic,
and I was able to solve T(n) = 2T(n/2) + c to get linear, but am confused by the first one...
The first one should be pretty simple:
T(n) = T(n/2) + cn = T(n/4) + cn/2 + cn = T(n/8) + cn/4 + cn/2 + cn
= T(1) + c(n/2^m + ... + n/4 + n/2 + n)
<= T(1) + c(n + n/2 + n/4 + n/8 + ...) = 2cn + T(1)
Where m = log(n).
Hence, in big-O notation, T(n) = O(n).
By the way, it is not hard to prove this is actually Θ(n). Starting from the same expansion,
T(n) = T(1) + c(n/2^m + ... + n/4 + n/2 + n)
>= T(1) + c(n/2 + n/4 + n/8 + ...) = cn + T(1)
and therefore T(n) is actually Θ(n), since it is both O(n) and Ω(n).
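
The same bounds are easy to see numerically; a minimal C sketch with c = 1 (an arbitrary choice) and T(1) = 1 assumed:

#include <stdio.h>

/* T(n) = T(n/2) + c*n with c = 1 and T(1) = 1 (both assumed). */
static double T(long n) {
    const double c = 1.0;
    if (n == 1) return 1.0;
    return T(n / 2) + c * n;
}

int main(void) {
    for (long n = 2; n <= (1L << 20); n *= 4)
        printf("n=%8ld  T(n)=%10.0f  T(n)/n=%.4f\n", n, T(n), T(n) / n);
    return 0;
}

For powers of two T(n) = 2n - 1 exactly, so T(n)/n approaches 2: squeezed between the cn lower bound and the 2cn upper bound, i.e. Θ(n).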
