Use substitution to find the running time of T(1) = 1; T(n) = 4T(n/3) + n

I've got to this point: 4^log n + 3n[(4/3)^log n - 1], and am having trouble finishing it.
Log is base 3. (Wasn't sure how to do subscripts and exponents.)
Thanks.

The Master Theorem is the general tool for such problems, but if you want the specific solution by substitution, it goes as follows:
T(n) = 4T(n/3) + n
T(n) = 4(4T(n/9) + n/3) + n = 4^2T(n/9) + (4/3)n + n
T(n) = 4^2(4T(n/27) + n/3^2) + (4/3)n + n = 4^3T(n/27) + (4/3)^2n + (4/3)n + n
T(n) = 4^k T(n/3^k) + (4/3)^(k-1) n + (4/3)^(k-2) n + ... + (4/3)n + n
boundary condition
n/3^k = 1, k = log3(n)
T(1) = 1
geometric series summation
T(n) = 4^log3(n) + n((4/3)^log3(n) - 1)/((4/3) - 1)
T(n) = 4^log3(n) + 3n((4/3)^log3(n) - 1)
using log rules (a^log3(n) = n^log3(a))
T(n) = n^log3(4) + 3n*n^(log3(4/3)) - 3n
T(n) = n^log3(4) + 3n^(1 + log3(4) - 1) - 3n
T(n) = n^log3(4) + 3n^log3(4) - 3n = 4n^log3(4) - 3n
T(n) = O(n^log3(4))
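As a quick sanity check (a sketch of my own, not part of the original answer), the recurrence can be evaluated directly at powers of 3 and compared against the exact solution at n = 3^k, which works out to 4*4^k - 3*3^k = 4n^(log3 4) - 3n, confirming Θ(n^(log3 4)):

```python
import math

def T(n):
    # T(1) = 1; T(n) = 4*T(n/3) + n, evaluated at exact powers of 3
    if n <= 1:
        return 1
    return 4 * T(n // 3) + n

# Exact solution at n = 3^k: T(n) = 4*n^(log3 4) - 3n
for k in range(1, 8):
    n = 3 ** k
    closed = 4 * n ** math.log(4, 3) - 3 * n
    print(n, T(n), round(closed))
```

The recursive values and the closed form agree exactly at every power of 3, so only the geometric-series bookkeeping above needs to be trusted.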

This is a perfect spot to use the Master Theorem. In your case, we want to write the recurrence in the form
T(n) = aT(n/b) + O(n^d).
With your recurrence, you get
a = 4
b = 3
d = 1
Since log_b(a) = log_3(4) > d, the Master Theorem says that this solves to O(n^(log_3 4)).
Hope this helps!

Related

Solution for the recurrence algorithm

How is T(n) = 4T(n/2 + 2) + n solved?
I found a solution in a website:
https://ita.skanev.com/04/04/03.html
I don't understand it.
Is T(n) = 4T(n/2 + 2) + n equivalent to T(n) = 4T(n/2) + (n + 2)?
I think this is just Case 1 in the Master Theorem. You can read the theorem here https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms).
Basically, in your problem, a = 4, b = 2, and c = 1. So, we have log_b(a) = 2 > c. Therefore T(n) = Theta(n^2).
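A rough empirical check of the Theta(n^2) claim (a sketch of my own; the base case T(n) = 1 for n <= 4 is an assumption, chosen so that n/2 + 2 < n for every recursive call): if T(n) = Theta(n^2), then T(2n)/T(n) should approach 4. Memoization keeps the evaluation fast, since all four children of a call share the same argument:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Hypothetical base case: T(n) = 1 for n <= 4 (n//2 + 2 < n only when n > 4)
    if n <= 4:
        return 1
    return 4 * T(n // 2 + 2) + n

# If T(n) = Theta(n^2), the ratio T(2n)/T(n) tends to 4
print(T(2**20) / T(2**19))
```

The ratio lands very close to 4 for large n, consistent with Case 1 of the Master Theorem.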

Solving recurrence relations where they cannot be easily put in MT form

I have the following recurrence relation:
T(n) = T(n/2) + T(n/2) + n
In this instance, I believe the solution is found by:
T(n) = T(n/2) + T(n/2) + n = 2T(n/2) + n
Here, the Master Theorem can be applied:
a = b = 2, f(n) = n
n^(log2(2)) vs n --> n vs n
Thus, the solution would be:
Theta(n log n)
How would I find the solution for the following situations:
T(n) = T(n/2) + T(n/4) + n
T(n) = T(n/2) + T(n/3) + n^2
Those don't look like they can be easily put into MT form, so I am not quite sure what to do.
Those are of a form that can be handled by the Akra-Bazzi method: https://en.m.wikipedia.org/wiki/Akra%E2%80%93Bazzi_method
Since T(n) = T(n/2) + T(n/4) + n satisfies the criteria of the Akra-Bazzi method [see CLRS, pages 112-113], you can use that method to solve the recurrence.
To solve the recurrence, we first need to find the unique real number p such that
(1/2)^p + (1/4)^p = 1
<=> (1/2)^p + (1/2^2)^p = 1
<=> (1/2)^p + (1/2)^2p = 1
<=> (1/2)^p * [1 + (1/2)^p] = 1
Let x = (1/2)^p, then we have an equation of the following form
x * (1 + x) = 1
<=> x + x^2 = 1
<=> x^2 + x - 1 = 0
=> x = (-1 + sqrt(5)) / 2 ≈ 0.618 (the other root is negative, so it is discarded)
=> (1/2)^p = (sqrt(5) - 1) / 2
<=> p = log2(2 / (sqrt(5) - 1)) ≈ 0.694
The solution to the recurrence is then
T(n) = 𝚯(n^p * (1 + integral-from-1-to-n-(u/u^(p+1))du))
<=> T(n) = 𝚯(n^p * (1 + integral-from-1-to-n-(u^(-p))du))
<=> T(n) = 𝚯(n^p * (1 + (n^(1-p) - 1)/(1 - p)))
<=> T(n) = 𝚯(n^p * 𝚯(n^(1-p)))   [since p < 1, the integral term dominates]
<=> T(n) = 𝚯(n)
You can solve T(n) = T(n/2) + T(n/3) + n^2 the same way: there (1/2)^p + (1/3)^p = 1 gives p ≈ 0.79 < 2, and the integral term yields 𝚯(n^2).
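To double-check the value of p and the 𝚯(n) claim numerically (a sketch of my own; the base case T(1) = 1 and the integer floors are assumptions):

```python
import math
from functools import lru_cache

# p solves (1/2)^p + (1/4)^p = 1; substituting x = (1/2)^p gives x^2 + x - 1 = 0
x = (math.sqrt(5) - 1) / 2   # positive root, ~0.618
p = -math.log2(x)            # ~0.694

@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return T(n // 2) + T(n // 4) + n

# If T(n) = Theta(n), T(n)/n settles near a constant (c = c/2 + c/4 + 1 => c = 4)
print(p, T(10**6) / 10**6)
```

The ratio T(n)/n stabilizing near a constant is exactly what the Akra-Bazzi formula predicts here.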

Solving the similar recurrence: T(n) = 3T(n/3) + n/3

Given..
T(0) = 3 for n <= 1
T(n) = 3T(n/3) + n/3 for n > 1
The answer is supposed to be O(n log n). Here's how I did it, and it's not giving me the right answer:
T(n) = 3T(n/3) + n/3
T(n/3) = 3T(n/3^2) + n/3^2
Subbing this into T(n) gives..
T(n) = 3(3T(n/3^2) + n/3^2) + n/3
T(n) = 3(3(3T(n/3^3) + n/3^3) + n/3^2) + n/3
Eventually it'll look like..
T(n) = 3^k (T(n/3^k)) + cn/3^k
Setting k = lgn..
T(n) = 3^lgn * (T(n/3^lgn)) + cn/3^lgn
T(n) = n * T(0) + c
T(n) = 3n + c
My answer comes out to O(n), though.. What is wrong with my steps?
T(n) = 3T(n/3) + n/3
T(n/3) = 3T(n/9) + n/9
T(n) = 3(3T(n/9) + n/9) + n/3
= 9T(n/9) + 2*n/3 //statement 1
T(n/9)= 3T(n/27) + n/27
T(n) = 9 (3T(n/27)+n/27) + 2*n/3 // replacing T(n/9) in statement 1
= 27 T (n/27) + 3*(n/3)
T(n) = 3^k* T(n/3^k) + k* (n/3) // eventually
replace k with log n to the base 3.
T(n) = n T(1) + (log n) (n/3);
// T(1) = 3
T(n) = 3*n + (log n) (n/3);
Hence, O(n log n)
Eventually it'll look like.. T(n) = 3^k (T(n/3^k)) + cn/3^k
No. Eventually it'll look like
T(n) = 3^k * T(n/3^k) + k*n/3
You've opened the parenthesis inaccurately.
These types of problems are easily solved using the Master Theorem. In your case a = b = 3, c = log3(3) = 1, and because n^c grows at the same rate as your f(n) = n/3, you fall into the second case.
There the solution gains one extra log factor, and therefore the answer is O(n log(n))
This question can be solved by the (extended) Master Theorem:
For a recurrence of the form
T(n) = aT(n/b) + Θ(n^k (log n)^p)
where a >= 1, b > 1, k >= 0 and p is a real number, then:
1. if a > b^k, then T(n) = Θ(n^(logb(a)))
2. if a = b^k
a.) if p > -1, then T(n) = Θ(n^k (log n)^(p+1))
b.) if p = -1, then T(n) = Θ(n^k log log n)
c.) if p < -1, then T(n) = Θ(n^k)
3. if a < b^k
a.) if p >= 0, then T(n) = Θ(n^k (log n)^p)
b.) if p < 0, then T(n) = O(n^k)
So, for the above equation
T(n) = 3T(n/3) + n/3
a = 3, b = 3, k = 1, p = 0
so it falls into case 2.a, where a = b^k and p > -1.
So the answer will be
O(n⋅log(n))
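The unrolled form T(n) = 3^k * T(1) + k*(n/3) at n = 3^k can also be checked numerically (a sketch of my own, using the base value T = 3 for n <= 1 from the question):

```python
def T(n):
    # T(n) = 3 for n <= 1; T(n) = 3*T(n/3) + n/3, evaluated at exact powers of 3
    if n <= 1:
        return 3
    return 3 * T(n // 3) + n // 3

# Unrolled at n = 3^k: T(n) = 3n + k*(n/3), i.e. Theta(n log n)
for k in range(1, 9):
    n = 3 ** k
    print(n, T(n), 3 * n + k * (n // 3))
```

The exact agreement with 3n + k*(n/3) confirms that the n/3 terms add up across levels rather than shrink geometrically, which is where the asker's expansion went wrong.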

solving recurrence T(n) = T(n/2) + T(n/2 - 1) + n/2 + 2

Need some help on solving this runtime recurrence, using Big-Oh:
T(n) = T(n/2) + T(n/2 - 1) + n/2 + 2
I don't quite get how to use the Master Theorem here
For n big enough you can assume T(n/2 - 1) == T(n/2), so you can change
T(n) = T(n/2) + T(n/2 - 1) + n/2 + 2
into
T(n) = 2*T(n/2) + n/2 + 2
And use Master Theorem (http://en.wikipedia.org/wiki/Master_theorem) for
T(n) = a*T(n/b) + f(n)
a = 2
b = 2
f(n) = n/2 + 2
c = 1
k = 0
log(a, b) = 1 = c
and so you have (case 2, since log(a, b) = c)
T(n) = O(n**c * log(n)**(k + 1))
T(n) = O(n * log(n))
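An empirical check of the O(n log n) claim (a sketch of my own; the base case T(n) = 1 for n <= 2 is an assumption): if T(n) = Θ(n log n), the ratio T(2n)/T(n) should sit a little above 2. Memoization handles the two slightly different subproblem sizes n/2 and n/2 - 1 cheaply:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Hypothetical base case: T(n) = 1 for n <= 2
    if n <= 2:
        return 1
    return T(n // 2) + T(n // 2 - 1) + n // 2 + 2

# For Theta(n log n), T(2n)/T(n) ~ 2 * log(2n)/log(n), i.e. slightly above 2
print(T(2**20) / T(2**19))
```

This also supports the suggestion that the "- 1" in the subproblem size does not change the asymptotics.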

Solve the recurrence: T(n)=2T(n/2)+n/logn

I can find the sum of each row (n/(log n - i)) and I can also draw its recursion tree, but I can't calculate the sum of the rows.
T(n)=2T(n/2)+n/logn
T(1) = 1
Suppose n = 2^k;
We know for the harmonic series (by Euler's approximation):
Sum[i = 1 to n](1/i) ~= log(n) [n -> infinity]
t(n) = 2t(n/2) + n/log(n)
= 2(2t(n/4) + n/2/log(n/2)) + n/log(n)
= 4t(n/4) + n/log(n/2) + n/log(n)
= 4(2t(n/8) + n/4/log(n/4)) + n/log(n/2) + n/log(n)
= 8t(n/8) + n/log(n/4) + n/log(n/2) + n/log(n)
= 16t(n/16) + n/log(n/8) + n/log(n/4) + n/log(n/2) + n/log(n)
= n * t(1) + n/log(2) + n/log(4) + ... + n/log(n/2) + n/log(n)
= n(1 + Sum[i = 1 to log(n)](1/log(2^i)))
= n(1 + Sum[i = 1 to log(n)](1/i))
~= n(1 + log(log(n)))
= n + n*log(log(n))
~= n*log(log(n)) [n -> infinity]
When you start unrolling the recursion, you will get:
T(n) = 2^k * T(n/2^k) + Sum[i = 0 to k-1](n / log(n/2^i))
Your base case is T(1) = 1, so this means that n = 2^k. Substituting, you will get:
T(n) = n + n * Sum[i = 1 to k](1/i)
The second sum behaves the same as the harmonic series and therefore can be approximated as log(k). Since k = log(n), the resulting answer is:
T(n) = Θ(n * log(log(n)))
Follow the Extended Master Theorem below.
Using the Extended Master Theorem, T(n) = 2T(n/2) + n/log n can be solved easily as follows.
Here the n/log n part can be rewritten as n * (log n)^(-1),
effectively making the value of p = -1.
Now the Extended Master Theorem can be applied easily; this corresponds to case 2b of the Extended Master Theorem.
T(n) = O(n log log n)
Follow this for a more detailed explanation:
https://www.youtube.com/watch?v=Aude2ZqQjUI
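The closed form n*(1 + H_k) at n = 2^k (with H_k the k-th harmonic number) that the unrolling produces can be verified directly (a sketch of my own, assuming T(1) = 1):

```python
import math

def T(n):
    # T(1) = 1; T(n) = 2*T(n/2) + n/log2(n), evaluated at exact powers of 2
    if n <= 1:
        return 1.0
    return 2 * T(n // 2) + n / math.log2(n)

# Unrolled at n = 2^k: T(n) = n * (1 + H_k), H_k = 1 + 1/2 + ... + 1/k ~ log(k),
# which gives Theta(n * log(log(n)))
for k in (4, 8, 16):
    n = 2 ** k
    H = sum(1 / i for i in range(1, k + 1))
    print(n, T(n), n * (1 + H))
```

Since H_k grows like log(k) and k = log2(n), this matches the O(n log log n) answer above.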
