How to solve T(n) = T(n/4) + T(3n/4) + n log n

I have a question about solving this recurrence: T(n) = T(n/4) + T(3n/4) + n log n. Can you help me solve it?

You can use the Akra–Bazzi method with the following parameters:
a_1 = a_2 = 1,
b_1 = 1/4, b_2 = 3/4,
p = 1 (p must satisfy a_1·b_1^p + a_2·b_2^p = 1, and indeed 1/4 + 3/4 = 1).
T(n) = Θ(n^p · (1 + ∫₁ⁿ f(u)/u^(p+1) du))
     = Θ(n · (1 + ∫₁ⁿ (u log u)/u² du))
     = Θ(n · (1 + ∫₁ⁿ (log u)/u du))
     = Θ(n · (1 + log²(n)/2))
     = Θ(n log²(n))
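As a sanity check, you can evaluate the recurrence numerically (a minimal Python sketch; the base case T(1) = 1 is an assumption and only shifts the hidden constant) and watch T(n)/(n·log²(n)) settle toward a constant:

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n: int) -> float:
    # assumed base case; it only affects the constant factor
    if n <= 1:
        return 1.0
    return T(n // 4) + T(3 * n // 4) + n * math.log(n)

for k in range(8, 17, 2):
    n = 2 ** k
    print(n, T(n) / (n * math.log(n) ** 2))  # ratio should level off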

Notice that 3n/4 > n/4, so (assuming T is non-decreasing) T(n) <= 2T(3n/4) + n log n.
Now we can apply the Master Theorem to this upper bound.
We have a = 2, b = 4/3 and f(n) = n log n.
Since log_(4/3) 2 ≈ 2.41, f(n) = n log n is polynomially smaller than n^(log_b a), which is case 1.
Thus by the Master Theorem, T(n) = O(n^2.41).
Note that this bound is valid but loose; the Akra–Bazzi analysis above gives the tight Θ(n log²(n)).
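A two-line check of the numbers involved (a sketch; the exponent is rounded):

import math

e = math.log(2, 4 / 3)  # critical exponent log_(4/3)(2), about 2.409
for n in [2 ** 10, 2 ** 20, 2 ** 40]:
    # case 1 needs f(n) = n log n polynomially smaller than n^e:
    print(n, (n * math.log(n)) / n ** e)  # ratio shrinks toward 0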

Related

How to solve this task T(n) = 4T(n/4) + n√n?

I'm new to Divide and Conquer. I would like to know how to find the running time of this recurrence. What exactly do I have to pay attention to, and how do I proceed?
For n = 1 the running time is O(1).
So let's see the calculation for this:
T(n) = 4T(n/4) + n · sqrt(n)
Expanding for k steps, the cost contributed at level i is 4^i · (n/4^i) · sqrt(n/4^i) = n · sqrt(n)/2^i, so it will look like
T(n) = 4^k · T(n/4^k) + n · sqrt(n) · {1 + 1/2 + 1/4 + ... + 1/2^(k-1)}
Here {1 + 1/2 + 1/4 + ...} is a geometric progression, summing to 2 − 2/2^k.
If we take k = log_4(n) (the base here is 4), then 4^k = n and 2^k = sqrt(n), so
T(n) = n · T(1) + n · sqrt(n) · (2 − 2/sqrt(n))
T(n) = n · T(1) + 2n · sqrt(n) − 2n
You can still use the Master Theorem:
T(n) = aT(n/b) + f(n).
If f(n) = Θ(n^d), where d ≥ 0, then
T(n) = Θ(n^d) if a < b^d,
T(n) = Θ(n^d · log n) if a = b^d,
T(n) = Θ(n^(log_b a)) if a > b^d.
Here a = 4, b = 4 and d = 3/2, so b^d = 8 > 4 = a, and yes, the answer is Θ(n^(3/2)).
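A quick numeric check (a sketch; it assumes T(1) = 1 and keeps n a power of 4 so n/4 stays integral) shows the ratio T(n)/n^(3/2) tending to the constant 2 found above:

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n: int) -> float:
    if n == 1:  # assumed base case T(1) = 1
        return 1.0
    return 4 * T(n // 4) + n * math.sqrt(n)

for k in range(2, 11):
    n = 4 ** k
    print(n, T(n) / n ** 1.5)  # should tend to the constant 2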

Solving recurrences with iteration, substitution, Master Theorem?

I'm familiar with solving recurrences with iteration:
t(1) = c1
t(2) = t(1) + c2 = c1 + c2
t(3) = t(2) + c2 = c1 + 2c2
...
t(n) = c1 + (n-1)c2 = O(n)
But what if I had a recurrence with no base case? How would I solve it using the three methods mentioned in the title?
t(n) = 2t(n/2) + 1
For Master Theorem I know the first step, find a, b, and f(n):
a = 2
b = 2
f(n) = 1
But not where to go from here. I'm at a standstill because I'm not sure how to approach the question.
I know of 2 ways to solve this:
(1) T(n) = 2T(n/2) + 1
(2) T(n/2) = 2T(n/4) + 1
now replace T(n/2) from (2) into (1)
T(n) = 2[2T(n/4) + 1] + 1
= 2^2T(n/4) + 2 + 1
T(n/4) = 2T(n/8) + 1
T(n) = 2^2[2T(n/8) + 1] + 2 + 1
= 2^3T(n/8) + 4 + 2 + 1
You would just keep doing this until you can generalize. Eventually you will spot that
T(n) = 2^k · T(n/2^k) + (1 + 2 + 4 + ... + 2^(k-1))
You want T(1), so set n/2^k = 1 and solve for k; you will find that k = lg n.
Substituting lg n for k, you end up with
T(n) = 2^(lg n) · T(n/2^(lg n)) + (1 − 2^(lg n))/(1 − 2)
Since 2^(lg n) = n,
T(n) = n · T(1) + n − 1
so T(n) = O(n), since the linear terms dominate.
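You can confirm the unrolled closed form numerically (a minimal sketch, assuming T(1) = 1, for which n·T(1) + n − 1 = 2n − 1):

def T(n: int) -> int:
    # direct evaluation of T(n) = 2T(n/2) + 1 with assumed base case T(1) = 1
    return 1 if n == 1 else 2 * T(n // 2) + 1

for k in range(1, 11):
    n = 2 ** k
    assert T(n) == 2 * n - 1  # matches n*T(1) + n - 1 with T(1) = 1
print("closed form confirmed for powers of two up to 1024")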
For the Master Theorem it's really fast.
Consider T(n) = aT(n/b) + n^c for n > 1.
There are three cases (note that b is the log base):
(1) if log_b a < c, T(n) = Θ(n^c),
(2) if log_b a = c, T(n) = Θ(n^c log n),
(3) if log_b a > c, T(n) = Θ(n^(log_b a)).
In this case a = 2, b = 2, and c = 0 (n^0 = 1).
A quick check shows case 3, since log_2 2 = 1 > 0.
So by the Master Theorem this is Θ(n^(log_2 2)) = Θ(n).
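These cases translate directly into a tiny helper (a sketch of the n^c form stated above; the master() function is hypothetical and checks neither the regularity condition nor log factors in f(n)):

import math

def master(a: float, b: float, c: float) -> str:
    # simplified Master Theorem for T(n) = a*T(n/b) + Theta(n^c)
    e = math.log(a, b)  # critical exponent log_b(a)
    if math.isclose(e, c):
        return f"Theta(n^{c} log n)"
    return f"Theta(n^{c})" if e < c else f"Theta(n^{e:g})"

print(master(2, 2, 0))    # -> Theta(n^1), i.e. Theta(n)
print(master(4, 4, 1.5))  # -> Theta(n^1.5)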
Apart from the Master Theorem, the Recursion Tree Method and the Iterative Method, there is also the so-called "Substitution Method".
Often you will find people talking about the substitution method when in fact they mean the iterative method (especially on YouTube).
I guess this stems from the fact that in the iterative method you are also substituting something, namely the (n+1)-th recursive call into the n-th one...
The standard reference work about algorithms
(CLRS)
defines it as follows:
Substitution Method
Guess the form of the solution.
Use mathematical induction to find the constants and show that the solution works.
As an example, let's take your recurrence equation: T(n) = 2T(ⁿ/₂)+1
We guess that the solution is T(n) ∈ O(n²), so we have to prove that
T(n) ≤ cn² for some constant c.
Also, let's assume that for n=1 you are doing some constant work c.
Given:
T(1) ≤ c
T(n) = 2T(ⁿ/₂)+1
To prove:
∃c > 0, ∃n₀ ∈ ℕ, ∀n ≥ n₀, such that T(n) ≤ cn² is true.
Base Case:
n=1: T(1) ≤ c
n=2: T(2) = 2T(1) + 1 ≤ 2c + 1 ≤ c·2² (using T(1) ≤ c; the last step holds for c ≥ 1)
Induction Step:
As inductive hypothesis we assume T(n) ≤ cn² for all positive integers smaller than n,
in particular for ⁿ/₂.
Therefore T(ⁿ/₂) ≤ c(ⁿ/₂)², and hence
T(n) ≤ 2c(ⁿ/₂)² + 1 ⟵ Here we're substituting c(ⁿ/₂)² for T(ⁿ/₂)
= (¹/₂)cn² + 1
≤ cn² (for c ≥ 2, and all n ∈ ℕ)
So we have shown that there is a constant c such that T(n) ≤ cn² is true for all n ∈ ℕ.
This means exactly T(n) ∈ O(n²). ∎
(For Ω, and hence Θ, the proof style is similar; note, though, that for this recurrence the guess O(n²) is correct but not tight, since the Master Theorem above already gave Θ(n).)
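The bound just proved is easy to spot-check numerically (a sketch; it assumes T(1) = 1 ≤ c and uses the constant c = 2 found in the induction step):

def T(n: int) -> int:
    return 1 if n == 1 else 2 * T(n // 2) + 1  # the recurrence from the proof

for k in range(11):
    n = 2 ** k
    assert T(n) <= 2 * n * n  # T(n) <= c*n^2 with c = 2
print("T(n) <= 2n^2 holds on all sampled powers of two")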

How can I solve recurrence relation T(n)=T(√n)+O(n)? [duplicate]

I know how to solve the recurrence relations using Master Method.
Also I'm aware of how to solve the recurrences below:
T(n) = sqrt(n)*T(sqrt(n)) + n
T(n) = 2*T(sqrt(n)) + lg(n)
In the above two recurrences there is the same amount of work at each level of the recursion tree, and there are a total of log log n levels in the recursion tree.
I'm having trouble in solving this one:
T(n) = 4*T(sqrt(n)) + n
EDIT:
Here n is a power of 2
Suppose that n = 2^k. We have T(2^k) = 4·T(2^(k/2)) + 2^k. Let S(k) = T(2^k). We have S(k) = 4S(k/2) + 2^k. By the Master Theorem (case 3, since 2^k dominates k^(log_2 4) = k²), we get S(k) = Θ(2^k). Since S(k) = T(2^k), this gives T(2^k) = Θ(2^k), which implies T(n) = Θ(n).
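A quick numeric check of this (a sketch; it assumes the base case T(2) = 1 and keeps n of the form 2^(2^j) so every square root is exact):

def T(n: int) -> float:
    if n <= 2:
        return 1.0  # assumed base case
    return 4 * T(round(n ** 0.5)) + n

for j in range(1, 6):
    n = 2 ** (2 ** j)   # 4, 16, 256, 65536, 2^32
    print(n, T(n) / n)  # ratio settles toward a constant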
I'm having trouble in solving this one: T(n) = 4*T(sqrt(n)) + n
EDIT: Here n is a power of 2
This edit is important. So let's say that the recurrence stops at 2.
So the question now is how deep the recursion tree is. Well, that is the number of times that you can take the square root of n before n gets sufficiently small (say, less than 2). If we write
n = 2^(lg n)
then on each recursive call n will have its square root taken. This is equivalent to halving the above exponent, so after k iterations we have that
n^(1/2^k) = 2^((lg n)/2^k)
We want to stop when this is less than 2, giving
2^((lg n)/2^k) = 2
(lg n)/2^k = 1
lg n = 2^k
lg lg n = k
So after lg lg n iterations of square rooting the recursion stops. (source)
For each recursion we will have 4 new branches; the total number of leaves is 4^(depth of the tree), therefore 4^(lg lg n) = (2^(lg lg n))² = lg²(n).
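A few lines of Python make the depth claim concrete (a sketch; the loop counts square-rootings until the argument drops below 2, which should track lg lg n):

import math

def sqrt_depth(n: float) -> int:
    k = 0
    while n >= 2:  # keep square-rooting until n < 2
        n = math.sqrt(n)
        k += 1
    return k

for n in [8, 64, 1024, 2 ** 20]:
    print(n, sqrt_depth(n), math.log2(math.log2(n)))  # depth vs lg lg n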
EDIT:
Source
T(n) = 4T(n^(1/2)) + n
     = 16T(n^(1/4)) + 4n^(1/2) + n
     = 4^k · T(n^(1/2^k)) + Σ_{i=0..k-1} 4^i · n^(1/2^i)    [n is a power of 2]
For i ≥ 1 each term 4^i · n^(1/2^i) is o(n) (already the second one is only 4·sqrt(n)), so the sum is dominated by its first term and is Θ(n).
Taking k = lg lg n (so that n^(1/2^k) = 2), the leaf term is 4^(lg lg n) · T(2) = lg²(n) · c.
T(n) = c · lg²(n) + Θ(n)
     = Θ(n)
T(n) = 4T(√n) + n
Suppose that n = 2^m. So we have:
T(2^m) = 4T(2^(m/2)) + 2^m
Now let's name T(2^m) as S(m):
S(m) = 4S(m/2) + 2^m. Now we can solve this relation with the Master Method: a = 4, b = 2 and f(m) = 2^m, which dominates m^(log_2 4) = m², so this is case 3 and the answer is:
S(m) = Θ(2^m)
Now we step back to T(2^m):
T(2^m) = Θ(2^m)
Now we need m to solve our problem, and we can get it from the second line:
n = 2^m ⟹ m = lg n
and the problem is solved:
T(n) = Θ(2^(lg n)) = Θ(n)

How to solve some hardcore Recurrences?

I am trying to solve these recurrences for my algorithms class. Can someone please help me? The Master Theorem does not work here, I cannot compute the sum that arises from the recursion tree in the first one, and I have not seen a well-solved example of the second!
T(n) = 2*T(n/3) + n/log^2(n)
T(n) = T(n-10) + logn
The first example is a job for the Master Theorem: a = 2, b = 3, f(n) = n/log²(n). We have log_3 2 ≈ 0.63 < 1, so this is the third case, because f(n) grows (significantly) faster than n^(log_3 2) · log^k(n) for any k. This means that the main work is done outside the recursion, and T(n) = Θ(n/log²(n)).
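A numeric sanity check of this case-3 claim (a sketch; the assumed base case T(n) = 1 for n < 3 only affects the constant factor, and convergence is slow because of the log terms):

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n: int) -> float:
    if n < 3:
        return 1.0  # assumed base case
    return 2 * T(n // 3) + n / math.log(n) ** 2

for k in [10, 14, 18, 22]:
    n = 3 ** k
    print(n, T(n) / (n / math.log(n) ** 2))  # ratio should level off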
Second example can be solved this way:
T(n)
= T(n - 10) + log(n)
= ...
= log(n) + log(n - 10) + ...
= log(n · (n−10) · (n−20) · ... · 10)
= [setting n = 10k]
= log(10^k · k!) = log(10^k) + log(k!)
= k*log(10) + k*log(k) - k + O(log(k))
= O(k*log(k))
= O(n*log(n))
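The same bound drops out numerically (a sketch; it assumes the base case T(n) = 1 for n ≤ 10 and evaluates the recurrence iteratively):

import math

def T(n: int) -> float:
    total = 1.0  # assumed base case
    while n > 10:
        total += math.log(n)  # unroll T(n) = T(n - 10) + log(n)
        n -= 10
    return total

for n in [10 ** 3, 10 ** 5, 10 ** 7]:
    print(n, T(n) / (n * math.log(n)))  # approaches 1/10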

