T(n) = 2T(n/2) + O(1)
T(n) = T(sqrt(n)) + O(1)
For the first one I tried the substitution method with n, log n, etc.; all gave me wrong answers.
Recursion trees: I don't know if I can apply one, as the root will be a constant.
Can someone help?
Let's look at the first one. First of all, you need to know T(base case). You mentioned that it's a constant, but when you do the problem it's important that you write it down. Usually it's something like T(1) = 1. I'll use that, but you can generalize to whatever it is.
Next, find out how many times you recur (that is, the height of the recursion tree). n is your problem size, so how many times can we repeatedly divide n by 2? Mathematically speaking, what's i when n/(2^i) = 1? Figure it out, hold onto it for later.
Next, do a few substitutions, until you start to notice a pattern.
T(n) = 2(2(2T(n/(2*2*2)) + Θ(1)) + Θ(1)) + Θ(1)
Ok, the pattern is that we multiply T() by 2 a bunch of times, and divide n by 2 a bunch of times. How many times? i times.
T(n) = (2^i)*T(n/(2^i)) + ...
For the big-Θ terms at the end, we use a cute trick. Look above where we have a few substitutions, and ignore the T() part. We want the sum of the Θ terms. Notice that they add up to (1 + 2 + 4 + ... + 2^(i-1)) * Θ(1). Can you find a closed form for 1 + 2 + 4 + ... + 2^(i-1)? I'll give you that one; it's (2^i - 1). It's a good one to just memorize.
Anyway, all in all we get
T(n) = (2^i) * T(n/(2^i)) + (2^i - 1) * Θ(1)
If you solved for i earlier, then you know that i = log_2(n). Plug that in, do some algebra, and you get down to
T(n) = n*T(1) + (n - 1)*Θ(1). With T(1) = 1, that is T(n) = n + (n - 1)*Θ(1): n, plus a constant times n, minus a constant. We drop lower-order terms and constant factors, so it's Θ(n).
Prasoon Saurav is right about using the master method, but it's important that you know what the recurrence relation is saying. The things to ask are, how much work do I do at each step, and what is the number of steps for an input of size n?
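If you want to convince yourself numerically, here is a small Python sketch (not part of the derivation above, just a sanity check, assuming T(1) = 1 and taking the Θ(1) cost to be exactly 1) that evaluates the recurrence and compares it with the closed form 2n - 1:

def T(n):
    # T(n) = 2*T(n/2) + 1 with T(1) = 1, for n a power of two
    if n == 1:
        return 1
    return 2 * T(n // 2) + 1

for k in range(1, 11):
    n = 2 ** k
    print(n, T(n), 2 * n - 1)  # the last two columns should match

Since T(n) = 2n - 1 exactly for powers of two, the Θ(n) conclusion checks out.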
Use the Master Theorem to solve such recurrence relations.
Let a be an integer greater than or equal to 1 and b be a real number greater than 1. Let c be a nonnegative real number and d a nonnegative real number. Given a recurrence of the form

T(n) = a*T(n/b) + n^c   if n > 1
T(n) = d                if n = 1

then for n a power of b:

if log_b(a) < c, T(n) = Θ(n^c);
if log_b(a) = c, T(n) = Θ(n^c * log n);
if log_b(a) > c, T(n) = Θ(n^(log_b(a))).
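Before applying it, here is a tiny Python sketch (just an illustration of the three cases above, not part of the theorem itself) that classifies a recurrence of the form T(n) = a*T(n/b) + n^c:

import math

def master_theorem(a, b, c):
    # Classify T(n) = a*T(n/b) + n^c; assumes a >= 1, b > 1, c >= 0.
    e = math.log(a, b)  # the critical exponent log_b(a)
    if e < c:
        return "Theta(n^%g)" % c
    if e == c:  # exact float comparison is fine for the small integer cases used here
        return "Theta(n^%g * log n)" % c
    return "Theta(n^%g)" % e

print(master_theorem(2, 2, 0))  # recurrence 1) below: Theta(n^1)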
1) T(n) = 2T(n/2) + O(1)

In this case:
a = b = 2;
log_b(a) = 1; c = 0 (since n^c = 1 => c = 0)

So Case (3) applies, and T(n) = Θ(n) :)
2) T(n) = T(sqrt(n)) + O(1)

Let m = log_2(n), so that n = 2^m and sqrt(n) = 2^(m/2). Then:
T(2^m) = T(2^(m/2)) + O(1)
Now rename K(m) = T(2^m), which gives K(m) = K(m/2) + O(1).
Apply Case (2): here a = 1, b = 2, c = 0, so log_b(a) = 0 = c and K(m) = Θ(log m). Substituting back, T(n) = Θ(log log n).
For part 1, you can use the Master Theorem as @Prasoon Saurav suggested.
For part 2, just expand the recurrence:
T(n) = T(n^(1/2)) + O(1)          // sqrt(n) = n^(1/2)
     = T(n^(1/4)) + O(1) + O(1)   // sqrt(sqrt(n)) = n^(1/4)
etc.
The series will continue for k terms until n^(1/2^k) <= 2, i.e. until 2^k = log n, or k = log log n. That gives T(n) = k * O(1) = O(log log n).
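To see that count concretely, here is a quick Python sketch (a numerical check, assuming the recursion bottoms out at n <= 2) that repeatedly takes square roots and counts the steps:

import math

def sqrt_steps(n):
    # Count iterations of n -> sqrt(n) until n^(1/2^k) <= 2
    k = 0
    while n > 2:
        n = math.sqrt(n)
        k += 1
    return k

for n in (16, 256, 65536, 2 ** 32):
    print(n, sqrt_steps(n), math.log2(math.log2(n)))  # the two counts agree

For these inputs the step count is exactly log2(log2(n)), matching the O(log log n) bound.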
Let's look at the first recurrence, T(n) = 2T(n/2) + 1. The n/2 is our clue here: each nested term's parameter is half that of its parent, so if we start with n = 2^k the expansion has k levels before we hit the base case T(1). But don't forget the coefficient 2: the number of subproblems doubles at each level, so unrolling gives T(2^k) = 2^k * T(1) + (2^k - 1). Hence, assuming T(1) = 1, we can say T(2^k) = 2^(k+1) - 1. Now, since n = 2^k we must have k = log_2(n), and therefore T(n) = 2n - 1, which is Θ(n).
We can apply the same trick to your second recurrence, T(n) = T(n^0.5) + 1. If we start with n = 2^(2^k) we will have k terms in our expansion, each adding 1 to the total, before we reach the base case T(2). Assuming T(2) = 1, we must have T(2^(2^k)) = k + 1. Since n = 2^(2^k) we must have k = log_2(log_2(n)), hence T(n) = log_2(log_2(n)) + 1.
Recurrence relations, like recursive functions, are best solved by starting at T(1). In the first case, T(1) = 1; T(2) = 3; T(4) = 7; T(8) = 15. It's clear that T(n) = 2n - 1, which in O notation is O(n).
In the second case, T(1) = 1; T(2) = 2; T(4) = 3; T(16) = 4; T(256) = 5; T(256 * 256) = 6. It takes little time to see that T(n) = log(log(n)) + 1, where log is base 2. Clearly this is an O(log(log(n))) relation.
Most of the time the best way to deal with a recurrence is to draw the recursion tree and carefully handle the base case. However, here I will give you a slight hint for solving with the substitution method.
For the first recurrence, try the substitution n = 2^k.
For the second recurrence, try the substitution n = 2^(2^k).
def maximum(array):
    # Linear scan: keep the largest element seen so far.
    max_value = array[0]
    for i in array:
        if i > max_value:
            max_value = i
    return max_value
I need to analyze this algorithm, which finds the maximum number in an array of n numbers. The only thing I want to know is how to get the recursive and general formula for the average case of this algorithm.
Not sure what you mean by "recursive and general formula for the average case of this algorithm". Your algorithm is not recursive, so how can it have a "recursive formula"?
Recursive way to find the maximum in an array:

def findMax(Array, n):
    # Maximum of the first n elements: compare the last element
    # against the maximum of the first n - 1 elements.
    if n == 1:
        return Array[0]
    return max(Array[n - 1], findMax(Array, n - 1))
I guess you want Recurrence relation.
Let T(n) be the time taken to find the maximum of n elements. Then, for the code above,
T(n) = T(n-1) + 1 .... Equation I
In case you are interested to solve the recurrence relation:
T(n-1) = T((n-1)-1) + 1 = T(n-2) + 1 .... Equation II
If you substitute value of T(n-1) from Equation II into Equation I, you get:
T(n) = (T(n-2) + 1) + 1 = T(n-2) + 2
Similarly,
T(n) = T(n-3) + 3
T(n) = T(n-4) + 4
and so on..
Continuing the above for k times,
T(n) = T(n-k) + k
When n - k = 1, i.e. k = n - 1, the equation becomes
T(n) = T(1) + (n - 1) = n
Therefore, the recursive algorithm we came up with has time complexity O(n).
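To check this empirically, here is a small Python sketch (just a sanity check of the recurrence, with a hypothetical counter argument added to the function) that counts the recursive calls; T(n) = T(n-1) + 1 predicts exactly n calls:

def findMax(Array, n, counter):
    counter[0] += 1  # one unit of work per recursive call
    if n == 1:
        return Array[0]
    return max(Array[n - 1], findMax(Array, n - 1, counter))

counter = [0]
print(findMax([3, 1, 4, 1, 5, 9, 2, 6], 8, counter))  # 9
print(counter[0])  # 8 calls for 8 elements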
Hope it helped.
I want to know the time complexity of my recursion method:
T(n) = 2T(n/2) + O(1)
I saw a result that says it is O(n), but I don't know why. I solved it like this:
T(n) = 2T(n/2) + 1
T(n-1) = 4T((n-1)/4) + 3
T(n-2) = 8T((n-2)/8) + 7
...
T(n) = 2^(n+1) T(n/2^(n+1)) + (2^(n+1) - 1)
I think you have got the wrong idea about recursive relations. You can think as follows:
If T(n) represents the value of the function T() at input n, then the relation says that the output is one more than double the value at half of the current input. So for input n-1 the output, i.e. T(n-1), will be one more than double the value at half of that input: T(n-1) = 2*T((n-1)/2) + 1.
Recurrence relations of the above kind should be solved as answered by Yves Daoust. For more examples of recurrence relations, you can refer to this.
Consider that n = 2^m, which allows you to write
T(2^m) = 2T(2^(m-1)) + O(1)
or, by denoting S(m) := T(2^m),
S(m) = 2 S(m-1) + O(1).
Dividing through by 2^m,
2^(-m) S(m) = 2^(-(m-1)) S(m-1) + 2^(-m) O(1),
and, by denoting R(m) := 2^(-m) S(m), finally,
R(m) = R(m-1) + 2^(-m) O(1).
Now by induction,
R(m) = R(0) + (1 - 2^(-m)) O(1),
and
T(n) = S(m) = 2^m R(m) = 2^m T(1) + (2^m - 1) O(1) = n T(1) + (n - 1) O(1) = O(n).
There are a couple of rules that you need to remember, and if you can remember these easy rules then the Master Theorem is very easy to use on recurrence equations. These are the basic rules:

Case 1) If n^(log_b(a)) << f(n), then T(n) = Θ(f(n))
Case 2) If n^(log_b(a)) = Θ(f(n)), then T(n) = Θ(f(n) * log n)
Case 3) If n^(log_b(a)) >> f(n), then T(n) = Θ(n^(log_b(a)))

Now, let's solve the recurrence using the rules above.

a = 2, b = 2, f(n) = O(1)
n^(log_b(a)) = n^(log_2(2)) = n

This is Case 3) above. Hence T(n) = Θ(n^(log_b(a))) = Θ(n).
Hi, I am having a tough time showing the running time of these three recurrences for T(n). Assumptions include T(0) = 0.
1) This one I know is close to Fibonacci, so I know it's close to O(n) time, but I'm having trouble showing that:
T(n) = T(n-1) + T(n-2) +1
2) This one I am stumped on, but I think it's roughly O(log log n):
T(n) = T([sqrt(n)]) + n, for n >= 1, where [sqrt(n)] denotes sqrt(n) rounded down to the nearest integer.
3) I believe this one is roughly O(n * log log n):
T(n) = 2T(n/2) + (n/(log n)) + n
Thanks for the help in advance.
T(n) = T(n-1) + T(n-2) + 1
Assuming T(0) = 0 and T(1) = a, for some constant a, we notice that T(n) - T(n-1) = T(n-2) + 1. That is, the growth rate of the function is given by the function itself, which suggests this function has exponential growth.
Let T'(n) = T(n) + 1. Then T'(n) = T'(n-1) + T'(n-2), by the above recurrence relation, and we have eliminated the troublesome constant term. T(n) and T'(n) differ by an additive constant of 1, so assuming they are both non-decreasing (they are), they will have the same asymptotic complexity, albeit for different constants n0.
To show T'(n) has asymptotic growth of O(b^n), we would need some base cases, then the hypothesis that the condition holds for all n up to, say, k - 1, and then we'd need to show it for k; that is, cb^(k-2) + cb^(k-1) <= cb^k. We can divide through by cb^(k-2) to simplify this to 1 + b <= b^2. Rearranging, we get b^2 - b - 1 >= 0; the roots are (1 +- sqrt(5))/2, and we must discard the negative one since we cannot use a negative number as the base for our exponent. So for b >= (1 + sqrt(5))/2, T'(n) may be O(b^n). A similar argument shows that for b <= (1 + sqrt(5))/2, T'(n) may be Omega(b^n). Thus, for b = (1 + sqrt(5))/2 only, T'(n) may be Theta(b^n).
Completing the proof by induction that T(n) = O(b^n) is left as an exercise.
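For a numerical sanity check (a sketch with the arbitrary base cases T(0) = 0 and T(1) = 1), the ratio T(n) / b^n with b = (1 + sqrt(5))/2 should settle to a constant:

import functools, math

@functools.lru_cache(maxsize=None)
def T(n):
    # T(n) = T(n-1) + T(n-2) + 1, memoized so evaluation takes linear time
    if n <= 1:
        return n
    return T(n - 1) + T(n - 2) + 1

b = (1 + math.sqrt(5)) / 2
for n in (10, 20, 30, 40):
    print(n, T(n) / b ** n)  # the ratio flattens out as n grows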
T(n) = T([sqrt(n)]) + n
Obviously, T(n) is at least linear, assuming the boundary conditions require T(n) to be nonnegative. We might guess that T(n) is Theta(n) and try to prove it. Base case: let T(0) = a and T(1) = b. Then T(2) = b + 2 and T(4) = b + 6. In both cases, a choice of c >= 1.5 will work to make T(n) < cn. Suppose that whatever our fixed value of c is works for all n up to and including k. We must show that T([sqrt(k+1)]) + (k+1) <= c*(k+1). We know that T([sqrt(k+1)]) <= c*sqrt(k+1) from the induction hypothesis. So T([sqrt(k+1)]) + (k+1) <= c*sqrt(k+1) + (k+1), and c*sqrt(k+1) + (k+1) <= c*(k+1) can be rewritten as cx + x^2 <= cx^2 (with x = sqrt(k+1)); dividing through by x (OK since k > 1) we get c + x <= cx, and solving this for c we get c >= x/(x-1) = sqrt(k+1)/(sqrt(k+1) - 1). This approaches 1, so for large enough n, any constant c > 1 will work.
Making this proof totally rigorous by fixing the following points is left as an exercise:
making sure enough base cases are proven so that all assumptions hold
distinguishing the cases where (a) k + 1 is a perfect square (hence [sqrt(k+1)] = sqrt(k+1)) and (b) k + 1 is not a perfect square (hence sqrt(k+1) - 1 < [sqrt(k+1)] < sqrt(k+1)).
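Here is a quick numerical check of the Theta(n) guess (a sketch assuming T(0) = T(1) = 1; math.isqrt computes the floor square root):

import math

def T(n):
    # T(n) = T(floor(sqrt(n))) + n, with T(0) = T(1) = 1
    if n <= 1:
        return 1
    return T(math.isqrt(n)) + n

for n in (10**3, 10**6, 10**9, 10**12):
    print(n, T(n) / n)  # the ratio tends to 1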
T(n) = 2T(n/2) + (n/(log n)) + n
This T(n) > 2T(n/2) + n, which we know is the recurrence for the running time of Mergesort, which by the Master Theorem is Theta(n log n), so we know our complexity is no less than that.
Indeed, by the master theorem: T(n) = 2T(n/2) + (n/(log n)) + n = 2T(n/2) + n(1 + 1/(log n)), so
a = 2
b = 2
f(n) = n(1 + 1/(log n)) is O(n) (for n>2, it's always less than 2n)
f(n) = Θ(n) = Θ(n^(log_2(2)) * log^0 n)
So we're in case 2 of the Master Theorem, and the asymptotic bound is the same as for Mergesort: Theta(n log n).
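As a quick numerical check (a sketch restricted to powers of two, with a small assumed base case), the ratio T(n) / (n log n) should approach a constant:

import math

def T(n):
    # T(n) = 2*T(n/2) + n/log(n) + n, with T(n) = n for n <= 2
    if n <= 2:
        return n
    return 2 * T(n // 2) + n / math.log2(n) + n

for k in (8, 12, 16, 20):
    n = 2 ** k
    print(n, T(n) / (n * math.log2(n)))  # flattens toward a constant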
I am refreshing on the Master Theorem a bit, and I am trying to figure out the running time of an algorithm that solves a problem of size n by recursively solving 2 subproblems of size n-1 and combining the solutions in constant time.
So the formula is:
T(N) = 2T(N - 1) + O(1)
But I am not sure how to formulate the condition of the Master Theorem.
I mean, we don't have T(N/b), so is b of the Master Theorem formula in this case b = N/(N-1)?
If yes, then since obviously a > b^k (as k = 0), the result would be O(N^z) where z = log 2 with base N/(N-1). How can I make sense of this, assuming I am right so far?
Ah, enough with the hints. The solution is actually quite simple: z-transform both sides, group the terms, and then inverse z-transform to get the solution.
First, look at the problem as
x[n] = a x[n-1] + c
Apply the z-transform to both sides (there are some technicalities with respect to the ROC, but let's ignore those for now):
X(z) = (a X(z) / z) + (c z / (z-1))
Solve for X(z) to get
X(z) = c z^2 / [(z - 1) * (z-a)]
Now observe that this formula can be rewritten as:
X(z) = r z / (z-1) + s z / (z-a)
where r = c/(1-a) and s = - a c / (1-a)
Furthermore, observe that
X(z) = P(z) + Q(z)
where P(z) = r z / (z-1) = r / (1 - (1/z)), and Q(z) = s z / (z-a) = s / (1 - a (1/z))
Apply the inverse z-transform to get:
p[n] = r u[n]
and
q[n] = s exp(log(a)n) u[n]
where log denotes the natural log and u[n] is the unit (Heaviside) step function (i.e. u[n]=1 for n>=0 and u[n]=0 for n<0).
Finally, by linearity of z-transform:
x[n] = (r + s exp(log(a) n))u[n]
where r and s are as defined above.
So, relabeling back to your original problem,
T(n) = a T(n-1) + c
then
T(n) = (c/(a-1))(-1+a exp(log(a) n))u[n]
where exp(x) = e^x, log(x) is the natural log of x, and u[n] is the unit step function.
What does this tell you?
Unless I made a mistake, T grows exponentially with n. This is effectively an exponentially increasing function under the reasonable assumption that a > 1. The exponent is governed by a (more specifically, by the natural log of a).
One more simplification, note that exp(log(a) n) = exp(log(a))^n = a^n:
T(n) = (c/(a-1))(-1+a^(n+1))u[n]
so O(a^n) in big O notation.
And now here is the easy way:
put T(0) = 1
T(n) = a T(n-1) + c
T(1) = a * T(0) + c = a + c
T(2) = a * T(1) + c = a*a + a * c + c
T(3) = a * T(2) + c = a*a*a + a * a * c + a * c + c
....
Note that this creates a pattern. Specifically:
T(n) = a^n + c * sum(a^j, j=0,...,n-1)
Putting c = 1 gives
T(n) = sum(a^j, j=0,...,n)
This is a geometric series, which evaluates to:
T(n) = (1-a^(n+1))/(1-a)
= (1/(1-a)) - (1/(1-a)) a^n
= (1/(a-1))(-1 + a^(n+1))
for n>=0.
Note that this formula is the same as given above for c=1 using the z-transform method. Again, O(a^n).
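Here is a short Python sketch (a sanity check for the special case a = 2, c = 1, T(0) = 1) comparing the recurrence against the closed form:

def T(n, a=2, c=1):
    # T(n) = a*T(n-1) + c, with T(0) = 1
    if n == 0:
        return 1
    return a * T(n - 1, a, c) + c

def closed_form(n, a=2, c=1):
    # (c/(a-1)) * (a^(n+1) - 1), as derived above (valid here since c = 1)
    return c * (a ** (n + 1) - 1) // (a - 1)

for n in range(8):
    print(n, T(n), closed_form(n))  # the two columns agree: 1, 3, 7, 15, ...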
Don't even think about the Master Theorem here. You can only use the Master Theorem when the recurrence has the general form T(n) = aT(n/b) + f(n) with b > 1.
Instead, think of it this way. You have a recursive call that decrements the size of the input, n, by 1 at each recursive call, and at each call the cost is constant, O(1). The input size will decrement until it reaches 1. But each call spawns two recursive calls, so the number of calls doubles at every level, and the recursion is n levels deep. Adding up all the costs, there are about 2^n constant-cost calls, so this would take O(2^n).
Looks like you can't formulate this problem in terms of the Master Theorem.
A good start is to draw the recursion tree to understand the pattern, then prove it with the substitution method. You can also expand the formula a couple of times and see where it leads.
See also this question which solves 2 subproblems instead of a:
Time bound for recursive algorithm with constant combination time
Maybe you could think of it this way:
when
n = 1, T(1) = 1
n = 2, T(2) = 2
n = 3, T(3) = 4
n = 4, T(4) = 8
n = 5, T(5) = 16
It is easy to see that this is the geometric series 1 + 2 + 4 + 8 + 16 ..., the sum of which is
first_term * (ratio^n - 1)/(ratio - 1). For this series it is
1 * (2^n - 1)/(2 - 1) = 2^n - 1.
The dominating term here is 2^n, so the function belongs to Theta(2^n). You can verify this by checking that lim(n->inf) [2^n / (2^n - 1)] is a positive constant.
I know how to write recurrence relations for algorithms that call themselves only once, but I'm not sure how to handle something that calls itself multiple times in one occurrence.
For example:
T(n) = T(n/2) + T(n/4) + T(n/8) + (n)
Use a recursion tree. See the last recursion-tree example in CLRS, "Introduction to Algorithms".
T(n) = T(n/2) + T(n/4) + T(n/8) + n. The root costs n and splits into 3 recursive calls, so the recursion tree looks as follows:

level 0:  n                                              --> n
level 1:  n/2      n/4      n/8                          --> (7/8)n
level 2:  n/4 n/8 n/16  n/8 n/16 n/32  n/16 n/32 n/64    --> (49/64)n
...

Each level sums to 7/8 of the level above it.
Longest path: the leftmost branch, n -> n/2 -> n/4 -> ... -> 1.
Shortest path: the rightmost branch, n -> n/8 -> n/64 -> ... -> 1.
The number of leaves (l): 3^(log_8 n) < l < 3^(log_2 n), i.e. n^0.5 < l < n^1.585.
Look at the tree: up to log_8(n) levels the tree is full, and then as we go down more and more internal nodes are absent. By this reasoning we can give the bounds
T(n) = O(sum for j=0 to log_2(n)-1 of (7/8)^j * n) = O(n)
T(n) = Omega(sum for j=0 to log_8(n)-1 of (7/8)^j * n) = Omega(n)
Therefore, T(n) = Theta(n).
The key points are:
The T(n/2) path has the greatest length.
The tree is not a complete ternary tree: its height is log base 2 of n, and the number of leaves is less than n.
Hopefully you can solve the problem this way.
Height of the tree: we take log(n) (base 2) because the n/2 branch makes the tree taller than the n/4 and n/8 branches, and our geometric series runs until k = log_2(n).
There are two ways of solving this. One is unrolling the recursion and looking for a pattern, which can require inventiveness and can be really hard. The other is to use the Akra-Bazzi method.
In this case g(x) = x, a1 = a2 = a3 = 1 and b1 = 1/2, b2 = 1/4, b3 = 1/8. Solving the equation
1/2^p + 1/4^p + 1/8^p = 1, you get p = 0.87915. Evaluating the integral in the Akra-Bazzi formula,
T(x) = Θ(x^p * (1 + integral from 1 to x of u/u^(p+1) du)) = Θ(x^p * (1 + (x^(1-p) - 1)/(1 - p))) = Θ(x),
which means that the complexity is O(n).
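To back up the Θ(n) result numerically, here is a short Python sketch (using integer floors and the arbitrary base case T(0) = 0) that evaluates the recurrence with a memo:

import functools

@functools.lru_cache(maxsize=None)
def T(n):
    # T(n) = T(n/2) + T(n/4) + T(n/8) + n, with floors and T(0) = 0
    if n == 0:
        return 0
    return T(n // 2) + T(n // 4) + T(n // 8) + n

for k in (10, 14, 18, 22):
    n = 2 ** k
    print(n, T(n) / n)  # stays bounded, below the geometric-series limit 8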
Just like coding the Fibonacci Sequence (the hard way) as an example, you would simply type something along the lines of:
long fib(long n){
    if(n <= 1) return n;             // base cases: fib(0) = 0, fib(1) = 1
    else return fib(n-1) + fib(n-2); // two recursive calls => exponential time
}
Or, better yet, memoize it using a global variable to make it much quicker. Once again, with the Fibonacci sequence:

static ArrayList<Long> fib_global = new ArrayList<>(List.of(0L, 1L));
// declare a global cache, seeded with fib(0) and fib(1), that can be appended to

long fib(int n){
    if(n >= fib_global.size())
        fib_global.add(fib(n - 1) + fib(n - 2)); // each value is computed only once
    return fib_global.get(n);
}
The code will only execute one of these calls at a time, in the left-to-right order you typed them in, and with memoization each value is computed only once, so you don't really need to worry about the number of times something is called.
As CLRS explains, we can guess that T(n) is O(n), replace T(n) by cn, and prove the bound by mathematical induction. The inductive assumption applies to the values below n; what we need to prove is that the bound holds for the parameter value n itself. Therefore, as follows:
assume: T(m) <= cm for all m below n;
conclude:
T(n) = T(n/2) + T(n/4) + T(n/8) + n
<= c/2*n + c/4*n + c/8*n + n
= (7/8*c + 1) * n
<= cn (when c >= 8)
That's all.