0,1,3,6,10,15,21,...
the difference between consecutive terms increases by one each time, following the natural numbers.
I tried to generate the nth term of the series but ended up with TLE (time limit exceeded).
Here's my code:
int s = 0;
for (int i = 1; i <= n; i++)
    s = s + (i - 1);   // accumulates 0 + 1 + ... + (n - 1), so this is O(n) per query
Can anybody help me with a better algorithm?
Think of the problem as solving a recurrence of the form
T(n) = T(n-1) + n
given T(0)=0
T(1) = T(0) + 1
T(2) = T(1) + 2
Solving the recurrence, you obtain T(n) = n*(n+1)/2.
This series gives, for n, the sum of the natural numbers from 0 to n. There is a simple closed-form formula for calculating this: (n * (n + 1)) / 2.
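For completeness, a minimal sketch of the constant-time version (in Python; nth_term is a name chosen here, with the series 0-indexed so that T(0) = 0):

    def nth_term(n):
        # n-th term of 0, 1, 3, 6, 10, 15, 21, ... with T(0) = 0,
        # i.e. the sum of the natural numbers from 0 to n
        return n * (n + 1) // 2

    # sanity check against the first few terms of the series
    assert [nth_term(n) for n in range(7)] == [0, 1, 3, 6, 10, 15, 21]

This replaces the TLE-prone loop with a single arithmetic expression.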
def maximum(array):
    largest = array[0]     # renamed from 'max', which shadows the built-in
    counter = 0
    for i in array:
        counter += 1       # the original incremented an undefined 'size' here
        if i > largest:
            largest = i
    return largest
I need to analyze this algorithm, which finds the maximum number in an array of n numbers. The only thing I want to know is how to get the recursive and general formula for the average case of this algorithm.
Not sure what you mean by "recursive and general formula for the average case of this algorithm". Your algorithm is not recursive, so how can there be a "recursive formula"?
A recursive way to find the maximum in an array:

def findMax(Array, n):
    if n == 1:
        return Array[0]    # base case (the original returned A[0], a typo)
    return max(Array[n - 1], findMax(Array, n - 1))
I guess you want a recurrence relation.
Let T(n) be the time taken to find the maximum of n elements. For the code above:
T(n) = T(n-1) + 1 .... Equation I
In case you are interested in solving the recurrence relation:
T(n-1) = T((n-1)-1) + 1 = T(n-2) + 1 .... Equation II
If you substitute value of T(n-1) from Equation II into Equation I, you get:
T(n) = (T(n-2) + 1) + 1 = T(n-2) + 2
Similarly,
T(n) = T(n-3) + 3
T(n) = T(n-4) + 4
and so on.
Continuing the above for k times,
T(n) = T(n-k) + k
Since the recursion stops at the base case n = 1, set n - k = 1, i.e. k = n - 1. The equation then becomes
T(n) = T(1) + (n - 1)
and T(1) is a constant.
Therefore, the recursive algorithm we came up with has time complexity O(n).
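As a quick empirical check (a sketch in Python; the stats counter is added here purely for illustration), the recursive version makes exactly n calls, matching T(n) = T(n-1) + 1:

    def find_max_counted(array, n, stats):
        # stats is a one-element list used as a mutable call counter
        stats[0] += 1
        if n == 1:
            return array[0]
        return max(array[n - 1], find_max_counted(array, n - 1, stats))

    stats = [0]
    find_max_counted([3, 1, 4, 1, 5, 9, 2, 6], 8, stats)
    print(stats[0])  # 8 calls for n = 8, i.e. linear in n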
Hope it helped.
How to get big-O for this?
T(N) = 2T(N − 1) + N, T(1) = 2
I got two candidate answers, O(2^N) and O(N^2), but I am not sure how to solve it correctly.
Divide T(N) by 2^N and name the result:
S(N) = T(N)/2^N
From the definition of T(N) we get
S(N) = S(N-1) + N/2^N (eq.1)
meaning that S(N) increases, but quickly converges to a constant (since N/2^N -> 0). So,
T(N)/2^N -> constant
or
T(N) = O(2^N)
Detailed proof
In a comment below, Paul Hankin suggests how to complete the proof. Take eq.1 and sum it from N = 2 to N = M:
sum_{N=2}^M S(N) = sum_{N=2}^M S(N-1) + sum_{N=2}^M N/2^N
                 = sum_{N=1}^{M-1} S(N) + sum_{N=2}^M N/2^N
Thus, after canceling the terms S(2), S(3), ..., S(M-1) that appear on both sides, we get
S(M) = S(1) + sum_{N=2}^M N/2^N
and since the series on the right converges (because its terms are bounded by 1/N^2 for N >> 1, and sum 1/N^2 is known to converge), S(M) converges to a finite constant.
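For what it's worth, a small numerical check (a sketch; it iterates the given recurrence from the base case T(1) = 2) shows the ratio T(N)/2^N settling toward a constant:

    def T(n):
        # T(1) = 2, T(n) = 2*T(n-1) + n, computed iteratively with exact integers
        t = 2
        for k in range(2, n + 1):
            t = 2 * t + k
        return t

    for n in (5, 10, 20, 40):
        print(n, T(n) / 2 ** n)  # the ratio approaches a constant (here 2.5)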
It's a math problem and Leandro Caniglia is right.
let b(n) = T(n) / 2^n
thus b(n) = b(n-1) + n / 2^n = b(n-2) + n / 2^n + (n-1) / 2^(n-1) ....
The series sum of i/2^i converges (to 2), so its partial sums are bounded by a constant.
Thus b(n) < C,
and thus T(n) < 2^n * C.
For the other direction, T(n) >= 2T(n-1) with T(1) = 2, so T(n) >= 2^n.
So T(n) is Θ(2^n), and in particular O(2^n).
Check by plugging each candidate answer into the equation:
2^N = 2 * 2^(N-1) + N = 2^N + N
or
N^2 = 2(N-1)^2 + N
Keeping only the dominant terms, you have
2^N ~ 2^N
or
N^2 ~ 2N^2.
Conclude: only the exponential candidate is self-consistent, so T(N) = O(2^N).
Hi, I am having a tough time showing the run time of these three algorithms for T(n). Assumptions include T(0) = 0.
1) This one I know is close to Fibonacci, so I think it's close to O(n) time, but I'm having trouble showing that:
T(n) = T(n-1) + T(n-2) + 1
2) This one I am stumped on, but I think it's roughly O(log log n):
T(n) = T([sqrt(n)]) + n, for n >= 1, where [sqrt(n)] denotes the floor of sqrt(n).
3) I believe this one is roughly O(n * log log n):
T(n) = 2T(n/2) + (n/(log n)) + n.
Thanks for the help in advance.
T(n) = T(n-1) + T(n-2) + 1
Assuming T(0) = 0 and T(1) = a, for some constant a, we notice that T(n) - T(n-1) = T(n-2) + 1. That is, the growth rate of the function is given by the function itself, which suggests this function has exponential growth.
Let T'(n) = T(n) + 1. Then T'(n) = T'(n-1) + T'(n-2), by the above recurrence relation, and we have eliminated the troublesome constant term. T(n) and T'(n) differ by an additive constant of 1, so assuming they are both non-decreasing (they are), they have the same asymptotic complexity, albeit with different constants n0.
To show T'(n) has asymptotic growth of O(b^n), we would need some base cases, then the hypothesis that the condition holds for all n up to, say, k - 1, and then we'd need to show it for k; that is, cb^(k-2) + cb^(k-1) <= cb^k. We can divide through by cb^(k-2) to simplify this to 1 + b <= b^2. Rearranging, we get b^2 - b - 1 >= 0; the roots are (1 +- sqrt(5))/2, and we must discard the negative one since we cannot use a negative number as the base for our exponent. So for b >= (1+sqrt(5))/2, T'(n) may be O(b^n). A similar thought experiment shows that for b <= (1+sqrt(5))/2, T'(n) may be Omega(b^n). Thus, for b = (1+sqrt(5))/2 only, T'(n) may be Theta(b^n).
Completing the proof by induction that T(n) = O(b^n) is left as an exercise.
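As an illustrative check (a sketch assuming base cases T(0) = 0 and T(1) = 1, which the question leaves partly open), successive ratios T(n)/T(n-1) approach b = (1 + sqrt(5))/2:

    def t_values(n):
        # T(0) = 0, T(1) = 1 (assumed); T(n) = T(n-1) + T(n-2) + 1
        vals = [0, 1]
        for _ in range(2, n + 1):
            vals.append(vals[-1] + vals[-2] + 1)
        return vals

    vals = t_values(30)
    print(vals[30] / vals[29])  # ~1.618, the golden ratio (1 + 5**0.5) / 2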
T(n) = T([sqrt(n)]) + n
Obviously, T(n) is at least linear, assuming the boundary conditions require T(n) to be nonnegative. We might guess that T(n) is Theta(n) and try to prove it. Base case: let T(0) = a and T(1) = b. Then T(2) = b + 2 and T(4) = b + 6. In both cases, a large enough choice of c (depending on a and b) works to make T(n) <= cn. Suppose that our fixed value of c works for all n up to and including k. We must show that T([sqrt(k+1)]) + (k+1) <= c(k+1). We know that T([sqrt(k+1)]) <= c*sqrt(k+1) from the induction hypothesis. So T([sqrt(k+1)]) + (k+1) <= c*sqrt(k+1) + (k+1), and c*sqrt(k+1) + (k+1) <= c(k+1) can be rewritten as cx + x^2 <= cx^2 (with x = sqrt(k+1)); dividing through by x (OK since k > 1), we get c + x <= cx, and solving this for c we get c >= x/(x-1) = sqrt(k+1)/(sqrt(k+1)-1). This approaches 1 as k grows, so for large enough n, any constant c > 1 will work.
Making this proof totally rigorous by fixing the following points is left as an exercise:
making sure enough base cases are proven so that all assumptions hold
distinguishing the cases where (a) k + 1 is a perfect square (hence [sqrt(k+1)] = sqrt(k+1)) and (b) k + 1 is not a perfect square (hence sqrt(k+1) - 1 < [sqrt(k+1)] < sqrt(k+1)).
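A quick numerical check supports the Theta(n) conclusion (a sketch with an assumed base case T(n) = 1 for n <= 1; math.isqrt gives the floor of the square root):

    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        # T(n) = T([sqrt(n)]) + n
        if n <= 1:
            return 1
        return T(math.isqrt(n)) + n

    for n in (10, 1000, 10 ** 6, 10 ** 9):
        print(n, T(n) / n)  # the ratio tends to 1, consistent with Theta(n)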
T(n) = 2T(n/2) + (n/(log n)) + n
Here T(n) > 2T(n/2) + n, and the right-hand side is the recurrence for the runtime of Mergesort, which by the Master theorem is Theta(n log n), so we know our complexity is no less than that.
Indeed, by the master theorem: T(n) = 2T(n/2) + (n/(log n)) + n = 2T(n/2) + n(1 + 1/(log n)), so
a = 2
b = 2
f(n) = n(1 + 1/(log n)) is O(n) (for n>2, it's always less than 2n)
f(n) = O(n) = O(n^log_2(2) * log^0 n)
We're in case 2 of the Master Theorem still, so the asymptotic bound is the same as for Mergesort, Theta(n log n).
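As a sanity check (a sketch restricted to powers of two, with T(1) = 1 assumed), the ratio T(n)/(n log n) decreases slowly toward a constant, as Theta(n log n) predicts:

    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        # T(n) = 2T(n/2) + n/log2(n) + n, for n a power of two
        if n == 1:
            return 1
        return 2 * T(n // 2) + n / math.log2(n) + n

    for k in (5, 10, 15, 20):
        n = 2 ** k
        print(n, T(n) / (n * math.log2(n)))  # drifts down toward a constant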
I am trying to solve the following recurrence:
T(n) = T(n/3) + T(n/2) + sqrt(n)
I currently have done the following but am not sure if I am on the right track:
T(n) <= 2T(n/2) + sqrt(n)
T(n) <= 4T(n/4) + sqrt(n/2) + sqrt(n)
T(n) <= 8T(n/8) + sqrt(n/4) + sqrt(n/2) + sqrt(n)
so n/(2^k) = 1, and the sqrt portion is a geometric series, which simplifies to a(1 - r^k)/(1 - r).
k = log2(n), and the number of leaves is 2^k = 2^(log2(n)) = n, but:
I am not sure how to combine the result of 2^(log2(n)) with the sqrt(n) portion.
A good initial attempt would be to identify upper and lower bounds on the time complexity function. Since T is non-decreasing and n/3 <= n/2, these are given by:
L(n) = 2L(n/3) + sqrt(n) and U(n) = 2U(n/2) + sqrt(n), with L(n) <= T(n) <= U(n).
These two functions are much easier to solve for than T(n) itself. Consider the slightly more general function, with a single divisor b > 1:
R(n) = 2R(n/b) + sqrt(n)
When do we stop recursing? We need a stopping condition. Since it is not given, we can assume it is n = 1 without loss of generality (you'll hopefully see how). Therefore the number of terms, m, is given by:
m = log_b(n)
Since log_b(2) > 1/2 for any b < 4, the Master theorem gives R(n) = Theta(n^(log_b 2)). Therefore we can obtain the lower and upper bounds for T(n):
Omega(n^(log_3 2)) = Omega(n^0.631) and O(n^(log_2 2)) = O(n)
Can we do better than this? i.e. obtain the exact relationship between n and T(n)?
From my previous answer here, we can derive a binomial summation formula for T(n):
T(n) = sum_{i=0}^{m-1} [ sum_{j=0}^{i} binom(i,j) c1^j c2^(i-j) f(n / (a^j b^(i-j))) ] + (base-case terms, which turn out to be of lower order)
where C is such that n = C is the stopping condition for T(n). If not given, we can assume C = 1 without loss of generality.
In your example, f(n) = sqrt(n), c1 = c2 = 1, a = 3, b = 2. Therefore:
T(n) = sum_{i=0}^{m-1} [ sum_{j=0}^{i} binom(i,j) sqrt(n / (3^j 2^(i-j))) ] + ...
How do we evaluate the inner sum? Consider the standard formula for a binomial expansion, with positive exponent m:
(x + y)^m = sum_{j=0}^{m} binom(m,j) x^j y^(m-j)
Thus we replace x, y with the corresponding values in the formula, and get:
sum_{j=0}^{i} binom(i,j) sqrt(n / (3^j 2^(i-j))) = sqrt(n) * (1/sqrt(3) + 1/sqrt(2))^i
Summing this over i as a geometric series with ratio r = 1/sqrt(3) + 1/sqrt(2) ≈ 1.284 > 1, the i = m - 1 term dominates; substituting m = log2(n) (the deepest path divides by 2 at every step):
T(n) = Theta( sqrt(n) * r^(log2 n) ) = Theta( n^(1/2 + log2(r)) ) = Theta( n^0.861 )
where we arrived at the last two steps with the standard geometric series formula and logarithm rules. Note that the exponent is consistent with the bounds we found before.
Some numerical tests to confirm the relationship:
N T(N)
--------------------
500000 118537.6226
550000 121572.4712
600000 135160.4025
650000 141671.5369
700000 149696.4756
750000 165645.2079
800000 168368.1888
850000 181528.6266
900000 185899.2682
950000 191220.0292
1000000 204493.2952
Plot of log T(N) against log N:
The gradient m of such a plot satisfies T(N) ∝ N^m, and we see that m = 0.863, which is quite close to the theoretical value of 0.861.
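For reference, values like those in the table can be generated with a memoized recursion (a sketch; it assumes integer division and a base value of T(n) = sqrt(n) for n < 2, so the constants, though not the exponent, may differ from the table above):

    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        # T(n) = T(n/3) + T(n/2) + sqrt(n), with an assumed small base case
        if n < 2:
            return math.sqrt(n)
        return T(n // 3) + T(n // 2) + math.sqrt(n)

    for n in range(500000, 1000001, 50000):
        print(n, round(T(n), 4))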
T(n) = 2T(n/2) + O(1)
T(n) = T(sqrt(n)) + O(1)
For the first one, I used the substitution method with guesses like n and log n; all gave me wrong answers.
Recursion trees: I don't know if I can apply them, as the root will be a constant.
Can someone help?
Let's look at the first one. First of all, you need to know T(base case). You mentioned that it's a constant, but when you do the problem it's important that you write it down. Usually it's something like T(1) = 1. I'll use that, but you can generalize to whatever it is.
Next, find out how many times you recur (that is, the height of the recursion tree). n is your problem size, so how many times can we repeatedly divide n by 2? Mathematically speaking, what's i when n/(2^i) = 1? Figure it out, hold onto it for later.
Next, do a few substitutions, until you start to notice a pattern.
T(n) = 2(2(2T(n/(2*2*2)) + θ(1)) + θ(1)) + θ(1)
Ok, the pattern is that we multiply T() by 2 a bunch of times, and divide n by 2 a bunch of times. How many times? i times.
T(n) = (2^i)*T(n/(2^i)) + ...
For the big-θ terms at the end, we use a cute trick. Look above where we have a few substitutions, and ignore the T() part. We want the sum of the θ terms. Notice that they add up to (1 + 2 + 4 + ... + 2^(i-1)) * θ(1). Can you find a closed form for 1 + 2 + 4 + ... + 2^(i-1)? I'll give you that one; it's (2^i - 1). It's a good one to just memorize, but here's how you'd figure it out: it's a geometric series with ratio 2, so the sum is (2^i - 1)/(2 - 1) = 2^i - 1.
Anyway, all in all we get
T(n) = (2^i) * T(n/(2^i)) + (2^i - 1) * θ(1)
If you solved for i earlier, then you know that i = log_2(n). Plug that in, do some algebra, and you get down to
T(n) = n*T(1) + (n - 1)*θ(1). With T(1) = 1, this is n plus (n - 1) times a constant. We drop lower-order terms and constants, so it's θ(n).
Prasoon Saurav is right about using the master method, but it's important that you know what the recurrence relation is saying. The things to ask are, how much work do I do at each step, and what is the number of steps for an input of size n?
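If you want to see the closed form emerge numerically, here is a sketch (assuming T(1) = 1 and n a power of two):

    def T(n):
        # T(1) = 1, T(n) = 2*T(n/2) + 1
        if n == 1:
            return 1
        return 2 * T(n // 2) + 1

    for k in (4, 8, 12):
        n = 2 ** k
        print(n, T(n))  # prints 2n - 1 each time, i.e. θ(n)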
Use Master Theorem to solve such recurrence relations.
Let a be an integer greater than or equal to 1 and b a real number greater than 1. Let c be a positive real number and d a nonnegative real number. Given a recurrence of the form
T(n) = a*T(n/b) + n^c  if n > 1
T(n) = d               if n = 1
then for n a power of b:
if log_b(a) < c, T(n) = Θ(n^c);
if log_b(a) = c, T(n) = Θ(n^c log n);
if log_b(a) > c, T(n) = Θ(n^(log_b a)).
1) T(n) = 2T(n/2) + O(1)
In this case
a = b = 2;
log_b(a) = 1; c = 0 (since n^c = 1 implies c = 0).
So Case (3) is applicable, and T(n) = Θ(n) :)
2) T(n) = T(sqrt(n)) + O(1)
Let m = log2(n);
=> T(2^m) = T(2^(m/2)) + O(1)
Now, renaming K(m) = T(2^m), we get K(m) = K(m/2) + O(1).
Apply Case (2): K(m) = Θ(log m), so T(n) = Θ(log log n).
For part 1, you can use the Master Theorem as @Prasoon Saurav suggested.
For part 2, just expand the recurrence:
T(n) = T(n ^ 1/2) + O(1) // sqrt(n) = n ^ 1/2
= T(n ^ 1/4) + O(1) + O(1) // sqrt(sqrt(n)) = n ^ 1/4
etc.
The series will continue for k terms until n^(1/(2^k)) <= 2, i.e. when 2^k = log n, or k = log log n. That gives T(n) = k * O(1) = O(log log n).
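The same expansion can be checked by just counting iterations (a sketch; the stopping condition n <= 2 is an assumption, since the question leaves the base case unspecified):

    import math

    def steps(n):
        # count how many times n -> sqrt(n) can be applied before n <= 2
        count = 0
        while n > 2:
            n = math.sqrt(n)
            count += 1
        return count

    for k in (2, 4, 8, 16, 32):
        print(2.0 ** k, steps(2.0 ** k), math.log2(k))  # step count equals log2(log2(n))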
Let's look at the first recurrence, T(n) = 2T(n/2) + 1. The n/2 is our clue here: each nested term's parameter is half that of its parent, so if we start with n = 2^k we have k levels of expansion before we hit the base case T(1). But note the factor of 2: each level doubles the number of subterms, so level i contributes 2^i copies of the +1. Hence, assuming T(1) = 1, we get T(2^k) = 2^k + (2^(k-1) + ... + 2 + 1) = 2^(k+1) - 1. Since n = 2^k, this gives T(n) = 2n - 1, i.e. Θ(n).
We can apply a similar trick to your second recurrence, T(n) = T(n^0.5) + 1. Here there is no factor of 2, so each of the expansion steps really does add just 1. If we start with n = 2^(2^k), each step takes the square root, halving the exponent, so we have k terms before reaching T(2). Assuming T(2) = 1, we must have T(2^(2^k)) = k + 1. Since n = 2^(2^k), we have k = log_2(log_2(n)), hence T(n) = log_2(log_2(n)) + 1.
Recurrence relations, and recursive functions as well, are best solved by starting from small cases. In case 1: T(1) = 1; T(2) = 3; T(4) = 7; T(8) = 15. It's clear that T(n) = 2*n - 1, which in O notation is O(n).
In the second case: T(1) = 1; T(2) = 2; T(4) = 3; T(16) = 4; T(256) = 5; T(256 * 256) = 6. It takes little time to find out that T(n) = log(log(n)) + 2, where log is in base 2. Clearly this is an O(log(log(n))) relation.
Most of the time, the best way to deal with a recurrence is to draw the recursion tree and carefully handle the base case.
However, here I will give you a slight hint for solving with the substitution method.
In the first recurrence, try the substitution n = 2^k.
In the second recurrence, try the substitution n = 2^(2^k).