I want to find out the time complexity of this program using recurrence equations. That is:
int g(int x);   /* forward declaration, since f calls g */

int f(int x)
{
    if (x < 1) return 1;
    else return f(x - 1) + g(x);
}

int g(int x)
{
    if (x < 2) return 1;
    else return f(x - 1) + g(x / 2);
}
I wrote its recurrence equation and tried to solve it, but it keeps getting more complex:
T(n) = T(n-1) + g(n) + c
     = T(n-2) + g(n-1) + g(n) + 2c
     = T(n-3) + g(n-2) + g(n-1) + g(n) + 3c
     = T(n-4) + g(n-3) + g(n-2) + g(n-1) + g(n) + 4c
     ...
After the kth expansion:
     = kc + g(n) + g(n-1) + ... + g(n-k+1) + T(n-k)
Suppose the argument reaches 1 at the kth step. Then n - k = 1, so k = n - 1.
Now I end up with this:
T(n) = (n-1)c + g(n) + g(n-1) + g(n-2) + ... + g(2) + T(1)
I'm not able to solve it further.
Anyway, if we count the number of function calls in this program, it is easy to see that the time complexity is exponential, but I want to prove it using recurrences. How can that be done?
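For reference, here is a minimal instrumented sketch of that counting (the global counter calls and the cutoff x <= 25 are arbitrary choices of mine; 25 keeps the results within int range). The printed count roughly doubles every time x increases by 1:

#include <stdio.h>

static long long calls;   /* incremented on every call to f or g */

int g(int x);             /* forward declaration */

int f(int x)
{
    calls++;
    if (x < 1) return 1;
    else return f(x - 1) + g(x);
}

int g(int x)
{
    calls++;
    if (x < 2) return 1;
    else return f(x - 1) + g(x / 2);
}

int main(void)
{
    for (int x = 1; x <= 25; x++) {
        calls = 0;
        f(x);
        printf("x = %2d  calls = %lld\n", x, calls);
    }
    return 0;
}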
The explanation in Answer 1 looks correct; it is similar to the work I did.
The most difficult task for this code is writing its recurrence equation. I have drawn another diagram and identified some patterns; I think the diagram can help suggest what the recurrence equation could be.
I came up with the equation below, but I am not sure whether it is right. Please help.
T(n) = 2*T(n-1) + c * logn
OK, I think I have been able to prove that f(x) = Theta(2^x) (note that the number of calls the program makes, i.e. its time complexity, has the same order as the value f(x)). This also proves that g(x) = Theta(2^x), as f(x) > g(x) > f(x-1).
First, as everyone noted, it is easy to prove that f(x) = Omega(2^x).
Now, since f(x) = 2f(x-1) + g(x/2) and g <= f, we have the relation f(x) <= 2f(x-1) + f(x/2).
We will show that there is some constant K > 0 such that, for all sufficiently large x,
f(x) <= K*H(x), where H(x) = (2 + 1/x)^x
This implies that f(x) = Theta(2^x), as H(x) = Theta(2^x), which itself follows from the fact that H(x)/2^x -> sqrt(e) as x -> infinity (a limit that is easy to check, for instance with Wolfram Alpha).
Now (warning: heavier math; perhaps cs.stackexchange or math.stackexchange is better suited),
the series expansion near x = infinity (which Wolfram Alpha will give you) shows that
H(x) = exp(x ln(2) + 1/2 + O(1/x))
Expanding H(x) - 2H(x-1) the same way, the difference works out to
H(x) - 2H(x-1) = Theta(1/x^2) * exp(x ln(2) + 1/2 + O(1/x)),
which is positive for large x (the exact coefficient does not matter; all we need is that the difference is positive and, up to a polynomial factor, still grows like 2^x).
and so, since H(x/2) = Theta(2^(x/2)),
[H(x) - 2H(x-1)] / H(x/2) -> infinity as x -> infinity
Thus, for sufficiently large x (say x > L) we have the inequality
H(x) >= 2H(x-1) + H(x/2)
Now there is some K (dependent only on L (for instance K = f(2L))) such that
f(x) <= K*H(x) for all x <= 2L
Now we proceed by (strong) induction (you can revert to natural numbers if you want to)
f(x+1) <= 2f(x) + f((x+1)/2)
By induction, the right side is
<= 2*K*H(x) + K*H((x+1)/2)
And we proved earlier that
2*H(x) + H((x+1)/2) <= H(x+1)
Thus f(x+1) <= K * H(x+1)
Using memoisation, both functions can easily be computed in O(n) time. But the program as written takes Omega(2^n) time, and thus is a very inefficient way of computing f(n) and g(n).
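For instance, here is a minimal bottom-up sketch of that memoisation (the cutoff N = 60 is an arbitrary choice of mine, and doubles are used only because the values themselves grow like 2^n and quickly overflow integers):

#include <stdio.h>
#include <math.h>

#define N 60

int main(void)
{
    double f[N + 1], g[N + 1];
    f[0] = 1.0;               /* f(x) = 1 for x < 1 */
    g[0] = 1.0;
    g[1] = 1.0;               /* g(x) = 1 for x < 2 */
    for (int x = 1; x <= N; x++) {
        if (x >= 2)
            g[x] = f[x - 1] + g[x / 2];   /* g(x) = f(x-1) + g(x/2) */
        f[x] = f[x - 1] + g[x];           /* f(x) = f(x-1) + g(x)   */
    }
    for (int x = 10; x <= N; x += 10)
        printf("x = %2d  f(x)/2^x = %.4f\n", x, f[x] / pow(2.0, x));
    return 0;
}

Each entry is filled in once from smaller indices, so the loop is O(n); the printed ratio f(x)/2^x staying bounded is consistent with the 2^n bounds discussed here.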
To prove that the program takes at most O((2+epsilon)^n) time for any epsilon > 0:
Let F(n) and G(n) be the number of function calls that are made in evaluating f(n) and g(n), respectively. Clearly (counting the addition as 1 function call):
F(0) = 1; F(n) = F(n-1) + G(n) + 1
G(1) = 1; G(n) = F(n-1) + G(n/2) + 1
Then one can prove:
F and G are monotonic
F > G
Define H(1) = 2; H(n) = 2 * H(n-1) + H(n/2) + 1
Clearly, H > F.
For all n, H(n) > 2 * H(n-1),
hence H(n/2) / H(n-1) -> 0 as n -> infinity,
hence H(n) < (2 + epsilon) * H(n-1) for any epsilon > 0 and sufficiently large n,
hence H is in O((2 + epsilon)^n) for any epsilon > 0.
(Edit: originally I concluded here that the upper bound is O(2^n). That is incorrect, as nhahtdh pointed out, but see below.)
So this is the best I can prove... Because G < F < H, they are also in O((2 + epsilon)^n) for any epsilon > 0.
Postscript (after seeing Mr Knoothe's solution): because, in my humble opinion, a good mathematical proof gives insight rather than lots of formulas, and SO exists for all those future generations (hi gals!):
For many algorithms, calculating f(n+1) involves twice (thrice, ...) the amount of work for f(n), plus something more. If this something more becomes relatively smaller with increasing n (which is often the case), using a fixed epsilon as above is not optimal.
Replacing the epsilon above by some decreasing function ε(n) of n will in many cases (if ε decreases fast enough, say ε(n) = 1/n) yield an upper bound O((2 + ε(n))^n) = O(2^n).
Let f(0) = 1 and g(0) = g(1) = 1 (the base cases from the code).
From the function we have,
f(x) = f(x - 1) + g(x)
g(x) = f(x - 1) + g(x/2)
Substituting g(x) in f(x) we get,
f(x) = f(x-1) + f(x -1) + g(x/2)
∴f(x) = 2f(x-1) + g(x/2)
Expanding this we get,
f(x) = 2f(x-1)+f(x/2-1)+f(x/4-1)+ ... + f(1)
Let s(x) be a function defined as follows:
s(0) = 1; s(x) = 2s(x-1)
Since f(x) >= 2f(x-1) and f(0) >= s(0), clearly f(x) = Ω(s(x)).
Now s(x) = 2^x.
Therefore f(x) = Ω(2^x).
I think it is easy to see that f(n) = Ω(2^n), because f(n) > h(n), where h(n) = 2h(n-1), i.e. h(n) = Θ(2^n).
Now I claim that for every n there is an ε such that:
f(n) < (2+ε)^n. To see this, let's do it by induction, but to make it more readable, at first I'll use ε = 1 to show f(n) < 3^n; then I'll extend it.
We will use strong induction: suppose that for every m < n, f(m) < 3^m. Then we have:
f(n) = 2f(n-1) + f(n/2 - 1) + f(n/4 - 1) + ... + f(0)
but for this part:
A = f(n/2 - 1) + f(n/4 - 1) + ... + f(0)
we have:
f(n/2) = 2f(n/2 - 1) + f(n/4 - 1) + ... + f(0) >= A ==>
A <= f(n/2) [1]
So we can rewrite f(n):
f(n) = 2f(n-1) + A <= 2f(n-1) + f(n/2),
Now let's get back to our claim. By the induction hypothesis,
f(n) < 2·3^(n-1) + 3^(n/2)
     <= 2·3^(n-1) + 3^(n-1)   (since 3^(n/2) <= 3^(n-1) for n >= 2)
     = 3^n. [2]
By [2], the proof of f(n) ∈ O(3^n) is complete.
But if you want to extend this to the form (2+ε)^n, just redo the induction with a general ε, using [1]; then we get
ε > 1/(2+ε)^(n/2 - 1) → f(n) < (2+ε)^n. [3]
Also, by [3] you can say that for every n there is an ε such that f(n) < (2+ε)^n; in fact, there is a constant ε such that for n > n0, f(n) ∈ O((2+ε)^n). [4]
Now we can use Wolfram Alpha like @Knoothe, setting ε = 1/n; then we get:
f(n) < (2 + 1/n)^n, which results in f(n) < e·2^n, and by our simple lower bound at the start we have f(n) ∈ Θ(2^n). [5]
P.S.: I didn't calculate the exact epsilon, but you can do it simply with pen and paper. I think the epsilon above is not exactly right, but it is easy to find; if it turns out to be hard, tell me and I'll write it out.
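For completeness, here is the elementary bound behind the ε = 1/n step (my own addition, using only 1 + t ≤ e^t for t ≥ 0):
(2 + 1/n)^n = 2^n · (1 + 1/(2n))^n ≤ 2^n · e^(n·1/(2n)) = sqrt(e) · 2^n < e · 2^n
which is where the f(n) < e·2^n claim above comes from.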
Related
Hi, I am having a tough time showing the running time of these three recurrences for T(n). Assumptions include T(0) = 0.
1) This one I know is close to Fibonacci, so I expect it to grow like the Fibonacci numbers, but I'm having trouble showing that:
T(n) = T(n-1) + T(n-2) +1
2) This one I am stumped on, but I think it's roughly O(log log n):
T(n) = T([sqrt(n)]) + n, for n >= 1, where [sqrt(n)] denotes the floor of sqrt(n).
3) I believe this one is roughly O(n log log n):
T(n) = 2T(n/2) + (n/(log n)) + n.
Thanks for the help in advance.
T(n) = T(n-1) + T(n-2) + 1
Assuming T(0) = 0 and T(1) = a, for some constant a, we notice that T(n) - T(n-1) = T(n-2) + 1. That is, the growth rate of the function is given by the function itself, which suggests this function has exponential growth.
Let T'(n) = T(n) + 1. Then T'(n) = T'(n-1) + T'(n-2), by the above recurrence relation, and we have eliminated the troublesome constant term. T(n) and T'(n) differ only by an additive constant of 1, so assuming they are both non-decreasing (they are), they have the same asymptotic complexity, albeit for different constants n0.
To show T'(n) has asymptotic growth of O(b^n), we need some base cases, the hypothesis that the condition holds for all n up to, say, k - 1, and then we need to show it for k; that is, we need cb^(k-2) + cb^(k-1) <= cb^k. We can divide through by cb^(k-2) to simplify this to 1 + b <= b^2. Rearranging, we get b^2 - b - 1 >= 0; the roots are (1 ± sqrt(5))/2, and we must discard the negative one since we cannot use a negative number as the base for our exponent. So for b >= (1+sqrt(5))/2, T'(n) may be O(b^n). A similar argument shows that for b <= (1+sqrt(5))/2, T'(n) may be Omega(b^n). Thus, for b = (1+sqrt(5))/2 only, T'(n) may be Theta(b^n).
Completing the proof by induction that T(n) = O(b^n) is left as an exercise.
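As a quick numerical sanity check, not a proof (a small sketch of my own; the base values T(0) = 0 and T(1) = 1 are arbitrary): computing T(n) iteratively and printing T(n)/b^n with b = (1+sqrt(5))/2 shows the ratio settling at a constant, consistent with Theta(b^n).

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double b = (1.0 + sqrt(5.0)) / 2.0;   /* golden ratio */
    double t_prev2 = 0.0, t_prev1 = 1.0;        /* T(0) = 0, T(1) = 1 (arbitrary) */
    for (int n = 2; n <= 60; n++) {
        double t = t_prev1 + t_prev2 + 1.0;     /* T(n) = T(n-1) + T(n-2) + 1 */
        if (n % 10 == 0)
            printf("n = %2d  T(n)/b^n = %.6f\n", n, t / pow(b, n));
        t_prev2 = t_prev1;
        t_prev1 = t;
    }
    return 0;
}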
T(n) = T([sqrt(n)]) + n
Obviously, T(n) is at least linear, assuming the boundary conditions require T(n) to be nonnegative. We might guess that T(n) is Theta(n) and try to prove it. Base case: let T(0) = a and T(1) = b. Then T(2) = b + 2 and T(4) = b + 6. In both cases, a suitable choice of c (say c >= 1.5, for small enough b) will make T(n) <= cn. Suppose that whatever our fixed value of c is works for all n up to and including k. We must show that T([sqrt(k+1)]) + (k+1) <= c(k+1). We know that T([sqrt(k+1)]) <= c·sqrt(k+1) from the induction hypothesis. So T([sqrt(k+1)]) + (k+1) <= c·sqrt(k+1) + (k+1), and c·sqrt(k+1) + (k+1) <= c(k+1) can be rewritten as cx + x^2 <= cx^2 (with x = sqrt(k+1)); dividing through by x (OK since k > 1) we get c + x <= cx, and solving this for c we get c >= x/(x-1) = sqrt(k+1)/(sqrt(k+1) - 1). This approaches 1, so for large enough n, any constant c > 1 will work.
Making this proof totally rigorous by fixing the following points is left as an exercise:
making sure enough base cases are proven so that all assumptions hold
distinguishing the cases where (a) k + 1 is a perfect square (hence [sqrt(k+1)] = sqrt(k+1)) and (b) k + 1 is not a perfect square (hence sqrt(k+1) - 1 < [sqrt(k+1)] < sqrt(k+1)).
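Again as a numerical sanity check rather than part of the proof (a sketch of my own; the base values T(0) = T(1) = 1 are arbitrary): computing T(n) = T([sqrt(n)]) + n directly shows T(n)/n approaching 1, i.e. the Theta(n) behaviour argued above.

#include <stdio.h>
#include <math.h>

/* T(n) = T(floor(sqrt(n))) + n, computed by direct recursion.
   The recursion depth is only about log log n, so this is cheap. */
static double T(long long n)
{
    if (n <= 1) return 1.0;   /* arbitrary base values T(0) = T(1) = 1 */
    return T((long long)sqrt((double)n)) + (double)n;
}

int main(void)
{
    for (long long n = 10; n <= 10000000000LL; n *= 10)
        printf("n = %lld  T(n)/n = %.6f\n", n, T(n) / (double)n);
    return 0;
}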
T(n) = 2T(n/2) + (n/(log n)) + n
This T(n) > 2T(n/2) + n, and the right-hand side is the recurrence for the running time of Mergesort, which by the Master theorem is Theta(n log n), so we know our complexity is no less than that.
Indeed, by the master theorem: T(n) = 2T(n/2) + (n/(log n)) + n = 2T(n/2) + n(1 + 1/(log n)), so
a = 2
b = 2
f(n) = n(1 + 1/(log n)) is Theta(n) (for n > 2 it is always between n and 2n)
f(n) = Theta(n) = Theta(n^(log_2 2) * log^0 n)
We're in case 2 of the Master Theorem still, so the asymptotic bound is the same as for Mergesort, Theta(n log n).
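Spelled out, the extended form of case 2 being used here (my own restatement) is: if T(n) = a·T(n/b) + f(n) and f(n) = Theta(n^(log_b a) · log^k n) for some constant k >= 0, then T(n) = Theta(n^(log_b a) · log^(k+1) n). With a = b = 2 and k = 0 this gives
T(n) = Theta(n^(log_2 2) · log^(0+1) n) = Theta(n log n)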
I'm having some trouble with basic runtime understanding, maybe someone can clarify for me.
How would I go about determining the runtime of this function?
I need to determine whether f = O(g), f = Omega(g), or f = Theta(g).
f(n) = 100n + log n
g(n) = n + (log n)^2
So 100n and n are of the same order, and linear growth dominates logarithmic growth; at this point do I still need to look at the log part? Or can I conclude that f = Theta(g)?
You can safely determine that they are the same order of magnitude. There is no need to look at the "log part".
Here is a formal proof for this specific case; the general claim can be shown using limit arithmetic.
Let's look at the function h(n) = f(n)/g(n) as n approaches infinity. If it stays bounded away from 0 and below some number m, we know that f(n) = Theta(g(n)) (because of how Theta is defined).
So we have h(n) = (100n + log n)/(n + (log n)^2)
If we show the claim for all real x, it holds for the natural numbers too. So it is enough to look at:
h(x) = (100x + log x)/(x + (log x)^2)
We know by l'Hôpital's rule that if the limit of the ratio of the derivatives of the numerator and denominator exists, then the limit of the original ratio exists and equals the same number. Let's apply that and get:
lim x -> infinity, h(x) = lim x -> infinity, (100x + log x)/(x + (log x)^2) =
lim x -> infinity, (100 + 1/x) / (1 + 2·log(x)/x)
We know that 1/x approaches 0 as x approaches infinity, and that 2·log(x)/x approaches 0 as x approaches infinity (in your words, linear beats log). So we get from limit arithmetic:
lim x -> infinity, h(x) = 100/1 = 100
Since the limit exists in R and is nonzero, we get f(x) = Theta(g(x)), which is what we wanted to show.
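If you want to see this numerically as well (a small sketch of my own, not needed for the proof; it uses the natural log, and the base only changes constants), printing h(n) = f(n)/g(n) for growing n shows the ratio approaching 100:

#include <stdio.h>
#include <math.h>

int main(void)
{
    for (double n = 10.0; n <= 1e12; n *= 100.0) {
        double f = 100.0 * n + log(n);      /* f(n) = 100n + log n  */
        double g = n + log(n) * log(n);     /* g(n) = n + (log n)^2 */
        printf("n = %.0e  f(n)/g(n) = %.4f\n", n, f / g);
    }
    return 0;
}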
From what I have studied: I have been asked to determine the complexity of a function with respect to another function, i.e. given f(n) and g(n), determine O(f(n)). In such cases, I substitute values, compare the two, and arrive at a complexity using the O, Theta, and Omega notations.
However, in the substitution method for solving recurrences, every standard document has the following lines:
• [Assume that T(1) = Θ(1).]
• Guess O(n^3). (Prove O and Ω separately.)
• Assume that T(k) ≤ ck^3 for k < n.
• Prove T(n) ≤ cn^3 by induction.
How am I supposed to find O and Ω when nothing else (apart from f(n)) is given? I might be wrong (I definitely am), and any information on the above is welcome.
Some of the assumptions above refer to this problem: T(n) = 4T(n/2) + n, while the basic outline of the steps applies to all such problems.
That particular recurrence is solvable via the Master Theorem, but you can get some feedback from the substitution method. Let's try your initial guess of cn^3.
T(n) = 4T(n/2) + n
<= 4c(n/2)^3 + n
= cn^3/2 + n
Assuming that we choose c so that n <= cn^3/2 for all relevant n,
T(n) <= cn^3/2 + n
<= cn^3/2 + cn^3/2
= cn^3,
so T is O(n^3). The interesting part of this derivation is where we used a cubic term to wipe out a linear one. Overkill like that is often a sign that we could guess lower. Let's try cn.
T(n) = 4T(n/2) + n
<= 4cn/2 + n
= 2cn + n
This won't work. The gap between the right-hand side and the bound we want is cn + n, which is big Theta of the bound we want. That usually means we need to guess higher. Let's try cn^2.
T(n) = 4T(n/2) + n
<= 4c(n/2)^2 + n
= cn^2 + n
At first that looks like a failure as well. Unlike our guess of n, though, the deficit is little o of the bound itself. We might be able to close it by considering a bound of the form cn^2 - h(n), where h is o(n^2). Why subtraction? If we used h as the candidate bound, we'd run a deficit; by subtracting h, we run a surplus. Common choices for h are lower-order polynomials or log n. Let's try cn^2 - n.
T(n) = 4T(n/2) + n
<= 4(c(n/2)^2 - n/2) + n
= cn^2 - 2n + n
= cn^2 - n
That happens to be the exact solution to the recurrence, which was rather lucky on my part. If we had guessed cn^2 - 2n instead, we would have had a little credit left over.
T(n) = 4T(n/2) + n
<= 4(c(n/2)^2 - 2n/2) + n
= cn^2 - 4n + n
= cn^2 - 3n,
which is slightly smaller than cn^2 - 2n.
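As a quick check of that closed form (a sketch of my own; the base value t1 = T(1) = 3 is an arbitrary choice): for powers of two, the recurrence with T(1) = t1 is solved exactly by T(n) = (t1 + 1)·n^2 - n, which is the cn^2 - n shape above with c = t1 + 1.

#include <stdio.h>

/* T(n) = 4T(n/2) + n for n a power of two, with T(1) = t1. */
static long long T(long long n, long long t1)
{
    if (n == 1) return t1;
    return 4 * T(n / 2, t1) + n;
}

int main(void)
{
    const long long t1 = 3;                       /* arbitrary base value */
    for (long long n = 1; n <= (1LL << 20); n *= 2) {
        long long exact = (t1 + 1) * n * n - n;   /* claimed closed form  */
        printf("n = %7lld  T(n) = %lld  (t1+1)*n^2 - n = %lld\n",
               n, T(n, t1), exact);
    }
    return 0;
}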
I can clearly see that N^2 is bounded by c·2^N, but how do I prove it using the formal definition of big-O? I can easily prove it by mathematical induction.
Here is my attempt..
By definition, there exist a constant C and an n0 such that for any n > n0,
f(n) <= C·g(n)
where
f(n) = n^2
and
g(n) = 2^n
Should I take the log of both sides and solve for C?
And one more question, about the Fibonacci sequence: I want to solve its recurrence relation.
int fib(int n)
{
    if (n <= 1) return n;
    else return fib(n - 1) + fib(n - 2);
}
The recurrence is:
T(n) = T(n-1) + T(n-2) + c   // where c is for the adding operation
so
T(n) = T(n-2) + 2T(n-3) + T(n-4) + 3c
and one more expansion:
T(n) = T(n-3) + 3T(n-4) + 3T(n-5) + T(n-6) + 7c
Then I started to get lost trying to form the general equation.
The pattern is somewhat like Pascal's triangle?
T(n) = T(n-i) + aT(n-i-1) + bT(n-i-2) + ... + kT(n-i-i) + C
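As a quick way to check that hunch (a small sketch I added, not part of the original question): expanding the recurrence level by level, while tracking the coefficient of each remaining T(...) term and the accumulated constant, prints exactly the rows of Pascal's triangle, with constants c, 3c, 7c, 15c, ... = (2^i - 1)c, as long as every remaining term is still above the base cases.

#include <stdio.h>

int main(void)
{
    /* coef[j] is the coefficient of T(n - i - j) after i expansion levels */
    long long coef[16] = {1};   /* level 0: just 1*T(n) */
    long long constant = 0;     /* accumulated multiples of c */
    for (int i = 1; i <= 5; i++) {
        long long next[16] = {0};
        long long expanded = 0;
        for (int j = 0; j < i; j++) {        /* expand every T(n-(i-1)-j)     */
            next[j]     += coef[j];          /* ...into T(n-i-j)              */
            next[j + 1] += coef[j];          /* ...and T(n-i-j-1)             */
            expanded    += coef[j];          /* each expanded term adds one c */
        }
        constant += expanded;
        for (int j = 0; j <= i; j++) coef[j] = next[j];
        printf("level %d: coefficients", i);
        for (int j = 0; j <= i; j++) printf(" %lld", coef[j]);
        printf("   constant = %lldc\n", constant);
    }
    return 0;
}

So, while n - 2i stays above the base cases, the general form is T(n) = C(i,0)·T(n-i) + C(i,1)·T(n-i-1) + ... + C(i,i)·T(n-2i) + (2^i - 1)·c, with binomial coefficients C(i,j).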
As you point out, to see if f(x) ∈ O(g(x)) you need to find...
...some c > 0 and
...some x0
such that f(x) < c·g(x) for all x > x0.
In this case, you can pick c = 1 and x0 = 2. What you need to prove is that
x^2 < 2^x for all x > 2
At this point you can log both sides (since if log(x) > log(y), then x > y). Assuming you're using base-2 log you get the following
log(x^2) < log(2^x)
and by standard laws of logarithms, you get
2·log(x) < x·log(2)
Since log(2) = 1 this can be written as
2·log(x) < x
If we set x = 2, we get
2·log(2) = 2
and since x grows faster than log(x) we know that 2·log(x) < x holds for all x > 2.
For the most part, the accepted answer (from aioobe) is correct, but there is an important correction that needs to be made.
Yes, for x=2, 2×log(x) = x or 2×log(2) = 2 is correct, but then he incorrectly implies that 2×log(x) < x is true for ALL x>2, which is not true.
Let's take x=3, so the inequality becomes: 2×log(3) < 3 (an invalid inequality).
If you calculate this, you get: 2×log(3) ≈ 3.16993, which is greater than 3.
You can clearly see this if you plot f(x) = x^2 and g(x) = 2^x, or if you plot f(x) = 2×log(x) and g(x) = x (if c=1).
Between x=2 and x=4, you can see that g(x) will dip below f(x). It is only when x ≥ 4, that f(x) will remain ≤ c×g(x).
So to get the correct answer, you follow the steps described in aioobe's answer, but you plot the functions to find the last intersection where f(x) = c×g(x). The x at that intersection is your x0 (together with the chosen c) for which the following is true: f(x) ≤ c×g(x) for all x ≥ x0.
So for c=1 it should be: for all x≥4, or x0=4
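A quick table (a sketch of my own) makes the crossover visible: with c = 1, x^2 rises above 2^x between x = 2 and x = 4, and only from x = 4 on does x^2 <= 2^x hold (with equality at x = 4):

#include <stdio.h>

int main(void)
{
    for (int x = 1; x <= 8; x++) {
        long long sq = (long long)x * x;   /* f(x) = x^2 */
        long long pw = 1LL << x;           /* g(x) = 2^x */
        printf("x = %d  x^2 = %3lld  2^x = %3lld  %s\n",
               x, sq, pw, sq <= pw ? "x^2 <= 2^x" : "x^2 >  2^x");
    }
    return 0;
}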
To improve upon the accepted answer:
You actually have to prove that x^2 < 2^x for all x > 4 (as the previous answer notes, the inequality fails at x = 3, so x0 = 2 is too small).
Taking the base-2 log on both sides, we have to prove that:
2·log(x) < x for all x > 4
Thus we have to show that the function h(x) = x - 2·log(x) > 0 for all x > 4.
h(4) = 4 - 2·log(4) = 0
Differentiating h(x) with respect to x, we get h'(x) = 1 - 2/(x·ln(2)).
For all x > 2/ln(2) ≈ 2.89, and in particular for all x ≥ 4, h'(x) > 0, so h(x) is increasing there. Since h(4) = 0,
it follows that h(x) > 0 for all x > 4,
or x^2 < 2^x for all x > 4.
(log n)^k = O(n)? For k greater than or equal to 1.
My professor presented us with this statement in class; however, I am not sure what it means for a function to have a time complexity of O(n). Even something like n^2 = O(n^2): how can a function f(x) have a running-time complexity?
As for the statement how does it equal O(n) rather than O((logn)^k)?
(log n)^k = O(n)?
Yes. The definition of big-Oh is that a function f is in O(g(n)) if there exist positive constants N and c, such that for all n > N: f(n) <= c*g(n). In this case f(n) is (log n)^k and g(n) is n, so if we insert that into the definition we get: "there exist constants N and c, such that for all n > N: (log n)^k <= c*n". This is true so (log n)^k is in O(n).
how can a function f(x) have a run time complexity
It doesn't. Nothing about big-Oh notation is specific to run-time complexity. Big-Oh is a notation to classify the growth of functions. Often the functions we're talking about measure the run-time of certain algorithms, but we can use big-Oh to talk about arbitrary functions.
f(x) = O(g(x)) means f(x) grows slower or comparably to g(x).
Technically this is interpreted as "We can find an x value, x_0, and a scale factor, M, such that the size of f(x) past x_0 is less than the scaled size of g(x)." Or in math:
|f(x)| < M |g(x)| for all x > x_0.
So for your question:
log(x)^k = O(x)? is asking : is there an x_0 and M such that
log(x)^k < M x for all x>x_0.
The existence of such an M and x_0 can be established using various limit results, and is relatively simple using L'Hôpital's rule... however, it can be done without calculus.
The simplest proof I can come up with that doesn't rely on L'Hopitals rule uses the Taylor series
e^z = 1 + z + z^2/2 + ... = sum z^m / m!
Using z = (N! x)^(1/N) we can see that
e^((N! x)^(1/N)) = 1 + (N! x)^(1/N) + (N! x)^(2/N)/2! + ... + (N! x)^(N/N)/N! + ...
For x>0 all terms are positive so, keeping only the Nth term we get that
e^((N! x)^(1/N)) = N! x / N! + (...)
= x + (...)
> x for x > 0
Taking logarithms of both sides (log is monotonically increasing) gives
(N! x)^(1/N) > log x for x > 0
For x >= 1 both sides are nonnegative, so raising to the Nth power (also monotonic for nonnegative arguments, since N > 0) gives
N! x > (log x)^N for x >= 1
which is exactly the result we need: (log x)^N < M·x for some M and all x > x_0, with M = N! and x_0 = 1.
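As a numerical illustration of that final inequality (a sketch of my own; the choice N = 3 is arbitrary): (log x)^N / (N!·x) shrinks toward 0 as x grows, so M = N! and x_0 = 1 indeed witness (log x)^N = O(x).

#include <stdio.h>
#include <math.h>

int main(void)
{
    const int N = 3;                 /* the fixed exponent k = N       */
    const double factN = 6.0;        /* N! = 3! = 6                    */
    for (double x = 10.0; x <= 1e12; x *= 100.0) {
        double lhs = pow(log(x), N); /* (log x)^N with the natural log */
        printf("x = %.0e  (log x)^N / (N! * x) = %.6f\n", x, lhs / (factN * x));
    }
    return 0;
}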