Suppose
S(n) = O(f(n)) and T(n) = O(f(n)),
i.e., both are bounded in terms of the same function f(n).
My question is: why is S(n)/T(n) = O(1) incorrect?
Consider S(n) = n^2 and T(n) = n. Then both S and T are O(n^2) but S(n) / T(n) = n which is not O(1).
Here's another example. Consider S(n) = sin(n) and T(n) = cos(n). Then S and T are O(1) but S(n) / T(n) = tan(n) is not O(1). This second example is important because it shows that even if you have a tight bound, the conclusion can still fail.
Why is this happening? Because the obvious "proof" completely fails. The obvious "proof" is the following. There are constants C_S and C_T and N_S and N_T where n >= N_S implies |S(n)| <= C_S * f(n) and n >= N_T implies |T(n)| <= C_T * f(n). Let N = max(N_S, N_T). Then for n >= N we have
|S(n) / T(n)| <= (C_S * f(n)) / (C_T * f(n)) = C_S / C_T.
This is completely and utterly wrong. It is not the case that |T(n)| <= C_T * f(n) implies that 1 / |T(n)| <= 1 / (C_T * f(n)). In fact, what is true is that 1 / |T(n)| >= 1 / (C_T * f(n)). The inequality reverses, and that suggests there is a serious problem with the "theorem." The intuitive idea is that if T is "small" (i.e., bounded) then 1 / T is "big." But we're trying to show that 1 / T is "small" and we just can't do that. As our counterexamples show, the "proof" is fatally flawed.
However, there is a theorem here that is true. Namely, if S(n) is O(f(n)) and T(n) is Ω(f(n)), then S(n) / T(n) is O(1). The above "proof" works for this theorem (thanks are due to Simone for the idea to generalize to this statement).
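To spell the corrected argument out (it is just the argument above with the inequality on T now pointing the right way; assume f(n) > 0 for large n): there are constants C_S, c_T > 0 and N such that, for n >= N,
|S(n)| <= C_S * f(n) and |T(n)| >= c_T * f(n) > 0,
so
|S(n) / T(n)| <= (C_S * f(n)) / (c_T * f(n)) = C_S / c_T,
which is exactly the O(1) bound we wanted.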
Here's one counterexample:
Let's say f(n) = n^2, S(n) = n^2 and T(n) = n. Now both S and T are in O(f(n)) (you have to remember that O(n^2) is a superset of O(n), so everything that's in O(n) is also in O(n^2)), but U(n) = S(n)/T(n) = n^2/n = n is definitely not in O(1).
As the others explained, S(n) / T(n) is not O(1) in general.
Your doubt probably derives from confusing O with Θ; in fact, if:
S(n) = Θ(n) and T(n) = Θ(n)
then the following is true:
S(n) / T(n) = Θ(1) and thus S(n) / T(n) = O(1)
You can see that the claim is wrong by plugging some actual functions into your formula, e.g.
S(n) = n^2
T(n) = n
f(n) = n^2
S(n) / T(n) = n
n != O(1)
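If you want to see it numerically, here is a tiny Python sketch of my own (not part of the original answers) that evaluates the ratio for growing n:

# S(n) = n^2 and T(n) = n are both O(n^2),
# but S(n)/T(n) = n grows without bound, so it cannot be O(1).
def S(n): return n * n
def T(n): return n

for n in (10, 100, 1000, 10**6):
    print(n, S(n) / T(n))   # prints 10, 100, 1000, 1000000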
Related
T(n)= O(f(n)), G(n)= O(h(n))
How would I prove or disprove:
T(G(n)) = O(h(f(n)))
I think this is false, because it should be O(f(h(n))) instead of O(h(f(n))), since G is applied before T. I tried substituting polynomial functions for T and G; I think the order matters, since (n^2)! is not equal to (n!)^2, but I am not sure if this reasoning is correct.
You are correct, this is false, however I did not quite understand your counterexample.
A counterexample would be to take a function and its inverse, while keeping h very small asymptotically:
T(n) = 2^n, f(n) = 2^n. This is consistent with the given fact, since 2^n = O(2^n).
And
G(n) = lg(n!), h(n) = n lg(n). This is also consistent with the given fact, since lg(n!) <= lg(n^n) = n lg(n), i.e., lg(n!) = O(n lg n).
However, T(G(n)) = 2^(lg(n!)) = n! while h(f(n)) = 2^n * lg(2^n) = n * 2^n, and n! != O(n * 2^n), since a factorial beats an exponential (even one multiplied by a linear factor); thus we have shown the claim is not true.
The reason n! != O(n * 2^n) is that n * 2^n < 2^n * 2^n = 4^n, and we know that a factorial 'beats' any exponential c^n.
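A quick numerical illustration of that last point (a Python sketch of my own): the ratio n!/(n * 2^n) blows up rapidly.

import math

# n! grows much faster than n * 2^n, so n! cannot be O(n * 2^n).
for n in (5, 10, 15, 20):
    print(n, math.factorial(n) / (n * 2 ** n))   # 0.75, ~354, ~2.7e6, ~1.2e11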
How to get big-O for this?
T(N) = 2T(N − 1) + N, T(1) = 2
I got two candidate answers, O(2^N) and O(N^2), but I am not sure how to solve it correctly.
Divide T(N) by 2^N and name the result:
S(N) = T(N)/2^N
From the definition of T(N) we get
S(N) = S(N-1) + N/2^N (eq.1)
meaning that S(N) increases but converges to a constant, since the increments N/2^N shrink fast enough to be summable (see the detailed proof below). So,
T(N)/2^N -> constant
or
T(N) = O(2^N)
Detailed proof
In the comments, Paul Hankin suggests how to complete the proof. Take eq.1 and sum from N = 2 to N = M:
sum_{N=2}^M S(N) = sum_{N=2}^M S(N-1) + sum_{N=2}^M N/2^N
The first sum on the right equals sum_{N=1}^{M-1} S(N), so the terms S(2), S(3), ..., S(M-1) cancel from both sides, leaving
S(M) = S(1) + sum_{N=2}^M N/2^N
and since the series on the right converges (its terms are eventually bounded by 1/N^2, and sum 1/N^2 is known to converge), S(M) converges to a finite constant.
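If you want a numerical sanity check, here is a small Python sketch of my own, using the given base case T(1) = 2:

# Compute T(N) = 2*T(N-1) + N with T(1) = 2 and watch T(N)/2^N converge.
def T_values(n_max):
    t = {1: 2}
    for n in range(2, n_max + 1):
        t[n] = 2 * t[n - 1] + n
    return t

t = T_values(40)
for n in (5, 10, 20, 40):
    print(n, t[n] / 2 ** n)   # approaches the limit S(1) + sum_{N>=2} N/2^N = 1 + 1.5 = 2.5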
It's a math problem and Leandro Caniglia is right.
Let b(n) = T(n) / 2^n.
Then b(n) = b(n-1) + n/2^n = b(n-2) + n/2^n + (n-1)/2^(n-1) = ... = b(1) + sum_{i=2}^{n} i/2^i.
The series sum_{i>=1} i/2^i converges (its sum is 2), so these partial sums are bounded by some constant.
Thus b(n) < C for some constant C,
and therefore T(n) < 2^n * C.
It is also clear that T(n) >= 2^n (since T(1) = 2 and T(n) >= 2T(n-1)).
So T(n) is O(2^n) (in fact, Θ(2^n)).
Check by plugging the answer in the equation.
2^N = 2*2^(N-1) + N = 2^N + N
or
N^2 = 2 (N-1)^2 + N
Keeping only the dominant terms, you have
2^N ~ 2^N
or
N^2 ~ 2 N^2.
Conclude.
Hi, I am having a tough time showing the running time for these three recurrences T(n). Assumptions include T(0) = 0.
1) This one I know is close to Fibonacci, so I figured it's close to O(n) time, but I am having trouble showing that:
T(n) = T(n-1) + T(n-2) +1
2) This one I am stumped on, but I think it's roughly O(log log n):
T(n) = T([sqrt(n)]) + n, for n >= 1, where [sqrt(n)] denotes sqrt(n) rounded down (the floor).
3) I believe this one is roughly O(n*log log n):
T(n) = 2T(n/2) + (n/(log n)) + n.
Thanks for the help in advance.
T(n) = T(n-1) + T(n-2) + 1
Assuming T(0) = 0 and T(1) = a, for some constant a, we notice that T(n) - T(n-1) = T(n-2) + 1. That is, the growth rate of the function is given by the function itself, which suggests this function has exponential growth.
Let T'(n) = T(n) + 1. Then T'(n) = T'(n-1) + T'(n-2), by the above recurrence relation, and we have eliminated the troublesome constant term. T(n) and T'(n) differ by an additive constant of 1, so assuming they are both non-decreasing (they are), they have the same asymptotic complexity, albeit possibly with different constants n0.
To show T'(n) has asymptotic growth of O(b^n), we would need some base cases, then the hypothesis that the condition holds for all n up to, say, k - 1, and then we'd need to show it for k, that is, cb^(k-2) + cb^(k-1) <= cb^k. We can divide through by cb^(k-2) to simplify this to 1 + b <= b^2. Rearranging, we get b^2 - b - 1 >= 0; the roots are (1 ± sqrt(5))/2, and we must discard the negative one since we cannot use a negative number as the base of our exponential. So for b >= (1+sqrt(5))/2, T'(n) may be O(b^n). A similar argument shows that for b <= (1+sqrt(5))/2, T'(n) may be Omega(b^n). Thus, for b = (1+sqrt(5))/2 only, T'(n) may be Theta(b^n).
Completing the proof by induction that T(n) = O(b^n) is left as an exercise.
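If it helps intuition, here is a small numerical check (a Python sketch of my own, with an arbitrary choice of T(1) = 1; it is not part of the proof): the ratio T(n)/phi^n, with phi = (1+sqrt(5))/2, levels off at a constant.

phi = (1 + 5 ** 0.5) / 2

# T(n) = T(n-1) + T(n-2) + 1, with T(0) = 0 and the arbitrary base value T(1) = 1.
def t_values(n_max):
    t = [0, 1]
    for n in range(2, n_max + 1):
        t.append(t[n - 1] + t[n - 2] + 1)
    return t

t = t_values(40)
for n in (10, 20, 30, 40):
    print(n, t[n] / phi ** n)   # the ratio settles toward a constant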
T(n) = T([sqrt(n)]) + n
Obviously, T(n) is at least linear, assuming the boundary conditions require T(n) to be nonnegative. We might guess that T(n) is Theta(n) and try to prove it. Base case: let T(0) = a and T(1) = b. Then T(2) = b + 2 and T(4) = b + 6. In both cases, a choice of c >= 1.5 will work to make T(n) < c*n. Suppose that whatever our fixed value of c is works for all n up to and including k. We must show that T([sqrt(k+1)]) + (k+1) <= c*(k+1). We know that T([sqrt(k+1)]) <= c*sqrt(k+1) from the induction hypothesis. So T([sqrt(k+1)]) + (k+1) <= c*sqrt(k+1) + (k+1), and c*sqrt(k+1) + (k+1) <= c*(k+1) can be rewritten as c*x + x^2 <= c*x^2 (with x = sqrt(k+1)); dividing through by x (OK since k > 1) we get c + x <= c*x, and solving this for c we get c >= x/(x-1) = sqrt(k+1)/(sqrt(k+1)-1). This eventually approaches 1, so for large enough n, any constant c > 1 will work.
Making this proof totally rigorous by fixing the following points is left as an exercise:
making sure enough base cases are proven so that all assumptions hold
distinguishing the cases where (a) k + 1 is a perfect square (hence [sqrt(k+1)] = sqrt(k+1)) and (b) k + 1 is not a perfect square (hence sqrt(k+1) - 1 < [sqrt(k+1)] < sqrt(k+1)).
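As a quick empirical check (a Python sketch of my own; the base values T(0) = T(1) = 1 are arbitrary assumptions), the ratio T(n)/n drifts toward 1:

import math
from functools import lru_cache

# T(n) = T(floor(sqrt(n))) + n, with arbitrary base values T(0) = T(1) = 1.
@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return T(math.isqrt(n)) + n

for n in (10, 100, 10**4, 10**8):
    print(n, T(n) / n)   # approaches 1, consistent with Theta(n)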
T(n) = 2T(n/2) + (n/(log n)) + n
This T(n) > 2T(n/2) + n, which we recognize as the recurrence for the running time of Mergesort, which by the Master theorem is Theta(n log n), so we know our complexity is no less than that.
Indeed, by the master theorem: T(n) = 2T(n/2) + (n/(log n)) + n = 2T(n/2) + n(1 + 1/(log n)), so
a = 2
b = 2
f(n) = n(1 + 1/(log n)) is Theta(n): for n > 2 it is always between n and 2n
f(n) = Theta(n) = Theta(n^(log_2 2) * log^0 n)
So we're in case 2 of the Master Theorem, and the asymptotic bound is the same as for Mergesort, Theta(n log n).
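Here too a small numerical check of my own (the base value T(2) = 1 is an arbitrary assumption), evaluating the recurrence at powers of two:

import math

# T(n) = 2*T(n/2) + n/log2(n) + n, evaluated at powers of two.
def T(n):
    if n <= 2:
        return 1
    return 2 * T(n // 2) + n / math.log2(n) + n

for k in (10, 15, 20, 24):
    n = 2 ** k
    print(n, T(n) / (n * math.log2(n)))   # stays bounded, consistent with Theta(n log n)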
Given F(n) = θ(n)
H(n) = O(n)
G(n) = Ω(n)
then what will be the order of F(n) + [G(n) * H(n)]?
There isn't enough information to say anything about the function P(n) = G(n)*H(n). All we know is that G grows at least linearly; it could be growing quadratically, cubically, even exponentially. Likewise, we only know that H grows at most linearly; it could only be growing logarithmically, or be constant, or even be decreasing. As a result, P(n) itself could be decreasing or increasing without bound, which means the sum F(n) + P(n) could also be decreasing or increasing without bound.
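For a concrete illustration (my own examples, not from the question): H(n) = 1/2^n and G(n) = n satisfy the given bounds and give P(n) = n/2^n, which tends to 0, while H(n) = 1/n and G(n) = 2^n also satisfy them and give P(n) = 2^n/n, which grows without bound.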
Suppose, though, that we could assume that H(n) = Ω(1) (i.e., it is bounded below by a positive constant). Now we can say the following about P(n):
P(n) = H(n) * G(n)
>= c1 * G(n)            (since H(n) = Ω(1))
= Ω(G(n)) = Ω(n)
P(n) = H(n) * G(n)
<= c2 * n * G(n)        (since H(n) = O(n))
= O(n * G(n))
Thus F(n) + P(n) = Ω(n) and F(n) + P(n) = O(n*G(n)), but nothing more can be said; both bounds are as tight as we can make them without more information about H or G.
I want to prove the following statement
2^(⌊lg n⌋+⌈lg n⌉)∕n ∈ Θ(n)
I know that to prove it, we have to find constants c1 > 0, c2 > 0, and n0 > 0 such that
c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.
In other words, we have to prove both f(n) <= c2*g(n) and f(n) >= c1*g(n).
The problem is how to prove the lower bound for f(n) = 2^(⌊lg n⌋+⌈lg n⌉)/n.
Thank you
You can start by expanding the exponential. It is equal to n1*n2/n, where n1 = 2^⌊lg n⌋ and n2 = 2^⌈lg n⌉, so n1 <= n <= n2, 2*n1 > n, and 2*n > n2. The rest should be easy.
Here's a derivation for the upper bound:
2^(⌊lg n⌋+⌈lg n⌉)/n
<= 2^(2⌊lg n⌋+1)/n       (since ⌈lg n⌉ <= ⌊lg n⌋ + 1)
<= 2^(2 lg n + 1)/n
= 2^(2 lg n) 2^(1) / n
= 2 n^2 / n
= 2 n
= O(n)
So we know your function can be bounded above by 2*n. Now we do the lower bound:
2^(⌊lg n⌋+⌈lg n⌉)/n
>= 2^(2⌈lg n⌉ - 1) / n    (since ⌊lg n⌋ >= ⌈lg n⌉ - 1)
>= 2^(2 lg n - 1)/n
= 2^(2 lg n) 2^(-1) / n
= 1/2 n^2 / n
= 1/2 n
= Ω(n)
We now know that your function can be bounded below by n/2.
Checked on gnuplot; these answers look good and tight. This is a purely algebraic solution using the definition of the floor() and ceiling() functions.
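For what it's worth, here is a small Python sketch of my own checking the two bounds n/2 <= 2^(⌊lg n⌋+⌈lg n⌉)/n <= 2n numerically:

import math

# Verify n/2 <= 2^(floor(lg n) + ceil(lg n)) / n <= 2*n over a range of n.
for n in range(1, 10000):
    lg = math.log2(n)
    f = 2 ** (math.floor(lg) + math.ceil(lg)) / n
    assert n / 2 <= f <= 2 * n, n
print("bounds hold for 1 <= n < 10000")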