I want to prove the following statement:
2^(⌊lg n⌋+⌈lg n⌉)∕n ∈ Θ(n)
I know that to prove it, we have to find the constants c1>0, c2>0, and n0>0 such that
c1*g(n) <= f(n) <= c2*g(n) for all n >= n0
In other words, we have to prove f(n) <= c2*g(n) and f(n) >= c1*g(n).
The problem is how to prove the lower bound, i.e., that c1*n <= 2^(⌊lg n⌋+⌈lg n⌉)/n.
Thank you
You can start by expanding the exponential. It is equal to n1*n2/n, where n1 = 2^⌊lg n⌋ and n2 = 2^⌈lg n⌉, so that n1 <= n <= n2, 2*n1 > n, and 2*n > n2. The rest should be easy.
Here's a derivation for the upper bound:
2^(⌊lg n⌋+⌈lg n⌉)/n
<= 2^(2⌊lg n⌋+1)/n        (since ⌈lg n⌉ <= ⌊lg n⌋ + 1)
<= 2^(2 lg n + 1)/n
= 2^(2 lg n) 2^(1) / n
= 2 n^2 / n
= 2 n
= O(n)
So we know your function can be bounded above by 2*n. Now we do the lower bound:
2^(⌊lg n⌋+⌈lg n⌉)/n
>= 2^(2⌈lg n⌉ - 1) / n    (since ⌊lg n⌋ >= ⌈lg n⌉ - 1)
>= 2^(2 lg n - 1)/n
= 2^(2 lg n) 2^(-1) / n
= 1/2 n^2 / n
= 1/2 n
= Ω(n)
We now know that your function can be bounded below by n/2.
Checked on gnuplot; these bounds look good and tight. This is a purely algebraic solution using the definitions of the floor() and ceiling() functions.
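If you want a quick numerical sanity check of the two bounds (an illustration, not part of the proof), a short Python sketch can verify n/2 <= 2^(⌊lg n⌋+⌈lg n⌉)/n <= 2n directly:

def f(n):
    # floor(lg n) and ceil(lg n), computed with exact integer arithmetic
    floor_lg = n.bit_length() - 1
    ceil_lg = (n - 1).bit_length() if n > 1 else 0
    return 2 ** (floor_lg + ceil_lg) / n

# verify n/2 <= f(n) <= 2n over a sample range
for n in range(1, 10000):
    assert n / 2 <= f(n) <= 2 * n, n
print("n/2 <= f(n) <= 2n holds for 1 <= n < 10000")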
Let's say the question that I wish to answer is:
Q. Prove that O(n) + O(1) = O(n)
And I answered the question like so...
First of all, we can see that f(n) <= f(n) + 1 <= 2f(n) whenever f(n) >= 1, thus we can state that O(n) + O(1) = 2O(n), and then by applying the coefficient rule we have O(n) + O(1) = O(n).
I am trying to understand how to correctly tackle these kinds of questions and was wondering whether my reasoning is correct. Thanks!
I would suggest that you elaborate by proving:
O(n) <= O(n) + O(1)
O(n) + O(1) <= O(n)
and then concluding that both sets are the same (you can think of these objects as sets of functions and inequalities between them as inclusions).
Inequality 1
Take f(n) in O(n). We have to prove that f(n) can be expressed as a function in O(n) plus a function in O(1). This is immediate: write f(n) = f(n) + 0, where the zero function is in O(1) (your inequality f(n) <= f(n) + 1 makes essentially the same point).
Inequality 2
Take two functions f(n) in O(n) and g(n) in O(1). We have to show that f(n) + g(n) is in O(n). In other words, we have to show that f(n) + g(n) <= C*n for some constant C and n large enough.
Now, we can use the fact that there exist a constant A and an integer N such that f(n) <= A*n for n > N. This is because f(n) is in O(n). Similarly, there exist B and M such that g(n) <= B for n > M (because g(n) is in O(1)). Therefore, for n > max(N,M) we get
f(n) + g(n) <= A*n + B
<= max(A,B)*n + max(A,B)
<= max(A,B)*n + max(A,B)*n    (for n >= 1)
= 2*max(A,B)*n
which shows that there is a constant C, namely 2*max(A,B), and an integer R, namely max(N,M), such that f(n) + g(n) <= C*n for all n > R, which means by definition that f(n) + g(n) is in O(n), as we wanted to see.
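To see the constant-chasing in action, here is a small Python check with hypothetical witnesses of my own choosing (f(n) = 3n + 5, which is O(n) with A = 4 and N = 5, and g(n) = 7, which is O(1) with B = 7 and M = 0); the script just confirms the bound f(n) + g(n) <= 2*max(A,B)*n derived above:

# hypothetical witnesses, chosen for illustration only
A, N = 4, 5                      # f(n) = 3n + 5 <= A*n for n > N
B, M = 7, 0                      # g(n) = 7 <= B for n > M
C, R = 2 * max(A, B), max(N, M)  # constants from the proof above

for n in range(R + 1, 100_000):
    f, g = 3 * n + 5, 7
    assert f + g <= C * n
print(f"f(n) + g(n) <= {C}*n for all n > {R} (checked up to 10^5)")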
How to get big-O for this?
T(N) = 2T(N − 1) + N, T(1) = 2
I got two candidate answers, O(2^N) and O(N^2), but I am not sure how to solve it correctly.
Divide T(N) by 2^N and name the result:
S(N) = T(N)/2^N
From the definition of T(N) we get
S(N) = S(N-1) + N/2^N (eq.1)
meaning that S(N) is increasing, and it converges to a constant because the increments N/2^N shrink fast enough for their sum to converge. So,
T(N)/2^N -> constant
or
T(N) = O(2^N)
Detailed proof
In the comment below, Paul Hankin suggests how to complete the proof. Take eq.1 and sum over N from 2 to M:
sum_{N=2}^M S(N) = sum_{N=2}^M S(N-1) + sum_{N=2}^M N/2^N
                 = sum_{N=1}^{M-1} S(N) + sum_{N=2}^M N/2^N
Thus, after canceling the terms S(2), S(3), ..., S(M-1) on both sides, we get
S(M) = S(1) + sum_{N=2}^M N/2^N
and since the series on the right converges (its terms are eventually bounded by 1/N^2, and the sum of 1/N^2 is known to converge), S(M) converges to a finite constant.
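A quick numerical illustration (not part of the proof): computing T(N) from the recurrence and printing S(N) = T(N)/2^N shows the ratio leveling off. Here the limit comes out as 2.5, consistent with the closed form T(N) = (5/2)*2^N - N - 2, which satisfies both the recurrence and T(1) = 2:

T = 2                           # T(1) = 2
for N in range(2, 51):
    T = 2 * T + N               # T(N) = 2T(N-1) + N
    if N % 10 == 0:
        print(N, T / 2 ** N)    # S(N) = T(N)/2^N, approaches 2.5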
It's a math problem and Leandro Caniglia is right.
let b(n) = T(n) / 2^n
thus b(n) = b(n-1) + n / 2^n = b(n-2) + n / 2^n + (n-1) / 2^(n-1) = ...
The series sum of i / 2^i converges (in fact to 2),
so the partial sums are bounded above by some constant.
thus b(n) < C.
thus T(n) < 2^n * C.
It is obvious that T(n) >= 2^n (by induction: T(1) = 2 and T(n) >= 2T(n-1)).
So T(n) is Θ(2^n), and in particular O(2^n).
Check by plugging the candidate answers into the equation.
2^N = 2*2^(N-1) + N = 2^N + N
or
N^2 = 2 (N-1)^2 + N
Keeping only the dominant terms, you have
2^N ~ 2^N
or
N^2 ~ 2 N^2.
Conclude.
Hi, I am having a tough time showing the running time of these three recurrences for T(n). Assumptions include T(0) = 0.
1) This one I know is close to Fibonacci, so I think it's close to O(n) time, but I'm having trouble showing that:
T(n) = T(n-1) + T(n-2) +1
2) This one I am stumped on, but I think it's roughly O(log log n):
T(n) = T([sqrt(n)]) + n, for n >= 1, where [sqrt(n)] denotes the floor of sqrt(n).
3) I believe this one is roughly O(n log log n):
T(n) = 2T(n/2) + (n/(log n)) + n.
Thanks for the help in advance.
T(n) = T(n-1) + T(n-2) + 1
Assuming T(0) = 0 and T(1) = a, for some constant a, we notice that T(n) - T(n-1) = T(n-2) + 1. That is, the growth rate of the function is given by the function itself, which suggests this function has exponential growth.
Let T'(n) = T(n) + 1. Then T'(n) = T'(n-1) + T'(n-2), by the above recurrence relation, and we have eliminated the troublesome constant term. T(n) and T'(n) differ by the additive constant 1, so assuming they are both non-decreasing (they are), they will have the same asymptotic complexity, albeit with different constants n0.
To show T'(n) has asymptotic growth of O(b^n), we would need some base cases, then the hypothesis that the condition holds for all n up to, say, k - 1, and then we'd need to show it for k, that is, c*b^(k-2) + c*b^(k-1) <= c*b^k. We can divide through by c*b^(k-2) to simplify this to 1 + b <= b^2. Rearranging, we get b^2 - b - 1 >= 0; the roots are (1 ± sqrt(5))/2, and we must discard the negative one since we cannot use a negative number as the base for our exponent. So for b >= (1+sqrt(5))/2, T'(n) may be O(b^n). A similar thought experiment will show that for b <= (1+sqrt(5))/2, T'(n) may be Omega(b^n). Thus, for b = (1+sqrt(5))/2 only, T'(n) may be Theta(b^n).
Completing the proof by induction that T(n) = O(b^n) is left as an exercise.
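As a quick numerical sanity check (separate from the induction), you can watch T(n)/φ^n settle toward a constant; here I assume the illustrative base cases T(0) = 0 and T(1) = 1, since the question fixes only T(0):

phi = (1 + 5 ** 0.5) / 2        # the positive root (1+sqrt(5))/2 found above

a, b = 0, 1                     # assumed base cases T(0) = 0, T(1) = 1
for n in range(2, 41):
    a, b = b, b + a + 1         # T(n) = T(n-1) + T(n-2) + 1
    if n % 10 == 0:
        print(n, b / phi ** n)  # the ratio flattens out: Theta(phi^n)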
T(n) = T([sqrt(n)]) + n
Obviously, T(n) is at least linear, assuming the boundary conditions require T(n) to be nonnegative. We might guess that T(n) is Theta(n) and try to prove it.

Base case: let T(0) = a and T(1) = b. Then T(2) = b + 2 and T(4) = b + 6. In both cases, a choice of c >= 1.5 will work to make T(n) <= c*n (for small enough b).

Inductive step: suppose that whatever our fixed value of c is works for all n up to and including k. We must show that T([sqrt(k+1)]) + (k+1) <= c*(k+1). We know that T([sqrt(k+1)]) <= c*sqrt(k+1) from the induction hypothesis. So T([sqrt(k+1)]) + (k+1) <= c*sqrt(k+1) + (k+1), and c*sqrt(k+1) + (k+1) <= c*(k+1) can be rewritten as c*x + x^2 <= c*x^2 (with x = sqrt(k+1)); dividing through by x (OK since k > 1) we get c + x <= c*x, and solving this for c we get c >= x/(x-1) = sqrt(k+1)/(sqrt(k+1)-1). This approaches 1 as k grows, so for large enough n, any constant c > 1 will work.
Making this proof totally rigorous by fixing the following points is left as an exercise:
making sure enough base cases are proven so that all assumptions hold
distinguishing the cases where (a) k + 1 is a perfect square (hence [sqrt(k+1)] = sqrt(k+1)) and (b) k + 1 is not a perfect square (hence sqrt(k+1) - 1 < [sqrt(k+1)] < sqrt(k+1)).
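A numerical illustration (assuming T(1) = 1 as a base case, since the question fixes only T(0) = 0) that T(n)/n does approach 1:

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(floor(sqrt(n))) + n, with T(1) = 1 assumed
    if n <= 1:
        return 1
    return T(math.isqrt(n)) + n

for n in (10, 100, 10_000, 10 ** 8):
    print(n, T(n) / n)          # ratios tend to 1, consistent with Theta(n)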
T(n) = 2T(n/2) + (n/(log n)) + n
Here T(n) > 2T(n/2) + n, which we know is the recurrence relation for the runtime of Mergesort, which by the Master theorem is Theta(n log n), so we know our complexity is no less than that.
Indeed, by the Master theorem: T(n) = 2T(n/2) + (n/(log n)) + n = 2T(n/2) + n(1 + 1/(log n)), so
a = 2
b = 2
f(n) = n(1 + 1/(log n)) is O(n) (for n>2, it's always less than 2n)
f(n) = O(n) = O(n^log_2(2) * log^0 n)
We're in case 2 of the Master Theorem still, so the asymptotic bound is the same as for Mergesort, Theta(n log n).
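A quick numerical spot check (with an assumed base case T(2) = 1, and n restricted to powers of two) that T(n)/(n log n) levels off:

import math

def T(n):
    # T(n) = 2T(n/2) + n/log2(n) + n, for n a power of two; T(2) = 1 assumed
    if n <= 2:
        return 1
    return 2 * T(n // 2) + n / math.log2(n) + n

for k in (10, 15, 20, 25):
    n = 2 ** k
    # slowly approaches a constant (the n/(log n) term contributes a
    # lower-order n*log log n amount in total)
    print(n, T(n) / (n * math.log2(n)))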
I am doing an introductory course on algorithms. I've come across this problem which I'm unsure about.
I would like to know which of the two is dominant:
f(n): 100n + log n or g(n): n + (log n)^2
Given the definitions of each of:
Ω, Θ, O
I assumed f(n) is, so f(n) = Ω(g(n)).
My reasoning is that n dominates (log n)^2; is that true?
In this case,
lim_{n→∞} f(n)/g(n) = 100.
If you go over calculus definitions, this means that, for any ε > 0, there exists some m for which
100 (1 - ε) g(n) ≤ f(n) ≤ 100 (1 + ε) g(n)
for any n > m.
From the definition of Θ, you can infer that these two functions are Θ of each other.
In general, if
lim_{n→∞} f(n)/g(n) = c exists, and
0 < c < ∞,
then the two functions have the same order of growth (they are Θ of each other).
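You can watch the ratio converge numerically; a short Python sketch (using base-2 logs, though the base does not affect the limit):

import math

f = lambda n: 100 * n + math.log2(n)
g = lambda n: n + math.log2(n) ** 2

for n in (10, 10 ** 3, 10 ** 6, 10 ** 9):
    print(n, f(n) / g(n))       # tends to 100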
n dominates both log(n) and (log n)^2
A little explanation
f(n) = 100n + log n
Here n dominates log n for large values of n.
So f(n) = Θ(n) .......... [1]
g(n) = n + (log n)^2
Now, (log n)^2 dominates log n.
But n still dominates (log n)^2.
So g(n) = Θ(n) .......... [2]
Now, taking results [1] and [2] into consideration.
f(n) = Θ(g(n)) and g(n) = Θ(f(n))
since they will grow at the same rate for large values of n.
We can say that f(n) = O(g(n)) if there are constants c > 0 and n0 > 0 such that
f(n) <= c*g(n), n > n0
This is the case for both directions:
# c == 100
100n + log n <= 100(n + (log n)^2)
= 100n + 100(log n)^2    (n > 1)
and
# c == 1
n + (log n)^2 <= 100n + log n (n > 1)
Taken together, we've proved that n + (log n)^2 <= 100n + log n <= 100(n + (log n)^2), which proves that f(n) = Θ(g(n)), which is to say that neither dominates the other. Both functions are Θ(n).
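Those two inequalities are easy to spot-check in Python over a range of n (an illustration only; the algebra above is the proof):

import math

for n in range(2, 100_000):
    lg = math.log2(n)
    f = 100 * n + lg
    g = n + lg ** 2
    assert g <= f <= 100 * g, n
print("g(n) <= f(n) <= 100*g(n) for 2 <= n < 100000")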
g(n) dominates f(n); or equivalently, g(n) is Ω(f(n)), and the same holds vice versa.
Considering the definition, you see that you can drop the factor 100 in the definition of f(n) (since you can multiply by any fixed number) and you can drop both logarithmic addends, since they are dominated by the linear term n.
The above follows from the fact that n is Ω(n + log n) and n is Ω(n + (log n)^2).
Hope that helps,
fricke
Suppose
S(n) = Big-Oh(f(n)) & T(n) = Big-Oh(f(n))
that is, both are Big-Oh of the same function f(n).
My question is: why is S(n)/T(n) = Big-Oh(1) incorrect?
Consider S(n) = n^2 and T(n) = n. Then both S and T are O(n^2) but S(n) / T(n) = n which is not O(1).
Here's another example. Consider S(n) = sin(n) and T(n) = cos(n). Then S and T are O(1) but S(n) / T(n) = tan(n) is not O(1). This second example is important because it shows that even if you have a tight bound, the conclusion can still fail.
Why is this happening? Because the obvious "proof" completely fails. The obvious "proof" is the following. There are constants C_S and C_T and N_S and N_T where n >= N_S implies |S(n)| <= C_S * f(n) and n >= N_T implies |T(n)| <= C_T * f(n). Let N = max(N_S, N_T). Then for n >= N we have
|S(n) / T(n)| <= (C_S * f(n)) / (C_T * f(n)) = C_S / C_T.
This is completely and utterly wrong. It is not the case that |T(n)| <= C_T * f(n) implies that 1 / |T(n)| <= 1 / (C_T * f(n)). In fact, what is true is that 1 / |T(n)| >= 1 / (C_T * f(n)). The inequality reverses, and that suggests there is a serious problem with the "theorem." The intuitive idea is that if T is "small" (i.e., bounded) then 1 / T is "big." But we're trying to show that 1 / T is "small" and we just can't do that. As our counterexamples show, the "proof" is fatally flawed.
However, there is a theorem here that is true. Namely, if S(n) is O(f(n)) and T(n) is Ω(f(n)), then S(n) / T(n) is O(1). The above "proof" works for this theorem (thanks are due to Simone for the idea to generalize to this statement).
Here's one counter example:
Let's say f(n) = n^2, S(n) = n^2 and T(n) = n. Now both S and T are in O(f(n)) (you have to remember that O(n^2) is a superset of O(n), so everything that's in O(n) is also in O(n^2)), but U(n) = S(n)/T(n) = n^2/n = n is definitely not in O(1).
As the others explained, S(n)/T(n) is not in general O(1).
Your doubt probably derives from the confusion between O and Θ; in fact, if:
S(n) = Θ(n) and T(n) = Θ(n)
then the following is true:
S(n) / T(n) = Θ(1) and thus S(n) / T(n) = O(1)
You can prove that this is wrong if you plug some actual functions into your formula, e.g.
S(n) = n^2
T(n) = n
f(n) = n^2
S(n) / T(n) = n
n != O(1)
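In Python, the same plug-in counterexample shows the ratio growing without bound:

S = lambda n: n ** 2   # in O(n^2)
T = lambda n: n        # also in O(n^2), since O(n) is a subset of O(n^2)

for n in (10, 1000, 100_000):
    print(n, S(n) / T(n))       # prints n itself: unbounded, so not O(1)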