From the definition of Ω notation, this would imply that 2^n >= c * 2^(n + k). Taking the lg of both sides and simplifying, I see that n >= lg(c) + (n + k). If I pick c = 1, n0 = 1, and k to be some negative constant, then I can see this is true. I am wondering whether this is a correct analysis, and whether the claim becomes false if I pick a positive k. Thanks for your help.
The definition of Ω requires that there exist a constant c > 0 such that 2^n ≥ c · 2^(n+k) for all sufficiently large n.
Clearly c = 2^(-k) (or any smaller positive value) satisfies this condition, so 2^n = Ω(2^(n+k)) for any constant k.
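As a quick numeric sanity check of that witness (not part of the proof; the sampled values of k and n below are arbitrary):

    # With c = 2**(-k) we get c * 2**(n + k) = 2**n, so 2**n >= c * 2**(n + k)
    # holds for every n, i.e. 2**n = Omega(2**(n + k)) for any constant k.
    for k in (-3, -1, 0, 1, 5):
        c = 2.0 ** (-k)
        assert all(2.0 ** n >= c * 2.0 ** (n + k) for n in range(1, 60))
    print("the witness c = 2**(-k) works for each sampled k")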
Prove or disprove the following claims:
Does there exist a function f(n) such that f(n-k) is not Θ(f(n)), where k >= 1 is a positive constant?
Is there any function for which this claim is true?
I thought about f(n) = n!, but I'm not sure that's the correct answer.
Moreover, if f(n) = n! is correct, how can this claim be proved?
Does there exist a function f(n) such that (f(n))^2 = O(f(n)) and f(n) = Ω(log(log(n)))?
I think there is no function that makes this claim true.
If this is correct, how could it be proved?
Correct for f(n) = n!. It suffices to show that for any fixed k >= 1, (n - k)! is not Ω(n!): for any constant c > 0, we have c * n! > (n - k)! for all n large enough, since the ratio n! / (n - k)! = n * (n - 1) * ... * (n - k + 1) grows without bound.
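A throwaway numeric check of that last step (the constants k = 2 and c = 0.01 below are arbitrary choices):

    from math import factorial

    k, c = 2, 0.01                                  # arbitrary constant k >= 1 and small c > 0
    for n in range(3, 40):
        if c * factorial(n) > factorial(n - k):     # c * n! overtakes (n - k)! ...
            print("c * n! > (n - k)! from n =", n)  # ... and stays above from here on
            break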
There is no f(n) such that both f(n)^2 = O(f(n)) and f(n) = Omega(log log n). The latter implies that for some constant c > 0 and all n large enough, f(n) > c log log n, and in particular f(n) > 1 for all n large enough. If we now assume that f(n)^2 = O(f(n)), then there exists a constant r > 0 so that for all n large enough, f(n)^2 < r * f(n), and hence f(n) < r. But this implies that log log n < (r / c) for all n large enough, which is false for all n > e^(e^(r / c)) (where e is the base of the log).
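To make the conflict between the two conditions concrete, here is a throwaway illustration using f(n) = log log n itself (natural logs; the value r = 5 is an arbitrary would-be constant):

    import math

    f = lambda n: math.log(math.log(n))    # f(n) = log log n (natural logs), defined for n > e
    r = 5.0                                 # a would-be constant for the f(n)^2 <= r * f(n) bound
    n = math.e ** (math.e ** (r + 1))       # here log log n = r + 1 > r
    print(f(n) ** 2 <= r * f(n))            # False: f(n)^2 = O(f(n)) would force f(n) <= r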
Here are the steps that are used to prove that f(x) = 4x^2 - 5x + 3 is O(x^2):
|f(x)| = |4x^2 - 5x + 3|
<= |4x^2| + |-5x| + |3|
<= 4x^2 + 5x + 3, for all x > 0
<= 4x^2 + 5x^2 + 3x^2, for all x > 1
<= 12x^2, for all x > 1
Hence we conclude that f(x) is O(x^2)
I referred to this, but it does not help.
Can someone explain the above proof step by step?
Why is the absolute value of f(x) taken?
Why and how was each term replaced by an x^2 term?
Preparations
We start by loosely stating the definition of a function or algorithm f being in O(g(n)):
If a function f is in O(g(n)), then c · g(n) is an upper
bound on f(n), for some non-negative constant c such that f(n) ≤ c · g(n)
holds, for sufficiently large n (i.e., n ≥ n0 for some constant
n0).
Hence, to show that f ∈ O(g(n)), we need to find a set of (non-negative) constants (c, n0) that fulfils
f(n) ≤ c · g(n), for all n ≥ n0, (+)
We note, however, that this set is not unique; the problem of finding the constants (c, n0) such that (+) holds is degenerate. In fact, if any such pair of constants exists, there will exist an infinite amount of different such pairs.
Analysis
Following common convention, we'll analyse your example using the variable name n rather than x.
f(n) = 4n^2 - 5n + 3 (++)
Now, for your example, we may assume, without loss of generality (since we're studying asymptotic complexity: function/algorithm behavior for "large" n), that n > n0 where n0 > 0. This corresponds to the step in your question where absolute values of x are taken. Given this assumption, the following holds:
f(n) = 4n^2 - 5n + 3 < 4n^2 + 3, for all n > n0
Now let, again without loss of generality, n0 equal 2 (we could choose any value, but let's choose 2 here). For n0 = 2, naturally n^2 > 3 holds for n > n0, which means the following holds:
f(n) = 4n^2 - 5n + 3 < 4n^2 + 3 < 4n^2 + n^2, for all n > n0 = 2
f(n) < 5n^2, for all n > n0 = 2
Now choose c = 5 and let g(n) = n^2:
f(n) < c · g(n), for all n > n0,
with c = 5, n0 = 2, g(n) = n^2
Hence, from (+), we've shown that f as defined in (++) is in O(g(n)) = O(n^2).
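As a quick sanity check of the constants found above (c = 5, n0 = 2, g(n) = n^2; the sampled range below is arbitrary):

    f = lambda n: 4 * n**2 - 5 * n + 3
    g = lambda n: n**2
    c, n0 = 5, 2
    assert all(f(n) < c * g(n) for n in range(n0 + 1, 10000))
    print("f(n) < 5 * g(n) for every sampled n > 2")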
A TA told me today that this is true, but I was unable to verify it by googling. This says that functions like log(n)^2, log(n)^3, ..., log(n)^m are all O(n).
Is this true?
Claim
The function f(n) = log(n)^m, for any natural number m > 2 (m ∈ ℕ+) is in
O(n).
I.e. there exists a set of positive constants c and n0 such that
the following holds:
log(n)^m < c · n, for all n > n0, { m > 2 | m ∈ ℕ+ } (+)
Proof
Assume that (+) does not hold, and denote this assumption as (*).
I.e., given (*), there exists no set of positive constants c and n0 such that (+) holds for any value of m > 2. Under this assumption, the following holds: for all positive constants c and n0, there exists an n > n0 such that (thanks #Oriol):
log(n)^m ≥ c · n, { m > 2 | m ∈ ℕ+ } (++)
Now, if (++) holds, then the inequality in (++) will also hold after applying any monotonically increasing function to both sides of the inequality. One such function is, conveniently, the log function itself.
Hence, under the assumption that (++) holds, for all positive constants c and n0 there exists an n > n0 such that the following holds:
log(log(n)^m) ≥ log(c · n), { m > 2 | m ∈ ℕ+ }
m · log(log(n)) ≥ log(c · n), { m > 2 | m ∈ ℕ+ } (+++)
However, (+++) leads to a contradiction: since log(n) dominates log(log(n)) with respect to growth, for any given value of m > 2 we can always find constants c and n0 such that m · log(log(n)) < log(c · n) for all n > n0, so (+++) (and hence (++)) is violated.
Hence, assumption (*) is proven false by contradiction, and hence, (+) holds.
=> for f(n) = log(n)^m, for any finite integer m > 2, it holds that f ∈ O(n).
Yes. If the function is f(n), then m is a parameter and f does not depend on it. In fact, we have a different function f_m for each m.
f_m(n) = log(n)^m
Then it's easy. Given m ∈ ℕ, use L'Hôpital's rule repeatedly:
limsup_{n→∞} f_m(n) / n = limsup_{n→∞} log(n)^m / n = limsup_{n→∞} m * log(n)^(m-1) / n
= limsup_{n→∞} m * (m-1) * log(n)^(m-2) / n = … = limsup_{n→∞} m! / n = 0
Therefore, f_m ∈ O(n).
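A rough numeric look at that limit (illustration only; m and the sample points are arbitrary, and the log base only changes a constant factor):

    import math

    m = 5
    for n in (10**3, 10**6, 10**9, 10**12):
        print(n, math.log(n) ** m / n)    # the ratio log(n)^m / n shrinks toward 0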
Of course, it would be different if we had f(m,n) = log(n)^m. For example, taking m=n,
limsup_{n→∞} f(n,n) / n = limsup_{n→∞} log(n)^n / n = ∞
Then f ∉ O(n).
In many ways it is more intuitive that for any positive integer m we have:
x^m = O(e^x)
This says that exponential growth dominates polynomial growth (which is why exponential time algorithms are bad news in computer programming).
Assuming that this is true, simply take x = log(n) and use the fact that then x tends to infinity if and only if n tends to infinity and that e^x and log(x) are inverses:
log(n)^m = O(e^log(n)) = O(n)
Finally, since for any natural number m, the root function n => n^(1/m) is increasing, we can rewrite the result as
log(n) = O(n^(1/m))
This way of writing it says that log(n) grows slower than any root (square, cube, etc.) of n, which more obviously corresponds to e^n growing faster than any power of n.
On Edit: the above showed that log(n)^m = O(n) follows from the more familiar x^m = O(e^x). To convert it to a more self-contained proof, we can show the latter somewhat directly.
Start with the Taylor series for e^x:
e^x = 1 + x/1! + x^2/2! + x^3/3! + ... + x^n/n! + ...
This is known to converge for all real numbers x. If a positive integer m is given, let K = (m+1)!. Then, if x > K we have 1/x < 1/(m+1)!, hence
x^m = x^(m+1)/x < x^(m+1)/(m+1)! < e^x
which implies x^m = O(e^x). (The last inequality in the above is true since all terms in the expansion for e^x are strictly positive if x>0 and x^(m+1)/(m+1)! is just one of those terms.)
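A small numeric check of that chain of inequalities, with an arbitrary m and a few sample x just above K = (m+1)!:

    import math

    m = 3
    K = math.factorial(m + 1)                        # K = (m + 1)! = 24
    for x in (K + 1, 2 * K, 10 * K):
        lhs = x ** m
        mid = x ** (m + 1) / math.factorial(m + 1)   # the single Taylor term x^(m+1)/(m+1)!
        print(lhs < mid < math.exp(x))               # True: x^m < x^(m+1)/(m+1)! < e^x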
The great people at MyCodeSchool.com have this introductory video on YouTube, covering the basics of Big-O, Theta, and Omega notation.
The following definition of Big-O notation is provided:
O(g(n)) := { f(n) : f(n) ≤ c · g(n) }, for all n ≥ n0
My casual understanding of the equation is as follows:
Given a function f(), which takes as its input n, there exists another function g() whose output is always greater than or equal to the output of f(), given two conditions:
g() is multiplied by some constant c
n is greater than some lower bound n0
Is my understanding correct?
Furthermore, the following specific example was provided to illustrate Big-O:
Given:
f(n) = 5n^2 + 2n + 1
Because all of the following are true:
5n^2 > 2n^2, for all n
5n^2 > 1n^2, for all n
It follows that:
c = 5 + 2 + 1 = 8
Therefore, the video concludes, f(n) ≤ 8n^2 for all n ≥ 1, and g(n) = 8n^2
I think maybe the video concluded that n0 must be 1, because 1 is the only positive root of the equality 8n^2 = 5n^2 + 2n + 1. (Negative one-third is also a root, but n is limited to whole numbers. So, no dice there.)
Is this the standard way of computing n0 for Big-O notation?
Take the largest powered factor in your polynomial
Multiply it by the sum of the coefficients in your time function
Set their product equal to your time function
Solve for zero
Reject all roots that are not in the set of whole numbers
Any help would be greatly appreciated. Thanks in advance.
Your understanding is mostly correct, but from your wording ("I think maybe the video concluded that n0 must be 1") I have to point out that it is also valid to take n0 to be 2, or 3, etc. In fact, any number greater than 1 will satisfy the required condition; there are actually infinitely many choices for the pair (c, n0)!
The important point to note is that the values of the constants c and n0 do not really matter; all we care about is the existence of a pair of constants (c, n0).
The Basics
Big-O notation describes the asymptotic behavior of a given function f; essentially, it describes an upper bound on f when its input value is sufficiently large.
Formally, we say that f is big-O of another function g, i.e. f(x) = O(g(x)), if there exists a positive constant c and a constant n0 such that the following inequality holds:
f(n) ≤ c g(n), for all n ≥ n0
Note that the inequality captures the idea of an upper bound: f is upper-bounded by a positive multiple of g. Moreover, the "for all" condition ensures that the upper bound only needs to hold when the input n is sufficiently large (i.e. at least n0).
How to Pick (c, n0)?
In order to prove f(x) = O(g(x)) for given functions f, g, all we need is to pick any pair of (c, n0) such that the inequality holds, and then we are done!
There is no standard way of finding (c, n0), just use whatever mathematical tools you find helpful. For example, you may fix n0, and then find c by using Calculus to compute the maximum value of f(x) / g(x) in the interval [n0, +∞).
In your case, it appears that you are trying to prove that a polynomial of degree d is big-O of x^d; the proof of the following lemma gives a way to pick (c, n0):
Lemma
If f is a polynomial of degree d, then f(x) = O(x^d).
Proof: We have f(x) = a_d x^d + a_(d-1) x^(d-1) + ... + a_1 x + a_0; for each coefficient a_i, we have a_i ≤ |a_i| (the absolute value of a_i).
Take c = (|a_d| + |a_(d-1)| + ... + |a_1| + |a_0|) and n0 = 1, then we have:
f(x) = a_d x^d + a_(d-1) x^(d-1) + ... + a_1 x + a_0
≤ |a_d| x^d + |a_(d-1)| x^(d-1) + ... + |a_1| x + |a_0|
≤ (|a_d| + |a_(d-1)| + ... + |a_1| + |a_0|) x^d
= c x^d, for all x ≥ 1
Therefore we have f(x) = O(x^d)
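A throwaway sketch of the lemma's recipe in code (the example polynomial and the sampled range are arbitrary): take c as the sum of the absolute values of the coefficients and n0 = 1.

    coeffs = [5, -2, 0, 7]                           # example: f(x) = 5x^3 - 2x^2 + 7
    d = len(coeffs) - 1                              # degree d
    c = sum(abs(a) for a in coeffs)                  # c = |a_d| + ... + |a_0|

    def f(x):
        return sum(a * x ** (d - i) for i, a in enumerate(coeffs))

    assert all(f(x) <= c * x ** d for x in range(1, 1000))   # f(x) <= c * x^d for all x >= n0 = 1
    print(f"f(x) <= {c} * x^{d} for every sampled x >= 1")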
Consider the following recurrence
T(n) = 3T(n/5) + lg n * lg n
What is the value of T(n)?
(A) Theta(n ^ log_5{3})
(B) Theta(n ^ log_3{5})
(C) Theta(n log n)
(D) Theta(log n)
Answer is (A)
My Approach :
lg n * lg n = Θ(n), since c2 * lg n < 2 * lg lg n < c1 * lg n for some n > n0
The above inequality is shown in this picture for c2 = 0.1 and c1 = 1.
log_5{3} < 1,
Hence, by the Master Theorem, the answer has to be Θ(n), and none of the answers match. How do I solve this problem?
Your claim that lg n * lg n = Θ(n) is false. Notice that the limit of (lg n)^2 / n tends toward 0 as n goes to infinity. You can see this using l'Hôpital's rule (dropping the constant factors that each differentiation of lg n introduces):
lim_{n → ∞} (lg n)^2 / n
= lim_{n → ∞} 2 lg n / n
= lim_{n → ∞} 2 / n
= 0
More generally, using similar reasoning, you can prove that lg n = o(n^ε) for any ε > 0.
Let's try to solve this recurrence using the master theorem. We see that there are three subproblems of size n / 5 each, so we should look at the value of log_5 3. Since (lg n)^2 = o(n^(log_5 3)), we see that the recursion is bottom-heavy (the leaves dominate the work) and can conclude that the recurrence solves to Θ(n^(log_5 3)), which is answer (A) in your list up above.
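A quick numeric illustration of that claim (the sample points are arbitrary; lg is log base 2 here):

    import math

    alpha = math.log(3, 5)                           # log_5(3) ≈ 0.683
    for n in (10**3, 10**6, 10**9, 10**12):
        print(n, math.log2(n) ** 2 / n ** alpha)     # (lg n)^2 / n^(log_5 3) shrinks toward 0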
Hope this helps!
To apply the Master Theorem we should check the relation between
n^(log_5 3) ≈ n^0.682 and (lg(n))^2
Unfortunately (lg(n))^2 != 2*lg(n): it is lg(n^2) that is equal to 2*lg(n).
Also, there is a big difference, in the Master Theorem, between f(n) being O(n^(log_b(a) - ε)) and being Θ(n^(log_b a)): if the former holds we can apply case 1, if the latter holds, case 2 of the theorem.
At a glance, it looks highly unlikely that (lg(n))^2 = Ω(n^0.682), so let's try to prove that (lg(n))^2 = O(n^0.682), i.e.:
∃ n0, c ∈ ℕ+, such that for n > n0, (lg(n))^2 < c * n^0.682
Let's take the square root of both sides (for n > 1 both sides are positive, so the direction of the inequality is preserved):
lg(n) < c1 * n^0.341, (where c1 = sqrt(c))
Now we can assume that lg(n) = log_2(n) (otherwise the multiplicative factor could be absorbed by our constant; as you know, constant factors don't matter in asymptotic analysis) and exponentiate both sides:
2^lg(n) < 2^(c1 * n^0.341) <=> n < 2^(c1 * n^0.341) <=> n < (2^(n^0.341))^c1
which holds with c1 = 1 for all sufficiently large n (n > n0 for some n0), since 2^(n^0.341) grows super-polynomially in n and therefore eventually exceeds n; the quick scan below suggests where the bound starts holding.
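A quick scan (illustration only, sampling powers of two) of where the bound lg(n) < n^0.341 with c1 = 1 kicks in:

    import math

    last_fail = None
    for p in range(1, 60):
        n = 2 ** p
        if math.log2(n) >= n ** 0.341:               # the bound fails for some moderate n ...
            last_fail = n
    print("last sampled failure at n =", last_fail)  # ... but holds at every sampled n beyond this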
Therefore, it does hold true that f(n) = O(n^(log_b(a) - ε)), so we can apply case 1 of the Master Theorem and conclude that:
T(n) = Θ(n^(log_5 3))
Same result, a bit more formally.
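Finally, a hedged numeric check of the recurrence itself (the base case T(n) = 1 for n < 5 is an arbitrary assumption, and n is restricted to powers of 5): the ratio T(n) / n^(log_5 3) levels off at a constant, consistent with Θ(n^(log_5 3)).

    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        if n < 5:
            return 1.0                               # arbitrary base case
        return 3 * T(n // 5) + math.log2(n) ** 2     # T(n) = 3T(n/5) + (lg n)^2

    for p in (5, 10, 15, 20, 25):
        n = 5 ** p
        print(n, T(n) / n ** math.log(3, 5))         # the ratio settles near a constant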