I have a couple of questions regarding some algebra using big O notation:
if f(n) = O(g(n)),
is log(f(n)) = O(log(g(n)))?
is N^(f(n)) = O(N^(g(n)))? (where N is any real number)
Is log(f(n)) = O(log(g(n)))? No, not necessarily. For example, take f(n) = 2 and g(n) = 1. Then f(n) = O(g(n)), but log(f(n)) = 1 while log(g(n)) = 0, and a positive constant is not O(0). (It does hold when f(n) ≥ 1 and g(n) is eventually at least some constant greater than 1: from f(n) ≤ c·g(n) you get log(f(n)) ≤ log(c) + log(g(n)) = O(log(g(n))).)
Is N^(f(n)) = O(N^(g(n)))? No, this is also not true in general (take N > 1): f(n) = O(g(n)) only bounds the ratio f(n)/g(n) by a constant, but the ratio of the exponentials is N^(f(n) − g(n)), and a constant-factor gap in the exponents can make that difference, and hence the ratio, grow without bound.
Take f(n) = 2n and g(n) = n. It is true that 2n is O(n). But consider

    lim (n → ∞) 2^(2n) / 2^n = lim (n → ∞) 2^n = ∞

This limit is not bounded: it goes to infinity as n goes to infinity. So 2^(2n) is not O(2^n), i.e. 2^(f(n)) is not O(2^(g(n))) in this case.
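As a quick numeric sanity check, here is a sketch in Python (the sample points are arbitrary):

    # Ratio 2^(2n) / 2^n = 2^n: it should blow up rather than stay bounded.
    for n in [1, 5, 10, 20, 30]:
        print(n, 2 ** (2 * n) / 2 ** n)

The printed ratios are 2, 32, 1024, ... — doubling with every step up in n, so no constant c can bound them.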
I struggle to fill in this table even though I took calculus recently and am good at math. The chapter only explains how to handle lim(n^k / c^n), and I have no idea how to compare the other functions. I checked the solution manual and there is no info on that, only a table of answers, which provides little insight.
When I solve these I don't really think about limits -- I lean on a couple of facts and some well-known properties of big-O notation.
Fact 1: for all functions f and g and all exponents p > 0, we have f(n) = O(g(n)) if and only if f(n)^p = O(g(n)^p), and likewise with o, Ω, ω, and Θ respectively. This has a straightforward proof from the definition; you just have to raise the constant c to the power p as well.
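For the O case, the proof of Fact 1 is one line (a sketch; o, Ω, ω, and Θ are analogous):

    0 \le f(n) \le c \, g(n) \;\Rightarrow\; 0 \le f(n)^p \le c^p \, g(n)^p

with the same threshold n_0, since x ↦ x^p is increasing for p > 0.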
Fact 2: for all exponents ε > 0, the function lg(n) is o(n^ε). This follows from l'Hôpital's rule for limits: lim lg(n)/n^ε = lim (lg(e)/n)/(ε n^(ε−1)) = (lg(e)/ε) lim n^(−ε) = 0.
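If you prefer to see Fact 2 numerically, here is a small sketch (assumptions: lg is the base-2 log and ε = 0.1; the ratio shrinks slowly but does go to 0):

    import math
    # lg(n) / n^eps for eps = 0.1 should tend to 0 as n grows.
    for n in [10**6, 10**12, 10**24, 10**48]:
        print(n, math.log2(n) / n ** 0.1)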
Fact 3:
If f(n) ≤ g(n) + O(1), then 2^(f(n)) = O(2^(g(n))).
If f(n) ≤ g(n) − ω(1), then 2^(f(n)) = o(2^(g(n))).
If f(n) ≥ g(n) − O(1), then 2^(f(n)) = Ω(2^(g(n))).
If f(n) ≥ g(n) + ω(1), then 2^(f(n)) = ω(2^(g(n))).
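The first line of Fact 3 is just exponent arithmetic (a sketch; the other three lines follow the same pattern). If f(n) ≤ g(n) + c for a constant c, then

    2^{f(n)} \le 2^{g(n) + c} = 2^c \cdot 2^{g(n)}

so 2^c is the constant witnessing 2^(f(n)) = O(2^(g(n))).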
Fact 4: lg(n!) = Θ(n lg(n)). The proof uses Stirling's approximation.
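Spelled out, Stirling's approximation n! ≈ √(2πn) (n/e)^n gives (a sketch of the expansion):

    \lg(n!) = n \lg(n) - n \lg(e) + O(\lg(n)) = \Theta(n \lg(n))

since the n lg(n) term dominates both the linear term and the O(lg n) correction.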
To solve (a), use Fact 1 to raise both sides to the power of 1/k and apply Fact 2.
To solve (b), rewrite n^k = 2^(lg(n)·k) and c^n = 2^(lg(c)·n), prove that lg(c)·n − lg(n)·k = ω(1), and apply Fact 3.
(c) is special. n^(sin(n)) oscillates: since sin(n) comes arbitrarily close to both −1 and 1, the function ends up anywhere between roughly n^(−1) and n. Since n^(−1) is o(√n) and n is ω(√n), that's a solid row of NO.
To solve (d), observe that n ≥ n/2 + ω(1) and apply Fact 3.
To solve (e), rewrite n^(lg(c)) = 2^(lg(n)·lg(c)) = 2^(lg(c)·lg(n)) = c^(lg(n)).
To solve (f), use Fact 4 and note that lg(n!) = Θ(n lg(n)) and n lg(n) = lg(n^n).
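Here is a numeric sketch of Fact 4 / part (f), using math.lgamma, which computes ln(n!) accurately enough for this purpose:

    import math
    # lg(n!) / (n * lg(n)) should creep toward 1, consistent with Theta(n lg n).
    for n in [10, 100, 10**4, 10**6]:
        lg_factorial = math.lgamma(n + 1) / math.log(2)  # lg(n!) via ln(Gamma(n+1))
        print(n, lg_factorial / (n * math.log2(n)))

The ratio approaches 1 only slowly, because of the −n lg(e) term in Stirling's expansion, but the trend is visible.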
I am trying to figure out if f(n) = O(g(n)).
I understand that:
O(g(n)) = { f(n) : there exist constants c, n0 > 0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }
So I have:
f(n) = 2^(logn)
g(n) = n^1000
I understand that f(n) most closely resembles O(n), since with a base-2 log, 2^(log n) = n. However it is far less than g(n), so would the proof still hold true even though the big-O bound is much, much larger than expected?
If
f(n) = n
g(n) = n^1000
then f = O(g). Broadly, if f=O(g) and h is "bigger" than g, then f=O(h).
But there's a catch here:
f(n) = 2^(logn)
This log has what base? Ordinarily we write something like O(log n) and we don't care about the base; O(log_2 n) and O(log_99 n) are the same thing, because log_a n = k·log_b n, where k is a constant. But what is 2^(log_b n)?
2^(log_b n) = 2^((log_2 n)(log_b 2))
            = (2^(log_2 n))^(log_b 2)
            = n^(log_b 2)
How does that compare to n^1000?
Suppose:
b = 2^(1/2000) (just slightly more than 1)
2 = b^2000
log_b 2 = 2000
So 2^(log_b n) = n^(log_b 2) = n^2000, which is not O(n^1000). In some cases, then, it is not true that 2^(log n) = O(n^1000).
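To see the change-of-base effect in code, here is a sketch (the base b = 2^(1/4), chosen so that log_b 2 = 4 and the numbers stay readable, is an assumption for illustration, not the b = 2^(1/2000) above):

    import math
    b = 2 ** 0.25  # log_b 2 = 4
    for n in [2.0, 10.0, 100.0]:
        lhs = 2 ** math.log(n, b)   # 2^(log_b n)
        rhs = n ** math.log(2, b)   # n^(log_b 2) = n^4
        print(n, lhs, rhs)          # the two columns agree up to float error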
I am taking a class and we're reviewing time complexity information.
I understand that big o is an upper bound, and omega is a lower bound, and if those are the same then the function is theta(that bound).
Let's say I have the function f(n) = n. Can we say that it is Theta(n)?
I think it is, because it is O(n) and Omega(n) with C = 1 for all n >= 1, but I wanted to ask to be sure.
Yes, that is correct. It is a common definition to say that f \in \Theta(g) iff f \in \Omega(g) and f \in O(g).
Here f(n) = n and g(n) = n.
To prove both individual parts: liminf f(n)/g(n) = liminf 1 = 1 > 0 (which gives \Omega), and limsup f(n)/g(n) = limsup 1 = 1 < \infty (which gives O).
In particular f \in Theta(f) for all functions f.
Note, however, that the notation usually uses a big \Theta, not a small one.
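To connect this with the constant-based definition you used: f \in \Theta(g) means there are constants c_1, c_2 > 0 and a threshold n_0 with

    c_1 \, g(n) \le f(n) \le c_2 \, g(n) \quad \text{for all } n \ge n_0

and for f(n) = g(n) = n you can take c_1 = c_2 = 1 and n_0 = 1, exactly as you proposed.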
I've been trying to find some examples showing that this is wrong, because it is supposed to be wrong(?).
if g = O(f) and s = O(r) then g/s = O(f/r)
Is there a counterexample that proves that this big-o division is wrong?
Take g(n) = n^2, f(n)=n^3 and s(n) = 1, r(n)=n^2.
You can see that g = O(f) and s = O(r), but g/s = n^2 is not O(f/r) = O(n^3/n^2) = O(n).
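A numeric sketch of this counterexample, with the functions as defined above:

    # g/s = n^2 and f/r = n, so (g/s) / (f/r) = n is unbounded:
    # no constant c can give g/s <= c * (f/r) for all large n.
    for n in [10, 100, 1000]:
        g_over_s = n ** 2 / 1        # g(n) = n^2, s(n) = 1
        f_over_r = n ** 3 / n ** 2   # f(n) = n^3, r(n) = n^2
        print(n, g_over_s / f_over_r)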
Easy:
g(n) = n²
f(n) = n²
s(n) = n
r(n) = n²
Then:
g ∈ O(f) = O(n²) is true
s ∈ O(r) = O(n²) is true
g/s = n ∈ O(f/r) = O(n²/n²) = O(1) is false
The idea is: if f is O(g) then f is bounded above by g, but not necessarily tightly. I.e. if f is O(n) then it is also O(n^2), O(n^3), and so on.
You can use this idea to find examples by making the denominator function large using a larger bound than necessary, thus making the fraction small.
Big-O is just an upper bound.
Let f = g = 1 and let s grow strictly slower than r. Then g/s = 1/s grows faster than f/r = 1/r, so g/s is not O(f/r).
Let g = n^2, r = n^100, s = n and f = n^2. Clearly the statement is not true: g/s = n and f/r = 1/n^98, and n is not in O(1/n^98).
One intuitive way to think of big-oh is "grows no faster than". However, when you take the reciprocal, if f grows no faster than g, then 1/f grows no slower than 1/g.
A correct statement would be that if g=O(f) and s=O(r), then g/r=O(f/s). Note that in this case I swapped the pairs.
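Here is a sketch of why the swapped version is safe, assuming all four functions are positive. From g(n) ≤ c_1·f(n) and s(n) ≤ c_2·r(n),

    \frac{g(n)}{r(n)} \le \frac{c_1 f(n)}{r(n)} \le \frac{c_1 f(n)}{s(n)/c_2} = c_1 c_2 \cdot \frac{f(n)}{s(n)}

so g/r = O(f/s) with constant c_1·c_2.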
I'm studying orders of growth: "big oh", "big omega", and "big theta". Since I can't type the little symbols for these I will denote them as follows:
ORDER = big oh
OMEGA = big omega
THETA = big theta
For example I'll say n = ORDER(n^2) to mean that the function n is in the order of n^2 (n grows at most as fast as n^2).
Ok for the most part I understand these:
n = ORDER(n^2) //n grows at most as fast as n^2
n^2 = OMEGA(n) //n^2 grows at least as fast as n
8n^2 + 1000 = THETA(n^2) //same order of growth
Ok here comes the example that confuses me:
what is n(n+1) vs n^2
I realize that n(n+1) = n^2 + n; I would say it has the same order of growth as n^2; therefore I would say
n(n+1) = THETA(n^2)
but my question is, would it also be correct to say:
n(n+1) = ORDER(n^2)
please help because this is confusing to me. thanks.
Thank you guys!!
just to make sure I understand correctly, are these all true:
n^2+n = ORDER(2000n^2)
n^2+n = THETA(2000n^2)
n^2+n = OMEGA(2000n^2)
2000n^2 = ORDER(n^2+n)
2000n^2 = THETA(n^2+n)
2000n^2 = OMEGA(n^2+n)
So if f = THETA(g) then f=ORDER(g) and f=OMEGA(g) are also true.
Yes, n(n+1) = Order(n^2) is correct.
If f = Theta(g) then f = Order(g) and g = Order(f) are both true.
Moron is correct, and that is the easiest way to think about it.
But to understand it, return to the definition of f(n) = O(g(n)): there exist a positive M and n0 such that, for all n > n0, f(n) <= M·g(n).
Suppose M=2. Can you find a value, n0, such that for all n > n0, n^2+n <= M(n^2)?
(Plot both functions with pen and paper to understand how they grow in relation to one another.)
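Or check it numerically with a quick sketch in Python (note it gives away that n0 = 1 works, so skip it if you want to solve the exercise yourself):

    # With M = 2: is n^2 + n <= 2*n^2, i.e. n <= n^2, for all n beyond some n0?
    for n in range(1, 11):
        print(n, n ** 2 + n, 2 * n ** 2, n ** 2 + n <= 2 * n ** 2)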
You can use this simple table to get an easy and intuitive understanding of what these symbols mean:
If f(n) and g(n) are two functions, then:

    Notation            Growth rate
    f(n) = Θ(g(n))      growth rate of f(n) = growth rate of g(n)
    f(n) = O(g(n))      growth rate of f(n) ≤ growth rate of g(n)
    f(n) = Ω(g(n))      growth rate of f(n) ≥ growth rate of g(n)
    f(n) = o(g(n))      growth rate of f(n) < growth rate of g(n)
    f(n) = ω(g(n))      growth rate of f(n) > growth rate of g(n)
Also, the order is always written in terms of the highest-order term, i.e. if the order is O(n^2 + n + 1) then we simply write it as O(n^2), since n^2 is the highest-order term.
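For example, for O(n^2 + n + 1) = O(n^2) the constants can be read off directly (a quick sketch):

    n^2 + n + 1 \le n^2 + n^2 + n^2 = 3n^2 \quad \text{for all } n \ge 1

so c = 3 and n_0 = 1 witness the bound.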