I am having a hard time comparing these two functions:
(logn)!
and
2^n
Any good mathematical proof?
You cannot directly compare O((log n)!) and O(2^n), since big-O notation denotes a set. O(g(n)) is the set of all functions f that do not grow faster than g; formally, this is the same as saying that there exist C and n0 such that |f(n)| <= C|g(n)| for every n >= n0. The expression f(n) = O(g(n)) is shorthand for saying that f(n) is in the set O(g(n)). What we can do is check whether 2^n = O((log n)!) or (log n)! = O(2^n) (note that it could be that neither is true). Luckily, if we use the Stirling approximation we get that
log((log n)!) = (log n)*(log log n) - log n + O(log log n) = O(n * log 2)
since n * log 2 grows faster than (log n)*(log log n), and (log n)*(log log n) is the leading term of (log n)*(log log n) - log n + O(log log n). So we get that log((log n)!) = O(log(2^n)), which is the same as saying that (log n)! = O(2^n).
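As a quick numerical sanity check (a Python sketch, not part of the proof; math.lgamma(m + 1) equals log(m!)), one can compare log((log n)!) with log(2^n) = n*log(2) for a few values of n:

    from math import lgamma, log

    # lgamma(m + 1) == log(m!), so lgamma(log(n) + 1) == log((log n)!).
    for n in [10**3, 10**6, 10**9, 10**12]:
        lhs = lgamma(log(n) + 1)   # log((log n)!)
        rhs = n * log(2)           # log(2^n)
        print(f"n = {n:>14}: log((log n)!) = {lhs:8.2f}   n*log(2) = {rhs:.3e}")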
One can easily show that for sufficiently large n it holds that:
(log n)! <= (log n)^{log n} <= n^{log n} = 2^{log^2(n)}
We can now compare only the exponents of 2 in 2^n and in the expression above, namely n and log^2(n) respectively (we can do that since we consider only sufficiently large n and 2^x is strictly increasing for positive x). It is sufficient to show that the limit below diverges to prove that (log n)! is, in fact, o(2^n):
lim[n -> inf] (n)/(log^2(n))
Now we apply L'Hospital's rule (differentiating numerator and denominator and simplifying):
= lim [n -> inf] n/(2*log(n))
And again:
= lim [n -> inf] n/2
Which diverges to infinity.
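A quick numerical look at that ratio (a Python sketch; it uses log base 2, but the base only changes a constant factor):

    from math import log2

    # The ratio n / log2(n)^2 keeps growing, consistent with the limit diverging.
    for n in [2**10, 2**20, 2**40, 2**80]:
        print(f"n = 2^{int(log2(n))}: n / log2(n)^2 = {n / log2(n)**2:.3e}")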
Related
I was given the function 5n^3 + 2n + 8 and asked to prove big-O and big-Omega bounds. I finished big-O, but for big-Omega I end up with a single-term function: I dropped 2n and 8 because they are positive and only make the function larger, so I just end up with 5n^3. How do I choose C and n_0? Or is it simply trivial in this case?
From Big-Ω (Big-Omega) notation (slightly modified):
If a running time of some function f(n) is Ω(g(n)), then for large
enough n, say n > n_0 > 0, the running time of f(n) is at least
C⋅g(n), for some constant C > 0.
Hence, if f(n) is in Ω(g(n)), then there exist positive constants C and n_0 such that the following holds:
f(n) ≥ C⋅g(n), for all n > n_0 (+)
Now, the choice of C and n_0 is not unique; it suffices to exhibit one such pair of constants (for which (+) holds) in order to describe the running time using Big-Omega notation, as quoted above.
Hence, you are indeed almost there:
f(n) = 5n^3 + 2n + 8 > 5n^3 holds for all n larger than, say, 1
=> f(n) ≥ 5⋅n^3 for all n > n_0 = 1 (++)
Finally, (++) is just (+) for g(n) = n^3 and C=5, and hence, by (+), f(n) is in Ω(n^3).
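A quick spot-check of (++) (a Python sketch; it only samples finitely many n, so it illustrates rather than proves the bound):

    # Check f(n) = 5n^3 + 2n + 8 >= C*g(n) with g(n) = n^3, C = 5, for many n > n_0 = 1.
    f = lambda n: 5*n**3 + 2*n + 8
    g = lambda n: n**3
    C, n0 = 5, 1

    assert all(f(n) >= C * g(n) for n in range(n0 + 1, 10**4))
    print("f(n) >= 5*n^3 held for every sampled n > 1")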
Why is ω(n) smaller than O(n)?
I know what little omega means (for example, n = ω(log n)), but I can't understand why ω(n) is smaller than O(n).
Big O ('O') is an upper bound, while little omega ('ω') is a strict lower bound, i.e. a lower bound that is not asymptotically tight.
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}
ω(g(n)) = { f(n): for all constants c > 0, there exists a constant n0 such that 0 ≤ cg(n) < f(n) for all n ≥ n0}.
Also: f(n) ∈ ω(g(n)) exactly when lim[n -> inf] f(n)/g(n) = infinity (when the limit exists).
n ∈ O(n) and n ∉ ω(n).
Alternatively:
n ∈ ω(log(n)) and n ∉ O(log(n))
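A small numerical illustration of those two statements (a Python sketch using the ratio criterion quoted above):

    from math import log

    # n / log(n) grows without bound (n is in omega(log n)),
    # while n / n stays at the constant 1 (n is not in omega(n)).
    for n in [10, 10**3, 10**6, 10**9]:
        print(f"n = {n:>10}   n/log(n) = {n / log(n):14.1f}   n/n = {n / n:.1f}")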
ω(n) and O(n) are at opposite ends of the spectrum.
For more details, see CSc 345 — Analysis of Discrete Structures (McCann), which illustrates these bounds graphically and also contains a compact representation of the definitions, which makes them easy to remember.
I can't comment, so first of all let me say that n ≠ Θ(log(n)). Big Theta means that for some positive constants c1, c2, and k, for all values of n greater than k, c1*log(n) ≤ n ≤ c2*log(n), which is not true. As n approaches infinity, it will always be larger than log(n), no matter log(n)'s coefficient.
jesse34212 was correct in saying that n = ω(log(n)). n = ω(log(n)) means that n ≠ Θ(log(n)) AND n = Ω(log(n)). In other words, little or small omega is a loose lower bound, whereas big omega can be loose or tight.
Big O notation signifies a loose or tight upper bound. For instance, 12n = O(n) (tight upper bound, because it's as precise as you can get), and 12n = O(n^2) (loose upper bound, because you could be more precise).
12n ≠ ω(n) because n is a tight bound on 12n, and ω only applies to loose bounds. That's why 12n = ω(log(n)), or even 12n = ω(1). I keep using 12n, but that value of the constant does not affect the equality.
Technically, O(n) is the set of all functions that grow asymptotically no faster than n, so the membership symbol "∈" is the most appropriate one, but most people write "= O(n)" (instead of "∈ O(n)") as an informal way of expressing it.
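To see the tight/loose distinction numerically (a quick Python sketch, not from the original answer):

    # 12n / n stays at the constant 12 (n is a tight bound: 12n = Theta(n)),
    # while 12n / n^2 heads to 0 (n^2 is only a loose upper bound: 12n = O(n^2)).
    for n in [10, 10**3, 10**6, 10**9]:
        print(f"n = {n:>10}   12n/n = {12*n / n:.1f}   12n/n^2 = {12*n / n**2:.2e}")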
Algorithmic complexity has a mathematical definition.
If f and g are two functions, f = O(g) if you can find two constants c (> 0) and n such that f(x) < c * g(x) for every x > n.
For Ω, it is the opposite: you can find constants such that f(x) > c * g(x).
f = Θ(g) if there are three constants c, d and n such that c * g(x) < f(x) < d * g(x) for every x > n.
Then, O means your function is dominated, Θ means your function is equivalent to the other function, and Ω means your function has a lower limit.
So when you are using Θ, your approximation is better, because you are "wrapping" your function between two edges; whereas O only sets a maximum. Ditto for Ω (minimum).
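For instance, to make the "wrapping" concrete (an example added here, not from the original answer): f(x) = 2x + 3 is Θ(x), since 2x < 2x + 3 < 3x for every x > 3; that is, c = 2, d = 3 and n = 3 in the definition above.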
To sum up:
O(n): the complexity of your algorithm is at most of order n (upper bound)
Ω(n): the complexity of your algorithm is at least of order n (lower bound)
Θ(n): the complexity of your algorithm is exactly of order n (both bounds)
To conclude, your assumption is wrong: it is Θ, not Ω. As you may know, n > log(n) when n is huge. Then it is logical to say n = Θ(log(n)), according to the previous definitions.
I know the definitions of both of them, but why do I sometimes see O(1) and other times Θ(1) written in textbooks?
Thanks.
O(1) and Θ(1) aren't necessarily the same if you are talking about functions over real numbers. For example, consider the function f(n) = 1/n. This function is O(1) because for any n ≥ 1, f(n) ≤ 1. However, it is not Θ(1) for the following reason: one definition of f(n) = Θ(g(n)) is that the limit of |f(n) / g(n)| as n goes to infinity is some finite value L satisfying 0 < L. Plugging in f(n) = 1/n and g(n) = 1, we take the limit of |1/n| as n goes to infinity and get that it's 0. Therefore, f(n) ≠ Θ(1).
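A quick numerical illustration of that point (a Python sketch):

    # f(n) = 1/n is bounded above by 1 for all n >= 1, so f(n) = O(1),
    # but f(n)/1 tends to 0, so no positive constant can bound it from below: not Theta(1).
    for n in [1, 10, 10**3, 10**6]:
        print(f"n = {n:>8}   f(n) = 1/n = {1 / n:.6f}")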
Hope this helps!
Big-O notation expresses an asymptotic upper bound, whereas Big-Theta notation additionally expresses an asymptotic lower bound. Often, the upper bound is what people are interested in, so they write O(something), even when Theta(something) would also be true. For example, if you wanted to count the number of things that are equal to x in an unsorted list, you might say that it can be done in linear time and is O(n), because what matters to you is that it won't take any longer than that. However, it would also be true that it's Omega(n) and therefore Theta(n), since you have to examine all of the elements in the list - it can't be done in sub-linear time.
UPDATE:
Formally:
f in O(g) iff there exists a c and an n0 such that for all n > n0, f(n) <= c * g(n).
f in Omega(g) iff there exists a c and an n0 such that for all n > n0, f(n) >= c * g(n).
f in Theta(g) iff f in O(g) and f in Omega(g), i.e. iff there exist a c1, a c2 and an n0 such that for all n > n0, c1 * g(n) <= f(n) <= c2 * g(n).
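As a concrete version of the counting example mentioned above (a minimal Python sketch; the function name is just for illustration):

    def count_equal(items, x):
        """Count the elements equal to x; it inspects every element, so it runs in Theta(n)."""
        count = 0
        for item in items:          # always n iterations, no early exit possible
            if item == x:
                count += 1
        return count

    print(count_equal([3, 1, 3, 2, 3], 3))   # prints 3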
(log n)^k = O(n)? For k greater than or equal to 1.
My professor presented us with this statement in class; however, I am not sure what it means for a function to have a time complexity of O(n). Even for things like n^2 = O(n^2), how can a function f(x) have a run-time complexity?
As for the statement, how does it equal O(n) rather than O((log n)^k)?
(log n)^k = O(n)?
Yes. The definition of big-Oh is that a function f is in O(g(n)) if there exist positive constants N and c such that for all n > N: f(n) <= c*g(n). In this case f(n) is (log n)^k and g(n) is n, so if we insert that into the definition we get: "there exist constants N and c such that for all n > N: (log n)^k <= c*n". This is true, because (log n)^k / n tends to 0 as n grows, so (log n)^k is in O(n).
how can a function f(x) have a run time complexity
It doesn't. Nothing about big-Oh notation is specific to run-time complexity. Big-Oh is a notation to classify the growth of functions. Often the functions we're talking about measure the run-time of certain algorithms, but we can use big-Oh to talk about arbitrary functions.
f(x) = O(g(x)) means f(x) grows slower or comparably to g(x).
Technically this is interpreted as "We can find an x value, x_0, and a scale factor, M, such that the size of f(x) past x_0 is less than the scaled size of g(x)." Or in math:
|f(x)| < M |g(x)| for all x > x_0.
So for your question:
log(x)^k = O(x)? is asking: are there an x_0 and M such that
log(x)^k < M x for all x > x_0.
The existence of such an M and x_0 can be established using various limit results, and is relatively simple using L'Hopital's rule; however, it can be done without calculus.
The simplest proof I can come up with that doesn't rely on L'Hopital's rule uses the Taylor series
e^z = 1 + z + z^2/2 + ... = sum z^m / m!
Using z = (N! x)^(1/N) we can see that
e^((N! x)^(1/N)) = 1 + (N! x)^(1/N) + (N! x)^(2/N)/2 + ... + (N! x)^(N/N)/N! + ...
For x>0 all terms are positive so, keeping only the Nth term we get that
e^((N! x)^(1/N)) = N! x / N! + (...)
= x + (...)
> x for x > 0
Taking logarithms of both sides (since log is monotonically increasing), and then raising to the Nth power (also monotonically increasing on positive values since N > 0), gives
(N! x)^(1/N) > log x for x > 0
N! x > (log x)^N for x > 1 (we restrict to x > 1 so that both sides are positive before taking the Nth power)
Which is exactly the result we need: (log x)^N < M x for some M and all x > x_0, with M = N! and x_0 = 1.
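A quick numerical check of that bound (a Python sketch; it only samples a few values of N and x, so it is an illustration rather than a proof):

    from math import log, factorial

    # Check (log x)^N < N! * x for several N and all sampled x > 1.
    for N in [1, 2, 3, 5]:
        assert all(log(x)**N < factorial(N) * x for x in range(2, 10**5))
        print(f"N = {N}: (log x)^N < N! * x held for every sampled x in [2, 10^5)")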
f(n)=(log(n))^log(n)
g(n)= n/log(n)
f = O(g(n))?
Take the log of both sides:
log(f(n)) = log(log n) * log n
log(g(n)) = log(n) - log(log(n)) = log(n)(1 - log(log(n))/log(n))
The factor (1 - log(log(n))/log(n)) tends to 1, so log(f(n))/log(g(n)) = log(log(n))/(1 - log(log(n))/log(n)) tends to infinity. Hence f grows faster than g: g is O(f), and f is not O(g). Since it's homework, you may need to fill in the details.
It's also fairly easy to get an idea of what the answer should be just by trying it with a large number. 1024 is 2^10, so taking n = 1024 (and log base 2):
f(n) = 10^10
g(n) = 1024/10 = 102.4
Obviously that's not a proof, but I think we can see who's winning this race.
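The same spot-check in Python (using log base 2, as in the n = 1024 example; again an illustration, not a proof):

    from math import log2

    # f(n) = (log2 n)^(log2 n),  g(n) = n / log2(n)
    for n in [2**10, 2**20, 2**30]:
        k = log2(n)
        print(f"n = 2^{int(k)}:   f(n) = {k**k:.3e}   g(n) = {n / k:.3e}")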
f(n) grows faster than g(n) if and only if f(e^n) also grows faster than g(e^n), since exp is strictly increasing to infinity (prove it yourself).
Now f(e^n) = n^n and g(e^n) = e^n / n, and you can quote the known results.
If Limit[f[x] / g[x], x -> Infinity] = Infinity, then f[x] grows faster than g[x].
Limit[Log[x] ^ Log[x] / (x / Log[x]), x -> Infinity] = + Infinity
So, Log[x] ^ Log[x] grows faster than x / Log[x]
Mathematica gives the limit of f(n) / g(n) as n tends towards infinity as infinity, which means that f grows faster. This means that g(n) belongs to (=) O(f(n)).
You can use this for example if you don't have Mathematica.
f is vastly bigger: f(n)/g(n) = n^(log(log(n)) - 1) * log(n).