"If f= BigOmega(g) then g=o(f)"
Is this true?
My understanding is that f is Big Omega bounded by g, so on a graph f is at least g(n), or more. Then, looking at g: if it is little-oh of f, it should be bounded above by f, but not tightly. That seems true to me?
Not necessarily. Let f(x) = g(x) = x.
Then f = BigOmega(g). Proof: let k = 1/2, n_0=1, then for all n > n_0, f(n) >= k * g(n) (since x >= x/2 when x > 1).
However, g != o(f). Little-o requires the bound to hold for every k > 0, but with k = 1/2 the inequality |g(n)| <= k * |f(n)| fails at every n > 0, so there is no cutoff beyond which it holds.
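A quick numeric sanity check of this counterexample, as a sketch in Python (the constants k = 1/2 and n0 = 1 are the ones from above; a finite check only illustrates the asymptotic claims, it does not prove them):

# f(x) = g(x) = x: the Omega witness k = 1/2, n0 = 1 works,
# but the little-o inequality with k = 1/2 fails at every n > 0.
def f(x): return x
def g(x): return x

k, n0 = 0.5, 1
print(all(f(n) >= k * g(n) for n in range(n0 + 1, 1000)))   # True: f >= k*g beyond n0
print(all(g(n) <= k * f(n) for n in range(n0 + 1, 1000)))   # False: the o-bound never holds for this k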
A related question: is o(f(n)) = O(f(n)) - Θ(f(n))?
f ∈ O(g) [big O] says, essentially
For at least one choice of a constant k > 0, you can find a constant y such that the inequality 0 <= f(x) <= k g(x) holds for all x > y.
f ∈ Θ(g) [theta] says, essentially
For at least one choice of constants k1, k2 > 0, you can find a constant y such that the inequality 0 <= k1 g(x) <= f(x) <= k2 g(x) holds for all x > y.
f ∈ o(g) [little o] says, essentially
For every choice of a constant k > 0, you can find a constant y such that the inequality 0 <= f(x) < k g(x) holds for all x > y.
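If it helps to experiment with these definitions, here is a rough Python sketch that checks a claimed pair of witness constants on a finite range (the functions and constants are made up for illustration; a finite check can refute a witness but never prove an asymptotic claim, and little-o cannot be tested this way at all, since it quantifies over every k):

# Check a claimed big-O witness (k, y) for f ∈ O(g) on a finite range of x values.
def holds_big_o(f, g, k, y, xs):
    return all(0 <= f(x) <= k * g(x) for x in xs if x > y)

# Check a claimed Theta witness (k1, k2, y) for f ∈ Θ(g) on a finite range.
def holds_theta(f, g, k1, k2, y, xs):
    return all(0 <= k1 * g(x) <= f(x) <= k2 * g(x) for x in xs if x > y)

xs = range(1, 10_000)
print(holds_big_o(lambda x: 3*x + 5, lambda x: x, k=4, y=10, xs=xs))         # True
print(holds_theta(lambda x: 3*x + 5, lambda x: x, k1=1, k2=4, y=10, xs=xs))  # True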
From these definitions it is easy to see that o(g) ⊆ O(g) and Θ(g) ⊆ O(g), and it seems natural that the two should complement each other inside O(g). I couldn't find any counterexample: a function that is in O(f(n)), not in Θ(f(n)), and also not in o(f(n)).
Surprisingly, no, this isn’t the case. Intuitively, big-O notation gives an upper bound without making any claims about lower bounds. If you subtract out the big-Θ class, you’ve removed functions that are bounded from above and below by the function. That leaves you with some extra functions that are upper-bounded by the function but not lower bounded by it.
As an example, let f(n) be n if n is even and 0 otherwise. Then f(n) = O(n) but f(n) ≠ Θ(n). However, it's also not true that f(n) = o(n), since f(n)/n = 1 on every even n; so f is in O(n) - Θ(n) but not in o(n).
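A small Python sketch that illustrates this counterexample on a finite range (an illustration only, not a proof):

# f(n) = n for even n and 0 for odd n.
def f(n): return n if n % 2 == 0 else 0

ns = range(1, 1000)
print(all(f(n) <= 1 * n for n in ns))    # True: the O(n) witness c = 1 works
print(any(f(n) < 0.1 * n for n in ns))   # True: the lower bound with k = 0.1 fails (at every odd n)
print(any(f(n) > 0.5 * n for n in ns))   # True: the little-o requirement fails for k = 0.5 (at every even n)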
My Algorithms textbook has the following excerpt:
I am struggling to understand their proof that there exists a tight bound IF the limit as n goes to infinity of the ratio of two functions is a constant.
Specifically, where do they get 0.5c and 2c from?
My thoughts: a tight bound means that a function T(n) is bounded above by f(n) and below by g(n). Now let's say T(n) = n^2, f(n) = an^2, and g(n) = bn^2. Then we know the tight bound of T(n) is Theta(n^2), since the ratio of f(n) and g(n) is a constant, a/b.
The formal definition of the statement "lim w(x) = c as x -> infinity" is the following:
For all epsilon > 0, there exists some N such that for all x > N, |w(x) - c| < epsilon.
Now we are given that lim f(x) / g(x) = c as x -> infinity, and that c > 0. Then c / 2 > 0.
Consider epsilon = c / 2. Then epsilon > 0, so there exists some N such that for all x > N, we have |f(x) / g(x) - c| < epsilon = c / 2. This is equivalent to saying -c/2 < f(x) / g(x) - c < c / 2, which is in turn equivalent to saying c/2 < f(x) / g(x) < 3c / 2.
Now since for all x > N, we have c/2 < f(x) / g(x), then (since we always assume that f and g are positive valued) we can conclude that for all x > N, f(x) > g(x) c/2. Thus, we have shown that f(x) = Omega(g(x)).
And similarly, since for all x > N, we have f(x) / g(x) < 3/2 c, we see that for all x > N, f(x) < g(x) (3/2 c). Then we have shown that f(x) = O(g(x)).
Thus, we see that f(x) = Theta(g(x)), as required.
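To see the c/2 and 3c/2 bounds concretely, here is a small Python illustration with a made-up pair of functions whose ratio tends to c = 3 (the functions and the cutoff N are assumptions chosen only to make the inequalities visible):

# f(x)/g(x) = 3 + 10/x -> 3, so eventually c/2 = 1.5 < f(x)/g(x) < 4.5 = 3c/2.
def f(x): return 3 * x**2 + 10 * x
def g(x): return x**2

c, N = 3.0, 20
print(all(c / 2 < f(x) / g(x) < 3 * c / 2 for x in range(N + 1, 10_000)))   # True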
You are given functions f and g such that f(n)=Ω(g(n)). Is log(f(n)^c)=Ω(log(g(n)))? (Here c is some positive constant.) You should assume that f and g are non-decreasing and always bigger than 1.
This is a question in my algorithms course, and I can't figure out whether it's true, false, dependent on the constant, or dependent on the functions f and g.
It's true. f(n) = Omega(g(n)) means there are constants s > 0 and k with f(n) >= s*g(n) for all n >= k, so liminf{n -> ∞} f(n)/g(n) > 0 (the limit itself need not exist).
Since f and g are non-decreasing and always greater than 1, their logs are positive, and it follows that liminf{n -> ∞} log(f(n))/log(g(n)) > 0 as well (if g is unbounded, log(g(n)) eventually swallows the constant log(s); if g is bounded, log(g(n)) is bounded while log(f(n)) >= log(f(1)) > 0). Hence, log(f(n)) = Omega(log(g(n))).
On the other hand, log(f(n)^c) = c log(f(n)). As c is a positive constant factor, log(f(n)^c) is Omega(log(g(n))) as well, and your claim is correct.
First, I point out that instead of the notation f(n) = Ω(g(n)) I use f(n) ∈ Ω(g(n)).
From the definition of Omega we have:
f(n) ∈ Ω(g(n)) <=> ∃s,k > 0 | f(n) >= s*g(n) ∀n >= k
So take such s and k. Since log is increasing and both sides are positive for n >= k, we get:
log(f(n)) >= log(s*g(n)) = log(g(n)) + log(s) ∀n >= k
Multiplying by the constant c > 0:
log(f(n)^c) = c*log(f(n)) >= c*log(g(n)) + c*log(s) ∀n >= k
If s >= 1, the term c*log(s) is non-negative and we are done. If s < 1, it is a negative constant, and we use the assumption that g is non-decreasing and bigger than 1: either g is unbounded, so log(g(n)) eventually exceeds 2*|log(s)| and then log(f(n)^c) >= (c/2)*log(g(n)); or g is bounded above by some M, so log(g(n)) <= log(M) while log(f(n)) >= log(f(1)) > 0, which gives another positive constant s' with log(f(n)^c) >= s'*log(g(n)).
Either way, log(f(n)^c) ∈ Ω(log(g(n))) (which is the same class as Ω(c*log(g(n)))), so the claim is true.
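As a concrete sanity check, a short Python sketch with made-up f, g, and c (the witness constants s and k below are assumptions for this particular pair; a finite check only illustrates the claim):

import math

# f(n) = n^2, g(n) = 4n, so f(n) >= 1*g(n) for n >= 4; take c = 0.5.
# We check log(f(n)^c) >= s * log(g(n)) with s = 0.4 from n = 4 onwards.
def f(n): return n ** 2
def g(n): return 4 * n

c, s, k = 0.5, 0.4, 4
print(all(math.log(f(n) ** c) >= s * math.log(g(n)) for n in range(k, 10_000)))   # True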
A TA told me today that this is true, but I was unable to verify it by googling. It says that functions like log(n)^2, log(n)^3, ..., log(n)^m are all O(n).
Is this true?
Claim
The function f(n) = log(n)^m, for any natural number m > 2 (m ∈ ℕ+) is in
O(n).
I.e. there exists a set of positive constants c and n0 such that
the following holds:
log(n)^m < c · n, for all n > n0, { m > 2 | m ∈ ℕ+ } (+)
Proof
Assume that (+) does not hold, and denote this assumption by (*).
That is, given (*), there exist no positive constants c and n0 such that (+) holds for the given value of m > 2. Under this assumption, the following holds: for all positive constants c and n0, there exists an n > n0 such that (thanks #Oriol):
log(n)^m ≥ c · n, { m > 2 | m ∈ ℕ+ } (++)
Now, if (++) holds, then the inequality in (++) will also hold after applying any monotonically increasing function to both sides. One such function is, conveniently, the log function itself.
Hence, under the assumption that (++) holds, then, for all positive constants c and n0, there exists a n > n0 such that the following holds
log(log(n)^m) ≥ log(c · n), { m > 2 | m ∈ ℕ+ }
m · log(log(n)) ≥ log(c · n), { m > 2 | m ∈ ℕ+ } (+++)
However, (+++) is obviously a contradiction: since log(n) dominates (w.r.t. growth) over log(log(n)),
we can—for any given value of m > 2—always find a set of constants c and n0 such that (+++) (and hence (++)) is violated for all n > n0.
Hence, assumption (*) is proven false by contradiction, and hence, (+) holds.
=> for f(n) = log(n)^m, for any finite integer m > 2, it holds that f ∈ O(n).
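As a quick numerical sanity check of the claim for one particular m (the candidate constants below are assumptions, and sampling a few points can only illustrate the bound, not prove it):

import math

m, c, n0 = 5, 1.0, 1_000_000             # candidate witnesses for log(n)^5 < c * n
samples = [n0 + 1, 10**7, 10**9, 10**12]
print(all(math.log(n) ** m < c * n for n in samples))   # True at these sample points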
Yes. If the function is f(n), then m is a parameter and f does not depend on it. In fact, we have a different function f_m for each m.
f_m(n) = log(n)^m
Then it's easy. Given m ∈ ℕ, use L'Hôpital's rule repeatedly:
limsup{n→∞} f_m(n)/n = limsup{n→∞} log(n)^m/n = limsup{n→∞} m·log(n)^(m-1)/n
                     = limsup{n→∞} m·(m-1)·log(n)^(m-2)/n = … = limsup{n→∞} m!/n = 0
Therefore, f_m ∈ O(n).
Of course, it would be different if we had f(m,n) = log(n)^m. For example, taking m=n,
limsup{n→∞} f(n,n)/n = limsup{n→∞} log(n)^n/n = ∞
Then f ∉ O(n).
In many ways it is more intuitive that for any positive integer m we have:
x^m = O(e^x)
This says that exponential growth dominates polynomial growth (which is why exponential time algorithms are bad news in computer programming).
Assuming that this is true, simply take x = log(n) and use the fact that then x tends to infinity if and only if n tends to infinity and that e^x and log(x) are inverses:
log(n)^m = O(e^log(n)) = O(n)
Finally, since for any natural number m, the root function n => n^(1/m) is increasing, we can rewrite the result as
log(n) = O(n^(1/m))
This way of writing it says that log(n) grows slower than any root (square, cube, etc.) of n, which more obviously corresponds to e^n growing faster than any power of n.
On Edit: the above showed that log(n)^m = O(n) follows from the more familiar x^m = O(e^x). To make this a more self-contained proof, we can show the latter somewhat directly.
Start with the Taylor series for e^x:
e^x = 1 + x/1! + x^2/2! + x^3/3! + ... + x^n/n! + ...
This is known to converge for all real numbers x. If a positive integer m is given, let K = (m+1)!. Then, if x > K we have 1/x < 1/(m+1)!, hence
x^m = x^(m+1)/x < x^(m+1)/(m+1)! < e^x
which implies x^m = O(e^x). (The last inequality in the above is true since all terms in the expansion for e^x are strictly positive if x>0 and x^(m+1)/(m+1)! is just one of those terms.)
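The inequality chain can be spot-checked numerically; here is a minimal Python sketch for one choice of m (the sample points are arbitrary values above K = (m+1)!):

import math

m = 4
K = math.factorial(m + 1)          # K = (m+1)! = 120
xs = [K + 1, 2 * K, 500]
# For x > K: x^m = x^(m+1)/x < x^(m+1)/(m+1)! < e^x
print(all(x**m < x**(m + 1) / math.factorial(m + 1) < math.exp(x) for x in xs))   # True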
I'm trying to prove that this is correct for any functions f and g with domain and co-domain ℕ. I have seen it proven using limits, but apparently you can also prove it without them.
Essentially what I'm trying to prove is: "If f(n) is not big-O of g(n), then g(n) must be big-O of f(n)." What I'm having trouble with is understanding what "f is not big-O of g" means.
According to the formal definition of big-O, if f(n) = O(g(n)) then n >= N -> f(n) <= c·g(n) for some N and a constant c. If f(n) != O(g(n)), I think that means there is no c that fulfills this inequality for all values of n. Yet I don't see how to use that fact to prove g(n) = O(f(n)); it doesn't show that a c' exists with g(n) <= c'·f(n), which is what would prove the claim.
Not true. Let f(n) = 1 if n is odd and zero otherwise, and g(n) = 1 if n is even and zero otherwise.
To say that f is O(g) would mean there is a constant C > 0 and N > 0 such that n > N implies f(n) <= C·g(n). But pick any odd n > N: then f(n) = 1 while g(n) = 0, so f(n) <= C·g(n) is impossible. Thus, f is not O(g).
Similarly, we can show that g is O(f) is not true.
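A short Python sketch of why no choice of constants works in either direction (the tested values of C and N are arbitrary; the point is that odd and even witnesses exist beyond any N):

# f(n) = 1 on odd n, 0 on even n; g(n) = 1 on even n, 0 on odd n.
def f(n): return 1 if n % 2 == 1 else 0
def g(n): return 1 - f(n)

for C in (1, 10, 1000):
    for N in (10, 100, 1000):
        n_odd, n_even = 2 * N + 1, 2 * N + 2
        assert f(n_odd) > C * g(n_odd)     # an odd n > N breaks f(n) <= C*g(n)
        assert g(n_even) > C * f(n_even)   # an even n > N breaks g(n) <= C*f(n)
print("no tested (C, N) works in either direction")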
First of all, your definition of big-O is a little bit off. You say:
I think that means there is no c that fulfills this inequality for all values of n.
In actuality, you need to pick a single value c, together with a cutoff N, that fulfills the inequality for every n >= N; the negation is that for every choice of c and N, some n >= N violates the inequality.
Anyway, to answer the question:
I don't believe the statement in the question is true... Let's see if we can think of a counter-example, where f(n) ≠ O(g(n)) and g(n) ≠ O(f(n)).
note: I'm going to use n and x interchangeably, since it's easier for me to think that way.
We'd have to come up with two functions that continually cross each other as they go towards infinity. Not only that, but they'd have to keep crossing each other regardless of the constant c that we multiply them by.
So that leaves me thinking that the functions will have to alternate between two different time complexities.
Let's look at a function that alternates between y = x and y = x^2:
f(x) = .2 (x * sin(x) + x^2 * (1 - sin(x)) )
Now, if we create a similar function with a slightly offset oscillation:
g(x) = .2 (x * cos(x) + x^2 * (1 - cos(x)) )
Then these two functions will continue to cross each others' paths out to infinity.
For any number N that you select, no matter how high, there will be an x1 > N where sin(x1) = 0 and cos(x1) = 1, so f(x1) is on the order of x1^2 while g(x1) is on the order of x1. Similarly, there will be an x2 > N where g(x2) is on the order of x2^2 and f(x2) on the order of x2.
Because of these points, you won't be able to choose any c1 or c2 that ensures f(x) < c1 * g(x) or g(x) < c2 * f(x) for all sufficiently large x.
In conclusion, f(n) ≠ O(g(n)) does not imply g(n) = O(f(n)).
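For what it's worth, here is a small Python sketch that evaluates these two functions at the crossing points x = 2·π·k and x = 2·π·k + π/2; the ratios f/g and g/f both grow without bound, which illustrates the argument above numerically (an illustration, not a proof):

import math

def f(x): return 0.2 * (x * math.sin(x) + x**2 * (1 - math.sin(x)))
def g(x): return 0.2 * (x * math.cos(x) + x**2 * (1 - math.cos(x)))

for k in (10, 100, 1000):
    x1 = 2 * math.pi * k                 # sin ~ 0, cos ~ 1: f ~ 0.2*x^2, g ~ 0.2*x
    x2 = 2 * math.pi * k + math.pi / 2   # sin ~ 1, cos ~ 0: f ~ 0.2*x,   g ~ 0.2*x^2
    print(round(f(x1) / g(x1)), round(g(x2) / f(x2)))   # both ratios keep growing with k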