I came across this statement. As per my understanding, Theta lies between Big O and Omega, but I am unable to understand why an intersection appears here. Can I get a mathematical as well as an analytical understanding of Θ(g(n)) = O(g(n)) ∩ Ω(g(n))?
Θ(g(n)) means that the function is bounded both above and below by constant multiples of g(n).
Mathematically, if a function f(n) is Θ(g(n)), then
0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n greater than some constant k.
Now,
O(g(n)) is the set of functions upper-bounded by a constant multiple of g(n), so a function that is O(g(n)) grows no faster than g(n).
Ω(g(n)) is the set of functions lower-bounded by a constant multiple of g(n), so a function that is Ω(g(n)) grows at least as fast as g(n).
O(g(n)) ∩ Ω(g(n)) therefore contains exactly the functions sandwiched between constant multiples of g(n) from both above and below, which by definition is Θ(g(n)).
Mathematically, that means 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n greater than some constant k.
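To make the sandwich concrete, here is a minimal numeric sketch; the function, constants, and threshold are made-up examples, not anything from the question: f(n) = 3n² + 5n is Θ(n²), witnessed by c1 = 3, c2 = 4, and k = 5.

```python
# Minimal numeric sanity check of the sandwich 0 <= c1*g(n) <= f(n) <= c2*g(n).
# Example (assumed, not from the question): f(n) = 3n^2 + 5n, g(n) = n^2,
# with witnesses c1 = 3, c2 = 4 and threshold k = 5.

def f(n):
    return 3 * n**2 + 5 * n

def g(n):
    return n**2

c1, c2, k = 3, 4, 5

for n in range(k, 1000):
    assert 0 <= c1 * g(n) <= f(n) <= c2 * g(n), f"sandwich fails at n={n}"

print("f(n) is sandwiched between 3*g(n) and 4*g(n) for all tested n >= 5")
```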
Related
Is the following equation true?
O(f(n)) + Ω(g(n)) = Ω(f(n)) + O(g(n))
I know that Big O means no worse than (the given function), and Big Omega means no better than it. But I don't know if that makes the above statement true or false.
We have three general cases (for increasing functions):
Case 1: f(n) ∈ o(g(n)) (note the "little-oh").
In this case, O(f(n)) ⊂ O(g(n)) and Ω(g(n)) ⊂ Ω(f(n)). Hence, O(f(n)) + Ω(g(n)) is a proper subset of O(g(n)) + Ω(f(n)).
For example, if f(n) = n and g(n) = n³, then n² is in Ω(f(n)) + O(g(n)), but it is not in O(f(n)) + Ω(g(n)) (see the numeric sketch after the case list).
Case 2: f(n) ∈ Θ(g(n))
In this case, O(f(n)) = O(g(n)) and Ω(f(n)) = Ω(g(n)), so the two sets are equal.
Case 3: f(n) ∈ ω(g(n))
This case is equivalent to case 1, just with f and g flipped. So by symmetry, we have that O(f(n)) + Ω(g(n)) is a proper superset of O(g(n)) + Ω(f(n)).
In sum, these two sets are not equal in general.
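Here is a small numeric sketch of the case-1 example (f(n) = n, g(n) = n³, with witness h(n) = n²); the decomposition of h is my own illustration:

```python
# f(n) = n, g(n) = n^3, witness h(n) = n^2.
# h IS in Omega(f) + O(g): h(n) = n + (n^2 - n), where n is Omega(n)
# and n^2 - n is O(n^3).
# h is NOT in O(f) + Omega(g): every member of that set is Omega(n^3),
# but h(n)/n^3 -> 0, as the shrinking ratios below suggest.

for n in [10, 100, 1000, 10_000]:
    print(f"n={n:>6}  h/f = {n**2 / n:>8.1f}  h/g = {n**2 / n**3:.6f}")
```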
Let f(n) = n and g(n) = 2^n.
Then t(n) = 2n is in the second set, because 2n = n + n with n ∈ Ω(f(n)) and n ∈ O(g(n)); but it is not in the first set, since every function in O(f(n)) + Ω(g(n)) is itself Ω(2^n).
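A quick sanity check of that counter-example, assuming nothing beyond the functions named above:

```python
# f(n) = n, g(n) = 2^n, t(n) = 2n.
# t = n + n shows t is in Omega(f(n)) + O(g(n)) (the second set).
# Every member of O(f(n)) + Omega(g(n)) is Omega(2^n), but t(n)/2^n -> 0,
# so t cannot be in the first set.

for n in range(10, 60, 10):
    print(f"n={n:>2}  t(n)/2^n = {2 * n / 2**n:.3e}")
```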
If a function f(n) grows more slowly than a function g(n), why is f(n) = O(g(n))?
e.g. if f(n) is 4n^4 and g(n) is log((4n)^(n^4))
My book says f = O(g(n)) because g = n^4·log(4n) = n^4(log n + log 4) = O(n^4·log n). I understand why g = O(n^4·log n), but I'm not sure how they reached the conclusion that f = O(g(n)) from big O of g.
I understand that f(n) grows more slowly than g(n) just by thinking about the graphs, but I'm having trouble understanding asymptotic behavior and why f = O(g(n)) in general.
The formal definition of big-O notation is that
f(n) = O(g(n)) if there are constants n0 and c such that for any n ≥ n0, we have f(n) ≤ c · g(n).
In other words, f(n) = O(g(n)) if for sufficiently large values of n, the value of f(n) is upper-bounded by some constant multiple of g(n).
Notice that this just says that f(n) is upper-bounded by g(n), not that f(n)'s rate of growth is the same as g(n)'s rate of growth. In that sense, you can think of f(n) = O(g(n)) as akin to saying something like "f ≤ g," that f doesn't grow faster than g, leaving open the possibility that g grows a lot faster than f does.
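As a numeric sketch tied back to the question's functions (using only the standard math module), the ratio f(n)/g(n) = 4/log(4n) shrinks toward 0, which is the "f ≤ g" picture in action:

```python
import math

# The question's functions: f(n) = 4n^4 and g(n) = log((4n)^(n^4)) = n^4 * log(4n).
# The ratio f(n)/g(n) = 4/log(4n) tends to 0, so f grows no faster than g;
# in fact f(n) = o(g(n)), which is stronger than f(n) = O(g(n)).

for n in [10, 10**3, 10**6, 10**9]:
    ratio = (4 * n**4) / (n**4 * math.log(4 * n))
    print(f"n = 10^{round(math.log10(n)):<2} f/g = {ratio:.4f}")
```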
If f(n) is O(g(n)) but not o(g(n)), is it true that f(n) is theta(g(n))?
Similarly, f(n) is Omega(g(n)) but not omega(g(n)) implies f(n) is theta(g(n)).
If not, can you provide an explanation/counter-example please?
NOTE: think of O as ≤ and o as <.
If f(n) is O(g(n)) but not o(g(n)), is it true that f(n) is theta(g(n))?
Yes, f(n) ∈ Θ(g(n)).
f(n) = O(g(n)) means f(n) ≤ C·g(n) for some constant C > 0 and all sufficiently large n.
f(n) = o(g(n)) holds exactly when f(n) = O(g(n)) but f(n) ∉ Θ(g(n)), provided the ratio f(n)/g(n) tends to a limit (the typical situation for running-time functions).
So, since f(n) is O(g(n)) but not o(g(n)), it follows that f(n) ∈ Θ(g(n)).
NOTE: think of Ω as ≥ and ω as >.
Similarly, f(n) is Omega(g(n)) but not omega(g(n)) implies f(n) is theta(g(n)).
Yes, f(n) ∈ Θ(g(n)), following similar logic:
f(n) = Ω(g(n)) means f(n) ≥ c·g(n) for some constant c > 0 and all sufficiently large n.
f(n) = ω(g(n)) holds exactly when f(n) = Ω(g(n)) but f(n) ∉ Θ(g(n)), again provided f(n)/g(n) tends to a limit.
So, since f(n) is Ω(g(n)) but not ω(g(n)), it follows that f(n) ∈ Θ(g(n)).
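A small illustrative check with a made-up pair not taken from the question, f(n) = 2n + 3 and g(n) = n: the ratio f/g stays bounded (so O, not o) and settles at a positive constant, which is exactly the Θ behaviour:

```python
# Assumed example: f(n) = 2n + 3, g(n) = n.
# f is O(g): the ratio f/g is bounded above (by 3 for n >= 3).
# f is not o(g): the ratio tends to 2, not to 0.
# A ratio squeezed between positive constants is the Theta(g) situation.

for n in [10, 100, 1000, 10**6]:
    print(f"n={n:>7}  f/g = {(2 * n + 3) / n:.4f}")
```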
It is given that a certain computation has a performance lower bound of Ω(g(n)) on an input of size n. What would be the negation of this statement, i.e., what statement corresponds to the above statement being false?
My intuition says:
There exists an algorithm using which we can do the computation in O(g(n)).
I am confused about whether it should be Big-O or small-o.
It should be small o.
Ω(g(n)) ∩ O(g(n)) = Θ(g(n))
So the fact that something is Ω(g(n)) does not contradict its being O(g(n)).
On the other hand, o(g(n)) = O(g(n)) \ Θ(g(n)) (again for functions whose ratio to g(n) tends to a limit).
And thus:
Ω(g(n)) ∩ o(g(n)) = Ω(g(n)) ∩ (O(g(n)) \ Θ(g(n)))
= (Ω(g(n)) ∩ O(g(n))) \ Θ(g(n)) = Θ(g(n)) \ Θ(g(n)) = {}
So the intersection of o(g(n)) and Ω(g(n)) is the empty set: if some function is in o(g(n)), it cannot be in Ω(g(n)).
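A concrete instance of this disjointness, with made-up functions f(n) = n (which is in o(n²)) and g(n) = n²: for any fixed c > 0, the inequality f(n) ≥ c·g(n) can only hold while n ≤ 1/c, so f is not Ω(n²).

```python
# f(n) = n is in o(n^2). For any fixed c > 0, f(n) >= c * n^2
# is equivalent to n <= 1/c, so it fails for all large n: f is not Omega(n^2).

c = 0.001  # an arbitrarily small constant; the bound still fails once n > 1/c = 1000
for n in [100, 1000, 2000]:
    print(f"n={n:>4}  n >= c*n^2?  {n >= c * n**2}")
```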
Ω(g(n)) means "at least g(n)". The negation is that it is not true that we need at least g(n), which means we can do better than g(n); so the negation is that the task can be done in o(g(n)).
I have two functions, f(n) = log₂n and g(n) = log₁₀n. I am trying to decide whether f(n) is O(g(n)), Ω(g(n)), or Θ(g(n)). I think I should take the limit of f(n)/g(n) as n goes to infinity, and I think that limit is a constant, so f(n) must be Θ(g(n)).
Am I right?
log₂n = log₁₀n / log₁₀2 (by the change-of-base formula)
So f(n) = g(n) / log₁₀2.
So f(n) and g(n) differ only by a constant factor (since log₁₀2 is a constant).
So, from the definitions of O(x), Ω(x) and Θ(x), I'd say:
f(n) ∈ O(g(n)),
f(n) ∈ Ω(g(n)),
f(n) ∈ Θ(g(n))
Yes, you are right. From a complexity point of view (at least a big-O point of view), it doesn't matter whether it is log₂ or log₁₀.
f(n) is both O(g(n)) and Ω(g(n)), so f(n) is Θ(g(n)).
As the limit is constant, you are right that f(n) ∈ Θ(g(n)). Also, of course, g(n) ∈ Θ(f(n)).
BTW: not only is the limit of the ratio constant, but log₂n / log₁₀n is always exactly the same constant (log₂10).
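A quick numeric check of that identity, using only Python's standard math module:

```python
import math

# log2(n) / log10(n) equals log2(10) (about 3.3219) for every n > 1,
# not just in the limit.

for n in [2, 10, 1000, 10**9]:
    print(f"n={n:>10}  log2(n)/log10(n) = {math.log2(n) / math.log10(n):.6f}")
print(f"log2(10) = {math.log2(10):.6f}")
```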