Is little O the complement of Theta to Big O?

In other words, is o(f(n)) = O(f(n)) - Θ(f(n))?
f ∈ O(g) [big O] says, essentially
For at least one choice of a constant k > 0, you can find a constant y such that the inequality 0 <= f(x) <= k g(x) holds for all x > y.
f ∈ Θ(g) [theta] says, essentially
For at least one choice of constants k1, k2 > 0, you can find a constant y such that the inequality 0 <= k1 g(x) <= f(x) <= k2 g(x) holds for all x > y.
f ∈ o(g) [little o] says, essentially
For every choice of a constant k > 0, you can find a constant y such that the inequality 0 <= f(x) < k g(x) holds for all x > y.
From these definitions it is easy to see that o(g) ⊆ O(g) and Θ(g) ⊆ O(g), and it seems natural that the two would complement each other inside O(g). I couldn't find any counterexample: a function that is in O(f(n)) but neither in Θ(f(n)) nor in o(f(n)).

Surprisingly, no, this isn’t the case. Intuitively, big-O notation gives an upper bound without making any claims about lower bounds. If you subtract out the big-Θ class, you’ve removed functions that are bounded from above and below by the function. That leaves you with some extra functions that are upper-bounded by the function but not lower bounded by it.
As an example, let f(n) be n if n is even and 0 otherwise. Then f(n) = O(n), but f(n) ≠ Θ(n) because f vanishes on every odd n, so no positive multiple of n bounds it from below. However, it's also not true that f(n) = o(n): on every even n we have f(n) = n, so for k = 1/2 (indeed for any k ≤ 1) the bound f(n) < k·n fails at arbitrarily large n.
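To see the oscillation concretely, here is a minimal Python sketch (an illustration of the example above, not a proof; the sampled range is an arbitrary choice):

```python
# f(n) = n for even n, 0 for odd n.
def f(n):
    return n if n % 2 == 0 else 0

# f(n) <= 1 * n for every n >= 1, consistent with f being O(n).
assert all(f(n) <= n for n in range(1, 10_000))

# The ratio f(n)/n oscillates between 0 and 1 forever, so there is no
# k1 > 0 with k1 * n <= f(n) for all large n (f is not Theta(n)), and
# f(n) < 0.5 * n fails at every large even n (f is not o(n)).
print([f(n) / n for n in range(9_990, 10_000)])  # 1.0, 0.0, 1.0, 0.0, ...
```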

Related

Asymptotic bounds and Big Θ notation

Suppose that f(n) = 4^n and g(n) = n^n. Will it be right to conclude that f(n) = Θ(g(n))?
In my opinion it's a correct claim but I'm not 100% sure.
It is incorrect. f(n) = Theta(g(n)) if and only if both f(n) = O(g(n)) and g(n) = O(f(n)). It is true that f(n) = O(g(n)). We will show that it is not the case that g(n) = O(f(n)).
Assume g(n) = O(f(n)). Then there exist a positive real constant c and a positive natural number n0 such that for all n > n0, g(n) <= c * f(n). For our functions, this means n^n <= c * 4^n. Taking the nth root of both sides gives n <= 4 * c^(1/n). We are free to assume c >= 1 and n0 >= 1, since if smaller values worked then larger values would work too. For all c >= 1 and n > 1, 4 * c^(1/n) is at most 4c. But then if we choose n > 4c (and n > n0), the inequality n <= 4 * c^(1/n) is false. So there cannot be an n0 such that the condition holds for all n > n0. This is a contradiction; our initial assumption is disproven.
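A quick numerical illustration (not part of the proof, just a sanity check of the growth rates involved):

```python
# n**n / 4**n grows without bound, so no constant c can make
# n**n <= c * 4**n hold for all sufficiently large n.
for n in (5, 10, 20, 40):
    print(n, n**n // 4**n)  # the integer ratio explodes quickly
```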

Prove small-oh with big-oh

If I already know that f(n) is O(g(n)), how do I prove from the definition of little-oh that f(n) is o(n * g(n))?
Given: f(n) is in O(g(n)).
Using the definition of big-O notation, we can write this as:
f(n) is in O(g(n))
=> |f(n)| ≤ k*|g(n)|, for some constant k>0 (+)
for n sufficiently large (say, n>N)
For the definition of big-O used as above, see e.g.
https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-o-notation
Prove: given (+), f(n) is in o(n*g(n)).
Let's first state what little-o notation means:
Formally, f(n) = o(g(n)) (or f(n) ∈ o(g(n))) as n → ∞ means that
for every positive constant ε there exists a constant N such that
|f(n)| ≤ ε*|g(n)|, for all n > N (++)
From https://en.wikipedia.org/wiki/Big_O_notation#Little-o_notation.
Now, using (+), we can write
|f(n)| ≤ k*|g(n)|, for some k>0, n sufficiently large
<=> { multiply both sides by n > 0 }
n*|f(n)| ≤ k*n*|g(n)|
<=> n*|f(n)| ≤ k*|n*g(n)|
<=> |f(n)| ≤ (k/n)*|n*g(n)| (+++)
Return to the definition of little-o, specifically (++), and let, without loss of generality, k be fixed. Now, every positive constant ε can be described as
ε = k/C, for some constant C>0 (with k fixed, k>0) (*)
Now, assume, without loss of generality, that n is larger than this C, i.e., n>C. Then, (*) and (+++) yield
|f(n)| ≤ (k/n)*|n*g(n)| < (k/C)*|n*g(n)| = ε*|n*g(n)| (**)
where the strict inequality holds since `n>C`, and the last equality is just (*).
Since we're studying asymptotic behaviour, we can choose the lower bound on n to be any value larger than C (in fact, that's built into the definitions of both big-O and little-o: "n sufficiently large"), and hence, by the definition of little-o above, we have:
- As shown above, (+) implies (**)
- By the definition of little-o, (**) shows that f(n) is in o(n*g(n))
- Consequently, we've shown that, given (+), f(n) is in o(n*g(n))
Result: If f(n) is in O(g(n)), then f(n) is in o(n*g(n)), where these two relations refer to big-O and little-o asymptotic bounds, respectively.
Comment: The result is, in fact, quite trivial. Big-O and little-o notation differ only in one of the two constants used in proving the upper bound, i.e., we can write the definitions of big-O and little-o as:
f(n) is said to be in O(g(n)) if we can find a set of positive constants (k, N), such that f(n) < k*g(n) holds for all n>N.
f(n) is said to be in o(g(n)) if, for every positive constant ε, we can find a positive constant N such that f(n) < ε*g(n) holds for all n>N.
The latter is obviously a stricter constraint, but if we get an extra power of n on the right-hand side (i.e., f(n) < ε*n*g(n)), then even for an infinitesimally small ε we can always choose N large enough that ε*n exceeds whatever constant k was used to show that f(n) is in O(g(n)) (recall, n>N).
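As a small numerical sanity check of this result, here is a sketch using an example pair chosen purely for illustration: f(n) = 3n + 7, which is O(g(n)) for g(n) = n.

```python
# f(n) = 3n + 7 is O(n); the result then says f(n) is o(n * g(n)) = o(n^2),
# i.e. f(n) / (n * g(n)) drops below any fixed epsilon once n is large enough.
f = lambda n: 3 * n + 7
g = lambda n: n

for n in (10, 100, 1_000, 10_000):
    print(n, f(n) / (n * g(n)))  # shrinks roughly like 3/n
```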

Big O vs Small omega

Why is ω(n) smaller than O(n)?
I know what is little omega (for example, n = ω(log n)), but I can't understand why ω(n) is smaller than O(n).
Big Oh 'O' is an upper bound, and little omega 'ω' is a strict lower bound: f ∈ ω(g) means that f grows faster than every constant multiple of g.
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}
ω(g(n)) = { f(n): for all constants c > 0, there exists a constant n0 such that 0 ≤ cg(n) < f(n) for all n ≥ n0}.
ALSO: f(n) ∈ ω(g(n)) exactly when the limit of f(n)/g(n) as n goes to infinity is ∞.
n ∈ O(n) and n ∉ ω(n).
Alternatively:
n ∈ ω(log(n)) and n ∉ O(log(n))
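These two memberships can be sanity-checked numerically via the limit characterization above (a rough illustration, not a proof):

```python
import math

# f is in omega(g) when f(n)/g(n) grows without bound.
# n / log(n) -> infinity, so n is in omega(log n);
# n / n stays at 1, so n is in O(n) but not in omega(n).
for n in (10, 10**3, 10**6, 10**9):
    print(n, n / math.log(n), n / n)
```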
ω(n) and O(n) are at opposite ends of the spectrum, as illustrated below.
[Graph omitted: the asymptotic bounds laid out on a spectrum, with ω and O at opposite ends.]
For more details, see CSc 345 — Analysis of Discrete Structures (McCann), which is the source of the graph. It also contains a compact representation of the definitions, which makes them easy to remember.
I can't comment, so first of all let me say that n ≠ Θ(log(n)). Big Theta means that for some positive constants c1, c2, and k, for all values of n greater than k, c1*log(n) ≤ n ≤ c2*log(n). That is not true: as n approaches infinity, n eventually exceeds c2*log(n), no matter how large the coefficient c2 is.
jesse34212 was correct in saying that n = ω(log(n)): for every constant c > 0, n > c*log(n) holds for all sufficiently large n. In particular, n = Ω(log(n)) but n ≠ Θ(log(n)). In other words, little or small omega is a strict (loose) lower bound, whereas big omega can be loose or tight.
Big O notation signifies a loose or tight upper bound. For instance, 12n = O(n) (tight upper bound, because it's as precise as you can get), and 12n = O(n^2) (loose upper bound, because you could be more precise).
12n ≠ ω(n) because n is a tight bound on 12n, and ω only applies to loose bounds. That's why 12n = ω(log(n)), or even 12n = ω(1). I keep using 12n, but that value of the constant does not affect the equality.
Technically, O(n) is the set of all functions that grow asymptotically no faster than n, so the membership symbol "∈" is the most appropriate one, but most people write "= O(n)" (instead of "∈ O(n)") as an informal shorthand.
Algorithmic complexity has a mathematical definition.
If f and g are two functions, f = O(g) if you can find two constants c (> 0) and n such that f(x) < c * g(x) for every x > n.
For Ω, it is the opposite: you can find constants such that f(x) > c * g(x).
f = Θ(g) if there are three constants c, d and n such that c * g(x) < f(x) < d * g(x) for every x > n.
Then, O means your function is dominated, Θ means your function is equivalent to the other function, and Ω means your function has a lower limit.
So when you use Θ, your approximation is better, since you are "wrapping" your function between two bounds, whereas O only sets a maximum. Ditto for Ω (minimum).
To sum up:
O(n): an upper bound; your algorithm's cost grows no faster than n
Ω(n): a lower bound; your algorithm's cost grows at least as fast as n
Θ(n): both at once; your algorithm's cost grows exactly as fast as n, up to constant factors
To conclude: since n exceeds c * log(n) for every constant c once n is large enough, n grows strictly faster than log(n), so n = ω(log(n)) rather than Θ(log(n)), consistent with the definitions above.

What is the difference between O(1) and Θ(1)?

I know the definitions of both of them, but what is the reason sometimes I see O(1) and other times Θ(1) written in textbooks?
Thanks.
O(1) and Θ(1) aren't necessarily the same if you are talking about functions over real numbers. For example, consider the function f(n) = 1/n. This function is O(1) because for any n ≥ 1, f(n) ≤ 1. However, it is not Θ(1) for the following reason: one definition of f(n) = Θ(g(n)) is that the limit of |f(n) / g(n)| as n goes to infinity is some finite value L satisfying 0 < L. Plugging in f(n) = 1/n and g(n) = 1, we take the limit of |1/n| as n goes to infinity and get that it's 0. Therefore, f(n) ≠ Θ(1).
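A short numerical illustration of this example (not a proof):

```python
# f(n) = 1/n is at most 1 for every n >= 1 (an O(1) witness with c = 1),
# but f(n) tends to 0, so it has no positive lower bound and is not Theta(1).
f = lambda n: 1 / n
print(all(f(n) <= 1 for n in range(1, 10_000)))  # True
print([f(n) for n in (10, 1_000, 100_000)])      # heads toward 0
```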
Hope this helps!
Big-O notation expresses an asymptotic upper bound, whereas Big-Theta notation additionally expresses an asymptotic lower bound. Often, the upper bound is what people are interested in, so they write O(something), even when Theta(something) would also be true. For example, if you wanted to count the number of things that are equal to x in an unsorted list, you might say that it can be done in linear time and is O(n), because what matters to you is that it won't take any longer than that. However, it would also be true that it's Omega(n) and therefore Theta(n), since you have to examine all of the elements in the list - it can't be done in sub-linear time.
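For instance, a straightforward version of that counting routine (a sketch; the answer doesn't tie itself to any particular code) scans every element exactly once, which is what makes it Θ(n):

```python
def count_equal(items, x):
    """Count how many elements of items are equal to x with one linear scan."""
    count = 0
    for item in items:   # every element is examined exactly once
        if item == x:
            count += 1
    return count         # Theta(n) in the length of items: O(n) and Omega(n)

print(count_equal([3, 1, 3, 2, 3], 3))  # 3
```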
UPDATE:
Formally:
f in O(g) iff there exists a c and an n0 such that for all n > n0, f(n) <= c * g(n).
f in Omega(g) iff there exists a c and an n0 such that for all n > n0, f(n) >= c * g(n).
f in Theta(g) iff f in O(g) and f in Omega(g), i.e. iff there exist a c1, a c2 and an n0 such that for all n > n0, c1 * g(n) <= f(n) <= c2 * g(n).

Either f(n) = O(g(n)) or g(n) = O(f(n))

I'm trying to prove that this is correct for any function f and g with domain and co-domain N. I have seen it proven using limits, but apparently you can also prove it without them.
Essentially what I'm trying to prove is: "If f(n) is not O(g(n)), then g(n) must be O(f(n))." What I'm having trouble with is understanding what "f is not O(g)" means.
According to the formal definition of big-O, if f(n) = O(g(n)) then n >= N -> f(n) <= c*g(n) for some N and a constant c. If f(n) != O(g(n)), I think that means there is no c that fulfills this inequality for all values of n. Yet I don't see how to use that fact to prove g(n) = O(f(n)): it doesn't show that a c' exists with g(n) <= c'*f(n), which is what would prove the claim.
Not true. Let f(n) = 1 if n is odd and zero otherwise, and g(n) = 1 if n is even and zero otherwise.
To say that f is O(g) would say there is a constant C > 0 and N > 0 such that n > N implies f(n) <= C g(n). Let n = 2 * N + 1, so that n is odd. Then f(n) = 1 but g(n) = 0 so that f(n) <= C * g(n) is impossible. Thus, f is O(g) is not true.
Similarly, we can show that g is O(f) is not true.
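A tiny numerical illustration of why both directions fail (the specific C and N below are arbitrary):

```python
# f(n) = 1 on odd n and 0 on even n; g is the mirror image.
f = lambda n: 1 if n % 2 == 1 else 0
g = lambda n: 1 - f(n)

# Whatever C and N you pick, an odd n > N gives f(n) = 1 > C * g(n) = 0,
# and an even n > N gives g(n) = 1 > C * f(n) = 0, so neither
# "f is O(g)" nor "g is O(f)" can hold.
C, N = 1_000, 1_000
print(f(2 * N + 1), C * g(2 * N + 1))  # 1 0
print(g(2 * N + 2), C * f(2 * N + 2))  # 1 0
```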
First of all, your definition of big-O is a little bit off. You say:
I think that means there is no c that fulfills this inequality for all values of n.
In actuality, f(n) = O(g(n)) asks for a single value of c that fulfills the inequality for all sufficiently large n; its negation says that every choice of c fails at arbitrarily large values of n.
Anyway, to answer the question:
I don't believe the statement in the question is true... Let's see if we can think of a counter-example, where f(n) ≠ O(g(n)) and g(n) ≠ O(f(n)).
note: I'm going to use n and x interchangeably, since it's easier for me to think that way.
We'd have to come up with two functions that continually cross each other as they go towards infinity. Not only that, but they'd have to continue to cross each other regardless of the constant c that we multiply them by.
So that leaves me thinking that the functions will have to alternate between two different time complexities.
Let's look at a function that alternates between y = x and y = x^2:
f(x) = .2 (x * sin(x) + x^2 * (1 - sin(x)) )
Now, if we create a similar function with a slightly offset oscillation:
g(x) = .2 (x * cos(x) + x^2 * (1 - cos(x)) )
Then these two functions will continue to cross each others' paths out to infinity.
For any number N that you select, no matter how high, there will be an x1 greater than N (a multiple of 2π) at which f(x1) is proportional to x1^2 while g(x1) is proportional to x1. Similarly, there will be an x2 at which g(x2) is proportional to x2^2 while f(x2) is proportional to x2.
Because the ratios f(x1)/g(x1) and g(x2)/f(x2) grow without bound along these points, no choice of c1 or c2 can ensure that f(x) < c1 * g(x) for all large x, or that g(x) < c2 * f(x) for all large x; see the sketch below.
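Here is a minimal sketch that samples f and g at those points (just to visualize the behaviour; it is not part of the argument):

```python
import math

f = lambda x: 0.2 * (x * math.sin(x) + x**2 * (1 - math.sin(x)))
g = lambda x: 0.2 * (x * math.cos(x) + x**2 * (1 - math.cos(x)))

for k in (10, 100, 1_000):
    x1 = 2 * math.pi * k                 # sin = 0, cos = 1: f ~ x^2, g ~ x
    x2 = math.pi / 2 + 2 * math.pi * k   # sin = 1, cos = 0: f ~ x, g ~ x^2
    print(f(x1) / g(x1), g(x2) / f(x2))  # both ratios keep growing
```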
In conclusion, f(n) ≠ O(g(n)) does not imply g(n) = O(f(n)).
