Complexity - Big-O notation, Theta and Omega - complexity-theory

Can anyone help me verify the following complexities:
10^12 = O(1)?
2^(n+3) + log(n) = O(2^n)?
f(n) = Omega(n) and f(n) = theta(n) <=> f(n) = O(n)
thanks

The first two are right, the last is wrong.
In particular, any value with no variable attached is a constant and therefore O(1). As for why you're correct on the second: 2^n strictly dominates log(n) asymptotically, and 2^(n+3) is just 8 * 2^n, i.e. O(1) * O(2^n) = O(2^n). It's generally best to simplify big-O expressions to the simplest correct form.
The third statement is wrong because the equivalence only holds in one direction: f(n) = O(n) does not imply either of the first two conditions. For example, f(n) = 1 is O(n) but neither Omega(n) nor Theta(n).
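If it helps to see the second statement concretely, here is a minimal numeric sanity check; the constants c = 9 and n0 = 1 are just one choice that happens to work, since log(n) <= 2^n for n >= 1:

    import math

    # Check 2^(n+3) + log(n) <= c * 2^n for a sample constant c = 9 and n >= 1.
    # 2^(n+3) = 8 * 2^n, and log(n) <= 2^n for n >= 1, so c = 9 is enough.
    c, n0 = 9, 1
    for n in range(n0, 60):
        f = 2 ** (n + 3) + math.log(n)
        g = 2 ** n
        assert f <= c * g, f"bound fails at n={n}"
    print("2^(n+3) + log(n) <= 9 * 2^n for all tested n")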

Related

Explanation for Big-O notation comparison in different complexity classes

Why can Big-O notation not compare algorithms in the same complexity class? Please explain; I cannot find any detailed explanation.
O(n^2) says that an algorithm needs at most on the order of n^2 operations; it is an upper bound. So suppose you have algorithm A, which requires f(n) = 1000n^2 + 2000n + 3000 operations, and algorithm B, which requires g(n) = n^2 + 10^20 operations. They're both O(n^2).
For small n the first algorithm performs better than the second one. And for big n the second algorithm looks better, since its leading term is 1 * n^2 while the first's is 1000 * n^2.
Also, h(n) = n is O(n^2) and k(n) = 5 is O(n^2). So I can say that k(n) is better than h(n), because I know what these functions look like.
Now consider the case where I don't know what k(n) and h(n) look like. The only thing I'm given is k(n) ~ O(n^2) and h(n) ~ O(n^2). Can I say which function is better? No.
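Going back to the A/B example above, here is a small sketch, using those coefficients, that finds the crossover point where B starts to win; the search range of 10^12 is an arbitrary choice that is simply large enough to contain it:

    # Cost models from the example: algorithm A and algorithm B.
    def cost_a(n):
        return 1000 * n**2 + 2000 * n + 3000

    def cost_b(n):
        return n**2 + 10**20

    # A is cheaper for small n, B for large n; binary-search the first n where B wins.
    lo, hi = 1, 10**12
    while lo < hi:
        mid = (lo + hi) // 2
        if cost_b(mid) < cost_a(mid):
            hi = mid
        else:
            lo = mid + 1
    print("B becomes cheaper than A at roughly n =", lo)  # about 3.2 * 10^8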
Summary
You can't say which function is better, because Big-O notation stands for 'less than or equal'. And the following is true:
O(1) is O(n^2)
O(n) is O(n^2)
How to compare functions?
There is Big-Omega notation, which stands for 'greater than or equal'. For example f(n) = n^2 + n + 1 is Omega(n^2), Omega(n) and Omega(1). When a function's complexity matches some asymptotic bound exactly, Big-Theta is used, so for the f(n) described above we can say that:
f(n) is O(n^3)
f(n) is O(n^2)
f(n) is Omega(n^2)
f(n) is Omega(n)
f(n) is Theta(n^2) // this is the only way we can describe f(n) using Theta notation
So, to compare asymptotics of functions you need to use Theta instead of Big O or Omega.
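As a concrete check of that Theta claim, here is a minimal sketch; the constants c1 = 1, c2 = 3 and n0 = 1 are just one valid choice, not the only one:

    # Verify c1 * n^2 <= f(n) <= c2 * n^2 for f(n) = n^2 + n + 1 and n >= 1.
    c1, c2, n0 = 1, 3, 1
    for n in range(n0, 10_000):
        f = n**2 + n + 1
        assert c1 * n**2 <= f <= c2 * n**2, f"Theta bound fails at n={n}"
    print("n^2 + n + 1 is sandwiched between n^2 and 3 * n^2 for n >= 1")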

How to solve equations using asymptotic notations?

I'm stuck on whether or not the asymptotic notations (options 1-5) are correct or not.
The big-O notation rule I got (from a YouTube video) was that O(f(n)) is the set of all functions with smaller or the same order of growth as f(n), which means that option 2 should be correct, because the leading term of t(n) has the same order of growth as n^3.
The little-o notation rule I got was that o(f(n)) is the set of all functions with a strictly smaller rate of growth than f(n), which means that option 1 should be correct, because the leading term n^3 grows strictly slower than n^4.
How would I solve this problem for the rest (Omega, Theta, and little-omega)? I have trouble finding the explanation or rule for those.
Given t(n) = 53n^3 + 32n^2 + 28, which of the following is (are) correct?
1) t(n) = o(n^4) (Correct?)
2) t(n) = O(n^3) (Correct?)
3) t(n) = Θ(n^4)
4) t(n) = Ω(n^3) (Correct?)
5) t(n) = ω(n^2)
Your understanding of O and o is correct.
Roughly speaking, Omega and omega are sort of the opposite: they are bounds from below. So the growth of t(n) must be strictly larger [larger or equal] than that of f(n) for t(n) to be in omega(f(n)) [Omega(f(n))].
Theta is the same as O and Omega at the same time.
So 4 and 5 are correct and 3 is false.
The mathematically exact definitions are more involved; see for example https://en.wikipedia.org/wiki/Big_O_notation
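If a numeric intuition helps (this is only a sketch, not a proof), you can look at the ratio t(n)/f(n) for each candidate comparison function f: it tends to 0 for little-o, stays bounded for big-O, stays away from 0 for Omega, and tends to infinity for little-omega:

    # Ratios of t(n) = 53n^3 + 32n^2 + 28 against the candidate comparison functions.
    def t(n):
        return 53 * n**3 + 32 * n**2 + 28

    for n in (10, 1000, 100000):
        print(f"n={n:>6}  "
              f"t/n^4={t(n) / n**4:10.4f}  "   # tends to 0   -> t(n) = o(n^4)
              f"t/n^3={t(n) / n**3:10.4f}  "   # tends to 53  -> t(n) = Theta(n^3)
              f"t/n^2={t(n) / n**2:12.1f}")    # tends to inf -> t(n) = omega(n^2)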
Given t(n) = 53n^3 + 32n^2 + 28, which of the following is (are) correct?
1) t(n) = o(n^4)
==> Correct, as n^4 is bigger by a factor of n.
2) t(n) = O(n^3)
==> Correct: take a large enough constant C.
3) t(n) = Θ(n^4)
==> False, because the Ω(n^4) bound is not satisfied.
4) t(n) = Ω(n^3)
==> Correct.
5) t(n) = ω(n^2)
==> Correct, as n^2 grows strictly slower than n^3, the order of t(n).

Asymptotic Notations: (an + b) ∈ O(n^2)

I was reading Introduction to Algorithms by Thomas H. Cormen when I encountered this statement (in the chapter on Asymptotic Notations):
when a > 0, any linear function an + b is in O(n^2), which is essentially verified by taking c = a + |b| and n0 = max(1, -b/a)
I can't understand why O(n^2) and not O(n). When would the O(n) upper bound fail?
For example, for 3n + 2, according to the book:
3n + 2 <= 5n^2 for n >= 1
but this also holds:
3n + 2 <= 5n for n >= 1
So why is the upper bound in terms of n^2?
Well I found the relevant part of the book. Indeed the excerpt comes from the chapter introducing big-O notation and relatives.
The formal definition of the big-O is that the function in question does not grow asymptotically faster than the comparison function. It does not say anything about whether the function grows asymptotically slower, so:
f(n) = n is in O(n), O(n^2) and also O(e^n) because n does not grow asymptotically faster than any of these. But n is not in O(1).
Any function in O(n) is also in O(n^2) and O(e^n).
If you want to describe the tight asymptotic bound, you would use the big-Θ notation, which is introduced just before the big-O notation in the book. f(n) ∊ Θ(g(n)) means that f(n) does not grow asymptotically faster than g(n) and the other way around. So f(n) ∊ Θ(g(n)) is equivalent to f(n) ∊ O(g(n)) and g(n) ∊ O(f(n)).
So f(n) = n is in Θ(n) but not in Θ(n^2) or Θ(e^n) or Θ(1).
Another example: f(n) = n^2 + 2 is in O(n^3) but not in Θ(n^3), it is in Θ(n^2).
You need to think of O(...) as a set (which is why the set theoretic "element-of"-symbol is used). O(g(n)) is the set of all functions that do not grow asymptotically faster than g(n), while Θ(g(n)) is the set of functions that neither grow asymptotically faster nor slower than g(n). So a logical consequence is that Θ(g(n)) is a subset of O(g(n)).
Often = is used instead of the ∊ symbol, which really is misleading. It is pure notation and does not share any properties with the actual =. For example 1 = O(1) and 2 = O(1), but not 1 = O(1) = 2. It would be better to avoid using = for the big-O notation. Nonetheless you will later see that the = notation is useful, for example if you want to express the complexity of rest terms, for example: f(n) = 2*n^3 + 1/2*n - sqrt(n) + 3 = 2*n^3 + O(n), meaning that asymptotically the function behaves like 2*n^3 and the neglected part does asymptotically not grow faster than n.
All of this is kind of against the typical usage of big-O notation. You often find the time/memory complexity of an algorithm defined by it, when really it should be defined by big-Θ notation. For example, if you have an algorithm in O(n^2) and one in O(n), then the first one could actually still be asymptotically faster, because it might also be in Θ(1). The reason for this may sometimes be that a tight Θ-bound does not exist or is not known for a given algorithm, so at least the big-O gives you a guarantee that things won't take longer than the given bound. By convention you always try to give the lowest known big-O bound, while this is not formally necessary.
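For the rest-term example above (f(n) = 2*n^3 + 1/2*n - sqrt(n) + 3 = 2*n^3 + O(n)), a quick numeric illustration, purely as a sketch, is that the leftover part divided by n stays bounded:

    import math

    # rest(n) = f(n) - 2n^3 = (1/2)n - sqrt(n) + 3; the claim is that rest(n) is O(n).
    def rest(n):
        return 0.5 * n - math.sqrt(n) + 3

    for n in (10, 1000, 100000):
        print(f"n={n:>6}  rest(n)/n = {rest(n) / n:.4f}")  # approaches 0.5, i.e. stays bounded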
The formal definition (from Wikipedia) of the big O notation says that:
f(x) = O(g(x)) as x → ∞
if and only if there is a positive constant M such that for all sufficiently large values of x, f(x) is at most M multiplied by g(x) in absolute value. That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| ≤ M|g(x)| for all x > x0 (meaning: for x big enough)
In our case, we can easily show that
|an + b| < |an + n| (for n sufficiently big, i.e. when n > b)
Then |an + b| < (a + 1)|n|
Since a + 1 is a constant (it corresponds to M in the formal definition), we definitely have
an + b = O(n)
You were right to doubt.
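As a quick check of both bounds with the asker's numbers (a = 3, b = 2, so the book's constant becomes c = a + |b| = 5 and n0 = 1):

    # f(n) = 3n + 2: verify both the O(n^2) bound from the book and the tighter O(n) bound.
    a, b = 3, 2
    c, n0 = a + abs(b), 1  # c = 5
    for n in range(n0, 10_000):
        f = a * n + b
        assert f <= c * n,    f"O(n) bound fails at n={n}"
        assert f <= c * n**2, f"O(n^2) bound fails at n={n}"
    print("3n + 2 <= 5n <= 5n^2 for all n >= 1; both upper bounds hold, O(n) is simply the tighter one")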

Big-O notation 1/O(n) = Omega(n)

I have received the assignment to prove 1/O(n) = Ω(n)
However, this would mean that n ∊ O(n) implies 1/n ∊ Ω(n), which is clearly wrong.
So my question is: Is the statement 1/O(n) = Ω(n) correct?
EDIT: I sent an email to the assistant who wrote the questions and used the example of f(n) = 1.
He then said that the statement is indeed incorrect.
The notation 1/O(n) = Ω(n) is a bit vague. There is really no O(n) on its own, there is only f(n) ~ O(n), which is a statement about the values of the function f (there is a constant C so that f(n) < Cn for all sufficiently large n).
And the statement to prove, if I understand it correctly, is "if the function f(n) is O(n) then 1/f(n) is Ω(n)", formally:
f(n) ~ O(n) => 1/f(n) ~ Ω(n)
Edit: Except I don't think I understand it correctly, because f(n) = 1 ~ O(n), but 1/f(n) = f(n) = 1 clearly isn't Ω(n). Wasn't the assignment f(n) ~ O(n) => 1/f(n) ~ Ω(1/n) instead?
Edit: Different people tend to use different operators. Most common is f(n) = O(n), but that is confusing because the right hand side is not a function, so it can't be normal equality. We usually used f(n) ~ O(n) in school, which is less confusing, but still inconsistent with common use of that operator for general equivalence relations. Most consistent operator would be f(n) ∈ O(n), because the right hand side can reasonably be treated as set of functions.
Saying f(x) = O(g(x)) more or less means the following, for functions f(x) and g(x):
In terms of magnitude, we have |f(x)| <= M|g(x)| for some constant M. Basically, f is bounded above by a constant times g.
Saying h(x) = Ω(k(x)) means that, for functions h(x) and k(x):
In terms of magnitude, |h(x)| >= M|k(x)| for some constant M. Basically, h is bounded below by a constant times k.
So, taking reciprocals of |f(x)| <= M|g(x)| (which flips the inequality) gives 1/|f(x)| >= 1/(M|g(x)|).
A bit of re-arranging gives 1/|f(x)| >= (1/M) * (1/|g(x)|) - i.e. 1/f(x) is bounded below by a constant times 1/g(x), which is exactly the definition above of 1/f(x) = Ω(1/g(x)). Note that this gives Ω(1/g(x)), not Ω(g(x)), which is consistent with the counterexample in the question.
It's a tad scratchy to be a formal proof, but it should get you started.
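A small numeric illustration of the corrected claim, using the sample function f(n) = 2n + 3 (chosen only for illustration), which is O(n) with M = 5 for n >= 1:

    # f(n) = 2n + 3 is O(n): f(n) <= 5 * n for n >= 1.
    # Its reciprocal is then Omega(1/n): 1/f(n) >= (1/5) * (1/n) for n >= 1.
    M, n0 = 5, 1
    for n in range(n0, 10_000):
        f = 2 * n + 3
        assert f <= M * n
        assert 1 / f >= (1 / M) * (1 / n)
    print("1/f(n) is bounded below by (1/M) * (1/n): Omega(1/n), not Omega(n)")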

Big-Oh Notation

If T(n) is O(n), then is it also correct to say T(n) is O(n^2)?
Yes; because O(n) is a subset of O(n^2).
Assuming
T(n) = O(n), n > 0
Then both of the following are true
T(n) = O(2n)
T(n) = O(n^2)
This is because both 2n and n^2 grow as quickly as or more quickly than just plain n. EDIT: As Philip correctly notes in the comments, even a value smaller than 1 can be the multiplier of n, since constant factors may be dropped (they become insignificant for large values of n; EDIT 2: as Oli says, all constants are insignificant per the definition of O). Thus the following is also true:
T(n) = O(0.2n)
In fact, n^2 grows so quickly that you can also say
T(n) = o(n^2)
But not
T(n) = Θ(n^2)
because the functions given provide an asymptotic upper bound, not an asymptotically tight bound.
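To tie the pieces together, here is a sketch with a hypothetical T(n) = 3n + 1 (the question leaves T unspecified, so this is purely an assumed example); each tuple below supplies a witnessing constant C for one of the bounds mentioned above:

    # Hypothetical T(n) = 3n + 1, assumed only for illustration.
    def T(n):
        return 3 * n + 1

    # Each (g, C, label) witnesses T(n) <= C * g(n) for all n >= 1.
    bounds = [
        (lambda n: n,       4,  "O(n)"),
        (lambda n: 2 * n,   2,  "O(2n)"),
        (lambda n: 0.2 * n, 25, "O(0.2n)"),
        (lambda n: n**2,    4,  "O(n^2)"),
    ]
    for g, C, label in bounds:
        assert all(T(n) <= C * g(n) for n in range(1, 10_000)), label
        print(f"T(n) <= {C} * g(n) for n >= 1, so T(n) is {label}")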
If you mean O(2 * N), then yes, O(n) == O(2n). The time taken is a linear function of the input size in both cases.
I disagree with the other answer that says O(N) = O(N*N). It is true that the O(N) function will finish in less time than the O(N*N) one, but the completion time is not a function of n*n, so it really isn't true.
I suppose the answer depends on why you are asking the question.
O, also known as Big-Oh, is an upper bound. We can say that there exists a constant C such that, for all n > N, T(n) < C g(n).
So as long as g(n) grows at least as fast as the dominant term of T(n), the statement remains valid.
