Big O notation changing the power

I want to express each of the following functions using Big-O notation.
a(n) = 2n + 3n^2 + n log(n)
b(n) = 5n log(n) + 10n^3 + n^2
For a(n) I assumed the answer would be O(n^2); however, apparently it is O(n^3).
The same goes for b(n), where I assumed the notation would be O(n^3) but apparently it is O(n^4). Is it a rule to round up the power when writing the notation? Why would that be the case? Isn't the notation supposed to give an upper bound?

You are right: a(n) = O(n^2) and b(n) = O(n^3).
However, notice that a(n) is also O(n^3), and indeed O(n^1000). Usually, though, we want to express the tightest bound we can find.
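As a quick numeric sanity check (an illustrative Python sketch, not part of the original question), the ratio a(n)/n^2 settles near the leading coefficient 3, which is exactly why O(n^2) is the tight bound:

    import math

    def a(n):
        # a(n) = 2n + 3n^2 + n*log(n), from the question above
        return 2*n + 3*n**2 + n * math.log(n)

    for n in [10, 100, 10_000, 1_000_000]:
        print(n, a(n) / n**2)   # approaches 3 as n grows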

Related

Explanation for Big-O notation comparison in different complexity classes

Why can Big-O notation not compare algorithms in the same complexity class? Please explain; I cannot find any detailed explanation.
Saying an algorithm is O(n^2) means it requires at most that many operations, up to a constant factor. So suppose you have algorithm A, which requires f(n) = 1000n^2 + 2000n + 3000 operations, and algorithm B, which requires g(n) = n^2 + 10^20 operations. They are both O(n^2).
For small n the first algorithm performs better than the second one. For big n the second algorithm looks better, since its n^2 term has coefficient 1 while the first has coefficient 1000.
Also, h(n) = n is O(n^2), and k(n) = 5 is O(n^2). So I can say that k(n) is better than h(n), because I know what these functions look like.
Now consider the case where I don't know what k(n) and h(n) look like. The only thing I'm given is that k(n) is O(n^2) and h(n) is O(n^2). Can I say which function is better? No.
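To make this concrete, here is a small Python sketch (treating the formulas above as hypothetical operation counts) showing that which algorithm "wins" depends on n, even though both are O(n^2):

    def f(n):
        # algorithm A: 1000n^2 + 2000n + 3000 operations
        return 1000*n**2 + 2000*n + 3000

    def g(n):
        # algorithm B: n^2 + 10^20 operations
        return n**2 + 10**20

    for n in [10, 10**6, 10**10]:
        print(n, "A wins" if f(n) < g(n) else "B wins")
    # A wins for the first two sizes; B wins once n is large enough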
Summary
You can't say which function is better, because Big O notation stands for "less than or equal". And the following is true:
O(1) is O(n^2)
O(n) is O(n^2)
How to compare functions?
There is Big Omega notation, which stands for "greater than or equal". For example, f(n) = n^2 + n + 1 is Omega(n^2), Omega(n), and Omega(1). When a function's complexity matches some asymptotic exactly, Big Theta is used. So for the f(n) described above we can say that:
f(n) is O(n^3)
f(n) is O(n^2)
f(n) is Omega(n^2)
f(n) is Omega(n)
f(n) is Theta(n^2) // this is the only way we can describe f(n) using Theta notation
So, to compare the asymptotics of functions you need to use Theta instead of Big O or Omega.
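A handy rule of thumb (a standard supplement, valid when the limit exists), written as a LaTeX block:

    \lim_{n \to \infty} \frac{f(n)}{g(n)} =
    \begin{cases}
      0 & \Rightarrow f(n) = o(g(n)), \text{ hence also } O(g(n)) \\
      c \in (0, \infty) & \Rightarrow f(n) = \Theta(g(n)) \\
      \infty & \Rightarrow f(n) = \omega(g(n)), \text{ hence also } \Omega(g(n))
    \end{cases}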

How to solve equations using asymptotic notations?

I'm stuck on whether the asymptotic statements (options 1-5) below are correct.
The big-O rule I got (from a YouTube video) was that O(f(n)) is the set of all functions with a smaller or the same order of growth as f(n), which means that option 2 would be correct because the leading term has the same order of growth as t(n).
The little-o rule I got was that o(f(n)) is the set of all functions with a strictly smaller rate of growth than f(n), which means that option 1 is correct because the leading term n^3 grows strictly slower than n^4.
How would I solve this problem for the rest (Omega, Theta, and little-Omega)? I have trouble finding the explanation or rule for those.
Given t(n) = 53n^3 + 32n^2 + 28, which of the following is (are) correct?
1) t(n) = o(n^4) (Correct?)
2) t(n) = O(n^3) (Correct?)
3) t(n) = Θ(n^4)
4) t(n) = Ω(n^3) (Correct?)
5) t(n) = ω(n^2)
Your understanding of O and o is correct.
Roughly speaking, Omega and omega are sort of the opposite: they are bounds from below. The growth of t(n) must be strictly larger than that of f(n) for t(n) to be in omega(f(n)), and larger or equal for Omega(f(n)).
Theta is the same as O and Omega at the same time.
So 4 and 5 are correct and 3 is false.
The mathematically exact definitions are more involved; see for example https://en.wikipedia.org/wiki/Big_O_notation
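If you want to check such claims mechanically, here is a sketch using sympy's limit function (assuming sympy is available) to apply the standard limit test to t(n):

    from sympy import symbols, limit, oo

    n = symbols('n', positive=True)
    t = 53*n**3 + 32*n**2 + 28

    print(limit(t / n**4, n, oo))  # 0  -> t(n) = o(n^4): option 1 is correct
    print(limit(t / n**3, n, oo))  # 53 -> t(n) = Theta(n^3): options 2 and 4 are correct
    print(limit(t / n**2, n, oo))  # oo -> t(n) = omega(n^2): option 5 is correct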
Given t(n) = 53n^3 + 32n^2 + 28, which of the following is (are) correct?
1) t(n) = o(n^4)
==> Correct, as n^4 outgrows t(n) by a factor of n.
2) t(n) = O(n^3)
==> Correct: take a large enough constant C.
3) t(n) = Θ(n^4)
==> False, because t(n) is not Omega(n^4).
4) t(n) = Ω(n^3)
==> Correct.
5) t(n) = ω(n^2)
==> Correct, as t(n) grows strictly faster than n^2.

Can you do addition/multiplication with Big O notations?

I'm currently taking an algorithm class, and we're covering Big O notations and such. Last time, we talked about how
O(n^2 + 3n + 5) = O(n^2)
And I was wondering, if the same rules apply to this:
O(n^2) + O(3n) + O(5) = O(n^2)
Also, do the following notations hold ?
O(n^2) + n
or
O(n^2) + Θ (3n+5)
In the former, the n is outside of the O, so I'm not sure what it should mean. And in the latter, I'm adding O and Θ.
At least for practical purposes, the Landau O(...) can be viewed as a function (hence the appeal of its notation). This function has properties for standard operations, for example:
O(f(x)) + O(g(x)) = O(f(x) + g(x))
O(f(x)) * O(g(x)) = O(f(x) * g(x))
O(k*f(x)) = O(f(x))
for well defined functions f(x) and g(x), and some constant k.
Thus, for your examples,
Yes: O(n^2) + O(3n) + O(5) = O(n^2)
and:
O(n^2) + n = O(n^2) + O(n) = O(n^2),
O(n^2) + Θ(3n+5) = O(n^2) + O(3n+5) = O(n^2)
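As an aside, sympy's Order class can model this arithmetic when the limit point is set to infinity; a small sketch (assuming sympy is installed):

    from sympy import symbols, O, oo

    n = symbols('n', positive=True)

    print(O(n**2, (n, oo)) + O(3*n, (n, oo)) + O(5, (n, oo)))  # O(n**2, (n, oo))
    print(O(n**2, (n, oo)) + n)                                # O(n**2, (n, oo))
    print(O(n**2, (n, oo)) * O(n, (n, oo)))                    # O(n**3, (n, oo))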
The notation:
O(n^2) + O(3n) + O(5) = O(n^2)
as well as, for example:
f(n,m) = n^2 + m^3 + O(n+m)
is abusing the equality symbol, as it violates the axiom of equality. To be more formally correct, you would need to define O(g(x)) as a set-valued function, the value of which is all functions that do not grow faster than g(x), and use set membership notation to indicate that a specific function is a member of the set.
Addition and multiplication are not defined for Landau's symbol (Big O).
In complexity theory, the Landau symbols are used for sets of functions. Therefore O(*) does not represent a single function but an entire set. The + operator is not defined for sets, however, the following is commonly used when analyzing functions:
O(*) + g(n)
This usually represents a set of functions where g(n) is added to every function in O(*). The resulting set can be represented in big-O notation again.
O(*) + O(**)
This is similar. However, it behaves like a kind of Cartesian product: every function from O(**) is added to every function from O(*).
O(*) + Θ(*)
The same rules apply here. However, the result can usually not be expressed as Θ(**) because of the loosening by O(*). Expressing it as O(**) is still possible.
Also the following notations hold
O(n^2) + n = O(n^2)
and
O(n^2) + Θ(3n+5) = O(n^2) (the result is also Ω(n), but as noted above it cannot in general be written as a single Θ class)
Hope it makes sense...

Complexity: big O notation, theta and omega

Can anyone help me verify the following complexities?
10^12 = O(1)?
2^(n+3) + log(n) = O(2^n)?
f(n) = Omega(n) and f(n) = Theta(n) <=> f(n) = O(n)?
Thanks
The first two are right, the last is wrong.
In particular, any value that has no variable attached will be "a constant" and therefore O(1). As for why you're correct on the second, 2^n strictly beats log(n) asymptotically, and 2^(n+3) is equivalent to 8*2^n, or O(1)*O(2^n), and it's generally best to simplify big-O notation to the simplest-looking correct form.
The third condition is wrong because f(n) = O(n) does not imply either of the first two statements: for example, f(n) = 1 is O(n) but neither Omega(n) nor Theta(n).
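For the second claim, a quick numeric spot-check (an illustrative sketch; the constant c = 9 is chosen by hand, since 8 covers 2^(n+3) and the log term needs only a little slack):

    import math

    c = 9
    for n in range(1, 60):
        assert 2**(n + 3) + math.log(n) <= c * 2**n
    print("2^(n+3) + log(n) <= 9 * 2^n for the sampled range")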

Difference between Big-Theta and Big O notation in simple language

While trying to understand the difference between Theta and O notation I came across the following statement :
The Theta-notation asymptotically bounds a function from above and below. When
we have only an asymptotic upper bound, we use O-notation.
But I do not understand this. The book explains it mathematically, but I find it too complex, and it gets really boring to read when I am not understanding it.
Can anyone explain the difference between the two using simple, yet powerful examples.
Big O is giving only upper asymptotic bound, while big Theta is also giving a lower bound.
Everything that is Theta(f(n)) is also O(f(n)), but not the other way around.
T(n) is said to be Theta(f(n)), if it is both O(f(n)) and Omega(f(n))
For this reason big-Theta is more informative than big-O notation, so if we can say something is big-Theta, it's usually preferred. However, it is harder to prove something is big Theta, than to prove it is big-O.
For example, merge sort is both O(n*log(n)) and Theta(n*log(n)), but it is also O(n^2), since n^2 is asymptotically "bigger" than it. However, it is NOT Theta(n^2), since the algorithm is NOT Omega(n^2).
Omega(n) is an asymptotic lower bound. If T(n) is Omega(f(n)), it means that from a certain n0 on, there is a constant C1 such that T(n) >= C1 * f(n), whereas big-O says there is a constant C2 such that T(n) <= C2 * f(n).
All three (Omega, O, Theta) give only asymptotic information ("for large input"):
Big O gives upper bound
Big Omega gives lower bound and
Big Theta gives both lower and upper bounds
Note that this notation is not related to the best, worst and average cases analysis of algorithms. Each one of these can be applied to each analysis.
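To illustrate that last point, here is a small sketch (linear search is just a convenient example): each case of the analysis gets its own tight bound.

    def linear_search(items, target):
        # returns (index, number of comparisons), or (-1, n) if absent
        comparisons = 0
        for i, x in enumerate(items):
            comparisons += 1
            if x == target:
                return i, comparisons
        return -1, comparisons

    data = list(range(1000))
    print(linear_search(data, 0))   # best case: 1 comparison, Theta(1)
    print(linear_search(data, -1))  # worst case: n comparisons, Theta(n)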
I will just quote from Knuth's TAOCP Volume 1 - page 110 (I have the Indian edition). I recommend reading pages 107-110 (section 1.2.11 Asymptotic representations)
People often confuse O-notation by assuming that it gives an exact order of growth; they use it as if it specifies a lower bound as well as an upper bound. For example, an algorithm might be called inefficient because its running time is O(n^2). But a running time of O(n^2) does not necessarily mean that the running time is not also O(n).
On page 107,
1^2 + 2^2 + 3^2 + ... + n^2 = O(n^4) and
1^2 + 2^2 + 3^2 + ... + n^2 = O(n^3) and
1^2 + 2^2 + 3^2 + ... + n^2 = (1/3) n^3 + O(n^2)
Big-Oh is for approximations. It allows you to replace ~ with an equals sign. In the example above, for very large n, we can be sure that the quantity will stay below (a constant multiple of) n^4 and n^3, and within (1/3)n^3 plus a constant multiple of n^2 [and not simply n^2].
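For reference, the closed form behind that sum (a standard identity, not quoted from the book) makes all three statements easy to verify:

    \sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6}
                       = \tfrac{1}{3}n^3 + \tfrac{1}{2}n^2 + \tfrac{1}{6}n
                       = \tfrac{1}{3}n^3 + O(n^2)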
Big Omega is for lower bounds: an algorithm that is Omega(n^2) will not be as efficient as one that is O(n log n) for large n. However, we do not know at what value of n that kicks in (in that sense we only know it approximately).
Big Theta is for the exact order of growth: both a lower and an upper bound.
I am going to use an example to illustrate the difference.
Let the function f(n) be defined as
if n is odd f(n) = n^3
if n is even f(n) = n^2
From CLRS
A function f(n) belongs to the set Θ(g(n)) if there exist positive
constants c1 and c2 such that it can be "sandwiched" between c1g(n)
and c2g(n), for sufficiently large n.
AND
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤
f(n) ≤ cg(n) for all n ≥ n0}.
AND
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤
cg(n) ≤ f(n) for all n ≥ n0}.
The upper bound on f(n) is n^3. So our function f(n) is clearly O(n^3).
But is it Θ(n^3)?
For f(n) to be in Θ(n^3), it has to be sandwiched between two functions, one forming the lower bound and the other the upper bound, both of which grow as n^3. While the upper bound is obvious, the lower bound cannot be n^3. The lower bound is in fact n^2; f(n) is Ω(n^2).
From CLRS
For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and
only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
Hence f(n) is not in Θ(n^3) while it is in O(n^3) and Ω(n^2)
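A small sketch sampling this f(n) (illustrative only) shows it oscillating between the n^2 and n^3 envelopes, which is exactly why no single Theta class fits:

    def f(n):
        return n**3 if n % 2 == 1 else n**2

    for n in range(10, 16):
        print(n, f(n), f(n) / n**3, f(n) / n**2)
    # f(n)/n^3 is 1 on odd n but tends to 0 on even n: f is not Theta(n^3)
    # f(n)/n^2 is 1 on even n but grows like n on odd n: f is not Theta(n^2)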
If the running time is expressed in big-O notation, you know that the running time will not be slower than the given expression (asymptotically, up to a constant factor); it is an upper bound.
But with Theta notation you also know that it will not be faster; there is no scenario where the algorithm runs asymptotically faster than that bound.
This gives a more exact characterization of the running time. However, for most purposes it is simpler to ignore the lower bound (the possibility of faster execution), since you are generally only concerned with the upper bound.
Here's my attempt:
A function f(n) is O(g(n)) if and only if there exist constants c and n0 such that f(n) <= c*g(n) for all n >= n0.
Using this definition, can we say that the function f(n) = 2^(n+1) is O(2^n)?
In other words, does a constant c exist such that 2^(n+1) <= c*(2^n) for all large n? Note that the second function, 2^n, is the one inside the Big O in the problem above. This confused me at first.
So, then use your basic algebra skills to simplify that inequality. 2^(n+1) breaks down to 2 * 2^n. Doing so, we're left with:
2 * 2^n <= c * 2^n
Now it's easy: the inequality holds for any value of c where c >= 2. So, yes, we can say that f(n) = 2^(n+1) is O(2^n).
Big Omega works the same way, except it evaluates f(n) >= c*g(n) for some constant 'c'.
So, simplifying the above functions the same way, we're left with (note the >= now):
2 * 2^n >= c * 2^n
The inequality holds for any c in the range 0 < c <= 2. So, we can say that f(n) = 2^(n+1) is Big Omega of 2^n.
Now, since BOTH of those hold, we can say the function is Big Theta of 2^n. If one of them did not hold for any constant c, then it is not Big Theta.
The above example was taken from the Algorithm Design Manual by Skiena, which is a fantastic book.
Hope that helps. This really is a hard concept to simplify. Don't get hung up so much on what 'c' is, just break it down into simpler terms and use your basic algebra skills.
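As a final sanity check, a tiny sketch (illustrative) confirming the constant ratio behind the Theta claim:

    for n in range(1, 20):
        assert 2**(n + 1) == 2 * 2**n  # the ratio f(n)/g(n) is exactly 2
    print("2^(n+1) / 2^n == 2 for all sampled n")

Since the ratio is exactly the constant 2, any c >= 2 witnesses Big O and any 0 < c <= 2 witnesses Big Omega, which is the sandwich that makes f(n) Theta(2^n).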
