Why is the complexity of a linear function the same as that of a quadratic function? - algorithm

I am learning algorithms, and I came across something very interesting.
The asymptotic bound of the linear function a*n + b is O(n^2) for all a > 0.
This is the same bound as that of the, not so surprisingly, quadratic a*n^2 + b*n + c.
Why?

Because big-oh gives you an upper bound. Your first function is also O(n^3), O(n^4), O(n^2012) etc.
The definition of big-oh basically says that f(n) is O(g(n)) if there exist constants c and k such that, for all n > k, we have c*g(n) >= f(n).
Look into big-theta for stronger / tight bounds.
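To see this concretely, you can test the definition numerically. A minimal sketch in Python, assuming arbitrary illustrative constants (a = 3, b = 7, c = 1, k = 5; any valid choices would do):

    # Check the big-oh condition numerically: f(n) <= c*g(n) for all n in (k, N].
    # A finite loop only illustrates the definition; it cannot prove "for all n".
    def is_dominated(f, g, c, k, up_to=10_000):
        return all(f(n) <= c * g(n) for n in range(k + 1, up_to + 1))

    a, b = 3, 7                                          # f(n) = a*n + b
    f = lambda n: a * n + b
    print(is_dominated(f, lambda n: n**2, c=1, k=5))     # True: f is O(n^2)
    print(is_dominated(f, lambda n: n**3, c=1, k=5))     # True: f is O(n^3) too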

Related

Explanation for Big-O notation comparison in different complexity class

Why can Big-O notation not compare algorithms in the same complexity class? Please explain; I cannot find any detailed explanation.
O(n^2) says that the algorithm requires at most a constant multiple of n^2 operations. Suppose you have algorithm A, which requires f(n) = 1000n^2 + 2000n + 3000 operations, and algorithm B, which requires g(n) = n^2 + 10^20 operations. They're both O(n^2).
For small n the first algorithm performs better than the second one. For big n the second algorithm looks better, since it has 1 * n^2 where the first has 1000 * n^2.
Also, h(n) = n is O(n^2) and k(n) = 5 is O(n^2). So, I can say that k(n) is better than h(n), because I know what these functions look like.
Now consider the case when I don't know what k(n) and h(n) look like. The only thing I'm given is that k(n) is O(n^2) and h(n) is O(n^2). Can I say which function is better? No.
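To see where the trade-off flips, you can locate the crossover point numerically. A sketch using the two cost functions from the example above (the search cap of 10^12 is an arbitrary safe bound):

    # Binary search for the first n where algorithm A's cost exceeds B's.
    f = lambda n: 1000 * n**2 + 2000 * n + 3000   # algorithm A
    g = lambda n: n**2 + 10**20                   # algorithm B

    lo, hi = 1, 10**12
    while lo < hi:
        mid = (lo + hi) // 2
        if f(mid) > g(mid):
            hi = mid
        else:
            lo = mid + 1
    print(lo)   # ~3.16e8: below this n, A is cheaper; above it, B is cheaper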
Summary
You can't say which function is better, because Big O notation stands for "less than or equal". And the following is true:
O(1) is O(n^2)
O(n) is O(n^2)
How to compare functions?
There is the Big Omega notation, which stands for "greater than or equal". For example, f(n) = n^2 + n + 1 is Omega(n^2), Omega(n) and Omega(1). When a function is bounded both above and below by the same asymptotic, Big Theta is used. So for the f(n) described above we can say that:
f(n) is O(n^3)
f(n) is O(n^2)
f(n) is Omega(n^2)
f(n) is Omega(n)
f(n) is Theta(n^2) // this is the only way we can describe f(n) using Theta notation
So, to compare asymptotics of functions you need to use Theta instead of Big O or Omega.
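For the f(n) = n^2 + n + 1 above, the Theta claim can be sanity-checked by exhibiting sandwich constants. A sketch, assuming the (non-unique) choice c1 = 1, c2 = 3, n0 = 1:

    # Verify the sandwich c1*n^2 <= f(n) <= c2*n^2 for n >= n0 over a finite range.
    f = lambda n: n**2 + n + 1
    c1, c2, n0 = 1, 3, 1
    print(all(c1 * n**2 <= f(n) <= c2 * n**2 for n in range(n0, 100_000)))  # True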

How can an algorithm that is O(n) also be O(n^2), O(n^1000000), O(2^n)?

So the answer to this question, What is the difference between Θ(n) and O(n)?, states that "Basically when we say an algorithm is of O(n), it's also O(n^2), O(n^1000000), O(2^n), ... but a Θ(n) algorithm is not Θ(n^2)."
I understand Big O to represent an upper bound or the worst case; given that, I don't understand how O(n) is also O(n^2) and the other cases worse than O(n).
Perhaps I have some fundamental misunderstandings. Please help me understand this as I have been struggling for a while.
Thanks.
It's helpful to think of what big-Oh means: if a function is O(n), then c*n, where c is some positive number, is an upper bound. If c*n is an upper bound, then clearly for n >= 1, c*n^2 is also an upper bound, as are c*n^3, c*n^4, c*n^1000, and so on.
(The original answer included a graph of the growth of common functions; each function in it is an upper bound of every function "to the right" of it, i.e., of those that grow more slowly.)
Suppose the running time of your algorithm is T(n) = 3n + 6 (i.e., an arbitrary polynomial of order 1).
It's true that T(n) = O(n), because 3n + 6 <= 4n for all n >= 6 (to use the definition of big-oh notation). It's also true that T(n) = O(n^2), because 3n + 6 <= n^2 for all n >= 5 (to use the definition again).
It's also true that T(n) = Θ(n) because, in addition to the proof that it is O(n), it is true that 3n + 6 > n for all n >= 1. However, you cannot find a constant c such that 3n + 6 > c*n^2 for arbitrarily large n. (Proof sketch: for any c > 0, c*n^2 - 3n - 6 -> infinity as n -> infinity.)
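The three inequalities above are easy to sanity-check over a finite range (an illustration only; a loop cannot prove an "all n" statement):

    # Illustrate the claims about T(n) = 3n + 6.
    T = lambda n: 3 * n + 6
    print(all(T(n) <= 4 * n for n in range(6, 100_000)))   # O(n):   holds from n = 6
    print(all(T(n) <= n**2  for n in range(5, 100_000)))   # O(n^2): holds from n = 5
    print(all(T(n) >= n     for n in range(1, 100_000)))   # Omega(n), hence Theta(n)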
I understand Big O to represent an upper bound or the worst case; given that, I don't understand how O(n) is also O(n^2) and the other cases worse than O(n).
Intuitively, an "upper bound of x" means that something will always be less than or equal to x. If something is less than or equal to x, it is also less than or equal to x^2 and x^1000, for large enough values of x. So x^2 and x^1000 can also be upper bounds.
This is what Big-oh represents: upper bounds.
When we say that f(n) = O(g(n)), we mean only that there exists a constant c such that, for all sufficiently large n, f(n) <= c*g(n). Note that if f(n) = O(g(n)), we can always choose a function h(n) bigger than g(n), and since g(n) is eventually less than h(n), we have f(n) <= c*g(n) <= c*h(n), so f(n) = O(h(n)) as well.
Note that the O bound is not tight. The Theta bound is the intersection of O(g(n)) and Omega(g(n)), where Omega gives the lower bound (it's like O, the upper bound, but bounds from below instead). And if f(n) is bounded below by g(n), and h(n) is bigger than g(n), it does not follow that f(n) is bounded below by h(n).
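The f <= c*g <= c*h chain can be seen with any concrete trio of functions; the ones below are arbitrary picks for illustration:

    # If f(n) <= c*g(n) eventually, and g(n) <= h(n), then f(n) <= c*h(n):
    # the same constant c witnesses f = O(h).
    f = lambda n: 2 * n + 3    # an O(n) function
    g = lambda n: n
    h = lambda n: n**2
    c = 3
    print(all(f(n) <= c * g(n) <= c * h(n) for n in range(3, 10_000)))  # True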

Asymptotic Notations: (an + b) ∈ O(n^2)

I was reading Introduction to Algorithms by Thomas H. Cormen when I encountered this statement (in Asymptotic Notations):
when a > 0, any linear function an + b is in O(n^2), which is essentially verified by taking c = a + |b| and n0 = max(1, -b/a)
I can't understand why O(n^2) and not O(n). When will the O(n) upper bound fail?
For example, for 3n + 2, according to the book,
3n + 2 <= 5n^2 for n >= 1
but this also holds:
3n + 2 <= 5n for n >= 1
So why is the upper bound given in terms of n^2?
Well I found the relevant part of the book. Indeed the excerpt comes from the chapter introducing big-O notation and relatives.
The formal definition of the big-O is that the function in question does not grow asymptotically faster than the comparison function. It does not say anything about whether the function grows asymptotically slower, so:
f(n) = n is in O(n), O(n^2) and also O(e^n) because n does not grow asymptotically faster than any of these. But n is not in O(1).
Any function in O(n) is also in O(n^2) and O(e^n).
If you want to describe the tight asymptotic bound, you would use the big-Θ notation, which is introduced just before the big-O notation in the book. f(n) ∊ Θ(g(n)) means that f(n) does not grow asymptotically faster than g(n) and the other way around. So f(n) ∊ Θ(g(n)) is equivalent to f(n) ∊ O(g(n)) and g(n) ∊ O(f(n)).
So f(n) = n is in Θ(n) but not in Θ(n^2) or Θ(e^n) or Θ(1).
Another example: f(n) = n^2 + 2 is in O(n^3) but not in Θ(n^3), it is in Θ(n^2).
You need to think of O(...) as a set (which is why the set theoretic "element-of"-symbol is used). O(g(n)) is the set of all functions that do not grow asymptotically faster than g(n), while Θ(g(n)) is the set of functions that neither grow asymptotically faster nor slower than g(n). So a logical consequence is that Θ(g(n)) is a subset of O(g(n)).
Often = is used instead of the ∊ symbol, which really is misleading. It is pure notation and does not share any properties with the actual =. For example 1 = O(1) and 2 = O(1), but not 1 = O(1) = 2. It would be better to avoid using = for the big-O notation. Nonetheless you will later see that the = notation is useful, for example if you want to express the complexity of rest terms, for example: f(n) = 2*n^3 + 1/2*n - sqrt(n) + 3 = 2*n^3 + O(n), meaning that asymptotically the function behaves like 2*n^3 and the neglected part does asymptotically not grow faster than n.
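The rest-term claim at the end can be checked the same way: the neglected part f(n) - 2*n^3 should be bounded by a constant multiple of n. A sketch (c = 1 from n = 3 onward happens to work here; other constants would too):

    import math

    # Rest term of f(n) = 2n^3 + (1/2)n - sqrt(n) + 3 after peeling off 2n^3.
    rest = lambda n: n / 2 - math.sqrt(n) + 3
    print(all(abs(rest(n)) <= 1 * n for n in range(3, 100_000)))  # True: rest is O(n)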
All of this goes somewhat against the typical usage of big-O notation. You often find the time/memory complexity of an algorithm defined by it, when really it should be defined by big-Θ notation. For example, if you have an algorithm in O(n^2) and one in O(n), then the first one could actually still be asymptotically faster, because it might also be in Θ(1). The reason for this may sometimes be that a tight Θ-bound does not exist or is not known for a given algorithm, so at least the big-O gives you a guarantee that things won't take longer than the given bound. By convention you always try to give the lowest known big-O bound, although this is not formally necessary.
The formal definition (from Wikipedia) of the big O notation says that:
f(x) = O(g(x)) as x → ∞
if and only if there is a positive constant M such that for all sufficiently large values of x, f(x) is at most M multiplied by g(x) in absolute value. That is, f(x) = O(g(x)) if and only if there exist a positive real number M and a real number x0 such that
|f(x)| ≤ M|g(x)| for all x > x0 (meaning: for x big enough)
In our case, we can easily show that
|an + b| < |an + n| (for n sufficiently big, i.e., when n > b)
Then |an + b| < (a+1)|n|
Since a+1 is constant (corresponds to M in the formal definition), definitely
an + b = O(n)
You were right to doubt.
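To make the constants concrete, here is a quick numeric check of the |an + b| < (a+1)n step above, with arbitrary illustrative values a = 5 and b = 12:

    # Check |a*n + b| < (a+1)*n once n > b (here a = 5, b = 12, both arbitrary).
    a, b = 5, 12
    print(all(abs(a * n + b) < (a + 1) * n for n in range(b + 1, 100_000)))  # True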

Difference between Big-Theta and Big O notation in simple language

While trying to understand the difference between Theta and O notation I came across the following statement :
The Θ-notation asymptotically bounds a function from above and below. When we have only an asymptotic upper bound, we use O-notation.
But I do not understand this. The book explains it mathematically, but it's too complex and gets really boring to read when I am not understanding it.
Can anyone explain the difference between the two using simple yet powerful examples?
Big O is giving only upper asymptotic bound, while big Theta is also giving a lower bound.
Everything that is Theta(f(n)) is also O(f(n)), but not the other way around.
T(n) is said to be Theta(f(n)), if it is both O(f(n)) and Omega(f(n))
For this reason big-Theta is more informative than big-O notation, so if we can say something is big-Theta, it's usually preferred. However, it is harder to prove something is big Theta, than to prove it is big-O.
For example, merge sort is both O(n*log(n)) and Theta(n*log(n)), but it is also O(n^2), since n^2 is asymptotically "bigger" than it. However, it is NOT Theta(n^2), since the algorithm is NOT Omega(n^2).
Omega(n) is an asymptotic lower bound. If T(n) is Omega(f(n)), it means that from a certain n0 onward there is a constant C1 such that T(n) >= C1 * f(n), whereas big-O says there is a constant C2 such that T(n) <= C2 * f(n).
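The merge sort claim above can be illustrated empirically by counting comparisons and checking that they sit between C1*n*log2(n) and C2*n*log2(n). A sketch; C1 = 0.5 and C2 = 1.0 are empirical choices that happen to work for this range, not derived constants:

    import math, random

    def merge_sort(xs, counter):
        # Standard top-down merge sort that counts element comparisons.
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left = merge_sort(xs[:mid], counter)
        right = merge_sort(xs[mid:], counter)
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            counter[0] += 1                 # one comparison per loop iteration
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    for n in (1_000, 10_000, 100_000):
        counter = [0]
        merge_sort([random.random() for _ in range(n)], counter)
        bound = n * math.log2(n)
        print(0.5 * bound <= counter[0] <= 1.0 * bound)   # True for each n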
All three (Omega, O, Theta) give only asymptotic information ("for large input"):
Big O gives upper bound
Big Omega gives lower bound and
Big Theta gives both lower and upper bounds
Note that this notation is not related to the best, worst and average cases analysis of algorithms. Each one of these can be applied to each analysis.
I will just quote from Knuth's TAOCP Volume 1 - page 110 (I have the Indian edition). I recommend reading pages 107-110 (section 1.2.11 Asymptotic representations)
People often confuse O-notation by assuming that it gives an exact order of growth; they use it as if it specifies a lower bound as well as an upper bound. For example, an algorithm might be called inefficient because its running time is O(n^2). But a running time of O(n^2) does not necessarily mean that the running time is not also O(n)
On page 107,
1^2 + 2^2 + 3^2 + ... + n^2 = O(n^4) and
1^2 + 2^2 + 3^2 + ... + n^2 = O(n^3) and
1^2 + 2^2 + 3^2 + ... + n^2 = (1/3) n^3 + O(n^2)
Big-Oh is for approximations. It allows you to replace ~ with an equals = sign. In the example above, for very large n, we can be sure that the quantity will stay below n^4, below n^3, and below (1/3)n^3 + n^2 [but not simply below n^2].
Big Omega is for lower bounds. An algorithm that is Omega(n^2) will not be as efficient as one that is O(n log n) for large n; however, we do not know from which value of n onward (in that sense we only know it approximately).
Big Theta is for exact order of Growth, both lower and upper bound.
I am going to use an example to illustrate the difference.
Let the function f(n) be defined as
if n is odd f(n) = n^3
if n is even f(n) = n^2
From CLRS
A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that it can be "sandwiched" between c1*g(n) and c2*g(n), for sufficiently large n.
AND
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0}.
AND
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0}.
The upper bound on f(n) is n^3. So our function f(n) is clearly O(n^3).
But is it Θ(n^3)?
For f(n) to be in Θ(n^3) it has to be sandwiched between two functions, one forming the lower bound and the other the upper bound, both of which grow as n^3. While the upper bound is obvious, the lower bound cannot be n^3. The lower bound is in fact n^2; f(n) is Ω(n^2).
From CLRS
For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
Hence f(n) is not in Θ(n^3), while it is in O(n^3) and Ω(n^2).
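A quick way to see why the lower bound fails is to watch f(n)/n^3 on even inputs, where it tends to 0, so no positive constant c can satisfy c*n^3 <= f(n) for all large n:

    # f(n)/n^3 on even n shrinks toward 0, so f(n) is not Omega(n^3).
    f = lambda n: n**3 if n % 2 else n**2
    for n in (10, 1_000, 100_000):        # all even
        print(f(n) / n**3)                # 0.1, 0.001, 1e-05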
If the running time is expressed in big-O notation, you know that the running time will not grow faster than the given expression. It expresses the worst-case scenario.
But with Θ notation you also know that it will not be faster; that is, there is no best-case scenario where the algorithm returns sooner.
This gives a more exact bound on the expected running time. However, for most purposes it is simpler to ignore the lower bound (the possibility of faster execution), since you are generally only concerned about the worst-case scenario.
Here's my attempt:
A function f(n) is O(g(n)) if and only if there exists a constant c such that f(n) <= c*g(n) for all sufficiently large n.
Using this definition, could we say that the function f(n) = 2^(n+1) is O(2^n)?
In other words, does a constant 'c' exist such that 2^(n+1) <= c*(2^n)? Note that the second function (2^n) is the function after the Big O in the above problem. This confused me at first.
So, then use your basic algebra skills to simplify that equation. 2^(n+1) breaks down to 2 * 2^n. Doing so, we're left with:
2 * 2^n <= c(2^n)
Now it's easy: the inequality holds for any value of c where c >= 2. So, yes, we can say that f(n) = 2^(n+1) is O(2^n).
Big Omega works the same way, except it evaluates f(n) >= c*g(n) for some constant 'c'.
So, simplifying the above functions the same way, we're left with (note the >= now):
2 * 2^n >= c(2^n)
So, the inequality works for the range 0 < c <= 2. So, we can say that f(n) = 2^(n+1) is Big Omega of (2^n).
Now, since BOTH of those hold, we can say the function is Big Theta of (2^n). If one of them didn't hold for any constant 'c', then it's not Big Theta.
The above example was taken from the Algorithm Design Manual by Skiena, which is a fantastic book.
Hope that helps. This really is a hard concept to simplify. Don't get hung up so much on what 'c' is, just break it down into simpler terms and use your basic algebra skills.
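If it helps, the whole example collapses to one observation you can verify directly: the ratio 2^(n+1) / 2^n is exactly 2 for every n, so c = 2 witnesses both directions at once:

    # 2^(n+1) equals 2 * 2^n exactly, so one constant works for O and Omega.
    f = lambda n: 2 ** (n + 1)
    g = lambda n: 2 ** n
    print(all(f(n) == 2 * g(n) for n in range(1, 200)))   # True: f is Theta(2^n)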

What does the big-O notation mean? [duplicate]

Possible Duplicate:
Plain English explanation of Big O
I need to figure out the big-O complexity of the following:
f(n) = 10n^2 + 10n + 20
All I can come up with is 50, and I am just too embarrassed to state how I came up with that.
Can someone explain what it means and how I should calculate it for f(n) above?
Big-O notation is to do with complexity analysis. A function is O(g(n)) if, for all n beyond some threshold, it is upper-bounded by some constant multiple of g(n). More formally:
f(n) is in O(g(n)) iff there exist constants n0 and c such that for all n >= n0, f(n) <= c*g(n)
In this case, f(n) = 10n^2 + 10n + 20, so f(n) is in O(n^2), O(n^3), O(n^4), etc. The tightest upper bound is O(n^2).
In layman's terms, what this means is that f(n) grows no worse than quadratically as n tends to infinity.
There's a corresponding Big-Omega notation which can be used to lower-bound functions in a similar manner. In this case, f(n) is also Omega(n^2): that is, it grows no better than quadratically as n tends to infinity.
Finally, there's a Big-Theta notation which combines the two, i.e. iff f(n) is in O(g(n)) and f(n) is in Omega(g(n)) then f(n) is in Theta(g(n)). In this case, f(n) is in Theta(n^2): that is, it grows exactly quadratically as n tends to infinity.
The point of all this is that as n gets big, the linear (10n) and constant (20) terms become essentially irrelevant, as the value of the function is far more affected by the quadratic term.
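You can watch that happen numerically: the ratio f(n)/n^2 settles toward the leading coefficient 10 as n grows, which is exactly the sense in which the lower-order terms stop mattering:

    # The lower-order terms fade: f(n)/n^2 -> 10 as n grows.
    f = lambda n: 10 * n**2 + 10 * n + 20
    for n in (10, 1_000, 1_000_000):
        print(f(n) / n**2)                # 11.2, 10.01002, ~10.00001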
