Big-O and Omega Notations - algorithm

I was reading this question about the definition of Big-O notation.
But I have less than 50 reputation, so I can't comment there; I hope someone can help me here.
My question is about this sentence:
There are many algorithms for which there is no single function g such that the complexity is both O(g) and Ω(g). For instance, insertion sort has a Big-O lower bound of O(n²) (meaning you can't find anything smaller than n²) and an Ω upper bound of Ω(n).
For large n, O(n²) is an upper bound and Ω(n) is a lower bound, or have I misunderstood?
Could someone help me?

has a Big-O lower bound of O(n²)
I don't really agree with the confusing way this was phrased (since big-O is itself an upper bound), but what I'm reading here is the following:
Big-O is an upper bound.
That is to say, f(n) ∈ O(g(n)) holds if |f(n)| <= k|g(n)| for some constant k as n tends to infinity (by definition).
So let's say we have a function f(n) = n² (which is, if we ignore constant factors, the worst case for insertion sort). We can say n² ∈ O(n²), but we can also say n² ∈ O(n³) or n² ∈ O(n⁴) or n² ∈ O(n⁵) or ....
So the smallest g(n) we can find is n².
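As a quick numerical illustration of this definition, here is a minimal Python sketch (the constant k, the cutoff n0 and the helper name are my own choices; a finite check like this only suggests the trend, it proves nothing):

    # Spot-check |f(n)| <= k * |g(n)| over a finite range of n.
    def dominated(f, g, k=1, n0=1, n_max=10_000):
        return all(abs(f(n)) <= k * abs(g(n)) for n in range(n0, n_max + 1))

    f = lambda n: n**2
    print(dominated(f, lambda n: n**2))  # True:  n^2 is in O(n^2)
    print(dominated(f, lambda n: n**3))  # True:  n^2 is in O(n^3)
    print(dominated(f, lambda n: n))     # False: n^2 is not in O(n)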
But the answer you linked to is, as a whole, incorrect - insertion sort itself does not have upper or lower bounds, but rather it has best, average and worst cases, which have upper and lower bounds.
See the answer I posted there.

maybe I have misunderstood?
No, you are right.
In general, the Big-O is for the upper bound and big-Ω for the lower bound.
For insertion sort in the worst-case scenario, the upper bound is O(n²); Ω(n) is a lower bound.
It seems like you found a mistake in the other answer.
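To make the best-case/worst-case distinction concrete, here is a small Python sketch (the function name is mine) that counts the comparisons insertion sort performs: roughly n on already-sorted input (the best case) and roughly n²/2 on reversed input (the worst case):

    def insertion_sort_comparisons(values):
        """Insertion-sort a copy of `values`, returning the comparison count."""
        a = list(values)
        comparisons = 0
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0:
                comparisons += 1           # compare key with a[j]
                if a[j] <= key:
                    break
                a[j + 1] = a[j]            # shift the larger element right
                j -= 1
            a[j + 1] = key
        return comparisons

    n = 1000
    print(insertion_sort_comparisons(range(n)))         # 999    (~n, best case)
    print(insertion_sort_comparisons(range(n, 0, -1)))  # 499500 (~n^2/2, worst case)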

Related

What is Big O of n^2 x logn?

Is it n^2 log n or n^3? I know both of these act as upper bounds; I'm just torn between choosing a tighter but more complex bound (option 1) or a "worse" yet simpler bound (option 2).
Are there general rules for big-O expressions, such as that a big-O function can never be too complex or a product of two functions?
You already seem to have an excellent understanding of the fact that big-O notation is an upper bound, and also that a function with runtime complexity n^2 log n falls in both O(n^2 log n) and O(n^3), so I'll spare you the mathematics of that. It's immediately clear (from the fact that n^2 log n is in O(n^3)) that O(n^2 log n) is a subset of O(n^3), so the former is at least as good a bound. It turns out to be a strictly tighter bound (as can be seen with some basic algebra), which is a definite point in its favor.
I do understand your concern about the complexity of bounds, but I wouldn't worry about it. Mathematically, it's best to favor accuracy over simplicity when the two are at odds, and n^2 log n is not that complex an expression. So in my mind, O(n^2 log n) is a much better bound to state.
Other examples of similar or greater complexity:
As indicated in the comments, merge sort and quicksort have average time complexity O(n log n).
Interpolation search has an average time complexity of O(log log n).
The average case of Dijkstra's algorithm is stated on Wikipedia to be the absolute mouthful O(E + V log(E/V) log(V)).
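To get a numeric feel for how much tighter n^2 log n is than n^3, here is a quick sketch (base-2 logarithm is an arbitrary choice; switching bases only changes a constant factor):

    import math

    # The ratio n^3 / (n^2 log n) = n / log n grows without bound,
    # which is why O(n^2 log n) is a strictly tighter bound than O(n^3).
    for n in (10, 100, 1_000, 10_000):
        tight = n**2 * math.log2(n)
        loose = n**3
        print(f"n={n:>6}: n^2 log n = {tight:.2e}, n^3 = {loose:.2e}, ratio = {loose / tight:.1f}")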

Asymptotic Notation for algorithms

I finally thought I understood what it means when a function f(n) is sandwiched between a lower and upper bound of the same class, and so can be described as Θ(n).
As an example:
f(n) = 2n + 3
1n <= 2n + 3 <= 5n for all values of n >= 1
In the example above it made perfect sense, the order of n is on both sides, so f(n) is sandwiched between 1 * g(n) and 5 * g(n).
It was also a lot clearer when I tried not to use the notations to think about best or worst case, and instead to think of them as an upper, lower or average bound.
So, now thinking I finally understood this and the maths around it, I went back to this page: https://www.bigocheatsheet.com/ to look at the run times of various algorithms, and was suddenly confused again about how many of the algorithms there, for example bubble sort, do not have the same order on both sides (upper and lower bound), yet theta is used to describe them.
Bubble sort has Ω(n) and O(n²), but the theta value is given as Θ(n²). How is it that it can have Θ(n²) if the upper bound of the function is of order n² but the lower bound is only of order n?
Actually, the page you referred to is highly misleading - even if not completely wrong. If you analyze the complexity of an algorithm, you first have to specify the scenario: i.e. whether you are talking about worst-case (the default case), average case or best-case. For each of the three scenarios, you can then give a lower bound (Ω), upper bound (O) or a tight bound (Θ).
Take insertion sort as an example. While the page is, strictly speaking, correct in that the best case is Ω(n), it could just as well (and more precisely) have said that the best case is Θ(n). Similarly, the worst case is indeed O(n²) as stated on that page (as well as Ω(n²) or Ω(n) or O(n³)), but more precisely it's Θ(n²).
Using Ω to always denote the best case and O to always denote the worst case is, unfortunately, a common mistake. Takeaway message: the scenario (worst, average, best) and the type of bound (upper, lower, tight) are two independent dimensions.
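To see those two dimensions separately, consider this sketch (the name is mine; the early-exit flag is the standard optimization that gives bubble sort a linear best case). The same algorithm does about n comparisons in the best-case scenario and about n²/2 in the worst-case scenario, and each scenario has its own tight bound:

    def bubble_sort_comparisons(values):
        """Bubble-sort a copy of `values` (with early exit), returning the comparison count."""
        a = list(values)
        comparisons = 0
        for i in range(len(a) - 1):
            swapped = False
            for j in range(len(a) - 1 - i):
                comparisons += 1
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swapped = True
            if not swapped:        # a pass with no swaps: already sorted
                break
        return comparisons

    n = 1000
    print(bubble_sort_comparisons(range(n)))         # 999    -> best case is Θ(n)
    print(bubble_sort_comparisons(range(n, 0, -1)))  # 499500 -> worst case is Θ(n^2)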

Why is an algorithm complexity given in the Big O notation instead of Theta?

I know what the Big-O, Theta and Omega notations are, but if, for example, my algorithm is a for loop inside a for loop, each running n times, my complexity would be O(n²). But why O(n²) instead of Θ(n²)? Since the complexity IS in fact O(n²) and Ω(n²), it is also Θ(n²), and I just can't see any reason not to use Θ(n²) instead of O(n²), since Θ(n²) restricts my complexity with both an upper and a lower value, not only an upper one as O(n²) does.
If f(n) = Θ(g(n)), then f(n) = O(g(n)). This is because Θ(g(n)) ⊆ O(g(n)).
In your specific case, if a loop runs exactly n² times, the time complexity is in both O(n²) and Θ(n²).
The main reason why big-O is typically enough is that we are more interested in the worst-case time complexity when analyzing an algorithm's performance, and knowing the worst-case scenario is usually enough.
Also, it is not always possible to find a tight bound.
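In the nested-loop case, though, the tight bound is easy to see: a doubly nested loop performs exactly n * n iterations, so the upper and lower bounds coincide (a minimal sketch):

    def count_iterations(n):
        """Count the iterations of a doubly nested loop: exactly n * n."""
        count = 0
        for _ in range(n):
            for _ in range(n):
                count += 1       # constant work per inner iteration
        return count

    for n in (10, 100, 1000):
        assert count_iterations(n) == n * n
    # The count is exactly n^2, not merely at most n^2, so the running
    # time is Θ(n^2): here O(n^2) and Ω(n^2) meet.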

How are the following functions O(N^3)?

I'm taking the "Intro To Algorithms" course on Coursera, and I've arrived at the video which deals with Big-Theta, Big-Omega and Big-O notation. The end-of-video quiz presents the following question:
Q: Which of the following functions is O(N^3)?
a) 11N + 15 lg N + 100
b) (N^2)/3
c) 25,000*(N^3)
d) All of the above
I answered "c" and was told my answer was incorrect, and that the correct answer is actually "d". The explanation provided by the course wasn't much help:
Recall that big-Oh notation provides only an upper bound on the growth rate of a function as N gets large. In this course, we primarily use tilde notation because it more accurately describes the function: it provides both an upper and lower bound on the function as well as the coefficient of the leading term.
I was under the impression that one should drop the lower-order terms (i.e. "15 lg N + 100") and focus only on the highest-order term. Furthermore, I can't see how N^3 could be the upper bound on a quadratic (as opposed to a cubic) function like N^2.
So my question is, why are "a" and "b" classified as O(N^3) in this case?
You know that f(n) = O(g(n)) means f(n) <= constant * g(n), right?
In other words, if you plot the graphs of f(n) and constant * g(n), then after some value of n the latter will always lie above f(n).
Here g(N) is N^3 and each option plays the role of f(N). For large enough N, a constant multiple of N^3 dominates options a, b and c alike; hence the answer is d :)
Edit:
The following statements are all true:
n = O(n)
n = O(n^2)
n = O(n^3)
But only n = O(n) is a tight upper bound, and that is what we should use when deriving the time complexity of an algorithm. If we use the 2nd or 3rd option, we are misusing big-O notation; or, let's say, they are upper bounds, but not tight ones!
Edit 2: Picture a function F(x) together with two upper bounds: G(x) is a tight upper bound for F(x), while H(x) is an upper bound of F(x) but not a tight one. Still, we would say F(x) = O(G(x)) and F(x) = O(H(x)). When somebody in an exam or interview asks for the time complexity, they are asking for the tight bound, not just any upper bound. Unfortunately, the terms "tight upper bound" and "upper bound" are used interchangeably in exams and interviews.
The explanation says it: "Recall that big-Oh notation provides only an upper bound on the growth rate of a function as N gets large."
In this particular context, the upper bound can be read as "does not grow faster than N³".
It is true that 11N + 15lgN + 100 does not grow faster than N³.
Think of any function that is O(N^2) as also being O(N^3), O(N^4) and so on: anything bounded above by N^2 is certainly bounded above by N^3, so anything that is O(N^2) is indeed O(N^3).
http://en.wikipedia.org/wiki/Big_O_notation#/media/File:Big-O-notation.png
As many have already noted, a function f(n) with an upper bound of, say, O(n) is also O(n^2), O(n^3), O(n^4), etc.
If that is still confusing, think of it in layman's terms.
Suppose a process takes at most 10 seconds to execute, whatever the input. Then we can conclude:
Whatever the input, the execution will complete in 10 seconds or less.
If that is true, the following are also true:
Whatever the input, the execution will complete in 100 seconds or less.
Whatever the input, the execution will complete in 1000 seconds or less.
And so on.
And from that you can see how the answer follows. Hope that gave you a glimpse.
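A finite spot check of the quiz options (not a proof; the constant c and the range of N are arbitrary choices that happen to work) shows each of a, b and c staying below a constant multiple of N^3:

    import math

    options = {
        "a) 11N + 15 lg N + 100": lambda n: 11 * n + 15 * math.log2(n) + 100,
        "b) N^2 / 3":             lambda n: n**2 / 3,
        "c) 25,000 N^3":          lambda n: 25_000 * n**3,
    }
    c = 25_000   # one constant that works for all three options
    for name, f in options.items():
        ok = all(f(n) <= c * n**3 for n in range(1, 10_000))
        print(f"{name}: f(N) <= {c} * N^3 on [1, 10000) -> {ok}")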

The big O notation

I am reading about Big-O notation. In the book I have, there is an example in which the complexity of n² is in the class O(n³). That doesn't seem logical to me, because n³ depends on n and isn't just a plain constant multiplier that we can "get rid of."
Please explain to me why those two are of the same complexity class. I can't find an answer on this forum or any other.
Big-O determines an upper bound for large values of n. O(n³) is larger than O(n²), and so an n² program is still O(n³). It's also O(n⁴), O(n⁵), and so on.
The reverse is not true, however. An n³ program is not O(n²). Rather, it would be Ω(n²), as Ω determines a lower bound (how much work we have to do at least).
Big-O says nothing about this upper bound being "tight"; it just needs to be at least as high as the actual complexity. So while an n² program is bounded by O(n³), that's not a very tight bound. O(n²) is tighter and more informative.
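A tiny sketch of why the reverse fails: for n³ to be O(n²) we would need n³ <= k * n² for some fixed k and all large n, i.e. n <= k, which fails no matter how big k is:

    # For any fixed k, n^3 <= k * n^2 fails as soon as n exceeds k.
    for k in (10, 1_000, 1_000_000):
        n = k + 1
        print(f"k={k}: n^3 <= k*n^2 fails at n={n} ({n**3} > {k * n**2})")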
