The big O notation

I am reading about Big O notation. In the book I have, there is an example in which an algorithm of complexity n² is said to be in the class O(n³). That doesn't seem logical to me, because n³ depends on n and isn't just a plain constant multiplier that we can "get rid of."
Please explain to me why those two are of the same complexity. I can't find an answer on this forum or any other.

Big O gives an upper bound for large values of n. O(n³) is larger than O(n²), so an n² program is still O(n³). It's also O(n⁴), O(n⁵), and so on, for every higher power of n.
The reverse is not true, however. An n³ program is not O(n²). Rather, it would be Ω(n²), since Ω gives a lower bound (how much work we have to do at least).
Big O says nothing about this upper bound being "tight"; it just needs to be at least as high as the actual complexity. So while an n² program is bounded by O(n³), that's not a very tight bound. O(n²) is tighter and more informative.
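As a minimal sketch of that point (count_pairs is a hypothetical function of my own, not from the question): the nested loop below performs exactly n² units of work, and n² ≤ 1·n² and n² ≤ 1·n³ for every n ≥ 1, so the same program is legitimately O(n²), O(n³), O(n⁴), and so on, with O(n²) being the tight bound.

def count_pairs(n):
    # Hypothetical n-squared workload: one unit of work per (i, j) pair.
    work = 0
    for i in range(n):
        for j in range(n):
            work += 1          # exactly n * n units of work in total
    return work

for n in (1, 10, 100):
    ops = count_pairs(n)
    assert ops <= 1 * n**2     # tight upper bound
    assert ops <= 1 * n**3     # looser, but still a valid upper bound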

Related

In asymptotic notation, why don't we use every possible function to describe the growth rate of our function?

If f(n) = 3n + 8, we say (or prove) that f(n) = Ω(n).
Why don't we use Ω(1) or Ω(log n) or ... to describe the growth rate of our function?
In the context of studying the complexity of algorithms, the Ω asymptotic bound can serve at least two purposes:
check if there is any chance of finding an algorithm with an acceptable complexity;
check if we have found an optimal algorithm, i.e. such that its O bound matches the known Ω bound.
For these purposes, a tight bound is preferable (indeed mandatory).
Also note that f(n)=Ω(n) implies f(n)=Ω(log(n)), f(n)=Ω(1) and all lower growth rates, and we needn't repeat that.
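As a quick check of that last remark: f(n) = Ω(n) means there are constants c > 0 and n₀ with f(n) ≥ c·n for all n ≥ n₀. Since n ≥ log n and n ≥ 1 for every n ≥ 1, the same constant gives f(n) ≥ c·log n and f(n) ≥ c·1, which is exactly f(n) = Ω(log n) and f(n) = Ω(1).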
You can actually do that. Check the definitions of Big Omega notation and let's take Ω(log n) as an example. We have:
f(n) = 3n + 8 = Ω(log n)
because:
limsup as n→∞ of (3n + 8) / log n = ∞ > 0
(according to the 1914 Hardy-Littlewood definition, under which f(n) = Ω(g(n)) means limsup |f(n)/g(n)| > 0), or:
log n = O(3n + 8), i.e. log n ≤ c·(3n + 8) for some constant c > 0 and all sufficiently large n
(according to the Knuth definition, under which f(n) = Ω(g(n)) means g(n) = O(f(n))).
For the definitions of the liminf and limsup symbols, see a standard reference.
Perhaps what was really meant is Θ (Big Theta), that is, both O() and Ω() simultaneously.

Why is an algorithm complexity given in the Big O notation instead of Theta?

I know what the Big O, Theta and Omega notations are, but, for example, if my algorithm is a for loop inside a for loop, each running n times, my complexity would be O(n²). But why O(n²) instead of ϴ(n²)? Since the complexity IS in fact O(n²) and Ω(n²), it is also ϴ(n²), and I just can't see any reason not to use ϴ(n²) instead of O(n²), since ϴ(n²) bounds my complexity both from above and from below, not only from above as O(n²) does.
If f(n) = Θ(g(n)) then f(n) = O(g(n)), because Θ(g(n)) ⊆ O(g(n)).
In your specific case, if the loops run exactly n² times, the time complexity is in both O(n²) and Θ(n²).
The main reason big O is typically considered enough is that we are mostly interested in the worst-case time complexity when analyzing an algorithm's performance, and knowing the worst case is usually sufficient.
Also, it is not always possible to find a tight bound.
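A small sketch of both points, using two hypothetical functions of my own: the first always does exactly n² iterations, so its running time is Θ(n²); the second may exit early, so over all inputs its running time is O(n) and Ω(1), and no single function is a tight Θ bound for every input.

def always_quadratic(n):
    # Exactly n * n iterations for every input size n: Theta(n**2).
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
    return count

def find_zero(values):
    # May stop after one comparison or scan everything, so over all inputs
    # the running time is O(n) and Omega(1), with no single tight Theta bound.
    for index, value in enumerate(values):
        if value == 0:
            return index
    return -1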

How to know if algorithm is big O or big theta

Can someone briefly explain why an algorithm would be O(f(n)) and not Θ(f(n))? I get that to be Θ(f(n)) it must be both O(f(n)) and Ω(f(n)), but how do you know whether a particular algorithm is Θ(f(n)) or just O(f(n))? I think it's hard for me not to see big O as a worst-case run time. I know it's just a bound, but how is the bound determined? For example, I can see a search in a binary search tree running in constant time if the element is in the root, but I think this has nothing to do with big O.
I think it is very important to distinguish bounds from cases.
Big O, Big Ω, and Big Θ all concern themselves with bounds. They are ways of defining behavior of an algorithm, and they give information on the growth of the number of operations as n (number of inputs) grows increasingly large.
In class, the Big O of a worst case is often discussed, and I think that can be confusing because it conflates the idea of asymptotic behavior with a single worst case. Big O is concerned with behavior as n approaches infinity, not a single case. O(f(x)) is an upper bound: it is a guarantee that, regardless of input, the algorithm will have a running time no worse than some positive constant multiplied by f(x).
As you mentioned Θ(f(x)) only exists if Big O is O(f(x)) and Big Ω is Ω(f(x)). In the question title you asked how to determine if an algorithm is Big O or Big Θ. The answer is that the algorithm could be both Θ(f(x)) and O(f(x)). In cases where Big Θ exists, the lower bound is some positive constant A multiplied by f(x) and the upper bound is some positive constant C multiplied by f(x). This means that when Θ(f(x)) exists, the algorithm can perform no worse than C*f(x) and no better than A*f(x), regardless of any input. When Θ(f(x)) exists, you are given a guarantee of how the algorithm will behave no matter what kind of input you feed it.
When Θ(f(x)) exists, so does O(f(x)). In these cases, it is valid to state that the running time of the algorithm is O(f(x)) or that the running time of the algorithm is Θ(f(x)). They are both true statements. Giving the Big Θ notation is just more informative, since it provides information on both the upper and lower bound. Big O only provides information on the upper bound.
When Big O and Big Ω have different functions for their bounds (i.e. when Big O is O(g(x)) and Big Ω is Ω(h(x)) where g(x) does not equal h(x)), then Big Θ does not exist. In these cases, if you want to give a guarantee on the upper bound of the algorithm, you must use Big O notation.
Above all, it is imperative that you differentiate between bounds and cases. Bounds give guarantees on the behavior of the algorithm as n becomes very large. Cases are more on an individual basis.
Let's make an example here. Imagine an algorithm BestSort that sorts a list of numbers by first checking whether the list is already sorted and, if it is not, sorting it with MergeSort. BestSort has a best-case complexity of Ω(n), since it may discover a sorted list, and a worst-case complexity of O(n log n), which it inherits from MergeSort. Thus this algorithm has no Theta complexity. Compare this to pure MergeSort, which is always Θ(n log n), even if the list is already sorted.
I hope this helps you a bit.
EDIT
Since there has been some confusion I will provide some kind of pseudo code:
BestSort(A) {
    If (A is sorted)         // this check has a complexity of O(n)
        return A;
    else
        return mergesort(A); // mergesort has a complexity of Θ(n log n)
}
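For readers who prefer something runnable, here is one possible Python version of that pseudocode; the helper names is_sorted and merge_sort are my own choices, not part of the original answer.

def is_sorted(a):
    # O(n): a single pass comparing neighbours.
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

def merge_sort(a):
    # Theta(n log n) on every input: it always splits and merges.
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def best_sort(a):
    # Best case Omega(n) (input already sorted), worst case O(n log n).
    if is_sorted(a):
        return list(a)
    return merge_sort(a)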

Big-O and Omega Notations

I was reading this question: Big-O notation's definition.
But I have less than 50 reputation to comment, so I hope someone can help me.
My question is about this sentence:
There are many algorithms for which there is no single function g such that the complexity is both O(g) and Ω(g). For instance, insertion sort has a Big-O lower bound of O(n²) (meaning you can't find anything smaller than n²) and an Ω upper bound of Ω(n).
For large n, isn't O(n²) an upper bound and Ω(n) a lower bound? Or have I misunderstood?
Could someone help me?
has a Big-O lower bound of O(n²)
I don't really agree with the confusing way this was phrased (since big-O is itself an upper bound), but what I'm reading here is the following:
Big-O is an upper bound.
That is to say, f(n) ∈ O(g(n)) holds if |f(n)| ≤ k|g(n)| for some constant k > 0 and all sufficiently large n (by definition).
So let's say we have a function f(n) = n² (which is, if we ignore constant factors, the worst case for insertion sort). We can say n² ∈ O(n²), but we can also say n² ∈ O(n³), n² ∈ O(n⁴), n² ∈ O(n⁵), and so on.
So the smallest g(n) we can find is n².
But the answer you linked to is, as a whole, incorrect - insertion sort itself does not have upper or lower bounds, but rather it has best, average and worst cases, which have upper and lower bounds.
See the answer I posted there.
maybe I have misunderstood?
No, you are right.
In general, the Big-O is for the upper bound and big-Ω for the lower bound.
For insertion sort, the worst-case scenario gives the upper bound O(n²), and Ω(n) is a lower bound.
It seems like you find a mistake in the other answer.
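To make those insertion-sort bounds concrete, here is a standard Python insertion sort (my own sketch, not taken from either answer): on an already-sorted input the inner while loop never runs, which is where the Ω(n) lower bound comes from, while on a reverse-sorted input it runs i times for every i, which gives the O(n²) worst case.

def insertion_sort(a):
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Sorted input: this loop body never executes, so ~n steps in total.
        # Reverse-sorted input: it executes i times, so ~n**2 / 2 steps in total.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a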

Plain English explanation of Theta notation?

What is a plain English explanation of Theta notation, with as little formal definition as possible and simple mathematics?
How is Theta notation different from Big O notation? Could anyone explain in plain English?
How are they used in algorithm analysis? I am confused.
If an algorithm's run time is Big Theta(f(n)), it is asymptotically bounded above and below by f(n). Big O is the same except that the bound is only above.
Intuitively, Big O(f(n)) says "we can be sure that, ignoring constant factors and terms, the run time never exceeds f(n)." In rough words, if you think of run time as "bad", then Big O is a worst case. Big Theta(f(n)) says "we can be sure that, ignoring constant factors and terms, the run time always varies as f(n)." In other words, Big Theta is a known tight bound: it's both worst case and best case.
A final try at intuition: Big O is "one-sided." O(n) run time is also O(n^2) and O(2^n). This is not true with Big Theta. If you have an algorithm run time that's O(n), then you already have a proof that it's not Big Theta(n^2). It may or may not be Big Theta(n).
An example is comparison sorting. Information theory tells us sorting requires at least log₂(n!) ≈ n log n comparisons (where n is the number of elements), and we have actually invented algorithms, such as mergesort, that use O(n log n) comparisons, so comparison sorting is Big Theta(n log n).
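For readers who want the missing step, here is the standard counting argument in outline (a sketch, not part of the original answer): a comparison sort must distinguish all n! input orderings, and a binary decision tree with h levels of comparisons has at most 2^h leaves, so 2^h ≥ n! and h ≥ log₂(n!). Since ln(n!) = ln 1 + ln 2 + ... + ln n ≥ ∫ from 1 to n of ln x dx = n·ln n − n + 1, we get h ≥ log₂(n!) ≥ n·log₂ n − n·log₂ e + log₂ e, which is Ω(n log n). Combined with mergesort's O(n log n) comparisons, this gives the Θ(n log n) bound quoted above.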
I have always wanted to put this down in simple words. Here is my try.
If an algorithm's time or space complexity is expressed in
Big O: e.g. O(n) means n is the upper limit. The actual growth could be less than or equal to n (up to a constant factor).
Big Omega: e.g. Ω(n) means n is the lower limit. The actual growth could be equal to or more than n.
Theta: e.g. Θ(n) means n is both the upper limit and the lower limit. The growth is exactly n (up to constant factors).
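A tiny illustration of those three statements, using two toy functions of my own: summing a list always performs exactly n additions, so that cost is Θ(n); a linear search performs at most n comparisons, so it is O(n), but it can finish after the first match, so its best case is Ω(1).

def total(values):
    # Always exactly len(values) additions: Theta(n).
    s = 0
    for v in values:
        s += v
    return s

def contains(values, target):
    # At most n comparisons: O(n). Can finish after the first match: Omega(1) best case.
    for v in values:
        if v == target:
            return True
    return False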
