Theta vs. Omega

I'm trying to understand time complexity.
If you have an algorithm with a running time of Θ(n^2), is it possible for it to have a best-case running time of Ω(n)? Or is the fastest possible running time some constant factor c * n^2?

Theta is a tight bound, meaning that it bounds the running time both from above and from below, covering the worst case and the best case alike. So in your case, the fastest running time is still some constant multiple of n^2; a genuinely linear best case would contradict the tight bound.
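For reference, here is the standard textbook definition of Θ (a CLRS-style formulation, added here for context): T(n) = Θ(g(n)) means there exist positive constants c_1, c_2 and n_0 such that

    c_1 \cdot g(n) \;\le\; T(n) \;\le\; c_2 \cdot g(n) \qquad \text{for all } n \ge n_0.

A running time that is Θ(n^2) for all inputs is therefore pinned between two quadratics, and the left inequality is exactly what rules out a linear best case.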

Related

Best asymptotic notation

If an algorithm's worst-case running time is 6n^4 + 2 and its best-case running time is 67 + 6n^3, what is the most appropriate asymptotic notation?
I'm trying to learn about Big-O notation.
Is it Θ(n^2)?
Essentially, asymptotic time complexity is defined separately for the best-case, worst-case or average number of operations an algorithm performs. "Is it Θ(n^2)?" So you should specify which case you are asking about. Or do you mean Θ(n^2) for all cases? (Which is obviously not correct.)
Having said that, we know the algorithm performs 6n^4 + 2 operations in the worst case, so it has Θ(n^4) worst-case complexity. I've used Theta here because I know exactly how many operations are going to be performed. In the best case, it performs 67 + 6n^3 operations, so it has Θ(n^3) time complexity for the best case.
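To make the Θ(n^4) step concrete, here is one explicit choice of witnessing constants (the particular constants are my own illustration):

    6n^4 \;\le\; 6n^4 + 2 \;\le\; 8n^4 \qquad \text{for all } n \ge 1,

so 6n^4 + 2 = Θ(n^4) with c_1 = 6, c_2 = 8, n_0 = 1. The same argument with 6n^3 and, say, 73n^3 handles the best case.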
How about the average time complexity? Well, I can't know that as long as I am not given the probability distribution of the inputs. It may be that the best-case-like scenario rarely occurs and the average time complexity is Θ(n^4), or vice versa. So we cannot infer the average time complexity from the worst/best-case time complexities unless we are given the input probability distribution, the algorithm itself, or the recurrence relation. (If the best- and worst-case time complexities are the same, then of course we can conclude that the average time complexity equals them.)
If the algorithm is provided, we can calculate the average time complexity by making some very basic assumptions about the input (such as an equally likely distribution). For example, in linear search the best case is O(1) and the worst case is O(n). Assuming an equally likely distribution, you can conclude that the average time complexity is O(n) using the expectation formula: the sum over inputs i of (probability of input i) * (number of operations for that input).
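Spelling that expectation out for linear search (my own worked instance, assuming the target is always present, each of the n positions is equally likely, and finding it at position i costs i comparisons):

    E[T(n)] \;=\; \sum_{i=1}^{n} \frac{1}{n} \cdot i \;=\; \frac{1}{n} \cdot \frac{n(n+1)}{2} \;=\; \frac{n+1}{2} \;\in\; \Theta(n).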
Lastly, your average time complexity CANNOT be Θ(n^2), because your best- and worst-case time complexities are worse than quadratic. It doesn't make sense to expect this algorithm to perform n^2 operations on average while it performs n^3 operations even in the best case.
Time complexity for best case <= time complexity for average <= time complexity for worst case

Understanding when to use theta for time complexity

I (believe I) understand the definitions of Big-O, Big-Ω and Big-Θ: Big-O is the asymptotic upper bound, Big-Ω is the asymptotic lower bound and Big-Θ is the asymptotic tight bound. However, I keep getting confused about the usage of Θ in certain situations, such as insertion sort, which is listed with a best case of Ω(n) and a worst case of O(n^2).
From what I understand, this says that insertion sort will:
Take at least linear time (it won't run any faster than linear time); according to Big-Ω.
Take at most n^2 time (it won't take any longer than n^2); according to Big-O.
The confusion arises from my understanding of when to use Big-Θ. I was led to believe that you can only use Big-Θ when the values of Big-O and Big-Ω are the same. If that's the case, why is insertion sort considered to be Θ(n^2) when its Ω and O values are different?
Basically, you can only use Big-Θ when there is no asymptotic gap between the upper bound and the lower bound on the running time of the algorithm:
In your example, insertion sort takes at most O(n^2) time (in the worst case) and at least Ω(n) time (in the best case). So O(n^2) is an upper bound on the algorithm's running time, and Ω(n) is a lower bound. Since these two are not the same, you cannot use Big-Θ to describe the running time of the insertion-sort algorithm.
However, consider the selection-sort algorithm. Its worst-case running time is O(n^2), and its best-case running time is Ω(n^2). Therefore, since the upper bound and the lower bound are the same (asymptotically), you can say that the running time of the selection-sort algorithm is Θ(n^2).
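An empirical way to see the contrast is to count comparisons. The sketch below is my own illustration (not part of the answer): on already-sorted input, insertion sort makes about n comparisons while selection sort still makes about n^2/2, because its inner loop always scans the whole remaining suffix.

    # Count comparisons made by insertion sort and selection sort,
    # to contrast their best-case behaviour on sorted input.

    def insertion_sort_comparisons(a):
        a = list(a)
        comparisons = 0
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0:
                comparisons += 1          # compare key against a[j]
                if a[j] > key:
                    a[j + 1] = a[j]
                    j -= 1
                else:
                    break
            a[j + 1] = key
        return comparisons

    def selection_sort_comparisons(a):
        a = list(a)
        comparisons = 0
        for i in range(len(a) - 1):
            m = i
            for j in range(i + 1, len(a)):
                comparisons += 1          # every remaining element is inspected
                if a[j] < a[m]:
                    m = j
            a[i], a[m] = a[m], a[i]
        return comparisons

    n = 1000
    print(insertion_sort_comparisons(range(n)))         # 999    (~n, best case)
    print(insertion_sort_comparisons(range(n, 0, -1)))  # 499500 (~n^2/2, worst case)
    print(selection_sort_comparisons(range(n)))         # 499500 (~n^2/2, even when sorted)
    print(selection_sort_comparisons(range(n, 0, -1)))  # 499500 (~n^2/2)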

When to use big O notation and when to use big Theta notation

I understand that Big-O is an upper bound and Big-Θ is a tight bound, as when we write f(n) = O(g(n)) and similarly for Big-Θ.
But how do we know whether a particular algorithm is better described by Big-Θ notation or by Big-O?
For example, the time complexity of selection sort is given as Big-Θ of n^2 rather than Big-O of n^2. Why?
It's not a question of which is better but rather of what you want to study.
If you want to study the worst-case scenario, then you can use the upper-bound notation. Keep in mind that the tighter the bound the better, but in some cases it's difficult to calculate the tight bound.
Generally, when people speak about big-O or big-Theta here they mean the same thing, so for selection sort you can also use big-O notation.
I agree with the answer above that it is more about what you are trying to study about the algorithm. If you just want the worst-case running time, meaning that in the worst case the algorithm will run in at most a certain time, then it is best to use Big-O. For example, for selection sort the worst case would be O(n^2). If you want to study both the worst-case and the best-case running times, then you would also want to find the Big-Omega. If these are the same (meaning the algorithm takes asymptotically the same time in the best and worst cases), then you would want to use Big-Theta to state that this time is a tight bound for the algorithm being studied. This is usually more descriptive because it gives a better picture of how the algorithm will behave on a large dataset. For selection sort, because the best- and worst-case running times are both about n^2 (because of the nested loops), the average is also Θ(n^2).

Is "best case performance Θ(1) -> running time ≠ Θ(log n)" valid?

This argument is used to justify that the running time of an algorithm can't be considered Θ(f(n)) but should be O(f(n)) instead.
E.g. this question about binary search: Is binary search theta log (n) or big O log(n)
MartinStettner's response is even more confusing.
Consider *-case performances:
Best-case performance: Θ(1)
Average-case performance: Θ(log n)
Worst-case performance: Θ(log n)
He then quotes Cormen, Leiserson, Rivest: "Introduction to Algorithms":
What we mean when we say "the running time is O(n^2)" is that the worst-case running time (which is a function of n) is O(n^2) ...
Doesn't this suggest the terms running time and worst-case running time are synonymous?
Also, if the running time refers to a function f(n) of natural inputs, then there has to be a Θ class which contains it, e.g. Θ(f(n)), right? This indicates that you are obligated to use O notation only when the running time is not known very precisely (i.e. only an upper bound is known).
When you write O(f(n)), that means that the running time of your algorithm is bounded above by a function c*f(n), where c is a constant. It also means that your algorithm can complete in far fewer steps than c*f(n). We often use the Big-O notation because we want to include the possibility that the algorithm completes faster than we indicate. On the other hand, Θ(f(n)) means that the algorithm always completes in about c*f(n) steps, up to constant factors. Binary search is O(log(n)) because usually it will complete in about log(n) steps, but it could complete in one step if you get lucky (best-case performance).
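A minimal sketch (my own, not from the answer) that makes both cases visible by counting loop iterations of a standard iterative binary search:

    def binary_search_steps(a, target):
        # Return how many loop iterations the search takes.
        lo, hi = 0, len(a) - 1
        steps = 0
        while lo <= hi:
            steps += 1
            mid = (lo + hi) // 2
            if a[mid] == target:
                return steps
            elif a[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return steps  # target absent: still ~log2(n) iterations

    a = list(range(1_000_000))
    print(binary_search_steps(a, a[(len(a) - 1) // 2]))  # best case: 1 step
    print(binary_search_steps(a, 0))                     # near worst case: ~log2(1e6) ≈ 20 steps

Both extremes are consistent with O(log n), but only the worst case is Θ(log n).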
I always get confused when I read about running times.
For me, the running time is the time an implementation of an algorithm needs to execute on a computer. This can differ in many ways, and so it is a complicated thing.
So I think "complexity of an algorithm" is a better term.
Now, the complexity is (in most cases) a worst-case complexity. If you know an upper bound for the worst case, you also know that it can only get better in the other cases.
So, if you know that there exist some (maybe trivial) cases where your algorithm does only a few (a constant number of) steps and stops, you don't have to care about a lower bound, and so you (normally) use an upper bound in big-O or little-o notation.
If you do your calculations carefully, you can also use the Θ notation.
But notice: all complexities only hold for the cases they are attached to. This means: if you make assumptions like "the input is a best case", this affects your calculation and also the resulting complexity. For the binary search you posted, the complexity is given under three different assumptions.
You can generalize it by saying: "The complexity of Binary Search is in O(log n)", since Θ(log n) means "Ω(log n) and O(log n)" and O(1) is a subset of O(log n).
To summarize:
- If you know the very precise function for the complexity, you can give the complexity in Θ-notation.
- If you want an overall upper bound, you have to use O-notation, unless the lower bound over all input cases matches the upper bound.
- In most cases you have to use O-notation, since algorithms are usually too complex to get a close upper and lower bound.

How to determine whether to use big O, theta or omega notation for the time complexity of an algorithm

For example, if the time complexity of merge sort is O(n log n), why is it big O and not theta or omega? I know the definitions of these, but what I do not understand is how to determine which notation to use based on the definitions.
For most algorithms, you are basically concerned with an upper bound on the running time. For example, say you have an algorithm to sort an array of numbers. You would most likely be concerned with how fast the algorithm will run in the worst possible case.
Hence the complexity of merge sort is mostly written as O(n log n), even though it would be better to express it as Θ(n log n), because Theta notation is a tighter bound. And merge sort runs in Θ(n log n) time because it always consumes about this much time, no matter what the input is.
You will mostly not see omega notation, again because we are usually concerned with upper bounds on the running time, not lower bounds.
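One way to convince yourself that merge sort does Θ(n log n) work on every input is to count element moves during merging: each recursion level copies all n elements, and there are about log2(n) levels. The sketch below is my own illustration (not from the answer):

    import random

    def merge_sort_moves(a):
        # Count how many elements are written while merging.
        moves = 0

        def sort(a):
            nonlocal moves
            if len(a) <= 1:
                return a
            mid = len(a) // 2
            left, right = sort(a[:mid]), sort(a[mid:])
            merged = []
            i = j = 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    merged.append(left[i]); i += 1
                else:
                    merged.append(right[j]); j += 1
                moves += 1
            moves += (len(left) - i) + (len(right) - j)
            merged += left[i:] + right[j:]
            return merged

        sort(list(a))
        return moves

    n = 4096  # a power of two, so exactly log2(n) = 12 merge levels
    print(merge_sort_moves(range(n)))                    # 49152 = n * log2(n), sorted input
    print(merge_sort_moves(range(n, 0, -1)))             # 49152, reversed input
    print(merge_sort_moves(random.sample(range(n), n)))  # 49152, random input

The move count is identical for all three inputs, which is exactly why Θ(n log n), rather than just O(n log n), is the honest bound for merge sort.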
