Can we estimate Big Omega just like we do with Big O? - algorithm

I keep reading how easy it is to estimate Big O: drop the less dominant terms and the constants. My question is, can we do the same for Big Omega?
I know input dependency has nothing to do with asymptotic analysis: we can have an upper bound (Big O) and a lower bound (Big Omega) for best, average and worst case analysis. But I am confused about how I can quickly estimate the Big Omega of my algorithm, in the worst case for example.
If you could provide examples to clarify my confusion I would truly appreciate it.

Just two examples should enlighten you.
n² + n is O(n²) (it is also O(n³)).
n² + n is Ω(n²) (it is also Ω(n)).
You consider the dominant term.
n + (n cos n)² is O(n²) and Ω(n).
Upper and lower bound.
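As a rough numeric sanity check (not a proof) of the second example, you can look at the ratios f(n)/n² and f(n)/n for growing n: the first stays bounded above by a constant (so the upper bound O(n²) is plausible) and the second stays bounded below by a constant (so the lower bound Ω(n) is plausible). A minimal Python sketch, with an illustrative function name f:

```python
import math

# f(n) = n + (n*cos(n))**2, the function from the example above.
def f(n):
    return n + (n * math.cos(n)) ** 2

for n in [10, 100, 1_000, 10_000, 100_000]:
    upper_ratio = f(n) / n**2   # stays below a constant  -> consistent with f being O(n^2)
    lower_ratio = f(n) / n      # stays above a constant  -> consistent with f being Omega(n)
    print(f"n={n:>7}  f(n)/n^2={upper_ratio:7.3f}  f(n)/n={lower_ratio:12.1f}")
```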

Related

In asymptotic notation, why do we not use every possible function to describe the growth rate of our function?

If f(n) = 3n + 8,
we say (or prove) that f(n) = Ω(n).
Why do we not use Ω(1) or Ω(log n) or ... to describe the growth rate of our function?
In the context of studying the complexity of algorithms, the Ω asymptotic bound can serve at least two purposes:
check if there is any chance of finding an algorithm with an acceptable complexity;
check if we have found an optimal algorithm, i.e. such that its O bound matches the known Ω bound.
For these purposes, a tight bound is preferable (mandatory).
Also note that f(n)=Ω(n) implies f(n)=Ω(log(n)), f(n)=Ω(1) and all lower growth rates, and we needn't repeat that.
You can actually do that. Check the Big Omega notation here and let's take Ω(log n) as an example. We have:
f(n) = 3n + 8 = Ω(log n)
because:
limsup (n → ∞) |(3n + 8) / log(n)| = +∞ > 0
(according to the 1914 Hardy-Littlewood definition), or:
liminf (n → ∞) (3n + 8) / log(n) = +∞ > 0
(according to the Knuth definition).
For the definition of the liminf and limsup symbols (with pictures) please check here.
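For intuition, a quick numeric illustration (again, not a proof) shows that the ratio (3n + 8)/log(n) stays bounded away from 0 (in fact it diverges), which is what both definitions require. A hypothetical Python sketch:

```python
import math

# For f(n) = 3n + 8 and g(n) = log(n), the ratio f(n)/g(n) grows without
# bound, so it certainly stays above some positive constant: f(n) = Omega(log n).
for n in [10, 100, 1_000, 10_000, 100_000]:
    print(f"n={n:>7}  (3n+8)/log(n) = {(3 * n + 8) / math.log(n):.1f}")
```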
Perhaps what was really meant is Θ (Big Theta), that is, both O() and Ω() simultaneously.

BigO Notation, understanding

I saw in one of the videos (https://www.youtube.com/watch?v=A03oI0znAoc&t=470s) that if, say, f(n) = 2n + 3, then its Big O is O(n).
Now my question is: if I am a developer and I am given O(n) as the upper bound of f(n), how will I know what exact value the upper bound is? Because in 2n + 3 we remove the 2 (as it is a constant) and the 3 (because it is also a constant). So, if I take n = 1, I can't say that g(n) = n is an upper bound at n = 1.
g(1) = 1 cannot be an upper bound for f(1) = 5. I find this hard to understand.
I know this is a partial (and probably wrong) answer.
From Wikipedia,
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
In your example,
f(n) = 2n+3 has the same growth rate as g(n) = n.
If you plot the functions, you will see that both grow linearly; and as n -> infinity, the constant factor and the constant term become negligible.
In Big O notation, evaluating f(n) = 2n+3 at n=1 means nothing; you need to look at the trend, not at discrete values.
As a developer, you will consider big-O as a first indication when deciding which algorithm to use. If you have an algorithm which is, say, O(n^2), you will try to find out whether there is another one which is, say, O(n). If the problem is inherently O(n^2), then the big-O notation will not provide further help and you will need to use other criteria for your decision. However, if the problem is not inherently O(n^2) but O(n), you should discard any algorithm that happens to be O(n^2) and find an O(n) one.
So, the big-O notation helps you to classify the problem and then try to solve it with an algorithm whose complexity has the same big-O. If you are lucky enough to find 2 or more algorithms with this complexity, then you will need to weigh them using different criteria.
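To see what this looks like in practice, here is a hypothetical sketch comparing an O(n^2) and an (expected) O(n) algorithm for the same toy problem, detecting a duplicate in a list; the function names are made up for the example, and the exact timings depend on the machine and on Python's constant factors:

```python
import random
import time

def has_duplicate_quadratic(items):
    # O(n^2): compare every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    # O(n) expected: remember what we have already seen in a hash set.
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

data = random.sample(range(10_000_000), 5_000)   # no duplicates: worst case for both
for fn in (has_duplicate_quadratic, has_duplicate_linear):
    start = time.perf_counter()
    fn(data)
    print(f"{fn.__name__}: {time.perf_counter() - start:.4f} s")
```

Even at this modest input size the quadratic version is noticeably slower, and the gap widens quickly as n grows.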

What is difference between different asymptotic notations?

I am really very confused about asymptotic notations. As far as I know, Big-O notation is for the worst case, Omega is for the best case and Theta is for the average case. However, I have always seen Big O being used everywhere, even for the best case. For example, in the following link, see the table where the time complexities of different sorting algorithms are listed:
https://en.wikipedia.org/wiki/Best,_worst_and_average_case
Everywhere in the table, Big O notation is used irrespective of whether it is the best case, worst case or average case. Then what is the use of the other two notations and where do we use them?
As far as I know, Big-O notation is for the worst case, Omega is for the best case and Theta is for the average case.
They aren't. Omicron (Big O) is for an (asymptotic) upper bound, Omega is for a lower bound and Theta is for a tight bound, which is both an upper and a lower bound. If the lower and upper bounds of an algorithm are different, then its complexity cannot be expressed with Theta notation.
The concepts of upper, lower and tight bounds are orthogonal to the concepts of best, average and worst case. You can analyze the upper bound of each case, and you can analyze different bounds of the worst case (and any other combination of the above).
Asymptotic bounds are always in relation to the set of variables in the expression. For example, O(n) is in relation to n. The best, average and worst cases emerge from everything else but n. For example, if n is the number of elements, then the different cases might emerge from the order of the elements, or the number of unique elements, or the distribution of values.
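For example (a hypothetical sketch, with a made-up comparison counter): insertion sort on n elements does about n comparisons when the input is already sorted (best case) and about n²/2 comparisons when it is reversed (worst case), so the cases really do emerge from the order of the elements, not from n itself:

```python
def insertion_sort_comparisons(items):
    # Standard insertion sort, instrumented to count element comparisons.
    a = list(items)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 1_000
print("sorted input:  ", insertion_sort_comparisons(range(n)))          # ~n
print("reversed input:", insertion_sort_comparisons(range(n, 0, -1)))   # ~n^2/2
```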
However, I have always seen Big O being used everywhere, even for best case.
That's because the upper bound is almost always the one that is the most important and interesting when describing an algorithm. We rarely care about the lower bound. Just like we rarely care about the best case.
The lower bound is sometimes useful in describing a problem that has been proven to have a particular complexity. For example, it is proven that worst case complexity of all general comparison sorting algorithms is Ω(n log n). If the sorting algorithm is also O(n log n), then by definition, it is also Θ(n log n).
Big O is for upper bound, not for worst case! There is no notation specifically for worst case/best case. The examples you are talking about all have Big O because they are all upper bounded by the given value. I suggest you take another look at the book from which you learned the basics because this is immensely important to understand :)
EDIT: Answering your doubt: because usually we are concerned with our at-most performance, i.e. when we say our algorithm performs in O(log n) in the best-case scenario, we know that its performance will not be worse than logarithmic time in that scenario. It is the upper bound that we usually seek to reduce, and hence we usually mention big O to compare algorithms (not to say that we never mention the other two).
O(...) basically means "not (much) slower than ...".
It can be used for all three cases ("the worst case is not slower than", "the best case is not slower than", and so on).
Omega is the opposite: you can say something can't be much faster than ... . Again, it can be used with all three cases. Compared to O(...), it's not that important, because telling someone "I'm certain my program is not faster than yours" is nothing to be proud of.
Theta is a combination: It's "(more or less) as fast as" ..., not just slower/faster.
The Big-O notation is something like <= in terms of asymptotic growth.
For example if you see this :
x = O(x^2) says that x <= x^2 (in asymptotic terms).
And it means "x is at most as complex as x^2", which is usually what you are interested in.
Even when you compare best/average cases, you can say "for the best possible input, I will have AT MOST this complexity".
There are two things mixed up here. Big O, Omega and Theta are purely mathematical constructions. For example, O(f(n)) is the set of functions which are less than c * f(n), for some c > 0 and for all n >= some minimum value N0. With that definition, n = O(n^4), because n ≤ n^4. Also 100 = O(n), because 100 <= n for n ≥ 100, or 100 <= 100 * n for n ≥ 1.
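As a small illustration of this set-style definition (numeric evidence only, not a proof, and with a purely hypothetical helper name), you can check the inequality f(n) <= c * g(n) over a range of n:

```python
def looks_like_big_o(f, g, c, n0, n_max=10_000):
    # Checks f(n) <= c*g(n) for n0 <= n < n_max (evidence for f = O(g), not a proof).
    return all(f(n) <= c * g(n) for n in range(n0, n_max))

print(looks_like_big_o(lambda n: n,   lambda n: n**4, c=1,   n0=1))   # n   = O(n^4) -> True
print(looks_like_big_o(lambda n: 100, lambda n: n,    c=100, n0=1))   # 100 = O(n)   -> True
```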
For an algorithm, you want to give the worst case speed, the average case speed, rarely the best case speed, and sometimes the amortised average speed (that's when running an algorithm once does work that can be reused when it's run again, like calculating n! for n = 1, 2, 3, ..., where each calculation can take advantage of the previous one). And whatever speed you measure, you can state the result in any of the notations.
For example, you might have an algorithm where you can prove that the worst case is O (n^2), but you cannot prove whether there are faster special cases or not, and you also cannot prove that the algorithm isn't actually faster, like O (n^1.9). So O (n^2) is the only thing that you can prove.

Plain English explanation of Theta notation?

What is a plain English explanation of Theta notation? With as little formal definition as possible and simple mathematics.
How is Theta notation different from Big O notation? Could anyone explain in plain English?
How are they used in algorithm analysis? I am confused.
If an algorithm's run time is Big Theta(f(n)), it is asymptotically bounded above and below by f(n). Big O is the same except that the bound is only above.
Intuitively, Big O(f(n)) says "we can be sure that, ignoring constant factors and terms, the run time never exceeds f(n)." In rough words, if you think of run time as "bad", then Big O is a worst case. Big Theta(f(n)) says "we can be sure that, ignoring constant factors and terms, the run time always varies as f(n)." In other words, Big Theta is a known tight bound: it's both worst case and best case.
A final try at intuition: Big O is "one-sided." An O(n) run time is also O(n^2) and O(2^n). This is not true with Big Theta. If you have an algorithm whose run time is O(n), then you already have a proof that it is not Big Theta(n^2). It may or may not be Big Theta(n).
An example is comparison sorting. Information theory tells us sorting requires at least on the order of n log n comparisons (more precisely, ceiling(log2(n!)) of them), and we have actually invented O(n log n) sorting algorithms (where n is the number of elements), so sorting by comparisons is Big Theta(n log n).
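A rough numeric illustration of that information-theoretic bound (under the usual assumption that a comparison sort must distinguish all n! possible orderings): log2(n!) grows essentially like n*log2(n), which is why the O(n log n) algorithms match the lower bound up to constant factors.

```python
import math

def log2_factorial(n):
    # log2(n!) computed via the log-gamma function, without forming n! itself.
    return math.lgamma(n + 1) / math.log(2)

for n in [10, 100, 1_000, 10_000, 100_000]:
    print(f"n={n:>7}  log2(n!)={log2_factorial(n):>12.0f}  n*log2(n)={n * math.log2(n):>12.0f}")
```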
I have always wanted to put this down in Simple words. Here is my try.
If an algorithm's time or space complexity is expressed in
Big O : e.g. O(n) - means n is the upper limit. The actual growth could be less than or equal to n.
Big Omega : e.g. Ω(n) - means n is the lower limit. The actual growth could be equal to or more than n.
Theta : e.g. Θ(n) - means n is both the upper limit and the lower limit: the growth is exactly n.

Asymptotic Notations and forming Recurrence relations by analysing the algorithms

I went through many lectures, videos and sources regarding asymptotic notations. I understood what O, Omega and Theta are. But in algorithms, why do we always use only Big Oh notation, and not Theta and Omega (I know it sounds noobish, but please help me with this)? What exactly are this upper bound and lower bound with respect to algorithms?
My next question is, how do we find the complexity of an algorithm? Say I have an algorithm: how do I find the recurrence relation T(N) and then compute the complexity from it? How do I form these equations? Like in the case of linear search done recursively, T(n) = T(n-1) + 1. How?
It would be great if someone could explain this to me treating me as a noob, so that I can understand even better. I found some answers on StackOverflow but they weren't convincing enough.
Thank you.
Why we use big-O so much compared to Theta and Omega: This is partly cultural, rather than technical. It is extremely common for people to say big-O when Theta would really be more appropriate. Omega doesn't get used much in practice both because we frequently are more concerned about upper bounds than lower bounds, and also because non-trivial lower bounds are often much more difficult to prove. (Trivial lower bounds are usually the kind that say "You have to look at all of the input, so the running time is at least equal to the size of the input.")
Of course, these comments about lower bounds also partly explain Theta, since Theta involves both an upper bound and a lower bound.
Coming up with a recurrence relation: there's no simple recipe that addresses all cases. Here's a description for relatively simple recursive algorithms.
Let N be the size of the initial input. Suppose there are R recursive calls in your recursive function. (Example: for mergesort, R would be 2.) Further suppose that all the recursive calls reduce the size of the initial input by the same amount, from N to M. (Example: for mergesort, M would be N/2.) And, finally, suppose that the recursive function does W work outside of the recursive calls. (Example: for mergesort, W would be N for the merge.)
Then the recurrence relation would be T(N) = R*T(M) + W. (Example: for mergesort, this would be T(N) = 2*T(N/2) + N.)
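As a small sketch under those simplifying assumptions, you can unroll the recurrence directly in code and compare it against N*log2(N); the function name T simply mirrors the notation above, and zero work is assumed at the base case:

```python
import math

def T(n):
    # T(N) = R*T(M) + W with R=2, M=N/2, W=N (the mergesort example).
    if n <= 1:
        return 0
    return 2 * T(n // 2) + n

for n in [2**k for k in range(4, 21, 4)]:
    print(f"N={n:>8}  T(N)={T(n):>10}  N*log2(N)={n * math.log2(n):>10.0f}")
```

For powers of two the unrolled T(N) comes out exactly equal to N*log2(N), which is why mergesort is Θ(N log N).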
When we create an algorithm, it's always in order to be the fastest, and we need to consider every case. This is why we use O: we want to bound the complexity from above and be sure that our algorithm will never exceed that bound.
To assess the complexity, you have to count the number of steps. In the recurrence T(n) = T(n-1) + 1, there will be n steps before reaching T(0), so the complexity is linear. (I'm talking about time complexity, not space complexity.)
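For the linear search example, a hypothetical recursive implementation makes the recurrence T(n) = T(n-1) + 1 visible: each call does a constant amount of work (the "+ 1") and then recurses on a problem that is one element smaller.

```python
def linear_search(items, target, i=0):
    # T(0): nothing left to search.
    if i == len(items):
        return -1
    # The constant "+ 1" work of this call: one comparison.
    if items[i] == target:
        return i
    # T(n-1): recurse on the rest of the list.
    return linear_search(items, target, i + 1)

print(linear_search([4, 8, 15, 16, 23, 42], 23))   # -> index 4
```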
