Big Omega and Big Theta - algorithm

A function like f(n) = 3n^2 + 2 is O(n^2) because n^2 is the highest-order term in the function. However, the function f(n) = n^3 is not O(n^2) because its largest exponent is 3, not 2.
So in order to make a guess like this on Big Omega or Big Theta, what should we look for in the function? Can we do something analogous to what we did for Big O notation above?
For example, say the question asks us to find the Big Omega or Big Theta of the function f(n) = 3n^2 + 1. Is f(n) O(n), Big Omega(n), or Big Theta(n)? If I had to take an educated guess on whether this function is O(n), I would say no (because the largest exponent of the function is 2, not 1), and I would then prove this more formally, for example by induction.
So, can we do something analogous to what we did with Big O notation in the first example? What should I look for in the function to guess what the Big Omega and Theta will be, and to determine if the "educated guess" is correct?

Your example uses polynomials, so I will assume that.
Your polynomial is O(n^k) if k is greater than or equal to the order of your polynomial.
Your polynomial is Omega(n^k) if k is less than or equal to the order of your polynomial.
Your polynomial is Theta(n^k) if it is both O(n^k) and Omega(n^k).
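As a concrete illustration of these rules, here is a small Python sketch that numerically spot-checks witness constants for the definitions; the particular constants c_upper = 4, c_lower = 3, and n0 = 1 are assumptions chosen for this example, not part of the rules above.

# Numerical sanity check (not a proof) that f(n) = 3n^2 + 1 is Theta(n^2).
# The witnesses c_upper = 4, c_lower = 3, n0 = 1 are assumed for illustration.

def f(n):
    return 3 * n**2 + 1

c_upper, c_lower, n0 = 4, 3, 1

for n in range(n0, 10_000):
    assert f(n) <= c_upper * n**2   # consistent with f in O(n^2)
    assert f(n) >= c_lower * n**2   # consistent with f in Omega(n^2)

print("f(n) stayed between 3*n^2 and 4*n^2 for all tested n >= 1")

A loop over finitely many n is only a sanity check; an actual proof would show the inequalities hold for every n >= n0, for example algebraically or by induction as the question suggests.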

So, can we do something analogous to what we did with Big O notation in the first example?
If you're looking for something that allows you to eyeball if something is Big Omega, Big O, or Big Theta for polynomials, you can use the Theorem of Polynomial Orders (pretty much what Patrick87 said).
Basically, the Theorem of Polynomial Orders allows you to solely look at the highest order term and use that as your desired bound for Big O, Big Omega, and Big Theta.
What should I look for in the function to guess what the Big Omega and Theta will be, and to determine if the "educated guess" is correct?
Ideally, your function would be a polynomial, as that would make the problem much simpler. But it can also be, for example, a logarithmic or an exponential function.
To determine if the "educated guess" is correct, you have to first understand what kind of runtime you are looking for. Ask yourself: am I looking for the worst case running time for this algorithm? or am I looking for the best case running time for this algorithm? or am I looking for the general running time of the algorithm?
If you are looking at the worst-case running time of the algorithm, you can prove Big Omega through an example: exhibiting one family of inputs on which the algorithm is slow already gives a lower bound on the worst case (and the Theorem of Polynomial Orders helps if the resulting function is a polynomial). However, you must analyze the algorithm over all inputs to be able to prove Big O and Big Theta of the worst case.
If you are looking at the best-case running time of the algorithm, the situation is mirrored: a single family of inputs on which the algorithm is fast gives an upper bound, so an example (or the Theorem of Polynomial Orders, if it's a polynomial) can prove Big O. However, Big Omega and Big Theta of the best case can only be proved by analyzing the algorithm.
In other words, an example can only establish the less informative bound: a lower bound on the worst-case running time, or an upper bound on the best-case running time.
For proving the general running time of an algorithm, you have to make sure that the running-time function you have been given holds for all inputs - a single example is not sufficient in this case. When you don't have a function valid for all inputs, you have to analyze the algorithm to prove any of the three (Big O, Big Omega, Big Theta) for all inputs.
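As a hedged illustration of establishing a worst-case lower bound from one family of inputs, the following Python sketch uses insertion sort on reversed lists (an assumption chosen for this example; the answer above names no specific algorithm) and counts basic operations, which grow roughly quadratically.

# Illustration only: one family of inputs (reversed lists) shows insertion sort
# takes on the order of n^2 steps on SOME input, supporting an Omega(n^2) claim
# for its worst case. Insertion sort is an assumed example.

def insertion_sort_ops(a):
    a = list(a)
    ops = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # each comparison/shift counts as one op
            a[j + 1] = a[j]
            j -= 1
            ops += 1
        a[j + 1] = key
    return ops

for n in (100, 200, 400, 800):
    print(n, insertion_sort_ops(range(n, 0, -1)))   # ops = n*(n-1)/2 on reversed input

Doubling n roughly quadruples the count, which is consistent with (but does not by itself prove) a quadratic worst case; the matching O(n^2) upper bound still requires analyzing the algorithm for every input.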

Related

BigO Notation, understanding

I saw in one of the videos (https://www.youtube.com/watch?v=A03oI0znAoc&t=470s) that if f(n) = 2n + 3, then Big O is O(n).
Now my question is: if I am a developer and I am given O(n) as the upper bound of f(n), how do I know what exact value the upper bound is? In 2n + 3, we drop the 2 (since it is a constant factor) and the 3 (since it is also a constant). But if I take n = 1, then f(1) = 5 while g(1) = 1, so I can't say g(n) is an upper bound at n = 1.
1 cannot be an upper bound for 5. I find this hard to understand.
I know this is a partial (and possibly wrong) answer.
From Wikipedia,
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
In your example,
f(n) = 2n + 3 has the same growth rate as g(n) = n.
If you plot the functions, you will see that both grow linearly; as n -> infinity, the constant factor 2 and the added 3 become irrelevant to the growth rate.
In Big O notation, the value of f(n) = 2n + 3 at n = 1 means nothing; you need to look at the trend as n grows, not at discrete values.
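To make the "upper bound" concrete, here is a minimal sketch of the formal definition: f(n) is O(n) because there exist some constant c and threshold n0 with f(n) <= c*n for all n >= n0. The particular witnesses c = 5 and n0 = 1 below are assumptions chosen for this example.

# f(n) = 2n + 3 is O(n): for c = 5 and n0 = 1 we have 2n + 3 <= 5n whenever n >= 1,
# since 2n + 3 <= 5n  <=>  3 <= 3n  <=>  n >= 1. The loop is just a spot check.

def f(n):
    return 2 * n + 3

c, n0 = 5, 1
assert all(f(n) <= c * n for n in range(n0, 100_000))
print("2n + 3 <= 5n holds for every tested n >= 1")

So the bound is not about a single point like n = 1; it is a statement that, beyond some threshold, a constant multiple of n dominates f(n).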
As a developer, you will use big-O as a first indication for deciding which algorithm to use. If you have an algorithm which is, say, O(n^2), you will try to find out whether there is another one which is, say, O(n). If the problem is inherently O(n^2), then the big-O notation will not provide further help and you will need to use other criteria for your decision. However, if the problem is not inherently O(n^2) but O(n), you should discard any algorithm that happens to be O(n^2) and find an O(n) one.
So, the big-O notation will help you to better classify the problem and then try to solve it with an algorithm whose complexity has the same big-O. If you are lucky enough to find two or more algorithms with this complexity, then you will need to weigh them using a different criterion.
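As a hypothetical illustration of this kind of decision (the task and both implementations are assumptions, not from the answer above): checking a list for duplicates can be done with a nested loop in O(n^2) or with a hash set in expected O(n), and a developer would normally prefer the latter.

# Hypothetical example: two algorithms for the same problem with different big-O.

def has_duplicates_quadratic(items):     # O(n^2): compares every pair
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):        # O(n) expected: one pass with a hash set
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

print(has_duplicates_quadratic([3, 1, 4, 1, 5]), has_duplicates_linear([3, 1, 4, 1, 5]))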

How to know if algorithm is big O or big theta

Can someone briefly explain why an algorithm would be O(f(n)) and not Θ(f(n))? I get that to be Θ(f(n)) it must be O(f(n)) and Ω(f(n)), but how do you know whether a particular algorithm is Θ(f(n)) or only O(f(n))? I find it hard not to see big O as a worst-case run time. I know it's just a bound, but how is the bound determined? For example, I can see a search in a binary search tree running in constant time if the element is in the root, but I think this has nothing to do with big O.
I think it is very important to distinguish bounds from cases.
Big O, Big Ω, and Big Θ all concern themselves with bounds. They are ways of defining behavior of an algorithm, and they give information on the growth of the number of operations as n (number of inputs) grows increasingly large.
In class the Big O of a worst case is often discussed, and I think that can be confusing sometimes because it conflates the idea of asymptotic behavior with a singular worst case. Big O is concerned with behavior as n approaches infinity, not a singular case. Big O(f(x)) is an upper bound. It is a guarantee that regardless of input, the algorithm will have a running time no worse than some positive constant multiplied by f(x).
As you mentioned Θ(f(x)) only exists if Big O is O(f(x)) and Big Ω is Ω(f(x)). In the question title you asked how to determine if an algorithm is Big O or Big Θ. The answer is that the algorithm could be both Θ(f(x)) and O(f(x)). In cases where Big Θ exists, the lower bound is some positive constant A multiplied by f(x) and the upper bound is some positive constant C multiplied by f(x). This means that when Θ(f(x)) exists, the algorithm can perform no worse than C*f(x) and no better than A*f(x), regardless of any input. When Θ(f(x)) exists, you are given a guarantee of how the algorithm will behave no matter what kind of input you feed it.
When Θ(f(x)) exists, so does O(f(x)). In these cases, it is valid to state that the running time of the algorithm is O(f(x)) or that the running time of the algorithm is Θ(f(x)). They are both true statements. Giving the Big Θ notation is just more informative, since it provides information on both the upper and lower bound. Big O only provides information on the upper bound.
When Big O and Big Ω have different functions for their bounds (i.e when Big O is O(g(x)) and Big Ω is Ω(h(x)) where g(x) does not equal h(x)), then Big Θ does not exist. In these cases, if you want to provide a guarantee of the upper bound on the algorithm, you must use Big O notation.
Above all, it is imperative that you differentiate between bounds and cases. Bounds give guarantees on the behavior of the algorithm as n becomes very large. Cases are more on an individual basis.
Let's work through an example. Imagine an algorithm BestSort that sorts a list of numbers by first checking whether it is sorted and, if it is not, sorting it using MergeSort. This algorithm BestSort has a best-case complexity of Ω(n), since it may discover a sorted list, and it has a worst-case complexity of O(n log(n)), which it inherits from MergeSort. Thus this algorithm has no Theta complexity. Compare this to pure MergeSort, which is always Θ(n log(n)) even if the list is already sorted.
I hope this helps you a bit.
EDIT
Since there has been some confusion I will provide some kind of pseudo code:
BestSort(A) {
    if (A is sorted)          // this check has a complexity of O(n)
        return A;
    else
        return mergesort(A);  // mergesort has a complexity of Θ(n log n)
}
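For readers who want something runnable, here is a minimal Python sketch of the same idea, assuming a standard top-down mergesort as the fallback (the function names are mine, not from the answer above).

# Runnable sketch of BestSort: O(n) sortedness check, otherwise mergesort.

def merge_sort(a):                       # standard top-down mergesort, Θ(n log n)
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def best_sort(a):
    if all(a[i] <= a[i + 1] for i in range(len(a) - 1)):   # O(n) check
        return a                                           # best case: already sorted
    return merge_sort(a)                                   # otherwise: Θ(n log n)

print(best_sort([1, 2, 3]), best_sort([3, 1, 2]))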

How to estimate the time complexity for insertion sort by measuring the gradient of a log-log plot?

This is the graph which I am expected to analyze. I have to find the gradient (slope) and from that I am expected to deduce the time complexity.
I have found that the slope is equal to 1.91. If that is true, what else should I do?
The quotient of the logarithms is approximately 2. What does that mean once we remove the logarithms?
log(T(n)) / log(n) = 2
log(T(n)) = 2 * log(n)
log(T(n)) = log(n²)
T(n) = n²
T(n) denotes algorithm’s time complexity. Of course we are talking in asymptotic terms, i.e. using Big O notation we say that
T(n) ∈ O(n²).
You measured the value 2 for large inputs, and you are assuming it will remain the same for all larger ones.
You can read more at a page by one of the tutors at University of Toronto. It uses basic calculus to explain how it works. Still, the idea behind all this is that logarithms make multiplicative constants from constant exponents and additive constants from multiplicative constants.
Also regarding interpretation of the plot, a similar question popped up here on Stack Overflow recently: Log-log plot/graph of algorithm time complexity
But note that this is really just an estimation of time complexity. You cannot prove time complexity of an algorithm by just running it on a finite set of inputs. This method can give you a good guess on what to try to prove using analysis of the algorithm, though.
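Here is a short sketch of how such a slope can be estimated programmatically, assuming you have measured running times; the timing data below is made up for illustration, and the fit is a plain least-squares line on the log-log points.

# Estimate the exponent b in T(n) ~ c * n^b from measurements by fitting a line
# to the points (log n, log T(n)); the slope of that line is the exponent estimate.
import math

# Hypothetical measurements (n, seconds) for insertion sort; replace with real timings.
data = [(1000, 0.012), (2000, 0.049), (4000, 0.198), (8000, 0.801)]

xs = [math.log(n) for n, _ in data]
ys = [math.log(t) for _, t in data]

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)

print(f"estimated exponent: {slope:.2f}")   # ~2 suggests T(n) grows roughly like n^2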

Plain English explanation of Theta notation?

What is a plain English explanation of Theta notation? With as little formal definition as possible and simple mathematics.
How theta notation is different from the Big O notation ? Could anyone explain in plain English?
In algorithm analysis how there are used? I am confused?
If an algorithm's run time is Big Theta(f(n)), it is asymptotically bounded above and below by f(n). Big O is the same except that the bound is only above.
Intuitively, Big O(f(n)) says "we can be sure that, ignoring constant factors and terms, the run time never exceeds f(n)." In rough words, if you think of run time as "bad", then Big O is a worst case. Big Theta(f(n)) says "we can be sure that, ignoring constant factors and terms, the run time always varies as f(n)." In other words, Big Theta is a known tight bound: it's both worst case and best case.
A final try at intuition: Big O is "one-sided." An O(n) run time is also O(n^2) and O(2^n). This is not true with Big Theta. If you have an algorithm whose run time is O(n), then you already have a proof that it is not Big Theta(n^2). It may or may not be Big Theta(n).
An example is comparison sorting. Information theory tells us that sorting n elements requires at least ceiling(log2(n!)) ≈ n log n comparisons in the worst case, and we have actually invented O(n log n) algorithms (where n is the number of elements), so comparison sorting is Big Theta(n log n).
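A small, hedged illustration of this bound in action: the sketch below counts the comparisons Python's built-in comparison sort (Timsort) makes on shuffled input and compares the count with n*log2(n); the counting trick and the choice of input are mine, for illustration only.

# Count comparisons made by the built-in comparison sort on shuffled input.
import functools, math, random

comparisons = 0

def cmp(a, b):
    global comparisons
    comparisons += 1
    return (a > b) - (a < b)

for n in (1_000, 10_000, 100_000):
    comparisons = 0
    sorted(random.sample(range(n), n), key=functools.cmp_to_key(cmp))
    print(n, comparisons, round(n * math.log2(n)))   # the two counts stay within a small factor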
I have always wanted to put this down in simple words. Here is my try.
If an algorithm's time or space complexity is expressed in
Big O : e.g. O(n) - n is the upper limit; up to a constant factor, the running time grows no faster than n.
Big Omega : e.g. Ω(n) - n is the lower limit; up to a constant factor, the running time grows at least as fast as n.
Theta : e.g. Θ(n) - n is both the upper limit and the lower limit; up to constant factors, the running time grows exactly like n.

Asymptotic Notations and forming Recurrence relations by analysing the algorithms

I went through many lectures, videos, and sources regarding asymptotic notations. I understood what O, Omega, and Theta were. But in algorithms, why do we almost always use Big O notation and not Theta or Omega (I know it sounds noobish, but please help me with this)? What exactly are these upper and lower bounds in the context of algorithms?
My next question is, how do we find the complexity of an algorithm? Say I have an algorithm: how do I find the recurrence relation T(N) and then compute the complexity from it? How do I form these equations? For example, in the case of recursive linear search, T(n) = T(n-1) + 1. How?
It would be great if someone could explain this to me as if I were a beginner, so that I can understand it even better. I found some answers on Stack Overflow, but they weren't convincing enough.
Thank you.
Why we use big-O so much compared to Theta and Omega: This is partly cultural, rather than technical. It is extremely common for people to say big-O when Theta would really be more appropriate. Omega doesn't get used much in practice both because we frequently are more concerned about upper bounds than lower bounds, and also because non-trivial lower bounds are often much more difficult to prove. (Trivial lower bounds are usually the kind that say "You have to look at all of the input, so the running time is at least equal to the size of the input.")
Of course, these comments about lower bounds also partly explain Theta, since Theta involves both an upper bound and a lower bound.
Coming up with a recurrence relation: There's no simple recipe that addresses all cases. Here's a description for relatively simple recursive algorithms.
Let N be the size of the initial input. Suppose there are R recursive calls in your recursive function. (Example: for mergesort, R would be 2.) Further suppose that all the recursive calls reduce the size of the initial input by the same amount, from N to M. (Example: for mergesort, M would be N/2.) And, finally, suppose that the recursive function does W work outside of the recursive calls. (Example: for mergesort, W would be N for the merge.)
Then the recurrence relation would be T(N) = R*T(M) + W. (Example: for mergesort, this would be T(N) = 2*T(N/2) + N.)
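As a further illustration of reading R, M, and W off the code (binary search is my example here, not one from the answer above): a recursive binary search has R = 1 recursive call, the call shrinks the input from N to M = N/2, and the work outside the call is constant, so the recipe gives T(N) = T(N/2) + c, which solves to O(log N).

# Recursive binary search annotated with the R / M / W recipe from above.

def binary_search(a, target, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    if lo >= hi:                         # base case: empty range
        return -1
    mid = (lo + hi) // 2                 # constant work outside the recursion (W)
    if a[mid] == target:
        return mid
    if a[mid] < target:                  # R = 1 recursive call on half the range (M = N/2)
        return binary_search(a, target, mid + 1, hi)
    return binary_search(a, target, lo, mid)

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # -> 3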
When we create an algorithm, we usually want it to be as fast as possible and we need to consider every case. This is why we use O: we want to bound the complexity from above and be sure that our algorithm will never exceed that bound.
To assess the complexity, you have to count the number of steps. In the recurrence T(n) = T(n-1) + 1, there will be n steps before we reach T(0), so the complexity is linear. (I'm talking about time complexity, not space complexity.)
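A small sketch of where T(n) = T(n-1) + 1 comes from for recursive linear search (this implementation is mine, for illustration): each call does constant work (the "+ 1") and makes one recursive call on an input that is one element shorter (the "T(n-1)"), so unrolling the recurrence gives n steps, i.e. O(n).

# Recursive linear search: T(n) = T(n-1) + 1, which unrolls to O(n).

def linear_search(a, target, i=0):
    if i == len(a):                           # base case T(0): constant work
        return -1
    if a[i] == target:                        # the "+ 1": constant work per call
        return i
    return linear_search(a, target, i + 1)    # the "T(n-1)": one call on the rest

print(linear_search([4, 8, 15, 16, 23, 42], 16))   # -> 3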

Resources