When and Where to use which asymptotic notation - algorithm

I have been through this Big-Oh explanation, understanding the complexity of two loops, difference between big theta and big-oh and also through this question.
I understand that we cannot say that Big-Oh is the worst case, Omega the best case and Theta the average case; Big-Oh has its own best, worst and average cases. But how do we find out whether a specific algorithm belongs to Big-Oh, Big-Theta or Big-Omega? And how can we check whether an algorithm belongs to all of these?

A function f(n) is Big-Oh of a function g(n), written f(n) = O(g(n)), if there exist a positive constant c and a natural number n0 such that f(n) <= c * g(n) for all n > n0. A function f(n) is Big-Omega of g(n), written f(n) = Omega(g(n)), if and only if g(n) = O(f(n)). A function f(n) is Theta of a function g(n), written f(n) = Theta(g(n)), if and only if f(n) = O(g(n)) and f(n) = Omega(g(n)).
To prove any of the three, you do it by showing that some function(s) are Big-Oh of some other functions. Showing that one function is Big-Oh of another is a difficult problem in the general case. Any form of mathematical proof may be helpful; induction proofs in conjunction with intuition for the base cases are not uncommon. Basically, guess values for c and n0 and see if they work. Another option is to choose one of the two and work out a reasonable value for the other.
Note that a function may not be Big-Theta of any other function, if its tightest bounds from above and below are functions with different asymptotic rates of growth. However, I think it's usually a safe bet that most functions are going to be Big-Oh of something reasonably uncomplicated, and all functions typically looked at from this perspective are at least constant-time in the best case - Omega(1).
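As a quick illustration of the guess-and-check step, here is a small Python sketch (the helper name is just for this sketch) that spot-checks a guessed pair (c, n0), using f(n) = 3n + 8 and g(n) = n from the related question further down. A numerical check over a finite range is not a proof, but it quickly rules out hopeless guesses.

def bound_holds(f, g, c, n0, n_max=10_000):
    # Spot-check f(n) <= c * g(n) for every n in (n0, n_max].
    # This only tests a guessed (c, n0); it is not a proof.
    return all(f(n) <= c * g(n) for n in range(n0 + 1, n_max + 1))

f = lambda n: 3 * n + 8
g = lambda n: n

print(bound_holds(f, g, c=4, n0=8))   # True: 3n + 8 <= 4n once n >= 8
print(bound_holds(f, g, c=3, n0=1))   # False: 3n + 8 is never <= 3n

Once a guess survives the spot check, the actual proof is usually a one-line algebraic argument (here: 3n + 8 <= 4n whenever n >= 8).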

Related

In asymptotic notation, why do we not use every possible function to describe the growth rate of our function?

if f(n) = 3n + 8,
for this we say or prove that f(n) = Ω(n)
Why do we not use Ω(1) or Ω(log n) or ... to describe the growth rate of our function?
In the context of studying the complexity of algorithms, the Ω asymptotic bound can serve at least two purposes:
check if there is any chance of finding an algorithm with an acceptable complexity;
check if we have found an optimal algorithm, i.e. such that its O bound matches the known Ω bound.
For these purposes, a tight bound is preferable (indeed mandatory for the second one).
Also note that f(n)=Ω(n) implies f(n)=Ω(log(n)), f(n)=Ω(1) and all lower growth rates, and we needn't repeat that.
You can actually do that. Check the Big Omega notation here and let's take Ω(log n) as an example. We have:
f(n) = 3n + 8 = Ω(log n)
because:
lim sup (n→∞) (3n + 8) / log n = ∞ > 0
(according to the 1914 Hardy-Littlewood definition), or:
lim inf (n→∞) (3n + 8) / log n = ∞ > 0
(according to the Knuth definition).
For the definition of liminf and limsup symbols (with pictures) please check here.
Perhaps what was really meant is Θ (Big Theta), that is, both O() and Ω() simultaneously.
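A quick numerical sketch shows why Ω(log n) is valid but loose here: the ratio (3n + 8)/log n grows without bound, while (3n + 8)/n settles near the constant 3, which is what makes Ω(n) the tight lower bound.

import math

f = lambda n: 3 * n + 8

for n in (10, 100, 1000, 10**6):
    # f(n)/log(n) keeps growing -> Omega(log n) holds, but is loose.
    # f(n)/n approaches 3       -> Omega(n) is the tight lower bound.
    print(n, f(n) / math.log(n), f(n) / n)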

Understanding asymptotic notation homework

This is a problem from Steven Skiena's Algorithm Design Manual book. This is FOR HOMEWORK and I am not looking for a solution. I just want to know if I understand the concept and am approaching it the right way.
Find two functions f(n) and g(n) that satisfy the following relationship. If no such f and g exist, write None.
a) f(n)=o(g(n)) and f(n)≠Θ(g(n))
So I'm reading this as g(n) is strictly (little-oh) larger than f(n) and the average is not the same. If I'm reading this correctly then my answer is:
f(n) = n^2 and g(n) = n^3
b) f(n)=Θ(g(n)) and f(n)=o(g(n))
I'm taking this to mean that f(n) is on average the same as g(n) but g(n) is also larger, so my answer is:
f(n)=n+2 and g(n)=n+10
c) f(n)=Θ(g(n)) and f(n)≠O(g(n))
f(n) is on average the same as g(n) and g(n) is not larger:
f(n)=n^2+10 and g(n)=n^2
d) f(n)=Ω(g(n)) and f(n)≠O(g(n))
g(n) is the lower bound of f(n):
f(n)=n^2+10 and g(n)=n^2
Now is my understanding of the problem correct? If not, what am I doing wrong? If it is correct, do my solutions make sense?
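One way to sanity-check candidate pairs like these without spoiling the exercise is to look at the ratio f(n)/g(n) as n grows: if it tends to 0 then f(n) = o(g(n)); if it settles at a positive constant then f(n) = Θ(g(n)); if it grows without bound then f(n) = ω(g(n)). A rough Python sketch for the pair proposed in part (a):

f = lambda n: n**2   # candidate from part (a)
g = lambda n: n**3

for n in (10, 100, 1000, 10**4):
    # A ratio shrinking toward 0 suggests f(n) = o(g(n)),
    # which also rules out f(n) = Theta(g(n)).
    print(n, f(n) / g(n))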

Big O Notation: Definition

I've been watching MIT lectures for the algorithms course and the definition for the Big O notation says
f(n) = O(g(n)) means that for some constants c and n0,
0 ≤ f(n) ≤ c·g(n) for all n > n0
Then the instructor proceeded to give an example,
2n² = O(n³)
Now I get that Big O gives an upper bound on the function, but I am confused as to what exactly the function f(n) corresponds to here. What is its significance? As per my understanding, g(n) is the function representing the algorithm we are trying to analyse, but then what is the purpose of f(n), or, as in the example, 2n²?
Need some clarification on this, I've been stuck here for hours.
In the formal definition of big-O notation, the functions f(n) and g(n) are placeholders for other functions, the same way that, say, in the quadratic formula, the letters a, b, and c are placeholders for the actual coefficients in the quadratic equation.
In your example, the instructor was talking about how 2n² = O(n³). You have a formal definition that talks about what it means, in general, for f(n) = O(g(n)) to be true. So let's pattern-match that against the math above. It looks like f(n) is the thing on the left and g(n) is the thing on the right, so in this example f(n) = 2n² and g(n) = n³.
The previous paragraph gives a superficial explanation of what f(n) and g(n) are by just looking at one example, but it's better to talk about what they really mean. Mathematically, f(n) and g(n) really can be any functions you'd like, but typically when you're using big-O notation in the context of the analysis of algorithms, you'll let f(n) be the true amount of work done by the algorithm in question (or its runtime, or its space usage, or really just about anything else) and will pick g(n) to be some "nice" function that's easier to reason about. For example, it might be the case that some function you're analyzing has a true runtime, as a function of n, of 16n³ - 2n² - 9n + 137. That would be your function f(n). Since the whole point behind big-O notation is to be able to (mathematically rigorously and safely) discard constant factors and low-order terms, we'll try to pick a g(n) that grows at the same rate as f(n) but is easier to reason about - say, g(n) = n³. So now we can try to determine whether f(n) = O(g(n)) by seeing whether we can find the constants c and n0 talked about in the formal definition of big-O notation.
So to recap:
f(n) and g(n) in the definition given are just placeholders for other functions.
In practical usage, f(n) will be the true runtime of the algorithm in question, and g(n) will be something a lot simpler that grows at the same rate.
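To make this concrete, here is a rough sketch that takes the hypothetical true cost f(n) = 16n³ - 2n² - 9n + 137 from above and the simpler g(n) = n³, and spot-checks one candidate pair of constants, say c = 17 and n0 = 5.

f = lambda n: 16 * n**3 - 2 * n**2 - 9 * n + 137   # example "true cost" from above
g = lambda n: n**3

# Spot-check that c = 17, n0 = 5 witnesses f(n) = O(g(n));
# a finite check is not a proof, just a quick confirmation of the guess.
print(all(f(n) <= 17 * g(n) for n in range(6, 10_001)))   # True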
f(n) is the function that gives you the exact values of the thing you are trying to measure (be that time, number of processor instructions, number of iterations steps, amount of memory used, whatever).
g(n) is another function that approximates the growth of f(n).
In the usual case you don't really know f(n), or it's really hard to compute. For example, for time it depends on the processor speed, memory access patterns, system load, compiler optimizations and so on. g(n) is usually really simple, and it makes the behaviour easier to understand: if f(n) = O(n), then if you double n you will roughly double the runtime, in the worst case. Since it's an upper bound, g(n) doesn't have to be the minimum, but usually people try to avoid inflating it if it's not necessary. In your example O(n³) is an upper bound for 2n², but so are O(n²) and O(n!).

Big O, theta and omega notation

I am really confused about what big O, big theta and big omega represent: the best case, worst case and average case, or an upper bound and a lower bound?
If the answer is an upper and a lower bound, then whose upper and lower bound? For example, let's consider an algorithm. Does it have three different expressions or rates of growth for the best, worst and average case, and for every case can we find its big O, theta and omega?
Lastly, we know merge sort, via divide and conquer, has a rate of growth or time complexity of n*log(n); is that the rate of growth of the best case or the worst case, and how do we relate big O, theta and omega to it? Please explain via a hypothetical expression.
The notations are all about asymptotic growth. Whether they describe the worst case or the average case depends only on what you say they should express.
E.g. quicksort is a randomized algorithm for sorting. Let's say we use it deterministically and always choose the first element in the list as pivot. Then for every n there exists an input of length n on which the worst case is O(n²). But on random lists the average case is O(n log n).
So here I used the big O for average and worst case.
Basically this notation is for simplification. If you have an algorithm that does exactly 5n³ - 4n² - 3·log n steps, you can simply write O(n³), get rid of all the crap after n³ and also forget about the constants.
You can use big O to get rid of all monomials except the one with the biggest exponent, and all constant factors (constant means they don't grow - but 10¹⁰⁰ is also a constant).
In the end, O(f(n)) gives you a set of functions that all have f(n) as an upper bound (this means g(n) is in O(f(n)) if you can find a constant c such that g(n) ≤ c⋅f(n) for all sufficiently large n).
To make it a little easier:
I have explained that big O means an upper bound, but not a strict one. So n³ is in O(n³), but so is n².
So you can think of big O as a "less than or equal".
The same way you can do with the others.
Little o is a strict "less than": n² is in o(n³), but n³ is not.
Big Omega is a "greater than or equal": n³ is in Ω(n³) and so is n⁴.
Little omega is a strict "greater than": n³ is not in ω(n³), but n⁴ is.
And big Theta is something like "equal", so n³ is in Θ(n³), but neither n² nor n⁴ is.
I hope this helps a little.
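To make the quicksort remark above concrete, here is a rough comparison counter (names are mine, just for this sketch) for the deterministic "always pick the first element as pivot" variant. On an already-sorted input of length n it performs roughly n²/2 comparisons (the bad case), while on a random permutation it performs on the order of n log n.

import random

def quicksort_comparisons(items):
    # Quicksort with the first element as pivot (Lomuto-style partition),
    # using an explicit stack instead of recursion; returns the number of
    # element comparisons performed.
    a = list(items)
    comparisons = 0
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot = a[lo]
        i = lo
        for j in range(lo + 1, hi + 1):
            comparisons += 1
            if a[j] < pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[lo], a[i] = a[i], a[lo]
        stack.append((lo, i - 1))
        stack.append((i + 1, hi))
    return comparisons

n = 2000
print(quicksort_comparisons(range(n)))                    # ~ n^2 / 2 on sorted input
print(quicksort_comparisons(random.sample(range(n), n)))  # ~ n log n on a random permutation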
In practice the three notations often get attached to the different cases of an algorithm: one bound quoted for the average case, one for the best case, one for the worst case. For example, think of sorting algorithms: some of them, like insertion sort, take only about n steps if the items are already in order - you just have to check that they are in order. All of them also have worst-case inputs on which they have to do the most work to order everything.
Assume f, g to be asymptotically non-negative functions.
In brief,
1) Big-O Notation
f(n) = O(g(n)) if asymptotically, f(n) ≤ c · g(n), for some constant c.
2) Theta Notation
f(n) = Θ(g(n)) if asymptotically, c1 · g(n) ≤ f(n) ≤ c2 · g(n), for some constants c1, c2; that is, up to constant factors, f(n) and g(n) are asymptotically similar.
3) Omega Notation
f(n) = Ω(g(n)) if asymptotically, f(n) ≥ c · g(n), for some constant c.
(Informally, asymptotically and up to constant factors, f is at least g.)
4) Small o Notation
f(n) = o(g(n)) if lim (n→∞) f(n)/g(n) = 0.
That is, for every c > 0, asymptotically, f(n) < c · g(n) (i.e., f is an order lower than g).
For the last part of your question: Merge-Sort is Θ(n·log(n)); that is, both its worst-case and best-case running times asymptotically behave like c1·n·log(n) + c2 for some constants c1, c2.
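As a small illustration of that claim, counting element comparisons in a plain top-down merge sort (a rough sketch, not a benchmark) shows the count tracking n·log2(n), which is the behaviour behind the Θ(n·log(n)) bound.

import math

def merge_sort_comparisons(a):
    # Return (sorted list, number of element comparisons) for a plain
    # top-down merge sort.
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, c1 = merge_sort_comparisons(a[:mid])
    right, c2 = merge_sort_comparisons(a[mid:])
    merged, comparisons = [], c1 + c2
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comparisons

for n in (1_000, 10_000, 100_000):
    _, comps = merge_sort_comparisons(list(range(n, 0, -1)))   # reversed input
    print(n, comps, round(n * math.log2(n)))                   # both grow like n * log n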

Algorithms Analysis Big O notation

I need help in this question. I really don't understand how to do it.
Show, either mathematically or by an example, that if f(n) is O(g(n)), a*f(n) is O(g(n)), for any constant a > 0.
I'll give you this. It should help you look in the right direction:
definition of O(n):
a function f(n) that satisfies f(n) <= C*n for some constant C and for every n above some constant N is written f(n) = O(n).
This is the formal definition of big-O notation (specialised to g(n) = n); it should be simple to take this and turn it into a solution.
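For instance, one way to turn the definition into the required argument (a sketch using the general form f(n) <= C*g(n) rather than just g(n) = n): assume f(n) = O(g(n)), so there are constants C > 0 and N with f(n) <= C*g(n) for all n > N. Multiplying both sides by the constant a > 0 gives a*f(n) <= (a*C)*g(n) for all n > N. Since a*C is again a positive constant, the same N together with the new constant a*C satisfies the definition, so a*f(n) = O(g(n)).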
