I am really confused about what big O, big Theta and big Omega represent: best case, worst case and average case, or upper bound and lower bound?
If the answer is upper bound and lower bound, then whose upper bound and lower bound? For example, let's consider an algorithm: does it then have three different expressions or rates of growth for the best, worst and average cases, and for every case can we find its big O, Theta and Omega?
Lastly, we know merge sort, a divide-and-conquer algorithm, has a rate of growth or time complexity of n*log(n); is that the rate of growth of the best case or the worst case, and how do we relate big O, Theta and Omega to it? Please, can you explain via a hypothetical expression?
The notations are all about asymptotic growth. Whether they describe the worst or the average case depends only on what you say they should express.
E.g. quicksort is a randomized sorting algorithm. Let's say we use it deterministically and always choose the first element in the list as the pivot. Then for every n there exists an input of length n on which it takes quadratic time, so the worst case is O(n²). But on random lists the average case is O(n log n).
So here I used the big O for average and worst case.
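To make that concrete, here is a rough Python sketch (my own illustration; the comparison counter, the input size and the helper names are arbitrary, not from the text above) of a quicksort that always picks the first element as the pivot. On an already-sorted list it shows the quadratic worst-case behaviour, while on a shuffled list the count stays close to n·log n:

    import math
    import random
    import sys

    def quicksort(items, counter):
        # Deterministic variant: always take the first element as the pivot.
        # counter is a one-element list used to tally comparisons across calls.
        if len(items) <= 1:
            return items
        pivot, rest = items[0], items[1:]
        counter[0] += len(rest)          # each remaining element is compared with the pivot
        smaller = [x for x in rest if x < pivot]
        larger = [x for x in rest if x >= pivot]
        return quicksort(smaller, counter) + [pivot] + quicksort(larger, counter)

    n = 2000
    sys.setrecursionlimit(5000)          # sorted input makes the recursion about n deep

    sorted_input = list(range(n))        # worst case for a first-element pivot
    random_input = sorted_input[:]
    random.shuffle(random_input)

    for name, data in (("sorted", sorted_input), ("random", random_input)):
        counter = [0]
        quicksort(data, counter)
        print(f"{name} input: {counter[0]} comparisons "
              f"(n*log2(n) ≈ {n * math.log2(n):.0f}, n^2 = {n * n})")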
Basically this notation is for simplification. If you have an algorithm that does exactly 5n³ − 4n² − 3·log n steps, you can simply write O(n³), get rid of all the crap after the n³, and also forget about the constants.
You can use big O to get rid of all monomials except the one with the biggest exponent, and of all constant factors (constant means they don't grow; 10¹⁰⁰ is also constant).
In the end, with O(f(n)) you get a set of functions that all have the upper bound f(n) (this means g(n) is in O(f(n)) if you can find a constant number c such that g(n) ≤ c⋅f(n) for all sufficiently large n).
To make it a little easier:
I have explained that big O means an upper bound, but not a strict one: n³ is in O(n³), but so is n².
So you can think of big O as a "less than or equal".
You can do the same with the others.
Little o is a strict "less than": n² is in o(n³), but n³ is not.
Big Omega is a "greater than or equal": n³ is in Ω(n³), and so is n⁴.
Little omega is a strict "greater than": n³ is not in ω(n³), but n⁴ is.
And big Theta is something like "equal": n³ is in Θ(n³), but neither n² nor n⁴ is.
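If it helps, here is a small numerical sketch of these analogies (a heuristic of my own, not a proof; the sample values of n are arbitrary): watch the ratio g(n)/f(n) as n grows. A ratio tending to 0 suggests o, a bounded ratio suggests O, a ratio bounded away from 0 suggests Ω, a ratio that blows up suggests ω, and a ratio bounded on both sides corresponds to Θ:

    # Heuristic check: how does g(n)/f(n) behave as n grows?
    def ratio_trend(g, f, ns=(10, 100, 1_000, 10_000, 100_000)):
        return [g(n) / f(n) for n in ns]

    f = lambda n: n ** 3

    print("n^2 / n^3:", ratio_trend(lambda n: n ** 2, f))  # -> 0: n^2 is in o(n^3) and O(n^3)
    print("n^3 / n^3:", ratio_trend(lambda n: n ** 3, f))  # constant: n^3 is in Theta(n^3)
    print("n^4 / n^3:", ratio_trend(lambda n: n ** 4, f))  # -> infinity: n^4 is in omega(n^3) and Omega(n^3)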
I hope this helps a little.
So the idea is that O means "on average", one means the best case, and one the worst case. For example, let's think of most sorting algorithms. Most of them sort in n time if the items are already in order; you just have to check that they are in order. All of them have a worst-case order where they have to do the most work to sort everything.
Assume f, g to be asymptotically non-negative functions.
In brief,
1) Big-O Notation
f(n) = O(g(n)) if asymptotically, f(n) ≤ c · g(n), for some constant c.
2) Theta Notation
f(n) = Θ(g(n)) if asymptotically, c1 · g(n) ≤ f(n) ≤ c2 · g(n), for some constants c1, c2; that is, up to constant factors, f(n) and g(n) are asymptotically similar.
3) Omega Notation
f(n) = Ω(g(n)) if asymptotically, f(n) ≥ c · g(n), for some constant c. (Informally, asymptotically and up to constant factors, f is at least g.)
4) Small o Notation
f(n) = o(g(n)) if lim_{n→∞} f(n)/g(n) = 0. That is, for every c > 0, asymptotically, f(n) < c · g(n) (i.e., f is an order lower than g).
For the last part of your question: merge sort is Θ(n log(n)), that is, both the worst case and the best case asymptotically grow like c1·n·log(n) + c2 for some constants c1, c2.
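As an illustration of that claim (a sketch of my own; the counting helper and the input size are arbitrary), you can count merge comparisons for sorted, reverse-sorted and random inputs; all three come out within a small constant factor of n·log2(n):

    import math
    import random

    def merge_sort(items, counter):
        # Standard top-down merge sort; counter[0] tallies element comparisons in the merge step.
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid], counter)
        right = merge_sort(items[mid:], counter)
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            counter[0] += 1
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    n = 4096
    inputs = {
        "sorted": list(range(n)),
        "reversed": list(range(n, 0, -1)),
        "random": random.sample(range(n), n),
    }
    print(f"n*log2(n) ≈ {n * math.log2(n):.0f}")
    for name, data in inputs.items():
        counter = [0]
        merge_sort(data, counter)
        print(f"{name}: {counter[0]} comparisons")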
I know what the Big O, Theta and Omega notations are, but, for example, if my algorithm is a for loop inside a for loop, each looping n times, my complexity would be O(n²). But why O(n²) instead of Θ(n²)? Since the complexity IS in fact O(n²) and Ω(n²), it would also be Θ(n²), and I just can't see any reason not to use Θ(n²) instead of O(n²), since Θ(n²) restricts my complexity with both an upper and a lower bound, not only an upper one as O(n²) does.
If f(n) = Θ(g(n)) then f(n) = O(g(n)). This is because Θ(g(n)) ⊆ O(g(n)).
In your specific case, if a loop runs exactly n^2 times, the time complexity is in both O(n^2) and Θ(n^2).
The main reason why big-O is typically enough is that we are more interested in the worst case time complexity when analyzing the algorithm's performance, and knowing the worst case scenario is usually enough.
Also, it is not always possible to find a tight bound.
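For the nested-loop case from the question, a minimal sketch (the loop body is a hypothetical stand-in, just counting iterations) shows why the bound is tight here: the body executes exactly n·n times for every input of size n, so the count is O(n²), Ω(n²) and therefore Θ(n²):

    def nested_loop_steps(n):
        # A for loop inside a for loop, each running n times:
        # the body executes exactly n * n times for every input of size n,
        # so the step count is O(n^2), Omega(n^2), and hence Theta(n^2).
        steps = 0
        for i in range(n):
            for j in range(n):
                steps += 1      # stand-in for the (hypothetical) loop body
        return steps

    for n in (10, 100, 1000):
        print(n, nested_loop_steps(n), n * n)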
I have a question about one of my homework questions. I've watched a couple of videos on YouTube explaining Big O, Theta, Omega, etc., but I do not understand what this question is asking.
What is this question asking? That no function exists that is less than or equal to its complexity as its upper bound and at the same time greater than its Omega as a lower bound?
I am at a complete loss and pretty confused. If someone could clear up the confusion by explanation, that would be fantastic. I cannot wrap my head around it.
I believe the question is asking you to prove or disprove the statement. When it comes to asymptotic notation, using the less than/equal/greater than symbols can be confusing for new learners, because it kind of implies an equation between the two when really it is saying an entirely different thing.
O(g(n)) is actually a set of functions: those bounded above by g(n) times some constant factor for large enough n. In math you would say f(n) ≤ O(g(n)) to mean f(n) ≤ c·g(n) for some c > 0 and all n > N. That is the reason ≤ is used for O. Big-Omega is defined similarly, but as a lower bound. There are many functions that can satisfy a given upper or lower bound, which is the reason why it's defined as a set.
So it might be more clear to use set notation for this. You can express the same thing as:
f(n) ∈ O(g(n))
f(n) ∈ Ω(g(n))
So f(n) ≤ O(g(n)) means the same as f(n) = O(g(n)) which is the same as f(n) ∈ O(g(n)). And f(n) ≥ Ω(g(n)) means the same as f(n) = Ω(g(n)) which is the same as f(n) ∈ Ω(g(n)).
So what it's really asking you to prove is whether you can have a function f(n) that is bounded above and below by g(n).
You can. This is actually the definition for Big-Theta. Ө(g(n)) is the set of all functions such that g(n) is an asymptotic upper and lower bound on those functions. In other words, h(n) = Ө(g(n)) implies c₁ g(n) ≤ h(n) ≤ c₂ g(n) for large enough n.
If f(n) = 7n^2 + 500 then a suitable upper and lower bound can be n^2, because f(n) ≥ 1*n^2 for all n and f(n) ≤ 8*n^2 for all n ≥ 23. Therefore f(n) ∈ Ө(n^2).
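A quick numerical check of those constants (the threshold n ≥ 23 comes from 8*n^2 ≥ 7*n^2 + 500, i.e. n^2 ≥ 500):

    f = lambda n: 7 * n ** 2 + 500

    # Check the Theta(n^2) sandwich c1*n^2 <= f(n) <= c2*n^2 with c1 = 1, c2 = 8
    # beyond the threshold n0 = 23.
    assert all(1 * n ** 2 <= f(n) <= 8 * n ** 2 for n in range(23, 100_000))
    print("1*n^2 <= 7*n^2 + 500 <= 8*n^2 holds for all checked n >= 23")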
Hi, I have started learning algorithm analysis, and I have a doubt about asymptotic analysis.
Let's say I have a function f(n) = 5n^3 + 2n^2 + 23.
Now I need to find the Big-Oh, Big-Omega and Theta notations for the above function.
Big-Oh:
f(n) <= (5 + 2 + 23) n^3 // raising every term to n^3 gives a value that is always greater than or equal to f(n)
f(n) <= 30n^3
f(n) belongs to Big-Oh(n^3)
Big-Omega:
n^3 <= f(n)
f(n) belongs to Big-Omega(n^3)
Theta:
n^3 <= f(n) <= 30 n^3
f(n) belongs to Theta ( n^3)
So here,
f(n) belongs to Big-Oh(n^3)
f(n) belongs to Big-Omega(n^3)
f(n) belongs to Theta(n^3)
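A quick way to sanity-check the sandwich n^3 <= f(n) <= 30n^3 above is a small script (the cutoff for the checked range is arbitrary):

    f = lambda n: 5 * n ** 3 + 2 * n ** 2 + 23

    # Sanity-check the sandwich used above: n^3 <= f(n) <= 30*n^3 for n >= 1
    # (30 = 5 + 2 + 23, obtained by raising every term to n^3).
    assert all(n ** 3 <= f(n) <= 30 * n ** 3 for n in range(1, 10_000))
    print("n^3 <= 5n^3 + 2n^2 + 23 <= 30n^3 holds for all checked n >= 1")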
Like this, for any polynomial the order of growth for the Oh, Omega and Theta notations is the same (in our case it is of order n^3).
When the order of growth is the same for all the notations, then what is the use of showing them with different notations, and where exactly can each one be used? Kindly give me a practical example if possible.
Big theta (Θ) is when our upper bound (O) and lower bound (Ω) are the same, in other words it's a tight bound then. That's one reason to show both O and Ω (or well all three).
Why is this useful? Because Θ is a tight bound - it's much stronger than O. With O you can say that the above is O(n^1000) and you are still technically correct. A lot of times O != Ω and you don't have that tight bound.
Why are we usually talking about O, then? Well, because in most cases we are interested in an upper bound (of the worst-case scenario) of our algorithm. Sometimes we simply don't know the Θ and we go with O instead. It is also important to notice that many people simply misuse those symbols, don't fully understand them, and/or are simply not precise enough, and use O in places where Θ could be used.
For instance, quicksort does not have a tight bound (unless we are talking specifically about either the best/average or the worst-case analysis), as it is Ω(n log n) in the best and average cases but O(n^2) in the worst case. On the other hand, mergesort is both Ω(n log n) and O(n log n), therefore it is also Θ(n log n).
All in all it's rather theoretical, as in practice quicksort is faster in most cases: you usually don't hit the worst case, and the operations done by quicksort are cheaper.
In the example you gave, the execution time is known and seems to be fixed. So it is in O(n^3), Ω(n^3) and thus in Θ(n^3) for all cases.
However, for an algorithm, the execution time may, and not so rarely does, depend on the input the algorithm is running on.
E.g.: searching for a key in a linked list means going through all the list members in the worst case, and that's linear time.
In the best case, the key you're looking for is at the very beginning of the list, and that's constant time.
So the algorithm is in O(n) and in Ω(1). There is no single f(n) for which we could specify Θ(f(n)) for this algorithm.
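To make the linked-list example concrete, here is a small sketch (the node class, list size and search helper are illustrative choices of mine): searching for the key at the head touches one node, while searching for a missing key touches all n of them:

    class Node:
        def __init__(self, key, next=None):
            self.key = key
            self.next = next

    def search(head, key):
        # Walk the list until the key is found; return how many nodes were inspected.
        steps = 0
        node = head
        while node is not None:
            steps += 1
            if node.key == key:
                break
            node = node.next
        return steps

    # Build a list 0 -> 1 -> ... -> n-1
    n = 1000
    head = None
    for k in range(n - 1, -1, -1):
        head = Node(k, head)

    print("best case  (key at the head):", search(head, 0), "step")    # Omega(1)
    print("worst case (key not present):", search(head, -1), "steps")  # O(n)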
The running time of the function above can be analyzed in the following manner.
Is it Omega and big O of n?
5n^3 + 2n^2 + 23 >= c*n
5n^2 + 2n + 23/n >= c
As n grows towards infinity, the left-hand side grows without bound, so a constant c that stays less than or equal to it certainly exists; the running time is therefore Omega(n).
5n^2 + 2n + 23/n <= c
As n tends to infinity this inequality cannot hold, because no constant stays greater than or equal to the left-hand side; the running time is therefore not O(n).
Is it Omega and big O of n^3?
5n^3 + 2n^2 + 23 >= c*n^3
5 + 2/n + 23/n^3 >= c   — this holds (e.g. c = 5 works for large n), so it is Omega(n^3).
5 + 2/n + 23/n^3 <= c   — this also holds (e.g. c = 30 works for all n >= 1), so it is big O of n^3.
Since it is Omega and big O of n^3, it is Theta of n^3 as well.
Similarly, it is Omega of n^2 but not big O of n^2.
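The same limit argument can be watched numerically (a rough sketch, with arbitrary sample values of n): divide f(n) by the candidate g(n) and see whether the ratio blows up or settles down:

    f = lambda n: 5 * n ** 3 + 2 * n ** 2 + 23

    for g_name, g in (("n", lambda n: n), ("n^2", lambda n: n ** 2), ("n^3", lambda n: n ** 3)):
        ratios = [f(n) / g(n) for n in (10, 1_000, 100_000)]
        print(f"f(n)/{g_name}: {ratios}")
    # f(n)/n and f(n)/n^2 grow without bound -> Omega(n), Omega(n^2), but not O(n) or O(n^2)
    # f(n)/n^3 settles near the constant 5   -> both O(n^3) and Omega(n^3), i.e. Theta(n^3)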
I am reading about Big O notation. In the book I have, there is an example in which the complexity of n^2 is in the class O(n^3). That doesn't seem logical to me, because n^3 depends on n and isn't just a plain constant multiplier that we can "get rid of."
Please explain to me why those two are of the same complexity. I can't find an answer on this forum or any other.
Big O determines an upper bound for large values of n. O(n^3) is larger than O(n^2), and so an n^2 program is still O(n^3). It's also O(n^4), O(n^5), ..., O(n^infinity).
The reverse is not true, however. An n^3 program is not O(n^2). Rather, it would be Omega(n^2), as Omega determines a lower bound (how much work we have to do at least).
Big O says nothing about this upper bound being "tight"; it just needs to be at least as high as the actual complexity. So while an n*n complexity program is bounded by O(n^3), that's not a very tight bound. O(n^2) is tighter and more informative.
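Concretely (a small sketch with hand-picked constants): c = 1 and N = 1 witness that n^2 is O(n^3), while no constant can witness the reverse, because n^3 <= c*n^2 would force n <= c for every n:

    # Witnesses for n^2 being O(n^3): c = 1, N = 1 work, since n^2 <= 1 * n^3 for all n >= 1.
    assert all(n ** 2 <= 1 * n ** 3 for n in range(1, 10_000))

    # No constant works the other way: any fixed c (here c = 1000) eventually fails.
    c = 1000
    print(next(n for n in range(1, 10_000) if n ** 3 > c * n ** 2))  # first counterexample: 1001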
I need help with this question. I really don't understand how to do it.
Show, either mathematically or by an example, that if f(n) is O(g(n)), a*f(n) is O(g(n)), for any constant a > 0.
I'll give you this. It should help you look in the right direction:
definition of O(n):
a function f(n) that satisfies f(n) <= C*n for some constant number C and for every n above some constant number N will be noted f(n) = O(n).
This is the formal definition of big-O notation; it should be simple to take this and turn it into a solution.
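As a nudge in that direction, here is a tiny numerical sketch (the functions f, g and the constants a, C, N are hypothetical, picked only for illustration) of how the constant from the definition absorbs the factor a:

    # Sketch of the idea (not a full write-up): if f(n) <= C * g(n) for all n > N,
    # then a * f(n) <= (a * C) * g(n) for the same N — and a * C is just another constant,
    # so a * f(n) still satisfies the definition of O(g(n)).
    a, C, N = 3.0, 5.0, 10
    f = lambda n: 4 * n + 7      # hypothetical f with f(n) <= 5n for n > 10
    g = lambda n: n

    assert all(f(n) <= C * g(n) for n in range(N + 1, 10_000))
    assert all(a * f(n) <= (a * C) * g(n) for n in range(N + 1, 10_000))
    print("a*f(n) <= (a*C)*g(n) holds wherever f(n) <= C*g(n) holds")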