How to compare worst-case time complexity? - big-o

In the four options below, shouldn't option 3, i.e. O(n²), be the one with the worst time complexity? Because it takes more time to run, while log(n) takes less time to run. But the given answer is option 4.
What's wrong with my logic?
1. O(log(n!))
2. O(n)
3. O(n²)
4. O(log(log(n)))

To answer the direct question, this is the order of the functions from best to worst, sorted by their worst-case time complexities (what we would express in big-O notation):
O(log(log(n)))
O(n)
O(log(n!))
O(n^2)
This is easy to see by graphing the functions.
(Note that O(log(log(n))) grows so slowly that it is nearly flat in practice, so its curve does not stray far from the x-axis.)
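If a plot isn't handy, a quick numeric check shows the same ordering. This is only a minimal sketch of my own (not part of the original answer), using Python's math module and the identity ln(n!) = lgamma(n + 1):

    import math

    # Compare the four growth rates at a few values of n.
    # log(n!) is computed as math.lgamma(n + 1), since lgamma(n + 1) = ln(n!).
    print(f"{'n':>8} {'log(log n)':>12} {'log(n!)':>14} {'n^2':>16}")
    for n in (10, 100, 1_000, 10_000, 100_000):
        print(f"{n:>8} {math.log(math.log(n)):>12.2f} "
              f"{math.lgamma(n + 1):>14.1f} {n * n:>16}")

For every n printed, the values come out in the order log(log(n)) < n < log(n!) < n^2, matching the ranking above.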
Big-O notation describes an upper bound that can be applied to any function, including a worst-case running time; the question, though, was most likely asking for the best of these worst-case time complexities, which is why option 4 is the given answer.

Related

Still not understanding Big-O vs Worst Case Time Complexity

The worst case for the time taken by linear search is when the item is at the end of the list/array, or doesn't exist at all. In this case the algorithm needs to perform n comparisons, checking whether each element is the required value, where n is the length of the array/list.
From what I've understood of big-O notation, it makes sense to say that the time complexity of this algorithm is O(n), as it COULD happen that the worst case occurs, and big-O is used when we want to make a conservative estimate of the "worst case".
From a lot of posts and answers on Stack Overflow, it seems this thinking is flawed, with claims such as "Big-O notation has nothing to do with worst-case analysis".
Please help me understand the distinction in a way that doesn't just add to my confusion, as the answers to "Why big-Oh is not always a worst case analysis of an algorithm?" do.
I'm not seeing how big-O has NOTHING to do with worst case analysis. From my current hilltop, it looks like big-O expresses how the worst case grows as the input size grows, which seems very much "to do" with worst-case analysis.
Statements such as this, from https://medium.com/omarelgabrys-blog/the-big-scary-o-notation-ce9352d827ce :
As an example, worst case analysis gives the maximum number of operations assuming that the input is in the worst possible state, while the big o notation express the max number of operations done in the worst case.
don't help much, as I cannot see what distinction is being referred to.
Any added clarity much appreciated.
The big-O notation is indeed independent of the worst-case analysis. It applies to any function you want.
In the case of a linear search,
the worst-case complexity is O(n) (in fact even Θ(n)),
the average-case complexity is O(n) (in fact even Θ(n)),
the best-case complexity is O(1) (in fact even Θ(1)).
So big-O and worst-case are different concepts, though a big-O bound for the running time of an algorithm must hold for the worst case.
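As a concrete illustration of those three cases for linear search (a minimal sketch, not part of the answer above):

    def linear_search(items, target):
        """Return the index of target in items, or -1 if it is absent."""
        for i, value in enumerate(items):
            if value == target:   # best case: hit on the first element, Θ(1)
                return i
        return -1                 # worst case: absent or last element, Θ(n)

    data = [4, 8, 15, 16, 23, 42]
    print(linear_search(data, 4))    # best case: 1 comparison
    print(linear_search(data, 42))   # worst case (present): n comparisons
    print(linear_search(data, 99))   # worst case (absent): n comparisons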
Here is how it works:
If an algorithm for solving a problem is in O(f(n)), it means that the worst-case scenario for solving the problem with that algorithm is in O(f(n)). In other words, if the worst case is handled in g(n) steps by the algorithm, then g(n) is in O(f(n)).
For example, for the search algorithm, as you have mentioned, we know that the worst case takes O(n). Now, although the algorithm is in O(n), we can say the algorithm is in O(n^2) as well. As you can see, this is the distinction between Big-Oh complexity and the worst-case scenario.
In sum, the tight worst-case complexity of an algorithm is just one member of the set of valid Big-Oh bounds for that algorithm.

Is an algorithm with a worst-case time complexity of O(n) always faster than an algorithm with a worst-case time complexity of O(n^2)?

This question has appeared in my algorithms class. Here's my thought:
I think the answer is no, an algorithm with worst-case time complexity of O(n) is not always faster than an algorithm with worst-case time complexity of O(n^2).
For example, suppose we have total-time functions S(n) = 99999999n and T(n) = n^2. Then clearly S(n) = O(n) and T(n) = O(n^2), but T(n) is faster than S(n) for all n < 99999999.
Is this reasoning valid? I'm slightly skeptical that, while this is a counterexample, it might be a counterexample to the wrong idea.
Thanks so much!
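The arithmetic behind that counterexample is easy to check numerically. A throwaway sketch (S and T are simply the functions from the question above):

    def S(n):   # the O(n) total-time function: 99999999 * n
        return 99_999_999 * n

    def T(n):   # the O(n^2) total-time function: n^2
        return n * n

    for n in (10, 1_000, 1_000_000, 99_999_998, 99_999_999, 100_000_000):
        if T(n) < S(n):
            faster = "T"
        elif T(n) == S(n):
            faster = "tie"
        else:
            faster = "S"
        print(f"n={n:>11}: faster = {faster}")

T(n) is smaller than S(n) for every n below 99999999 and larger above it, which is exactly the crossover described in the question.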
Big-O notation says nothing about the speed of an algorithm for any given input; it describes how the time increases with the number of elements. If your algorithm executes in constant time, but that time is 100 billion years, then it's certainly slower than many linear, quadratic and even exponential algorithms for large ranges of inputs.
But that's probably not really what the question is asking. The question is asking whether an algorithm A1 with worst-case complexity O(N) is always faster than an algorithm A2 with worst-case complexity O(N^2); and by faster it probably refers to the complexity itself. In which case you only need a counter-example, e.g.:
A1 has normal complexity O(log n) but worst-case complexity O(n^2).
A2 has normal complexity O(n) and worst-case complexity O(n).
In this example, A1 is normally faster (i.e. scales better) than A2 even though it has a greater worst-case complexity.
Since the question says "always", it is enough to find a single counterexample to show that the answer is no.
Here is an example comparing O(n^2) and O(n log n), but the same idea applies to O(n^2) and O(n).
One simple example is bubble sort with an early-exit check, where you keep comparing adjacent pairs and stop once a full pass makes no swaps. Bubble sort's worst case is O(n^2).
If you run this bubble sort on an already sorted array, it finishes after a single O(n) pass, so on that input it will be faster than other algorithms of time complexity O(n log n).
You're talking about worst-case complexity here, and for some algorithms the worst case never happens in a practical application.
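To make the bubble sort point concrete, here is a sketch (my own illustration, not taken from the answer) of the early-exit variant, which completes a single O(n) pass when the array is already sorted:

    def bubble_sort(a):
        """In-place bubble sort: worst case O(n^2), but O(n) on sorted input."""
        n = len(a)
        for i in range(n - 1):
            swapped = False
            for j in range(n - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swapped = True
            if not swapped:       # last pass made no swaps: already sorted
                return a
        return a

    print(bubble_sort([5, 1, 4, 2, 8]))    # general case
    print(bubble_sort(list(range(10))))    # sorted input: one pass, O(n)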
Saying that one algorithm runs faster than another means it runs faster for all input data and for all input sizes. So the answer to your question is obviously no, because worst-case time complexity is not an exact measure of the running time; it measures the order of growth of the number of operations in the worst case.
In practice, the running time depends on the implementation, and is not only about this number of operations. For example, one has to care about memory allocation, cache efficiency, and spatial/temporal locality. And obviously, one of the most important things is the input data.
If you want examples of one algorithm running faster than another while having a higher worst-case complexity, look at the various sorting algorithms and their running times depending on the input.
You are correct: you have provided a valid counterexample to the statement. If this is for an exam, it should earn you full marks.
Still, for a better understanding of Big-O notation and complexity, I will share my own reasoning below. I also suggest keeping a graph of the growth rates in mind whenever you are confused, especially how the O(n) and O(n^2) curves compare.
Big-O notation
My own reasoning when I first learnt computational complexity was:
Big-O notation says that for sufficiently large input (how large is "sufficient" depends on the exact formulas; on such a graph, around n = 20 when comparing the O(n) and O(n^2) curves), a higher-order algorithm will always be slower than a lower-order one.
That means, for small inputs, there is no guarantee that a higher-order complexity algorithm will run slower than a lower-order one.
But Big-O notation tells you one thing: as the input size keeps increasing, you eventually pass a "sufficient" size, and beyond that point a higher-order complexity algorithm will always be slower. Such a "sufficient" size is guaranteed to exist.
Worst-case time complexity
While Big-O notation provides an upper bound on the running time of an algorithm, depending on the structure of the input and the implementation, an algorithm generally has a best-case, an average-case and a worst-case complexity.
The famous example is sorting algorithm: QuickSort vs MergeSort!
QuickSort, with a worst case of O(n^2)
MergeSort, with a worst case of O(n lg n)
However, in practice Quick Sort is almost always faster than Merge Sort!
So, if your question is about worst-case complexity, quicksort versus mergesort may be the best counterexample I can think of (because both of them are common and famous).
Therefore, combining the two parts: no matter whether you look at input size, input structure, or algorithm implementation, the answer to your question is NO.
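One rough way to test the quicksort-versus-mergesort claim yourself is to time simple implementations on random input. This is only an illustrative sketch with naive, out-of-place versions of both sorts (my own code, so absolute timings will vary with your machine and implementation details):

    import random
    import time

    def quicksort(a):
        # average O(n log n), worst case O(n^2)
        if len(a) <= 1:
            return a
        pivot = a[len(a) // 2]
        return (quicksort([x for x in a if x < pivot])
                + [x for x in a if x == pivot]
                + quicksort([x for x in a if x > pivot]))

    def mergesort(a):
        # O(n log n) in every case
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = mergesort(a[:mid]), mergesort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    data = [random.random() for _ in range(200_000)]
    for name, sort in (("quicksort", quicksort), ("mergesort", mergesort)):
        start = time.perf_counter()
        sort(data)
        print(name, f"{time.perf_counter() - start:.3f} s")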

Asymptotic Analysis questions

I found a couple of questions on geeksforgeeks.org that I can't seem to understand (#1 and #3). I was hoping someone could clarify the answers for me:
Clarify whether each statement is true/valid or false:
1. Time Complexity of QuickSort is Θ(n^2)
I answered true, but it is false. Why? If quicksort has a time complexity of O(n^2), and we know that Θ(g(n)) = { f(n) : there exist constants c1, c2 > 0 and n0 such that c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }, then doesn't that prove it is true, since c2*g(n), being the upper bound, can equal f(n)?
2. Time Complexity of QuickSort is O(n^2) - true
3. Time complexity of all computer algorithms can be written as Ω(1)
This is true, but I have no understanding of why. A search algorithm can have a lower bound of Ω(1), assuming we find what we were looking for at the first element, but how does this hold true for ALL computer algorithms, such as insertion sort, whose worst case is O(n^2)?
link:
http://www.geeksforgeeks.org/analysis-of-algorithms-set-3asymptotic-notations/
"Time Complexity of QuickSort is Θ(n^2)": this would mean that for every input of size n, the time taken by the algorithm to produce the output is a function f(n) = n^2. But we know this is not true for quicksort, because for some inputs its running time is a function g(n) = n log n. So we need to specify whether we mean the worst, best or average case. It is correct to say "the worst-case time complexity of quicksort is Θ(n^2)".
"Time Complexity of QuickSort is O(n^2)": this means that for each input of size n, the running time of the algorithm is at most a function f(n) = n^2. This allows that for some inputs the running time is less than f(n) = n^2; we know the best-case time complexity of quicksort is g(n) = n log n, and g(n) < f(n). As this statement covers all the cases, it is true.
Similarly, it is correct to say "the time complexity of quicksort is Ω(n log n)", because this means the running time of the algorithm is at least n log n, and n^2 > n log n.
"Time complexity of all computer algorithms can be written as Ω(1)": here 1 represents a constant-time function. The statement means that executing any computer algorithm requires at least some constant amount of time, which is true for every computer algorithm.
The worst-case scenario for QuickSort is O(n^2), but you expect it to run in O(n log n) time. Hence the running time of the algorithm varies from case to case, and you cannot use the theta symbol to give the general running time of the algorithm.
And of course the lower bound on the running time of any algorithm is constant time (Ω(1)). It doesn't have to reach this lower bound, but the algorithm has to run at all, so it performs at least one operation.

Time Complexity Explanation with addition

If I have something with O(log N) and add it to something with O(1), is the overall complexity still O(log N)?
Thanks
Often big-O notation is an approximation. For example, you might say log N when the actual complexity is 4 log N + 7. This is still considered log N time, because the major factor is the behaviour as N changes.
If you had some algorithm that is N^2 + log N, then the most significant term is N^2 and the log N term quickly becomes unimportant as N increases. In that case, you might simply say it is O(N^2), because that describes the characteristic time complexity of the algorithm.
So it depends on your needs. If you simply need to describe the nature of the algorithm, then logN should suffice. If you need to completely categorize every part of it or compare with similar but optimized algorithms, then add in all the terms.
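For instance (a minimal sketch of my own, assuming a sorted list and Python's standard bisect module), an O(1) check followed by an O(log N) binary search is still O(log N) overall, because the constant term is absorbed:

    import bisect

    def find(sorted_items, target):
        # O(1) part: constant-time range checks
        if not sorted_items or target < sorted_items[0] or target > sorted_items[-1]:
            return -1
        # O(log N) part: binary search
        i = bisect.bisect_left(sorted_items, target)
        return i if sorted_items[i] == target else -1

    # total work per call: O(1) + O(log N) = O(log N)
    print(find([1, 3, 5, 7, 9, 11], 7))   # -> 3
    print(find([1, 3, 5, 7, 9, 11], 8))   # -> -1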

Is O(log n) always faster than O(n)

If there are 2 algorithms that calculate the same result with different complexities, will the O(log n) one always be faster? If so, please explain. BTW this is not an assignment question.
No. If one algorithm runs in N/100 and the other one in (log N)*100, then the second one will be slower for smaller input sizes. Asymptotic complexities are about the behavior of the running time as the input sizes go to infinity.
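You can locate the crossover for that example numerically. A small illustrative sketch (natural log is used here, so the exact crossover point depends on the log base and constants you assume):

    import math

    def cost_linear(n):    # cost model for the O(n) algorithm: n / 100
        return n / 100

    def cost_log(n):       # cost model for the O(log n) algorithm: 100 * log n
        return 100 * math.log(n)

    for n in (10, 100, 10_000, 100_000, 150_000, 1_000_000):
        cheaper = "O(log n)" if cost_log(n) < cost_linear(n) else "O(n)"
        print(f"n={n:>9}: cheaper = {cheaper}")

With these cost models the O(n) expression stays smaller up to somewhere between n = 100,000 and n = 150,000, and the O(log n) expression wins beyond that, which is the behaviour described above.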
No, it will not always be faster. BUT, as the problem size grows larger and larger, eventually you will always reach a point where the O(log n) algorithm is faster than the O(n) one.
In real-world situations, usually the point where the O(log n) algorithm would overtake the O(n) algorithm would come very quickly. There is a big difference between O(log n) and O(n), just like there is a big difference between O(n) and O(n^2).
If you ever have the chance to read Programming Pearls by Jon Bentley, there is an awesome chapter in there where he pits a O(n) algorithm against a O(n^2) one, doing everything possible to give O(n^2) the advantage. (He codes the O(n^2) algorithm in C on an Alpha, and the O(n) algorithm in interpreted BASIC on an old Z80 or something, running at about 1MHz.) It is surprising how fast the O(n) algorithm overtakes the O(n^2) one.
Occasionally, though, you may find a very complex algorithm which has complexity just slightly better than a simpler one. In such a case, don't blindly choose the algorithm with a better big-O -- you may find that it is only faster on extremely large problems.
For an input of size n, an algorithm that is O(n) will perform a number of steps proportional to n, while an algorithm that is O(log(n)) will perform roughly log(n) steps.
Since log(n) is much smaller than n for large inputs, the O(log(n)) algorithm will generally be much faster, although, as noted in the other answers, constant factors can still favour the O(n) algorithm on small inputs.