Big O efficiency not always foolproof? - performance

I have been learning Big O efficiency at school as the "go to" method for describing algorithm runtimes as better or worse than one another. What I want to know is: will the algorithm with the better efficiency always outperform the worst of the lot, like bubble sort, in every single situation? Are there any situations where bubble sort, or some other O(n^2) algorithm, will be better for a task than another algorithm with a lower O() runtime?

Generally, O() notation gives the asymptotic growth of a particular algorithm. That is, the category that an algorithm is placed into in terms of asymptotic growth indicates how long the algorithm will take to run as n grows (for n, the number of items).
For example, we say that if a given algorithm is O(n), then it "grows linearly", meaning that as n increases, the algorithm will take about as long as any other O(n) algorithm to run.
That doesn't mean that it's exactly as long as any other algorithm that grows as O(n), because we disregard some things. For example, if the runtime of an algorithm takes exactly 12n+65ms, and another algorithm takes 8n+44ms, we can clearly see that for n=1000, algorithm 1 will take 12065ms to run and algorithm 2 will take 8044ms to run. Clearly algorithm 2 requires less time to run, but they are both O(n).
There are also situations in which, for small values of n, an algorithm that is O(n^2) might outperform another algorithm that's O(n), due to constants in the runtime that aren't considered in the analysis.
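As a rough illustration with made-up constants, suppose one algorithm needs exactly 5n^2 steps and another needs 100n steps:

    # Hypothetical step counts; the constants 5 and 100 are invented for illustration.
    def quadratic_steps(n):
        return 5 * n * n    # an O(n^2) algorithm with a small constant factor

    def linear_steps(n):
        return 100 * n      # an O(n) algorithm with a large constant factor

    for n in (5, 10, 20, 50, 100):
        q, l = quadratic_steps(n), linear_steps(n)
        print(f"n={n:3d}  5n^2={q:6d}  100n={l:6d}  cheaper: {'quadratic' if q < l else 'linear'}")

Below n = 20 the O(n^2) algorithm does less work; from n = 20 onward the O(n) algorithm is at least as cheap, which is what the asymptotics predict.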
Basically, Big-O notation gives you an estimate of the complexity of the algorithm, and can be used to compare different algorithms. In terms of application, though, you may need to dig deeper to find out which algorithm is best suited for a given project/program.

Big O gives you the worst-case scenario. That means it assumes the input is in the worst possible state. It also ignores the coefficients. If you use insertion sort on an array that is reverse sorted, it will run in n^2 time. If you use insertion sort on a sorted array, it will run in n time. Therefore insertion sort would run faster than many other sorting algorithms on an already-sorted list, and slower than most (reasonable) algorithms on a reverse-sorted list. (Selection sort, by contrast, is always n^2.)
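A minimal sketch that counts the comparisons insertion sort performs on an already-sorted array versus a reverse-sorted one:

    def insertion_sort_comparisons(arr):
        """Textbook insertion sort; returns the number of element comparisons made."""
        a = list(arr)
        comparisons = 0
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0:
                comparisons += 1          # compare key against a[j]
                if a[j] > key:
                    a[j + 1] = a[j]       # shift the larger element right
                    j -= 1
                else:
                    break
            a[j + 1] = key
        return comparisons

    n = 1000
    print("already sorted:", insertion_sort_comparisons(range(n)))         # n - 1 comparisons
    print("reverse sorted:", insertion_sort_comparisons(range(n, 0, -1)))  # n*(n-1)/2 comparisons

The same algorithm does about n units of work on sorted input and about n^2/2 on reverse-sorted input.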

Related

Why is big-Oh not always a worst-case analysis of an algorithm?

I am trying to learn analysis of algorithms and I am stuck on the relation between asymptotic notation (big O, ...) and the cases (best, worst and average).
I have learned that Big O notation defines an upper bound of an algorithm, i.e. it says a function cannot grow faster than its upper bound.
At first this sounded to me as if it describes the worst case.
I googled "why is worst case not big O?" and got plenty of answers which were not so simple for a beginner to understand.
I concluded it as follows:
Big O is not always used to represent the worst-case analysis of an algorithm because, for an algorithm that takes O(n) execution steps on its best, average and worst inputs, its best, average and worst cases can all be expressed as O(n).
Please tell me if I am correct or if I am missing something, as I don't have anyone to validate my understanding.
Please suggest a better example to understand why Big O is not always worst case.
Big-O?
First let us see what Big O formally means:
In computer science, big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows.
This means that Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. Here, O means order of the function, and it only provides an upper bound on the growth rate of the function.
Now let us look at the rules of Big O:
If f(x) is a sum of several terms and there is one with the largest growth rate, that term can be kept and all others omitted.
If f(x) is a product of several factors, any constants (terms in the product that do not depend on x) can be omitted.
Example:
f(x) = 6x^4 − 2x^3 + 5
Using the 1st rule we can write it as, f(x) = 6x^4
Using the 2nd rule it will give us, O(x^4)
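A quick numerical check that dropping the lower-order terms is harmless, comparing f(x) against its leading term as x grows:

    def f(x):
        return 6 * x**4 - 2 * x**3 + 5   # the full function

    def leading_term(x):
        return 6 * x**4                  # keep only the fastest-growing term

    for x in (1, 10, 100, 1000):
        print(x, f(x) / leading_term(x)) # the ratio tends to 1 as x grows

The lower-order terms become negligible for large x, and Big O additionally drops the constant factor 6, leaving O(x^4).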
What is Worst Case?
Worst case analysis gives the maximum number of basic operations that have to be performed during execution of the algorithm. It assumes that the input is in the worst possible state and maximum work has to be done to put things right.
For example, for a sorting algorithm which aims to sort an array in ascending order, the worst case occurs when the input array is in descending order. In this case the maximum number of basic operations (comparisons and assignments) has to be performed to put the array into ascending order.
It depends on a lot of things like:
CPU (time) usage
memory usage
disk usage
network usage
What's the difference?
Big-O is often used to make statements about functions that measure the worst case behavior of an algorithm, but big-O notation doesn’t imply anything of the sort.
The important point here is we're talking in terms of growth, not number of operations. However, with algorithms we do talk about the number of operations relative to the input size.
Big-O is used for making statements about functions. The functions can measure time or space or cache misses or rabbits on an island or anything or nothing. Big-O notation doesn’t care.
In fact, when used for algorithms, big-O is almost never about time. It is about primitive operations.
When someone says that the time complexity of MergeSort is O(n log n), they usually mean that the number of comparisons that MergeSort makes is O(n log n). That in itself doesn't tell us what the time complexity of any particular MergeSort might be, because that would depend on how much time it takes to make a comparison. In other words, the O(n log n) refers to comparisons as the primitive operation.
The important point here is that when big-O is applied to algorithms, there is always an underlying model of computation. The claim that the time complexity of MergeSort is O(n log n) is implicitly referencing a model of computation where a comparison takes constant time and everything else is free.
Example -
If we are sorting strings that are k bytes long, we might take "read a byte" as a primitive operation that takes constant time with everything else being free.
In this model, MergeSort makes O(n log n) string comparisons, each of which makes O(k) byte comparisons, so the time complexity is O(k · n log n). One common implementation of RadixSort will make k passes over the n strings with each pass reading one byte, and so has time complexity O(nk).
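A minimal sketch of "comparisons as the primitive operation": a merge sort that counts element comparisons, which you can check against n · log2(n):

    import math
    import random

    def merge_sort_count(a):
        """Return (sorted_list, number_of_element_comparisons)."""
        if len(a) <= 1:
            return list(a), 0
        mid = len(a) // 2
        left, cl = merge_sort_count(a[:mid])
        right, cr = merge_sort_count(a[mid:])
        merged, comparisons = [], cl + cr
        i = j = 0
        while i < len(left) and j < len(right):
            comparisons += 1              # one comparison per merge step
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged, comparisons

    n = 1 << 14                           # 16384 elements
    _, comparisons = merge_sort_count([random.random() for _ in range(n)])
    print(comparisons, "comparisons; n*log2(n) =", int(n * math.log2(n)))

The count says nothing about wall-clock time by itself; how long the sort takes depends on how expensive each comparison is, which is exactly the point above.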
The two are not the same thing.  Worst-case analysis, as others have said, is identifying instances for which the algorithm takes the longest to complete (i.e., requires the most steps), then formulating a growth function from this.  One can analyze the worst-case time complexity using Big-Oh, or even other variants such as Big-Omega and Big-Theta (in fact, Big-Theta is usually what you want, though often Big-Oh is used for ease of comprehension by those not as much into theory).  One important detail, and why worst-case analysis is useful, is that the algorithm will run no slower than it does in the worst case.  Worst-case analysis is a method of analysis we use in analyzing algorithms.
Big-Oh itself is an asymptotic measure of a growth function; this can be totally independent as people can use Big-Oh to not even measure an algorithm's time complexity; its origins stem from Number Theory.  You are correct to say it is the asymptotic upper bound of a growth function; but the manner you prescribe and construct the growth function comes from your analysis.  The Big-Oh of a growth function itself means little to nothing without context as it only says something about the function you are analyzing.  Keep in mind there can be infinitely many algorithms that could be constructed that share the same time complexity (by the definition of Big-Oh, Big-Oh is a set of growth functions).
In short, worst-case analysis is how you build your growth function; Big-Oh notation is one method of analyzing said growth function.  Then, we can compare that result against the worst-case time complexities of competing algorithms for a given problem.  Worst-case analysis, done exactly, yields the worst-case running time (you can cut a lot of corners and still get the correct asymptotics if you use a barometer), and using this growth function yields the worst-case time complexity of the algorithm.  Big-Oh alone doesn't guarantee the worst-case time complexity, as you had to make the growth function itself.  For instance, I could utilize Big-Oh notation for any other kind of analysis (e.g., best case, average case).  It really depends on what you're trying to capture.  For instance, Big-Omega is great for lower bounds.
Imagine a hypothetical algorithm that in the best case only needs to do 1 step, in the worst case needs to do n^2 steps, but in the average (expected) case only needs to do n steps, with n being the input size.
For each of these 3 cases you could calculate a function that describes the time complexity of this algorithm.
1. The best case is O(1), because the function f(x) = 1 is the highest we can go, but also the lowest we can go in this case, Omega(1). Since the upper and lower bounds are equal, we state that this function, in the best case, behaves like Theta(1).
2. We can do the same analysis for the worst case and figure out that O(n^2) = Omega(n^2) = Theta(n^2).
3. The same holds for the average case, but with Theta(n).
So in theory you could determine these 3 cases of an algorithm and, for those 3 cases, calculate the lower/upper/tight bounds. I hope this clears things up a bit.
https://www.google.co.in/amp/s/amp.reddit.com/r/learnprogramming/comments/3qtgsh/how_is_big_o_not_the_same_as_worst_case_or_big/
Big O notation shows how an algorithm grows with respect to input size. It says nothing of which algorithm is faster because it doesn't account for constant set up time (which can dominate if you have small input sizes). So when you say
which takes O(n) execution steps
this almost doesn't mean anything. Big O doesn't say how many execution steps there are. There are C + O(n) steps (where C is a constant), and the algorithm grows linearly with the input size.
Big O can be used for best, worst, or average cases. Let's take sorting as an example. Bubble sort is a naive O(n^2) sorting algorithm, but when the list is already sorted it takes O(n). Quicksort is often used for sorting (the GNU standard C library uses it with some modifications). It performs at O(n log n); however, this is only true if the pivot chosen splits the array into two equal-sized pieces (on average). In the worst case we get an empty array on one side of the pivot, and Quicksort performs at O(n^2).
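A minimal sketch, assuming the common early-exit variant of bubble sort (which is what gives the O(n) best case on sorted input), counting comparisons on sorted versus shuffled input:

    import random

    def bubble_sort_comparisons(arr):
        """Bubble sort with an early-exit flag; returns the number of comparisons."""
        a = list(arr)
        comparisons = 0
        for i in range(len(a) - 1):
            swapped = False
            for j in range(len(a) - 1 - i):
                comparisons += 1
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swapped = True
            if not swapped:               # a full pass without swaps: the list is sorted
                break
        return comparisons

    n = 2000
    print("already sorted:", bubble_sort_comparisons(range(n)))                    # n - 1
    print("shuffled:      ", bubble_sort_comparisons(random.sample(range(n), n)))  # roughly n^2/2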
As Big O shows how an algorithm grows with respect to size, you can look at any aspect of an algorithm: its best case, average case, or worst case, in time and/or memory usage. And it tells you how these grow when the input size grows - but it doesn't say which algorithm is faster.
If you deal with small sizes then Big O won't matter - but an analysis can tell you how things will go when your input sizes increase.
One example of where the worst case might not be the asymptotic limit: suppose you have an algorithm that works on the set difference between some set and the input. It might run in O(N) time, but get faster as the input gets larger and knocks more values out of the working set.
Or, to get more abstract, f(x) = 1/x for x > 0 is a decreasing O(1) function.
I'll focus on time as a fairly common item of interest, but Big-O can also be used to evaluate resource requirements such as memory. It's essential for you to realize that Big-O tells how the runtime or resource requirements of a problem scale (asymptotically) as the problem size increases. It does not give you a prediction of the actual time required. Predicting the actual runtimes would require us to know the constants and lower order terms in the prediction formula, which are dependent on the hardware, operating system, language, compiler, etc. Using Big-O allows us to discuss algorithm behaviors while sidestepping all of those dependencies.
Let's talk about how to interpret Big-O scalability using a few examples. If a problem is O(1), it takes the same amount of time regardless of the problem size. That may be a nanosecond or a thousand seconds, but in the limit doubling or tripling the size of the problem does not change the time. If a problem is O(n), then doubling or tripling the problem size will (asymptotically) double or triple the amounts of time required, respectively. If a problem is O(n^2), then doubling or tripling the problem size will (asymptotically) take 4 or 9 times as long, respectively. And so on...
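A trivial sketch of that rule of thumb, using hypothetical step-count functions rather than any real algorithm:

    # One hypothetical step-count function per growth class.
    growth_classes = {
        "O(1)":   lambda n: 1,
        "O(n)":   lambda n: n,
        "O(n^2)": lambda n: n * n,
    }

    n = 1_000_000
    for name, steps in growth_classes.items():
        ratio = steps(2 * n) / steps(n)
        print(f"{name}: doubling n multiplies the work by {ratio:.0f}")
    # Prints 1, 2 and 4 respectively, matching the interpretation above.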
Lots of algorithms have different performance for their best, average, or worst cases. Sorting provides some fairly straightforward examples of how best, average, and worst case analyses may differ.
I'll assume that you know how insertion sort works. In the worst case, the list could be reverse ordered, in which case each pass has to move the value currently being considered as far to the left as possible, for all items. That yields O(n^2) behavior. Doubling the list size will take four times as long. More likely, the list of inputs is in randomized order. In that case, on average each item has to move half the distance towards the front of the list. That's less than in the worst case, but only by a constant. It's still O(n^2), so sorting a randomized list that's twice as large as our first randomized list will quadruple the amount of time required, on average. It will be faster than the worst case (due to the constants involved), but it scales in the same way. The best case, however, is when the list is already sorted. In that case, you check each item to see if it needs to be slid towards the front, and immediately find the answer is "no," so after checking each of the n values you're done in O(n) time. Consequently, using insertion sort for an already ordered list that is twice the size only takes twice as long rather than four times as long.
You are right, in that you can certainly say that an algorithm runs in O(f(n)) time in the best or average case. We do that all the time for, say, quicksort, which is O(N log N) on average, but O(N^2) in the worst case.
Unless otherwise specified, however, when you say that an algorithm runs in O(f(n)) time, you are saying the algorithm runs in O(f(n)) time in the worst case. At least that's the way it should be. Sometimes people get sloppy, and you will often hear that a hash table is O(1) when in the worst case it is actually worse.
The other way in which a big O definition can fail to characterize the worst case is that it's an upper bound only. Any function in O(N) is also in O(N^2) and O(2^N), so we would be entirely correct to say that quicksort takes O(2^N) time. We just don't say that because it isn't useful to do so.
Big Omega and Big Theta are there to specify lower bounds and tight bounds, respectively.
There are two "different" and very important tools:
best-, worst-, and average-case analysis generate a numerical function over the size of possible problem instances (e.g. f(x) = 2x^2 + 8x - 4), but it is very difficult to work precisely with these functions
big O notation extracts the main point, "how efficient the algorithm is"; it ignores a lot of unimportant things like constants and gives you the big picture

Can O(k * n) be considered as linear complexity (O(n))?

When talking about complexity in general, things like O(3n) tend to be simplified to O(n) and so on. This is merely theoretical, so how does complexity work in reality? Can O(3n) also be simplified to O(n)?
For example, suppose a task requires a solution with O(n) complexity, and in our code we have two linear searches of an array, which is O(n) + O(n). So, in reality, would that solution be considered linear complexity, or not fast enough?
Note that this question is asking about real implementations, not theory. I'm already aware that O(n) + O(n) is simplified to O(n).
Bear in mind that O(f(n)) does not give you the amount of real-world time that something takes: only the rate of growth as n grows. O(n) only indicates that if n doubles, the runtime doubles as well, which lumps functions together that take one second per iteration or one millennium per iteration.
For this reason, O(n) + O(n) and O(2n) are both equivalent to O(n), which is the set of functions of linear complexity, and which should be sufficient for your purposes.
Though for an algorithm that takes arbitrary-sized inputs you will usually want the best O(f(n)) you can get, an algorithm that grows faster (e.g. O(n²)) may still be faster in practice, especially when the data set size n is limited or fixed. However, learning to reason about O(f(n)) representations can help you compose algorithms with a predictable upper bound that is optimal for your use case.
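A minimal sketch of the question's two-linear-searches scenario, with made-up data; the combined routine does roughly 2n element inspections in the worst case, which is still linear:

    def find_both(arr, x, y):
        """Two independent linear searches over the same array: O(n) + O(n) = O(n)."""
        pos_x = pos_y = -1
        for i, value in enumerate(arr):   # first linear pass
            if value == x:
                pos_x = i
                break
        for i, value in enumerate(arr):   # second linear pass
            if value == y:
                pos_y = i
                break
        return pos_x, pos_y

    print(find_both([4, 8, 15, 16, 23, 42], 15, 42))   # (2, 5)

Doubling the array roughly doubles the worst-case work of this function, which is the defining property of linear growth; the constant factor of about 2 disappears in the O(n) notation.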
Yes, as long as k is a constant, you can write O(kn) = O(n).
The intuition behind this is that the constant k doesn't increase with the size of the input space and will at some point be negligible compared to n, so it doesn't have much influence on the overall complexity.
Yes - as long as the number k of array searches is not affected by the input size, even for inputs that are too big to be possible in practice, O(kn) = O(n). The main idea of the O notation is to emphasize how the computation time increases with the size of the input, and so constant factors that stay the same no matter how big the input is aren't of interest.
An example of an incorrect way to apply this is to say that you can perform selection sort in linear time because you can only fit about one billion numbers in memory, and so selection sort is merely one billion array searches. However, with an ideal computer with infinite memory, your algorithm would not be able to handle more than one billion numbers, and so it is not a correct sorting algorithm (algorithms must be able to handle arbitrarily large inputs unless you specify a limit as a part of the problem statement); it is merely a correct algorithm for sorting up to one billion numbers.
(As a matter of fact, once you put a limit on the input size, most algorithms will become constant-time because for all inputs within your limit, the algorithm will solve it using at most the amount of time that is required for the biggest / most difficult input.)

Is an algorithm with a worst-case time complexity of O(n) always faster than an algorithm with a worst-case time complexity of O(n^2)?

This question has appeared in my algorithms class. Here's my thought:
I think the answer is no, an algorithm with worst-case time complexity of O(n) is not always faster than an algorithm with worst-case time complexity of O(n^2).
For example, suppose we have total-time functions S(n) = 99999999n and T(n) = n^2. Then clearly S(n) = O(n) and T(n) = O(n^2), but T(n) is faster than S(n) for all n < 99999999.
Is this reasoning valid? I'm slightly skeptical that, while this is a counterexample, it might be a counterexample to the wrong idea.
Thanks so much!
Big-O notation says nothing about the speed of an algorithm for any given input; it describes how the time increases with the number of elements. If your algorithm executes in constant time, but that time is 100 billion years, then it's certainly slower than many linear, quadratic and even exponential algorithms for large ranges of inputs.
But that's probably not really what the question is asking. The question is asking whether an algorithm A1 with worst-case complexity O(N) is always faster than an algorithm A2 with worst-case complexity O(N^2); and by faster it probably refers to the complexity itself. In which case you only need a counter-example, e.g.:
A1 has normal complexity O(log n) but worst-case complexity O(n^2).
A2 has normal complexity O(n) and worst-case complexity O(n).
In this example, A1 is normally faster (i.e. scales better) than A2 even though it has a greater worst-case complexity.
Since the question says "always", it is enough to find a single counterexample to prove that the answer is no.
The example below is for O(n^2) vs. O(n log n), but the same is true for O(n^2) vs. O(n).
One simple example is bubble sort, where you keep comparing adjacent pairs until the array is sorted. Bubble sort is O(n^2).
If you use bubble sort on an already-sorted array, it will be faster than other algorithms of time complexity O(n log n).
You're talking about worst-case complexity here, and for some algorithms the worst case never happens in a practical application.
Saying that an algorithm runs faster than another means it runs faster for all input data and all input sizes. So the answer to your question is obviously no, because the worst-case time complexity is not an accurate measure of the running time; it measures the order of growth of the number of operations in the worst case.
In practice, the running time depends on the implementation, and is not only about the number of operations. For example, one has to care about memory allocation, cache efficiency, and spatial/temporal locality. And obviously, one of the most important things is the input data.
If you want examples of an algorithm running faster than another while having a higher worst-case complexity, look at the sorting algorithms and how their running times depend on the input.
You are correct, in the sense that you have provided a counterexample to the statement. If it is for an exam then, period, it should earn you full marks.
Yet for a better understanding of big-O notation and complexity in general, I will share my own reasoning below. I also suggest always picturing the following graph when you are confused, especially the O(n) and O(n^2) lines:
(graph comparing the growth of common Big-O classes)
My own reasoning, when I first learnt computational complexity, was this:
Big-O notation says that for sufficiently large inputs ("sufficiently large" depends on the exact formulas; in the graph, n = 20 when comparing the O(n) and O(n^2) lines), a higher-order algorithm will always be slower than a lower-order one.
That means, for small inputs, there is no guarantee that a higher-order-complexity algorithm will run slower than a lower-order one.
But Big-O notation does tell you one thing: as the input size keeps increasing, past a certain "sufficient" size, a higher-order-complexity algorithm will always be slower. And such a "sufficient" size is guaranteed to exist.
Worst-case complexity
While Big-O notation provides an upper bound on the running time of an algorithm, depending on the structure of the input and the implementation, an algorithm generally has a best-case, an average-case and a worst-case complexity.
The famous example is sorting algorithm: QuickSort vs MergeSort!
QuickSort, with a worst case of O(n^2)
MergeSort, with a worst case of O(n lg n)
However, in practice Quick Sort is usually faster than Merge Sort!
So, if your question is about worst-case complexity, Quick Sort and Merge Sort may be the best counterexample I can think of (because both of them are common and famous).
Therefore, combining the two parts: no matter whether you look at input size, input structure, or algorithm implementation, the answer to your question is NO.

Big O algorithms minimum time

I know that for some problems, no matter what algorithm you use to solve it, there will always be a certain minimum amount of time that will be required to solve the problem. I know BigO captures the worst-case (maximum time needed), but how can you find the minimum time required as a function of n? Can we find the minimum time needed for sorting n integers, or perhaps maybe finding the minimum of n integers?
What you are looking for is called best-case complexity. It is a mostly useless analysis for algorithms, while worst-case analysis is the most important analysis and average-case analysis is sometimes used in special scenarios.
The best-case complexity depends on the algorithm. For example, in a linear search the best case is when the searched number is at the beginning of the array, and in a binary search it is when the target sits at the first dividing point. In these cases the complexity is O(1).
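A tiny illustration: a linear search with a comparison counter makes the gap between best case and worst case visible.

    def linear_search_count(arr, target):
        """Return (index or -1, number of comparisons performed)."""
        for i, value in enumerate(arr):
            if value == target:
                return i, i + 1            # found after i + 1 comparisons
        return -1, len(arr)                # worst case: compared against every element

    data = list(range(1, 1001))
    print(linear_search_count(data, 1))    # best case:  (0, 1)      -> O(1)
    print(linear_search_count(data, 1000)) # worst case: (999, 1000) -> O(n)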
For a single problem, the best-case complexity may vary depending on the algorithm. For example, let's discuss some basic sorting algorithms.
In bubble sort the best case is when the array is already sorted, but even in this case you have to check all elements to be sure, so the best case here is O(n). The same goes for insertion sort.
For quicksort/mergesort/heapsort the best-case complexity is O(n log n).
For selection sort it is O(n^2).
So from the above you can see that the complexity (whether it is best, worst or average) depends on the algorithm, not on the problem.

Is n or nlog(n) better than constant or logarithmic time?

In the Princeton tutorial on Coursera the lecturer explains the common order-of-growth functions that are encountered. He says that linear and linearithmic running times are "what we strive" for and his reasoning was that as the input size increases so too does the running time. I think this is where he made a mistake because I have previously heard him refer to a linear order-of-growth as unsatisfactory for an efficient algorithm.
While he was speaking he also showed a chart that plotted the different running times - constant and logarithmic running times looked to be more efficient. So was this a mistake or is this true?
It is a mistake when taken in the context that O(n) and O(n log n) functions have better complexity than O(1) and O(log n) functions. When looking at typical cases of complexity in big O notation:
O(1) < O(log n) < O(n) < O(n log n) < O(n^2)
Notice that this doesn't necessarily mean that they will always be better performance-wise - we could have an O(1) function that takes a long time to execute even though its complexity is unaffected by element count. Such a function would look better in big O notation than an O(log n) function, but could actually perform worse in practice.
Generally speaking: a function with lower complexity (in big O notation) will outperform a function with greater complexity (in big O notation) when n is sufficiently high.
You're missing the broader context in which those statements must have been made. Different kinds of problems have different demands, and often even have theoretical lower bounds on how much work is absolutely necessary to solve them, no matter the means.
For operations like sorting, or scanning every element of a simple collection, there is a hard lower bound of the number of elements in the collection for those operations, because the output depends on every element of the input. [1] Thus, O(n) or O(n*log(n)) are the best one can do.
For other kinds of operations, like accessing a single element of a hash table or linked list, or searching in a sorted set, the algorithm needn't examine all of the input. In those settings, an O(n) operation would be dreadfully slow.
[1] Others will note that sorting by comparisons also has an n*log(n) lower bound, from information-theoretic arguments. There are non-comparison based sorting algorithms that can beat this, for some types of input.
Generally speaking, what we strive for is the best we can manage to do. But depending on what we're doing, that might be O(1), O(log log N), O(log N), O(N), O(N log N), O(N^2), O(N^3), or (for certain algorithms) perhaps O(N!) or even O(2^N).
Just for example, when you're dealing with searching in a sorted collection, binary search borders on trivial and gives O(log N) complexity. If the distribution of items in the collection is reasonably predictable, we can typically do even better--around O(log log N). Knowing that, an algorithm that was O(N) or O(N^2) (for a couple of obvious examples) would probably be pretty disappointing.
On the other hand, sorting is generally quite a bit higher complexity--the "good" algorithms manage O(N log N), and the poorer ones are typically around O(N^2). Therefore, for sorting an O(N) algorithm is actually very good (in fact, only possible for rather constrained types of inputs), and we can pretty much count on the fact that something like O(log log N) simply isn't possible.
Going even further, we'd be happy to manage a matrix multiplication in only O(N^2) instead of the usual O(N^3). We'd be ecstatic to get optimum, reproducible answers to the traveling salesman problem or subset sum problem in only O(N^3), given that optimal solutions to these normally require O(N!).
Algorithms with sublinear behavior like O(1) or O(Log(N)) are special in that they do not require looking at all the elements. In a way this is a fallacy, because if there really are N elements, it will take O(N) just to read or compute them.
Sublinear algorithms are often possible after some preprocessing has been performed. Think of binary search in a sorted table, taking O(Log(N)). If the data is initially unsorted, it will cost O(N Log(N)) to sort it first. The cost of sorting can be balanced if you perform many searches, say K, on the same data set. Indeed, without the sort, the cost of the searches will be O(K N), and with pre-sorting O(N Log(N)+ K Log(N)). You win if K >> Log(N).
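A rough sketch of that trade-off (sizes and timings here are arbitrary and machine-dependent), comparing repeated linear membership tests against one sort followed by K binary searches with the standard bisect module:

    import bisect
    import math
    import random
    import time

    n, k = 1_000_000, 10_000
    data = [random.random() for _ in range(n)]
    queries = random.sample(data, k)

    # Estimate the cost of K linear searches from a small sample of full scans.
    start = time.perf_counter()
    for q in queries[:100]:
        _ = q in data                      # linear scan of the unsorted list
    per_linear_search = (time.perf_counter() - start) / 100

    # Sort once (O(N Log(N))), then do K binary searches (O(K Log(N))).
    start = time.perf_counter()
    data_sorted = sorted(data)
    for q in queries:
        _ = bisect.bisect_left(data_sorted, q)
    presorted_total = time.perf_counter() - start

    print(f"estimated cost of {k} linear searches: {per_linear_search * k:.1f} s")
    print(f"sort once + {k} binary searches:       {presorted_total:.2f} s")
    print("Log2(N) here is only about", round(math.log2(n)))

With K = 10,000 searches, K is far larger than Log(N) (about 20), so paying O(N Log(N)) once is the clear win, just as the inequality above says.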
This said, when no preprocessing is allowed, O(N) behavior is ideal, and O(N Log(N)) is quite comfortable as well (for a million elements, Lg(N) is only 20). You start screaming with O(N²) and worse.
He said those algorithms are what we strive for, which is generally true. Many algorithms cannot possibly be improved beyond logarithmic or linear time, and while constant time would be better in a perfect world, it's often unattainable.
Constant time is always better, because the time (or space) complexity doesn't depend on the problem size... isn't it a great feature? :-)
Then we have O(N), and then O(N log N).
Did you know? Problems with constant time complexity exist!
E.g.:
Let A[N] be an array of N integer values, with N > 3. Find an algorithm to tell whether the sum of the first three elements is positive or negative.
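That exercise really is constant time, since the work never depends on N; a tiny sketch:

    def sign_of_first_three(a):
        """O(1): inspects exactly three elements, no matter how long the array is."""
        s = a[0] + a[1] + a[2]
        return "positive" if s > 0 else "negative" if s < 0 else "zero"

    print(sign_of_first_three([5, -2, 1, 99, -1000, 7]))   # 'positive'; the rest is never read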
What we strive for is efficiency, in the sense of designing algorithms with a time (or space) complexity that does not exceed their theoretical lower bound.
For instance, using comparison-based algorithms, you can't find a value in a sorted array faster than Omega(Log(N)), and you cannot sort an array faster than Omega(N Log(N)) - in the worst case.
Thus, binary search O(Log(N)) and Heapsort O(N Log(N)) are efficient algorithms, while linear search O(N) and Bubblesort O(N²) are not.
The lower bound depends on the problem to be solved, not on the algorithm.
Yes, constant time, i.e. O(1), is better than linear time O(n), because the former does not depend on the input size of the problem. The order, from better to worse, is O(1), O(log n), O(n), O(n log n).
Linear or linearithmic time is what we strive for because going for O(1) might not be realistic, as in every sorting algorithm we need at least a few comparisons, which the professor tries to prove with his decision-tree comparison analysis, where he sorts three elements a, b, c and proves a lower bound of n log n. Check his "Complexity of Sorting" in the Mergesort lecture.

Resources