When is a Big O(n log n) algorithm not preferable over an O(n^2) algorithm?

Is there an example of an algorithm where we prefer O(n^2) time complexity over O(n log n)?
I have seen this question somewhere but did not find an answer.

For a large problem, O(n log n) will always beat O(n^2). For a small problem, the constant factors hidden by the big-O notation may cause you to prefer the O(n^2) algorithm. For instance, the O(n log n) quicksort is faster than the O(n^2) insertion sort, but some quicksort implementations switch to insertion sort when the partitions get small (fewer than about ten elements).
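As a hedged sketch of such a hybrid (the cutoff of 10 is illustrative, not any particular library's tuned threshold):

// Sketch of a hybrid sort: quicksort that falls back to insertion sort
// for small partitions. The cutoff is illustrative, not a tuned value.
const CUTOFF = 10;

function insertionSort(a, lo, hi) {
  for (let i = lo + 1; i <= hi; i++) {
    const key = a[i];
    let j = i - 1;
    while (j >= lo && a[j] > key) {
      a[j + 1] = a[j];
      j--;
    }
    a[j + 1] = key;
  }
}

function hybridQuicksort(a, lo = 0, hi = a.length - 1) {
  if (hi - lo + 1 <= CUTOFF) {
    insertionSort(a, lo, hi);       // O(n^2) but cheap on tiny ranges
    return;
  }
  // Lomuto partition around the last element (simple, not an optimal pivot choice)
  const pivot = a[hi];
  let i = lo;
  for (let j = lo; j < hi; j++) {
    if (a[j] < pivot) {
      [a[i], a[j]] = [a[j], a[i]];
      i++;
    }
  }
  [a[i], a[hi]] = [a[hi], a[i]];
  hybridQuicksort(a, lo, i - 1);
  hybridQuicksort(a, i + 1, hi);
}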

There are several reasons to choose an algorithm with a higher time complexity:
Speed: The asymptotic complexity only applies to values of n greater than some n_0. Also, it assumes an idealized machine model that only partially matches real machines, with their multiple levels of cache and constrained memory.
Space: Some algorithms require more space than others, and thus become impossible to implement. Space use may also simply influence the speed on a real machine. For example, locality of reference affects cache hits and misses, which is one reason why Quicksort often performs better than Mergesort.
Implementation complexity: In some cases the loss in performance is simply negligible, but the development time isn't.

Many naive O(n^2) algorithms are faster on small inputs than their more complicated O(n log(n)) brethren.
For example, the GNU MP Bignum library has a very highly optimized multiplication implementation. But for numbers made up of a couple dozen words it just uses schoolbook multiplication (the best threshold depends heavily on the machine). In fact GMP transitions through a whole sequence of fastest-around-size-X algorithms.

One possibility: the O(n log n) algorithm is recursive, but you can program the O(n^2) one iteratively, and the programming language you must use does not support recursion.
"Preferred" is relative here, by the way. If the data set were large enough, you could emulate recursion by using your own stack variable that you manipulate in an iteratively implemented version of the "recursive" algorithm (we had to do that exercise in Guy Steele's Comparative Programming class at CMU back in the day).

Related

Are there any cases where you would prefer a higher big-O time complexity algorithm over the lower one?

Are there are any cases where you would prefer O(log n) time complexity to O(1) time complexity? Or O(n) to O(log n)?
Do you have any examples?
There can be many reasons to prefer an algorithm with higher big O time complexity over the lower one:
most of the time, lower big-O complexity is harder to achieve and requires skilled implementation, a lot of knowledge and a lot of testing.
big-O hides the constant factor: an algorithm that runs in 10^5 steps is better from the big-O point of view than one that runs in 1/10^5 * log(n) steps (O(1) vs O(log n)), but for any reasonable n the second one will perform better. For example, the best known complexity for matrix multiplication is O(n^2.373), but the constant is so high that (to my knowledge) no computational libraries use it.
big-O makes sense when you calculate over something big. If you need to sort an array of three numbers, it matters very little whether you use an O(n*log(n)) or an O(n^2) algorithm.
sometimes the advantage of the lower time complexity can be really negligible. For example, the tango tree data structure gives O(log log N) time complexity to find an item, while a binary search tree finds the same in O(log N). Even for huge numbers like n = 10^20 the difference is negligible.
time complexity is not everything. Imagine an algorithm that runs in O(n^2) and requires O(n^2) memory. It might be preferable to O(n^3) time and O(1) space when n is not really big. The problem is that you can wait a long time, but you will hardly find RAM big enough for the memory-hungry algorithm.
parallelization is a good feature in our distributed world. There are algorithms that are easily parallelizable, and there are some that do not parallelize at all. Sometimes it makes sense to run an algorithm on 1000 commodity machines with a higher complexity than using one machine with a slightly better complexity.
in some places (security) a complexity can be a requirement. No one wants a hash algorithm that can hash blazingly fast (because then other people can brute-force your passwords way faster)
although this is not a switch of complexity class, some security functions should be written in a manner that prevents timing attacks. They mostly stay in the same complexity class, but are modified so that they always take the worst case to do something. One example is comparing whether strings are equal. In most applications it makes sense to break out fast if the first bytes are different, but in security you will still wait for the very end to tell the bad news (a small sketch of such a comparison follows this list).
somebody patented the lower-complexity algorithm and it is more economical for a company to use higher complexity than to pay money.
some algorithms adapt well to particular situations. Insertion sort, for example, has an average time-complexity of O(n^2), worse than quicksort or mergesort, but as an online algorithm it can efficiently sort a list of values as they are received (as user input) where most other algorithms can only efficiently operate on a complete list of values.
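As a hedged sketch of that constant-time comparison (illustrative JavaScript, not a vetted security routine; Node's crypto.timingSafeEqual provides a real equivalent for byte buffers):

// Sketch: always scan the full length instead of returning at the first
// mismatch, so the running time leaks nothing about how many leading
// characters matched.
function constantTimeEquals(a, b) {
  // Length differences are still revealed, but not where the strings differ.
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i);   // accumulate mismatches; never exit early
  }
  return diff === 0;
}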
There is always the hidden constant, which can be lower on the O(log n) algorithm. So it can work faster in practice for real-life data.
There are also space concerns (e.g. running on a toaster).
There's also developer time concern - O(log n) may be 1000× easier to implement and verify.
I'm surprised nobody has mentioned memory-bound applications yet.
There may be an algorithm that has fewer floating point operations, either due to its complexity (e.g. O(1) < O(log n)) or because the constant in front of the complexity is smaller (e.g. 2n^2 < 6n^2). Regardless, you might still prefer the algorithm with more FLOPs if the lower-FLOP algorithm is more memory-bound.
What I mean by "memory-bound" is that you are often accessing data that is constantly out of cache. In order to fetch this data, you have to pull it from main memory into your cache before you can perform your operation on it. This fetching step is often quite slow - much slower than the operation itself.
Therefore, if your algorithm requires more operations (yet these operations are performed on data that is already in cache [and therefore no fetching required]), it will still out-perform your algorithm with fewer operations (which must be performed on out-of-cache data [and therefore require a fetch]) in terms of actual wall-time.
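A rough, hypothetical micro-benchmark of that effect (JavaScript engines add their own noise, so treat this strictly as a sketch): both loops below do the same O(n) amount of arithmetic, but one walks contiguous memory while the other chases pointers through separately allocated heap objects.

// Same amount of arithmetic, very different memory access patterns.
const N = 1_000_000;

// Contiguous storage: good locality, mostly cache hits
const contiguous = new Float64Array(N).map(() => Math.random());

// Linked list of individually allocated nodes: poor locality
let head = null;
for (let i = 0; i < N; i++) head = { value: Math.random(), next: head };

console.time('contiguous sum');
let s1 = 0;
for (let i = 0; i < N; i++) s1 += contiguous[i];
console.timeEnd('contiguous sum');

console.time('linked-list sum');
let s2 = 0;
for (let node = head; node !== null; node = node.next) s2 += node.value;
console.timeEnd('linked-list sum');

On typical hardware the contiguous traversal tends to win comfortably, even though the operation counts are identical.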
In contexts where data security is a concern, a more complex algorithm may be preferable to a less complex algorithm if the more complex algorithm has better resistance to timing attacks.
Alistra nailed it but failed to provide any examples so I will.
You have a list of 10,000 UPC codes for what your store sells: a 10-digit UPC, an integer for the price (in pennies), and 30 characters of description for the receipt.
O(log N) approach: You have a sorted list. 44 bytes if ASCII, 84 if Unicode. Alternately, treat the UPC as an int64 and you get 42 & 72 bytes. 10,000 records--in the highest case you're looking at a bit under a megabyte of storage.
O(1) approach: Don't store the UPC, instead you use it as an entry into the array. In the lowest case you're looking at almost a third of a terabyte of storage.
Which approach you use depends on your hardware. On almost any reasonable modern configuration you're going to use the log N approach. I can picture the second approach being the right answer if for some reason you're running in an environment where RAM is critically short but you have plenty of mass storage. A third of a terabyte on a disk is no big deal, and getting your data in one probe of the disk is worth something. A simple binary search takes 13 probes on average. (Note, however, that by clustering your keys you can get this down to a guaranteed 3 reads, and in practice you would cache the first one.)
Consider a red-black tree. It has access, search, insert, and delete of O(log n). Compare to an array, which has access of O(1) and the rest of the operations are O(n).
So given an application where we insert, delete, or search more often than we access and a choice between only these two structures, we would prefer the red-black tree. In this case, you might say we prefer the red-black tree's more cumbersome O(log n) access time.
Why? Because the access is not our overriding concern. We are making a trade off: the performance of our application is more heavily influenced by factors other than this one. We allow this particular algorithm to suffer performance because we make large gains by optimizing other algorithms.
So the answer to your question is simply this: when the algorithm's growth rate isn't what we want to optimize, when we want to optimize something else. All of the other answers are special cases of this. Sometimes we optimize the run time of other operations. Sometimes we optimize for memory. Sometimes we optimize for security. Sometimes we optimize maintainability. Sometimes we optimize for development time. Even the overriding constant being low enough to matter is optimizing for run time when you know the growth rate of the algorithm isn't the greatest impact on run time. (If your data set was outside this range, you would optimize for the growth rate of the algorithm because it would eventually dominate the constant.) Everything has a cost, and in many cases, we trade the cost of a higher growth rate for the algorithm to optimize something else.
Yes.
In a real case, we ran some tests on doing table lookups with both short and long string keys.
We used a std::map; a std::unordered_map with a hash that samples at most 10 times over the length of the string (our keys tend to be guid-like, so this is decent); an unordered_map with a hash that samples every character (in theory fewer collisions); an unsorted vector where we do an == compare; and (if I remember correctly) an unsorted vector where we also store a hash, compare the hash first, then compare the characters.
These algorithms range from O(1) (unordered_map) to O(n) (linear search).
For modest sized N, quite often the O(n) beat the O(1). We suspect this is because the node-based containers required our computer to jump around in memory more, while the linear-based containers did not.
O(lg n) exists between the two. I don't remember how it did.
The performance difference wasn't that large, and on larger data sets the hash-based one performed much better. So we stuck with the hash-based unordered map.
In practice, for reasonably sized n, O(lg n) is O(1). If your computer only has room for 4 billion entries in your table, then lg n is bounded above by 32 (lg(2^32) = 32; in computer science, lg is shorthand for log base 2).
In practice, lg(n) algorithms are slower than O(1) algorithms not because of the logarithmic growth factor, but because the lg(n) portion usually means there is a certain level of complexity to the algorithm, and that complexity adds a larger constant factor than any of the "growth" from the lg(n) term.
However, complex O(1) algorithms (like hash mapping) can easily have a similar or larger constant factor.
The possibility to execute an algorithm in parallel.
I don't know if there is an example for the classes O(log n) and O(1), but for some problems, you choose an algorithm with a higher complexity class when the algorithm is easier to execute in parallel.
Some algorithms cannot be parallelized, but have a very low complexity class. Consider another algorithm that achieves the same result and can be parallelized easily, but has a higher complexity class. When executed on one machine, the second algorithm is slower, but when executed on multiple machines, the real execution time gets lower and lower, while the first algorithm cannot speed up.
Let's say you're implementing a blacklist on an embedded system, where numbers between 0 and 1,000,000 might be blacklisted. That leaves you two possible options:
Use a bitset of 1,000,000 bits
Use a sorted array of the blacklisted integers and use a binary search to access them
Access to the bitset has guaranteed constant time. In terms of time complexity, it is optimal, both from a theoretical and from a practical point of view (it is O(1) with an extremely low constant overhead).
Still, you might want to prefer the second solution. Especially if you expect the number of blacklisted integers to be very small, as it will be more memory efficient.
And even if you do not develop for an embedded system where memory is scarce, I can just increase the arbitrary limit of 1,000,000 to 1,000,000,000,000 and make the same argument. Then the bitset would require about 125 GB of memory. Having a guaranteed worst-case complexity of O(1) might not convince your boss to provide you with such a powerful server.
Here, I would strongly prefer a binary search (O(log n)) or binary tree (O(log n)) over the O(1) bitset. And probably, a hash table with its worst-case complexity of O(n) will beat all of them in practice.
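A minimal sketch of the two options above (illustrative JavaScript, with a Uint8Array standing in for the bitset):

const LIMIT = 1_000_000;

// Option 1: bitset - O(1) lookups, but always LIMIT/8 bytes of memory
const bits = new Uint8Array(Math.ceil(LIMIT / 8));
const ban = n => { bits[n >> 3] |= 1 << (n & 7); };
const isBannedBitset = n => (bits[n >> 3] & (1 << (n & 7))) !== 0;

// Option 2: sorted array of blacklisted numbers - O(log n) lookups,
// memory proportional to the (possibly tiny) number of entries
const blacklist = [42, 1337, 99999].sort((a, b) => a - b);
function isBannedBinarySearch(n) {
  let lo = 0, hi = blacklist.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (blacklist[mid] === n) return true;
    if (blacklist[mid] < n) lo = mid + 1; else hi = mid - 1;
  }
  return false;
}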
My answer here Fast random weighted selection across all rows of a stochastic matrix is an example where an algorithm with complexity O(m) is faster than one with complexity O(log(m)), when m is not too big.
A more general question is whether there are situations where one would prefer an O(f(n)) algorithm to an O(g(n)) algorithm even though g(n) << f(n) as n tends to infinity. As others have already mentioned, the answer is clearly "yes" in the case where f(n) = log(n) and g(n) = 1. It is sometimes yes even in the case where f(n) is exponential but g(n) is polynomial. A famous and important example is that of the Simplex Algorithm for solving linear programming problems. In the 1970s it was shown to be O(2^n) in the worst case. Thus, its worst-case behavior is infeasible. But its average-case behavior is extremely good, even for practical problems with tens of thousands of variables and constraints. In the 1980s, polynomial-time algorithms (such as Karmarkar's interior-point algorithm) for linear programming were discovered, but 30 years later the simplex algorithm still seems to be the algorithm of choice (except for certain very large problems). This is for the obvious reason that average-case behavior is often more important than worst-case behavior, but also for a more subtle reason: the simplex algorithm is in some sense more informative (e.g. sensitivity information is easier to extract).
People have already answered your exact question, so I'll tackle a slightly different question that people may actually be thinking of when coming here.
A lot of the "O(1) time" algorithms and data structures actually only take expected O(1) time, meaning that their average running time is O(1), possibly only under certain assumptions.
Common examples: hashtables, expansion of "array lists" (a.k.a. dynamically sized arrays/vectors).
In such scenarios, you may prefer to use data structures or algorithms whose time is guaranteed to be absolutely bounded logarithmically, even though they may perform worse on average.
An example might therefore be a balanced binary search tree, whose running time is worse on average but better in the worst case.
To put my 2 cents in:
Sometimes a worse complexity algorithm is selected in place of a better one, when the algorithm runs on a certain hardware environment. Suppose our O(1) algorithm non-sequentially accesses every element of a very big, fixed-size array to solve our problem. Then put that array on a mechanical hard drive, or a magnetic tape.
In that case, the O(log n) algorithm (suppose it accesses the disk sequentially) becomes more favourable.
There is a good use case for using a O(log(n)) algorithm instead of an O(1) algorithm that the numerous other answers have ignored: immutability. Hash maps have O(1) puts and gets, assuming good distribution of hash values, but they require mutable state. Immutable tree maps have O(log(n)) puts and gets, which is asymptotically slower. However, immutability can be valuable enough to make up for worse performance and in the case where multiple versions of the map need to be retained, immutability allows you to avoid having to copy the map, which is O(n), and therefore can improve performance.
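As a hedged sketch of the path-copying idea behind immutable tree maps (unbalanced for brevity; a real implementation would use a balanced tree, so treat this as illustrative only): each insert copies just the O(log n) nodes along the search path and returns a new root, so older versions remain valid without an O(n) copy.

// Sketch of a persistent (path-copying) binary search tree.
// 'insert' returns a NEW root; the old tree is untouched and still usable.
const node = (key, value, left = null, right = null) => ({ key, value, left, right });

function insert(root, key, value) {
  if (root === null) return node(key, value);
  if (key < root.key) return node(root.key, root.value, insert(root.left, key, value), root.right);
  if (key > root.key) return node(root.key, root.value, root.left, insert(root.right, key, value));
  return node(key, value, root.left, root.right);   // replace value for an existing key
}

function get(root, key) {
  while (root !== null) {
    if (key === root.key) return root.value;
    root = key < root.key ? root.left : root.right;
  }
  return undefined;
}

// Both versions coexist without an O(n) copy:
const v1 = insert(insert(null, 'a', 1), 'b', 2);
const v2 = insert(v1, 'a', 99);
console.log(get(v1, 'a'), get(v2, 'a'));   // 1 99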
Simply: because the coefficient - the costs associated with setup, storage, and the execution time of each step - can be much, much larger for an algorithm with a smaller big-O than for one with a larger big-O. Big-O is only a measure of an algorithm's scalability.
Consider the following example from the Hacker's Dictionary, proposing a sorting algorithm relying on the Multiple Worlds Interpretation of Quantum Mechanics:
Permute the array randomly using a quantum process,
If the array is not sorted, destroy the universe.
All remaining universes are now sorted [including the one you are in].
(Source: http://catb.org/~esr/jargon/html/B/bogo-sort.html)
Notice that the big-O of this algorithm is O(n), which beats any known sorting algorithm to date on generic items. The coefficient of the linear step is also very low (since it's only a comparison, not a swap, that is done linearly). A similar algorithm could, in fact, be used to solve any problem in both NP and co-NP in polynomial time, since each possible solution (or possible proof that there is no solution) can be generated using the quantum process, then verified in polynomial time.
However, in most cases, we probably don't want to take the risk that Multiple Worlds might not be correct, not to mention that the act of implementing step 2 is still "left as an exercise for the reader".
At any point when n is bounded and the constant multiplier of O(1) algorithm is higher than the bound on log(n). For example, storing values in a hashset is O(1), but may require an expensive computation of a hash function. If the data items can be trivially compared (with respect to some order) and the bound on n is such that log n is significantly less than the hash computation on any one item, then storing in a balanced binary tree may be faster than storing in a hashset.
In a realtime situation where you need a firm upper bound you would select e.g. a heapsort as opposed to a Quicksort, because heapsort's average behaviour is also its worst-case behaviour.
Adding to the already good answers: a practical example would be hash indexes vs. B-tree indexes in the Postgres database.
Hash indexes form a hash table index to access the data on disk, while B-tree indexes, as the name suggests, use a B-tree data structure.
In Big-O terms these are O(1) vs O(log N).
Hash indexes are presently discouraged in Postgres because, in a real-life situation, particularly in database systems, achieving hashing without collisions is very hard (it can lead to an O(N) worst-case complexity), and because of this it is even harder to make them crash-safe (via write-ahead logging, WAL, in Postgres).
This tradeoff is made in this situation since O(log N) is good enough for indexes, implementing O(1) is pretty hard, and the time difference would not really matter.
When n is small, and O(1) is constantly slow.
When the "1" work unit in O(1) is very high relative to the work unit in O(log n) and the expected set size is small-ish. For example, it's probably slower to compute Dictionary hash codes than iterate an array if there are only two or three items.
or
When the memory or other non-time resource requirements in the O(1) algorithm are exceptionally large relative to the O(log n) algorithm.
When redesigning a program, a procedure might be found that could be optimized from O(lg N) to O(1); but if it is not the bottleneck of the program, and the O(1) algorithm is hard to understand, then you do not have to use the O(1) algorithm.
When the O(1) algorithm needs more memory than you can supply, while the running time of the O(lg N) algorithm is acceptable.
This is often the case for security applications, where we want to design problems whose algorithms are slow on purpose in order to stop someone from obtaining an answer to a problem too quickly.
Here are a couple of examples off the top of my head.
Password hashing is sometimes made arbitrarily slow in order to make it harder to guess passwords by brute force. This Information Security post has a bullet point about it (and much more). A small sketch follows the last example below.
Bitcoin uses a controllably slow problem for a network of computers to solve in order to "mine" coins. This allows the currency to be mined at a controlled rate by the collective system.
Asymmetric ciphers (like RSA) are designed to make decryption without the keys intentionally slow, in order to prevent someone without the private key from cracking the encryption. The algorithms are designed to be cracked in hopefully O(2^n) time, where n is the bit-length of the key (this is brute force).
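The promised sketch of the password-hashing example (using Node's built-in PBKDF2; the iteration count shown is illustrative, not a recommendation):

// Sketch: key stretching makes each password guess deliberately expensive.
// Raising 'iterations' slows an attacker's brute force by the same factor.
const crypto = require('crypto');

function hashPassword(password, iterations = 100000) {
  const salt = crypto.randomBytes(16);                                      // random per-user salt
  const hash = crypto.pbkdf2Sync(password, salt, iterations, 32, 'sha256'); // deliberately slow
  return { salt: salt.toString('hex'), iterations, hash: hash.toString('hex') };
}

console.log(hashPassword('hunter2'));   // noticeably slower than a single SHA-256 call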
Elsewhere in CS, Quick Sort is O(n^2) in the worst case but in the general case is O(n*log(n)). For this reason, "Big O" analysis sometimes isn't the only thing you care about when analyzing algorithm efficiency.
There are plenty of good answers, a few of which mention the constant factor, the input size and memory constraints, among many other reasons complexity is only a theoretical guideline rather than the end-all determination of real-world fitness for a given purpose or speed.
Here's a simple, concrete example to illustrate these ideas. Let's say we want to figure out whether an array has a duplicate element. The naive quadratic approach is to write a nested loop:
const hasDuplicate = arr => {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) {
        return true;
      }
    }
  }
  return false;
};

console.log(hasDuplicate([1, 2, 3, 4]));
console.log(hasDuplicate([1, 2, 4, 4]));
But this can be done in linear time by creating a set data structure (i.e. removing duplicates), then comparing its size to the length of the array:
const hasDuplicate = arr => new Set(arr).size !== arr.length;
console.log(hasDuplicate([1, 2, 3, 4]));
console.log(hasDuplicate([1, 2, 4, 4]));
What Big O tells us is that the new Set approach will scale a great deal better from a time complexity standpoint.
However, it turns out that the "naive" quadratic approach has a lot going for it that Big O can't account for:
No additional memory usage
No heap memory allocation (no new)
No garbage collection for the temporary Set
Early bailout; in a case when the duplicate is known to be likely in the front of the array, there's no need to check more than a few elements.
If our use case is bounded to small arrays, if we are in a resource-constrained environment, and/or if other known common-case properties allow us to establish through benchmarks that the nested loop is faster on our particular workload, it might be a good idea.
On the other hand, maybe the set can be created one time up-front and used repeatedly, amortizing its overhead cost across all of the lookups.
This leads inevitably to maintainability/readability/elegance and other "soft" costs. In this case, the new Set() approach is probably more readable, but it's just as often (if not more often) that achieving the better complexity comes at great engineering cost.
Creating and maintaining a persistent, stateful Set structure can introduce bugs, memory/cache pressure, code complexity, and all other manner of design tradeoffs. Negotiating these tradeoffs optimally is a big part of software engineering, and time complexity is just one factor to help guide that process.
A few other examples that I don't see mentioned yet:
In real-time environments, for example resource-constrained embedded systems, sometimes complexity sacrifices are made (typically related to caches and memory or scheduling) to avoid incurring occasional worst-case penalties that can't be tolerated because they might cause jitter.
Also in embedded programming, the size of the code itself can cause cache pressure, impacting memory performance. If an algorithm has worse complexity but will result in massive code size savings, that might be a reason to choose it over an algorithm that's theoretically better.
In most implementations of recursive linearithmic algorithms like quicksort, when the array is small enough, a quadratic sorting algorithm like insertion sort is often called because the overhead of recursive function calls on increasingly tiny arrays tends to outweigh the cost of nested loops. Insertion sort is also fast on mostly-sorted arrays as the inner loop won't run much. This answer discusses this in an older version of Chrome's V8 engine before they moved to Timsort.

What is the purpose of Big-O notation in computer science if it doesn't give all the information needed?

What is the use of Big-O notation in computer science if it doesn't give all the information needed?
For example, if one algorithm runs at 1000n and one at n, it is true that they are both O(n). But I still may make a foolish choice based on this information, since one algorithm takes 1000 times as long as the other for any given input.
I still need to know all the parts of the equation, including the constant, to make an informed choice, so what is the importance of this "intermediate" comparison? I end up losing important information when it gets reduced to this form, and what do I gain?
What does that constant factor represent? You can't say with certainty, for example, that an algorithm that is O(1000n) will be slower than an algorithm that's O(5n). It might be that the 1000n algorithm loads all data into memory and makes 1,000 passes over that data, and the 5n algorithm makes five passes over a file that's stored on a slow I/O device. The 1000n algorithm will run faster even though its "constant" is much larger.
In addition, some computers perform some operations more quickly than other computers do. It's quite common, given two O(n) algorithms (call them A and B), for A to execute faster on one computer and B to execute faster on the other computer. Or two different implementations of the same algorithm can have widely varying runtimes on the same computer.
Asymptotic analysis, as others have said, gives you an indication of how an algorithm's running time varies with the size of the input. It's useful for giving you a good starting place in algorithm selection. Quick reference will tell you that a particular algorithm is O(n) or O(n log n) or whatever, but it's very easy to find more detailed information on most common algorithms. Still, that more detailed analysis will only give you a constant number without saying how that number relates to real running time.
In the end, the only way you can determine which algorithm is right for you is to study it yourself and then test it against your expected data.
In short, I think you're expecting too much from asymptotic analysis. It's a useful "first line" filter. But when you get beyond that you have to look for more information.
As you correctly noted, it does not give you information on the exact running time of an algorithm. It is mainly used to indicate the complexity of an algorithm, to indicate if it is linear in the input size, quadratic, exponential, etc. This is important when choosing between algorithms if you know that your input size is large, since even a 1000n algorithm will beat a 1.23 exp(n) algorithm for large enough n.
In real world algorithms, the hidden 'scaling factor' is of course important. It is therefore not uncommon to use an algorithm with a 'worse' complexity if it has a lower scaling factor. Many practical implementations of sorting algorithms are for example 'hybrid' and will resort to some 'bad' algorithm like insertion sort (which is O(n^2) but very simple to implement) for n < 10, while changing to quicksort (which is O(n log(n)) but more complex) for n >= 10.
Big-O tells you how the runtime or memory consumption of a process changes as the size of its input changes. O(n) and O(1000n) are both still O(n) -- if you double the size of the input, then for all practical purposes the runtime doubles too.
Now, we can have an O(n) algorithm and an O(n^2) algorithm where the coefficient of n is 1000000 and the coefficient of n^2 is 1, in which case the O(n^2) algorithm would outperform the O(n) for smaller n values. This doesn't change the fact, however, that the second algorithm's runtime grows more rapidly than the first's, and this is the information that big-O tells us. There will be some input size at which the O(n) algorithm begins to outperform the O(n^2) algorithm.
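To make that concrete with those coefficients: 1000000 * n equals 1 * n^2 exactly when n = 1,000,000, so below a million elements the O(n^2) algorithm is ahead, and beyond that the O(n) algorithm pulls away for good.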
In addition to the hidden impact of the constant term, complexity notation also only considers the worst case instance of a problem.
Case in point, the simplex method (linear programming) has exponential complexity for all known implementations. However, the simplex method works much faster in practice than the provably polynomial-time interior point methods.
Complexity notation has much value for theoretical problem classification. If you want some more information on practical consequences check out "Smoothed Analysis" by Spielman: http://www.cs.yale.edu/homes/spielman
This is what you are looking for.
Its main purpose is for rough comparisons of logic. The difference between O(n) and O(1000n) is large for n ~ 1000 (n roughly equal to 1000) and n < 1000, but when you compare them at values where n >> 1000 (n much larger than 1000) the difference is minuscule.
You are right in saying they both scale linearly, and knowing the coefficient helps in a detailed analysis, but generally in computing the difference between linear (O(cn)) and higher-order (e.g. O(cn^x)) performance is more important to note than the difference between two linear times. There is more value in comparing runtimes of higher orders, where the performance difference grows far more dramatically.
The overall purpose of Big O notation is to give a sense of relative performance time in order to compare and further optimize algorithms.
You're right that it doesn't give you all information, but there's no single metric in any field that does that.
Big-O notation tells you how quickly the performance gets worse, as your dataset gets larger. In other words, it describes the type of performance curve, but not the absolute performance.
Generally, Big-O notation is useful to express an algorithm's scaling performance as it falls into one of three basic categories:
Linear
Logarithmic (or "linearithmic")
Exponential
It is possible to do deep analysis of an algorithm for very accurate performance measurements, but it is time consuming and not really necessary to get a broad indication of performance.

Why is Insertion sort better than Quick sort for small list of elements?

Isn't Insertion sort O(n^2) > Quicksort O(n log n)...so for a small n, won't the relation be the same?
Big-O Notation describes the limiting behavior when n is large, also known as asymptotic behavior. This is an approximation. (See http://en.wikipedia.org/wiki/Big_O_notation)
Insertion sort is faster for small n because Quicksort has extra overhead from the recursive function calls. Insertion sort is also stable (unlike typical Quicksort implementations) and requires less memory.
This question describes some further benefits of insertion sort. ( Is there ever a good reason to use Insertion Sort? )
Define "small".
When benchmarking sorting algorithms, I found out that switching from quicksort to insertion sort - despite what everybody was saying - actually hurts performance (recursive quicksort in C) for arrays larger than 4 elements. And those arrays can be sorted with a size-dependent optimal sorting algorithm.
That being said, always keep in mind that O(n...) only counts the number of comparisons (in this specific case), not the speed of the algorithm. The speed depends on the implementation, e.g., whether or not your quicksort function is recursive and how quickly function calls are dealt with.
Last but not least, big-O notation is only an upper bound.
If algorithm A requires 10000 n log n comparisons and algorithm B requires 10 n^2, the first is O(n log n) and the second is O(n^2). Nevertheless, the second will (probably) be faster.
O()-notation is typically used to characterize performance for large problems, while deliberately ignoring constant factors and additive offsets to performance.
This is important because constant factors and overhead can vary greatly between processors and between implementations: the performance you get for a single-threaded Basic program on a 6502 machine will be very different from the same algorithm implemented as a C program running on an Intel i7-class processor. Note that implementation optimization is also a factor: attention to detail can often get you a major performance boost, even if all other factors are the same!
However, the constant factor and overhead are still important. If your application ensures that N never gets very large, the asymptotic behavior of O(N^2) vs. O(N log N) doesn't come into play.
Insertion sort is simple and, for small lists, it is generally faster than a comparably implemented quicksort or mergesort. That is why a practical sort implementation will generally fall back on something like insertion sort for the "base case", instead of recursing all the way down to single elements.
It's a matter of the constants that are attached to the running time, which we ignore in big-O notation (because we are concerned with the order of growth). For insertion sort, the running time is O(n^2), i.e. T(n) <= c*n^2, whereas for Quicksort it is T(n) <= k*n*lg n. As c is quite small, for small n the running time of insertion sort is less than that of Quicksort.
Hope it helps...
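For a concrete feel with purely illustrative constants, take c = 1 and k = 10. Then c*n^2 < k*n*lg n exactly when n < 10*lg n, which holds up to roughly n = 58; below that size the quadratic insertion sort does fewer operations, and above it Quicksort takes over.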
A good real-world example of insertion sort being used in conjunction with quicksort is the implementation of the qsort function from glibc.
The first thing to point out is that qsort implements the quicksort algorithm with an explicit stack because that consumes less memory; the stack is implemented through macro directives.
Summary of the current implementation from the source code (you'll find a lot of useful information in its comments if you take a look):
Non-recursive
Chose the pivot element using a median-of-three decision tree
Only quicksorts TOTAL_ELEMS / MAX_THRESH partitions, leaving insertion sort to order the MAX_THRESH items within each partition. This is a big win, since insertion sort is faster for small, mostly sorted array segments.
The larger of the two sub-partitions is always pushed onto the stack first
What does the MAX_THRESH value stand for? Well, just a small constant magic value which was chosen to work best on a Sun 4/260.
How about binary insertion sort? You can find the insertion position using binary search.
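A hedged sketch of that idea (binary search cuts the comparisons to O(log i) per element, though shifting still costs O(i), so the worst case stays O(n^2)):

// Sketch of binary insertion sort: binary search finds the insertion position,
// then the elements above it are shifted one slot to the right.
function binaryInsertionSort(a) {
  for (let i = 1; i < a.length; i++) {
    const key = a[i];
    // binary search for the first position whose element is greater than key
    let lo = 0, hi = i;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (a[mid] <= key) lo = mid + 1; else hi = mid;
    }
    // shift a[lo..i-1] one slot right and place the key
    for (let j = i; j > lo; j--) a[j] = a[j - 1];
    a[lo] = key;
  }
  return a;
}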

Is amortization ever really desirable?

For instance, suppose I have an algorithm that's O(n) and an algorithm that's an amortized O(n). Is it fair to say that in strictly big oh terms, the non-amortized algorithm will always be as fast or faster than the amortized algorithm? Or is there ever any reason to prefer the amortized version (ignoring things like code simplicity or ease of implementation)?
Big O notation only tells you how your code scales, not how fast it will be for finite N.
The difference between amortized and non-amortized is important if you care about worst-case performance or latency (such as real-time or interactive systems). If you only care about average throughput, though, they're for all practical purposes the same. Amortized analysis is good enough even in some very performance-critical situations like scientific computing and large-scale data mining.
Or is there ever any reason to prefer the amortized version
Smaller constants.
The main difference between a O(n) algorithm and an amortized O(n) algorithm is that you don't know anything about the worst-case behavior of the amortized O(n) algorithm. In practice, that doesn't matter very often: If your algorithm is being run many times, then you can count on the law of averages to balance out a few bad cases, and if your algorithm isn't being run very many times, then it's unlikely that you'll ever hit the worst case at all.
The only cases where the word "amortized" should matter are cases where you can't accept occasional bad performance for some reason. For example, in GUI applications, you would happily give up some average-case performance in return for a guarantee that you'll never bog down and stop responding while the user is sitting there and getting bored. In such applications, you want to make sure that even the worst-case behavior is good for any code that could cause the GUI to stop responding.
Still, most of the time, you don't need to worry about amortized O(n) versus O(n), and you can worry instead about what the constant factors are (as others have already said).
Big O notation tells you about how your algorithm changes with growing input.
It is not a shortcut for profiling your code.
It is possible that the O(n^2) algorithm is the better choice for all n that occur in your program, because the constant hidden in the O(n) algorithm is larger.
So your choice of algorithms really depends on which algorithm is faster for your input size. I guess the answer to your question is to profile both algorithms in your program and then decide which is faster.
A classic example of the need for amortized analysis is std::vector, for which appending (push_back) is amortized O(1).
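A minimal sketch of the doubling strategy behind that amortized O(1) append (purely illustrative in JavaScript, whose arrays already grow automatically):

// Sketch of the growth strategy behind structures like std::vector:
// when capacity runs out, allocate double and copy. Individual pushes are
// occasionally O(n), but n pushes cost O(n) in total.
class DynArray {
  constructor() {
    this.capacity = 4;
    this.length = 0;
    this.data = new Array(this.capacity);
  }
  push(value) {
    if (this.length === this.capacity) {
      this.capacity *= 2;                       // the occasional expensive step
      const bigger = new Array(this.capacity);
      for (let i = 0; i < this.length; i++) bigger[i] = this.data[i];
      this.data = bigger;
    }
    this.data[this.length++] = value;           // the common O(1) step
  }
}

The total copying work across n pushes is bounded by a geometric series and stays under 2n element moves, which is why the occasional O(n) resize averages out to O(1) per push.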
Some reasons to use amortized algorithms:
Much more efficient average case.
Easier Implementation.
No worst case guarantee algorithm exists.
Strictly speaking, big-O isn't a precise enough measure to be able to say that an algorithm with O(n) is faster than one with amortized O(n). Think about it: if you drop down a level of fidelity in your complexity analysis, the constant factors may be significantly different and make the amortized version faster.
Also, the impact of amortization tends to depend on use. For example, if you're using a hash table, the impact of the amortization will be largely dependent on your ratio of get to add operations. So if you add 1000 entries, then do a billion lookups, the fact that you had to rehash a couple times doesn't make much of a difference. But if you're constantly adding entries the cost of rehashing might add up.
That's why amortization is different than worst case. Big-O reflects the worst case, while amortization lets you say "one in every x will take a hit, and x is sufficiently large that it doesn't matter." Also, if you consider examples like insertion into a hash table, x grows according to some constant. So if you have a hash table that starts with 100 buckets and doubles every rehash, then the rehashes are asymptotically going to get farther and farther apart. In addition, the absolute worst case complexity for an algorithm with an amortized complexity is dependent on previous calls, while in measures like average case the calls are considered independent.

Why is quicksort used in practice? [closed]

Quicksort has a worst-case performance of O(n^2), but is still used widely in practice anyway. Why is this?
You shouldn't center only on worst case and only on time complexity. It's more about average than worst, and it's about time and space.
Quicksort:
has average time complexity of Θ(n log n);
can be implemented with space complexity of Θ(log n);
Also take into account that big-O notation doesn't consider constants, but in practice it does make a difference if an algorithm is a few times faster. Θ(n log n) means that the algorithm executes in roughly K*n*log(n) steps, where K is a constant. Quicksort is the comparison-sort algorithm with the lowest K.
The average asymptotic order of Quicksort is O(n log n) and it's usually more efficient than heapsort due to smaller constants (tighter loops). In fact, there is a theoretical linear-time median selection algorithm that you could use to always find the best pivot, thus resulting in a worst case of O(n log n). However, the normal Quicksort is usually faster than this theoretical one.
To make it more concrete, consider the probability that Quicksort will finish in O(n^2). It's just 1/n!, which means it will almost never encounter that bad case.
Interestingly, quicksort performs more comparisons on average than mergesort - 1.44 n lg n (expected) for quicksort versus n lg n for mergesort. If all that mattered were comparisons, mergesort would be strongly preferable to quicksort.
The reason that quicksort is fast is that it has many other desirable properties that work extremely well on modern hardware. For example, quicksort requires no dynamic allocations. It can work in-place on the original array, using only O(log n) stack space (worst-case if implemented correctly) to store the stack frames necessary for recursion. Although mergesort can be made to do this, doing so usually comes at a huge performance penalty during the merge step. Other sorting algorithms like heapsort also have this property.
Additionally, quicksort has excellent locality of reference. The partitioning step, if done using Hoare's in-place partitioning algorithm, is essentially two linear scans performed inward from both ends of the array. This means that quicksort will have a very small number of cache misses, which on modern architectures is critical for performance. Heapsort, on the other hand, doesn't have very good locality (it jumps around all over an array), though most mergesort implementations have reasonably locality.
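A sketch of that partitioning step (illustrative JavaScript; pivot choice and edge handling kept simple): the two indices scan inward from the ends and swap out-of-place elements, so each scan touches memory sequentially.

// Sketch of Hoare-style in-place partitioning: two indices scan inward from
// both ends of the range, swapping elements that are on the wrong side.
function hoarePartition(a, lo, hi) {
  const pivot = a[Math.floor((lo + hi) / 2)];
  let i = lo - 1, j = hi + 1;
  while (true) {
    do { i++; } while (a[i] < pivot);
    do { j--; } while (a[j] > pivot);
    if (i >= j) return j;                 // a[lo..j] <= pivot <= a[j+1..hi]
    [a[i], a[j]] = [a[j], a[i]];
  }
}

function quicksort(a, lo = 0, hi = a.length - 1) {
  if (lo < hi) {
    const p = hoarePartition(a, lo, hi);
    quicksort(a, lo, p);                  // note: p, not p - 1, with this scheme
    quicksort(a, p + 1, hi);
  }
}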
Quicksort is also very parallelizable. Once the initial partitioning step has occurred to split the array into smaller and greater regions, those two parts can be sorted independently of one another. Many sorting algorithms can be parallelized, including mergesort, but the performance of parallel quicksort tends to be better than other parallel algorithms for the above reason. Heapsort, on the other hand, does not.
The only issue with quicksort is the possibility that it degrades to O(n^2), which on large data sets can be very serious. One way to avoid this is to have the algorithm introspect on itself and switch to one of the slower but more dependable algorithms in the case where it degenerates. This algorithm, called introsort, is a great hybrid sorting algorithm that gets many of the benefits of quicksort without the pathological case.
In summary:
Quicksort is in-place except for the stack frames used in the recursion, which take O(log n) space.
Quicksort has good locality of reference.
Quicksort is easily parallelized.
This accounts for why quicksort tends to outperform sorting algorithms that on paper might be better.
Hope this helps!
Because on average it's the fastest comparison sort (in terms of elapsed time).
Because, in the general case, it's one of the fastest sorting algorithms.
In addition to being among the fastest though, some of its bad-case scenarios can be avoided by shuffling the array before sorting it. As for its weakness with small data sets, that obviously isn't as big a problem since the data sets are small and the sort time is probably small regardless.
As an example, I wrote a python function for QuickSort and bubble sorts. The bubble sort takes ~20 seconds to sort 10,000 records, 11 seconds for 7500, and 5 for 5000. The quicksort does all these sorts in around 0.15 seconds!
It might be worth pointing out that C does have the library function qsort(), but there's no requirement that it be implemented using an actual QuickSort, that is up to the compiler vendor.
Because it is an algorithm that works well on large data sets, with O(N log N) average complexity. It is also an in-place algorithm that takes only a small amount of extra space. By selecting the pivot element wisely we can avoid the common worst cases of Quicksort (such as already-sorted arrays) and get O(N log N) behaviour in practice.

Resources