Using red-black trees for sorting

The worst-case running time of insertion into a red-black tree is O(lg n), and an in-order walk visits every node, so the total worst-case time to produce the sorted collection is O(n lg n).
I am curious why red-black trees are not preferred for sorting over quicksort (whose average-case running time is O(n lg n)).
I suspect it may be because red-black trees do not sort in place, but I am not sure, so maybe someone could help.
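For concreteness, here is a minimal sketch of the tree-sort idea in the question, using a plain (unbalanced) binary search tree; an actual red-black tree would add the usual recoloring and rotations on insert to guarantee the O(lg n) bound, which is omitted here.

// Sketch: sort by inserting into a BST, then doing an in-order walk.
// NOTE: this tree is not balanced; a red-black tree would rebalance on
// insert so that each insert stays O(lg n) even for already-sorted input.
function insert(node, key) {
  if (node === null) return { key, left: null, right: null };
  if (key < node.key) node.left = insert(node.left, key);
  else node.right = insert(node.right, key);
  return node;
}

function inorder(node, out) {
  if (node === null) return out;
  inorder(node.left, out);   // smaller keys first
  out.push(node.key);
  inorder(node.right, out);  // then larger keys
  return out;
}

function treeSort(arr) {
  let root = null;
  for (const x of arr) root = insert(root, x);  // n inserts
  return inorder(root, []);                     // O(n) walk
}

console.log(treeSort([3, 1, 4, 1, 5, 9, 2, 6]));  // [1, 1, 2, 3, 4, 5, 6, 9]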

Which sort algorithm performs better really depends on your data and situation.
If you are talking in general/practical terms,
quicksort (the variant where you select the pivot randomly or just pick a fixed one, making the worst case Omega(n^2)) might be better than red-black trees because (not necessarily in order of importance):
Quicksort is in-place. This keeps your memory footprint low. Say this quicksort routine is part of a program which deals with a lot of data. If you kept using large amounts of memory, your OS could start swapping your process memory and thrash your performance.
Quicksort's memory accesses are localized. This plays well with caching and swapping.
Quicksort can be easily parallelized (probably more relevant these days).
If you were to try to optimize binary tree sorting (using a binary tree without balancing) by using an array instead, you would end up doing something like quicksort!
Red-black trees have memory overheads. You have to allocate nodes, possibly many times, and your memory requirements with trees are double or triple those of arrays.
After sorting, say you want the 1045th (say) element: you will need to maintain order statistics in your tree (extra memory cost because of this) and you will have O(log n) access time!
Red-black trees have overhead just to access the next element (pointer lookups).
Red-black trees do not play well with the cache, and the pointer accesses could induce more swapping.
Rotation in red-black trees increases the constant factor in the O(n log n).
Perhaps the most important reason (though not valid if you have a library etc. available): quicksort is very simple to understand and implement. Even a school kid can understand it!
I would say try to measure both implementations and see what happens!
Also, Bob Sedgewick did a thesis on quicksort! It might be worth reading.
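For reference, a minimal sketch of the in-place, random-pivot quicksort being described (Lomuto partitioning; a production version would also add a small-array cutoff and smarter pivot selection):

// Sketch: in-place quicksort with a randomly chosen pivot.
function quicksort(a, lo = 0, hi = a.length - 1) {
  if (lo >= hi) return a;
  // A random pivot makes the O(n^2) worst case very unlikely on real input.
  const r = lo + Math.floor(Math.random() * (hi - lo + 1));
  [a[r], a[hi]] = [a[hi], a[r]];
  // Lomuto partition: everything < pivot ends up left of index i.
  const pivot = a[hi];
  let i = lo;
  for (let j = lo; j < hi; j++) {
    if (a[j] < pivot) {
      [a[i], a[j]] = [a[j], a[i]];
      i++;
    }
  }
  [a[i], a[hi]] = [a[hi], a[i]];  // put the pivot in its final place
  quicksort(a, lo, i - 1);
  quicksort(a, i + 1, hi);
  return a;
}

console.log(quicksort([5, 3, 8, 1, 9, 2]));  // [1, 2, 3, 5, 8, 9]

Note that everything happens inside the one array: the only extra memory is the recursion stack, which is where the low footprint and good locality mentioned above come from.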

There are plenty of sorting algorithms which are worst-case O(n log n) - for example, merge sort. The reason quicksort is preferred is that it is faster in practice, even though algorithmically it may not be as good as some other algorithms.
Often built-in sorts use a combination of various methods depending on the value of n.
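As an illustration of one of those guaranteed O(n log n) algorithms, here is a minimal top-down merge sort sketch; note the extra arrays it allocates, which is one of the practical costs quicksort avoids:

// Sketch: top-down merge sort. Worst case O(n log n), but not in-place.
function mergeSort(a) {
  if (a.length <= 1) return a;
  const mid = a.length >> 1;
  return merge(mergeSort(a.slice(0, mid)), mergeSort(a.slice(mid)));
}

function merge(left, right) {
  const out = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    // <= keeps equal keys in their original order, so the sort is stable.
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return out.concat(left.slice(i), right.slice(j));
}

console.log(mergeSort([5, 3, 8, 1, 9, 2]));  // [1, 2, 3, 5, 8, 9]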

There are many cases where red-black trees are not bad for sorting. My testing, compared against natural merge sort, showed that red-black trees excel where:
Trees are better for duplicates:
In all the tests where duplicates need to be eliminated, the tree algorithm is better. This is not astonishing, since the tree can be kept very small from the beginning, whereas algorithms designed for in-place array sorting may pass around larger segments for a longer time.
Trees are better for random data:
In all the tests with random data, the tree algorithm is better. This is also not astonishing, since in a tree the distance between elements is shorter and shifting is not necessary. So repeatedly inserting into a tree can take less effort than sorting an array.
So we get the impression that natural merge sort only excels in the ascending and descending special cases, which can't even be said for quicksort.
Gist with the test cases here.
P.S.: it should be noted that using trees for sorting is non-trivial. One has to provide not only an insert routine but also a routine that can linearize the tree back into an array. We are currently using a get_last and a predecessor routine, which doesn't need a stack. But these routines are not O(1), since they contain loops.
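As a rough illustration of what such a linearization can look like (this assumes the nodes carry parent pointers; getLast and predecessor below are only sketches of the kind of routines named above, not the poster's actual code):

// Sketch: build a BST whose nodes keep parent pointers, then linearize it
// into an array without using a stack, walking backwards via predecessors.
function insert(root, key) {
  const node = { key, left: null, right: null, parent: null };
  if (root === null) return node;
  let cur = root;
  while (true) {
    if (key < cur.key) {
      if (cur.left === null) { cur.left = node; node.parent = cur; break; }
      cur = cur.left;
    } else {
      if (cur.right === null) { cur.right = node; node.parent = cur; break; }
      cur = cur.right;
    }
  }
  return root;
}

function getLast(root) {
  let n = root;
  while (n.right !== null) n = n.right;     // rightmost node = largest key
  return n;
}

function predecessor(n) {
  if (n.left !== null) {                    // largest key in the left subtree
    let p = n.left;
    while (p.right !== null) p = p.right;
    return p;
  }
  let p = n.parent;                         // otherwise climb until we came from a right child
  while (p !== null && n === p.left) { n = p; p = p.parent; }
  return p;
}

function toArray(root, size) {
  const out = new Array(size);
  let n = getLast(root);
  for (let i = size - 1; i >= 0; i--) {     // fill from the back: ascending order
    out[i] = n.key;
    n = predecessor(n);
  }
  return out;
}

let root = null;
const data = [4, 2, 7, 1, 3];
for (const x of data) root = insert(root, x);
console.log(toArray(root, data.length));    // [1, 2, 3, 4, 7]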

Big-O time complexity measures do not usually take into account scalar factors, e.g., O(2n) and O(4n) are usually just reduced to O(n). Time complexity analysis is based on operational steps at an algorithmic level, not at a strict programming level, i.e., no source code or native machine instruction considerations.
Quicksort is generally faster than tree-based sorting since (1) the methods have the same algorithmic average time complexity, and (2) lookup and swapping operations require fewer program instructions and data accesses when working with simple arrays than with red-black trees, even if the tree uses an underlying array-based implementation. Maintaining the red-black tree constraints requires additional operational steps and data field storage/access (node colors), etc., beyond the simple array partition-exchange steps of a quicksort.
The net result is that red-black trees have higher scalar coefficients than quicksort, and these are obscured by the standard O(n log n) average time complexity analysis result.
Some other practical considerations related to machine architectures are briefly discussed in the Quicksort article on Wikipedia.

Generally, the running time of an O(n lg n) algorithm can be expanded to A*n*lg n + B, where A and B are constants. There are many algorithmic proofs showing that the coefficients for quicksort are smaller than those of other algorithms. That holds in the best case, though - a naive quicksort performs horribly on already-sorted data.

In my opinion, the best way to explain the difference between all the sorting routines is this.
(My answer is for people who are confused about how quicksort is faster in practice than other sorting algorithms.)
Imagine you are running on a very slow computer:
One comparison operation takes 1 hour.
One shifting operation takes 2 hours.
(I am using hours just to make people understand how important time is.)
Of all the sorting algorithms, quicksort performs very few comparisons and very little swapping of elements.
That is the main reason quicksort is faster.

Related

Are there any cases where you would prefer a higher big-O time complexity algorithm over the lower one?

Are there any cases where you would prefer O(log n) time complexity to O(1) time complexity? Or O(n) to O(log n)?
Do you have any examples?
There can be many reasons to prefer an algorithm with higher big O time complexity over the lower one:
most of the time, lower big-O complexity is harder to achieve and requires skilled implementation, a lot of knowledge and a lot of testing.
big-O hides the details about constants: an algorithm that runs in 10^5 steps is better from a big-O point of view than one that runs in 1/10^5 * log(n) steps (O(1) vs O(log n)), but for any reasonable n the O(log n) one will actually finish first. For example, the best known complexity for matrix multiplication is O(n^2.373), but the constant is so high that (to my knowledge) no computational libraries use it.
big-O makes sense when you calculate over something big. If you need to sort an array of three numbers, it matters very little whether you use an O(n*log(n)) or an O(n^2) algorithm.
sometimes the advantage of the lower time complexity can be really negligible. For example, there is a data structure, the tango tree, which gives O(log log N) time complexity to find an item, but there is also a binary tree which finds the same in O(log n). Even for huge numbers like n = 10^20 the difference is negligible.
time complexity is not everything. Imagine an algorithm that runs in O(n^2) time and requires O(n^2) memory. It might be preferable to O(n^3) time and O(1) space when n is not really big. The problem is that once n grows, you can wait a long time for the slower algorithm, but I highly doubt you can find RAM big enough to run the memory-hungry one.
parallelization is a good feature in our distributed world. There are algorithms that are easily parallelizable, and there are some that do not parallelize at all. Sometimes it makes sense to run an algorithm on 1000 commodity machines with a higher complexity than using one machine with a slightly better complexity.
in some places (security) a certain complexity can be a requirement. No one wants a hash algorithm that hashes blazingly fast (because then other people can brute-force you way faster).
although this is not a switch of complexity class, some security functions should be written in a manner that prevents timing attacks. They mostly stay in the same complexity class but are modified so that they always take the worst case to do something. One example is comparing strings for equality. In most applications it makes sense to break out early if the first bytes differ, but in security you will still wait until the very end to tell the bad news.
somebody patented the lower-complexity algorithm and it is more economical for a company to use higher complexity than to pay money.
some algorithms adapt well to particular situations. Insertion sort, for example, has an average time complexity of O(n^2), worse than quicksort or mergesort, but as an online algorithm it can efficiently sort a list of values as they are received (as user input), whereas most other algorithms can only operate efficiently on a complete list of values.
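A tiny sketch of that online use: the list is kept sorted as each value arrives, with a binary search finding the insertion point and the splice doing the shift that makes each insert O(n) in the worst case.

// Sketch: incremental ("online") insertion sort - the list is sorted at
// all times, so it can be displayed or queried between arrivals.
function insertSorted(sorted, value) {
  let lo = 0, hi = sorted.length;
  while (lo < hi) {                 // binary search for the insertion point
    const mid = (lo + hi) >> 1;
    if (sorted[mid] < value) lo = mid + 1;
    else hi = mid;
  }
  sorted.splice(lo, 0, value);      // O(n) shift in the worst case
  return sorted;
}

const live = [];
for (const v of [42, 7, 19, 3]) {
  insertSorted(live, v);
  console.log(live);                // always sorted after each arrival
}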
There is always the hidden constant, which can be lower on the O(log n) algorithm. So it can work faster in practice for real-life data.
There are also space concerns (e.g. running on a toaster).
There's also developer time concern - O(log n) may be 1000× easier to implement and verify.
I'm surprised nobody has mentioned memory-bound applications yet.
There may be an algorithm that has fewer floating point operations, either due to its complexity (i.e. O(1) < O(log n)) or because the constant in front of the complexity is smaller (i.e. 2n^2 < 6n^2). Regardless, you might still prefer the algorithm with more FLOPs if the lower-FLOP algorithm is more memory-bound.
What I mean by "memory-bound" is that you are often accessing data that is constantly out of cache. In order to fetch this data, you have to pull it from your actual memory space into your cache before you can perform your operation on it. This fetching step is often quite slow - much slower than your operation itself.
Therefore, if your algorithm requires more operations (yet these operations are performed on data that is already in cache [and therefore no fetching required]), it will still out-perform your algorithm with fewer operations (which must be performed on out-of-cache data [and therefore require a fetch]) in terms of actual wall-time.
In contexts where data security is a concern, a more complex algorithm may be preferable to a less complex algorithm if the more complex algorithm has better resistance to timing attacks.
Alistra nailed it but failed to provide any examples so I will.
You have a list of 10,000 UPC codes for what your store sells: a 10-digit UPC, an integer for the price (in pennies), and 30 characters of description for the receipt.
O(log N) approach: You have a sorted list. 44 bytes per record if ASCII, 84 if Unicode. Alternately, treat the UPC as an int64 and you get 42 and 72 bytes. 10,000 records - in the highest case you're looking at a bit under a megabyte of storage.
O(1) approach: Don't store the UPC; instead use it as an index into an array. In the lowest case you're looking at almost a third of a terabyte of storage.
Which approach you use depends on your hardware. On almost any reasonable modern configuration you're going to use the log N approach. I can picture the second approach being the right answer if for some reason you're running in an environment where RAM is critically short but you have plenty of mass storage. A third of a terabyte on a disk is no big deal, and getting your data in one probe of the disk is worth something. The simple binary approach takes 13 probes on average. (Note, however, that by clustering your keys you can get this down to a guaranteed 3 reads, and in practice you would cache the first one.)
Consider a red-black tree. It has access, search, insert, and delete of O(log n). Compare to an array, which has access of O(1) and the rest of the operations are O(n).
So given an application where we insert, delete, or search more often than we access and a choice between only these two structures, we would prefer the red-black tree. In this case, you might say we prefer the red-black tree's more cumbersome O(log n) access time.
Why? Because the access is not our overriding concern. We are making a trade off: the performance of our application is more heavily influenced by factors other than this one. We allow this particular algorithm to suffer performance because we make large gains by optimizing other algorithms.
So the answer to your question is simply this: when the algorithm's growth rate isn't what we want to optimize, when we want to optimize something else. All of the other answers are special cases of this. Sometimes we optimize the run time of other operations. Sometimes we optimize for memory. Sometimes we optimize for security. Sometimes we optimize maintainability. Sometimes we optimize for development time. Even the overriding constant being low enough to matter is optimizing for run time when you know the growth rate of the algorithm isn't the greatest impact on run time. (If your data set was outside this range, you would optimize for the growth rate of the algorithm because it would eventually dominate the constant.) Everything has a cost, and in many cases, we trade the cost of a higher growth rate for the algorithm to optimize something else.
Yes.
In a real case, we ran some tests on doing table lookups with both short and long string keys.
We used a std::map, a std::unordered_map with a hash that samples at most 10 places over the length of the string (our keys tend to be GUID-like, so this is decent) and a hash that samples every character (in theory fewer collisions), an unsorted vector where we do an == compare, and (if I remember correctly) an unsorted vector where we also store a hash - first compare the hash, then compare the characters.
These algorithms range from O(1) (unordered_map) to O(n) (linear search).
For modest sized N, quite often the O(n) beat the O(1). We suspect this is because the node-based containers required our computer to jump around in memory more, while the linear-based containers did not.
O(lg n) exists between the two. I don't remember how it did.
The performance difference wasn't that large, and on larger data sets the hash-based one performed much better. So we stuck with the hash-based unordered map.
In practice, for reasonably sized n, O(lg n) is O(1). If your computer only has room for 4 billion entries in your table, then O(lg n) is bounded above by 32 (lg(2^32) = 32; in computer science, lg is shorthand for log base 2).
In practice, lg(n) algorithms are slower than O(1) algorithms not because of the logarithmic growth factor, but because the lg(n) portion usually means there is a certain level of complexity to the algorithm, and that complexity adds a larger constant factor than any of the "growth" from the lg(n) term.
However, complex O(1) algorithms (like hash mapping) can easily have a similar or larger constant factor.
The possibility to execute an algorithm in parallel.
I don't know if there is an example for the classes O(log n) and O(1), but for some problems, you choose an algorithm with a higher complexity class when the algorithm is easier to execute in parallel.
Some algorithms cannot be parallelized but have a very low complexity class. Consider another algorithm which achieves the same result and can be parallelized easily but has a higher complexity class. When executed on one machine, the second algorithm is slower, but when executed on multiple machines, the real execution time gets lower and lower, while the first algorithm cannot speed up.
Let's say you're implementing a blacklist on an embedded system, where numbers between 0 and 1,000,000 might be blacklisted. That leaves you two possible options:
Use a bitset of 1,000,000 bits
Use a sorted array of the blacklisted integers and use a binary search to access them
Access to the bitset has guaranteed constant time. In terms of time complexity, it is optimal, both from a theoretical and from a practical point of view (it is O(1) with an extremely low constant overhead).
Still, you might want to prefer the second solution, especially if you expect the number of blacklisted integers to be very small, as it will be more memory efficient.
And even if you do not develop for an embedded system where memory is scarce, I can just increase the arbitrary limit of 1,000,000 to 1,000,000,000,000 and make the same argument. Then the bitset would require about 125 GB of memory. Having a guaranteed worst-case complexity of O(1) might not convince your boss to provide you with such a powerful server.
Here, I would strongly prefer a binary search (O(log n)) or a binary tree (O(log n)) over the O(1) bitset. And probably a hash table, with its worst-case complexity of O(n), will beat all of them in practice.
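A small sketch of the two options for the original 1,000,000 range (the concrete numbers below just mirror the example above):

// Option 1: bitset - O(1) lookups, but always one bit per possible value.
const LIMIT = 1000000;
const bits = new Uint8Array(LIMIT / 8 + 1);         // ~125 KB regardless of list size
const blacklistBit = n => { bits[n >> 3] |= 1 << (n & 7); };
const isBlackBit = n => (bits[n >> 3] & (1 << (n & 7))) !== 0;

// Option 2: sorted array + binary search - O(log n), memory grows with the list.
const blacklist = [15, 42, 999, 31337].sort((a, b) => a - b);
function isBlackBinary(n) {
  let lo = 0, hi = blacklist.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (blacklist[mid] === n) return true;
    if (blacklist[mid] < n) lo = mid + 1; else hi = mid - 1;
  }
  return false;
}

blacklistBit(42);
console.log(isBlackBit(42), isBlackBinary(42));     // true true
console.log(isBlackBit(43), isBlackBinary(43));     // false false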
My answer here, Fast random weighted selection across all rows of a stochastic matrix, is an example where an algorithm with complexity O(m) is faster than one with complexity O(log(m)) when m is not too big.
A more general question is whether there are situations where one would prefer an O(f(n)) algorithm to an O(g(n)) algorithm even though g(n) << f(n) as n tends to infinity. As others have already mentioned, the answer is clearly "yes" in the case where f(n) = log(n) and g(n) = 1. It is sometimes yes even in the case that f(n) is polynomial but g(n) is exponential. A famous and important example is that of the simplex algorithm for solving linear programming problems. In the 1970s it was shown to be O(2^n). Thus, its worst-case behavior is infeasible. But its average-case behavior is extremely good, even for practical problems with tens of thousands of variables and constraints. In the 1980s, polynomial-time algorithms for linear programming (such as Karmarkar's interior-point algorithm) were discovered, but 30 years later the simplex algorithm still seems to be the algorithm of choice (except for certain very large problems). This is for the obvious reason that average-case behavior is often more important than worst-case behavior, but also for a more subtle reason: the simplex algorithm is in some sense more informative (e.g. sensitivity information is easier to extract).
People have already answered your exact question, so I'll tackle a slightly different question that people may actually be thinking of when coming here.
A lot of the "O(1) time" algorithms and data structures actually only take expected O(1) time, meaning that their average running time is O(1), possibly only under certain assumptions.
Common examples: hashtables, expansion of "array lists" (a.k.a. dynamically sized arrays/vectors).
In such scenarios, you may prefer to use data structures or algorithms whose time is guaranteed to be absolutely bounded logarithmically, even though they may perform worse on average.
An example might therefore be a balanced binary search tree, whose running time is worse on average but better in the worst case.
To put my 2 cents in:
Sometimes a worse-complexity algorithm is selected in place of a better one when the algorithm runs in a certain hardware environment. Suppose our O(1) algorithm non-sequentially accesses every element of a very big, fixed-size array to solve our problem. Then put that array on a mechanical hard drive, or a magnetic tape.
In that case, the O(log n) algorithm (suppose it accesses the disk sequentially) becomes more favourable.
There is a good use case for using a O(log(n)) algorithm instead of an O(1) algorithm that the numerous other answers have ignored: immutability. Hash maps have O(1) puts and gets, assuming good distribution of hash values, but they require mutable state. Immutable tree maps have O(log(n)) puts and gets, which is asymptotically slower. However, immutability can be valuable enough to make up for worse performance and in the case where multiple versions of the map need to be retained, immutability allows you to avoid having to copy the map, which is O(n), and therefore can improve performance.
Simply: Because the coefficient - the costs associated with setup, storage, and the execution time of each step - can be much, much larger with a smaller big-O problem than with a larger one. Big-O is only a measure of the algorithm's scalability.
Consider the following example from the Hacker's Dictionary, proposing a sorting algorithm relying on the Multiple Worlds Interpretation of Quantum Mechanics:
Permute the array randomly using a quantum process,
If the array is not sorted, destroy the universe.
All remaining universes are now sorted [including the one you are in].
(Source: http://catb.org/~esr/jargon/html/B/bogo-sort.html)
Notice that the big-O of this algorithm is O(n), which beats any known sorting algorithm to date on generic items. The coefficient of the linear step is also very low (since it's only a comparison, not a swap, that is done linearly). A similar algorithm could, in fact, be used to solve any problem in both NP and co-NP in polynomial time, since each possible solution (or possible proof that there is no solution) can be generated using the quantum process, then verified in polynomial time.
However, in most cases, we probably don't want to take the risk that Multiple Worlds might not be correct, not to mention that the act of implementing step 2 is still "left as an exercise for the reader".
At any point when n is bounded and the constant multiplier of O(1) algorithm is higher than the bound on log(n). For example, storing values in a hashset is O(1), but may require an expensive computation of a hash function. If the data items can be trivially compared (with respect to some order) and the bound on n is such that log n is significantly less than the hash computation on any one item, then storing in a balanced binary tree may be faster than storing in a hashset.
In a realtime situation where you need a firm upper bound you would select e.g. a heapsort as opposed to a Quicksort, because heapsort's average behaviour is also its worst-case behaviour.
Adding to the already good answers: a practical example would be hash indexes vs. B-tree indexes in the Postgres database.
Hash indexes form a hash table index to access the data on disk, while btree, as the name suggests, uses a B-tree data structure.
In Big-O terms these are O(1) vs O(log N).
Hash indexes are presently discouraged in Postgres since, in a real-life situation, particularly in database systems, achieving hashing without collisions is very hard (it can lead to an O(N) worst-case complexity), and because of this it is even harder to make them crash-safe (via write-ahead logging - WAL in Postgres).
This tradeoff is made in this situation since O(log N) is good enough for indexes, implementing O(1) is pretty hard, and the time difference would not really matter.
When n is small, and the O(1) algorithm is consistently slow.
When the "1" work unit in O(1) is very high relative to the work unit in O(log n) and the expected set size is small-ish. For example, it's probably slower to compute Dictionary hash codes than iterate an array if there are only two or three items.
or
When the memory or other non-time resource requirements in the O(1) algorithm are exceptionally large relative to the O(log n) algorithm.
when redesigning a program, a procedure is found that could be optimized to O(1) instead of O(lg N), but if it is not the bottleneck of the program and the O(1) algorithm is hard to understand, then you do not have to use the O(1) algorithm.
when O(1) needs much more memory than you can supply, while the time of the O(lg N) algorithm is acceptable.
This is often the case for security applications, where we want to design problems whose algorithms are slow on purpose in order to stop someone from obtaining an answer to a problem too quickly.
Here are a couple of examples off the top of my head.
Password hashing is sometimes made arbitrarily slow in order to make it harder to guess passwords by brute-force. This Information Security post has a bullet point about it (and much more).
Bitcoin uses a controllably slow problem for a network of computers to solve in order to "mine" coins. This allows the currency to be mined at a controlled rate by the collective system.
Asymmetric ciphers (like RSA) are designed to make decryption without the keys intentionally slow in order to prevent someone without the private key from cracking the encryption. The algorithms are designed to be cracked in hopefully O(2^n) time, where n is the bit-length of the key (this is brute force).
Elsewhere in CS, Quick Sort is O(n^2) in the worst case but in the general case is O(n*log(n)). For this reason, "Big O" analysis sometimes isn't the only thing you care about when analyzing algorithm efficiency.
There are plenty of good answers, a few of which mention the constant factor, the input size and memory constraints, among many other reasons complexity is only a theoretical guideline rather than the end-all determination of real-world fitness for a given purpose or speed.
Here's a simple, concrete example to illustrate these ideas. Let's say we want to figure out whether an array has a duplicate element. The naive quadratic approach is to write a nested loop:
const hasDuplicate = arr => {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) {
        return true;
      }
    }
  }
  return false;
};
console.log(hasDuplicate([1, 2, 3, 4]));
console.log(hasDuplicate([1, 2, 4, 4]));
But this can be done in linear time by creating a set data structure (i.e. removing duplicates), then comparing its size to the length of the array:
const hasDuplicate = arr => new Set(arr).size !== arr.length;
console.log(hasDuplicate([1, 2, 3, 4]));
console.log(hasDuplicate([1, 2, 4, 4]));
What Big O tells us is that the new Set approach will scale a great deal better from a time complexity standpoint.
However, it turns out that the "naive" quadratic approach has a lot going for it that Big O can't account for:
No additional memory usage
No heap memory allocation (no new)
No garbage collection for the temporary Set
Early bailout; in a case where the duplicate is likely to be near the front of the array, there's no need to check more than a few elements.
If our use case is bounded small arrays, if we have a resource-constrained environment, and/or if other known common-case properties allow us to establish through benchmarks that the nested loop is faster on our particular workload, it might be a good idea.
On the other hand, maybe the set can be created one time up-front and used repeatedly, amortizing its overhead cost across all of the lookups.
This leads inevitably to maintainability/readability/elegance and other "soft" costs. In this case, the new Set() approach is probably more readable, but it's just as often (if not more often) that achieving the better complexity comes at great engineering cost.
Creating and maintaining a persistent, stateful Set structure can introduce bugs, memory/cache pressure, code complexity, and all other manner of design tradeoffs. Negotiating these tradeoffs optimally is a big part of software engineering, and time complexity is just one factor to help guide that process.
A few other examples that I don't see mentioned yet:
In real-time environments, for example resource-constrained embedded systems, sometimes complexity sacrifices are made (typically related to caches and memory or scheduling) to avoid incurring occasional worst-case penalties that can't be tolerated because they might cause jitter.
Also in embedded programming, the size of the code itself can cause cache pressure, impacting memory performance. If an algorithm has worse complexity but will result in massive code size savings, that might be a reason to choose it over an algorithm that's theoretically better.
In most implementations of recursive linearithmic algorithms like quicksort, when the array is small enough, a quadratic sorting algorithm like insertion sort is often called because the overhead of recursive function calls on increasingly tiny arrays tends to outweigh the cost of nested loops. Insertion sort is also fast on mostly-sorted arrays as the inner loop won't run much. This answer discusses this in an older version of Chrome's V8 engine before they moved to Timsort.

Hash Tables vs Trees

Are hashtables always faster than trees? Hashtables have O(1) search complexity, but suppose that, due to a badly designed hash function, a lot of collisions happen, and we handle collisions using a chained structure (say a balanced tree); then the worst-case running time for search would be O(log n). So can I conclude that for big or small data sets, even in worst-case scenarios, hash tables will always be faster than trees? Also, if I have ample memory and I don't want range searches, can I always go for a hash table?
Are hashtables always faster than trees?
No, not always. It depends on many things, such as the size of the collection, the hash function, and, for some hash table implementations, the number of delete operations.
Hash tables are O(1) per operation on average - but this is not always the case. They might be O(n) in the worst case.
Some reasons I can think of at the moment to prefer trees:
Ordering is important. [Hash tables do not maintain order; a BST is sorted by definition.]
Latency is an issue - and you cannot afford the O(n) that might occur. [This might be critical for real-time systems.]
The data might be "similar" with respect to your hash function, and many elements hashing to the same locations [collisions] is not improbable. [This can sometimes be solved by using a different hash function.]
For relatively small collections, the hidden constant behind the hashtable's O(1) is often much higher than the tree's, and using a tree might be faster for small collections.
However, if the data is huge, latency is not an issue, and collisions are improbable, hash tables are asymptotically better than using a tree.
If, due to a badly designed hash function, a lot of collisions happen and we handle collisions using a chained structure (say a balanced tree), then the worst-case running time for a search is O(log n) with tree chaining (or O(n) with plain list chaining) - in either case no better than a balanced tree on its own. Therefore you cannot conclude that for big or small data sets, even in worst-case scenarios, hash tables will always be faster than trees.
Use a hashtable, and initialize it with the proper size. For example, if you use only half the space, there are very few collisions.
In the worst-case scenario you'll have O(n) time in hash tables. But this is billions of times less probable than the sun exploding right now, so when using a good hash function you can safely assume it works in O(1) unless the sun explodes.
On the other hand, the performance of both hash tables and trees can vary with the implementation, the language, and the phase of the moon, so the only good answer to this question is "Try both, think, and pick the better one".

Algorithms for Big O Analysis

What algorithms do you find have amazing (tough, strange) complexity analysis, in terms of both the resulting O notation and the uniqueness of the way they are analyzed?
I have (quite) a few examples:
The union-find data structure, which supports operations in (amortized) inverse Ackermann time. It's particularly nice because the data structure is incredibly easy to code (see the sketch after this list).
Splay trees, which are self-balancing binary trees (that is, no extra information is stored other than the BST - no red/black information). Amortized analysis was essentially invented to prove bounds for splay trees; splay trees run in amortized logarithmic time, but worst-case linear time. The proofs are cool.
Fibonacci heaps, which perform most of the priority queue operations in amortized constant time, thus improving the runtime of Dijkstra's algorithm and other problems. As with splay trees, there are slick "potential function" proofs.
Bernard Chazelle's algorithm for computing minimum spanning trees in linear times inverse Ackermann time. The algorithm uses soft heaps, a variant of the traditional priority queue, except that some "corruption" might occur and queries might not be answered correctly.
While on the topic of MSTs: an optimal algorithm has been given by Pettie and Ramachandran, but we don't know the running time!
Lots of randomized algorithms have interesting analyses. I'll only mention one example: Delaunay triangulation can be computed in expected O(n log n) time by incrementally adding points; the analysis is apparently intricate, though I haven't seen it.
Algorithms that use "bit tricks" can be neat, e.g. sorting in O(n log log n) time (and linear space) - that's right, it breaks the O(n log n) barrier by using more than just comparisons.
Cache-oblivious algorithms often have interesting analyses. For example, cache-oblivious priority queues (see page 3) use log log n levels of sizes n, n^(2/3), n^(4/9), and so on.
(Static) range-minimum queries on arrays are neat. The standard proof tests your limits with respect to reduction: range-minimum queries are reduced to least common ancestor in trees, which is in turn reduced to range-minimum queries on a specific kind of array. The final step uses a cute trick, too.
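To illustrate the union-find point above, here is a minimal sketch with path compression and union by rank, the combination that gives the inverse-Ackermann amortized bound:

// Sketch: disjoint-set union-find with path compression + union by rank.
class UnionFind {
  constructor(n) {
    this.parent = Array.from({ length: n }, (_, i) => i);
    this.rank = new Array(n).fill(0);
  }
  find(x) {
    if (this.parent[x] !== x) {
      this.parent[x] = this.find(this.parent[x]);  // path compression
    }
    return this.parent[x];
  }
  union(a, b) {
    const ra = this.find(a), rb = this.find(b);
    if (ra === rb) return false;
    if (this.rank[ra] < this.rank[rb]) this.parent[ra] = rb;        // attach lower rank under higher
    else if (this.rank[ra] > this.rank[rb]) this.parent[rb] = ra;
    else { this.parent[rb] = ra; this.rank[ra]++; }
    return true;
  }
}

const uf = new UnionFind(10);
uf.union(1, 2);
uf.union(2, 5);
console.log(uf.find(1) === uf.find(5));  // true
console.log(uf.find(1) === uf.find(7));  // false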
Ackermann's function.
This one is kinda simple but Comb Sort blows my mind a little.
http://en.wikipedia.org/wiki/Comb_sort
It is such a simple algorithm that for the most part it reads like an overly complicated bubble sort, but it is O(n*log n). I find that mildly impressive.
The plethora of algorithms for Fast Fourier Transforms is impressive too; the math that proves their validity is trippy and it was fun to try to prove a few on my own.
http://en.wikipedia.org/wiki/Fast_Fourier_transform
I can fairly easily understand the prime radix, multiple prime radix, and mixed radix algorithms, but one that works on sets whose size is prime is quite cool.
2D ordered search analysis is quite interesting. You've got a two-dimensional NxN array of numbers where each row is sorted left-to-right and each column is sorted top-down. The task is to find a particular number in the array.
The recursive algorithm - pick the element in the middle, compare it with the target number, discard a quarter of the array (depending on the result of the comparison), and apply the same procedure recursively to the remaining 3 quarters - is quite interesting to analyze.
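A sketch of that recursion, assuming the matrix is sorted as described; each step compares against the middle element of the current rectangle and keeps the three quadrants that could still contain the target:

// Sketch: search an NxN matrix whose rows and columns are both sorted.
// Each step discards roughly a quarter of the remaining rectangle.
function search2d(m, target) {
  function rec(r1, c1, r2, c2) {
    if (r1 > r2 || c1 > c2) return false;
    const rm = (r1 + r2) >> 1, cm = (c1 + c2) >> 1;
    const v = m[rm][cm];
    if (v === target) return true;
    if (target < v) {
      // Everything at row >= rm and col >= cm is >= v, so drop that quadrant.
      return rec(r1, c1, r2, cm - 1) || rec(r1, cm, rm - 1, c2);
    }
    // Everything at row <= rm and col <= cm is <= v, so drop that quadrant.
    return rec(rm + 1, c1, r2, c2) || rec(r1, cm + 1, rm, c2);
  }
  return rec(0, 0, m.length - 1, m.length - 1);
}

const grid = [
  [1,  4,  7, 11],
  [2,  5,  8, 12],
  [3,  6,  9, 16],
  [10, 13, 14, 17],
];
console.log(search2d(grid, 9));   // true
console.log(search2d(grid, 15));  // false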
Non-deterministic polynomial complexity gets my vote, especially with the (admittedly considered unlikely) possibility that it may turn out to be the same as polynomial. In the same vein, anything that can theoretically benefit from quantum computing (N.B. this set is by no means all algorithms).
The other that would get my vote would be common mathematical operations on arbitrary-precision numbers - this is where you have to consider things like multiplying big numbers being more expensive than multiplying small ones. There is quite a lot of analysis of this in Knuth (which shouldn't be news to anyone). Karatsuba's method is pretty neat: cut the two factors in half by digit, (A1;A2) and (B1;B2), and then only three half-size multiplications are needed - A1*B1, A2*B2 and (A1+A2)*(B1+B2) - since the cross term A1*B2 + A2*B1 equals (A1+A2)*(B1+B2) - A1*B1 - A2*B2. Combine the results, and recurse if desired...
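A small sketch of that three-multiplication trick using JavaScript BigInts (splitting on bits rather than decimal digits, and assuming non-negative inputs):

// Sketch: Karatsuba multiplication of non-negative BigInts.
// a = a1*2^h + a0, b = b1*2^h + b0; only three recursive products are needed.
function karatsuba(a, b) {
  const bits = Math.max(a.toString(2).length, b.toString(2).length);
  if (bits <= 32) return a * b;               // small enough: multiply directly
  const h = BigInt(bits >> 1);
  const mask = (1n << h) - 1n;
  const a1 = a >> h, a0 = a & mask;
  const b1 = b >> h, b0 = b & mask;
  const p1 = karatsuba(a1, b1);               // high halves
  const p0 = karatsuba(a0, b0);               // low halves
  const pm = karatsuba(a1 + a0, b1 + b0);     // (a1+a0)(b1+b0)
  const mid = pm - p1 - p0;                   // = a1*b0 + a0*b1
  return (p1 << (2n * h)) + (mid << h) + p0;
}

const x = 123456789012345678901234567890n;
const y = 987654321098765432109876543210n;
console.log(karatsuba(x, y) === x * y);       // true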
Shell sort. There are tons of variants with various increments, most of which have no benefits except to make the complexity analysis simpler.

Is there ever a good reason to use Insertion Sort?

For general-purpose sorting, the answer appears to be no, as quick sort, merge sort and heap sort tend to perform better in the average- and worst-case scenarios. However, insertion sort appears to excel at incremental sorting, that is, adding elements to a list one at a time over an extended period of time while keeping the list sorted, especially if the insertion sort is implemented as a linked list (O(log n) average case vs. O(n)). However, a heap seems to be able to perform just (or nearly) as well for incremental sorting (adding or removing a single element from a heap has a worst-case scenario of O(log n)). So what exactly does insertion sort have to offer over other comparison-based sorting algorithms or heaps?
From http://www.sorting-algorithms.com/insertion-sort:
Although it is one of the elementary sorting algorithms with O(n^2) worst-case time, insertion sort is the algorithm of choice either when the data is nearly sorted (because it is adaptive) or when the problem size is small (because it has low overhead).
For these reasons, and because it is also stable, insertion sort is often used as the recursive base case (when the problem size is small) for higher-overhead divide-and-conquer sorting algorithms, such as merge sort or quick sort.
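A minimal sketch of that base-case pattern - a quicksort that hands small subarrays to insertion sort (the cutoff of 16 is an arbitrary illustrative value; real libraries tune it, and use better pivot selection than the plain last element chosen here for brevity):

// Sketch: quicksort that switches to insertion sort on small subarrays.
const CUTOFF = 16;

function insertionSort(a, lo, hi) {
  for (let i = lo + 1; i <= hi; i++) {
    const v = a[i];
    let j = i - 1;
    while (j >= lo && a[j] > v) { a[j + 1] = a[j]; j--; }  // shift larger items right
    a[j + 1] = v;
  }
}

function hybridSort(a, lo = 0, hi = a.length - 1) {
  if (hi - lo < CUTOFF) {            // small problem: low-overhead insertion sort
    insertionSort(a, lo, hi);
    return a;
  }
  const pivot = a[hi];
  let i = lo;
  for (let j = lo; j < hi; j++) {
    if (a[j] < pivot) { [a[i], a[j]] = [a[j], a[i]]; i++; }
  }
  [a[i], a[hi]] = [a[hi], a[i]];
  hybridSort(a, lo, i - 1);
  hybridSort(a, i + 1, hi);
  return a;
}

console.log(hybridSort([9, 4, 7, 1, 8, 2, 6, 3, 5, 0]));  // [0, 1, 2, ..., 9]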
An important concept in the analysis of algorithms is asymptotic analysis. In the case of two algorithms with different asymptotic running times, such as one O(n^2) and one O(n log n), as is the case with insertion sort and quicksort respectively, it is not definite that one is faster than the other for every input.
The important distinction with this sort of analysis is that for sufficiently large N, one algorithm will be faster than the other. When analyzing an algorithm down to a term like O(n log n), you drop constants. When realistically analyzing the running of an algorithm, those constants will be important only in situations of small n.
So what does this mean? It means that for certain small n, some algorithms are faster. This article from EmbeddedGurus.net includes an interesting perspective on choosing different sorting algorithms in the case of a limited-space (16k) and limited-memory system. Of course, the article references only sorting a list of 20 integers, so larger orders of n are irrelevant. Shorter code and less memory consumption (as well as avoiding recursion) were ultimately the more important considerations.
Insertion sort has low overhead, it can be written fairly succinctly, and it has two key benefits: it is stable, and it is fairly fast when the input is nearly sorted.
Yes, there is a reason to use either an insertion sort or one of its variants.
The sorting alternatives (quick sort, etc.) of the other answers here make the assumption that the data is already in memory and ready to go.
But if you are attempting to read a large amount of data from a slower external source (say a hard drive), there is a large amount of time wasted, as the bottleneck is clearly the data channel or the drive itself. It just cannot keep up with the CPU. A natural series of waits occurs during any read. These waits are wasted CPU cycles unless you use them to sort as you go.
For instance, if you were to make your solution to this be the following:
Read a ton of data in a dedicated loop into memory
Sort that data
You would very likely take longer than if you did the following in two threads.
Thread A:
Read a datum
Place datum into FIFO queue
(Repeat until data exhausted from drive)
Thread B:
Get a datum from the FIFO queue
Insert it into the proper place in your sorted list
(repeat until queue empty AND Thread A says "done").
...the above will allow you to use the otherwise wasted time. Note: Thread B does not impede Thread A's progress.
By the time the data is fully read, it will have been sorted and ready for use.
Most sorting procedures will use quicksort and then insertion sort for very small data sets.
If you're talking about maintaining a sorted list, there is no advantage over some kind of tree, it's just slower.
Well, maybe it consumes less memory or is a simpler implementation.
Inserting into a sorted list involves a scan, which means that each insert is O(n); therefore sorting n items becomes O(n^2).
Inserting into a container such as a balanced tree is typically O(log n); therefore the sort is O(n log n), which is of course better.
But for small lists it hardly makes any difference. You might use an insert sort if you have to write it yourself without any libraries, the lists are small and/or you don't care about performance.
YES,
Insertion sort is better than Quick Sort on short lists.
In fact, an optimal quicksort has a size threshold at which it stops recursing, and then the entire (now nearly sorted) array is finished off with insertion sort over those below-threshold segments.
Also...
For maintaining a scoreboard, Binary Insertion Sort may be as good as it gets.
See this page.
For small array sizes, insertion sort outperforms quicksort.
Java 7 and Java 8 use dual-pivot quicksort to sort primitive data types.
Dual-pivot quicksort outperforms typical single-pivot quicksort. According to the dual-pivot quicksort algorithm:
For small arrays (length < 27), use the Insertion sort algorithm.
Choose two pivot...........
Definitely, insertion sort outperforms quicksort for smaller array sizes, and that is why you switch to insertion sort for arrays of length less than 27. The reason could be that there are no recursive calls in insertion sort.
Source: http://codeblab.com/wp-content/uploads/2009/09/DualPivotQuicksort.pdf

Why is quicksort better than mergesort?

I was asked this question during an interview. They're both O(nlogn) and yet most people use Quicksort instead of Mergesort. Why is that?
Quicksort has O(n^2) worst-case runtime and O(n log n) average-case runtime. However, it's superior to merge sort in many scenarios because many factors influence an algorithm's runtime, and, when taking them all together, quicksort wins out.
In particular, the often-quoted runtime of sorting algorithms refers to the number of comparisons or the number of swaps necessary to sort the data. This is indeed a good measure of performance, especially since it's independent of the underlying hardware design. However, other things - such as locality of reference (i.e. do we read lots of elements which are probably in cache?) - also play an important role on current hardware. Quicksort in particular requires little additional space and exhibits good cache locality, and this makes it faster than merge sort in many cases.
In addition, it's very easy to avoid quicksort's worst-case run time of O(n^2) almost entirely by using an appropriate choice of the pivot - such as picking it at random (this is an excellent strategy).
In practice, many modern implementations of quicksort (in particular libstdc++'s std::sort) are actually introsort, whose theoretical worst case is O(n log n), same as merge sort. It achieves this by limiting the recursion depth and switching to a different algorithm (heapsort) once it exceeds log n.
As many people have noted, the average case performance for quicksort is faster than mergesort. But this is only true if you are assuming constant time to access any piece of memory on demand.
In RAM this assumption is generally not too bad (it is not always true because of caches, but it is not too bad). However if your data structure is big enough to live on disk, then quicksort gets killed by the fact that your average disk does something like 200 random seeks per second. But that same disk has no trouble reading or writing megabytes per second of data sequentially. Which is exactly what mergesort does.
Therefore if data has to be sorted on disk, you really, really want to use some variation on mergesort. (Generally you quicksort sublists, then start merging them together above some size threshold.)
Furthermore if you have to do anything with datasets of that size, think hard about how to avoid seeks to disk. For instance this is why it is standard advice that you drop indexes before doing large data loads in databases, and then rebuild the index later. Maintaining the index during the load means constantly seeking to disk. By contrast if you drop the indexes, then the database can rebuild the index by first sorting the information to be dealt with (using a mergesort of course!) and then loading it into a BTREE datastructure for the index. (BTREEs are naturally kept in order, so you can load one from a sorted dataset with few seeks to disk.)
There have been a number of occasions where understanding how to avoid disk seeks has let me make data processing jobs take hours rather than days or weeks.
Actually, QuickSort is O(n^2). Its average-case running time is O(n log n), but its worst case is O(n^2), which occurs when you run it on a list that contains few unique items. Randomization takes O(n). Of course, this doesn't change its worst case; it just prevents a malicious user from making your sort take a long time.
QuickSort is more popular because it:
Is in-place (MergeSort requires extra memory linear in the number of elements to be sorted).
Has a small hidden constant.
"and yet most people use Quicksort instead of Mergesort. Why is that?"
One psychological reason that has not been given is simply that Quicksort is more cleverly named. ie good marketing.
Yes, quicksort with triple partitioning is probably one of the best general-purpose sort algorithms, but there's no getting over the fact that "Quick" sort sounds much more powerful than "Merge" sort.
As others have noted, the worst case of quicksort is O(n^2), while mergesort and heapsort stay at O(n log n). In the average case, however, all three are O(n log n); so for the vast majority of cases they're comparable.
What makes quicksort better on average is that the inner loop implies comparing several values with a single one, while in the other two both terms are different for each comparison. In other words, quicksort does half as many reads as the other two algorithms. On modern CPUs performance is heavily dominated by access times, so in the end quicksort ends up being a great first choice.
I'd like to add that of the three algorithms mentioned so far (mergesort, quicksort and heapsort) only mergesort is stable. That is, the order does not change for values which have the same key. In some cases this is desirable.
But, truth be told, in practical situations most people need only good average performance and quicksort is... quick =)
All sort algorithms have their ups and downs. See Wikipedia article for sorting algorithms for a good overview.
From the Wikipedia entry on Quicksort:
Quicksort also competes with mergesort, another recursive sort algorithm, but with the benefit of worst-case Θ(n log n) running time. Mergesort is a stable sort, unlike quicksort and heapsort, and can be easily adapted to operate on linked lists and very large lists stored on slow-to-access media such as disk storage or network-attached storage. Although quicksort can be written to operate on linked lists, it will often suffer from poor pivot choices without random access. The main disadvantage of mergesort is that, when operating on arrays, it requires Θ(n) auxiliary space in the best case, whereas the variant of quicksort with in-place partitioning and tail recursion uses only Θ(log n) space. (Note that when operating on linked lists, mergesort only requires a small, constant amount of auxiliary storage.)
Mu!
Quicksort is not better; it is well suited to a different kind of application than mergesort.
Mergesort is worth considering if speed is of the essence, bad worst-case performance cannot be tolerated, and extra space is available. [1]
You stated that «They're both O(n log n) […]». This is wrong. «Quicksort uses about n^2/2 comparisons in the worst case.» [1]
However, the most important property in my experience is the easy implementation of sequential access you can use while sorting, when using programming languages with the imperative paradigm.
[1] Sedgewick, Algorithms
I would like to add to the existing great answers some math about how QuickSort performs when diverging from best case and how likely that is, which I hope will help people understand a little better why the O(n^2) case is not of real concern in the more sophisticated implementations of QuickSort.
Outside of random access issues, there are two main factors that can impact the performance of QuickSort and they are both related to how the pivot compares to the data being sorted.
1) A small number of keys in the data. A dataset where all values are the same will sort in n^2 time on a vanilla 2-partition QuickSort because all of the values except the pivot location are placed on one side each time. Modern implementations address this with methods such as a 3-partition sort. These methods execute on a dataset of all-equal values in O(n) time. So using such an implementation means that an input with a small number of keys actually improves performance and is no longer a concern.
2) Extremely bad pivot selection can cause worst-case performance. In an ideal case, the pivot will always be such that 50% of the data is smaller and 50% of the data is larger, so that the input is broken in half during each iteration. This gives us n comparisons and swaps times log-2(n) recursions for O(n*log n) time.
How much does non-ideal pivot selection affect execution time?
Let's consider a case where the pivot is consistently chosen such that 75% of the data is on one side of the pivot. It's still O(n*logn) but now the base of the log has changed to 1/0.75 or 1.33. The relationship in performance when changing base is always a constant represented by log(2)/log(newBase). In this case, that constant is 2.4. So this quality of pivot choice takes 2.4 times longer than the ideal.
How fast does this get worse?
Not very fast until the pivot choice gets (consistently) very bad:
50% on one side: (ideal case)
75% on one side: 2.4 times as long
90% on one side: 6.6 times as long
95% on one side: 13.5 times as long
99% on one side: 69 times as long
As we approach 100% on one side the log portion of the execution approaches n and the whole execution asymptotically approaches O(n^2).
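Those multipliers fall straight out of the change-of-base formula above; a quick sketch to reproduce them:

// Slowdown of a consistently skewed split vs. a perfect 50/50 split:
// log(2) / log(1 / worseSideFraction)
const skewFactor = p => Math.log(2) / Math.log(1 / p);
for (const p of [0.75, 0.9, 0.95, 0.99]) {
  console.log(`${p} on one side: ${skewFactor(p).toFixed(1)}x the ideal time`);
}
// 0.75 -> 2.4x, 0.9 -> 6.6x, 0.95 -> 13.5x, 0.99 -> 69.0x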
In a naive implementation of QuickSort, cases such as a sorted array (for 1st element pivot) or a reverse-sorted array (for last element pivot) will reliably produce a worst-case O(n^2) execution time. Additionally, implementations with a predictable pivot selection can be subjected to DoS attack by data that is designed to produce worst case execution. Modern implementations avoid this by a variety of methods, such as randomizing the data before sort, choosing the median of 3 randomly chosen indexes, etc. With this randomization in the mix, we have 2 cases:
Small data set. Worst case is reasonably possible but O(n^2) is not catastrophic because n is small enough that n^2 is also small.
Large data set. Worst case is possible in theory but not in practice.
How likely are we to see terrible performance?
The chances are vanishingly small. Let's consider a sort of 5,000 values:
Our hypothetical implementation will choose a pivot using a median of 3 randomly chosen indexes. We will consider pivots that are in the 25%-75% range to be "good" and pivots that are in the 0%-25% or 75%-100% range to be "bad". If you look at the probability distribution using the median of 3 random indexes, each recursion has an 11/16 chance of ending up with a good pivot. Let us make 2 conservative (and false) assumptions to simplify the math:
Good pivots are always exactly at a 25%/75% split and operate at 2.4*ideal case. We never get an ideal split or any split better than 25/75.
Bad pivots are always worst case and essentially contribute nothing to the solution.
Our QuickSort implementation will stop at n=10 and switch to an insertion sort, so we require 22 25%/75% pivot partitions to break the 5,000 value input down that far. (10*1.333333^22 > 5000) Or, we require 4990 worst case pivots. Keep in mind that if we accumulate 22 good pivots at any point then the sort will complete, so worst case or anything near it requires extremely bad luck. If it took us 88 recursions to actually achieve the 22 good pivots required to sort down to n=10, that would be 4*2.4*ideal case or about 10 times the execution time of the ideal case. How likely is it that we would not achieve the required 22 good pivots after 88 recursions?
Binomial probability distributions can answer that, and the answer is about 10^-18. (n is 88, k is 21, p is 0.6875) Your user is about a thousand times more likely to be struck by lightning in the 1 second it takes to click [SORT] than they are to see that 5,000 item sort run any worse than 10*ideal case. This chance gets smaller as the dataset gets larger. Here are some array sizes and their corresponding chances to run longer than 10*ideal:
Array of 640 items: 10^-13 (requires 15 good pivot points out of 60 tries)
Array of 5,000 items: 10^-18 (requires 22 good pivots out of 88 tries)
Array of 40,000 items: 10^-23 (requires 29 good pivots out of 116)
Remember that this is with 2 conservative assumptions that are worse than reality. So actual performance is better yet, and the balance of the remaining probability is closer to ideal than not.
Finally, as others have mentioned, even these absurdly unlikely cases can be eliminated by switching to a heap sort if the recursion stack goes too deep. So the TLDR is that, for good implementations of QuickSort, the worst case does not really exist because it has been engineered out and execution completes in O(n*logn) time.
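For anyone who wants to check the order of magnitude, a quick sketch of that binomial tail computation with the parameters used above (88 tries, at most 21 good pivots, success probability 11/16 per try):

// Sketch: P(at most k successes in n trials), each with success probability p.
function binomialCdf(n, k, p) {
  let total = 0;
  let coeff = 1;                                     // C(n, 0)
  for (let i = 0; i <= k; i++) {
    total += coeff * Math.pow(p, i) * Math.pow(1 - p, n - i);
    coeff = coeff * (n - i) / (i + 1);               // C(n, i+1) from C(n, i)
  }
  return total;
}

console.log(binomialCdf(88, 21, 11 / 16));           // vanishingly small, on the order of 10^-18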
This is a common interview question: despite the better worst-case performance of merge sort, quicksort is considered better than merge sort, especially for large inputs. There are certain reasons why quicksort is better:
1- Auxiliary space: Quicksort is an in-place sorting algorithm. In-place sorting means no additional storage space is needed to perform the sort. Merge sort, on the other hand, requires a temporary array to merge the sorted subarrays and hence is not in-place.
2- Worst case: The worst case of quicksort, O(n^2), can be avoided by using randomized quicksort. It can easily be avoided with high probability by choosing the pivot well. Obtaining average-case behavior by choosing a good pivot improves the performance and makes it as efficient as merge sort.
3- Locality of reference: Quicksort in particular exhibits good cache locality and this makes it faster than merge sort in many cases, such as in a virtual-memory environment.
4- Tail recursion: Quicksort is tail-recursive while merge sort is not. A tail-recursive function is a function where the recursive call is the last thing executed by the function. Tail-recursive functions are considered better than non-tail-recursive functions, as tail recursion can be optimized by the compiler.
Quicksort is the fastest sorting algorithm in practice, but it has a number of pathological cases that can make it perform as badly as O(n^2).
Heapsort is guaranteed to run in O(n*ln(n)) and requires only constant additional storage. But there are many citations of real-world tests which show that heapsort is significantly slower than quicksort on average.
Quicksort is NOT better than mergesort. With O(n^2) (a worst case that rarely happens), quicksort is potentially far slower than the O(n log n) of merge sort. Quicksort has less overhead, so with small n and slow computers it is better. But computers are so fast today that the additional overhead of a mergesort is negligible, and the risk of a very slow quicksort far outweighs the insignificant overhead of a mergesort in most cases.
In addition, a mergesort leaves items with identical keys in their original order, a useful attribute.
Wikipedia's explanation is:
Typically, quicksort is significantly faster in practice than other Θ(nlogn) algorithms, because its inner loop can be efficiently implemented on most architectures, and in most real-world data it is possible to make design choices which minimize the probability of requiring quadratic time.
Quicksort
Mergesort
I think there are also issues with the amount of storage needed for Mergesort (which is Ω(n)) that quicksort implementations don't have. In the worst case, they are the same amount of algorithmic time, but mergesort requires more storage.
Why Quicksort is good?
QuickSort takes N^2 time in the worst case and N log N on average. The worst case occurs when the data is already sorted.
This can be mitigated by a random shuffle before sorting starts.
QuickSort doesn't take the extra memory that merge sort takes.
If the dataset is large and there are identical items, the complexity of quicksort is reduced by using a 3-way partition. The more identical items, the better the sort. If all items are identical, it sorts in linear time. [This is the default implementation in most libraries.]
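A sketch of that 3-way ("Dutch national flag") partitioning, which is what lets quicksort race through inputs full of equal keys:

// Sketch: quicksort with 3-way partitioning (fast when there are many duplicates).
function quicksort3(a, lo = 0, hi = a.length - 1) {
  if (lo >= hi) return a;
  const pivot = a[lo];
  let lt = lo, gt = hi, i = lo + 1;
  while (i <= gt) {
    if (a[i] < pivot)      { [a[lt], a[i]] = [a[i], a[lt]]; lt++; i++; }
    else if (a[i] > pivot) { [a[gt], a[i]] = [a[i], a[gt]]; gt--; }
    else                   { i++; }               // equal to pivot: leave it in the middle
  }
  // a[lo..lt-1] < pivot, a[lt..gt] == pivot, a[gt+1..hi] > pivot
  quicksort3(a, lo, lt - 1);
  quicksort3(a, gt + 1, hi);
  return a;
}

console.log(quicksort3([2, 3, 2, 2, 1, 2, 2]));   // [1, 2, 2, 2, 2, 2, 3]

The equal-to-pivot block in the middle is never touched again, so an all-identical input finishes after a single pass.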
Is Quicksort always better than Mergesort?
Not really.
Mergesort is stable but Quicksort is not. So if you need stability in output, you would use Mergesort. Stability is required in many practical applications.
Memory is cheap nowadays. So if extra memory used by Mergesort is not critical to your application, there is no harm in using Mergesort.
Note: In Java, Arrays.sort() uses quicksort for primitive data types and mergesort for object data types. Because objects already carry memory overhead, the little extra overhead added by mergesort may not be an issue from a performance point of view.
Reference: Watch the QuickSort videos of Week 3, Princeton Algorithms Course at Coursera
Unlike Merge Sort, Quick Sort doesn't use auxiliary space, whereas Merge Sort uses O(n) auxiliary space.
But Merge Sort has a worst-case time complexity of O(n log n), whereas the worst case of Quick Sort is O(n^2), which happens (with a naive pivot choice) when the array is already sorted.
The answer tilts slightly towards quicksort with respect to the changes brought by DualPivotQuickSort for primitive values. It is used in Java 7 to sort in java.util.Arrays:
It is proved that for the Dual-Pivot Quicksort the average number of comparisons is 2*n*ln(n), the average number of swaps is 0.8*n*ln(n), whereas classical Quicksort algorithm has 2*n*ln(n) and 1*n*ln(n) respectively. Full mathematical proof see in attached proof.txt and proof_add.txt files. Theoretical results are also confirmed by experimental counting of the operations.
You can find the Java 7 implementation here - http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/java/util/Arrays.java
Further Awesome Reading on DualPivotQuickSort - http://permalink.gmane.org/gmane.comp.java.openjdk.core-libs.devel/2628
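For a feel of the idea, here is a heavily simplified dual-pivot partition sketch (names are mine; the real JDK 7 implementation adds insertion-sort thresholds, special handling of runs of equal elements, and many other optimizations):

    final class DualPivotSketch {
        static void sort(int[] a, int lo, int hi) {
            if (lo >= hi) return;
            if (a[lo] > a[hi]) swap(a, lo, hi);
            int p = a[lo], q = a[hi];              // two pivots, p <= q
            int lt = lo + 1, gt = hi - 1, i = lo + 1;
            while (i <= gt) {
                if      (a[i] < p) swap(a, i++, lt++);   // goes to the "< p" block
                else if (a[i] > q) swap(a, i, gt--);     // goes to the "> q" block
                else               i++;                  // stays in the middle block [p, q]
            }
            swap(a, lo, --lt);                     // move pivot p into its final slot
            swap(a, hi, ++gt);                     // move pivot q into its final slot
            sort(a, lo, lt - 1);                   // elements < p
            sort(a, lt + 1, gt - 1);               // elements between p and q
            sort(a, gt + 1, hi);                   // elements > q
        }
        static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
    }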
In merge-sort, the general algorithm is:
Sort the left sub-array
Sort the right sub-array
Merge the 2 sorted sub-arrays
At the top level, merging the 2 sorted sub-arrays involves dealing with N elements.
One level below that, each iteration of step 3 involves dealing with N/2 elements, but you have to repeat this process twice. So you're still dealing with 2 * N/2 == N elements.
One level below that, you're merging 4 * N/4 == N elements, and so on. Every depth in the recursive stack involves merging the same number of elements, across all calls for that depth.
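A sketch in Java of those three steps (names are mine); the merge uses an O(n) auxiliary buffer, and across each recursion level the merges together touch all N elements, which is where the N log N total comes from:

    final class MergeSortSketch {
        static void sort(int[] a) {
            mergeSort(a, new int[a.length], 0, a.length - 1);
        }

        static void mergeSort(int[] a, int[] buf, int lo, int hi) {
            if (lo >= hi) return;
            int mid = (lo + hi) >>> 1;
            mergeSort(a, buf, lo, mid);        // 1. sort the left sub-array
            mergeSort(a, buf, mid + 1, hi);    // 2. sort the right sub-array
            merge(a, buf, lo, mid, hi);        // 3. merge the two sorted halves
        }

        static void merge(int[] a, int[] buf, int lo, int mid, int hi) {
            System.arraycopy(a, lo, buf, lo, hi - lo + 1);   // copy the range into the buffer
            int i = lo, j = mid + 1;
            for (int k = lo; k <= hi; k++) {
                if (i > mid)              a[k] = buf[j++];   // left half exhausted
                else if (j > hi)          a[k] = buf[i++];   // right half exhausted
                else if (buf[j] < buf[i]) a[k] = buf[j++];   // take the smaller element
                else                      a[k] = buf[i++];   // ties go left, keeping the sort stable
            }
        }
    }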
Consider the quick-sort algorithm instead:
Pick a pivot point
Place the pivot point at the correct place in the array, with all smaller elements to the left, and larger elements to the right
Sort the left-subarray
Sort the right-subarray
At the top level, you're dealing with an array of size N. You then pick one pivot point, put it in its correct position, and can then ignore it completely for the rest of the algorithm.
One level below that, you're dealing with 2 sub-arrays that have a combined size of N-1 (ie, subtract the earlier pivot point). You pick a pivot point for each sub-array, which comes up to 2 additional pivot points.
One level below that, you're dealing with 4 sub-arrays with combined size N-3, for the same reasons as above.
Then N-7... Then N-15... Then N-31...
The depth of your recursive stack remains approximately the same (log N). With merge-sort, you're always dealing with an N-element merge across each level of the recursive stack. With quick-sort though, the number of elements that you're dealing with diminishes as you go down the stack. For example, if you look at the depth midway through the recursive stack, the number of elements you're dealing with is N - 2^((log N)/2) == N - sqrt(N).
Disclaimer: On merge-sort, because you divide the array into 2 exactly equal chunks each time, the recursive depth is exactly log N. On quick-sort, because your pivot point is unlikely to be exactly in the middle of the array, the depth of your recursive stack may be slightly greater than log N. I haven't done the math to see how big a role this factor and the factor described above actually play in the algorithm's complexity.
This is a pretty old question, but since I've dealt with both recently here are my 2c:
Merge sort needs on average ~ N log N comparisons. For already (almost) sorted arrays this goes down to about 1/2 N log N, since while merging we (almost) always take from the "left" run and then just copy the remaining "right" elements. Additionally, I can speculate that already-sorted input makes the processor's branch predictor shine by guessing almost all branches correctly, thus preventing pipeline stalls.
Quick sort on average requires ~ 1.38 N log N comparisons. It does not benefit greatly from an already sorted array in terms of comparisons (though it does in terms of swaps and probably in terms of branch prediction inside the CPU).
My benchmarks on a fairly modern processor show the following:
When the comparison function is a callback (as in the libc qsort() implementation), quicksort is slower than mergesort by 15% on random input and by 30% on an already sorted array, for 64-bit integers.
On the other hand if comparison is not a callback, my experience is that quicksort outperforms mergesort by up to 25%.
However, if your (large) array has very few unique values, merge sort starts gaining on quicksort in any case.
So maybe the bottom line is: if comparison is expensive (e.g. a callback function, comparing strings, comparing many parts of a structure where it mostly takes a second-third-fourth "if" to make the difference), the chances are that you will be better off with merge sort. For simpler tasks quicksort will be faster.
That said, all of the previously mentioned points are also true:
- Quicksort can be N^2, but Sedgewick claims that with a good randomized implementation, the computer performing the sort is more likely to be struck by lightning than the sort is to go N^2
- Mergesort requires extra space
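To get a rough feel for the callback effect in Java (this is only a sketch, not a rigorous benchmark: a real measurement needs JMH, warm-up and repeated runs, and the gap below also mixes in boxing and a different algorithm for the object overload):

    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.Random;

    final class ComparatorCostSketch {
        public static void main(String[] args) {
            long[] primitives = new Random(1).longs(2_000_000).toArray();
            Long[] boxed = Arrays.stream(primitives).boxed().toArray(Long[]::new);

            long t0 = System.nanoTime();
            Arrays.sort(primitives);                        // inlined primitive comparisons
            long t1 = System.nanoTime();
            Arrays.sort(boxed, Comparator.naturalOrder());  // one callback per comparison
            long t2 = System.nanoTime();

            System.out.printf("primitive sort: %d ms, comparator sort: %d ms%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
        }
    }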
Quicksort has a better average-case complexity, but in some applications it is the wrong choice. Quicksort is vulnerable to denial-of-service attacks: if an attacker can choose the input to be sorted, he can easily construct an input that triggers the worst-case O(n^2) running time.
Mergesort's average-case complexity and worst-case complexity are the same, so it doesn't suffer the same problem. This property of merge sort also makes it the superior choice for real-time systems, precisely because there are no pathological cases that cause it to run much, much slower.
I'm a bigger fan of Mergesort than I am of Quicksort, for these reasons.
That's hard to say. The worst case of MergeSort is n log2 n - n + 1, which is exact when n equals 2^k (I have already proved this), and for any n it lies between (n lg n - n + 1) and (n lg n + n + O(lg n)). For QuickSort, the best case is n log2 n (also when n equals 2^k). If you divide MergeSort's worst case by QuickSort's best case, the ratio tends to one as n goes to infinity. So it is as if the worst case of MergeSort were better than the best case of QuickSort, so why do we use quicksort? Remember, MergeSort is not in place: it requires 2n memory space, and it also has to do many array copies, which we don't include in the analysis of the algorithm. In a word, MergeSort really is faster than quicksort in theory, but in reality you need to consider memory space and the cost of array copying; merging is slower than quicksort in practice. I once ran an experiment in Java with 1,000,000 random digits generated by the Random class: it took 2610 ms with mergesort and 1370 ms with quicksort.
Quick sort has an O(n^2) worst case; however, its average case consistently outperforms merge sort. Both algorithms are O(n log n) on average, but remember that when talking about big O we leave off the lower-order terms and constant factors, and quick sort has significantly better constant factors than merge sort.
Merge sort also requires roughly 2n memory in total (the array plus an O(n) merge buffer), while quick sort can be done in place (requiring only the n-element array plus a small recursion stack). This is another reason that quick sort is generally preferred over merge sort.
Extra info:
The worst case of quick sort occurs when the pivot is poorly chosen. Consider the following example:
[5, 4, 3, 2, 1]
If the pivot is chosen as the smallest or largest number in the group, then quick sort will run in O(n^2). The probability of choosing an element that is in the largest or smallest 25% of the list is 0.5, which gives each choice a 0.5 chance of being a good pivot. If we employ a typical pivot-choosing strategy (say, choosing a random element), we have a 0.5 chance of a good pivot for every choice of pivot, so for large collections the probability of always choosing a poor pivot is 0.5^n, which shrinks rapidly. Based on this probability, quick sort is efficient in the average (and typical) case.
When I experimented with both sorting algorithms, by counting the number of recursive calls,
quicksort consistently made fewer recursive calls than mergesort.
That is because quicksort has pivots, and pivots are not included in the next recursive calls. That way quicksort can reach its recursive base cases more quickly than mergesort; a counting sketch is shown below.
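One way to reproduce that observation (simplified algorithms, counter names are mine): bump a counter at the top of each recursive routine and compare the totals for the same input:

    import java.util.Random;

    final class CallCounter {
        static long quickCalls = 0, mergeCalls = 0;

        static void quickSort(int[] a, int lo, int hi) {
            quickCalls++;
            if (lo >= hi) return;
            int pivot = a[hi], i = lo;
            for (int j = lo; j < hi; j++)
                if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
            int t = a[i]; a[i] = a[hi]; a[hi] = t;
            quickSort(a, lo, i - 1);       // the pivot at index i is excluded from both calls
            quickSort(a, i + 1, hi);
        }

        static void mergeSort(int[] a, int[] buf, int lo, int hi) {
            mergeCalls++;
            if (lo >= hi) return;
            int mid = (lo + hi) >>> 1;
            mergeSort(a, buf, lo, mid);    // every element reappears in some deeper call
            mergeSort(a, buf, mid + 1, hi);
            System.arraycopy(a, lo, buf, lo, hi - lo + 1);
            for (int k = lo, i = lo, j = mid + 1; k <= hi; k++)
                a[k] = (j > hi || (i <= mid && buf[i] <= buf[j])) ? buf[i++] : buf[j++];
        }

        public static void main(String[] args) {
            int[] x = new Random(7).ints(1 << 16).toArray();
            int[] y = x.clone();
            quickSort(x, 0, x.length - 1);
            mergeSort(y, new int[y.length], 0, y.length - 1);
            System.out.println("quicksort calls: " + quickCalls + ", mergesort calls: " + mergeCalls);
        }
    }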
While they're both in the same complexity class, that doesn't mean they both have the same runtime. Quicksort is usually faster than mergesort, just because it's easier to code a tight implementation and the operations it does can go faster. It's because quicksort is generally faster that people use it instead of mergesort.
However! I personally will often use mergesort, or a quicksort variant that degrades to mergesort when quicksort does poorly. Remember: quicksort is only O(n log n) on average; its worst case is O(n^2)! Mergesort is always O(n log n). In cases where realtime performance or responsiveness is a must and your input data could be coming from a malicious source, you should not use plain quicksort.
All things being equal, I'd expect most people to use whatever is most conveniently available, and that tends to be qsort(3). Other than that quicksort is known to be very fast on arrays, just like mergesort is the common choice for lists.
What I'm wondering is why it's so rare to see radix or bucket sort. They're O(n), at least on linked lists, and all it takes is some method of converting the key to an ordinal number (strings and floats work just fine).
I'm thinking the reason has to do with how computer science is taught. I even had to demonstrate to my lecturer in Algorithm analysis that it was indeed possible to sort faster than O(n log(n)). (He had the proof that you can't comparison sort faster than O(n log(n)), which is true.)
In other news, floats can be sorted as integers, but you have to turn the negative numbers around afterwards.
Edit:
Actually, here's an even more vicious way to sort floats-as-integers: http://www.stereopsis.com/radix.html. Note that the bit-flipping trick can be used regardless of what sorting algorithm you actually use...
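For reference, a small Java sketch of the key-flipping idea (method names are mine): map each float to an int whose signed ordering matches float ordering, which covers the "turn the negative numbers around" step; any integer sort, including radix or bucket sort, can then run on the keys:

    import java.util.Arrays;

    final class FloatKeySketch {
        // Monotonic float -> int key: non-negative floats already order correctly as raw bits;
        // for negative floats we invert the magnitude bits so that more-negative sorts lower.
        static int sortableKey(float x) {
            int bits = Float.floatToIntBits(x);
            return bits ^ ((bits >> 31) & 0x7fffffff);
        }

        public static void main(String[] args) {
            float[] values = {3.5f, -0.0f, -2.25f, 0.0f, 1e-3f, -1e9f};
            Integer[] idx = {0, 1, 2, 3, 4, 5};
            // Sorting by the integer key reproduces ascending float order.
            Arrays.sort(idx, (i, j) -> Integer.compare(sortableKey(values[i]), sortableKey(values[j])));
            for (int i : idx) System.out.print(values[i] + " ");
        }
    }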
Small additions to quick vs merge sorts.
It can also depend on the kind of items being sorted. If accessing items, swapping, and comparing are not simple operations, like comparing integers in plain memory, then merge sort can be the preferable algorithm.
For example, when we sort items using a network protocol on a remote server.
Also, in custom containers like linked lists, there is no benefit to quick sort:
1. Merge sort on a linked list doesn't need additional memory (see the sketch after this list).
2. Access to elements in quick sort is not sequential (in memory).
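A compact sketch of merge sort on a singly linked list (the Node class and method names are mine): splitting uses slow/fast pointers and merging just relinks existing nodes, so no auxiliary array is needed:

    final class ListMergeSort {
        static final class Node {
            int value;
            Node next;
            Node(int value) { this.value = value; }
        }

        static Node sort(Node head) {
            if (head == null || head.next == null) return head;
            // Find the middle with slow/fast pointers and cut the list in two.
            Node slow = head, fast = head.next;
            while (fast != null && fast.next != null) {
                slow = slow.next;
                fast = fast.next.next;
            }
            Node second = slow.next;
            slow.next = null;
            return merge(sort(head), sort(second));
        }

        // Merge two sorted lists by relinking nodes: O(1) extra space.
        static Node merge(Node a, Node b) {
            Node dummy = new Node(0), tail = dummy;
            while (a != null && b != null) {
                if (a.value <= b.value) { tail.next = a; a = a.next; }
                else                    { tail.next = b; b = b.next; }
                tail = tail.next;
            }
            tail.next = (a != null) ? a : b;
            return dummy.next;
        }
    }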
Quick sort is an in-place sorting algorithm, so it's better suited for arrays. Merge sort, on the other hand, requires extra storage of O(N) and is more suitable for linked lists.
Unlike arrays, a linked list lets us insert items in the middle with O(1) space and O(1) time, so the merge operation in merge sort can be implemented without any extra space. However, allocating and de-allocating extra space for arrays has an adverse effect on the run time of merge sort. Merge sort also favors linked lists because data is accessed sequentially, without much random memory access.
Quick sort, on the other hand, requires a lot of random memory access, and with an array we can access memory directly without the traversal that linked lists require. Quick sort used on arrays also has good locality of reference, since arrays are stored contiguously in memory.
Even though both sorting algorithms' average complexity is O(N log N), people usually use an array for storage in ordinary tasks, and for that reason quick sort is often the algorithm of choice.
EDIT: I just found out that merge sort's worst/best/average case is always n log n, but quick sort can vary from n^2 (worst case, when the elements are already sorted and a naive pivot is used) to n log n (average/best case, when the pivot always divides the array into two halves).
Consider both time and space complexity.
For Merge sort:
Time complexity: O(n log n),
Space complexity: O(n) auxiliary
For Quick sort:
Time complexity: O(n^2) in the worst case,
Space complexity: O(log n) auxiliary (the recursion stack, with a sensible implementation)
Now, they each win in one scenario.
But by using a random pivot you can almost always bring Quick sort's time complexity down to O(n log n).
Thus, Quick sort is preferred in many applications instead of Merge sort.
In C/C++ land, when not using STL containers, I tend to use quicksort, because it is built into the runtime, while mergesort is not.
So I believe that in many cases, it is simply the path of least resistance.
In addition, performance can be much higher with quick sort in cases where the entire dataset does not fit into the working set.
One of the reasons is more philosophical. Quicksort is a top-down philosophy. With n elements to sort, there are n! possible orderings. With two mutually exclusive partitions of sizes m and n-m, the number of possibilities goes down by several orders of magnitude: m! * (n-m)! is smaller than n! by several orders. Imagine 5! versus 3! * 2!: 5! has 10 times more possibilities than the two partitions of 2 and 3, and extrapolate that to a million factorial versus 900K! * 100K!. So instead of worrying about establishing any order within a range or a partition, just establish order at the broader level of partitions and reduce the possibilities within each partition. Any order established earlier within a range will be disturbed later if the partitions themselves are not mutually exclusive.
Any bottom-up ordering approach like merge sort or heap sort is like a worker's or employee's approach, where one starts comparing at a microscopic level early. But that order is bound to be lost as soon as an element between them is found later on. These approaches are very stable and extremely predictable, but do a certain amount of extra work.
Quick sort is like a managerial approach, where one is not initially concerned with any order, only with meeting a broad criterion. Then the partitions are narrowed until you get a sorted set. The real challenge in quicksort is finding a partition criterion in the dark, when you know nothing about the elements to sort. That is why we either spend some effort to find a median value, or pick one at random, or use some arbitrary "managerial" approach. Finding a perfect median can take significant effort and leads back to a plodding bottom-up approach again. So quicksort says: just pick a random pivot and hope it lands somewhere near the middle, or do a little work to take the median of 3 or 5 or so, but don't plan to be perfect and don't waste any time on initial ordering. That tends to do well if you are lucky, and sometimes degrades to n^2 when you don't get a good median but just take a chance. Anyway, the data is random, right?
So I agree more with the top-down logical approach of quicksort, and it turns out that the chance it takes on pivot selection, together with the comparisons it saves early on, seems to work out better more often than any meticulous and thorough, stable bottom-up approach like merge sort. But
