Analysis of algorithms (complexity) - algorithm

How are algorithms analyzed? What makes quicksort have an O(n^2) worst-case performance while merge sort has an O(n log(n)) worst-case performance?

That's a topic for an entire semester. Ultimately we are talking about the upper bound on the number of operations that must be completed before the algorithm finishes, as a function of the size of the input. We do not include the coefficients (e.g. 10N vs 4N^2) because for N large enough, they no longer matter.
How to prove what the big-O of an algorithm is can be quite difficult. It requires a formal proof, and there are many techniques. Often a good ad hoc approach is to just count how many passes over the data the algorithm makes. For instance, if your algorithm has nested for loops, then for each of N items you must operate N times. That would generally be O(N^2).
As for merge sort, you split the data in half over and over. That takes log2(n) splits, and for each split you make a pass over the data, which gives N log(n).
Quicksort is a bit trickier because in the average case it is also N log(n). You have to imagine what happens if your partition splits the data such that every time you get only one element on one side of the partition. Then you will need to split the data N times instead of log(n) times, which makes it N^2. The advantage of quicksort is that it can be done in place, and that we usually get closer to N log(n) performance.

This is introductory analysis of algorithms course material.
An operation is defined (e.g., multiplication) and the analysis is performed in terms of either space or time.
This operation is counted in terms of space or time. Typically analyses are performed with time as the dependent variable and input size as the independent variable.
Example (written in Python for concreteness):
for elem in lst:
    op()
There will be n operations performed, where n is the size of lst. Count it yourself if you don't believe me.
To analyze quicksort and mergesort requires a decent level of what is known as mathematical sophistication. Loosely, you solve a recurrence relation (the discrete analogue of a differential equation) derived from the recursive structure of the algorithm.
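For instance (a sketch, not a full proof): merge sort's running time satisfies the recurrence T(n) = 2T(n/2) + cn with T(1) = c, and unrolling it log2(n) times gives T(n) = cn log2(n) + cn, i.e. O(n log n). Quicksort's worst case instead satisfies T(n) = T(n-1) + cn, which unrolls to roughly cn^2/2, i.e. O(n^2).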

Both quicksort and merge sort split the array into two, sort each part recursively, then combine the results. Quicksort splits by choosing a "pivot" element and partitioning the array into elements smaller than and greater than the pivot. Merge sort splits arbitrarily (down the middle) and then merges the results in linear time. In both cases a single step is O(n), and if the array size halves each time this gives a logarithmic number of steps. So we would expect O(n log(n)).
However quicksort has a worst case where the split is always uneven, so you don't get a number of steps proportional to the logarithm of n, but a number of steps proportional to n. Merge sort splits exactly into two halves (or as close as possible), so it doesn't have this problem.
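A minimal sketch of the two in Python (illustrative only, not a production implementation) makes the shared structure visible: each level does an O(n) amount of work, either partitioning or merging.
def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]                      # the pivot choice is what the worst case hinges on
    left  = [x for x in a if x < pivot]         # one O(n) pass over the data ...
    mid   = [x for x in a if x == pivot]
    right = [x for x in a if x > pivot]         # ... (three scans, still O(n) overall)
    return quicksort(left) + mid + quicksort(right)

def mergesort(a):
    if len(a) <= 1:
        return a
    half = len(a) // 2
    left, right = mergesort(a[:half]), mergesort(a[half:])   # always an (almost) even split
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):     # O(n) merge of the two sorted halves
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]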

Quicksort has many variants depending on pivot selection.
Let's assume we always select the first item in the array as the pivot.
If the input array is already sorted, then quicksort degenerates into a kind of selection sort!
Because you are not really dividing the array; you are only peeling off the first item in each cycle.
On the other hand, merge sort always divides the input array in the same manner, regardless of its content.
Also note: divide and conquer performs best when the divisions are nearly equal in length.

Analysing algorithms is a painstaking effort, and it is error-prone. I would compare it to a question like: what is the chance of being dealt two aces in a bridge game? One has to carefully consider all possibilities and must not overlook that the aces can arrive in any order.
So what one does to analyse these algorithms is go through the actual pseudocode of the algorithm and add up what a worst-case situation would cost. In the following I will paint with a large brush.
For quicksort one has to choose a pivot to split the set. In a case of dramatically bad luck the set splits into a set of n-1 and a set of 1 each time, for n steps, where each step means inspecting n elements. This arrives at O(N^2).
For merge sort one starts by splitting the sequence into in-order (already sorted) runs. Even in the worst case that means at most n runs. Those can be combined two by two, then the larger sets are combined two by two, etc. However, those (at most) n/2 first combinations deal with extremely small subsets, and the last step deals with subsets of size about n, but there is just one such step. This arrives at O(N log N).


Sorting Algorithms with time complexity Log(n)

Is there any sorting algorithm with an average time complexity of log(n)?
Example: [8,2,7,5,0,1]
Sort the given array with time complexity log(n).
No; this is, in fact, impossible for an arbitrary list! We can prove this fairly simply: the absolute minimum thing we must do for a sort is look at each element in the list at least once. After all, an element may belong anywhere in the sorted list; if we don't even look at an element, it's impossible for us to sort the array. This means that any sorting algorithm has a lower bound of n, and since n > log(n), a log(n) sort is impossible.
Although n is the lower bound, most sorts (like merge sort, quick sort) are n*log(n) time. In fact, while we can sort purely numerical lists in n time in some cases with radix sort, we actually have no way to, say, sort arbitrary objects like strings in less than n*log(n).
That said, there may be times when the list is not arbitrary; e.g. we have a list that is entirely sorted except for one element, and we need to put that element in the list. In that case, structures like a binary search tree can let you insert in log(n), but this is only possible because we are operating on a single element. Building up a tree (i.e. performing n inserts) is n*log(n) time.
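For the single-element case, a sorted Python list plus the standard bisect module illustrates the log(n) comparison count (though note the list insert itself still shifts O(n) elements; a balanced tree avoids that too):
import bisect

a = [0, 1, 2, 5, 7, 8]      # already sorted except for the element we want to add
bisect.insort(a, 4)         # binary search: O(log n) comparisons to find the spot
print(a)                    # [0, 1, 2, 4, 5, 7, 8]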
As #dominicm00 also mentioned, the answer is no.
In general, when you see an algorithm with a time complexity of log N (base 2), it means you are repeatedly dividing the input list into two sets and discarding one of them. Sorting has to put every element in its appropriate place; if we get rid of half of the list in each iteration, that does not correlate with sorting.
The most efficient sorting algorithms have a time complexity of O(n), but with some limitations. The three most famous algorithms with a complexity of O(n) are:
Counting sort, with a time complexity of O(n+k), where k is the maximum value in the given list. Assuming n >> k, you can consider its time complexity to be O(n) (see the sketch after this list).
Radix sort, with a time complexity of O(d*(n+k)), where k is the maximum value in the input list and d is the maximum number of digits an item may have. Similarly to counting sort, assuming n >> k and n >> d, the time complexity is O(n).
Bucket sort, with a time complexity of O(n).
But in general, due to the limitations of each of these algorithms, most implementations rely on O(n log n) algorithms such as merge sort, quick sort, and heap sort.
There are also sorting algorithms with a time complexity of O(n^2) which are recommended for lists of smaller sizes, such as insertion sort, selection sort, and bubble sort.
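As a concrete illustration of the k << n case, here is a minimal counting sort sketch in Python (assuming non-negative integers no larger than k), applied to the example array from the question:
def counting_sort(a, k):
    counts = [0] * (k + 1)              # assumes every value is an integer in 0..k
    for x in a:
        counts[x] += 1                  # one O(n) pass to tally the values
    out = []
    for value, c in enumerate(counts):  # one O(n + k) pass to rebuild the sorted list
        out.extend([value] * c)
    return out

print(counting_sort([8, 2, 7, 5, 0, 1], 8))   # [0, 1, 2, 5, 7, 8]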
Using a PLA it might be possible to implement counting sort for a few elements with a low range of values:
count each value in parallel and sum using lg2(N) steps
find the offset of each element in lg2(N) steps
write the array in O(1)
Only massively parallel computation would be able to do this; general purpose CPUs won't manage it unless they implement it in silicon as part of their SIMD.

What sorting techniques can I use when comparing elements is expensive?

Problem
I have an application where I want to sort an array a of elements a0, a1,...,an-1. I have a comparison function cmp(i,j) that compares elements ai and aj and a swap function swap(i,j), that swaps elements ai and aj of the array. In the application, execution of the cmp(i,j) function might be extremely expensive, to the point where one execution of cmp(i,j) takes longer than any other steps in the sort (except for other cmp(i,j) calls, of course) together. You may think of cmp(i,j) as a rather lengthy IO operation.
Please assume for the sake of this question that there is no way to make cmp(i,j) faster. Assume all optimizations that could possibly make cmp(i,j) faster have already been done.
Questions
Is there a sorting algorithm that minimizes the number of calls to cmp(i,j)?
It is possible in my application to write a predicate expensive(i,j) that is true iff a call to cmp(i,j) would take a long time. expensive(i,j) is cheap, and expensive(i,j) ∧ expensive(j,k) → expensive(i,k) mostly holds in my current application. This is not guaranteed though.
Would the existence of expensive(i,j) allow for a better algorithm that tries to avoid expensive comparison operations? If yes, can you point me to such an algorithm?
I'd like pointers to further material on this topic.
Example
This is an example that is not entirely unlike the application I have.
Consider a set of possibly large files. In this application the goal is to find duplicate files among them. This essentially boils down to sorting the files by some arbitrary criterion and then traversing them in order, outputting sequences of equal files that were encountered.
Of course, reading in large amounts of data is expensive, therefore one can, for instance, only read the first megabyte of each file and calculate a hash function on this data. If the files compare equal, so do the hashes, but the reverse may not hold. Two large files could differ in only one byte near the end.
The implementation of expensive(i,j) in this case is simply a check of whether the hashes are equal. If they are, an expensive deep comparison is necessary.
I'll try to answer each question as best as I can.
Is there a sorting algorithm that minimizes the number of calls to cmp(i,j)?
Traditional sorting methods may have some variation, but in general there is a mathematical limit to the minimum number of comparisons necessary to sort a list, and most algorithms take advantage of that, since comparisons are often not cheap. You could try sorting by something else, or try using a shortcut that may be faster but only approximates the real solution.
Would the existence of expensive(i,j) allow for a better algorithm that tries to avoid expensive comparison operations? If yes, can you point me to such an algorithm?
I don't think you can get around the necessity of doing at least the minimum number of comparisons, but you may be able to change what you compare. If you can compare hashes or subsets of the data instead of the whole thing, that could certainly be helpful. Anything you can do to simplify the comparison operation will make a big difference, but without knowing specific details of the data, it's hard to suggest specific solutions.
I'd like pointers to further material on this topic.
Check these out:
Apparently Donald Knuth's The Art of Computer Programming, Volume 3 has a section on this topic, but I don't have a copy handy.
Wikipedia of course has some insight into the matter.
Sorting an array with minimal number of comparisons
How do I figure out the minimum number of swaps to sort a list in-place?
Limitations of comparison based sorting techniques
The theoretical minimum number of comparisons needed to sort an array of n elements on average is lg (n!), which is about n lg n - n. There's no way to do better than this on average if you're using comparisons to order the elements.
Of the standard O(n log n) comparison-based sorting algorithms, mergesort makes the lowest number of comparisons (just about n lg n, compared with about 1.44 n lg n for quicksort and about n lg n + 2n for heapsort), so it might be a good algorithm to use as a starting point. Typically mergesort is slower than heapsort and quicksort, but that's usually under the assumption that comparisons are fast.
If you do use mergesort, I'd recommend using an adaptive variant of mergesort like natural mergesort so that if the data is mostly sorted, the number of comparisons is closer to linear.
There are a few other options available. If you know for a fact that the data is already mostly sorted, you could use insertion sort or a standard variation of heapsort to try to speed up the sorting. Alternatively, you could use mergesort but use an optimal sorting network as a base case when n is small. This might shave off enough comparisons to give you a noticeable performance boost.
Hope this helps!
A technique called the Schwartzian transform can be used to reduce any sorting problem to that of sorting integers. It requires you to apply a function f to each of your input items, where f(x) < f(y) if and only if x < y.
(Python-oriented answer, when I thought the question was tagged [python])
If you can define a function f such that f(x) < f(y) if and only if x < y, then you can sort using
sort(L, key=f)
Python guarantees that key is called at most once for each element of the iterable you are sorting. This provides support for the Schwartzian transform.
Python 3 does not support specifying a cmp function, only the key parameter. This page provides a way of easily converting any cmp function to a key function.
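For example, with a toy list and a placeholder cmp function (functools.cmp_to_key is the standard conversion mentioned above):
from functools import cmp_to_key

def cmp(x, y):
    # placeholder for an expensive comparison: returns negative, zero or positive
    return (x > y) - (x < y)

L = [3, 1, 2]
L.sort(key=cmp_to_key(cmp))   # each element's key wrapper is built once; cmp is then used pairwise
print(L)                      # [1, 2, 3]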
Is there a sorting algorithm that minimizes the number of calls to cmp(i,j)?
Edit: Ah, sorry. There are algorithms that minimize the number of comparisons (below), but not that I know of for specific elements.
Would the existence of expensive(i,j) allow for a better algorithm that tries to avoid expensive comparing operations? If yes, can you point me to such an algorithm?
Not that I know of, but perhaps you'll find it in these papers below.
I'd like pointers to further material on this topic.
On Optimal and Efficient in Place Merging
Stable Minimum Storage Merging by Symmetric Comparisons
Optimal Stable Merging (this one seems to be O(n log^2 n), though)
Practical In-Place Mergesort
If you implement any of them, posting them here might be useful for others too! :)
Is there a sorting algorithm that minimizes the number of calls to cmp(i,j)?
The merge insertion algorithm, described in D. Knuth's "The Art of Computer Programming", Vol. 3, chapter 5.3.1, uses fewer comparisons than other comparison-based algorithms. But it still needs O(N log N) comparisons.
Would the existence of expensive(i,j) allow for a better algorithm that tries to avoid expensive comparing operations? If yes, can you point me to such an algorithm?
I think some existing sorting algorithms may be modified to take the expensive(i,j) predicate into account. Let's take the simplest of them - insertion sort. One of its variants, named binary insertion sort in Wikipedia, uses only O(N log N) comparisons.
It employs a binary search to determine the correct location to insert new elements. We could apply the expensive(i,j) predicate after each binary search step to determine whether it is cheap to compare the inserted element with the "middle" element found in that step. If it is expensive, we could try the "middle" element's neighbors, then their neighbors, etc. If no cheap comparison can be found we just return to the "middle" element and perform the expensive comparison.
There are several possible optimizations. If the predicate and/or the cheap comparisons are not that cheap, we could roll back to the "middle" element earlier, before all other possibilities are tried. Also, if move operations cannot be considered very cheap, we could use an order-statistics data structure (like an indexable skiplist) to reduce the insertion cost to O(N log N).
This modified insertion sort needs O(N log N) time for data movement, O(N^2) predicate computations and cheap comparisons, and O(N log N) expensive comparisons in the worst case. But more likely there will be only O(N log N) predicates and cheap comparisons and O(1) expensive comparisons.
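A rough sketch of plain binary insertion sort in Python, with a caller-supplied cmp, shows where the expensive(i,j) check would slot in; the neighbor-probing refinement described above is left as a comment, since its details depend on the application:
def binary_insertion_sort(items, cmp):
    # cmp(x, y) returns <0, 0 or >0; each call is assumed to be expensive
    out = []
    for x in items:
        lo, hi = 0, len(out)
        while lo < hi:
            mid = (lo + hi) // 2
            # refinement point: if expensive(x, out[mid]), try out[mid-1], out[mid+1], ...
            if cmp(x, out[mid]) < 0:
                hi = mid
            else:
                lo = mid + 1
        out.insert(lo, x)     # O(n) data movement, but no extra comparisons
    return out

print(binary_insertion_sort([5, 2, 4, 1, 3], lambda a, b: a - b))   # [1, 2, 3, 4, 5]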
Consider a set of possibly large files. In this application the goal is to find duplicate files among them.
If the only goal is to find duplicates, I think sorting (at least comparison sorting) is not necessary. You could just distribute the files between buckets depending on the hash value computed for the first megabyte of data from each file. If there is more than one file in some bucket, take another 10, 100, 1000, ... megabytes. If there is still more than one file in some bucket, compare them byte by byte. Actually this procedure is similar to radix sort.
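A simplified sketch of that bucketing idea in Python (the helper names are made up; it hashes the first megabyte, and falls back to a full-file hash only when the cheap prefixes collide, rather than growing the prefix step by step):
import hashlib, os
from collections import defaultdict

def digest(path, limit=None):
    with open(path, 'rb') as f:
        data = f.read(limit) if limit else f.read()
    return hashlib.sha256(data).hexdigest()

def duplicate_groups(paths):
    cheap = defaultdict(list)
    for p in paths:
        # file size plus prefix hash is the inexpensive "comparison"
        cheap[(os.path.getsize(p), digest(p, 1 << 20))].append(p)
    groups = []
    for bucket in cheap.values():
        if len(bucket) > 1:                       # only now pay for the deep comparison
            full = defaultdict(list)
            for p in bucket:
                full[digest(p)].append(p)         # whole file; memory-hungry for huge files
            groups.extend(g for g in full.values() if len(g) > 1)
    return groups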
Most sorting algorithms out there try to minimize the number of comparisons during sorting.
My advice:
Pick quicksort as a base algorithm and memoize the results of comparisons in case you happen to compare the same items again. This should help you in the O(N^2) worst case of quicksort. Bear in mind that this will make you use O(N^2) memory.
Now, if you are really adventurous you could try the dual-pivot quicksort.
Something to keep in mind is that if you are continuously sorting the list with new additions, and the comparison between two elements is guaranteed to never change, you can memoize the comparison operation which will lead to a performance increase. In most cases this won't be applicable, unfortunately.
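A sketch of that memoization in Python, with a toy list and an artificial delay standing in for the real expensive comparison (the cache can grow to O(N^2) entries, as the quicksort answer above warns):
import time
from functools import cmp_to_key, lru_cache

data = [5, 3, 5, 1, 3]              # toy stand-in for the real elements

@lru_cache(maxsize=None)            # remembers every (i, j) pair ever compared
def cached_cmp(i, j):
    time.sleep(0.01)                # pretend this comparison is very costly
    return (data[i] > data[j]) - (data[i] < data[j])

order = sorted(range(len(data)), key=cmp_to_key(cached_cmp))   # sort indices, reusing cached results
print([data[i] for i in order])     # [1, 3, 3, 5, 5]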
We can also look at your problem from another direction: it seems your problem is IO-related. In that case you can take advantage of parallel sorting algorithms; in fact, you can run many threads to perform comparisons on files, then sort them with one of the best-known parallel algorithms, such as sample sort.
Quicksort and mergesort are the fastest possible sorting algorithms, unless you have some additional information about the elements you want to sort. They will need O(n log(n)) comparisons, where n is the size of your array.
It is mathematically proven that any generic (comparison-based) sorting algorithm cannot be more efficient than that.
If you want to make the procedure faster, you might consider adding some metadata to accelerate the computation (can't be more precise unless you are, too).
If you know something stronger, such as the existence of a maximum and a minimum, you can use faster sorting algorithms, such as radix sort or bucket sort.
You can look for all the mentioned algorithms on wikipedia.
As far as I know, you can't benefit from the expensive relationship. Even if you know that, you still need to perform such comparisons. As I said, you'd better try and cache some results.
EDIT: I took some time to think about it, and I came up with a slightly customized solution that I think will make the minimum possible number of expensive comparisons, but totally disregards the overall number of comparisons. It will make at most (n-m)*log(k) expensive comparisons, where
n is the size of the input vector
m is the number of distinct components which are easy to compare with each other
k is the maximum number of elements which are hard to compare and have consecutive ranks.
Here is the description of the algorithm. It's worth noting that it will perform much worse than a simple merge sort unless m is big and k is small. The total running time is O[n^4 + E(n-m)log(k)], where E is the cost of an expensive comparison (I assumed E >> n, to prevent it from being wiped out of the asymptotic notation). That n^4 can probably be further reduced, at least in the mean case.
EDIT: The file I posted contained some errors. While trying it out, I also fixed them (I overlooked the pseudocode for the insert_sorted function, but the idea was correct). I made a Java program that sorts a vector of integers, with delays added as you described. Even though I was skeptical, it actually does better than mergesort if the delay is significant (I used a 1 s delay against integer comparisons, which usually take nanoseconds to execute).

understanding of lower bound for comparison-based sorting algorithm

First, I know
lower bound is O(nlogn)
and how to prove it
And I agree the lower bound should be O(nlogn).
What I don't quite understand is:
For some special cases, the # of comparisons could actually be even lower than the lower bound. For example, use bubble sort to sort an already sorted array. The # of comparisons is O(n).
So how to actually understand the idea of lower bound?
The classical definition on Wikipedia: http://en.wikipedia.org/wiki/Upper_and_lower_bounds does not help much.
My current understanding of this is:
lower bound of the comparison-based sorting is actually the upper bound for the worst case.
namely, how well you could do in the worst case.
Is this correct? Thanks.
lower bound of the comparison-based sorting is actually the upper bound for the best case.
No.
The function that you are bounding is the worst-case running time of the best possible sorting algorithm.
Imagine the following game:
We choose some number n.
You pick your favorite sorting algorithm.
After looking at your algorithm, I pick some input sequence of length n.
We run your algorithm on my input, and you give me a dollar for every executed instruction.
The O(n log n) upper bound means you can limit your cost to at most O(n log n) dollars, no matter what input sequence I choose.
The Ω(n log n) lower bound means that I can force you to pay at least Ω(n log n) dollars, no matter what sorting algorithm you choose.
Also: "The lower bound is O(n log n)" doesn't make any sense. O(f(n)) means "at most a constant times f(n)". But "lower bound" means "at least ...". So saying "a lower bound of O(n log n)" is exactly like saying "You can save up to 50% or more!" — it's completely meaningless! The correct notation for lower bounds is Ω(...).
The problem of sorting can be viewed as follows.
Input: a sequence of n numbers a1, a2, ..., an.
Output: a permutation (reordering) a'1, a'2, ..., a'n of the input sequence such that a'1 <= a'2 <= ... <= a'n.
A sorting algorithm is comparison based if it uses comparison operators to find the order between two numbers. Comparison sorts can be viewed abstractly in terms of decision trees. A decision tree is a full binary tree that represents the comparisons between elements that are performed by a particular sorting algorithm operating on an input of a given size. The execution of the sorting algorithm corresponds to tracing a path from the root of the decision tree to a leaf. At each internal node, a comparison ai <= aj is made. The left subtree then dictates subsequent comparisons for the case ai <= aj, and the right subtree dictates subsequent comparisons for the case ai > aj. When we come to a leaf, the sorting algorithm has established the ordering. So we can say the following about the decision tree.
1) Each of the n! permutations of n elements must appear as one of the leaves of the decision tree for the sorting algorithm to sort properly.
2) Let x be the maximum number of comparisons in a sorting algorithm. The maximum height of the decision tree would be x. A tree with maximum height x has at most 2^x leaves.
Combining the above two facts, we get the following relation:
n! <= 2^x
Taking the base-2 logarithm of both sides:
log2(n!) <= x
Since log2(n!) = Θ(n log n), we can say
x = Ω(n log n)
Therefore, any comparison-based sorting algorithm must make at least Ω(n log n) comparisons in the worst case to sort the input array, and heapsort and merge sort are asymptotically optimal comparison sorts.
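One missing step, for completeness: log2(n!) really is Θ(n log n), because log2(n!) = log2(1) + log2(2) + ... + log2(n) <= n log2(n), and the largest n/2 terms alone already give log2(n!) >= (n/2) log2(n/2).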
When you do asymptotic analysis you derive an O, Θ, or Ω bound over all inputs.
But you can also analyse how properties of the input affect the runtime.
For example, algorithms that take as input something that is almost sorted have better performance than the formal asymptotic formula suggests, due to the input characteristics and the structure of the algorithm. Examples are bubblesort and quicksort.
It is not that you can go below the lower bound. It is only the behavior of a particular implementation on specific inputs.
Imagine all the possible arrays of things that could be sorted. Let's say they are arrays of length 'n' and ignore stuff like arrays with one element (which, of course, are always already sorted).
Imagine a long list of all possible value combinations for that array. Notice that we can simplify this a bit, since the values in the array always have some sort of ordering. So if we replace the smallest one with the number 1, the next one with 1 or 2 (depending on whether it's equal or greater) and so forth, we end up with the same sorting problem as if we allowed any value at all. (This means an array of length n will need, at most, the numbers 1-n. Maybe fewer if some are equal.)
Then put a number beside each one telling how much work it takes to sort that array with those values in it. You could put several numbers. For example, you could put the number of comparisons it takes. Or you could put the number of element moves or swaps it takes. Whatever number you put there indicates how many operations it takes. You could put the sum of them.
One thing you have to do is ignore any special information. For example, you can't know ahead of time that the arrangement of values in the array is already sorted. Your algorithm has to do the same steps with that array as with any other. (But the first step could be to check if it's sorted. Usually that doesn't help in sorting, though.)
So. The largest number, measured by comparisons, is the typical number of comparisons when the values are arranged in a pathologically bad way. The smallest number, similarly, is the number of comparisons needed when the values are arranged in a really good way.
For a bubble sort, the best case (shortest or fastest) is if the values are in order already. But that's only if you use a flag to tell whether you swapped any values. In that best case, you look at each adjacent pair of elements one time, find they are already sorted, and when you get to the end you find you haven't swapped anything, so you are done. That's n-1 comparisons total and forms the lowest number of comparisons you could ever do.
It would take me a while to figure out the worst case. I haven't looked at a bubble sort in decades. But I would guess it's a case where they are reverse ordered. You do the 1st comparison and find the 1st element needs to move. You slide it up towards the top, comparing it to each one, and finally swap it with the last element. So you did n-1 comparisons in that pass. The 2nd pass starts at the 2nd element and does n-2 comparisons, and so forth. So you do (n-1)+(n-2)+(n-3)+...+1 comparisons in this case, which is about (n**2)/2.
Maybe your variation on bubble sort is better than the one I described. No matter.
For bubble sort then, the lower bound is n-1 and the upper bound is (n**2)/2
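A small Python version of that flag-based bubble sort, counting comparisons, shows both bounds directly (a sketch for illustration):
def bubble_sort_comparisons(a):
    comparisons = 0
    for end in range(len(a) - 1, 0, -1):
        swapped = False
        for i in range(end):
            comparisons += 1
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:              # nothing moved on this pass: already sorted, stop early
            break
    return comparisons

print(bubble_sort_comparisons([1, 2, 3, 4, 5]))   # 4  -> n-1 comparisons, the best case
print(bubble_sort_comparisons([5, 4, 3, 2, 1]))   # 10 -> n(n-1)/2 comparisons, the worst case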
Other sort algorithms have better performance.
You might want to remember that there are other operations that cost besides comparisons. We use comparisons because much sorting is done with strings and a string comparison is costly in compute time.
You could count element swaps instead (or the sum of comparisons and element swaps), but swaps are typically cheaper than comparisons involving strings. If you have numbers, they are similar.
You could also use more esoteric things like branch prediction failures or memory cache misses as the measure.

Intuitive explanation for why QuickSort is n log n?

Is anybody able to give a 'plain English' intuitive, yet formal, explanation of what makes QuickSort n log n? From my understanding it has to make a pass over n items, and it does this log n times... I'm not sure how to put into words why it does this log n times.
Complexity
A Quicksort starts by partitioning the input into two chunks: it chooses a "pivot" value, and partitions the input into those less than the pivot value and those larger than the pivot value (any values equal to the pivot have to go into one or the other, but for a basic description it doesn't matter much which they end up in).
Since the input (by definition) isn't sorted, to partition it like that, it has to look at every item in the input, so that's an O(N) operation. After it's partitioned the input the first time, it recursively sorts each of those "chunks". Each of those recursive calls looks at every one of its inputs, so between the two calls it ends up visiting every input value (again). So, at the first "level" of partitioning, we have one call that looks at every input item. At the second level, we have two partitioning steps, but between the two, they (again) look at every input item. Each successive level has more individual partitioning steps, but in total the calls at each level look at all the input items.
It continues partitioning the input into smaller and smaller pieces until it reaches some lower limit on the size of a partition. The smallest that could possibly be would be a single item in each partition.
Ideal Case
In the ideal case we hope each partitioning step breaks the input in half. The "halves" probably won't be precisely equal, but if we choose the pivot well, they should be pretty close. To keep the math simple, let's assume perfect partitioning, so we get exact halves every time.
In this case, the number of times we can break it in half will be the base-2 logarithm of the number of inputs. For example, given 128 inputs, we get partition sizes of 64, 32, 16, 8, 4, 2, and 1. That's 7 levels of partitioning (and yes log2(128) = 7).
So, we have log(N) partitioning "levels", and each level has to visit all N inputs. So, log(N) levels times N operations per level gives us O(N log N) overall complexity.
Worst Case
Now let's revisit that assumption that each partitioning level will "break" the input precisely in half. Depending on how good a choice of partitioning element we make, we might not get precisely equal halves. So what's the worst that could happen? The worst case is a pivot that's actually the smallest or largest element in the input. In this case, we do an O(N) partitioning level, but instead of getting two halves of equal size, we've ended up with one partition of one element, and one partition of N-1 elements. If that happens for every level of partitioning, we obviously end up doing O(N) partitioning levels before every partition is down to one element.
This gives the technically correct big-O complexity for Quicksort (big-O officially refers to the upper bound on complexity). Since we have O(N) levels of partitioning, and each level requires O(N) steps, we end up with O(N * N) (i.e., O(N^2)) complexity.
Practical implementations
As a practical matter, a real implementation will typically stop partitioning before it actually reaches partitions of a single element. In a typical case, when a partition contains, say, 10 elements or fewer, you'll stop partitioning and use something like an insertion sort (since it's typically faster for a small number of elements).
Modified Algorithms
More recently, other modifications to Quicksort have been invented (e.g., Introsort, PDQ Sort) which prevent that O(N^2) worst case. Introsort does so by keeping track of the current partitioning "level", and when/if it goes too deep, it'll switch to a heap sort, which is slower than Quicksort for typical inputs, but guarantees O(N log N) complexity for any inputs.
PDQ sort adds another twist to that: since heap sort is slower, it tries to avoid switching to heap sort if possible. To do that, if it looks like it's getting poor pivot values, it'll randomly shuffle some of the inputs before choosing a pivot. Then, if (and only if) that fails to produce sufficiently better pivot values, it'll switch to using a heap sort instead.
Each partitioning operation takes O(n) operations (one pass over the array).
On average, each partitioning divides the array into two parts, which amounts to log n levels of partitioning. In total we have O(n * log n) operations.
I.e., on average there are log n partitioning levels, and each level takes O(n) operations.
There's a key intuition behind logarithms:
The number of times you can divide a number n by a constant before reaching 1 is O(log n).
In other words, if you see a runtime that has an O(log n) term, there's a good chance that you'll find something that repeatedly shrinks by a constant factor.
In quicksort, what's shrinking by a constant factor is the size of the largest recursive call at each level. Quicksort works by picking a pivot, splitting the array into two subarrays of elements smaller than the pivot and elements bigger than the pivot, then recursively sorting each subarray.
If you pick the pivot randomly, then there's a 50% chance that the chosen pivot will be in the middle 50% of the elements, which means that there's a 50% chance that the larger of the two subarrays will be at most 75% the size of the original. (Do you see why?)
Therefore, a good intuition for why quicksort runs in time O(n log n) is the following: each layer in the recursion tree does O(n) work, and since each recursive call has a good chance of reducing the size of the array by at least 25%, we'd expect there to be O(log n) layers before you run out of elements to throw away out of the array.
This assumes, of course, that you're choosing pivots randomly. Many implementations of quicksort use heuristics to try to get a nice pivot without too much work, and those implementations can, unfortunately, lead to poor overall runtimes in the worst case. #Jerry Coffin's excellent answer to this question talks about some variations on quicksort that guarantee O(n log n) worst-case behavior by switching which sorting algorithms are used, and that's a great place to look for more information about this.
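If you want to see this empirically, here is a small randomized quicksort in Python that counts comparisons; on random input the count lands close to the theoretical ~1.39 n log2(n) average (a sketch, not a tuned implementation):
import math, random

def quicksort_comparisons(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return 0
    p = random.randint(lo, hi)
    a[p], a[hi] = a[hi], a[p]            # move a random pivot to the end (Lomuto partition)
    pivot, i, comps = a[hi], lo, 0
    for j in range(lo, hi):
        comps += 1                       # one comparison against the pivot per element
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return comps + quicksort_comparisons(a, lo, i - 1) + quicksort_comparisons(a, i + 1, hi)

n = 100_000
data = [random.random() for _ in range(n)]
print(quicksort_comparisons(data), round(1.39 * n * math.log2(n)))   # typically within a few percent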
Well, it's not always n log n. n log n is the running time when the chosen pivot is approximately in the middle. In the worst case, if you choose the smallest or the largest element as the pivot, the time will be O(n^2).
To visualize 'n log n', you can assume the pivot to be the element closest to the average of all the elements in the array to be sorted.
This would partition the array into 2 parts of roughly the same length.
On both of these you apply the quicksort procedure.
As in each step you halve the length of the array, you will do this log n (base 2) times until you reach length 1, i.e. a sorted array of 1 element.
Break the sorting algorithm into two parts. The first is the partitioning and the second the recursive calls. The complexity of partitioning is O(N) and the number of levels of recursive calls in the ideal case is O(log N). For example, if you have 4 inputs then there will be log(4) = 2 levels of recursive calls. Multiplying both you get O(N log N). It is a very basic explanation.
In fact you need to find the position of all N elements (each eventually serves as a pivot), but the maximum number of comparisons is log N for each element (the first pivot is compared against N elements, the second against N/2, the third against N/4, ... assuming the pivot is the median element).
In the case of the ideal scenario, the first-level call places 1 element in its proper position. There are 2 calls at the second level taking O(n) time combined, but they put 2 elements in their proper positions. In the same way, there will be 4 calls at the 3rd level which take O(n) combined time but place 4 elements into their proper positions. So the depth of the recursion tree will be log(n), and at each depth O(n) time is needed across all recursive calls. So the time complexity is O(n log n).

Why is quicksort better than mergesort?

I was asked this question during an interview. They're both O(nlogn) and yet most people use Quicksort instead of Mergesort. Why is that?
Quicksort has O(n2) worst-case runtime and O(nlogn) average case runtime. However, it’s superior to merge sort in many scenarios because many factors influence an algorithm’s runtime, and, when taking them all together, quicksort wins out.
In particular, the often-quoted runtime of sorting algorithms refers to the number of comparisons or the number of swaps necessary to perform to sort the data. This is indeed a good measure of performance, especially since it’s independent of the underlying hardware design. However, other things – such as locality of reference (i.e. do we read lots of elements which are probably in cache?) – also play an important role on current hardware. Quicksort in particular requires little additional space and exhibits good cache locality, and this makes it faster than merge sort in many cases.
In addition, it’s very easy to avoid quicksort’s worst-case run time of O(n2) almost entirely by using an appropriate choice of the pivot – such as picking it at random (this is an excellent strategy).
In practice, many modern implementations of quicksort (in particular libstdc++’s std::sort) are actually introsort, whose theoretical worst-case is O(nlogn), same as merge sort. It achieves this by limiting the recursion depth, and switching to a different algorithm (heapsort) once it exceeds logn.
As many people have noted, the average case performance for quicksort is faster than mergesort. But this is only true if you are assuming constant time to access any piece of memory on demand.
In RAM this assumption is generally not too bad (it is not always true because of caches, but it is not too bad). However if your data structure is big enough to live on disk, then quicksort gets killed by the fact that your average disk does something like 200 random seeks per second. But that same disk has no trouble reading or writing megabytes per second of data sequentially. Which is exactly what mergesort does.
Therefore if data has to be sorted on disk, you really, really want to use some variation on mergesort. (Generally you quicksort sublists, then start merging them together above some size threshold.)
Furthermore if you have to do anything with datasets of that size, think hard about how to avoid seeks to disk. For instance this is why it is standard advice that you drop indexes before doing large data loads in databases, and then rebuild the index later. Maintaining the index during the load means constantly seeking to disk. By contrast if you drop the indexes, then the database can rebuild the index by first sorting the information to be dealt with (using a mergesort of course!) and then loading it into a BTREE datastructure for the index. (BTREEs are naturally kept in order, so you can load one from a sorted dataset with few seeks to disk.)
There have been a number of occasions where understanding how to avoid disk seeks has let me make data processing jobs take hours rather than days or weeks.
Actually, QuickSort is O(n^2). Its average-case running time is O(n log(n)), but its worst case is O(n^2), which occurs when you run it on a list that contains few unique items. Randomization takes O(n). Of course, this doesn't change its worst case; it just prevents a malicious user from making your sort take a long time.
QuickSort is more popular because it:
Is in-place (MergeSort requires extra memory linear in the number of elements to be sorted).
Has a small hidden constant.
"and yet most people use Quicksort instead of Mergesort. Why is that?"
One psychological reason that has not been given is simply that Quicksort is more cleverly named, i.e. good marketing.
Yes, Quicksort with triple partitioning is probably one of the best general-purpose sort algorithms, but there's no getting over the fact that "Quick" sort sounds much more powerful than "Merge" sort.
As others have noted, the worst case of Quicksort is O(n^2), while mergesort and heapsort stay at O(nlogn). In the average case, however, all three are O(nlogn), so for the vast majority of cases they are comparable.
What makes Quicksort better on average is that the inner loop implies comparing several values with a single one, while in the other two both terms are different for each comparison. In other words, Quicksort does half as many reads as the other two algorithms. On modern CPUs performance is heavily dominated by access times, so in the end Quicksort ends up being a great first choice.
I'd like to add that of the three algorithms mentioned so far (mergesort, quicksort and heapsort) only mergesort is stable. That is, the relative order does not change for values which have the same key. In some cases this is desirable.
But, truth be told, in practical situations most people need only good average performance and quicksort is... quick =)
All sort algorithms have their ups and downs. See Wikipedia article for sorting algorithms for a good overview.
From the Wikipedia entry on Quicksort:
Quicksort also competes with mergesort, another recursive sort algorithm but with the benefit of worst-case Θ(nlogn) running time. Mergesort is a stable sort, unlike quicksort and heapsort, and can be easily adapted to operate on linked lists and very large lists stored on slow-to-access media such as disk storage or network attached storage. Although quicksort can be written to operate on linked lists, it will often suffer from poor pivot choices without random access. The main disadvantage of mergesort is that, when operating on arrays, it requires Θ(n) auxiliary space in the best case, whereas the variant of quicksort with in-place partitioning and tail recursion uses only Θ(logn) space. (Note that when operating on linked lists, mergesort only requires a small, constant amount of auxiliary storage.)
Mu!
Quicksort is not better, it is well suited for a different kind of application, than mergesort.
Mergesort is worth considering if speed is of the essence, bad worst-case performance cannot be tolerated, and extra space is available.[1]
You stated that «They're both O(nlogn) […]». This is wrong. «Quicksort uses about n^2/2 comparisons in the worst case.»[1]
However, the most important property according to my experience is the ease of getting sequential access while sorting, when using programming languages with the imperative paradigm.
[1] Sedgewick, Algorithms
I would like to add to the existing great answers some math about how QuickSort performs when diverging from best case and how likely that is, which I hope will help people understand a little better why the O(n^2) case is not of real concern in the more sophisticated implementations of QuickSort.
Outside of random access issues, there are two main factors that can impact the performance of QuickSort and they are both related to how the pivot compares to the data being sorted.
1) A small number of keys in the data. A dataset of all the same value will sort in n^2 time on a vanilla 2-partition QuickSort because all of the values except the pivot location are placed on one side each time. Modern implementations address this by methods such as using a 3-partition sort. These methods execute on a dataset of all the same value in O(n) time. So using such an implementation means that an input with a small number of keys actually improves performance time and is no longer a concern.
2) Extremely bad pivot selection can cause worst case performance. In an ideal case, the pivot will always be such that 50% the data is smaller and 50% the data is larger, so that the input will be broken in half during each iteration. This gives us n comparisons and swaps times log-2(n) recursions for O(n*logn) time.
How much does non-ideal pivot selection affect execution time?
Let's consider a case where the pivot is consistently chosen such that 75% of the data is on one side of the pivot. It's still O(n*logn) but now the base of the log has changed to 1/0.75 or 1.33. The relationship in performance when changing base is always a constant represented by log(2)/log(newBase). In this case, that constant is 2.4. So this quality of pivot choice takes 2.4 times longer than the ideal.
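Spelled out, that constant comes from the change-of-base rule: log base 1/0.75 of n = log2(n) / log2(1/0.75) ≈ log2(n) / 0.415 ≈ 2.41 * log2(n), so a consistent 75/25 split costs about 2.4 times as many levels as a perfect 50/50 split.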
How fast does this get worse?
Not very fast until the pivot choice gets (consistently) very bad:
50% on one side: (ideal case)
75% on one side: 2.4 times as long
90% on one side: 6.6 times as long
95% on one side: 13.5 times as long
99% on one side: 69 times as long
As we approach 100% on one side the log portion of the execution approaches n and the whole execution asymptotically approaches O(n^2).
In a naive implementation of QuickSort, cases such as a sorted array (for 1st element pivot) or a reverse-sorted array (for last element pivot) will reliably produce a worst-case O(n^2) execution time. Additionally, implementations with a predictable pivot selection can be subjected to DoS attack by data that is designed to produce worst case execution. Modern implementations avoid this by a variety of methods, such as randomizing the data before sort, choosing the median of 3 randomly chosen indexes, etc. With this randomization in the mix, we have 2 cases:
Small data set. Worst case is reasonably possible but O(n^2) is not catastrophic because n is small enough that n^2 is also small.
Large data set. Worst case is possible in theory but not in practice.
How likely are we to see terrible performance?
The chances are vanishingly small. Let's consider a sort of 5,000 values:
Our hypothetical implementation will choose a pivot using a median of 3 randomly chosen indexes. We will consider pivots that are in the 25%-75% range to be "good" and pivots that are in the 0%-25% or 75%-100% range to be "bad". If you look at the probability distribution using the median of 3 random indexes, each recursion has an 11/16 chance of ending up with a good pivot. Let us make 2 conservative (and false) assumptions to simplify the math:
Good pivots are always exactly at a 25%/75% split and operate at 2.4*ideal case. We never get an ideal split or any split better than 25/75.
Bad pivots are always worst case and essentially contribute nothing to the solution.
Our QuickSort implementation will stop at n=10 and switch to an insertion sort, so we require 22 25%/75% pivot partitions to break the 5,000 value input down that far. (10*1.333333^22 > 5000) Or, we require 4990 worst case pivots. Keep in mind that if we accumulate 22 good pivots at any point then the sort will complete, so worst case or anything near it requires extremely bad luck. If it took us 88 recursions to actually achieve the 22 good pivots required to sort down to n=10, that would be 4*2.4*ideal case or about 10 times the execution time of the ideal case. How likely is it that we would not achieve the required 22 good pivots after 88 recursions?
Binomial probability distributions can answer that, and the answer is about 10^-18. (n is 88, k is 21, p is 0.6875) Your user is about a thousand times more likely to be struck by lightning in the 1 second it takes to click [SORT] than they are to see that 5,000 item sort run any worse than 10*ideal case. This chance gets smaller as the dataset gets larger. Here are some array sizes and their corresponding chances to run longer than 10*ideal:
Array of 640 items: 10^-13 (requires 15 good pivot points out of 60 tries)
Array of 5,000 items: 10^-18 (requires 22 good pivots out of 88 tries)
Array of 40,000 items:10^-23 (requires 29 good pivots out of 116)
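The binomial figure above is easy to sanity-check in a couple of lines of Python (using math.comb; the exact value comes out on the order of 10^-18):
from math import comb

def p_at_most(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p): the chance of k or fewer good pivots in n tries
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

print(p_at_most(21, 88, 11/16))   # probability of fewer than 22 good pivots in 88 recursions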
Remember that this is with 2 conservative assumptions that are worse than reality. So actual performance is better yet, and the balance of the remaining probability is closer to ideal than not.
Finally, as others have mentioned, even these absurdly unlikely cases can be eliminated by switching to a heap sort if the recursion stack goes too deep. So the TLDR is that, for good implementations of QuickSort, the worst case does not really exist because it has been engineered out and execution completes in O(n*logn) time.
This is a common question asked in interviews: despite merge sort's better worst-case performance, quicksort is considered better than merge sort, especially for large input. There are certain reasons due to which quicksort is better:
1- Auxiliary space: Quicksort is an in-place sorting algorithm. In-place sorting means no additional storage space is needed to perform the sort. Merge sort, on the other hand, requires a temporary array to merge the sorted arrays and hence is not in-place.
2- Worst case: The worst case of quicksort, O(n^2), can be avoided by using randomized quicksort. It can easily be avoided with high probability by choosing the right pivot. Obtaining average-case behavior by choosing the right pivot element improves the performance and makes it as efficient as merge sort.
3- Locality of reference: Quicksort in particular exhibits good cache locality and this makes it faster than merge sort in many cases, such as in a virtual-memory environment.
4- Tail recursion: Quicksort is tail recursive while merge sort is not. A tail recursive function is a function where the recursive call is the last thing executed by the function. Tail recursive functions are considered better than non tail recursive functions as tail recursion can be optimized by the compiler.
Quicksort is the fastest sorting algorithm in practice but has a number of pathological cases that can make it perform as badly as O(n^2).
Heapsort is guaranteed to run in O(n*ln(n)) and requires only finite additional storage. But there are many citations of real world tests which show that heapsort is significantly slower than quicksort on average.
Quicksort is NOT better than mergesort. With O(n^2) (a worst case that rarely happens), quicksort is potentially far slower than the O(nlogn) of merge sort. Quicksort has less overhead, so with small n and slow computers it is better. But computers are so fast today that the additional overhead of a mergesort is negligible, and the risk of a very slow quicksort far outweighs the insignificant overhead of a mergesort in most cases.
In addition, a mergesort leaves items with identical keys in their original order, a useful attribute.
Wikipedia's explanation is:
Typically, quicksort is significantly faster in practice than other Θ(nlogn) algorithms, because its inner loop can be efficiently implemented on most architectures, and in most real-world data it is possible to make design choices which minimize the probability of requiring quadratic time.
Quicksort
Mergesort
I think there are also issues with the amount of storage needed for Mergesort (which is Ω(n)) that quicksort implementations don't have. In the worst case, they are the same amount of algorithmic time, but mergesort requires more storage.
Why Quicksort is good?
QuickSort takes N^2 in the worst case and NlogN in the average case. The worst case occurs when the data is already sorted.
This can be mitigated by a random shuffle before the sort is started.
QuickSort doesn't take the extra memory that is taken by merge sort.
If the dataset is large and there are identical items, the complexity of Quicksort reduces by using a 3-way partition (see the sketch below). The more identical items, the better the sort. If all items are identical, it sorts in linear time. [This is the default implementation in most libraries]
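For reference, the 3-way partition mentioned above is usually a Dutch-national-flag style pass; here is a minimal Python sketch (pivoting on the first element, which a real implementation would randomize):
def partition3(a, lo, hi):
    pivot, lt, i, gt = a[lo], lo, lo, hi
    while i <= gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]; lt += 1; i += 1
        elif a[i] > pivot:
            a[i], a[gt] = a[gt], a[i]; gt -= 1
        else:
            i += 1                      # equal to the pivot: leave it in the middle band
    return lt, gt                       # a[lt..gt] all equal the pivot; recurse only outside it
If all items are equal, lt stays at lo and gt stays at hi, so there is nothing left to recurse on and the whole sort is a single linear pass.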
Is Quicksort always better than Mergesort?
Not really.
Mergesort is stable but Quicksort is not. So if you need stability in output, you would use Mergesort. Stability is required in many practical applications.
Memory is cheap nowadays. So if extra memory used by Mergesort is not critical to your application, there is no harm in using Mergesort.
Note: In Java, the Arrays.sort() function uses Quicksort for primitive data types and Mergesort for object data types. Because objects already carry memory overhead, the little extra overhead added for Mergesort may not be an issue from a performance point of view.
Reference: Watch the QuickSort videos of Week 3, Princeton Algorithms Course at Coursera
Unlike Merge Sort, Quick Sort doesn't use auxiliary space, whereas Merge Sort uses O(n) auxiliary space.
But Merge Sort has a worst-case time complexity of O(nlogn), whereas the worst-case complexity of Quick Sort is O(n^2), which happens when the array is already sorted.
The answer would tilt slightly towards quicksort with respect to the changes brought by DualPivotQuickSort for primitive values. It is used in Java 7 to sort in java.util.Arrays.
It is proved that for the Dual-Pivot Quicksort the average number of comparisons is 2*n*ln(n), the average number of swaps is 0.8*n*ln(n), whereas classical Quicksort algorithm has 2*n*ln(n) and 1*n*ln(n) respectively. Full mathematical proof see in attached proof.txt and proof_add.txt files. Theoretical results are also confirmed by experimental counting of the operations.
You can find the Java 7 implementation here - http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/java/util/Arrays.java
Further Awesome Reading on DualPivotQuickSort - http://permalink.gmane.org/gmane.comp.java.openjdk.core-libs.devel/2628
In merge-sort, the general algorithm is:
Sort the left sub-array
Sort the right sub-array
Merge the 2 sorted sub-arrays
At the top level, merging the 2 sorted sub-arrays involves dealing with N elements.
One level below that, each iteration of step 3 involves dealing with N/2 elements, but you have to repeat this process twice. So you're still dealing with 2 * N/2 == N elements.
One level below that, you're merging 4 * N/4 == N elements, and so on. Every depth in the recursive stack involves merging the same number of elements, across all calls for that depth.
Consider the quick-sort algorithm instead:
Pick a pivot point
Place the pivot point at the correct place in the array, with all smaller elements to the left, and larger elements to the right
Sort the left-subarray
Sort the right-subarray
At the top level, you're dealing with an array of size N. You then pick one pivot point, put it in its correct position, and can then ignore it completely for the rest of the algorithm.
One level below that, you're dealing with 2 sub-arrays that have a combined size of N-1 (ie, subtract the earlier pivot point). You pick a pivot point for each sub-array, which comes up to 2 additional pivot points.
One level below that, you're dealing with 4 sub-arrays with combined size N-3, for the same reasons as above.
Then N-7... Then N-15... Then N-31...
The depth of your recursive stack remains approximately the same (logN). With merge-sort, you're always dealing with an N-element merge across each level of the recursive stack. With quick-sort though, the number of elements that you're dealing with diminishes as you go down the stack. For example, if you look at the depth midway through the recursive stack, the number of elements you're dealing with is N - 2^((logN)/2) == N - sqrt(N).
Disclaimer: On merge-sort, because you divide the array into 2 exactly equal chunks each time, the recursive depth is exactly logN. On quick-sort, because your pivot point is unlikely to be exactly in the middle of the array, the depth of your recursive stack may be slightly greater than logN. I haven't done the math to see how big a role this factor and the factor described above, actually play in the algorithm's complexity.
This is a pretty old question, but since I've dealt with both recently here are my 2c:
Merge sort needs on average ~ N log N comparisons. For already (almost) sorted arrays this gets down to 1/2 N log N, since while merging we (almost) always select the "left" part 1/2 N times and then just copy the right 1/2 N elements. Additionally I can speculate that already sorted input makes the processor's branch predictor shine by guessing almost all branches correctly, thus preventing pipeline stalls.
Quicksort on average requires ~ 1.38 N log N comparisons. It does not benefit greatly from an already sorted array in terms of comparisons (however it does in terms of swaps and probably in terms of branch predictions inside the CPU).
My benchmarks on fairly modern processor shows the following:
When comparison function is a callback function (like in qsort() libc implementation) quicksort is slower than mergesort by 15% on random input and 30% for already sorted array for 64 bit integers.
On the other hand if comparison is not a callback, my experience is that quicksort outperforms mergesort by up to 25%.
However if your (large) array has a very few unique values, merge sort starts gaining over quicksort in any case.
So maybe the bottom line is: if comparison is expensive (e.g. a callback function, comparing strings, comparing many parts of a structure, mostly getting to the second, third, or fourth "if" to tell the difference), the chances are that you will be better off with merge sort. For simpler tasks quicksort will be faster.
That said, everything said previously is true:
- Quicksort can be N^2, but Sedgewick claims that a good randomized implementation has more chance of the computer performing the sort being struck by lightning than of going N^2
- Mergesort requires extra space
Quicksort has a better average case complexity but in some applications it is the wrong choice. Quicksort is vulnerable to denial of service attacks. If an attacker can choose the input to be sorted, he can easily construct a set that takes the worst case time complexity of o(n^2).
Mergesort's average case complexity and worst case complexity are the same, and as such doesn't suffer the same problem. This property of merge-sort also makes it the superior choice for real-time systems - precisely because there aren't pathological cases that cause it to run much, much slower.
I'm a bigger fan of Mergesort than I am of Quicksort, for these reasons.
That's hard to say. The worst case of MergeSort is n(log2 n) - n + 1, which is exact if n equals 2^k (I have already proved this). And for any n, it's between (n lg n - n + 1) and (n lg n + n + O(lg n)). But for QuickSort, its best case is n log2 n (also when n equals 2^k). If you divide MergeSort's worst case by QuickSort's best case, the ratio approaches one as n goes to infinity. So it's as if the worst case of MergeSort is better than the best case of QuickSort; why do we use quicksort? But remember, MergeSort is not in place; it requires 2n memory space. And MergeSort also needs to do many array copies, which we don't include in the analysis of the algorithm. In a word, MergeSort really is faster than quicksort in theory, but in reality you need to consider memory space and the cost of array copying; merging is slower than quicksort. I once made an experiment where I was given 1,000,000 digits in Java by the Random class, and it took 2610 ms with mergesort and 1370 ms with quicksort.
Quick sort has a worst case of O(n^2), but the average case consistently outperforms merge sort. Both algorithms are O(n log n) on average, but you need to remember that when talking about Big O we leave off the constant factors. Quick sort has significant improvements over merge sort when it comes to those constant factors.
Merge sort also requires about 2n memory in total (the array plus an n-sized merge buffer), while quick sort can be done in place (needing only the n cells of the array itself plus a small recursion stack). This is another reason that quick sort is generally preferred over merge sort.
Extra info:
The worst case of quick sort occurs when the pivot is poorly chosen. Consider the following example:
[5, 4, 3, 2, 1]
If the pivot is chosen as the smallest or largest number in the group then quick sort will run in O(n^2). The probability of choosing an element in the largest or smallest 25% of the list is 0.5, which means there is also a 0.5 chance of getting a good pivot (one from the middle 50%). If we employ a typical pivot-choosing strategy (say, choosing a random element), we have a 0.5 chance of a good pivot on every choice. For collections of a large size, the probability of always choosing a poor pivot is about 0.5^n, which is vanishingly small. Based on this probability, quick sort is efficient for the average (and typical) case.
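Here is a small C sketch (an illustration of my own, assuming the naive "last element as pivot" Lomuto partition) that counts comparisons: on already sorted input every partition is maximally lopsided, so the count grows roughly as n^2/2 instead of n log n.

#include <stdio.h>
#include <stdlib.h>

static long long comparisons;          /* counts the a[j] < pivot tests */

/* Naive quicksort: always picks the last element as the pivot.  On
 * sorted (or reverse-sorted) input every partition is maximally
 * lopsided, so the comparison count grows roughly as n^2 / 2 and the
 * recursion goes about n levels deep (hence the modest N below). */
static void naive_quicksort(int *a, long lo, long hi)
{
    if (lo >= hi) return;
    int pivot = a[hi];
    long i = lo;
    for (long j = lo; j < hi; j++) {
        comparisons++;
        if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    naive_quicksort(a, lo, i - 1);
    naive_quicksort(a, i + 1, hi);
}

int main(void)
{
    enum { N = 5000 };
    int *sorted = malloc(N * sizeof *sorted);
    int *rnd    = malloc(N * sizeof *rnd);
    for (long i = 0; i < N; i++) { sorted[i] = (int)i; rnd[i] = rand(); }

    comparisons = 0;
    naive_quicksort(rnd, 0, N - 1);
    printf("random input: %lld comparisons (~ n log n)\n", comparisons);

    comparisons = 0;
    naive_quicksort(sorted, 0, N - 1);
    printf("sorted input: %lld comparisons (~ n^2 / 2)\n", comparisons);

    free(sorted); free(rnd);
    return 0;
}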
When I experimented with both sorting algorithms, counting the number of recursive calls,
quicksort consistently made fewer recursive calls than mergesort.
That is because quicksort has pivots, and the pivot is not included in the next recursive calls, so quicksort can reach the recursive base case more quickly than mergesort.
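A rough way to check this yourself is to instrument both sorts with call counters, as in the C sketch below (my own illustration: naive last-element pivot, counting every call including the trivial base cases). On random input the quicksort counter typically comes out lower, because each partition removes the pivot from further work, while merge sort's recursion tree always has about 2n - 1 nodes.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static long quick_calls, merge_calls;   /* recursive-call counters */

static void quick(int *a, long lo, long hi)
{
    quick_calls++;
    if (lo >= hi) return;
    int p = a[hi];
    long i = lo;
    for (long j = lo; j < hi; j++)
        if (a[j] < p) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    quick(a, lo, i - 1);                /* pivot a[i] is excluded from ... */
    quick(a, i + 1, hi);                /* ... both recursive calls        */
}

static void merge_sort(int *a, int *tmp, long lo, long hi)
{
    merge_calls++;
    if (hi - lo < 1) return;
    long mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);        /* every element appears in ...   */
    merge_sort(a, tmp, mid + 1, hi);    /* ... exactly one of the halves  */
    long i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo + 1) * sizeof *a);
}

int main(void)
{
    enum { N = 100000 };
    int *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b);
    int *tmp = malloc(N * sizeof *tmp);
    for (long i = 0; i < N; i++) a[i] = rand();
    memcpy(b, a, N * sizeof *a);

    quick(a, 0, N - 1);
    merge_sort(b, tmp, 0, N - 1);
    printf("quicksort calls: %ld, mergesort calls: %ld\n",
           quick_calls, merge_calls);

    free(a); free(b); free(tmp);
    return 0;
}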
While they're both in the same complexity class, that doesn't mean they both have the same runtime. Quicksort is usually faster than mergesort, just because it's easier to code a tight implementation and the operations it does can go faster. It's because quicksort is generally faster that people use it instead of mergesort.
However! I personally will often use mergesort, or a quicksort variant that degrades to mergesort when quicksort does poorly. Remember: quicksort is only O(n log n) on average; its worst case is O(n^2)! Mergesort is always O(n log n). In cases where real-time performance or responsiveness is a must and your input data could be coming from a malicious source, you should not use plain quicksort.
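One way to build such a "degrading" variant is sketched below (my own illustration, in the spirit of introsort, which in real libraries falls back to heapsort rather than mergesort): track the recursion depth and, once it exceeds a budget of roughly 2*log2(n), hand the current sub-array over to merge sort, which caps the worst case at O(n log n).

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Plain top-down merge sort used as the fallback (needs a temp buffer). */
static void merge_sort(int *a, int *tmp, long lo, long hi)
{
    if (hi - lo < 1) return;
    long mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);
    merge_sort(a, tmp, mid + 1, hi);
    long i = lo, j = mid + 1, k = lo;
    while (i <= mid && j <= hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo + 1) * sizeof *a);
}

/* Quicksort that gives up when the recursion gets suspiciously deep
 * (a sign of repeatedly bad pivots) and lets merge sort finish that
 * sub-array, so the overall worst case stays O(n log n). */
static void safe_quicksort(int *a, int *tmp, long lo, long hi, int depth_left)
{
    if (lo >= hi) return;
    if (depth_left == 0) { merge_sort(a, tmp, lo, hi); return; }
    int pivot = a[hi];
    long i = lo;
    for (long j = lo; j < hi; j++)
        if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    safe_quicksort(a, tmp, lo, i - 1, depth_left - 1);
    safe_quicksort(a, tmp, i + 1, hi, depth_left - 1);
}

int main(void)
{
    enum { N = 100000 };
    int *a = malloc(N * sizeof *a), *tmp = malloc(N * sizeof *tmp);
    for (long i = 0; i < N; i++) a[i] = (int)i;   /* adversarial: sorted */

    int depth_limit = 2;
    for (long n = N; n > 1; n >>= 1) depth_limit += 2;   /* ~ 2 * log2(N) */

    safe_quicksort(a, tmp, 0, N - 1, depth_limit);
    printf("first=%d last=%d\n", a[0], a[N - 1]);
    free(a); free(tmp);
    return 0;
}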
All things being equal, I'd expect most people to use whatever is most conveniently available, and that tends to be qsort(3). Other than that quicksort is known to be very fast on arrays, just like mergesort is the common choice for lists.
What I'm wondering is why it's so rare to see radix or bucket sort. They're O(n), at least on linked lists, and all it takes is some method of converting the key to an ordinal number (strings and floats work just fine).
I'm thinking the reason has to do with how computer science is taught. I even had to demonstrate to my lecturer in Algorithm analysis that it was indeed possible to sort faster than O(n log(n)). (He had the proof that you can't comparison sort faster than O(n log(n)), which is true.)
In other news, floats can be sorted as integers, but you have to turn the negative numbers around afterwards.
Edit:
Actually, here's an even more vicious way to sort floats-as-integers: http://www.stereopsis.com/radix.html. Note that the bit-flipping trick can be used regardless of what sorting algorithm you actually use...
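For reference, the bit-flipping idea can be sketched roughly like this in C (my own paraphrase, assuming 32-bit IEEE-754 floats and no NaNs; see the linked page for the exact variant used there): map each float to an unsigned key whose integer order matches the float order, run any integer sort on the keys, and map back.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Map an IEEE-754 float to a uint32_t key whose unsigned order matches
 * the float order (assuming no NaNs): negative floats have all bits
 * inverted, non-negative floats just get the sign bit set. */
static uint32_t float_to_key(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);                 /* raw bit pattern, no UB */
    return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
}

/* Inverse mapping, to recover the floats after sorting the keys. */
static float key_to_float(uint32_t k)
{
    uint32_t u = (k & 0x80000000u) ? (k ^ 0x80000000u) : ~k;
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}

int main(void)
{
    float v[] = { -1e9f, -3.5f, -0.0f, 0.0f, 7e-3f, 1.25f };
    /* keys compare (as unsigned integers) in the same order as the floats */
    for (size_t i = 0; i < sizeof v / sizeof v[0]; i++)
        printf("%g -> %08" PRIx32 "\n", v[i], float_to_key(v[i]));
    printf("round trip of -3.5: %g\n", key_to_float(float_to_key(-3.5f)));
    return 0;
}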
Small additions to quick vs merge sorts.
It can also depend on the kind of items being sorted. If accessing items, swapping them, and comparing them are not simple operations, like comparing integers sitting in plain memory, then merge sort can be the preferable algorithm.
For example, when we sort items over a network protocol on a remote server.
Also, in custom containers like a "linked list", there is little benefit to quick sort:
1. Merge sort on a linked list doesn't need additional memory (see the sketch after this list).
2. Access to elements in quick sort is not sequential (in memory).
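Here is a rough C sketch of merge sort on a singly linked list (my own illustration): both the split and the merge are done purely by re-linking nodes, so no O(n) buffer is needed, only the O(log n) recursion.

#include <stdio.h>
#include <stdlib.h>

struct node { int value; struct node *next; };

/* Merge two sorted lists by re-linking nodes -- no extra array needed. */
static struct node *merge(struct node *a, struct node *b)
{
    struct node head = { 0, NULL }, *tail = &head;
    while (a && b) {
        if (a->value <= b->value) { tail->next = a; a = a->next; }
        else                      { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = a ? a : b;
    return head.next;
}

/* Split the list with a slow/fast pointer pair, sort the halves
 * recursively, then merge them. */
static struct node *merge_sort(struct node *list)
{
    if (!list || !list->next) return list;
    struct node *slow = list, *fast = list->next;
    while (fast && fast->next) { slow = slow->next; fast = fast->next->next; }
    struct node *second = slow->next;
    slow->next = NULL;                        /* cut the list in half */
    return merge(merge_sort(list), merge_sort(second));
}

int main(void)
{
    /* build a small unsorted list: 4 -> 1 -> 3 -> 2 */
    int vals[] = { 4, 1, 3, 2 };
    struct node *head = NULL;
    for (int i = 3; i >= 0; i--) {
        struct node *n = malloc(sizeof *n);
        n->value = vals[i]; n->next = head; head = n;
    }
    head = merge_sort(head);
    for (struct node *p = head; p; p = p->next) printf("%d ", p->value);
    printf("\n");
    return 0;
}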
Quick sort is an in-place sorting algorithm, so it's better suited for arrays. Merge sort, on the other hand, requires extra storage of O(N) and is more suitable for linked lists.
Unlike arrays, in a linked list we can insert items in the middle with O(1) space and O(1) time, so the merge operation of merge sort can be implemented without any extra space. However, allocating and de-allocating extra space for arrays has an adverse effect on the running time of merge sort. Merge sort also favors linked lists because data is accessed sequentially, without much random memory access.
Quick sort, on the other hand, requires a lot of random memory access, and with an array we can access memory directly without the traversal required by linked lists. Also, quick sort, when used on arrays, has good locality of reference, since arrays are stored contiguously in memory.
Even though both sorting algorithms' average complexity is O(N log N), for ordinary tasks people usually use an array for storage, and for that reason quick sort should be the algorithm of choice.
EDIT: I just found out that merge sort's worst/best/average case is always n log n, but quick sort can vary from n^2 (worst case, e.g. when the elements are already sorted and a naive first/last-element pivot is used) to n log n (average/best case, when the pivot divides the array into two halves).
Consider both time and space complexity.
For merge sort:
Time complexity: O(n log n),
Space complexity: O(n) extra
For quick sort:
Time complexity: O(n^2) in the worst case (O(n log n) on average),
Space complexity: in place, with only the recursion stack as overhead
Now, they each win in one scenario.
But by using a random pivot you can almost always keep quick sort's time complexity at O(n log n).
Thus, quick sort is preferred over merge sort in many applications.
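The random-pivot idea amounts to one extra swap before partitioning, as in this minimal C sketch (Lomuto partition, purely illustrative): no fixed input ordering can then reliably force the quadratic case, because an adversary would have to predict the random choices.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void swap(int *a, long i, long j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

/* Quicksort with a randomly chosen pivot: swap a random element into the
 * last position, then partition as usual.  With this choice, bad splits
 * can still happen, but no particular input ordering triggers them
 * consistently, so the expected running time stays O(n log n). */
static void randomized_quicksort(int *a, long lo, long hi)
{
    if (lo >= hi) return;
    swap(a, lo + rand() % (hi - lo + 1), hi);   /* random pivot to the end */
    int pivot = a[hi];
    long i = lo;
    for (long j = lo; j < hi; j++)
        if (a[j] < pivot) swap(a, i++, j);
    swap(a, i, hi);
    randomized_quicksort(a, lo, i - 1);
    randomized_quicksort(a, i + 1, hi);
}

int main(void)
{
    srand((unsigned)time(NULL));
    enum { N = 100000 };
    int *a = malloc(N * sizeof *a);
    for (long i = 0; i < N; i++) a[i] = (int)i;   /* already sorted input */
    randomized_quicksort(a, 0, N - 1);            /* still ~ n log n here */
    printf("first=%d last=%d\n", a[0], a[N - 1]);
    free(a);
    return 0;
}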
In C/C++ land, when not using STL containers, I tend to use quicksort, because it is built into the runtime, while mergesort is not.
So I believe that in many cases, it is simply the path of least resistance.
In addition, performance can be much higher with quick sort in cases where the entire dataset does not fit into the working set.
One of the reasons is more philosophical. Quicksort follows a top-down philosophy. With n elements to sort, there are n! possible orderings. With two mutually exclusive partitions of m and n-m elements, the number of possibilities goes down by several orders of magnitude: m! * (n-m)! is smaller than n! by several orders. Imagine 5! vs 3! * 2!: 5! has 10 times more possibilities than the two partitions of 2 and 3 elements each, and extrapolate that to 1 million factorial vs 900K! * 100K!. So instead of worrying about establishing any order within a range or a partition, quicksort just establishes order at a broader level between partitions and reduces the possibilities within each partition. Any order established earlier within a range would be disturbed later if the partitions themselves were not mutually exclusive.
A bottom-up approach like merge sort or heap sort is like a worker's or employee's approach, where one starts comparing at a microscopic level early on. But that local order is bound to be disturbed as soon as an element belonging between two already-ordered items turns up later. These approaches are very stable and extremely predictable, but they do a certain amount of extra work.
Quicksort is more like the managerial approach, where one is not initially concerned with any detailed order, only with meeting a broad criterion, with no regard for order within the partitions. Then the partitions are narrowed until you get a sorted set. The real challenge in quicksort is finding a partition criterion in the dark, when you know nothing about the elements to sort. That is why we either spend some effort to find a median value, or pick one at random, or take some arbitrary "managerial" approach. Finding a perfect median can take a significant amount of effort and leads back to a laborious bottom-up approach. So quicksort says: just pick a random pivot and hope it lands somewhere near the middle, or do some work to find the median of 3, 5 or a few more elements for a better estimate, but do not plan to be perfect and don't waste any time on initial ordering. That tends to work out well if you are lucky, and sometimes degrades to n^2 when you don't get a good median and just take a chance. Anyway, the data is random, right?
So I agree more with the top-down logical approach of quicksort, and it turns out that the gamble it takes on pivot selection, together with the comparisons it saves early on, seems to work out better more often than any meticulous, thorough, stable bottom-up approach like merge sort.

Resources