For example, with HashSet, I know that getting one known element is usually O(1), but I want to know the time complexity of getting all elements (without knowing them in advance, i.e. iterating over the whole set).
I can't find this information anywhere in the standard library's documentation. I have also looked at SwissTable, without success.
Is it even measurable? Where can I find it?
TL;DR:
BTreeSet: O(N)
HashSet: O(capacity)
BTreeSet
The B-Tree data-structure is a Tree of Arrays of K elements, for some value of K.
The depth of the Tree is O(log N), and nodes are merged together when their arrays are not full enough. For our case, we can use the rule that a node is necessarily at least half-full, although any constant works.
In general, iteration is done from smallest to largest, which is an in-order traversal. This implies that moving from one element to the next is not strictly O(1): for example, moving from the right-most element of the left sub-tree back up to the root takes O(log N) steps.
It can be shown that the amortized complexity is O(1), and this leads to O(N) overall traversal complexity.
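To make that concrete, here is a minimal Rust sketch of my own (not from the documentation): iterating a BTreeSet visits the elements in ascending order, and the whole loop costs O(N) even though an individual step near the root can cost O(log N).

use std::collections::BTreeSet;

fn main() {
    // B-tree nodes keep their keys sorted, so iteration is an in-order walk.
    let set = BTreeSet::from([5, 1, 4, 2, 3]);

    // Prints 1 2 3 4 5. The whole loop is O(N), even though a single step
    // from one element to the next can occasionally cost O(log N).
    for x in &set {
        print!("{} ", x);
    }
    println!();
}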
HashSet
There is no general iteration complexity for hash maps, or hash sets; it varies by implementation.
The implementation in Rust is essentially an open-addressing hash table. This means a very large array of K elements (K = capacity), more or less sparsely populated.
As with most open-addressing hash tables, there is no shortcut for iteration. Instead, each slot of the array is checked in turn.
The iteration time is thus proportional to the capacity, regardless of the number of elements. On a sparsely populated hash-table, that's quite expensive.
Note: the Swiss table is a variation on open addressing; this does not affect the fundamental complexity of the various operations.
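As a rough illustration of the consequence (a sketch of mine; exact timings vary by machine and standard-library version), a HashSet holding ten elements but created with a huge reserved capacity still has to walk the whole table when you iterate over it:

use std::collections::HashSet;
use std::time::Instant;

fn main() {
    // Two sets with the same ten elements, but very different capacities.
    let small: HashSet<u64> = (0..10).collect();

    let mut sparse: HashSet<u64> = HashSet::with_capacity(10_000_000);
    sparse.extend(0..10);

    // Both loops yield ten elements, but the second must scan (roughly)
    // the whole 10-million-slot table to find them.
    let t = Instant::now();
    let s1: u64 = small.iter().sum();
    println!("small:  sum={} in {:?}", s1, t.elapsed());

    let t = Instant::now();
    let s2: u64 = sparse.iter().sum();
    println!("sparse: sum={} in {:?}", s2, t.elapsed());
}

If a long-lived set has been mostly drained, calling shrink_to_fit() before iterating brings the cost back down to the number of remaining elements.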
If I understood your question, you're asking how much time it takes to visit every item in a collection in no particular order. For any collection of n items, the best case is Omega(n) because you can't retrieve an item in less than one operation. Conversely, as long as you can retrieve the next item in a collection in a constant (or constant on average) number of operations, the worst case is O(n).
In principle, it's possible to do much worse than O(n) if you really try. For example, you could iterate over a HashMap containing n items by trying each of m > n keys, so that the complexity would be O(m) instead of O(n).
If you're really worried that iteration for a particular collection was implemented naively, for now it seems like the only way to know is to go digging through the source code. Following the bread-crumbs in HashMap, for example, eventually leads to this method which is used to iterate over the contents of this struct, but it's a bit difficult to interpret if (like me) you aren't really familiar with all of the implementation details.
Currently, our implementation simply performs naive linear search.
This provides excellent performance on small nodes of elements which
are cheap to compare. However in the future we would like to further
explore choosing the optimal search strategy based on the choice of B,
and possibly other factors. Using linear search, searching for a
random element is expected to take O(B * log(n)) comparisons, which is
generally worse than a BST. In practice, however, performance is
excellent.
Source: BTreeMap referenced from here.
From this reference, I'd assume that HashSet is more or less equal to HashMap:
The default hashing algorithm is currently SipHash 1-3, though this is
subject to change at any point in the future. While its performance is
very competitive for medium sized keys, other hashing algorithms will
outperform it for small keys such as integers as well as large keys
such as long strings, though those algorithms will typically not
protect against attacks such as HashDoS.
Source: HashMap
Since this doesn't state anything specific, I'd assume that O(1) should apply most of the time. This thread has (although for Java) some very good answers.
In very simple words: you determine the complexity of an algorithm by analysing its source code. For a two-dimensional array, the runtime (without doing anything in the inner loop) would be n², because you have two loops each running n times:
for (int i = 0; i < arr.length; i++)
{
    for (int j = 0; j < arr[0].length; j++)
    {
        // do something
    }
}
For further reference, you may check out the Wikipedia article on Big O notation.
I am curious to know which algorithm is better:
Algorithm with O(n log n) time and O(1) space complexity
Algorithm with O(n) time and O(n) space complexity
Most algorithms that run in O(n log n) time and constant space can also be solved in O(n) time by paying a penalty in terms of space. Which algorithm is better?
How do I decide between these two parameters ?
Example: Array Pair Sum
Can be solved in O(n log n) time by sorting
Can be solved using hash maps in O(n) time but with O(n) space
Without actually testing anything (a risky move!), I'm going to claim that the O(n log n)-time, O(1)-space algorithm is probably faster than the O(n)-time, O(n)-space algorithm, but is still probably not the optimal algorithm.
First, let's talk about this from a high-level perspective that ignores the particular details of the algorithms you're describing. One detail to keep in mind is that although O(n)-time algorithms are asymptotically faster than O(n log n)-time algorithms, they're only faster by a logarithmic factor. Keeping in mind that the number of atoms in the universe is about 10^80 (thanks, physics!), the base-2 log of the number of atoms in the universe is only about 266. From a practical perspective, this means that you can think of that extra O(log n) factor as just a constant. Consequently, to determine whether an O(n log n) algorithm will be faster or slower than an O(n) algorithm on a particular input, you'd need to know more about what constants are hidden by the big-O notation. An algorithm that runs in time 600n will be slower than an algorithm that runs in time 2n log n for any n that fits in the universe, for example. Therefore, in terms of wall-clock performance, to evaluate which algorithm is faster, you'd probably need to do a bit of profiling to see which one wins.
Then there are the effects of caching and locality of reference. Computer memory has a huge number of caches in it that are optimized for the case where reads and writes are located next to one another. The cost of a cache miss can be huge - hundreds or thousands of times slower than a hit - so you want to try to minimize this. If an algorithm uses O(n) memory, then as n gets larger, you need to start worrying about how closely packed your memory accesses will be. If they're spread out, then the cost of the cache misses might start to add up pretty quickly, significantly driving up the coefficient hidden in the big-O notation of the time complexity. If they're more sequential, then you probably don't need to worry too much about this.
You also need to be careful about total memory available. If you have 8GB of RAM on your system and get an array with one billion 32-bit integers, then if you need O(n) auxiliary space with even a reasonable constant, you're not going to be able to fit your auxiliary memory into main memory and it will start getting paged out by the OS, really killing your runtime.
Finally, there's the issue of randomness. Algorithms based on hashing have expected fast runtimes, but if you get a bad hash function, there's a chance that the algorithm will slow down. Generating good random bits is hard, so most hash tables just go for "reasonably good" hash functions, risking worst-case inputs that will make the algorithm's performance degenerate.
So how do these concerns actually play out in practice? Well, let's look at the algorithms. The O(n)-time, O(n)-space algorithm works by building a hash table of all the elements in the array so that you can easily check whether a given element is present in the array, then scanning over the array and seeing whether there is a pair that sums up to the total. Let's think about how this algorithm works given the factors above.
The memory usage is O(n) and, due to how hashing works, the accesses to the hash table are not likely to be sequential (an ideal hash table would have pretty much random access patterns). This means that you're going to have a lot of cache misses.
The high memory usage means that for large inputs, you have to worry about memory getting paged in and out, exacerbating the above problem.
As a result of the above two factors, the constant term hidden in the O(n) runtime is likely much higher than it looks.
Hashing is not worst-case efficient, so there may be inputs that cause performance to significantly degrade.
Now, think about the O(n log n)-time, O(1) space algorithm, which works by doing an in-place array sort (say, heapsort), then walking inwards from the left and right and seeing if you can find a pair that sums to the target. The second step in this process has excellent locality of reference - virtually all array accesses are adjacent - and pretty much all of the cache misses you're going to get are going to be in the sorting step. This will increase the constant factor hidden in the big-O notation. However, the algorithm has no degenerate inputs and its low memory footprint probably means that the locality of reference will be better than the hash table approach. Therefore, if I had to guess, I'd put my money on this algorithm.
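For concreteness, here is a minimal Rust sketch of that sort-then-scan approach (my own illustration, not code from the question): sort the slice in place, then walk inwards from both ends looking for a pair that hits the target.

/// Returns a pair of values from `arr` summing to `target`, if one exists.
/// Sorts the slice in place (O(n log n) time, O(log n) stack space), then
/// scans inwards from both ends.
fn pair_sum(arr: &mut [i64], target: i64) -> Option<(i64, i64)> {
    arr.sort_unstable(); // in-place, no heap allocation
    let (mut l, mut r) = (0, arr.len().checked_sub(1)?);
    while l < r {
        let s = arr[l] + arr[r];
        if s == target {
            return Some((arr[l], arr[r]));
        } else if s < target {
            l += 1; // sum too small: move the left pointer up
        } else {
            r -= 1; // sum too large: move the right pointer down
        }
    }
    None
}

fn main() {
    let mut v = [1, -4, 2, -5, 3, -6];
    println!("{:?}", pair_sum(&mut v, -2)); // Some((-5, 3))
}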
... Well, actually, I'd put my money on a third algorithm: an O(n log n)-time, O(log n)-space algorithm that's basically the above algorithm, but using introsort instead of heapsort. Introsort is an O(n log n)-time, O(log n)-space algorithm that uses randomized quicksort to mostly sort the array, switching to heapsort if the quicksort looks like it's about to degenerate, and doing a final insertion sort pass to clean everything up. Quicksort has amazing locality of reference - this is why it's so fast - and insertion sort is faster on small inputs, so this is an excellent compromise. Plus, O(log n) extra memory is basically nothing - remember, in practice, log n is at most around 266. This algorithm has about the best locality of reference that you can get, giving a very low constant factor hidden by the O(n log n) term, so it would probably outperform the other algorithms in practice.
Of course, I've got to qualify that answer as well. The analysis I did above assumes that we're talking about pretty large inputs to the algorithm. If you're only ever looking at small inputs, then this whole analysis goes out the window because the effects I was taking into account won't start to show up. In that case, the best option would just be to profile the approaches and see what works best. From there, you might be able to build a "hybrid" approach where you use one algorithm for inputs in one size range and a different algorithm for inputs in a different size range. Chances are that this would give an approach that beats any single one of the approaches.
That said, to paraphrase Don Knuth, "beware of the above analysis - I have merely proved it correct, not actually tried it." The best option would be to profile everything and see how it works. The reason I didn't do this was to go through the analysis of what factors to keep an eye out for and to highlight the weakness of a pure big-O analysis comparing the two algorithms. I hope that the practice bears this out! If not, I'd love to see where I got it wrong. :-)
From experience:
If you absolutely can't afford the space, head the O(1) space route.
When random access is unavoidable, head the O(n) space route. (It's usually simpler and has a smaller time constant.)
When random access is slow (e.g. seek times), head the O(1) space route. (You can usually figure out a way to be cache coherent.)
Otherwise, random access is fast -- head the O(n) space route. (It's usually simpler with a smaller time constant.)
Note that usually random access is "fast" if the problem fits within memory that's faster than the bottleneck storage. (e.g. if disks are the bottleneck, main memory is fast enough for random access --- if main memory is the bottleneck, CPU cache is fast enough for random access)
Using your specific algorithm example, Array Pair Sum, the hash version (O(n) time with O(n) space) will be faster. Here's a little JavaScript benchmark you can play with: http://jsfiddle.net/bbxb0bt4/1/
I used two different sorting algorithms, quicksort and radix sort, in the benchmark. Radix sort in this instance (an array of 32-bit integers) is the ideal sorting algorithm, and even it can barely compete with the single-pass hash version.
If you want some generalized opinion, with regards to programming:
using the O(N) time with O(N) space algorithm is preferred because the implementation will be simpler, which means it will be easier to maintain and debug.
function apsHash(arr, x) {
    var hash = new Set();
    for (var i = 0; i < arr.length; i++) {
        if (hash.has(x - arr[i])) {
            return [arr[i], x - arr[i]];
        }
        hash.add(arr[i]);
    }
    return [NaN, NaN];
}

function apsSortQS(arr, x) {
    // The original benchmark used a hand-rolled in-place quicksort (quickSortIP);
    // a numeric Array.prototype.sort stands in for it here.
    arr.sort(function(a, b) { return a - b; });
    var l = 0;
    var r = arr.length - 1;
    while (l < r) {
        if (arr[l] + arr[r] === x) {
            return [arr[l], arr[r]];
        } else if (arr[l] + arr[r] < x) {
            l++;
        } else {
            r--;
        }
    }
    return [NaN, NaN];
}
To compare two algorithms, first it should be quite clear what we are comparing them for.
If our priority is space, the algorithm with T(n) = O(n log n) & S(n) = O(1) is better.
In the general case, the second one, with T(n) = O(n) & S(n) = O(n), is better, as space can be compensated for but time cannot.
It's not true that you can always substitute an O(n lg n)-time, O(1)-space algorithm with an O(n)-time, O(n)-space one. It really depends on the problem, and there are many different algorithms with different complexities for time and space, not just linear or linearithmic (e.g. n log n).
Note that O(1) space sometimes means (like in your example) that you need to modify the input array. So this actually means that you do need O(n) space, but you can somehow use the input array as your space (vs the case of really using only constant space). Changing the input array is not always possible or allowed.
As for choosing between the different algorithms with different time and space characteristics, it depends on your priorities. Often, the time is most important, so if you have enough memory, you would choose the fastest algorithm (remember that this memory is only used temporarily while the algorithm is running). If you really don't have the required space, then you would choose a slower algorithm which requires less space.
So, the general rule of thumb is to choose the fastest algorithm (not just by asymptotic complexity, but by the actual real-world execution time for your regular workload) whose space requirements you can accommodate.
One should keep three things in mind while selecting an algorithm approach.
Time in which the application will run smoothly in worst case scenario.
Space availability based on kind of environment the program will run in.
Re-usability of the functions created.
Given these three points, we may decide which approach suits our application.
If I had limited space and a reasonable amount of data supplied to it, then condition 2 would play the prime role. Here, we may check whether the O(n log n) approach runs smoothly, try to optimize the code, and give importance to condition 3.
(For example, the sorting algorithm used in Array Pair Sum can be reused elsewhere in my code.)
If I had enough space, then improving the running time would be the major concern. Here, instead of re-usability, one would focus on writing a time-efficient program.
Assuming that your assumption is true:
Given that in real life unlimited resources do not exist, and that while implementing a solution you would do your best to implement the most reliable one (a solution that does not break because you consumed all your allowed memory), I would be wise and go with:
Algorithm with O(n log n) time and O(1) space complexity
Even if you have a big amount of memory and are sure you would never exhaust it, solutions that consume a lot of memory can cause many issues (I/O read/write speed, backing up data in case of failure), and I guess no one likes an application that uses 2 GB of memory at startup and keeps growing over time as if there were a memory leak.
I guess the best approach is to write a test; the actual algorithm, the amount of data (n), and the memory usage pattern will all be important.
Here is a simple attempt to model it: random() function calls and mod operations for the time cost, and random memory accesses (reads and writes) for the space cost.
#include <stdio.h>
#include <stdlib.h>   /* malloc, free, atol, random */
#include <time.h>
#include <math.h>

int test_count = 10;

/* Simulate an algorithm that does `time_cost` operations over `mem_cost` ints.
   Memory allocation cost is also included. */
int* test(long time_cost, long mem_cost) {
    int* mem = malloc(sizeof(int) * mem_cost);
    long i;
    for (i = 0; i < time_cost; i++) {
        /* random memory access, read and write operations */
        *(mem + (random() % mem_cost)) = *(mem + (random() % mem_cost));
    }
    return mem;
}

int main(int argc, char** argv) {
    if (argc != 2) {
        fprintf(stderr, "wrong argument count %d \nusage: complexity n", argc);
        return -1;
    }
    long n = atol(argv[1]);

    int *mem1, *mem2;
    clock_t start, stop;
    long long sum1 = 0;
    long long sum2 = 0;
    int i;
    for (i = 0; i < test_count; i++) {
        /* O(n log n) time, O(1) space */
        start = clock();
        mem1 = test(n * log(n), 1);
        stop = clock();
        free(mem1);
        sum1 += (stop - start);

        /* O(n) time, O(n) space */
        start = clock();
        mem2 = test(n, n);
        stop = clock();
        free(mem2);
        sum2 += (stop - start);
    }
    fprintf(stdout, "%lld \t", sum1);
    fprintf(stdout, "%lld \n", sum2);
    return 0;
}
Disabling optimizations:
gcc -O0 -o complexity complexity.c -lm
Testing:
for ((i = 1000; i < 10000000; i *= 2)); do ./complexity $i; done | awk '{print $1 / $2}'
Results I got:
7.96269
7.86233
8.54565
8.93554
9.63891
10.2098
10.596
10.9249
10.8096
10.9078
8.08227
6.63285
5.63355
5.45705
Up to some point, O(n) does better on my machine; after some point, O(n log n) starts doing better (I didn't use swap).
I made an algorithm for sorting, but then I thought perhaps I had just reinvented quicksort.
However, I heard quicksort is O(N^2) worst case; I think my algorithm should be only O(N log N) worst case.
Is this the same as quicksort?
The algorithm works by swapping values so that all values smaller than the median are moved to the left of the array. It then works recursively on each side.
The algorithm starts with i=0, j = n-1
i and j move towards each other with list[i] and list[j] being swapped if necessary.
Here is some code for the first iteration before the recursion:
_list = [1, -4, 2, -5, 3, -6]

def in_place(_list, i, j, median):
    while i < j:
        a, b = _list[i], _list[j]
        if (a < median and b >= median):
            i += 1
            j -= 1
        elif (a >= median and b < median):
            _list[i], _list[j] = b, a
            i += 1
            j -= 1
        elif a < median:
            i += 1
        else:
            j -= 1
    print("changed to ", _list)

def get_median(_list):
    # approximate median in O(N) with O(1) space
    return -4

median = get_median(_list)
in_place(_list, 0, len(_list) - 1, median)

"""
changed to [-6, -5, 2, -4, 3, 1]
"""
http://en.wikipedia.org/wiki/Quicksort#Selection-based_pivoting
Conversely, once we know a worst-case O(n) selection algorithm is
available, we can use it to find the ideal pivot (the median) at every
step of quicksort, producing a variant with worst-case O(n log n)
running time. In practical implementations, however, this variant is
considerably slower on average.
Another variant is to choose the Median of Medians as the pivot
element instead of the median itself for partitioning the elements.
While maintaining the asymptotically optimal run time complexity of
O(n log n) (by preventing worst case partitions), it is also
considerably faster than the variant that chooses the median as pivot.
For starters, I assume there is other code not shown, as I'm pretty sure that the code you've shown on its own would not work.
I'm sorry to steal your fire, but I'm afraid what code you do show seems to be Quicksort, and not only that, but the code seems to possibly suffer from some bugs.
Consider the case of sorting a list of identical elements. Your in_place method, which seems to be what is traditionally called partition in Quicksort, would not move any elements, but at the end i and j seem to reflect the list having only one partition containing the whole list, in which case you would recurse on the whole list forever. My guess is, as mentioned, that you don't return anything from it, or don't seem to actually fully sort anywhere, so I am left guessing how this would be used.
I'm afraid using the real median for Quicksort is not only a possibly fairly slow strategy in the average case, it also doesn't avoid the O(n^2) worst case, again a list of identical elements would provide such a worst case. However, I think a three way partition Quicksort with such a median selection algorithm would guarantee O(n*log n) time. Nonetheless, this is a known option for pivot choice and not a new algorithm.
In short, this appears to be an incomplete and possibly buggy Quicksort, and without three way partitioning, using the median would not guarantee you O(n*log n). However, I do feel that it is a good thing and worth congratulations that you did think of the idea of using the median yourself - even if it has been thought of by others before.
I was asked this question during an interview. They're both O(nlogn) and yet most people use Quicksort instead of Mergesort. Why is that?
Quicksort has O(n^2) worst-case runtime and O(nlogn) average case runtime. However, it's superior to merge sort in many scenarios because many factors influence an algorithm's runtime, and, when taking them all together, quicksort wins out.
In particular, the often-quoted runtime of sorting algorithms refers to the number of comparisons or the number of swaps necessary to perform to sort the data. This is indeed a good measure of performance, especially since it's independent of the underlying hardware design. However, other things – such as locality of reference (i.e. do we read lots of elements which are probably in cache?) – also play an important role on current hardware. Quicksort in particular requires little additional space and exhibits good cache locality, and this makes it faster than merge sort in many cases.
In addition, it's very easy to avoid quicksort's worst-case run time of O(n^2) almost entirely by using an appropriate choice of the pivot – such as picking it at random (this is an excellent strategy).
In practice, many modern implementations of quicksort (in particular libstdc++'s std::sort) are actually introsort, whose theoretical worst case is O(nlogn), same as merge sort. It achieves this by limiting the recursion depth, and switching to a different algorithm (heapsort) once it exceeds log n.
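To make the introsort idea concrete, here is a rough Rust sketch of my own (not libstdc++'s actual implementation): plain quicksort with a depth budget of about 2·log2(n); any subrange that exhausts the budget is handed to heapsort, which caps the worst case at O(n log n). The cutoff of 16 for insertion sort and the BinaryHeap-based heapsort fallback are simplifications for brevity.

use std::collections::BinaryHeap;

fn introsort(a: &mut [i32]) {
    // Depth budget of roughly 2 * log2(n); beyond that we assume quicksort
    // is degenerating and switch strategy.
    let depth_limit = 2 * (usize::BITS - a.len().leading_zeros()) as usize;
    introsort_rec(a, depth_limit);
}

fn introsort_rec(a: &mut [i32], depth: usize) {
    if a.len() <= 16 {
        insertion_sort(a); // small ranges: insertion sort is fastest
        return;
    }
    if depth == 0 {
        heapsort(a); // fallback keeps the worst case at O(n log n)
        return;
    }
    let p = partition(a);
    let (left, right) = a.split_at_mut(p);
    introsort_rec(left, depth - 1);
    introsort_rec(&mut right[1..], depth - 1); // right[0] is the pivot, already placed
}

// Lomuto partition around the last element; returns the pivot's final index.
fn partition(a: &mut [i32]) -> usize {
    let pivot = a[a.len() - 1];
    let mut i = 0;
    for j in 0..a.len() - 1 {
        if a[j] <= pivot {
            a.swap(i, j);
            i += 1;
        }
    }
    a.swap(i, a.len() - 1);
    i
}

fn insertion_sort(a: &mut [i32]) {
    for i in 1..a.len() {
        let mut j = i;
        while j > 0 && a[j - 1] > a[j] {
            a.swap(j - 1, j);
            j -= 1;
        }
    }
}

// For brevity this heapsort uses std's BinaryHeap (which allocates);
// a real introsort would heapify the slice in place.
fn heapsort(a: &mut [i32]) {
    let mut heap: BinaryHeap<i32> = a.iter().copied().collect();
    for x in a.iter_mut().rev() {
        *x = heap.pop().unwrap(); // pop() yields the maximum, so fill from the back
    }
}

fn main() {
    let mut v: Vec<i32> = (0..1000).rev().collect();
    introsort(&mut v);
    assert!(v.windows(2).all(|w| w[0] <= w[1]));
    println!("sorted {} elements", v.len());
}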
As many people have noted, the average case performance for quicksort is faster than mergesort. But this is only true if you are assuming constant time to access any piece of memory on demand.
In RAM this assumption is generally not too bad (it is not always true because of caches, but it is not too bad). However if your data structure is big enough to live on disk, then quicksort gets killed by the fact that your average disk does something like 200 random seeks per second. But that same disk has no trouble reading or writing megabytes per second of data sequentially. Which is exactly what mergesort does.
Therefore if data has to be sorted on disk, you really, really want to use some variation on mergesort. (Generally you quicksort sublists, then start merging them together above some size threshold.)
Furthermore if you have to do anything with datasets of that size, think hard about how to avoid seeks to disk. For instance this is why it is standard advice that you drop indexes before doing large data loads in databases, and then rebuild the index later. Maintaining the index during the load means constantly seeking to disk. By contrast if you drop the indexes, then the database can rebuild the index by first sorting the information to be dealt with (using a mergesort of course!) and then loading it into a BTREE datastructure for the index. (BTREEs are naturally kept in order, so you can load one from a sorted dataset with few seeks to disk.)
There have been a number of occasions where understanding how to avoid disk seeks has let me make data processing jobs take hours rather than days or weeks.
Actually, QuickSort is O(n^2). Its average case running time is O(nlog(n)), but its worst-case is O(n^2), which occurs when you run it on a list that contains few unique items. Randomization takes O(n). Of course, this doesn't change its worst case, it just prevents a malicious user from making your sort take a long time.
QuickSort is more popular because it:
Is in-place (MergeSort requires extra memory linear to number of elements to be sorted).
Has a small hidden constant.
"and yet most people use Quicksort instead of Mergesort. Why is that?"
One psychological reason that has not been given is simply that Quicksort is more cleverly named, i.e. good marketing.
Yes, Quicksort with three-way partitioning is probably one of the best general-purpose sorting algorithms, but there's no getting over the fact that "Quick" sort sounds much more powerful than "Merge" sort.
As others have noted, the worst case of Quicksort is O(n^2), while mergesort and heapsort stay at O(nlogn). On the average case, however, all three are O(nlogn); so for the vast majority of cases they're comparable.
What makes Quicksort better on average is that the inner loop implies comparing several values with a single one, while on the other two both terms are different for each comparison. In other words, Quicksort does half as many reads as the other two algorithms. On modern CPUs performance is heavily dominated by access times, so in the end Quicksort ends up being a great first choice.
I'd like to add that of the three algorithms mentioned so far (mergesort, quicksort and heapsort) only mergesort is stable. That is, the order does not change for those values which have the same key. In some cases this is desirable.
But, truth be told, in practical situations most people need only good average performance and quicksort is... quick =)
All sorting algorithms have their ups and downs. See the Wikipedia article on sorting algorithms for a good overview.
From the Wikipedia entry on Quicksort:
Quicksort also competes with
mergesort, another recursive sort
algorithm but with the benefit of
worst-case Θ(nlogn) running time.
Mergesort is a stable sort, unlike
quicksort and heapsort, and can be
easily adapted to operate on linked
lists and very large lists stored on
slow-to-access media such as disk
storage or network attached storage.
Although quicksort can be written to
operate on linked lists, it will often
suffer from poor pivot choices without
random access. The main disadvantage
of mergesort is that, when operating
on arrays, it requires Θ(n) auxiliary
space in the best case, whereas the
variant of quicksort with in-place
partitioning and tail recursion uses
only Θ(logn) space. (Note that when
operating on linked lists, mergesort
only requires a small, constant amount
of auxiliary storage.)
Mu!
Quicksort is not better, it is well suited for a different kind of application, than mergesort.
Mergesort is worth considering if speed is of the essence, bad worst-case performance cannot be tolerated, and extra space is available.1
You stated that «They're both O(nlogn) […]». This is wrong. «Quicksort uses about n^2/2 comparisons in the worst case.»1.
However, the most important property in my experience is how easily you can implement sequential access while sorting, when using programming languages with the imperative paradigm.
1 Sedgewick, Algorithms
I would like to add to the existing great answers some math about how QuickSort performs when diverging from best case and how likely that is, which I hope will help people understand a little better why the O(n^2) case is not of real concern in the more sophisticated implementations of QuickSort.
Outside of random access issues, there are two main factors that can impact the performance of QuickSort and they are both related to how the pivot compares to the data being sorted.
1) A small number of keys in the data. A dataset of all the same value will sort in n^2 time on a vanilla 2-partition QuickSort because all of the values except the pivot location are placed on one side each time. Modern implementations address this by methods such as using a 3-partition sort. These methods execute on a dataset of all the same value in O(n) time. So using such an implementation means that an input with a small number of keys actually improves performance time and is no longer a concern.
2) Extremely bad pivot selection can cause worst-case performance. In an ideal case, the pivot will always be such that 50% of the data is smaller and 50% of the data is larger, so that the input will be broken in half during each iteration. This gives us n comparisons and swaps times log2(n) recursions for O(n*logn) time.
How much does non-ideal pivot selection affect execution time?
Let's consider a case where the pivot is consistently chosen such that 75% of the data is on one side of the pivot. It's still O(n*logn) but now the base of the log has changed to 1/0.75 or 1.33. The relationship in performance when changing base is always a constant represented by log(2)/log(newBase). In this case, that constant is 2.4. So this quality of pivot choice takes 2.4 times longer than the ideal.
How fast does this get worse?
Not very fast until the pivot choice gets (consistently) very bad:
50% on one side: (ideal case)
75% on one side: 2.4 times as long
90% on one side: 6.6 times as long
95% on one side: 13.5 times as long
99% on one side: 69 times as long
As we approach 100% on one side the log portion of the execution approaches n and the whole execution asymptotically approaches O(n^2).
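Those constants are easy to double-check (a small sketch of mine): with a consistent split that puts fraction p on the larger side, the recursion depth becomes log base 1/p of n, so the slowdown relative to the ideal 50/50 split is ln(2)/ln(1/p).

fn main() {
    // Slowdown vs. the ideal 50/50 split when the larger side always gets
    // fraction p of the elements: depth is log_{1/p}(n) instead of log2(n),
    // so the ratio is ln(2) / ln(1/p).
    for &p in &[0.5f64, 0.75, 0.90, 0.95, 0.99] {
        let slowdown = (2.0f64).ln() / (1.0 / p).ln();
        println!("{:>2.0}% on one side: {:.1}x the ideal time", p * 100.0, slowdown);
    }
}

Running it reproduces the 2.4x, 6.6x, 13.5x and 69x figures above (with 1.0x for the ideal 50% split).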
In a naive implementation of QuickSort, cases such as a sorted array (for 1st element pivot) or a reverse-sorted array (for last element pivot) will reliably produce a worst-case O(n^2) execution time. Additionally, implementations with a predictable pivot selection can be subjected to DoS attack by data that is designed to produce worst case execution. Modern implementations avoid this by a variety of methods, such as randomizing the data before sort, choosing the median of 3 randomly chosen indexes, etc. With this randomization in the mix, we have 2 cases:
Small data set. Worst case is reasonably possible but O(n^2) is not catastrophic because n is small enough that n^2 is also small.
Large data set. Worst case is possible in theory but not in practice.
How likely are we to see terrible performance?
The chances are vanishingly small. Let's consider a sort of 5,000 values:
Our hypothetical implementation will choose a pivot using a median of 3 randomly chosen indexes. We will consider pivots that are in the 25%-75% range to be "good" and pivots that are in the 0%-25% or 75%-100% range to be "bad". If you look at the probability distribution using the median of 3 random indexes, each recursion has an 11/16 chance of ending up with a good pivot. Let us make 2 conservative (and false) assumptions to simplify the math:
Good pivots are always exactly at a 25%/75% split and operate at 2.4*ideal case. We never get an ideal split or any split better than 25/75.
Bad pivots are always worst case and essentially contribute nothing to the solution.
Our QuickSort implementation will stop at n=10 and switch to an insertion sort, so we require 22 25%/75% pivot partitions to break the 5,000 value input down that far. (10*1.333333^22 > 5000) Or, we require 4990 worst case pivots. Keep in mind that if we accumulate 22 good pivots at any point then the sort will complete, so worst case or anything near it requires extremely bad luck. If it took us 88 recursions to actually achieve the 22 good pivots required to sort down to n=10, that would be 4*2.4*ideal case or about 10 times the execution time of the ideal case. How likely is it that we would not achieve the required 22 good pivots after 88 recursions?
Binomial probability distributions can answer that, and the answer is about 10^-18. (n is 88, k is 21, p is 0.6875) Your user is about a thousand times more likely to be struck by lightning in the 1 second it takes to click [SORT] than they are to see that 5,000 item sort run any worse than 10*ideal case. This chance gets smaller as the dataset gets larger. Here are some array sizes and their corresponding chances to run longer than 10*ideal:
Array of 640 items: 10^-13 (requires 15 good pivot points out of 60 tries)
Array of 5,000 items: 10^-18 (requires 22 good pivots out of 88 tries)
Array of 40,000 items: 10^-23 (requires 29 good pivots out of 116)
Remember that this is with 2 conservative assumptions that are worse than reality. So actual performance is better yet, and the balance of the remaining probability is closer to ideal than not.
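If you want to reproduce those probabilities, a small log-space binomial tail computation is enough. This is my own sketch; the exact values depend on the simplifying assumptions above, but they come out in the same vanishingly small range as the figures quoted.

// P(X <= k) for X ~ Binomial(n, p), computed in log space to avoid underflow.
fn binom_cdf(n: u64, k: u64, p: f64) -> f64 {
    let ln_fact = |m: u64| (1..=m).map(|i| (i as f64).ln()).sum::<f64>();
    (0..=k)
        .map(|i| {
            let ln_c = ln_fact(n) - ln_fact(i) - ln_fact(n - i);
            (ln_c + (i as f64) * p.ln() + ((n - i) as f64) * (1.0 - p).ln()).exp()
        })
        .sum()
}

fn main() {
    // Chance of getting fewer good pivots than required, with a good-pivot
    // probability of 11/16 per recursion (median of 3 random indexes).
    for &(tries, required) in &[(60u64, 15u64), (88, 22), (116, 29)] {
        let p_fail = binom_cdf(tries, required - 1, 11.0 / 16.0);
        println!("{} tries, need {}: {:.1e}", tries, required, p_fail);
    }
}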
Finally, as others have mentioned, even these absurdly unlikely cases can be eliminated by switching to a heap sort if the recursion stack goes too deep. So the TLDR is that, for good implementations of QuickSort, the worst case does not really exist because it has been engineered out and execution completes in O(n*logn) time.
This is a common interview question: despite merge sort's better worst-case performance, quicksort is considered better than merge sort, especially for large inputs. There are certain reasons why quicksort is better:
1- Auxiliary Space: Quick sort is an in-place sorting algorithm. In-place sorting means no additional storage space is needed to perform sorting. Merge sort on the other hand requires a temporary array to merge the sorted arrays and hence it is not in-place.
2- Worst case: The worst case of quicksort, O(n^2), can be avoided by using randomized quicksort. It can easily be avoided with high probability by choosing the right pivot. Obtaining average-case behaviour by choosing a good pivot element improves the performance, making it as efficient as merge sort.
3- Locality of reference: Quicksort in particular exhibits good cache locality and this makes it faster than merge sort in many cases like in virtual memory environment.
4- Tail recursion: QuickSort is tail recursive while merge sort is not. A tail recursive function is a function where the recursive call is the last thing executed by the function. Tail recursive functions are considered better than non-tail-recursive functions, as tail recursion can be optimized by the compiler.
Quicksort is the fastest sorting algorithm in practice but has a number of pathological cases that can make it perform as badly as O(n^2).
Heapsort is guaranteed to run in O(n*ln(n)) and requires only finite additional storage. But there are many citations of real world tests which show that heapsort is significantly slower than quicksort on average.
Quicksort is NOT better than mergesort. With O(n^2) (worst case that rarely happens), quicksort is potentially far slower than the O(nlogn) of the merge sort. Quicksort has less overhead, so with small n and slow computers, it is better. But computers are so fast today that the additional overhead of a mergesort is negligible, and the risk of a very slow quicksort far outweighs the insignificant overhead of a mergesort in most cases.
In addition, a mergesort leaves items with identical keys in their original order, a useful attribute.
Wikipedia's explanation is:
Typically, quicksort is significantly faster in practice than other Θ(nlogn) algorithms, because its inner loop can be efficiently implemented on most architectures, and in most real-world data it is possible to make design choices which minimize the probability of requiring quadratic time.
Quicksort
Mergesort
I think there are also issues with the amount of storage needed for Mergesort (which is Ω(n)) that quicksort implementations don't have. In the worst case, they are the same amount of algorithmic time, but mergesort requires more storage.
Why is Quicksort good?
QuickSort takes O(N^2) in the worst case and O(N log N) in the average case. The worst case occurs when the data is already sorted.
This can be mitigated by a random shuffle before sorting is started.
QuickSort doesn't take the extra memory that merge sort does.
If the dataset is large and there are identical items, the complexity of Quicksort reduces by using 3-way partitioning. The more identical items, the better the sort. If all items are identical, it sorts in linear time. [This is the default implementation in most libraries]
Is Quicksort always better than Mergesort?
Not really.
Mergesort is stable but Quicksort is not. So if you need stability in output, you would use Mergesort. Stability is required in many practical applications.
Memory is cheap nowadays. So if extra memory used by Mergesort is not critical to your application, there is no harm in using Mergesort.
Note: In Java, Arrays.sort() uses Quicksort for primitive data types and Mergesort for object data types. Because objects already carry memory overhead, the little added overhead of Mergesort may not be an issue from a performance point of view.
Reference: Watch the QuickSort videos of Week 3, Princeton Algorithms Course at Coursera
Unlike merge sort, quicksort doesn't use auxiliary space, whereas merge sort uses O(n) auxiliary space.
But merge sort has a worst-case time complexity of O(nlogn), whereas the worst-case complexity of quicksort is O(n^2), which happens when the array is already sorted.
The answer tilts slightly towards quicksort with respect to the changes brought by DualPivotQuickSort for primitive values. It is used in Java 7 to sort in java.util.Arrays
It is proved that for the Dual-Pivot Quicksort the average number of
comparisons is 2*n*ln(n), the average number of swaps is 0.8*n*ln(n),
whereas classical Quicksort algorithm has 2*n*ln(n) and 1*n*ln(n)
respectively. Full mathematical proof see in attached proof.txt
and proof_add.txt files. Theoretical results are also confirmed
by experimental counting of the operations.
You can find the Java 7 implementation here - http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/java/util/Arrays.java
Further Awesome Reading on DualPivotQuickSort - http://permalink.gmane.org/gmane.comp.java.openjdk.core-libs.devel/2628
In merge-sort, the general algorithm is:
Sort the left sub-array
Sort the right sub-array
Merge the 2 sorted sub-arrays
At the top level, merging the 2 sorted sub-arrays involves dealing with N elements.
One level below that, each iteration of step 3 involves dealing with N/2 elements, but you have to repeat this process twice. So you're still dealing with 2 * N/2 == N elements.
One level below that, you're merging 4 * N/4 == N elements, and so on. Every depth in the recursive stack involves merging the same number of elements, across all calls for that depth.
Consider the quick-sort algorithm instead:
Pick a pivot point
Place the pivot point at the correct place in the array, with all smaller elements to the left, and larger elements to the right
Sort the left-subarray
Sort the right-subarray
At the top level, you're dealing with an array of size N. You then pick one pivot point, put it in its correct position, and can then ignore it completely for the rest of the algorithm.
One level below that, you're dealing with 2 sub-arrays that have a combined size of N-1 (ie, subtract the earlier pivot point). You pick a pivot point for each sub-array, which comes up to 2 additional pivot points.
One level below that, you're dealing with 4 sub-arrays with combined size N-3, for the same reasons as above.
Then N-7... Then N-15... Then N-32...
The depth of your recursive stack remains approximately the same (logN). With merge-sort, you're always dealing with an N-element merge, across each level of the recursive stack. With quick-sort though, the number of elements that you're dealing with diminishes as you go down the stack. For example, if you look at the depth midway through the recursive stack, the number of elements you're dealing with is N - 2^((logN)/2) == N - sqrt(N).
Disclaimer: On merge-sort, because you divide the array into 2 exactly equal chunks each time, the recursive depth is exactly logN. On quick-sort, because your pivot point is unlikely to be exactly in the middle of the array, the depth of your recursive stack may be slightly greater than logN. I haven't done the math to see how big a role this factor and the factor described above actually play in the algorithm's complexity.
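Here is a tiny Rust sketch of my own for that bookkeeping, under the idealised perfectly-balanced splits described above: at recursion depth d, merge-sort still touches all N elements, while quick-sort has already placed 2^d - 1 pivots and touches only N - (2^d - 1).

fn main() {
    let n: u64 = 1 << 20; // 1,048,576 elements
    for depth in [0u32, 5, 10, 15, 19] {
        let merge = n;                          // merging always handles all N
        let quick = n - ((1u64 << depth) - 1);  // pivots placed so far are skipped
        println!("depth {:>2}: merge-sort touches {:>7}, quick-sort touches {:>7}", depth, merge, quick);
    }
}

At depth 10 (midway for n = 2^20) quick-sort is down to N - 1023, which is roughly the N - sqrt(N) mentioned above.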
This is a pretty old question, but since I've dealt with both recently here are my 2c:
Merge sort needs on average ~ N log N comparisons. For already (almost) sorted arrays this gets down to 1/2 N log N, since while merging we (almost) always select the "left" part 1/2 N times and then just copy the right 1/2 N elements. Additionally, I can speculate that already-sorted input makes the processor's branch predictor shine by guessing almost all branches correctly, thus preventing pipeline stalls.
Quick sort on average requires ~ 1.38 N log N comparisons. It does not benefit greatly from already sorted array in terms of comparisons (however it does in terms of swaps and probably in terms of branch predictions inside CPU).
My benchmarks on fairly modern processor shows the following:
When the comparison function is a callback function (like in the qsort() libc implementation), quicksort is slower than mergesort by 15% on random input and by 30% for an already-sorted array of 64-bit integers.
On the other hand if comparison is not a callback, my experience is that quicksort outperforms mergesort by up to 25%.
However if your (large) array has a very few unique values, merge sort starts gaining over quicksort in any case.
So maybe the bottom line is: if comparison is expensive (e.g. a callback function, comparing strings, comparing many parts of a structure, mostly getting to a second-third-fourth "if" to make the difference) - the chances are that you will be better off with merge sort. For simpler tasks quicksort will be faster.
That said, all that was previously said is true:
- Quicksort can be N^2, but Sedgewick claims that a good randomized implementation has more chance of the computer performing the sort being struck by lightning than of going N^2
- Mergesort requires extra space
Quicksort has a better average case complexity but in some applications it is the wrong choice. Quicksort is vulnerable to denial of service attacks. If an attacker can choose the input to be sorted, he can easily construct a set that takes the worst-case time complexity of O(n^2).
Mergesort's average case complexity and worst case complexity are the same, and as such doesn't suffer the same problem. This property of merge-sort also makes it the superior choice for real-time systems - precisely because there aren't pathological cases that cause it to run much, much slower.
I'm a bigger fan of Mergesort than I am of Quicksort, for these reasons.
That's hard to say. The worst case of MergeSort is n*log2(n) - n + 1, which is exact if n equals 2^k (I have already proved this), and for any n it's between (n lg n - n + 1) and (n lg n + n + O(lg n)). But for quicksort, its best case is n*log2(n) (also when n equals 2^k). If you divide the MergeSort count by the quicksort count, it approaches one as n goes to infinity. So it's as if the worst case of MergeSort is better than the best case of QuickSort; why do we use quicksort? But remember, MergeSort is not in place: it requires 2n memory space. And MergeSort also needs to do many array copies, which we don't include in the analysis of the algorithm. In a word, MergeSort really is faster than quicksort in theory, but in reality you need to consider memory space and the cost of array copying; merging is slower than quicksort. I once made an experiment where I was given 1,000,000 digits in Java by the Random class, and it took 2610 ms with mergesort and 1370 ms with quicksort.
Quicksort has a worst case of O(n^2); however, the average case consistently outperforms merge sort. Each algorithm is O(nlogn), but you need to remember that when talking about Big O we leave off the lower-order factors. Quicksort has significant improvements over merge sort when it comes to constant factors.
Merge sort also requires O(2n) memory, while quick sort can be done in place (requiring only O(n)). This is another reason that quick sort is generally preferred over merge sort.
Extra info:
The worst case of quick sort occurs when the pivot is poorly chosen. Consider the following example:
[5, 4, 3, 2, 1]
If the pivot is chosen as the smallest or largest number in the group then quicksort will run in O(n^2). The probability of choosing the element that is in the largest or smallest 25% of the list is 0.5. That gives the algorithm a 0.5 chance of being a good pivot. If we employ a typical pivot-choosing algorithm (say choosing a random element), we have a 0.5 chance of choosing a good pivot for every choice of a pivot. For collections of a large size, the probability of always choosing a poor pivot is 0.5^n. Based on this probability, quicksort is efficient for the average (and typical) case.
When I experimented with both sorting algorithms, by counting the number of recursive calls, quicksort consistently made fewer recursive calls than mergesort.
It is because quicksort has pivots, and pivots are not included in the next recursive calls. That way quicksort can reach the recursive base case quicker than mergesort.
While they're both in the same complexity class, that doesn't mean they both have the same runtime. Quicksort is usually faster than mergesort, just because it's easier to code a tight implementation and the operations it does can go faster. It's because quicksort is generally faster that people use it instead of mergesort.
However! I personally often will use mergesort or a quicksort variant that degrades to mergesort when quicksort does poorly. Remember: quicksort is only O(n log n) on average. Its worst case is O(n^2)! Mergesort is always O(n log n). In cases where realtime performance or responsiveness is a must and your input data could be coming from a malicious source, you should not use plain quicksort.
All things being equal, I'd expect most people to use whatever is most conveniently available, and that tends to be qsort(3). Other than that quicksort is known to be very fast on arrays, just like mergesort is the common choice for lists.
What I'm wondering is why it's so rare to see radix or bucket sort. They're O(n), at least on linked lists and all it takes is some method of converting the key to an ordinal number. (strings and floats work just fine.)
I'm thinking the reason has to do with how computer science is taught. I even had to demonstrate to my lecturer in Algorithm analysis that it was indeed possible to sort faster than O(n log(n)). (He had the proof that you can't comparison sort faster than O(n log(n)), which is true.)
In other news, floats can be sorted as integers, but you have to turn the negative numbers around afterwards.
Edit:
Actually, here's an even more vicious way to sort floats-as-integers: http://www.stereopsis.com/radix.html. Note that the bit-flipping trick can be used regardless of what sorting algorithm you actually use...
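For the curious, that bit-flipping trick boils down to a small key transform. This is a hedged Rust sketch of mine (not the linked article's code): flip every bit of a negative float's representation and only the sign bit of a non-negative one; unsigned integer order on the keys then matches numeric order (NaNs aside), so they can be fed straight into a radix sort.

// Map an f32 to a u32 key whose unsigned order matches the float's numeric
// order (ignoring NaNs).
fn float_key(x: f32) -> u32 {
    let bits = x.to_bits();
    if bits & 0x8000_0000 != 0 {
        !bits               // negative: flip everything so bigger magnitude sorts lower
    } else {
        bits | 0x8000_0000  // non-negative: set the sign bit so it sorts above all negatives
    }
}

fn main() {
    let mut v = [3.5f32, -1.0, 0.0, -7.25, 2.0];
    v.sort_by_key(|&x| float_key(x));
    println!("{:?}", v); // [-7.25, -1.0, 0.0, 2.0, 3.5]
}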
Small additions to quick vs merge sorts.
It can also depend on the kind of items being sorted. If accessing items, swapping and comparing are not simple operations, like comparing integers in plain memory, then merge sort can be the preferable algorithm.
For example, we might sort items using a network protocol on a remote server.
Also, for custom containers like a linked list, there is no benefit to quicksort:
1. Merge sort on a linked list doesn't need additional memory.
2. Access to elements in quicksort is not sequential (in memory).
Quicksort is an in-place sorting algorithm, so it's better suited for arrays. Merge sort, on the other hand, requires extra storage of O(N), and is more suitable for linked lists.
Unlike arrays, in a linked list we can insert items in the middle with O(1) space and O(1) time; therefore the merge operation in merge sort can be implemented without any extra space. However, allocating and de-allocating extra space for arrays has an adverse effect on the run time of merge sort. Merge sort also favors linked lists as data is accessed sequentially, without much random memory access.
Quick sort on the other hand requires a lot of random memory access and with an array we can directly access the memory without any traversing as required by linked lists. Also quick sort when used for arrays have a good locality of reference as arrays are stored contiguously in memory.
Even though both sorting algorithms' average complexity is O(N log N), for ordinary tasks people usually use an array for storage, and for that reason quicksort should be the algorithm of choice.
EDIT: I just found out that merge sort's worst/best/average case is always n log n, but quicksort can vary from n^2 (worst case when elements are already sorted) to n log n (average/best case when the pivot always divides the array into two halves).
Consider both time and space complexity.
For Merge sort:
Time complexity: O(nlogn),
Space complexity: O(n)
For Quick sort:
Time complexity: O(n^2) in the worst case,
Space complexity: O(logn) (for the recursion stack)
Now, they each win in one scenario.
But, using a random pivot you can almost always reduce Time complexity of Quick sort to O(nlogn).
Thus, Quick sort is preferred in many applications instead of Merge sort.
In C/C++ land, when not using STL containers, I tend to use quicksort, because it is built into the runtime, while mergesort is not.
So I believe that in many cases, it is simply the path of least resistance.
In addition performance can be much higher with quick sort, for cases where the entire dataset does not fit into the working set.
One of the reasons is more philosophical. Quicksort is a top-down philosophy. With n elements to sort, there are n! possibilities. With 2 partitions of m & n-m which are mutually exclusive, the number of possibilities goes down by several orders of magnitude. m! * (n-m)! is smaller by several orders than n! alone. Imagine 5! vs 3! * 2!: 5! has 10 times more possibilities than the 2 partitions of 2 & 3 each. Extrapolate that to 1 million factorial vs 900K! * 100K!. So instead of worrying about establishing any order within a range or a partition, just establish order at a broader level in partitions and reduce the possibilities within a partition. Any order established earlier within a range will be disturbed later if the partitions themselves are not mutually exclusive.
Any bottom-up order approach like merge sort or heap sort is like a worker's or employee's approach, where one starts comparing at a microscopic level early. But this order is bound to be lost as soon as an element in between them is found later on. These approaches are very stable & extremely predictable but do a certain amount of extra work.
Quicksort is like a managerial approach, where one is not initially concerned about any order, only about meeting a broad criterion with no regard for order. Then the partitions are narrowed until you get a sorted set. The real challenge in Quicksort is in finding a partition or criterion in the dark when you know nothing about the elements to sort. That is why we either need to spend some effort to find a median value, or pick one at random, or take some arbitrary "managerial" approach. Finding a perfect median can take a significant amount of effort and leads to a stupid bottom-up approach again. So Quicksort says: just pick a random pivot and hope that it will be somewhere in the middle, or do some work to find the median of 3, 5 or something more to get a better median, but do not plan to be perfect & don't waste any time on initially ordering. That seems to do well if you are lucky, or sometimes degrades to n^2 when you don't get a median but just take a chance. Anyway, data is random, right?
So I agree more with the top ->down logical approach of quicksort & it turns out that the chance it takes about pivot selection & comparisons that it saves earlier seems to work better more times than any meticulous & thorough stable bottom ->up approach like merge sort. But