Quick Sort vs Insertion Sort

When sorting an array, at roughly how many elements n does quicksort become faster than insertion sort? I know that quicksort is good for larger inputs and that insertion sort is great for smaller ones, but I was wondering: around what size does quicksort become the clearly better option?

These algorithms depend on more than just the size of the array to determine their running time. For quicksort, the pivot your implementation selects can have a significant effect: if the pivot is consistently the greatest or least element, quicksort takes O(n^2) time. Insertion sort is also influenced by factors besides array size: if the elements are already in order, it runs in O(n) time regardless of array size, but if they arrive in reverse order it takes O(n^2). Because of these factors, there is no size n at which one algorithm is guaranteed to outperform the other. If you are concerned with running times for large arrays, look at heapsort or mergesort; both are worst-case O(n log n) and much faster on large inputs.
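If you want a number for your own setup, the practical approach is to measure. Here is a minimal benchmark sketch in Python (my own illustration, not from the answer above), using a random-pivot quicksort so sorted test data doesn't trigger the O(n^2) case; the sizes tried are arbitrary:

    import random
    import timeit

    def insertion_sort(a):
        # In-place insertion sort: O(n) on sorted input, O(n^2) worst case.
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key

    def quicksort(a, lo=0, hi=None):
        # In-place quicksort with a random pivot, which makes the
        # O(n^2) worst case unlikely on any particular input.
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        pivot = a[random.randint(lo, hi)]
        i, j = lo, hi
        while i <= j:
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i, j = i + 1, j - 1
        quicksort(a, lo, j)
        quicksort(a, i, hi)

    # Time both on random arrays of increasing size; the crossover point
    # depends on the machine, the language, and the implementations.
    for n in (4, 8, 16, 32, 64, 128):
        data = [random.random() for _ in range(n)]
        t_ins = timeit.timeit(lambda: insertion_sort(data[:]), number=2000)
        t_qs = timeit.timeit(lambda: quicksort(data[:]), number=2000)
        print(f"n={n:4d}  insertion={t_ins:.4f}s  quicksort={t_qs:.4f}s")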

Related

What kind of input data are the following sorting algorithms good/bad for?

What kinds of input data are the following sorting algorithms efficient or inefficient on: quicksort, mergesort, heapsort, insertion sort, etc.?
I know there are at least 2 factors that affect the performance of a sorting algorithm: 1) The size of the input, and 2) whether or not the data is already mostly sorted. But I don't know exactly how these factors affect the efficiency of the algorithms.
I'd like to study this in detail, so if there are any sources/links that you can point me to, that'd be great.
Assuming quicksort is based on the Hoare partition scheme (middle element as pivot), it won't degrade to its worst-case O(n^2) time complexity on almost-sorted data.
https://en.wikipedia.org/wiki/Quicksort#Hoare_partition_scheme
Mergesort always does n ⌈log2(n)⌉ moves. If the data is already sorted, then the number of compares is about (n ⌈log2(n)⌉)/2.
Heapsort time complexity remains about the same (duplicates may reduce running time).
Insertion sort is the only sort in this list that is faster when the data is nearly sorted, but its worst-case time complexity is O(n^2). For nearly sorted data, I'd expect the running time to be about O(m n), where m is the number of elements out of place.
Variations of natural merge sort, which might use insertion sort on small runs while scanning and identifying already sorted runs, would have time complexity O(n) on already sorted data.
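A quick way to see the "nearly sorted" effect is to count comparisons directly. This is a small sketch of mine (not from the answers above); the O(m n) estimate shows up as a comparison count close to n plus a term proportional to how far the displaced elements travel:

    def insertion_sort_comparisons(a):
        # Insertion sort that returns the number of element comparisons.
        comparisons = 0
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0:
                comparisons += 1
                if a[j] <= key:
                    break
                a[j + 1] = a[j]   # shift the larger element right
                j -= 1
            a[j + 1] = key
        return comparisons

    n = 1000
    print(insertion_sort_comparisons(list(range(n))))         # ~n (sorted input)
    print(insertion_sort_comparisons(list(range(n, 0, -1))))  # ~n^2/2 (reversed)
    nearly = list(range(n))
    nearly[100], nearly[900] = nearly[900], nearly[100]       # 2 elements out of place
    print(insertion_sort_comparisons(nearly))                 # close to n, not n^2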

O(n log n) in-place sorting algorithm

This question was in the preparation exam for my midterm in introduction to computer science.
There exists an algorithm which can find the kth element in a list in O(n) time, and suppose that it is in place. Using this algorithm, write an in-place sorting algorithm that runs in worst-case time O(n log n), and prove that it does. Given that this algorithm exists, why is mergesort still used?
I assume I must write some variant of the quicksort algorithm, which has a worst case of O(n^2), since mergesort is not an in-place algorithm. What confuses me is the given algorithm for finding the kth element in a list. Isn't a simple loop through the elements of an array already an O(n) algorithm?
How can the provided algorithm make any difference to the sorting algorithm's running time if it doesn't change anything about the execution time? I don't see how, used with quicksort, insertion sort, or selection sort, it could lower the worst case to O(n log n). Any input is appreciated!
Check the Wikipedia article on selection algorithms, namely the "Selection by sorting" section:
Similarly, given a median-selection algorithm or general selection algorithm applied to find the median, one can use it as a pivot strategy in Quicksort, obtaining a sorting algorithm. If the selection algorithm is optimal, meaning O(n), then the resulting sorting algorithm is optimal, meaning O(n log n). The median is the best pivot for sorting, as it evenly divides the data, and thus guarantees optimal sorting, assuming the selection algorithm is optimal. A sorting analog to median of medians exists, using the pivot strategy (approximate median) in Quicksort, and similarly yields an optimal Quicksort.
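Here is a sketch of what that quote describes, assuming the O(n) selection routine is median-of-medians (the function names are mine). The selection routine returns a guaranteed "good" pivot, so every partition is reasonably balanced and the recursion depth is O(log n). Note that this toy version allocates temporary lists for the selection step, so it is not strictly in place as the exam question demands:

    def median_of_medians(values):
        # O(n) selection of an approximate median: the returned value is
        # guaranteed to lie between the 30th and 70th percentiles.
        if len(values) <= 5:
            return sorted(values)[len(values) // 2]
        groups = [values[i:i + 5] for i in range(0, len(values), 5)]
        medians = [sorted(g)[len(g) // 2] for g in groups]
        return median_of_medians(medians)

    def selection_quicksort(a, lo=0, hi=None):
        # Quicksort that gets its pivot from the O(n) selection routine,
        # so the partition is always balanced and the worst case is O(n log n).
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        pivot = median_of_medians(a[lo:hi + 1])
        # Three-way (Dutch national flag) partition around the pivot value.
        i, j, k = lo, lo, hi
        while j <= k:
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
                j += 1
            elif a[j] > pivot:
                a[j], a[k] = a[k], a[j]
                k -= 1
            else:
                j += 1
        selection_quicksort(a, lo, i - 1)
        selection_quicksort(a, k + 1, hi)

    data = [9, 1, 8, 2, 7, 3, 6, 4, 5, 0]
    selection_quicksort(data)
    print(data)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]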
The short answer to why mergesort is preferred over quicksort in some cases is that it is stable (while quicksort is not).
Reasons for merge sort: it is stable, and it does more moves but fewer compares than quicksort. If the compare overhead is greater than the move overhead, merge sort is faster. One situation where compare overhead may dominate is sorting an array of indices or pointers to objects, such as strings.
If sorting a linked list, then merge sort using an array of pointers to the first nodes of working lists is the fastest method I'm aware of. This is how the HP / Microsoft std::list::sort() is implemented: in the array of pointers, array[i] is either NULL or points to a list of length 2^i (except that the last pointer points to a list of unlimited length).
I found the solution:

    quickSort(A, start, stop):
        if (start >= stop) return              // 2 ops
        pivot <- partition(A, start, stop)     // 2 ops + n
        quickSort(A, start, pivot-1)           // 2 ops + T(n/2)
        quickSort(A, pivot+1, stop)            // 2 ops + T(n/2)

With the median as the pivot, each call does about 8 constant operations plus an n-operation partition, and recurses on two halves:

    T(n) = 8 + n + 2T(n/2)                          (k = 1)
         = 8 + 2(8 + n/2 + 2T(n/4)) + n
         = 24 + 4T(n/4) + 2n                        (k = 2)
         ...
         = (2^k - 1)*8 + 2^k*T(n/2^k) + k*n

The recursion finishes when n = 2^k, i.e. k = log2(n). With T(1) = 2:

    T(n) = (2^log2(n) - 1)*8 + 2^log2(n)*2 + log2(n)*n
         = 8n - 8 + 2n + n*log2(n)
         = n*(10 + log2(n)) - 8

which is O(n log n).
Quicksort has a worst case of O(n^2), but that only occurs if you have bad luck when choosing the pivot. If you can select the kth element in O(n), that means you can choose a good pivot with O(n) extra steps, which yields a worst-case O(n log n) algorithm. There are a couple of reasons why mergesort is still used. First, this selection algorithm is more or less cumbersome to implement in place, and it also adds several extra operations to the regular quicksort, so the result is not as much faster than merge sort as one might expect.
Nevertheless, mergesort does not survive merely because of its worst-case time complexity: heapsort achieves the same worst-case bounds and is also in place, yet it didn't replace mergesort (and it has other disadvantages against quicksort too). The main reason mergesort survives is that it is the fastest stable sorting algorithm known so far. There are several applications in which a stable sorting algorithm is paramount, and that is mergesort's strength.
A stable sort is one in which equal items preserve their original relative order. For example, this is very useful when you have two keys and want the result ordered by the first key, with ties broken by the second: sort by the second key first, then sort stably by the first key.
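For instance, here is the two-key idiom in Python, whose built-in sort (Timsort, a mergesort-family algorithm) is stable; the data is made up for illustration:

    # Sorting twice -- by the secondary key, then by the primary key --
    # works only because the sort is stable.
    people = [("Smith", "Jane"), ("Doe", "John"), ("Smith", "Adam"), ("Doe", "Alice")]
    people.sort(key=lambda p: p[1])  # secondary key: first name
    people.sort(key=lambda p: p[0])  # primary key: last name (stability keeps first-name order)
    print(people)
    # [('Doe', 'Alice'), ('Doe', 'John'), ('Smith', 'Adam'), ('Smith', 'Jane')]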
The problem with heapsort compared to quicksort is that it is cache-inefficient: it swaps and compares elements that are far apart in the array, while quicksort compares nearby elements, which are more likely to be in the cache at the same time.

Why is quicksort considered the fastest sorting algorithm?

Quicksort has worst-case time complexity O(n^2), while others like heapsort and merge sort have worst-case time complexity O(n log n). Still, quicksort is considered faster. Why?
On a side note, if sorting an array of integers, then counting / radix sort is fastest.
In general, merge sort does more moves but fewer compares than quicksort. The typical implementation of merge sort uses a temp array the same size as the original array, or half the size (sort the 2nd half into the 2nd half, sort the 1st half into the temp array, then merge the temp array and the 2nd half into the original array), so it needs more space than quicksort, which optimally needs only log2(n) levels of nesting. To avoid worst-case nesting, a depth check can be used to switch quicksort over to heapsort; this is called introsort.
If the compare overhead is greater than the move overhead, then merge sort is faster. A common example where compares take longer than moves is sorting an array of pointers to strings: only the (4 or 8 byte) pointers are moved, while the strings themselves may be significantly larger, and there may be a large number of them.
If there is significant pre-ordering of the data to be sorted, then timsort (fixed sized runs) or a "natural" merge sort (variable sized runs) will be faster.
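Here is a compact sketch of the introsort idea mentioned above (a depth limit of 2·log2(n) is a common choice; real implementations also switch to insertion sort for tiny partitions, which this toy version omits, and the function names are mine):

    import heapq
    import math

    def introsort(a):
        if len(a) < 2:
            return
        # Allow roughly 2*log2(n) levels of quicksort nesting before
        # falling back to heapsort, capping the worst case at O(n log n).
        _introsort(a, 0, len(a) - 1, 2 * int(math.log2(len(a))))

    def _introsort(a, lo, hi, depth):
        if lo >= hi:
            return
        if depth == 0:
            # Depth limit reached: heapsort this slice instead.
            chunk = a[lo:hi + 1]
            heapq.heapify(chunk)
            a[lo:hi + 1] = [heapq.heappop(chunk) for _ in range(len(chunk))]
            return
        p = _partition(a, lo, hi)
        _introsort(a, lo, p - 1, depth - 1)
        _introsort(a, p + 1, hi, depth - 1)

    def _partition(a, lo, hi):
        # Lomuto partition with the last element as the pivot.
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        return i

    data = list(range(1000, 0, -1))  # reverse-sorted: bad for a last-element pivot
    introsort(data)
    print(data[:5], data[-5:])       # [1, 2, 3, 4, 5] [996, 997, 998, 999, 1000]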
While it is true that quicksort has worst case time complexity of O(n^2), as long as the quicksort implementation properly randomizes the input, its average case (expected) running time is O(n log n).
Additionally, the constant factors hidden by the asymptotic notation, which do matter in practice, are pretty small compared to other popular choices such as merge sort. Thus, in expectation, quicksort will outperform other O(n log n) comparison sorts despite its less savory worst-case bound.
Not exactly. Quicksort is the best in most cases, but its pessimistic (worst-case) time complexity is O(n^2), and that doesn't mean it always hits it. The issue lies in choosing the right pivot: choose it well and the time complexity is O(n log n).
In addition, quicksort is one of the cheapest and easiest algorithms to implement.

When is mergesort preferred over quicksort?

Quicksort is better than mergesort in many cases. But when might mergesort be better than quicksort?
For example, mergesort works better when all of the data cannot be loaded into memory at once. Are there any other cases?
Answers to the suggested duplicate question list advantages of using quicksort over mergesort. I'm asking about the possible cases and applications where mergesort would be better than quicksort.
Both quicksort and mergesort can work just fine if you can't fit all of the data into memory at once. You can implement quicksort by choosing a pivot, then streaming elements in from disk into memory and writing each element into one of two different files based on how it compares to the pivot. If you use a double-ended priority queue, you can actually do this even more efficiently by fitting the maximum number of possible elements into memory at once.
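Here is a toy sketch of that streaming idea (my own illustration: one integer per line, the first value as pivot, and a three-way split so duplicate keys terminate; a serious version would sample a better pivot and buffer its I/O):

    import os
    import tempfile

    def external_quicksort(path):
        # Sort a file of integers (one per line) while holding only a
        # constant number of values in memory at a time.
        pivot, equal = None, 0
        less = tempfile.NamedTemporaryFile("w", delete=False)
        greater = tempfile.NamedTemporaryFile("w", delete=False)
        with open(path) as f:
            for line in f:
                x = int(line)
                if pivot is None:
                    pivot = x            # toy pivot choice: the first value
                if x < pivot:
                    less.write(f"{x}\n")
                elif x > pivot:
                    greater.write(f"{x}\n")
                else:
                    equal += 1
        less.close()
        greater.close()
        if pivot is not None:            # an empty file is already sorted
            external_quicksort(less.name)     # both partitions are strictly
            external_quicksort(greater.name)  # smaller, so this terminates
            with open(path, "w") as out:
                with open(less.name) as f:
                    out.writelines(f)
                out.writelines(f"{pivot}\n" for _ in range(equal))
                with open(greater.name) as f:
                    out.writelines(f)
        os.unlink(less.name)
        os.unlink(greater.name)

    with open("numbers.txt", "w") as f:
        f.write("3\n1\n2\n1\n")
    external_quicksort("numbers.txt")
    print(open("numbers.txt").read())    # 1 1 2 3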
Mergesort is worst-case O(n log n). That said, you can easily modify quicksort to produce the introsort algorithm, a hybrid between quicksort, insertion sort, and heapsort, that's worst-case O(n log n) but retains the speed of quicksort in most cases.
It might be helpful to see why quicksort is usually faster than mergesort, since if you understand the reasons you can pretty quickly find some cases where mergesort is a clear winner. Quicksort usually is better than mergesort for two reasons:
Quicksort has better locality of reference than mergesort, which means that the accesses performed in quicksort are usually faster than the corresponding accesses in mergesort.
Quicksort uses worst-case O(log n) memory (if implemented correctly), while mergesort requires O(n) memory due to the overhead of merging.
There's one scenario, though, where these advantages disappear. Suppose you want to sort a linked list of elements. The linked list elements are scattered throughout memory, so advantage (1) disappears (there's no locality of reference). Second, linked lists can be merged with only O(1) space overhead instead of O(n) space overhead, so advantage (2) disappears. Consequently, you usually will find that mergesort is a superior algorithm for sorting linked lists, since it makes fewer total comparisons and isn't susceptible to a poor pivot choice.
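To illustrate, a minimal linked-list mergesort in Python (the Node class and function names are mine): the merge step just relinks nodes, so no O(n) auxiliary array is needed, and there is no pivot to get wrong:

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def merge_sort_list(head):
        # Recursive merge sort on a singly linked list: O(log n) stack
        # space and O(1) extra memory per merge.
        if head is None or head.next is None:
            return head
        # Split the list in half with slow/fast pointers.
        slow, fast = head, head.next
        while fast and fast.next:
            slow = slow.next
            fast = fast.next.next
        mid, slow.next = slow.next, None
        return merge(merge_sort_list(head), merge_sort_list(mid))

    def merge(a, b):
        # O(1)-space merge: relink the existing nodes instead of copying.
        dummy = tail = Node(None)
        while a and b:
            if a.value <= b.value:       # <= keeps the sort stable
                tail.next, a = a, a.next
            else:
                tail.next, b = b, b.next
            tail = tail.next
        tail.next = a or b
        return dummy.next

    # Build 4 -> 1 -> 3 -> 2, sort it, and print the result.
    head = merge_sort_list(Node(4, Node(1, Node(3, Node(2)))))
    while head:
        print(head.value, end=" ")       # 1 2 3 4
        head = head.next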
The single most important advantage of merge sort over quicksort is its stability: elements that compare equal retain their original relative order.
MergeSort is stable by design: equal elements keep their original order.
MergeSort is well suited to parallel implementation (multithreading).
MergeSort uses about 30% fewer comparisons than QuickSort. This is an often overlooked advantage, because a comparison can be quite expensive (e.g. when comparing several fields of database rows).
Quicksort is average-case O(n log n), but has a worst case of O(n^2). Mergesort is always O(n log n). Besides the asymptotic worst case and mergesort's suitability for data that doesn't fit in memory, I can't think of another reason.
Scenarios when quicksort is worse than mergesort (these hit the worst case of implementations with a naive first/last-element pivot or plain two-way partitioning; see the demonstration after this list):
Array is already sorted.
All elements in the array are the same.
Array is sorted in reverse order.
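A small demonstration of the first scenario (a sketch of mine, not from the answer above), using a quicksort that always picks the first element as its pivot: on already-sorted input every partition is maximally lopsided, so the recursion goes O(n) deep and the total work is O(n^2):

    import sys

    def naive_quicksort(a, lo=0, hi=None):
        # Quicksort that always picks the FIRST element as the pivot.
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        pivot = a[lo]
        i = lo + 1
        for j in range(lo + 1, hi + 1):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[lo], a[i - 1] = a[i - 1], a[lo]   # move the pivot into place
        naive_quicksort(a, lo, i - 2)
        naive_quicksort(a, i, hi)

    sys.setrecursionlimit(5000)
    naive_quicksort(list(range(2000)))  # sorted input: ~2000 nested calls
    # A random-pivot or median-of-three version would only recurse ~log2(n) deep.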
Take mergesort over quicksort if you don't know anything about the data.
Merge sort has a guaranteed upper bound of O(N log2(N)). Quicksort has such a bound too, but it is much higher: O(N^2). When you need a guaranteed upper bound on the timing of your code, use merge sort over quicksort.
For example, if you write code for a real-time system that relies on sorting, merge sort would be a better choice.
Merge sort's worst-case complexity is O(n log n), whereas quicksort's worst case is O(n^2).
Merge sort is a stable sort, which means that equal elements in an array maintain their original positions with respect to each other.

Linear vs Insertion vs Binary vs Merge Sort

So I know the big-O facts: linear is n, insertion is n**2, binary is log(n), and merge is n log n.
So merge sort is the best search for large lists. Which of the above is the best for small lists, i.e. how small? Thanks
You're mixing up sort and search algorithms. Linear search and binary search are algorithms for finding a value in an array, not sorting the array. Insertion sort and mergesort are sorting algorithms.
Insertion sort tends to run faster for small arrays. Many high-performance sorting routines, including Python's adaptive mergesort (Timsort), automatically switch to insertion sort for small input sizes. The best size for the switch is generally determined by testing. Java uses insertion sort for <= 6 elements in the primitive-array versions of Arrays.sort; I'm not sure exactly how Python behaves.
You have got your facts wrong:
There is nothing called "linear sort" (linear search is a search algorithm).
Insertion sort is O(N^2).
There is nothing called "binary sort" (binary search is a search algorithm), though you may be thinking of heap sort, which is O(N log N).
Merge sort is O(N log N).
Quicksort is O(N log N) on average.
It is better to switch from merge sort to insertion sort if the number of elements is less than 7, and from quicksort to insertion sort if the number of elements is less than 13.
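A sketch of such a hybrid (the cutoff constant and function names are mine; tune the cutoff by measurement, as the answer above suggests):

    INSERTION_CUTOFF = 8  # typical cutoffs are around 7-16; tune by benchmarking

    def hybrid_merge_sort(a):
        # Merge sort that hands small slices to insertion sort, the same
        # trick Timsort and Arrays.sort use for tiny inputs.
        if len(a) <= INSERTION_CUTOFF:
            return insertion_sorted(a)
        mid = len(a) // 2
        return merge(hybrid_merge_sort(a[:mid]), hybrid_merge_sort(a[mid:]))

    def insertion_sorted(a):
        out = list(a)
        for i in range(1, len(out)):
            key = out[i]
            j = i - 1
            while j >= 0 and out[j] > key:
                out[j + 1] = out[j]
                j -= 1
            out[j + 1] = key
        return out

    def merge(left, right):
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:      # <= keeps the sort stable
                out.append(left[i])
                i += 1
            else:
                out.append(right[j])
                j += 1
        out.extend(left[i:])
        out.extend(right[j:])
        return out

    print(hybrid_merge_sort([5, 3, 8, 1, 9, 2, 7, 4, 6, 0]))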
