Quicksort median-of-3 running time - sorting

I am considering a quicksort algorithm in which the pivot in the partition function is chosen as the median of three (uniformly) randomly chosen items in the array. Does anyone know the running time of the average case?

Small assumption: all the elements are different.
With this assumption, even if you choose the pivot as just a single random element, the average running time is O(n * log n).
The best case is also O(n * log n).
Choosing a median of three random elements doesn't make the average case any worse, so it must also be O(n * log n).
What you gain by looking at the median is a smaller deviation from the average.
Such a version of quicksort is more resilient to bad luck.
If the elements don't have to be different, consider the case where they are all equal.
If your algorithm still runs in O(n * log n) time on that input, the same conclusion will most likely hold in general.
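For concreteness, here is a minimal Python sketch of the variant in question: the median of three uniformly random elements as the pivot, with a Hoare-style partition that also copes with equal elements. The question doesn't specify a partition scheme, so this is illustrative, not the asker's code.

```python
import random

def quicksort_median3(a, lo=0, hi=None):
    """Quicksort using the median of three randomly chosen elements as pivot."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    # Pick three positions uniformly at random and take the median of their values.
    i, j, k = (random.randint(lo, hi) for _ in range(3))
    pivot = sorted([a[i], a[j], a[k]])[1]
    # Hoare-style partition around the pivot value; handles duplicates safely.
    left, right = lo, hi
    while left <= right:
        while a[left] < pivot:
            left += 1
        while a[right] > pivot:
            right -= 1
        if left <= right:
            a[left], a[right] = a[right], a[left]
            left += 1
            right -= 1
    quicksort_median3(a, lo, right)
    quicksort_median3(a, left, hi)
```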

Related

Quicksort Analysis

Question:
Here's a modification of quick sort: whenever we have ten items or fewer in a sublist, we sort the sublist using selection sort rather than further recursing with quicksort. Does this change the big-oh time complexity of quicksort? Explain.
In my opinion the big-oh time complexity would change. We know that selection sort is O(n^2) and therefore sorting the sublist of ten items or fewer would take O(n^2). Until we get to a sublist that has ten or fewer items we would use quicksort and keep partitioning the list. So in the end we would have O(n log n + n^2) which is O(n^2).
Am I correct? If not, could someone explain why?
The reason that the time complexity is actually unaffected is that 10 is a constant term. No matter how large the total array is, it always takes a constant amount of time to sort subarrays of size 10 or less. If you are sorting a list with one million elements, that constant 10 is going to play a very small role in the actual time it takes to sort the list (most of the time will be spent partitioning the original array into subarrays recursively).
If sorting a list of 10 elements takes constant time and partitioning the array at each recursive step is linear, then you end up with roughly n/10 subarrays of 10 items or fewer, which together take O(n) to sort. That gives O(n log n + n), which is the same as O(n log n).
Saying that selection sort is O(n^2) means that the running time of the algorithm increases quadratically with the size of the input. Running selection sort on an array with a constant number of elements will always take constant time on its own, but if you were to compare the running time of selection sort on arrays of varying sizes, you would see a quadratic increase in the running time as input size varies linearly.
The big O complexity does not change. Please read up on the Master Method (aka Master Theorem) https://en.wikipedia.org/wiki/Master_theorem
If you think through the algorithm, as the size of the list being sorted grows exceptionally large, the time to sort the final ten elements in any given recursion subtree will make an insignificant contribution to the overall running time.
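For illustration, here is a sketch of the modified quicksort being asked about. The cutoff of 10 comes from the question; the Lomuto partition and all names are my own illustrative choices.

```python
CUTOFF = 10  # constant from the question; any fixed constant gives the same big-O

def selection_sort(a, lo, hi):
    """O(k^2) on a sublist of k elements, but k <= CUTOFF here, so constant time."""
    for i in range(lo, hi):
        smallest = min(range(i, hi + 1), key=lambda j: a[j])
        a[i], a[smallest] = a[smallest], a[i]

def hybrid_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= CUTOFF:
        selection_sort(a, lo, hi)   # constant-size base case
        return
    pivot = a[hi]                   # Lomuto partition around the last element
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    hybrid_quicksort(a, lo, i - 1)
    hybrid_quicksort(a, i + 1, hi)
```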

TriMerge vs Merge Sort

Can someone tell me which of the two algorithms, TriMergeSort and MergeSort, is better.
The time complexity of MergeSort is n log_2 n.
The time complexity of TriMergeSort is n log_3 n.
Since TriMergeSort uses base 3 and MergeSort uses base 2, I am assuming TriMergeSort is faster than MergeSort.
Please correct me if I am wrong.
While you are right that the number of levels in the recursive structure is log_2 n in the case of regular mergesort and log_3 n in the case of three-way mergesort, it's important to remember that the work done per level increases as the number of levels decreases. Specifically, in your merge step, you need to switch from a normal 2-way merge to a special 3-way merge. At each step in the merge, you need to determine which of the lists has the smallest unused element. In a two-way merge, you just compare the front elements of the two lists against one another. In a three-way merge, more comparisons are required because you have to find the lowest of three elements.
Generalizing this to a k-way mergesort, the number of layers will be log_k n, but the work for the merge will be higher than this. It's possible to do a k-way merge of n total elements in time O(n log k) by using binary heaps, so more work is required as k increases.
Interestingly, if we talk about the amount of work required overall, then we can see that we need to do O(n log k) work across log_k n levels. This gives us a total runtime of O(n log k · log_k n). Using the change-of-base formula for logarithms, which says that log_k n = log_2 n / log_2 k, we see that the runtime will be
O(n log k · log_k n)
= O(n log k · (log n / log k))
= O(n log n)
In other words, there isn't an asymptotic difference between the algorithms when you choose different values of k. The drop in levels due to a higher splitting factor is offset by an increased amount of work per level.
To figure out which algorithm is best, the best option would be to run them all and see what happens. Due to caching effects and locality of reference, I suspect that the answer might at some level depend on the particular architecture you're using.
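To make the O(n log k) merge step concrete, here is an illustrative heap-based k-way merge in Python. (The standard library's heapq.merge does the same job; this version is spelled out for clarity, and all names are my own.)

```python
import heapq

def k_way_merge(lists):
    """Merge k sorted lists in O(n log k): the heap holds one element per list."""
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)                            # O(k)
    merged = []
    while heap:
        value, which, pos = heapq.heappop(heap)    # O(log k) per output element
        merged.append(value)
        if pos + 1 < len(lists[which]):
            heapq.heappush(heap, (lists[which][pos + 1], which, pos + 1))
    return merged

# Example: a single 3-way merge step from a three-way mergesort.
print(k_way_merge([[1, 4, 7], [2, 5, 8], [3, 6, 9]]))
# [1, 2, 3, 4, 5, 6, 7, 8, 9]
```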
As far as Big-O complexity, it doesn't matter.
Regular merge sort is n * log_2(n), which is equivalent to n * (log(n) / log(2)). The log(2) is constant, so merge sort is simply n * log(n).
Tri-merge sort is n * log_3(n) which, using the same logic as for regular merge sort, is simply n * log(n).
Given that both reduce to O(n * log(n)), it's not really possible to say which is better.
An alternate way to demonstrate why you can't just assume tri-merge to be better:
Assume a 3-way merge is better than a 2-way merge.
In general, assume an (N+1)-way merge is better than an N-way merge.
If this were true, it would be best to use an N-way merge where N is the number of elements you're sorting. However, the merge step requires choosing the least element from N sources which requires O(N) time.
This means that the N-way merge sort runs in O(N^2) time, effectively making it selection sort.

Quicksort vs Median asymptotic behavior

Quicksort and median finding use the same method (divide and conquer), so why is it that they have different asymptotic behavior?
Is it that quicksort may not use the proper pivot?
When you use the partition method from quicksort (see the method in the link) to find the median, the method returns the index of an element that is already at its correct position. Based on this position, you only need to recurse into the one part that contains the median.
For example, if the array length is 5, the median is element 3. If the partition method returns 2, you only need to check the upper part of the array, from 2 to 5, not the whole array as quicksort does.
If you use Hoare's original select algorithm, you can get the same sort of poor worst case performance that you can from Quicksort.
If you use the median of medians, then you limit the worst case, at the expense of being slower in most typical cases.
You could use the median of medians to find a pivot for Quicksort, which would have roughly the same effect--limit the worst case, at the expense of being slower in most cases.
Of course, for the sort (in general) each partition operation is O(N), and you expect to do about log(N) partition operations, so you get approximately O(N log N) overall complexity.
With median finding, you also expect to do approximately O(log N) steps, but at each step you only consider the one partition from the previous step that can include the median (or quartile, etc., that you care about). You expect the sizes of those partitions to divide by (approximately) two at every step, rather than always having to partition the entire input, so the work is roughly N + N/2 + N/4 + ... <= 2N, and you end up with approximately O(N) complexity instead of O(N log N) overall.
[Note that throughout this, I'm sort of abusing big-O notation to represent expected complexity whereas big-O is really supposed to represent the upper-bound (i.e., worst-case) complexity.]
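As an illustration of the difference, here is a minimal quickselect sketch in Python. Note that after each partition it keeps only the side containing the target index, where quicksort would recurse into both sides (names and partition scheme are illustrative).

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (0-based) in expected O(n) time."""
    lo, hi = 0, len(a) - 1
    while True:
        if lo == hi:
            return a[lo]
        # Lomuto partition around a randomly chosen pivot.
        p = random.randint(lo, hi)
        a[p], a[hi] = a[hi], a[p]
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        # Unlike quicksort, keep only the side that contains index k.
        if k == i:
            return a[i]
        elif k < i:
            hi = i - 1
        else:
            lo = i + 1

# Median of a 5-element array (index 2 when 0-based).
print(quickselect([7, 1, 5, 3, 9], 2))   # 5
```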

Using median selection in quicksort?

I have a slight question about quicksort. In the case where the minimum or maximum value of the array is selected as the pivot, the partition is very inefficient, as the array size decreases by only one.
However, if I add code to select the median of the array as the pivot, I think it will be more efficient. Since the partition algorithm is already O(n), finding the median in O(n) would give an O(n log n) algorithm.
Can this be done?
You absolutely can use a linear-time median selection algorithm to compute the pivot in quicksort. This gives you a worst-case O(n log n) sorting algorithm.
However, the constant factor on linear-time selection tends to be so high that the resulting algorithm will, in practice, be much, much slower than a quicksort that just randomly chooses the pivot on each iteration. Therefore, it's not common to see such an implementation.
A completely different approach to avoiding the O(n^2) worst case is to use an approach like the one in introsort. This algorithm monitors the recursive depth of the quicksort. If it appears that the algorithm is starting to degenerate, it switches to a different sorting algorithm (usually heapsort) with a guaranteed worst case of O(n log n). This makes the overall algorithm O(n log n) without noticeably decreasing performance.
Hope this helps!
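Here is a minimal sketch of that introsort idea, assuming the conventional depth limit of about 2 * log2(n) and a heapsort fallback. Real introsort implementations also switch to insertion sort on small ranges; this shows only the depth-monitoring core, and the names are mine.

```python
import heapq
import math
import random

def introsort(a, lo=0, hi=None, depth=None):
    """Quicksort that falls back to heapsort when recursion gets too deep."""
    if hi is None:
        hi = len(a) - 1
    if depth is None:
        # Conventional limit: about twice the depth of a balanced recursion tree.
        depth = 2 * max(1, int(math.log2(max(2, hi - lo + 1))))
    if lo >= hi:
        return
    if depth == 0:
        # Degenerating: heapsort this slice for a guaranteed O(n log n) finish.
        chunk = a[lo:hi + 1]
        heapq.heapify(chunk)
        a[lo:hi + 1] = [heapq.heappop(chunk) for _ in range(len(chunk))]
        return
    p = random.randint(lo, hi)          # random pivot, Lomuto partition
    a[p], a[hi] = a[hi], a[p]
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    introsort(a, lo, i - 1, depth - 1)
    introsort(a, i + 1, hi, depth - 1)
```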

Quicksort with median in O(n log n)

I don't really understand why we don't just always select the median element as the pivot. This can be done in O(n) and thus results in a total run time of O(n log n).
I assume there is probably a large constant hidden in the O(n) for the median search.
From the Wikipedia Quicksort page:
Conversely, once we know a worst-case selection algorithm is available, we can use it to find the ideal pivot (the median) at every step of quicksort, producing a variant with worst-case O(n log n) running time. In practical implementations, however, this variant is considerably slower on average.
In other words, the cost of forcing it to be guaranteed O(n log n) is generally not worth paying. There's more information on that page, as well as on the selection algorithms page.
Use randomized quicksort and you have a running time of O(n log n) with very high probability.
The running time of finding the median is O(n) in expectation when using the randomized version of partition, but when the partition is repeatedly unbalanced at its extreme, the running time degrades to O(n^2). So you get no worst-case improvement from that alone.
But there is still hope. If you go through Cormen (CLRS), you will find that finding the i-th order statistic can be done in linear time even in the worst case. The technique used is to take the median of medians as the pivot element, which guarantees linear running time in any case.
So we can use that technique in quicksort as well to get O(n log n) worst-case running time.
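For reference, here is a compact sketch of that median-of-medians selection. It is illustrative Python, not the CLRS pseudocode, and it trades the book's in-place partitioning for clarity; it is still worst-case linear time.

```python
def median_of_medians(a):
    """Return a pivot guaranteed to land roughly in the middle 40% of a."""
    if len(a) <= 5:
        return sorted(a)[len(a) // 2]
    # Split into groups of 5, take each group's median, then recurse on those.
    medians = [sorted(group)[len(group) // 2]
               for group in (a[i:i + 5] for i in range(0, len(a), 5))]
    return median_of_medians(medians)

def select(a, k):
    """k-th smallest element (0-based) in worst-case O(n) time."""
    pivot = median_of_medians(a)
    lows = [x for x in a if x < pivot]
    pivots = [x for x in a if x == pivot]
    highs = [x for x in a if x > pivot]
    if k < len(lows):
        return select(lows, k)
    elif k < len(lows) + len(pivots):
        return pivot
    else:
        return select(highs, k - len(lows) - len(pivots))
```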
