How does median of medians work with quickselect?

From my understanding, the median of medians algorithm calls quickselect recursively. What I'm having trouble understanding is what median of medians should return. The idea is that it returns a good pivot value, but to perform quickselect we need a pivot index, not a pivot value. Is there a gap in my understanding? I've looked at online resources and still don't get it.

Your understanding is flawed.
Quickselect (and quicksort) require a pivot value, not a pivot index. The partition function returns a pivot index, which is where the pivot value ended up after the partition. There's no way to predict where that will be.
More accurately, the algorithms require that the pivot value have an index, because the pivot must be an element of the array, a point which is possibly not sufficiently emphasised in descriptions of the algorithm. The pivot is not part of either partition (although other elements with the same value may be, unless you use a three-way partition). That's important because it guarantees that the algorithm will eventually terminate: both partitions must be strictly smaller than the original array. If the pivot value were not in the array, it would be possible for one of the partitions to be empty and the other to be the original array, which would be an endless loop.
So the pivot must have an index, but it doesn't matter which index. Typically, partitioning algorithms start by swapping the pivot value to the beginning (or end) of the array. Some partitioning algorithms will then naturally move the pivot to the correct point as the partition proceeds; other algorithms swap the pivot into the correct point when it is known. But none of them are actually influenced by where it was to begin with.
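For concreteness, here is a minimal Python sketch (mine, not from this answer) of a Lomuto-style partition that takes a pivot *value*, parks it at the end of the range, and only discovers the pivot's final index as it runs:

```python
def partition(a, lo, hi, pivot_value):
    """Partition a[lo..hi] around pivot_value, which must occur in a[lo..hi].
    Returns the index where the pivot value ends up."""
    # Swap one occurrence of the pivot value out of the way, to the end.
    p = a.index(pivot_value, lo, hi + 1)
    a[p], a[hi] = a[hi], a[p]
    # Grow the "less than pivot" region from the left.
    store = lo
    for i in range(lo, hi):
        if a[i] < pivot_value:
            a[store], a[i] = a[i], a[store]
            store += 1
    # Put the pivot between the two partitions; this is its final index.
    a[store], a[hi] = a[hi], a[store]
    return store
```

Note that the caller supplies a value; the index only exists as the partition's return value.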
Median-of-medians finds a pivot value guaranteed to be not too far from the middle, which is enough to guarantee linear time complexity (for quickselect). Nonetheless, it's really only of theoretical interest. Selecting the pivot at random is much faster (and a lot less code), which more than compensates for the occasional bad pivot selection.
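That said, if you want to see how the pieces fit together, here is a rough sketch (group size 5, reusing the `partition` above; deliberately unoptimised) of median-of-medians feeding quickselect:

```python
def median_of_medians(a):
    """Return a pivot value guaranteed to be roughly in the middle:
    at least ~30% of the elements lie below it and ~30% above it."""
    if len(a) <= 5:
        return sorted(a)[len(a) // 2]
    # Median of each group of five, then recursively select their median.
    medians = [sorted(group)[len(group) // 2]
               for group in (a[i:i + 5] for i in range(0, len(a), 5))]
    return quickselect(medians, 0, len(medians) - 1, len(medians) // 2)

def quickselect(a, lo, hi, k):
    """Return the k-th smallest element (0-based) of a[lo..hi]."""
    while lo < hi:
        pivot_value = median_of_medians(a[lo:hi + 1])
        p = partition(a, lo, hi, pivot_value)
        if k == p:
            return a[p]
        elif k < p:
            hi = p - 1
        else:
            lo = p + 1
    return a[lo]
```

For example, quickselect([9, 1, 8, 2, 7, 3], 0, 5, 2) returns 3, the third-smallest element.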

Related

Quicksort to already sorted array

In this question: https://www.quora.com/What-is-randomized-quicksort
Alejo Hausner said, in "Cost of quicksort, in the worst case", that:
Ironically, if you apply quicksort to an array that is already sorted, you will probably get this costly behavior
I cannot get it. Can someone explain it to me?
https://www.quora.com/What-will-be-the-complexity-of-quick-sort-if-array-is-already-sorted may be an answer to this, but it did not give me a complete response.
The Quicksort algorithm is this:
select a pivot
move elements smaller than the pivot to the beginning, and elements larger than the pivot to the end
now the array looks like [<=p, <=p, <=p, p, >p, >p, >p]
recursively sort the first and second "halves" of the array
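In code, those steps look roughly like this (a sketch using the naive first-element pivot, which is exactly the choice discussed below):

```python
def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[lo]                        # select a pivot (naively: first element)
    a[lo], a[hi] = a[hi], a[lo]          # park it at the end while partitioning
    store = lo
    for i in range(lo, hi):
        if a[i] < pivot:                 # smaller elements go to the beginning
            a[store], a[i] = a[i], a[store]
            store += 1
    a[store], a[hi] = a[hi], a[store]    # pivot lands in its final slot
    quicksort(a, lo, store - 1)          # recursively sort the two "halves"
    quicksort(a, store + 1, hi)
```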
Quicksort will be efficient, with a running time close to n log n, if the pivot always ends up close to the middle of the array. This works perfectly if the pivot is the median value. But selecting the actual median would be costly in itself. If the pivot happens, out of bad luck, to be the smallest or largest element in the array, you'll get an array like this: [p, >p, >p, >p, >p, >p, >p]. If this happens too often, your "quicksort" effectively behaves like selection sort. In that case, since the size of the subarray to be recursively sorted only shrinks by 1 at every iteration, there will be n levels of iteration, each one costing n operations, so the overall complexity will be n^2.
Now, since we're not willing to use costly operations to find a good pivot, we might as well pick an element at random. And since we also don't really care about any kind of true randomness, we can just pick an arbitrary element from the array, for instance the first one.
If the array was shuffled uniformly at random, then picking the first element is great. You can reasonably hope it will regularly give you an "average" element. But if the array was already sorted... Then by definition the first element is the smallest. So we're in the bad case where the complexity is n^2.
A simple way to avoid "bad lists" is to pick a true random element instead of an arbitrary element. Or if you have reasons to believe that quicksort will often be called on lists that are almost sorted, you could pick the element in position n/2 instead of the one in position 1.
There are also several research papers about different ways to select the pivot, with precise calculations on the impact on complexity. For instance, you could pick three random elements, rank them from smallest to largest and keep the middle one. But the conclusion usually is: if you try to write a better pivot-selection, then it will also be more costly, and the overall complexity of the algorithm won't be improved that much.
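For example, here is a sketch of the random median-of-three idea just described (the helper name is mine):

```python
import random

def median_of_three_pivot(a, lo, hi):
    """Pick three random positions in a[lo..hi] and return the position
    holding the median of the three sampled values."""
    i, j, k = (random.randint(lo, hi) for _ in range(3))
    # Order the three positions by their values; the middle one is the median.
    return sorted((i, j, k), key=lambda idx: a[idx])[1]
```

Many real implementations sample the first, middle and last positions instead of three random ones, which avoids the random generator entirely at the cost of a constructible bad input.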
Depending on the implementation, there are several 'common' ways to choose the pivot.
In general, for 'unsorted' input there is no good or bad way to choose it.
So some implementations just take the first element as pivot.
In the case of an already sorted input, this results in the worst pivot possible, because the left interval will always be empty.
-> recursion steps = O(n) instead of the desired O(log n).
This leads to O(n²) complexity, which is very bad for sorting.
Choosing the pivot at random avoids this behavior. It is extremely unlikely that a randomly chosen pivot will have the same bad characteristics in every recursion as described above.
Also, a deliberately bad input cannot be constructed, because you cannot predict the choices of the random generator (if it's a good one).

Pivot element selection in Quick Select

Is there any significance to selecting a random pivot over the last element for quickselect?
I can still find the required element by always selecting the last element as pivot. Will it affect the runtime?
The choice of pivot has a huge influence on the runtime of quickselect on a given array.
In deterministic quickselect, if you always choose the last element as your pivot, imagine what happens when you run it on an already-sorted list. Your pivot will always be the worst possible pivot and will only eliminate one number from the array at a time, leading to a Θ(n^2) runtime.
In randomized quickselect, it's still technically possible that the runtime will be Θ(n^2) if the algorithm always makes the worst possible choice on each recursive call, but this is extraordinarily unlikely. The runtime will be O(n) with high probability, and there is no "killer" input.
In other words, quickselect with a deterministically-chosen pivot always has at least one "killer" input that forces it to run in Θ(n^2) time, while quickselect with a random pivot has no killer inputs and excellent average-case guarantees. The analyses are totally different as a result.
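A minimal sketch of the randomized version (the only difference from the deterministic one is the single random.randint line):

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (0-based) of the list a."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        p = random.randint(lo, hi)       # random pivot: no "killer" input exists
        a[p], a[hi] = a[hi], a[p]        # move the pivot to the end
        pivot, store = a[hi], lo
        for i in range(lo, hi):          # standard Lomuto partition
            if a[i] < pivot:
                a[store], a[i] = a[i], a[store]
                store += 1
        a[store], a[hi] = a[hi], a[store]
        if k == store:
            return a[k]
        elif k < store:
            hi = store - 1
        else:
            lo = store + 1
    return a[k]
```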
Hope this helps!

Choice of pivot in Quick Sort Algorithm

I am learning Quick Sort. I know that Quick Sort performs badly when the pivot value makes an unbalanced partition, and so the first or last element is not a good choice, because if the list is almost sorted the partition would be unbalanced.
As I searched, I found two options:
One was to choose a pivot randomly between low (the lowest index) and up (the highest index). It seems a safe option, but random number generators are time consuming.
The second would be to take the median of all the elements. This option is costly, so instead the median of the first, last and middle elements can be used as the pivot.
Which method proves to be the most efficient for Quick Sort? Is there any other method available for making the choice of pivot element?
Yes, if you're worried about the array being sorted or nearly sorted, you can apply successively more effort to choosing a good pivot, as you suggest, but at the cost of slowing the algorithm down if your data is unsorted. Skiena, in The Algorithm Design Manual, has a good discussion of pivot selection, and he suggests you could go as far as randomizing the array before applying quicksort, but my guess is another sorting algorithm would perform better if you're that worried.
Which method proves out to be the most efficient for Quick Sort?
The key point here is to perform performance measurements on your data.
There is no single “most efficient” choice for quicksort. Either you slow down your sort for some (many?) cases by spending extra time selecting each pivot, or you have pathological (O(N^2)) behavior for some inputs. Spending more time selecting the pivot slows down sorting for some inputs while speeding up other cases. It's always a trade-off. You choose a trade-off that improves your speed for the kind of inputs you expect.
In the real world, we can prevent the pathological cases fairly cheaply using introsort. One characteristic of a pathological case is deep recursion, so introsort detects deep recursion and switches to a different (but guaranteed O(N log N)) algorithm.
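A sketch of that idea (the depth budget and fallback here are my choices, for illustration): give quicksort a recursion budget of about 2·log2(N), and if a subproblem exhausts it, finish that subarray with heapsort instead.

```python
import heapq

def introsort(a, lo=0, hi=None, depth=None):
    if hi is None:
        hi = len(a) - 1
    if depth is None:
        depth = 2 * max(1, (hi - lo + 1).bit_length())   # ~2*log2(N) levels
    if lo >= hi:
        return
    if depth == 0:
        # Recursion too deep: pathological input suspected. Fall back to
        # heapsort (guaranteed O(N log N)) for this subarray.
        sub = a[lo:hi + 1]
        heapq.heapify(sub)
        a[lo:hi + 1] = [heapq.heappop(sub) for _ in range(len(sub))]
        return
    # Otherwise an ordinary quicksort step (last element as pivot, for brevity).
    pivot, store = a[hi], lo
    for i in range(lo, hi):
        if a[i] < pivot:
            a[store], a[i] = a[i], a[store]
            store += 1
    a[store], a[hi] = a[hi], a[store]
    introsort(a, lo, store - 1, depth - 1)
    introsort(a, store + 1, hi, depth - 1)
```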
If you are really worried about worst-case scenarios, randomize the subarray in each recursive call; this should protect you against the worst case.

What's the difference between good pivot VS "bad pivot" in Select algorithm?

I'm trying to understand the Select algorithm and I came across the notions of a good pivot vs. a bad pivot. I can see that the algorithm uses the Partition algorithm to separate the bigger elements onto the right of the pivot and the smaller elements onto the left of the pivot.
But what does a "bad pivot" mean?
How can a bad pivot push the total run time up to O(n^2)?
Thanks
The selection algorithm will be fast if it can discard huge chunks of the array at each step. A good pivot is one that, for some definition of "a lot", causes the algorithm to discard "a lot" of the array elements. A bad pivot is one for which the algorithm discards very little of the array.
In the worst case, the pivot might be the largest or smallest element of the array. If this happens, then the algorithm partitions the elements so that one of the two groups is empty, since there will be either no elements less than the pivot or no elements greater than the pivot. This partitioning step takes time O(n) and will have to be run O(n) times, since each iteration decreases the size of the array by only one. This degrades the algorithm's runtime to O(n^2). Interestingly, this is also the way to get quicksort to degenerate to O(n^2) time.
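To see the quadratic blow-up concretely, this little model (my own, for illustration) counts the comparisons a selection pass makes when every pivot is the smallest remaining element, so each O(n) partition discards only one element:

```python
def comparisons_with_worst_pivots(n):
    """Model quickselect searching for the maximum when the pivot is
    always the minimum: each partition pass removes just one element."""
    count, size = 0, n
    while size > 1:
        count += size - 1   # one partition pass compares against everything
        size -= 1           # ...but only the pivot is discarded
    return count

print(comparisons_with_worst_pivots(100))   # 4950, i.e. n*(n-1)/2 ~ O(n^2)
```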
Hope this helps!

Quick sort Worst case

I'm working on the program and just need the following explained to understand it better.
What is the worst case running time for Quicksort, and what may cause this worst case performance? How can we modify a quicksort program to mitigate this problem?
I know that it has worst case O(n^2), and I know it occurs when the pivot is the unique minimum or maximum element. My question is how I can modify the program to mitigate this problem.
A good pivot selection algorithm is what makes quicksort good.
Quicksort's performance is dependent on your pivot selection algorithm. The most naive pivot selection algorithm is to just choose the first element as your pivot. It's easy to see that this results in worst case behavior if your data is already sorted (the first element will always be the min).
There are two common algorithms to solve this problem: randomly choose a pivot, or choose the median of three. Random is obvious so I won't go into detail. Median of three involves selecting three elements (usually the first, middle and last) and choosing the median of those as the pivot.
Since random number generators are typically pseudo-random (and therefore deterministic), and a non-random median-of-three algorithm is deterministic, it's possible to construct data that results in worst case behavior; however, it's rare for this to come up in normal usage.
You also need to consider the performance impact. The running time of your random number generator will affect the running time of your quicksort. With median of three, you are increasing the number of comparisons.
Worst performance condition:
When the pivot chosen at each step is the greatest or smallest element of the subarray, and this pattern repeats.
So for 1 3 5 4 2:
if the pivots are chosen in the order 1, 2, 3, 4, 5 or 5, 4, 3, 2, 1,
then the worst case running time is O(n^2).
How to avoid the worst case (the median-of-medians scheme):
(1) Divide the array into groups of five elements. So for 1..100 the groups are (1..5) (6..10) ... (96..100).
(2) Find the median of each group, here 3, 8, 13, ..., 98.
(3) Now recursively find the median of those medians and choose it as the pivot; it is guaranteed to fall reasonably close to the true middle.
An easy modification is to choose the pivot randomly. This gives good results with high probability.
It's been a while, but I think the worst case for quicksort (with a naive pivot choice) occurs when the data is already sorted. A quick check to see if the data is already sorted could help alleviate this problem.
The worst case running time depends on the partition method within quick-sort. That has two aspects:
selecting the pivot
how to partition around the pivot
Good strategies to select the pivot have been outlined in previous posts (median of medians, median of three, or randomization). But even if the pivot is wisely selected, in the extreme case where an array has all equal elements, it will lead to worst case runtime if only two partitions are built, because one partition will carry all the elements equal to the pivot, that is, all the elements:
this causes partition to be called n times, each call taking n/2 steps on average, leading to O(n²)
this is not good, because it's not a theoretical worst case scenario but a quite common one
note that it is not solved by detecting the empty partition, because the pivot could have the highest or lowest element value (e.g. the median is 5, which is also the highest element value, but there might still be a few misplaced values < 5)
A way around this problem is to partition into three partitions: a lower one (elements < pivot), an equal one (elements = pivot) and an upper one (elements > pivot). The "= pivot" elements are then in their final position. The lower and upper partitions still need to be sorted if non-empty.
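A sketch of such a three-way partition (the classic Dutch national flag scheme):

```python
def partition3(a, lo, hi, pivot):
    """Rearrange a[lo..hi] into [< pivot | == pivot | > pivot].
    Returns (lt, gt) such that a[lt..gt] are the elements equal to the pivot."""
    lt, i, gt = lo, lo, hi
    while i <= gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif a[i] > pivot:
            a[i], a[gt] = a[gt], a[i]
            gt -= 1          # the swapped-in element is unexamined: don't advance i
        else:
            i += 1
    return lt, gt
```

On an all-equal array the whole range comes back as the middle partition, both recursive calls get empty ranges, and the sort finishes after a single pass.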
Together with randomization, median of medians, or some combination of these to select the pivot, a worst case scenario is quite rare but not impossible, which leaves the algorithm with a worst case upper bound of O(n²).
This question is asked frequently. As far as I have researched, there are two main causes of the worst case:
The array is already sorted, ascending or descending, and the pivot is chosen as the minimum (smallest) or maximum (greatest) element of the list, e.g. [2,3,4] or [4,3,2].
All elements are the same, e.g. [2,2,2].
