Finding the median of medians of quicksort - algorithm

I am working on quicksort with the median-of-medians algorithm. I normally use selection sort to get the median of each subarray of 5 elements. However, if there are thousands of subarrays, that means I have to find the median of thousands of medians. I don't think I can use selection sort to find that median, because it would not be optimal.
Question:
Can anyone suggest a better way to find that median?
Thanks in advance.

The median-of-medians algorithm doesn't work by finding the median of each block of size 5 and then running a sorting algorithm on them to find the median. Instead, you typically would sort each block, take the median of each, then recursively invoke the median-of-medians algorithm on these medians to get a good pivot. It's very uncommon to see the median-of-medians algorithm used in quicksort, since the constant factor in the O(n) runtime of the median-of-medians algorithm is so large that it tends to noticeably degrade performance.
There are several possible improvements you can try over this original approach. The simplest way to get a good pivot is just to pick a random element - this leads to Θ(n log n) runtime with very high probability. If you're not comfortable using randomness, you can try using the introselect algorithm, which is a modification of the median-of-medians algorithm that tries to lower the constant factor by guessing an element that might be a good pivot and cutting off the recursion early if one is found. You could also try writing introsort, which uses quicksort and switches to a different algorithm (usually heapsort) if it appears that the algorithm is degenerating.
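For illustration, here is a minimal Python sketch of the random-pivot approach (the function name and structure are my own; a toy implementation, not tuned code):

```python
import random

def quicksort(a, lo=0, hi=None):
    """In-place quicksort with a uniformly random pivot.
    Expected O(n log n); bad cases are astronomically unlikely for large inputs."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    # Pick a random pivot and move it to the end (Lomuto partition scheme).
    p = random.randint(lo, hi)
    a[p], a[hi] = a[hi], a[p]
    pivot = a[hi]
    store = lo
    for i in range(lo, hi):
        if a[i] < pivot:
            a[i], a[store] = a[store], a[i]
            store += 1
    a[store], a[hi] = a[hi], a[store]
    quicksort(a, lo, store - 1)
    quicksort(a, store + 1, hi)
```

For example, `data = [3, 7, 1, 9, 4]; quicksort(data)` leaves `data` as `[1, 3, 4, 7, 9]`.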
Hope this helps!

Related

K Closest with unsorted array

I am prepping for interview LeetCode-type problems and I came across the k-closest problem, but given a sorted array. This problem requires finding the k closest elements by value to an input value from the array. The answer to this problem was fairly straightforward and I did not have any issues determining a linear-time algorithm to solve it.
However, working on this problem got me thinking: is it possible to solve this problem given an unsorted array in linear time? My first thought was to use a heap, which would give an O(n log k) time complexity solution, but I am trying to determine if it's possible to come up with an O(n) solution. I was thinking about possibly using something like quickselect, but the issue is that this has an expected time of O(n), not a worst-case time of O(n).
Is this even possible?
The median-of-medians algorithm makes Quickselect take O(n) time in the worst case.
It is used to select a pivot:
Divide the array into groups of 5 (O(n))
Find the median of each group (O(n))
Use Quickselect to find the median of the n/5 medians (O(n))
The resulting pivot is guaranteed to be greater than at least 30% of the elements and less than at least 30% of the elements, which is what guarantees linear-time Quickselect.
After selecting the pivot, of course, you have to continue on with the rest of Quickselect, which includes a recursive call like the one we made to select the pivot.
The worst case total time is T(n) = O(n) + T(0.7n) + T(n/5), which is still linear. Compared to the expected time of normal Quickselect, though, it's pretty slow, which is why we don't often use this in practice.
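As a rough illustration of those three pivot-selection steps folded into a complete selection routine, here is a minimal Python sketch (the function name and structure are mine, not from any particular library):

```python
def select(a, k):
    """Return the k-th smallest element (0-based) of list a in worst-case O(n)."""
    if len(a) <= 5:
        return sorted(a)[k]
    # Steps 1 & 2: split into groups of 5 and take each group's median.
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    # Step 3: recursively select the median of the medians as the pivot.
    pivot = select(medians, len(medians) // 2)
    # Partition around the pivot and recurse into the side containing index k.
    lows   = [x for x in a if x < pivot]
    highs  = [x for x in a if x > pivot]
    pivots = len(a) - len(lows) - len(highs)   # elements equal to the pivot
    if k < len(lows):
        return select(lows, k)
    elif k < len(lows) + pivots:
        return pivot
    else:
        return select(highs, k - len(lows) - pivots)
```

For example, `select([7, 1, 5, 3, 9, 4, 8], 3)` returns 5, the fourth-smallest element.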
Your heap solution would be very welcome at an interview, I'm sure.
If you really want to get rid of the log k factor, which in practical applications should seldom be a problem, then yes, using Quickselect would be another option. Something like this:
Partition your array into values smaller than and larger than x. <- O(n).
For the lower half, run Quickselect to find the kth largest number, then take the right-side partition, which contains your k largest numbers. <- O(n)
Repeat step 2 for the higher half, but for the k smallest numbers. <- O(n)
Merge your k smallest and k largest numbers and extract the k closest numbers. <- O(k)
This gives you a total time complexity of O(n), as you said.
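Here is a rough Python sketch of those four steps, reusing a small expected-linear-time quickselect helper (all names are mine; this illustrates the idea with a randomized pivot, so it is expected rather than worst-case O(n)):

```python
import random

def quickselect(a, k, key=lambda v: v):
    """Return a list containing the k smallest elements of a by key, in any order.
    Expected O(len(a)); this sketch copies sublists rather than partitioning in place."""
    if k <= 0:
        return []
    if k >= len(a):
        return list(a)
    pivot = key(random.choice(a))
    lows   = [v for v in a if key(v) <  pivot]
    equals = [v for v in a if key(v) == pivot]
    highs  = [v for v in a if key(v) >  pivot]
    if k <= len(lows):
        return quickselect(lows, k, key)
    if k <= len(lows) + len(equals):
        return lows + equals[:k - len(lows)]
    return lows + equals + quickselect(highs, k - len(lows) - len(equals), key)

def k_closest_unsorted(a, x, k):
    """k values of a closest to x, in expected linear time overall."""
    lower  = [v for v in a if v <= x]                       # step 1: partition around x
    higher = [v for v in a if v >  x]
    cand  = quickselect(lower,  k, key=lambda v: x - v)     # step 2: k largest values <= x
    cand += quickselect(higher, k, key=lambda v: v - x)     # step 3: k smallest values > x
    return quickselect(cand, k, key=lambda v: abs(v - x))   # step 4: k closest of the ~2k candidates
```

For example, `k_closest_unsorted([1, 5, 9, 14, 20], 10, 2)` returns `[9, 14]` in some order.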
However, a few points about your concern regarding expected time vs. worst-case time. I understand that if an interview question explicitly insists on worst-case O(n), then this solution might not be accepted, but otherwise, this can well be considered O(n) in practice.
The key point is that for randomized quickselect on random or well-behaved input, the probability that the running time goes beyond O(n) decreases exponentially as the input grows. Already at moderately large inputs, that probability is comparable to guessing a specific atom in the known universe. "Well-behaved" here means the input is somewhat random in nature and not adversarial. See this discussion on a similar (not identical) problem.

find approx median in unsorted list

I want to find an approximate median of an unsorted list. I know two algorithms:
algorithm 1 - quickselect
algorithm 2 - median of medians
I can't use quickselect in my project because it takes O(n^2) time in the worst case.
I have heard about median of medians, but my colleagues point out that although it runs in O(n), i.e. its time complexity is Cn, the constant factor is large compared to quickselect. I want to know: what is the constant factor associated with median of medians? And why does median of medians not use the pseudo-median of 9 elements?
Or is there any other algorithm to find an approximate median in linear time, O(n)?
Although I wouldn't be quick to discard quickselect, since its worst-case performance is highly improbable with properly chosen pivots...
Perhaps introselect:
Introselect (short for "introspective selection") is a selection algorithm that is a hybrid of quickselect and median of medians which has fast average performance and optimal worst-case performance.
Introselect works by optimistically starting out with quickselect and only switching to the worst-time linear algorithm if it recurses too many times without making sufficient progress. The switching strategy is the main technical content of the algorithm. Simply limiting the recursion to constant depth is not good enough, since this would make the algorithm switch on all sufficiently large lists. Musser discusses a couple of simple approaches:
Keep track of the list of sizes of the subpartitions processed so far. If at any point k recursive calls have been made without halving the list size, for some small positive k, switch to the worst-case linear algorithm.
Sum the size of all partitions generated so far. If this exceeds the list size times some small positive constant k, switch to the worst-case linear algorithm. This sum is easy to track in a single scalar variable.
Both approaches limit the recursion depth to k ⌈log n⌉ = O(log n) and the total running time to O(n).
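A minimal Python sketch of the second switching strategy from the quote (summing partition sizes); the names and the constant c are mine, and the fallback argument stands in for any worst-case-linear selection routine such as median of medians:

```python
import random

def introselect(a, k, fallback, c=3):
    """k-th smallest element of list a (0-based).
    Runs randomized quickselect, but if the summed sizes of the partitions
    processed so far exceed c * len(a), it switches to `fallback`, a
    worst-case O(n) selection routine (e.g. median of medians)."""
    budget = c * len(a)
    work = 0
    while True:
        if len(a) == 1:
            return a[0]
        work += len(a)
        if work > budget:
            return fallback(a, k)   # give up on luck, use the guaranteed-linear algorithm
        pivot = random.choice(a)
        lows  = [x for x in a if x < pivot]
        highs = [x for x in a if x > pivot]
        n_equal = len(a) - len(lows) - len(highs)
        if k < len(lows):
            a = lows
        elif k < len(lows) + n_equal:
            return pivot
        else:
            a, k = highs, k - len(lows) - n_equal
```

For instance, `introselect([9, 2, 7, 4, 6], 2, fallback=lambda a, k: sorted(a)[k])` returns 6; in a real implementation the fallback would be a median-of-medians select rather than a sort.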

Selection Algorithm Runtime

I am trying to figure out the most optimal way to compute a top-k query on some aggregation of data, let's say an array. I used to think the best way was to run through the array and maintain a heap or balanced binary tree of size k, leveraging that to compute the top-k value. Now I have run across the Selection Algorithm, which supposedly runs even faster. I understand how the Selection Algorithm works and how to implement it; I am just a little confused as to how it runs in O(n). I feel like in order for it to run in O(n) you would have to be extremely lucky. If you keep picking a random pivot point and partitioning around it, it could very well be the case that you end up basically sorting almost the entire array before stumbling upon your kth index. Are there any optimizations, such as not picking a random pivot? Or is my heap/tree method good enough for most cases?
What you're talking about there is quickselect, also known as Hoare's selection algorithm.
It does have O(n) average case performance, but its worst-case performance is O(n^2).
Like quicksort, the quickselect has good average performance, but is sensitive to the pivot that is chosen. If good pivots are chosen, meaning ones that consistently decrease the search set by a given fraction, then the search set decreases in size exponentially and by induction (or summing the geometric series) one sees that performance is linear, as each step is linear and the overall time is a constant times this (depending on how quickly the search set reduces). However, if bad pivots are consistently chosen, such as decreasing by only a single element each time, then worst-case performance is quadratic: O(n^2).
In terms of choosing pivots:
The easiest solution is to choose a random pivot, which yields almost certain linear time. Deterministically, one can use median-of-3 pivot strategy (as in quicksort), which yields linear performance on partially sorted data, as is common in the real world. However, contrived sequences can still cause worst-case complexity; David Musser describes a "median-of-3 killer" sequence that allows an attack against that strategy, which was one motivation for his introselect algorithm.
One can assure linear performance even in the worst case by using a more sophisticated pivot strategy; this is done in the median of medians algorithm. However, the overhead of computing the pivot is high, and thus this is generally not used in practice. One can combine basic quickselect with median of medians as fallback to get both fast average case performance and linear worst-case performance; this is done in introselect.
(quotes from Wikipedia)
So you're fairly likely to get O(n) performance with random pivots, but, if k is small and n is large, or if you're just unlucky, the O(n log k) solution using a size-k heap or BST could outperform this.
We can't tell you with certainty which one will be faster when - it depends on (1) the exact implementations, (2) the machine it's run on, (3) the exact sizes of n and k and finally (4) the actual data. The O(n log k) solution should be sufficient for most purposes.
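For reference, a minimal Python sketch of the size-k heap approach (using the standard heapq module; the function name is mine):

```python
import heapq

def top_k(values, k):
    """Largest k values of an iterable: O(n log k) time, O(k) extra space."""
    heap = []                           # min-heap holding the best k seen so far
    for v in values:
        if len(heap) < k:
            heapq.heappush(heap, v)
        elif v > heap[0]:               # v beats the smallest of the current top k
            heapq.heapreplace(heap, v)  # pop the smallest, push v
    return sorted(heap, reverse=True)

# Example: top_k([5, 1, 9, 3, 7], 2) -> [9, 7]
```

The standard library's `heapq.nlargest(k, values)` does essentially the same thing in one call.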

Need an efficient selection algorithm?

I am looking for an algorithm for selecting the A[N/4]-th element in an unsorted array A, where N is the number of elements of the array A. I want the algorithm to do the selection in sublinear time. I have knowledge of basic structures like a BST, etc. Which one will be the best algorithm for me, keeping in mind that I want it to be the fastest possible and it should not be too tough for me to implement? Here N can vary up to 250000. Any help will be highly appreciated. Note: the array can have non-unique elements.
As @Jerry Coffin mentioned, you cannot hope to get a sublinear-time algorithm here unless you are willing to do some preprocessing up front. If you want a linear-time algorithm for this problem, you can use the quickselect algorithm, which runs in expected O(n) time with an O(n^2) worst case. The median-of-medians algorithm has worst-case O(n) behavior, but has a high constant factor. One algorithm that you might find useful is the introselect algorithm, which combines the two previous algorithms to get a worst-case O(n) algorithm with a low constant factor. This algorithm is typically what's used to implement the std::nth_element algorithm in the C++ standard library.
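If a library call is acceptable, NumPy's partition function (whose default selection method is documented as introselect) gets you the N/4-th smallest element directly; a minimal sketch, assuming NumPy is available and using made-up example data:

```python
import numpy as np

a = np.random.randint(0, 1_000_000, size=250_000)  # example data in the size range mentioned
n = len(a)
print(np.partition(a, n // 4)[n // 4])              # element at rank N/4 (0-based)
```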
If you are willing to do some preprocessing ahead of time, you can put all of the elements into an order statistic tree. From that point forward, you can look up the kth element for any k in time O(log n) worst-case. The preprocessing time required is O(n log n), though, so unless you are making repeated queries this is unlikely to be the best option.
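As a Python stand-in for an order statistic tree, the third-party sortedcontainers library's SortedList supports indexed access after an O(n log n) build; this sketch assumes that library is installed (pip install sortedcontainers) and uses made-up example values:

```python
from sortedcontainers import SortedList

data = [42, 7, 19, 3, 88, 21]
ordered = SortedList(data)          # one-time preprocessing

def kth_smallest(k):
    """k-th smallest element (0-based); each query is fast once the list is built."""
    return ordered[k]

print(kth_smallest(len(data) // 4))  # -> 7
```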
Hope this helps!

Is it possible to calculate median of a list of numbers better than O(n log n)?

I know that it is possible to calculate the mean of a list of numbers in O(n). But what about the median? Is there any better algorithm than sorting (O(n log n)) and looking up the middle element (or the mean of the two middle elements if there is an even number of items in the list)?
Yes. You can do it (deterministically) in O(n).
What you're talking about is a selection algorithm, where k = n/2. There is a method based on the same partitioning function used in quicksort which works. It is called, not surprisingly, quickselect. While it can, like quicksort, have an O(n^2) worst case, this can be brought down to linear time using the proper pivot selection.
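To make the reduction concrete, here is a minimal Python sketch of the median as a selection with k around n/2, using a random pivot (so expected rather than worst-case linear time; the names are mine):

```python
import random

def quickselect(a, k):
    """k-th smallest element of a (0-based) with a random pivot: expected O(n)."""
    pivot = random.choice(a)
    lows  = [x for x in a if x < pivot]
    highs = [x for x in a if x > pivot]
    n_equal = len(a) - len(lows) - len(highs)
    if k < len(lows):
        return quickselect(lows, k)
    if k < len(lows) + n_equal:
        return pivot
    return quickselect(highs, k - len(lows) - n_equal)

def median(a):
    """Median in expected linear time; averages the two middle values for even-length input."""
    n = len(a)
    if n % 2 == 1:
        return quickselect(a, n // 2)
    return (quickselect(a, n // 2 - 1) + quickselect(a, n // 2)) / 2
```

For example, `median([3, 1, 4, 1, 5])` returns 3 and `median([3, 1, 4, 1])` returns 2.0.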
Partially irrelevant, but: a quick tip on how to quickly find answers to common basic questions like this on the web.
We're talking about medians? So go to the page about medians on Wikipedia.
Search the page for "algorithm":
Efficient computation of the sample median
Even though sorting n items takes in general O(n log n) operations, by using a "divide and conquer" algorithm the median of n items can be computed with only O(n) operations (in fact, you can always find the k-th element of a list of values with this method; this is called the selection problem).
Follow the link to the selection problem for the description of algorithm. Read intro:
... There are worst-case linear time selection algorithms. ...
And if you're interested read about the actual ingenious algorithm.
If the numbers are discrete (e.g. integers) and there is a manageable number of distinct values, you can use a "bucket sort" which is O(N), then iterate over the buckets to figure out which bucket holds the median. The complete calculation is O(N) in time and O(B) in space.
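A minimal sketch of that counting/bucket approach for non-negative integers bounded by a known maximum (the function name and the choice of the lower median for even-length input are mine):

```python
def median_of_small_ints(values, max_value):
    """Median of non-negative integers bounded by max_value: O(N + B) time, O(B) space."""
    counts = [0] * (max_value + 1)      # B = max_value + 1 buckets
    for v in values:
        counts[v] += 1
    # Walk the buckets until we pass the halfway point.
    target = (len(values) - 1) // 2     # lower median for even-length input
    seen = 0
    for value, c in enumerate(counts):
        seen += c
        if seen > target:
            return value
```

For example, `median_of_small_ints([4, 1, 3, 3, 7], max_value=10)` returns 3.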
Just for fun (and who knows, it may be faster) there's another randomized median algorithm, explained technically in Mitzenmacher and Upfal's book. Basically, you choose a polynomially smaller subset of the list that (with some careful bookkeeping) probably contains the real median, and then use it to find the real median. The book is on Google Books, and here's a link. Note: I was able to read the pages describing the algorithm, so assuming that Google Books shows the same pages to everyone, you can read them too.
It is a randomized algorithm s.t. if it finds the answer, it is 100% certain that it is the correct answer (this is called Las Vegas style). The randomness arises from the runtime --- occasionally (with probability 1/(sqrt(n)), I think) it FAILS to find the median, and must be re-run.
Asymptotically, it is exactly linear once you take into account the chance of failure --- that is to say, a single run is a wee bit less than linear, exactly such that when you account for the number of times you may need to re-run it, the total becomes linear.
Note: I'm not saying this is better or worse --- I certainly haven't done a real-life runtime comparison between these algorithms! I'm simply presenting an additional algorithm that has linear runtime, but works in a significantly different way.
This link has popped up recently on calculating median: http://matpalm.com/median/question.html .
In general I think you can't go beyond O(n log n) time, but I don't have a proof of that :). No matter how much you parallelize it, aggregating the results into a single value takes at least log n levels of execution.
Try the randomized sampling algorithm: the sample size (e.g. 2000) is independent of the data size n, and you can still get sufficiently high (99%) accuracy. If you need higher accuracy, just increase the sample size. A Chernoff bound can prove the failure probability for a given sample size. I've written some JavaScript code to implement the algorithm; feel free to take it. http://www.sfu.ca/~wpa10
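For illustration only, a minimal Python sketch of the sampling idea (the answer's own implementation is in JavaScript at the link above; this is just the same idea with names of my choosing):

```python
import random

def approx_median(values, sample_size=2000):
    """Approximate median from a fixed-size random sample, independent of len(values)."""
    if len(values) <= sample_size:
        sample = sorted(values)
    else:
        sample = sorted(random.sample(values, sample_size))
    return sample[len(sample) // 2]
```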
