Time Complexity of Simplified 3-Way Partition Sort

Below is my algorithm that's a simplified take on Dijkstra's 3-way partition algorithm for a generic list:
import java.util.Collections;
import java.util.List;

static <T extends Comparable<T>> void dutchSort(List<T> list, int left, int right) {
    if (left >= right) return;
    T pivot = list.get(left);
    // smaller - index of the last element smaller than pivot value
    // equal - index of the last element equal to pivot value
    // larger - index of the first element larger than pivot value
    int smaller = left - 1, equal = left, larger = right;
    // before sorting is completed, 'equal' is the current value
    // much like 'i' in a for-loop
    // O(N) time
    while (equal < larger) {
        if (list.get(equal).compareTo(pivot) < 0)
            Collections.swap(list, equal, ++smaller);
        else if (list.get(equal).equals(pivot))
            equal++;
        else
            Collections.swap(list, equal, --larger);
    }
    // recursively sort smaller subarray
    dutchSort(list, left, smaller + 1);
    // recursively sort larger subarray
    dutchSort(list, equal, list.size());
}
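(For reference, a hypothetical driver showing how the method is meant to be called: the right bound is exclusive, matching how 'larger' is initialized. It assumes java.util.ArrayList and java.util.Arrays are also imported and that main lives in the same class as dutchSort.)
public static void main(String[] args) {
    List<Integer> list = new ArrayList<>(Arrays.asList(3, 1, 4, 1, 5, 9, 2, 6));
    dutchSort(list, 0, list.size());
    System.out.println(list); // expected: [1, 1, 2, 3, 4, 5, 6, 9]
}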
This is O(1) space, and I think it's O(N^N) time, but I'm not sure. Toptal's post on 3-way QuickSort says it's O(N^2), but the difference is that my algorithm is much more naive. My thought process: the while loop takes O(N) time, and in the worst case (all N elements distinct?) the problem is broken down into N subarrays of size 1.
I tried the Master Theorem, but I was not sure about any of the variable values. I think the number of subproblems is 2, each recursive call reduces the problem by a factor of 2, and merging the subproblems takes O(1) work.
All this is just educated guessing and I'm likely pretty off, so I'd really like to rigorously solve the time complexity.
Is O(N^N) time correct? And if so, why?
Thanks so much :)

So the while loop is O(n) on the initial call. If we assume an array of [1, 2, 3, 4, 5], then the first time through the loop list[equal] == pivot, and we increment equal.
The second and subsequent times through the loop, list[equal] > pivot, so we decrement larger and swap with that element. When the loop is finished, you have equal=1, and smaller hasn't changed. Your recursive calls become:
dutchSort(list, 0, 0)
dutchSort(list, 1, n)
So one of the items has dropped off.
Do the same mental exercise for a few more recursion depths, and I think you'll get an idea of how the partitioning works.
For your algorithm to be O(N^N), it would have to compare every element against every other element multiple times. But that doesn't happen because at each level of recursion you're splitting the problem into two parts. Once something is split into the left half of the array, it can't ever be compared with something that was moved into the right half of the array. So the worst case is that every element is compared against every other element. That would be O(N^2).
When all elements are equal, the algorithm is O(N).
I think the algorithm's complexity is determined by the number of unique items. It doesn't appear that initial array order will have any effect.
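To put a rough recurrence on this (a sketch, not a formal analysis): if every element is distinct and each pivot happens to be the smallest (or largest) value in its range, only the pivot's equal-block is peeled off per call, so T(N) = T(N-1) + Θ(N), which telescopes to Θ(N^2). When all elements are equal, the while loop finishes in a single Θ(N) pass and both recursive calls get empty ranges, which is the O(N) case mentioned above.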

Why is this the cost?

The Quicksort algorithm is:
Quicksort(A,p,r)
    if p < r then
        q <- partition(A,p,r)
        Quicksort(A,p,q-1)
        Quicksort(A,q+1,r)
According to my notes, the cost of Quicksort(A,1,n) is T(n) = T(q) + T(n-q) + cost of partition.
Why is the cost like that and not T(n) = T(q-1) + T(n-q) + cost of partition?
And also, why is the cost of the worst case T(n) = T(n-1) + Θ(n)?
I'm more confident about the answer to your second question.
In the worst case, the pivot can always turn out to be the lowest number (or the highest number) in the array. In that case, the divided arrays shall be of length n-1 and 0 respectively. Hence the recurrence relation shall be:
T(n) = T(n-1) + T(0) + work done for partition
     = T(n-1) + 0 + O(n)
For example, take the worst case where the array is originally sorted in ascending order and you always choose the 1st element as the pivot.
Initial Array: {1, 2, 3, 4, 5}
Pivot Element: 1.
Partitioned arrays: {} and {2,3,4,5}
Next pivot element: 2
Partitioned arrays: {} and {3,4,5}
...
Here you can see that at each partition, the size of problem decreases by just 1 and not by a factor of half.
Hence T(n) = T(n-1) + work done for partitioning, which is O(n).
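Unrolling that recurrence shows where the quadratic worst case comes from: T(n) = T(n-1) + cn = T(n-2) + c(n-1) + cn = ... = c(1 + 2 + ... + n) = c * n(n+1)/2, which is Θ(n^2).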
Only the highest-order terms are considered when performing time complexity analysis, because only those terms remain relevant as the input gets larger. For example: O(0.0001n^3 + 0.002n^2 + 0.1n + 1000000) = O(n^3). It follows that T(q-1) can be treated as T(q), since the -1 is irrelevant for large values of q.
I am not sure if your note is entirely accurate. user1990169 has kindly answered why the general Quicksort has the worst case time complexity of O(n^2), but it's actually possible to spend O(n) time to determine the median in an unsorted array of n elements, meaning we can always pick the median value (the best value) for the pivot in each iteration. The time complexity of T(n)=T(n-1)+Θ(n) may result from an array where all elements have the same value, in which case, depending on implementation, all elements other than the pivot may get put into the LEFT partition or the RIGHT partition. However, even this can be avoided with some clever implementation. Thus the complexity analysis of T(n)=T(n-1)+Θ(n) may be from a specific implementation of Quicksort, rather than an optimal one.
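(For comparison, if the pivot is always the median, the recurrence becomes T(n) = 2T(n/2) + Θ(n), which the Master Theorem solves to Θ(n log n).)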

Dividing the elements of an array in 3 groups

I have to divide the elements of an array into 3 groups. This needs to be done without sorting the array. Consider this example:
we have 120 unsorted values; the smallest 40 values need to be in the first group, the next 40 in the second, and the largest 40 in the third group.
I was thinking of the median-of-medians approach but am not able to apply it to my problem; kindly suggest an algorithm.
You can call quickselect twice on your array to do this in-place and in average case linear time. The worst case runtime can also be improved to O(n) by using the linear time median of medians algorithm to choose an optimal pivot for quickselect.
For both calls to quickselect, use k = n / 3. On your first call, use quickselect on the entire array, and on your second call, use it from arr[k..n-1] (for a 0-indexed array).
Wikipedia explanation of quickselect:
Quickselect uses the same overall approach as quicksort, choosing one element as a pivot and partitioning the data in two based on the pivot, accordingly as less than or greater than the pivot. However, instead of recursing into both sides, as in quicksort, quickselect only recurses into one side – the side with the element it is searching for. This reduces the average complexity from O(n log n) (in quicksort) to O(n) (in quickselect).
As with quicksort, quickselect is generally implemented as an in-place algorithm, and beyond selecting the kth element, it also partially sorts the data. See selection algorithm for further discussion of the connection with sorting.
To divide the elements of the array into 3 groups, use the following algorithm written in Python in combination with quickselect:
k = n / 3
# First group smallest elements in array
quickselect(L, 0, n - 1, k) # Call quickselect on your entire array
# Then group middle elements in array
quickselect(L, k, n - 1, k) # Call quickselect on subarray
# Largest elements in array are already grouped so
# there is no need to call quickselect again
The key point of all this is that quickselect uses a subroutine called partition. Partition arranges an array into two parts, those greater than a given element and those less than a given element. Thus it partially sorts an array around this element and returns its new sorted position. Thus by using quickselect, you actually partially sort the array around the kth element (note that this is different from actually sorting the entire array) in-place and in average-case linear time.
Time complexity of quickselect:
Worst case performance: O(n^2)
Best case performance: O(n)
Average case performance: O(n)
The runtime of quickselect is almost always linear and not quadratic, but this depends on the fact that for most arrays, simply choosing a random pivot point will almost always yield linear runtime. However, if you want to improve the worst case performance for your quickselect, you can choose to use the median of medians algorithm before each call to approximate an optimal pivot to be used for quickselect. In doing so, you will improve the worst case performance of your quickselect algorithm to O(n). This overhead probably isn't necessary but if you are dealing with large lists of randomized integers it can prevent some abnormal quadratic runtimes of your algorithm.
Here is a complete example in Python which implements quickselect and applies it twice to a reverse-sorted list of 120 integers and prints out the three resulting sublists.
from random import randint

def partition(L, left, right, pivotIndex):
    '''partition L so it's ordered around L[pivotIndex]
    also return its new sorted position in array'''
    pivotValue = L[pivotIndex]
    L[pivotIndex], L[right] = L[right], L[pivotIndex]
    storeIndex = left
    for i in xrange(left, right):
        if L[i] < pivotValue:
            L[storeIndex], L[i] = L[i], L[storeIndex]
            storeIndex = storeIndex + 1
    L[right], L[storeIndex] = L[storeIndex], L[right]
    return storeIndex

def quickselect(L, left, right, k):
    '''retrieve kth smallest element of L[left..right] inclusive
    additionally partition L so that it's ordered around kth
    smallest element'''
    if left == right:
        return L[left]
    # Randomly choose pivot and partition around it
    pivotIndex = randint(left, right)
    pivotNewIndex = partition(L, left, right, pivotIndex)
    pivotDist = pivotNewIndex - left + 1
    if pivotDist == k:
        return L[pivotNewIndex]
    elif k < pivotDist:
        return quickselect(L, left, pivotNewIndex - 1, k)
    else:
        return quickselect(L, pivotNewIndex + 1, right, k - pivotDist)

def main():
    # Setup array of 120 elements [120..1]
    n = 120
    L = range(n, 0, -1)
    k = n / 3
    # First group smallest elements in array
    quickselect(L, 0, n - 1, k)  # Call quickselect on your entire array
    # Then group middle elements in array
    quickselect(L, k, n - 1, k)  # Call quickselect on subarray
    # Largest elements in array are already grouped so
    # there is no need to call quickselect again
    print L[:k], '\n'
    print L[k:k*2], '\n'
    print L[k*2:]

if __name__ == '__main__':
    main()
I would take a look at order statistics. The kth order statistic of a statistical sample is equal to its kth-smallest value. The problem of computing the kth smallest (or largest) element of a list is called the selection problem and is solved by a selection algorithm.
It is right to think along the lines of median of medians. However, instead of finding the median, you want to find the two group boundaries, i.e. the 40th and 80th smallest elements for the 120-value example. Just like finding the median, it takes only linear time to find both of them using a selection algorithm. Finally you go over the array and partition the elements according to these two elements, which is linear time as well (a sketch of that final pass follows below).
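For illustration, here is a hypothetical sketch (in Java) of that final linear pass, assuming a selection algorithm has already produced the two boundary values b1 (the n/3-th smallest) and b2 (the 2n/3-th smallest); it only labels each element's group rather than moving elements, and assumes distinct values for simplicity:
static int[] groupOf(int[] a, int b1, int b2) {
    int[] group = new int[a.length];
    for (int i = 0; i < a.length; i++) {
        if (a[i] <= b1) group[i] = 1;        // smallest third
        else if (a[i] <= b2) group[i] = 2;   // middle third
        else group[i] = 3;                   // largest third
    }
    return group;
}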
PS. If this is your exercise in an algorithm class, this might help you :)
Allocate a second array of the same size as the input array.
Scan the input array once and keep track of the min and max values of the array,
and at the same time set all the values of the second array to 1.
Compute delta = (max - min) / 3.
Scan the array again and set the second array entry to 2 if the number is > min+delta and < max-delta; otherwise, if it is >= max-delta, set it to 3.
As a result you will have an array that tells which group each number is in (a code transcription of these steps is sketched below).
I am assuming that all the numbers are different from each other.
Complexity: O(2n)
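A direct (hypothetical) transcription of those steps in Java; note that it groups by value range, so the three groups come out equal-sized only if the values happen to be spread evenly:
static int[] groupByRange(int[] a) {
    int min = a[0], max = a[0];               // assumes a non-empty array
    int[] group = new int[a.length];
    for (int i = 0; i < a.length; i++) {      // one scan: find min and max, default every entry to group 1
        group[i] = 1;
        if (a[i] < min) min = a[i];
        if (a[i] > max) max = a[i];
    }
    int delta = (max - min) / 3;              // following the steps literally (integer division)
    for (int i = 0; i < a.length; i++) {      // second scan: assign groups 2 and 3
        if (a[i] >= max - delta) group[i] = 3;
        else if (a[i] > min + delta) group[i] = 2;
    }
    return group;
}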

Find First Unique Element

I had this question in an interview which I couldn't answer.
You have to find first unique element(integer) in the array.
For example:
3,2,1,4,4,5,6,6,7,3,2,3
Then the unique elements are 1, 5, 7 and the first unique element is 1.
The Solution required:
O(n) Time Complexity.
O(1) Space Complexity.
I tried saying:
Using hashmaps, bit vectors... but none of them have O(1) space complexity.
Can anyone tell me solution with space O(1)?
Here's a non-rigorous proof that it isn't possible:
It is well known that duplicate detection cannot be done better than O(n * log n) when you use O(1) space. Suppose that the current problem were solvable in O(n) time and O(1) memory. If we get the index k of the first non-repeating number as anything other than 0, we know that the element at index k-1 is repeated, and hence with one more sweep through the array we can find its duplicate, making duplicate detection an O(n) exercise.
Again it is not rigorous and we can get into a worst case analysis where k is always 0. But it helps you think and convince the interviewer that it isn't likely to be possible.
http://en.wikipedia.org/wiki/Element_distinctness_problem says:
Elements that occur more than n/k times in a multiset of size n may be found in time O(n log k). Here k = n since we want elements that appear more than once.
I think that this is impossible. This isn't a proof, but evidence for a conjecture. My reasoning is as follows...
First, you said that there is no bound on value of the elements (that they can be negative, 0, or positive). Second, there is only O(1) space, so we can't store more than a fixed number of values. Hence, this implies that we would have to solve this using only comparisons. Moreover, we can't sort or otherwise swap values in the array because we would lose the original ordering of unique values (and we can't store the original ordering).
Consider an array where all the integers are unique:
1, 2, 3, 4, 5, 6, 7, 8, 9, 10
In order to return the correct output 1 on this array, without reordering the array, we would need to compare each element to all the other elements, to ensure that it is unique, and do this in reverse order, so we can check the first unique element last. This would require O(n^2) comparisons with O(1) space.
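For contrast, the straightforward O(n^2)-time, O(1)-space baseline that this argument treats as the starting point would look roughly like this (a sketch in Java; firstUniqueIndex is just an illustrative name):
static int firstUniqueIndex(int[] a) {
    for (int i = 0; i < a.length; i++) {
        boolean unique = true;
        for (int j = 0; j < a.length; j++) {
            if (i != j && a[i] == a[j]) { unique = false; break; }
        }
        if (unique) return i;   // index of the first element that appears exactly once
    }
    return -1;                  // no unique element
}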
I'll delete this answer if anyone finds a solution, and I welcome any pointers on making this into a more rigorous proof.
Note: This can't work in the general case. See the reasoning below.
Original idea
Perhaps there is a solution in O(n) time and O(1) extra space.
It is possible to build a heap in O(n) time. See Building a Heap.
So you build the heap backwards, starting at the last element in the array and making that last position the root. When building the heap, keep track of the most recent item that was not a duplicate.
This assumes that when inserting an item into the heap, you will encounter any identical item that already exists in the heap. I don't know if I can prove that . . .
Assuming the above is true, then when you're done building the heap, you know which item was the first non-duplicated item.
Why it won't work
The algorithm to build a heap in place starts at the midpoint of the array and assumes that all of the nodes beyond that point are leaf nodes. It then works backward (towards item 0), sifting items into the heap. The algorithm doesn't examine the last n/2 items in any particular order, and the order changes as items are sifted into the heap.
As a result, the best we could do (and even then I'm not sure we could do it reliably) is find the first non-duplicated item only if it occurs in the first half of the array.
The OP's original question doesn't mention any limit on the numbers (although later the OP added that a number can be negative/positive/zero). Here I assume one more condition:
The numbers in the array are all smaller than the array length and non-negative.
Then an O(n) time, O(1) space solution is possible and looks like an interview question, and the test case the OP gives in the question complies with the above assumption.
Solution:
for (int i = 0; i < nums.length; i++) {
    if (nums[i] != i) {
        if (nums[i] == -1) continue;
        if (nums[nums[i]] == nums[i]) {
            nums[nums[i]] = -1;
        } else {
            swap(nums, nums[i], i);
            i--;
        }
    }
}

for (int i = 0; i < nums.length; i++) {
    if (nums[i] == i) {
        return i;
    }
}
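The snippet above relies on a swap helper that isn't shown in the answer; a definition along these lines is assumed:
static void swap(int[] nums, int i, int j) {
    // exchange nums[i] and nums[j]
    int tmp = nums[i];
    nums[i] = nums[j];
    nums[j] = tmp;
}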
The algorithm here treats the original array as the buckets in a bucket sort: put each number into its bucket, and if a number shows up more than once, mark its bucket as -1. Then use another loop to find the first index i with nums[i] == i.

Sort name & time complexity

I "invented" "new" sort algorithm. Well, I understand that I can't invent something good, so I tried to search it on wikipedia, but all sort algorithms seems like not my. So I have three questions:
What is name of this algorithm?
Why it sucks? (best, average and worst time complexity)
Can I make it more better still using this idea?
So, idea of my algorithm: if we have an array, we can count number of sorted elements and if this number is less that half of length we can reverse array to make it more sorted. And after that we can sort first half and second half of array. In best case, we need only O(n) - if array is totally sorted in good or bad direction. I have some problems with evaluation of average and worst time complexity.
Code on C#:
public static void Reverse(int[] array, int begin, int end) {
    int length = end - begin;
    for (int i = 0; i < length / 2; i++)
        Algorithms.Swap(ref array[begin + i], ref array[begin + length - i - 1]);
}

public static bool ReverseIf(int[] array, int begin, int end) {
    int countSorted = 1;
    for (int i = begin + 1; i < end; i++)
        if (array[i - 1] <= array[i])
            countSorted++;
    int length = end - begin;
    if (countSorted <= length / 2)
        Reverse(array, begin, end);
    if (countSorted == 1 || countSorted == (end - begin))
        return true;
    else
        return false;
}

public static void ReverseSort(int[] array, int begin, int end) {
    if (begin == end || begin == end + 1)
        return;
    // if we use if-operator (not while), then array {2,3,1} transforms in array {2,1,3} and algorithm stops
    while (!ReverseIf(array, begin, end)) {
        int pivot = begin + (end - begin) / 2;
        ReverseSort(array, begin, pivot + 1);
        ReverseSort(array, pivot, end);
    }
}

public static void ReverseSort(int[] array) {
    ReverseSort(array, 0, array.Length);
}
P.S.: Sorry for my English.
The best case is Theta(n), for, e.g., a sorted array. The worst case is Theta(n^2 log n).
Upper bound
Secondary subproblems have a sorted array preceded or succeeded by an arbitrary element. These are O(n log n). If preceded, we do O(n) work, solve a secondary subproblem on the first half and then on the second half, and then do O(n) more work – O(n log n). If succeeded, do O(n) work, sort the already sorted first half (O(n)), solve a secondary subproblem on the second half, do O(n) work, solve a secondary subproblem on the first half, sort the already sorted second half (O(n)), do O(n) work – O(n log n).
Now, in the general case, we solve two primary subproblems on the two halves and then slowly exchange elements over the pivot using secondary invocations. There are O(n) exchanges necessary, so a straightforward application of the Master Theorem yields a bound of O(n^2 log n).
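(In other words, roughly T(n) = 2T(n/2) + O(n) * O(n log n); the n^2 log n term dominates the recursive part, so the Master Theorem gives T(n) = O(n^2 log n).)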
Lower bound
For k >= 3, we construct an array A(k) of size 2^k recursively using the above analysis as a guide. The bad cases are the arrays [2^k + 1] + A(k).
Let A(3) = [1, ..., 8]. This sorted base case keeps Reverse from being called.
For k > 3, let A(k) = [2^(k-1) + A(k-1)[1], ..., 2^(k-1) + A(k-1)[2^(k-1)]] + A(k-1). Note that the primary subproblems of [2^k + 1] + A(k) are equivalent to [2^(k-1) + 1] + A(k-1).
After the primary recursive invocations, the array is [2^(k-1) + 1, ..., 2^k, 1, ..., 2^(k-1), 2^k + 1]. There are Omega(2^k) elements that have to move Omega(2^k) positions, and each of the secondary invocations that moves an element so far has O(1) sorted subproblems and thus is Omega(n log n).
Clearly more coffee is required – the primary subproblems don't matter. This makes it not too bad to analyze the average case, which is Theta(n^2 log n) as well.
With constant probability, the first half of the array contains at least half of the least quartile and at least half of the greatest quartile. In this case, regardless of whether Reverse happens, there are Omega(n) elements that have to move Omega(n) positions via secondary invocations.
It seems this algorithm, even if it performs horribly with "random" data (as demonstrated by Per in their answer), is quite efficient for "fixing up" arrays which are "nearly-sorted". Thus if you chose to develop this idea further (I personally wouldn't, but if you wanted to think about it as an exercise), you would do well to focus on this strength.
this reference on Wikipedia in the Inversion article alludes to the issue very well. Mahmoud's book is quite insightful, noting that there are various ways to measure "sortedness". For example if we use the number of inversions to characterize a "nearly-sorted array" then we can use insertion sort to sort it extremely quickly. However if your arrays are "nearly-sorted" in slightly different ways (e.g. a deck of cards which is cut or loosely shuffled) then insertion sort will not be the best sort to "fix up" the list.
Input: an array of size N that is nearly sorted, with roughly N/k inversions.
I might do something like this for an algorithm:
Calculate number of inversions. (O(N lg(lg(N))), or can assume is small and skip step)
If number of inversions is < [threshold], sort array using insertion sort (it will be fast).
Otherwise the array is not close to being sorted; resort to using your favorite comparison (or better) sorting algorithm
There are better ways to do this though; one can "fix up" such an array in at least O(log(N)*(# new elements)) time if you preprocess your array enough or use the right data-structure, like an array with linked-list properties or similar which supports binary search.
You can generalize this idea even further. Whether "fixing up" an array will work depends on the kind of fixing-up that is required. Thus if you update these statistics whenever you add an element to the list or modify it, you can dispatch onto a good "fix-it-up" algorithm.
But unfortunately this would all be a pain to code. It may be that all you really want is a priority queue.
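For reference, the "fix it up with insertion sort" idea above relies on the fact that plain insertion sort runs in O(N + I) time, where I is the number of inversions, which is why it is so effective on nearly-sorted input. A minimal sketch (in Java rather than the question's C#):
static void insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) {   // shift larger elements one slot to the right
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}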

Find median value from a growing set

I came across an interesting algorithm question in an interview. I gave my answer but not sure whether there is any better idea. So I welcome everyone to write something about his/her ideas.
You have an empty set. Now elements are put into the set one by one. We assume all the elements are integers and they are distinct (according to the definition of set, we don't consider two elements with the same value).
Every time a new element is added to the set, the set's median value is asked. The median value is defined the same as in math: the middle element in a sorted list. Here, specially, when the size of set is even, assuming size of set = 2*x, the median element is the x-th element of the set.
An example:
Start with an empty set,
when 12 is added, the median is 12,
when 7 is added, the median is 7,
when 8 is added, the median is 8,
when 11 is added, the median is 8,
when 5 is added, the median is 8,
when 16 is added, the median is 8,
...
Notice that, first, elements are added to set one by one and second, we don't know the elements going to be added.
My answer.
Since it is a question about finding the median, sorting is needed. The easiest solution is to use a normal array and keep it sorted. When a new element comes, use binary search to find the position for the element (log n) and insert it there. Since it is a normal array, shifting the rest of the array is needed, whose time complexity is n. Once the element is inserted, we can immediately get the median in constant time.
The WORST case time complexity is: log n + n + 1.
Another solution is to use a linked list. The reason for using a linked list is to remove the need to shift the array. But finding the location of the new element requires a linear search. Adding the element takes constant time, and then we need to find the median by going through half of the list, which always takes n/2 time.
The WORST case time complexity is: n + 1 + n/2.
The third solution is to use a binary search tree. Using a tree, we avoid shifting array. But using the binary search tree to find the median is not very attractive. So I change the binary search tree in a way that it is always the case that the left subtree and the right subtree are balanced. This means that at any time, either the left subtree and the right subtree have the same number of nodes or the right subtree has one node more than in the left subtree. In other words, it is ensured that at any time, the root element is the median. Of course this requires changes in the way the tree is built. The technical detail is similar to rotating a red-black tree.
If the tree is maintained properly, it is ensured that the WORST time complexity is O(n).
So the three algorithms are all linear in the size of the set. If no sub-linear algorithm exists, the three algorithms can be thought of as optimal. Since they don't differ from each other much, the best is the easiest to implement, which is the second one, using a linked list.
So what I really wonder is, will there be a sub-linear algorithm for this problem and if so what will it be like. Any ideas guys?
Steve.
Your complexity analysis is confusing. Let's say that n items total are added; we want to output the stream of n medians (where the ith in the stream is the median of the first i items) efficiently.
I believe this can be done in O(n*lg n) time using two priority queues (e.g. binary or fibonacci heap); one queue for the items below the current median (so the largest element is at the top), and the other for items above it (in this heap, the smallest is at the bottom). Note that in fibonacci (and other) heaps, insertion is O(1) amortized; it's only popping an element that's O(lg n).
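A minimal sketch of that two-priority-queue idea, using Java's binary-heap PriorityQueue (so insertion is O(log n) rather than the amortized O(1) of a Fibonacci heap); the class and method names are just illustrative:
import java.util.Collections;
import java.util.PriorityQueue;

class RunningMedian {
    // lower half of the values in a max-heap, upper half in a min-heap
    private final PriorityQueue<Integer> lower = new PriorityQueue<>(Collections.reverseOrder());
    private final PriorityQueue<Integer> upper = new PriorityQueue<>();

    void add(int x) {
        if (lower.isEmpty() || x <= lower.peek()) lower.add(x); else upper.add(x);
        // rebalance so that lower always holds either as many elements as upper, or one more
        if (lower.size() > upper.size() + 1) upper.add(lower.poll());
        else if (upper.size() > lower.size()) lower.add(upper.poll());
    }

    // assumes at least one element has been added; for a set of size 2x this returns
    // the x-th smallest element (the lower middle), matching the question's definition
    int median() {
        return lower.peek();
    }
}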
This would be called an "online median selection" algorithm, although Wikipedia only talks about online min/max selection. Here's an approximate algorithm, and a lower bound on deterministic and approximate online median selection (a lower bound means no faster algorithm is possible!)
If there are a small number of possible values compared to n, you can probably break the comparison-based lower bound just like you can for sorting.
I received the same interview question and came up with the two-heap solution in wrang-wrang's post. As he says, the time per operation is O(log n) worst-case. The expected time is also O(log n) because you have to "pop an element" 1/4 of the time assuming random inputs.
I subsequently thought about it further and figured out how to get constant expected time; indeed, the expected number of comparisons per element becomes 2+o(1). You can see my writeup at http://denenberg.com/omf.pdf .
BTW, the solutions discussed here all require space O(n), since you must save all the elements. A completely different approach, requiring only O(log n) space, gives you an approximation to the median (not the exact median). Sorry I can't post a link (I'm limited to one link per post) but my paper has pointers.
Although wrang-wrang already answered, I wish to describe a modification of your binary search tree method that is sub-linear.
We use a binary search tree that is balanced (AVL/red-black/etc.), but not super-balanced like you described. So adding an item is O(log n).
One modification to the tree: for every node we also store the number of nodes in its subtree. This doesn't change the complexity. (For a leaf this count would be 1, for a node with two leaf children this would be 3, etc)
We can now access the Kth smallest element in O(log n) using these counts:
def get_kth_item(subtree, k):
    left_size = 0 if subtree.left is None else subtree.left.size
    if k < left_size:
        return get_kth_item(subtree.left, k)
    elif k == left_size:
        return subtree.value
    else:  # k > left_size
        return get_kth_item(subtree.right, k-1-left_size)
A median is a special case of Kth smallest element (given that you know the size of the set).
So all in all this is another O(log n) solution.
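For the question's definition of the median (the x-th element when the size is 2x), the call with this 0-indexed helper would be get_kth_item(root, (size - 1) // 2), where size is the node count stored at the root.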
We can define a min-heap and a max-heap to store the numbers. Additionally, we define a class DynamicArray for the number set, with two functions: Insert and GetMedian. The time to insert a new number is O(lg n), while the time to get the median is O(1).
This solution is implemented in C++ as follows:
#include <algorithm>   // push_heap, pop_heap
#include <functional>  // less, greater
#include <stdexcept>   // runtime_error
#include <vector>
using namespace std;

template<typename T> class DynamicArray
{
public:
    void Insert(T num)
    {
        // minHeap (comparator greater<T>) holds the larger half; maxHeap (less<T>) holds the smaller half
        if (((minHeap.size() + maxHeap.size()) & 1) == 0)
        {
            // even count so far: the new element goes to minHeap, possibly routed through maxHeap
            if (maxHeap.size() > 0 && num < maxHeap[0])
            {
                maxHeap.push_back(num);
                push_heap(maxHeap.begin(), maxHeap.end(), less<T>());
                num = maxHeap[0];
                pop_heap(maxHeap.begin(), maxHeap.end(), less<T>());
                maxHeap.pop_back();
            }
            minHeap.push_back(num);
            push_heap(minHeap.begin(), minHeap.end(), greater<T>());
        }
        else
        {
            // odd count so far: the new element goes to maxHeap, possibly routed through minHeap
            if (minHeap.size() > 0 && minHeap[0] < num)
            {
                minHeap.push_back(num);
                push_heap(minHeap.begin(), minHeap.end(), greater<T>());
                num = minHeap[0];
                pop_heap(minHeap.begin(), minHeap.end(), greater<T>());
                minHeap.pop_back();
            }
            maxHeap.push_back(num);
            push_heap(maxHeap.begin(), maxHeap.end(), less<T>());
        }
    }

    T GetMedian()
    {
        int size = minHeap.size() + maxHeap.size();
        if (size == 0)
            throw runtime_error("No numbers are available");
        T median = 0;
        if ((size & 1) == 1)
            median = minHeap[0];
        else
            median = (minHeap[0] + maxHeap[0]) / 2;
        return median;
    }

private:
    vector<T> minHeap;
    vector<T> maxHeap;
};
For more detailed analysis, please refer to my blog: http://codercareer.blogspot.com/2012/01/no-30-median-in-stream.html.
1) As with the previous suggestions, keep two heaps and cache their respective sizes. The left heap keeps values below the median, the right heap keeps values above the median. If you simply negate the values in the right heap the smallest value will be at the root so there is no need to create a special data structure.
2) When you add a new number, you determine the new median from the size of your two heaps, the current median, and the two roots of the L&R heaps, which just takes constant time.
3) Call a private threaded method to perform the actual work of the insert and update, but return immediately with the new median value. You only need to block until the heap roots are updated. Then, the thread doing the insert just needs to maintain a lock on the grandparent node as it traverses the tree; this will ensure that you can insert and rebalance without blocking other inserting threads working on other sub-branches.
Getting the median becomes a constant time procedure, of course now you may have to wait on synchronization from further adds.
Rob
A balanced tree (e.g. a red-black tree) with an augmented size field should find the median in lg(n) time in the worst case. I think it is in Chapter 14 of the classic algorithms textbook.
To keep the explanation brief, you can efficiently augment a BST to select a key of a specified rank in O(h) by having each node store the number of nodes in its left subtree. If you can guarantee that the tree is balanced, you can reduce this to O(log(n)). Consider using an AVL which is height-balanced (or red-black tree which is roughly balanced), then you can select any key in O(log(n)). When you insert or delete a node into the AVL you can increment or decrement a variable that keeps track of the total number of nodes in the tree to determine the rank of the median which you can then select in O(log(n)).
In order to find the median in linear time you can try this (it just came to my mind). You need to store some values every time you add a number to your set, and you won't need sorting. Here it goes.
#include <limits.h>

#define VERY_BIG_NUMBER INT_MAX   /* sentinel meaning "no exact middle element found yet" */

typedef struct
{
    int number;
    int lesser;   /* how many stored numbers are less than this one */
    int greater;  /* how many stored numbers are greater than this one */
} record;

/* numbers[] holds 'count' records added so far; n is the new number being added.
   Returns the median of the set after adding n. */
int median(record numbers[], int count, int n)
{
    int i;
    int m = VERY_BIG_NUMBER;
    int a = 0, b = 0;
    numbers[count].number = n;
    numbers[count].lesser = 0;
    numbers[count].greater = 0;
    for (i = 0; i < count; i++)
    {
        if (n < numbers[i].number)
        {
            numbers[i].lesser++;
            numbers[count].greater++;
        }
        else
        {
            numbers[i].greater++;
            numbers[count].lesser++;
        }
    }
    for (i = 0; i <= count; i++)
        if (numbers[i].greater - numbers[i].lesser == 0)
            m = numbers[i].number;
    if (m == VERY_BIG_NUMBER)
    {
        for (i = 0; i <= count; i++)
        {
            if (numbers[i].greater - numbers[i].lesser == -1)
                a = numbers[i].number;
            if (numbers[i].greater - numbers[i].lesser == 1)
                b = numbers[i].number;
        }
        m = (a + b) / 2;
    }
    return m;
}
What this does is: each time you add a number to the set, you must know how many of the stored numbers are less than it and how many are greater. So, if a number has equal "lesser than" and "greater than" counts, it means that number is in the very middle of the set, without having to sort it. In the case that you have an even amount of numbers you may have two candidates for the median, so you just return the mean of those two. BTW, this is C code; I hope it helps.
