Dijkstra's shortest path algorithm optimization - data-structures

I want to start by saying my code works as intended and is reasonably fast. However, profiling shows that most of the time is spent on one very specific portion, which leads me to ask: is there a generally accepted better solution for this?
Here is my implementation:
var cellDistance = new double[cells.Count];
cellDistance.SetAll(idx => idx == startCellIndex ? 0 : double.PositiveInfinity);
var visitedCells = new HashSet<int>();
do
{
    // current cell is the smallest unvisited tentative distance cell
    var currentCell = cells[cellDistance.Select((d, idx) => (d, idx)).OrderBy(x => x.d).First(x => !visitedCells.Contains(cells[x.idx].Index)).idx];
    foreach (var neighbourCell in currentCell.Neighbours)
        if (!visitedCells.Contains(neighbourCell.Index))
        {
            var distanceThroughCurrentCell = cellDistance[currentCell.Index] + neighbourCell.Value;
            if (cellDistance[neighbourCell.Index] > distanceThroughCurrentCell)
            {
                cellDistance[neighbourCell.Index] = distanceThroughCurrentCell;
                prevCell[neighbourCell] = currentCell;
            }
        }
    visitedCells.Add(currentCell.Index);
} while (visitedCells.Count != cells.Count && !visitedCells.Contains(endCell.Index));
Most of the time is spent on this line, which takes the unvisited node with the lowest partial cost:
var currentCell = cells[cellDistance.Select((d, idx) => (d, idx)).OrderBy(x => x.d).First(x => !visitedCells.Contains(cells[x.idx].Index)).idx];
And more specifically, in the last lambda, not the sort (which I found very surprising):
x => !visitedCells.Contains(cells[x.idx].Index)
Since visitedCells is already a HashSet, there isn't much I can improve with just the built-in data structures, so my question is: is there a different way of storing the partial costs that makes this specific query (i.e., the unvisited node with the lowest partial cost) noticeably faster?
I was considering some kind of sorted dictionary, but I'd need one that sorts by value, because if it's sorted by key I'd have to make the partial cost the key, which makes updating it costly and then poses the problem of how to map this structure to my cost array; and this still doesn't solve my visitedCells lookup.

Using an array of flags instead of HashSet
A HashSet has amortized O(1) insertion and expected O(1) lookup. However, your node ids are simply indices into an array: they are dense, bounded, and eventually every id ends up in the set. In this case you have a faster O(1) option than any generic hash table: an array of booleans, indexed by node id, that records whether a node has been visited.
Simply allocate a boolean array with size equal to the node count, fill it with false, and set the entry at a node's id to true when you visit that node.
Iterating over all nodes instead of sorting them for selecting the next node
Your current code sorts all nodes by their distances and then walks through them one by one to find the first unvisited one. This takes Θ(n log n) time in most cases because of the sorting. (A partial sort would be enough, but it would be very surprising if a compiler/library could spot that opportunity by itself.) With this approach your total time complexity becomes Θ(n² log n). Instead, you can go through the nodes once, keeping track of the minimum-distance unvisited node seen so far. That selection runs in Θ(n), and the total time complexity becomes Θ(n²), which is what array-based Dijkstra should be.
With these two changes, your code will not have much left that is unneeded for Dijkstra's shortest path.
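A minimal sketch of both changes applied to the selection step, keeping the question's names (cells, cellDistance, currentCell) and assuming, as the original code already does, that cells[i].Index == i:

// Array of flags instead of HashSet<int>: allocate once, index by node id.
var visited = new bool[cells.Count];
var visitedCount = 0;   // replaces visitedCells.Count in the loop condition

// Single Θ(n) pass per iteration: pick the unvisited cell with the smallest tentative distance.
int best = -1;
for (int i = 0; i < cellDistance.Length; i++)
    if (!visited[i] && (best == -1 || cellDistance[i] < cellDistance[best]))
        best = i;
var currentCell = cells[best];

// ... relax the neighbours exactly as before, then:
visited[currentCell.Index] = true;
visitedCount++;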
I was considering some kind of sorted dictionary, but I'd need
one that sorts by value, because if it's sorted by key I'd have to
make the partial cost the key, which makes updating it costly and then
poses the problem of how to map this structure to my cost array
There is a data structure called a min-heap that can extract the minimum value from a set (along with its satellite data). A simple binary min-heap can extract the minimum key, or decrease a key it holds, in Θ(log n) worst-case time.
In the case of Dijkstra, the graph needs to be sparse for this to beat iterating over all distances (sparse ≈ the number of edges is much smaller than the number of nodes squared), because the algorithm may have to decrease a distance every time it relaxes an edge.
If there are Θ(n²) edges, this makes the worst-case total time complexity Θ(n² log n).
If there are Θ(n² / log n) edges, the time spent in relaxations is already Θ(n²), so the graph has to be sparser than that for a binary heap to beat the simple array.
In the worst case, extracting all minimum-distance nodes from the heap takes Θ(n log n) time and relaxing all edges takes Θ(e log n) time, where e is the edge count, for a total of Θ((n + e) log n). As said above, this is more efficient than Θ(n²) only if e is asymptotically smaller than n² / log n.
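For completeness, here is a sketch of the heap-based variant in C#. It assumes .NET 6+'s PriorityQueue<TElement, TPriority>, which has no decrease-key operation, so it uses the common lazy-deletion trick of re-inserting and skipping stale entries; the cells, startCellIndex, Neighbours and Value names mirror the question's code.

var dist = new double[cells.Count];
Array.Fill(dist, double.PositiveInfinity);
dist[startCellIndex] = 0;
var done = new bool[cells.Count];

var queue = new PriorityQueue<int, double>();   // element: cell index, priority: tentative distance
queue.Enqueue(startCellIndex, 0);

while (queue.TryDequeue(out int current, out double d))
{
    if (done[current]) continue;                // stale entry left over from an earlier relaxation
    done[current] = true;
    // (an early exit here when current == endCell.Index is also possible)
    foreach (var neighbour in cells[current].Neighbours)
    {
        var throughCurrent = d + neighbour.Value;
        if (throughCurrent < dist[neighbour.Index])
        {
            dist[neighbour.Index] = throughCurrent;
            queue.Enqueue(neighbour.Index, throughCurrent);   // re-insert instead of decrease-key
        }
    }
}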

Related

Smallest missing number at any point in time in a stream of positive numbers

We are processing a stream of positive integers. At any point in time, we can be asked a query to which the answer is the smallest positive number that we have not seen yet.
One can assume two APIs.
void processNext(int val)
int getSmallestNotSeen()
We can assume the numbers to be bounded by the range [1, 10^6]; call the size of this range N.
Here is my solution.
Let's take an array of size 10^6. Whenever processNext(val) is called we set array[val] to 1. We build a sum segment tree on this array, so that is a point update in the segment tree. Whenever getSmallestNotSeen() is called I find the smallest index j such that sum[1..j] is less than j; I find j using binary search.
processNext(val) -> O(1)
getSmallestNotSeen() -> O((logN)^2)
I was thinking maybe if there was something more optimal. Or the above solution can be improved.
Make a map of id -> node (nodes of a doubly-linked list) and initialize it with 10^6 nodes, each pointing to its neighbors. Initialize the min to one.
processNext(val): check if the node exists. If it does, delete it and point its neighbors at each other. If the node you delete has no left neighbor (i.e. was smallest), update the min to be the right neighbor.
getSmallestNotSeen(): return the min
The preprocessing is linear time and linear memory. Everything after that is constant time.
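A C# sketch of this scheme; it uses a plain array as the id -> node map and a sentinel node at N + 1 so the minimum always has a right neighbour (the class and member names are illustrative):

class SmallestMissing
{
    private class Node { public int Value; public Node Prev, Next; }
    private readonly Node[] nodes;   // id -> node; null once the id has been seen
    private int min = 1;

    public SmallestMissing(int n = 1_000_000)
    {
        nodes = new Node[n + 2];                             // sentinel node at n + 1
        for (int i = 1; i <= n + 1; i++) nodes[i] = new Node { Value = i };
        for (int i = 1; i <= n; i++) { nodes[i].Next = nodes[i + 1]; nodes[i + 1].Prev = nodes[i]; }
    }

    public void ProcessNext(int val)
    {
        var node = nodes[val];
        if (node == null) return;                            // already seen
        nodes[val] = null;                                    // unlink and forget the node
        if (node.Prev != null) node.Prev.Next = node.Next;
        if (node.Next != null) node.Next.Prev = node.Prev;
        if (node.Prev == null) min = node.Next.Value;         // it was the smallest: min moves right
    }

    public int GetSmallestNotSeen() => min;
}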
In case the number of processNext calls (i.e. the length of the stream) is fairly small compared with the range N, space usage could be limited by storing consecutive ranges of numbers instead of all possible individual numbers. This is also interesting when N could be a much larger range, like [1, 2^64 - 1].
Data structure
I would suggest a binary search tree with such [start, end] ranges as elements, and self-balancing (like AVL, red-black, ...).
Algorithm
Initialise the tree with one (root) node: [1, Infinity]
Whenever a new value val is pulled with processNext, find the range [start, end] that includes val, using binary search.
If the range has size 1 (and thus only contains val), perform a deletion of that node (according to the tree rules)
Else if val is a bounding value of the range, then just update the range in that node, excluding val.
Otherwise split the range into two. Update the node with one of the two ranges (decide by the balance information) and let the other range sift down to a new leaf (and rebalance if needed).
In the tree maintain a reference to the node having the least start value. Only when this node gets deleted during processNext will a traversal up or down the tree be needed to find the next node in order. When the node splits (see above) and it is decided to put the lower part in a new leaf, the reference needs to be updated to that leaf.
The getSmallestNotSeen function will return the start-value from that least-range node.
Time & Space Complexity
The space complexity is O(S), where S is the length of the stream
The time complexity of processNext is O(log(S))
The time complexity of getSmallestNotSeen is O(1)
The best case space and time complexity is O(1). Such a best case occurs when the stream has consecutive integers (increasing or decreasing)
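A compact C# sketch of the range idea above. It uses SortedList<long, long> (start -> inclusive end) instead of a hand-rolled balanced tree, so lookups are O(log S) but insertions and removals are O(S) because SortedList shifts an array internally; the self-balancing BST described above is what brings those down to O(log S) as well.

class RangeTracker
{
    // start -> inclusive end of each run of numbers not seen yet.
    private readonly SortedList<long, long> ranges = new SortedList<long, long> { { 1, long.MaxValue } };

    public void ProcessNext(long val)
    {
        int i = FloorIndex(val);
        if (i < 0) return;                                    // below every stored range: already seen
        long start = ranges.Keys[i], end = ranges.Values[i];
        if (val > end) return;                                 // inside a gap: already seen
        if (start == end) ranges.RemoveAt(i);                  // range contained only val
        else if (val == start) { ranges.RemoveAt(i); ranges.Add(start + 1, end); }
        else if (val == end) ranges[start] = end - 1;
        else { ranges[start] = val - 1; ranges.Add(val + 1, end); }   // split into two ranges
    }

    public long GetSmallestNotSeen() => ranges.Keys[0];        // least start value

    // Largest index whose start is <= val, or -1 if none.
    private int FloorIndex(long val)
    {
        int lo = 0, hi = ranges.Count - 1, ans = -1;
        while (lo <= hi)
        {
            int mid = (lo + hi) / 2;
            if (ranges.Keys[mid] <= val) { ans = mid; lo = mid + 1; }
            else hi = mid - 1;
        }
        return ans;
    }
}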
bool array[10^6] = {false, false, ... }
int min = 1
void processNext(int val) {
    array[val] = true  // A
    while (array[min]) // B
        min++          // C
}
int getSmallestNotSeen() {
    return min
}
Time complexity:
processNext: amortised O(1)
getSmallestNotSeen: O(1)
Analysis:
If processNext is invoked k times and n is the highest value stored in min (which could be returned in getSmallestNotSeen), then:
the line A will be executed exactly k times,
the line B will be executed exactly k + n times, and
the line C will be executed exactly n times.
Additionally, n will never be greater than k, because for min to reach n there needs to be a continuous range of n true's in the array, and there can be only k true's in the array in total. Therefore, line B can be executed at most 2 * k times and line C at most k times.
Space complexity:
Instead of an array it is possible to use a HashMap without any additional changes in the pseudocode (non-existing keys in the HashMap should evaluate to false). Then the space complexity is O(k). Additionally, you can prune keys smaller than min, thus saving space in some cases:
HashMap<int,bool> map
int min = 1
void processNext(int val) {
    if (val < min)
        return
    map.put(val, true)
    while (map.get(min) == true) {
        map.remove(min)
        min++
    }
}
int getSmallestNotSeen() {
    return min
}
This pruning technique might be most effective if the stream values increase steadily.
Your solution takes O(N) space to hold the array and the sum segment tree, and O(N) time to initialise them; then O(1) and O(log² N) for the two queries. It seems pretty clear that you can't do better than O(N) space in the long run to keep track of which numbers are "seen" so far, if there are going to be a lot of queries.
However, a different data structure can improve on the query times. Here are three ideas:
Self-balancing binary search tree
Initialise the tree to contain every number from 1 to N; this can be done in O(N) time by building the tree from the leaves up; the leaves have all the odd numbers, then they're joined by all the numbers which are 2 mod 4, then those are joined by the numbers which are 4 mod 8, and so on. The tree takes O(N) space.
processNext is implemented by removing the number from the tree in O(log N) time.
getSmallestNotSeen is implemented by finding the left-most node in O(log N) time.
This is an improvement if getSmallestNotSeen is called many times, but if getSmallestNotSeen is rarely called then your solution is better because it does processNext in O(1) rather than O(log N).
Doubly-linked list
Initialise a doubly-linked list containing the numbers 1 to N in order, and create an array of size N holding pointers to each node. This takes O(N) space and is done in O(N) time. Initialise a variable holding a cached minimum value to be 1.
processNext is implemented by looking up the corresponding list node in the array, and deleting it from the list. If the deleted node has no predecessor, update the cached minimum value to be the value held by the successor node. This is O(1) time.
getSmallestNotSeen is implemented by returning the cached minimum, in O(1) time.
This is also an improvement, and is strictly better asymptotically, although the constants involved might be higher; there's a lot of overhead to hold an array of size N and also a doubly-linked list of size N.
Hash-set
The time requirements for the other solutions are largely determined by their initialisation stages, which take O(N) time. Initialising an empty hash-set, on the other hand, is O(1). As before, we also initialise a variable holding a current minimum value to be 1.
processNext is implemented by inserting the number into the set, in O(1) amortised time.
getSmallestNotSeen updates the current minimum by incrementing it until it's no longer in the set, and then returns it. Membership tests on a hash-set are O(1), and the number of increments over all queries is limited by the number of times processNext is called, so this is also O(1) amortised time.
Asymptotically, this solution takes O(1) time for initialisation and queries, and it uses O(min(Q,N)) space where Q is the number of queries, while the other solutions use O(N) space regardless.
I think it should be straightforward to prove that O(min(Q,N)) space is asymptotically optimal, so the hash-set turns out to be the best option. Credit goes to Dave for combining the hash-set with a current-minimum variable to do getSmallestNotSeen in O(1) amortised time.
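A C# sketch of the hash-set idea: a HashSet<int> plus a cached minimum that is only advanced inside the query, so the total number of increments is bounded by the number of insertions (class and method names are illustrative):

class SmallestUnseen
{
    private readonly HashSet<int> seen = new HashSet<int>();
    private int min = 1;

    public void ProcessNext(int val) => seen.Add(val);         // O(1) amortised

    public int GetSmallestNotSeen()
    {
        while (seen.Contains(min)) min++;                      // amortised O(1) over all queries
        return min;
    }
}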

Array merging and sorting complexity calculation

I have one exercise from my algorithm text book and I am not really sure about the solution. I need to explain why this solution:
function array_merge_sorted(array $foo, array $bar)
{
    $baz = array_merge($foo, $bar);
    $baz = array_unique($baz);
    sort($baz);
    return $baz;
}
which merges two arrays and orders them, is not the most efficient, and I need to provide an optimal solution and prove that no better solution can be found.
My idea was to use a mergesort-style algorithm, which is O(n log n), to merge and order the two arrays passed as parameters. But how can I prove that this is the best possible solution?
Algorithm
As you have said that both inputs are already sorted, you can use a simple zipper-like approach.
You have one pointer for each input array, pointing to its beginning. Then you compare both elements, add the smaller one to the result, and advance the pointer of the array with the smaller element. You repeat this step until both pointers have reached the end and all elements have been added to the result.
You find a collection of such algorithms at Wikipedia#Merge algorithm with my current presented approach being listed as Merging two lists.
Here is some pseudocode:
function Array<Element> mergeSorted(Array<Element> first, Array<Element> second) {
    Array<Element> result = new Array<Element>(first.length + second.length);
    int firstPointer = 0;
    int secondPointer = 0;
    // Zip the two arrays together, always taking the smaller head element.
    while (firstPointer < first.length && secondPointer < second.length) {
        Element elementOfFirst = first.get(firstPointer);
        Element elementOfSecond = second.get(secondPointer);
        if (elementOfFirst < elementOfSecond) {
            result.add(elementOfFirst);
            firstPointer = firstPointer + 1;
        } else {
            result.add(elementOfSecond);
            secondPointer = secondPointer + 1;
        }
    }
    // One array is exhausted; copy the remainder of the other.
    while (firstPointer < first.length) {
        result.add(first.get(firstPointer));
        firstPointer = firstPointer + 1;
    }
    while (secondPointer < second.length) {
        result.add(second.get(secondPointer));
        secondPointer = secondPointer + 1;
    }
    return result;
}
Proof
The algorithm obviously works in O(n), where n is the size of the resulting list. Or, more precisely, it is O(max(n, n')) with n being the size of the first list and n' of the second list (which is the same set as O(n + n')).
This is also obviously optimal since you need, at some point, at least traverse all elements once in order to build the result and know the final ordering. This yields a lower bound of Omega(n) for this problem, thus the algorithm is optimal.
A more formal proof assumes a better arbitrary algorithm A which solves the problem without looking at each element at least once (or, more precisely, in less than linear time).
We call that element, which the algorithm does not look at, e. We can now construct an input I such that e has a value which fulfills the order in its own array but will be placed wrong by the algorithm in the resulting array.
We are able to do so for every algorithm A and since A always needs to work correctly on all possible inputs, we are able to find a counter-example I such that it fails.
Thus A can not exist and Omega(n) is a lower bound for that problem.
Why the given algorithm is worse
Your given algorithm first merges the two arrays, this works in O(n) which is good. But after that it sorts the array.
Sorting (more precise: comparison-based sorting) has a lower-bound of Omega(n log n). This means every such algorithm can not be better than that.
Thus the given algorithm has a total time complexity of O(n log n) (because of the sorting part), which is worse than O(n), the complexity of the other algorithm and also of the optimal solution.
However, to be fully correct, we would also need to argue whether the sort method truly exhibits that complexity, since it does not get arbitrary inputs but always the result of the merge method. It is therefore conceivable that a specific sorting method works especially well on such inputs, yielding O(n) in the end.
But I doubt that this is in the focus of your task.
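For reference, a small C# sketch of the linear merge that also drops duplicates, mirroring the array_unique step of the original function; both inputs are assumed to be sorted in ascending order:

static int[] MergeSortedUnique(int[] first, int[] second)
{
    var result = new List<int>(first.Length + second.Length);
    int i = 0, j = 0;
    while (i < first.Length || j < second.Length)
    {
        int next;
        // Take the smaller head element, or whatever remains once one array is exhausted.
        if (j >= second.Length || (i < first.Length && first[i] <= second[j]))
            next = first[i++];
        else
            next = second[j++];
        if (result.Count == 0 || result[result.Count - 1] != next)   // skip duplicates
            result.Add(next);
    }
    return result.ToArray();
}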

What's the time complexity of this algorithm (pseudo code)?

Assume the tree T is a binary tree.
Algorithm computeDepths(node, depth)
Input: node and its depth. For all depths, call with computeDepths(T.root, 0)
Output: depths of all the nodes of T
if node != null
    depth ← node.depth
    computeDepths(node.left, depth + 1)
    computeDepths(node.right, depth + 1)
    return depth
end if
I ran it on paper with a full and complete binary tree containing 7 elements, but I still can't put my head around what time complexity it is. If I had to guess, I'd say it's O(n*log n).
It is O(n)
To get an idea on the time complexity, we need to find out the amount of work done by the algorithm, compared with the size of the input. In this algorithm, the work done per function call is constant (only assigning a given value to a variable). So let's count how many times the function is called.
The first time the function is called, it's called on the root.
Then, for any subsequent call, the function checks whether the node is null; if it is not, it sets the depth accordingly and recursively processes both children.
Now note that the function is called once for every node in the tree and once for every null child pointer. A binary tree with n nodes has exactly n + 1 null child pointers, so the total number of function calls is:
n + (n + 1) = 2n + 1
So this is the amount of work done by the algorithm. And so the time complexity is O(n).
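A tiny C# sketch that mirrors this analysis by counting calls; it stores the depth on the node, which is presumably what the pseudocode's Output line intends, and Node/BuildFull are ad-hoc helpers for the example. For a full tree with 7 nodes it prints 15, i.e. 2n + 1:

class Node { public Node Left, Right; public int Depth; }

class Program
{
    static int calls = 0;

    static void ComputeDepths(Node node, int depth)
    {
        calls++;                                // one unit of constant work per call
        if (node == null) return;
        node.Depth = depth;                     // assumed intent: record the node's depth
        ComputeDepths(node.Left, depth + 1);
        ComputeDepths(node.Right, depth + 1);
    }

    static Node BuildFull(int levels) =>
        levels == 0 ? null : new Node { Left = BuildFull(levels - 1), Right = BuildFull(levels - 1) };

    static void Main()
    {
        ComputeDepths(BuildFull(3), 0);         // full tree with 7 nodes
        System.Console.WriteLine(calls);        // prints 15, i.e. 2n + 1
    }
}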

Partition a binary tree into k parts with similar sizes

I was trying to split a binary-tree into k similar-sized parts (by removing k-1 edges). Is there any efficient algorithm for this problem? Or is it NP-hard? Any pointers to papers, problem definitions, etc?
-- One reasonable metric for evaluating the quality of partitioning could be the size gap between the largest and smallest partition; another metric could be to make the smallest partition have as many vertices as possible.
I can suggest a pretty fast solution for the metric of making the smallest part have as many vertices as possible.
Suppose we guess the size S of the smallest part and want to check whether it is achievable.
First I want to make a few statements:
If the total size of the tree is bigger than S, there is at least one subtree whose size is at least S while all of its child subtrees are smaller than S. (To find one, walk down from the root, descending into the larger child while that child's subtree still has size at least S.)
If there is some way to split the tree where the size of the smallest part is >= S, and we have a subtree T all of whose child subtrees are smaller than S, then we can assume that no edges inside T are deleted. (Any such deletion would create a part smaller than S.)
If there is some way to split the tree where the size of the smallest part is >= S, and we have some subtree T whose size is >= S, which has no deleted edges inside but is not one of the parts, then we can split the tree in another way where T is one of the parts itself and all parts are still no smaller than S. (Just move the extra vertices from T's original part to any adjacent part; that part will not become smaller.)
So here is an algorithm to check if we can split the tree in k parts no smaller than S.
Find all suitable vertices (roots of subtrees whose size is >= S while both child subtrees have size < S) and add them to a list. You can start from the root and move down through the vertices while the subtrees are bigger than S.
While the list is not empty and the number of parts is less than k, take a vertex from the list and cut its subtree off the tree. Then update the subtree sizes of its ancestors, and add an ancestor to the list if it has become suitable.
You do not even need to update all the ancestors: only go up until you find the first one whose new subtree size is still bigger than S; the vertices above it cannot become suitable yet and can be updated later.
You may need to reconstruct the tree afterwards to restore the original subtree sizes stored in the vertices.
Now we can use bisection. The upper bound is Smax = n/k, and a lower bound can be obtained from the equation (2*Smin - 1)*(k - 1) + Smin = n: it guarantees that if we cut off k - 1 subtrees, each consisting of a root with two child subtrees of size Smin - 1, a part of size Smin is left over. This gives Smin = (n + k - 1)/(2*k - 1).
And now we can check S = (Smax + Smin)/2
If we manage to construct a partition using the method above, then S is smaller than or equal to its largest possible value; moreover, the smallest part in the constructed partition may be bigger than S, and we can set the new lower bound to that size instead of S. If we fail, S is bigger than what is achievable.
The time complexity of one check is k multiplied by the number of ancestor nodes updated per cut; for a well-balanced tree the number of updated nodes is constant (using the trick explained earlier of not updating all ancestors), and even for an extremely unbalanced tree it is not bigger than n/k in the worst case. Searching for the suitable vertices behaves similarly (every vertex passed while searching would have been updated later anyway).
The difference between n/k and (n + k - 1)/(2*k - 1) is proportional to n/k.
So we have a time complexity of O(k * log(n/k)) in the best case if subtree sizes are precalculated, O(n) if they are not, and O(n * log(n/k)) in the worst case.
This method may lead to a situation where the last part is comparatively big, but I suppose that once you have the suggested method you can figure out some improvements to minimize that.
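A compact C# sketch of one way to implement the feasibility check and the bisection. The Node class and the recursive post-order pass are illustrative assumptions; cutting a subtree off as soon as its accumulated size reaches S is the same rule as the "suitable vertices" above, and any leftover smaller than S can be merged into an adjacent part without shrinking it below S.

class Node { public Node Left, Right; }

static bool CanSplit(Node root, int k, int s)
{
    int parts = 0;
    int Dfs(Node node)                                  // returns size of the piece still attached above
    {
        if (node == null) return 0;
        int size = 1 + Dfs(node.Left) + Dfs(node.Right);
        if (size >= s) { parts++; return 0; }           // cut here: this subtree becomes one part
        return size;
    }
    Dfs(root);                                          // leftover < s merges into an adjacent part
    return parts >= k;                                  // undoing surplus cuts keeps every part >= s
}

static int LargestSmallestPart(Node root, int n, int k)
{
    int lo = 1, hi = n / k;                             // lo could start at the Smin bound derived above
    while (lo < hi)
    {
        int mid = (lo + hi + 1) / 2;
        if (CanSplit(root, k, mid)) lo = mid; else hi = mid - 1;
    }
    return lo;
}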
Here is a polynomial deterministic solution:
Let's assume that the tree is rooted and there are two fixed values: MIN and MAX - minimum and maximum allowed size of one component.
Then one can use dynamic programming to check if there is a partition such that each component size is between MIN and MAX:
Let's assume f(node, cuts_count, current_count) is true if and only if there is a way to make exactly cuts_count cuts in node's subtree so that current_count vertices remain connected to node and every completed component has a size between MIN and MAX.
The base case for the leaves is: f(leaf, 1, 0) (cut the edge from the parent to the leaf) is true if and only if MIN <= 1 and MAX >= 1; f(leaf, 0, 1) (do not cut it) is always true. It is false for all other values of cuts_count and current_count.
To compute f for a node(not a leaf), one can use the following algorithm:
//Combine all possible children states.
for cuts_left in 0..k
    for cuts_right in 0..k
        for cnt_left in 0..left_subtree_size
            for cnt_right in 0..right_subtree_size
                if f(left_child, cuts_left, cnt_left) is true and
                   f(right_child, cuts_right, cnt_right) is true then
                    f(node, cuts_left + cuts_right, cnt_left + cnt_right + 1) = true
//Cut an edge from this node to its parent.
for cuts in 0..k-1
    for cnt in 0..node's_subtree_size
        if f(node, cuts, cnt) is true and MIN <= cnt <= MAX:
            f(node, cuts + 1, 0) = true
What this pseudocode does is combine all possible states of the node's children to compute all reachable states for this node (the first bunch of for loops), and then produce the rest of the reachable states by cutting the edge between this node and its parent (the second bunch of for loops). Here a state means a (node, cuts_count, current_count) tuple; I call it reachable if f(state) is true.
That is the case for a node with two children; the case with one child can be processed in a similar manner.
Finally, if f(root, k, 0) is true then it is possible to find a partition which satisfies the size condition, and it is not possible otherwise. We need to "pretend" that we made k cuts here because we also cut an imaginary edge from the root to its parent (this edge and this parent don't actually exist) when computing f for the root (to avoid a corner case).
The space complexity of this algorithm (for fixed MIN and MAX) is O(n^2 * k) (n is the number of nodes), and the time complexity is O(k^2 * n^2). It might seem that the complexity is actually O(k^2 * n^3), but it is not, because the product of the numbers of vertices in the left and right subtrees of a node is exactly the number of pairs of nodes whose least common ancestor is that node. The total number of pairs of nodes is O(n^2) (and each pair has exactly one least common ancestor), so the sum of the products of left and right subtree sizes over all nodes is O(n^2).
One can simply try all possible MIN and MAX values and choose the best, but it can be done faster. The key observation is that if there is a solution for MIN and MAX, there is always a solution for MIN and MAX + 1. Thus, one can iterate over all possible values of MIN(n / k different values) and apply binary search to find the smallest MAX which gives a valid solution(log n iterations). So the overall time complexity is O(n^2 * k^2 * n / k * log n) = O(n^3 * k * log n). However, if you want to maximize MIN(not to minimize the difference between MAX and MIN), you can simply use this algorithm and ignore MAX value everywhere(by setting its value to n). Then no binary search over MAX would be required, but one would be able to binary search over MIN instead and obtain an O(n^2 * k^2 * log n) solution.
To reconstruct the partition itself, one can start from f(root, k, 0) and apply the steps we used to compute f, but this time in the opposite direction (from the root to the leaves). It is also possible to save, for each state, the information about how its value was obtained (which children's states were combined, or what the state was before the edge was cut), update it appropriately during the initial computation of f, and then reconstruct the partition using this data (if my explanation of this step seems unclear, reading an article on dynamic programming and reconstructing the answer might help).
So, there is a polynomial solution for this problem on a binary tree(even though it is NP-hard for an arbitrary graph).
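A C# sketch of the feasibility table f for fixed MIN and MAX on a binary tree (MIN is assumed to be at least 1, and the Node class is an illustrative assumption). Each node's table is indexed as f[cuts, cnt], and the final answer is read from f[k, 0] at the root, counting the imaginary cut above it:

class Node { public Node Left, Right; }

static bool CanPartition(Node root, int k, int min, int max)
{
    // Returns the table f[cuts, cnt] for this node and reports its subtree size.
    bool[,] Solve(Node node, out int size)
    {
        if (node == null)                          // empty child: zero vertices, zero cuts
        {
            size = 0;
            var empty = new bool[k + 1, 1];
            empty[0, 0] = true;
            return empty;
        }
        var left = Solve(node.Left, out int leftSize);
        var right = Solve(node.Right, out int rightSize);
        size = leftSize + rightSize + 1;
        var f = new bool[k + 1, size + 1];
        // Combine all possible children states; the +1 accounts for the current node itself.
        for (int cl = 0; cl <= k; cl++)
            for (int cr = 0; cl + cr <= k; cr++)
                for (int nl = 0; nl <= leftSize; nl++)
                    for (int nr = 0; nr <= rightSize; nr++)
                        if (left[cl, nl] && right[cr, nr])
                            f[cl + cr, nl + nr + 1] = true;
        // Cut the edge from this node to its parent: the cnt vertices become a finished component.
        for (int cuts = 0; cuts < k; cuts++)
            for (int cnt = Math.Max(min, 1); cnt <= Math.Min(max, size); cnt++)
                if (f[cuts, cnt])
                    f[cuts + 1, 0] = true;
        return f;
    }

    return Solve(root, out _)[k, 0];               // k cuts including the imaginary edge above the root
}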

Find median value from a growing set

I came across an interesting algorithm question in an interview. I gave my answer but not sure whether there is any better idea. So I welcome everyone to write something about his/her ideas.
You have an empty set. Now elements are put into the set one by one. We assume all the elements are integers and they are distinct (according to the definition of set, we don't consider two elements with the same value).
Every time a new element is added to the set, the set's median value is asked for. The median value is defined the same as in math: the middle element of the sorted list. Here, specifically, when the size of the set is even, say size = 2*x, the median is defined as the x-th element of the sorted set.
An example:
Start with an empty set,
when 12 is added, the median is 12,
when 7 is added, the median is 7,
when 8 is added, the median is 8,
when 11 is added, the median is 8,
when 5 is added, the median is 8,
when 16 is added, the median is 8,
...
Notice that, first, elements are added to set one by one and second, we don't know the elements going to be added.
My answer.
Since it is a question about finding the median, sorting is needed. The easiest solution is to use a normal array and keep it sorted. When a new element comes, use binary search to find its position (log n) and insert the element into the array. Since it is a plain array, shifting the rest of the array is needed, which takes n time. Once the element is inserted, we can get the median immediately, in constant time.
The WORST time complexity is: log n + n + 1.
Another solution is to use a linked list. The reason for using a linked list is to remove the need to shift the array. But finding the location of the new element requires a linear search. Inserting the element takes constant time, and then we need to find the median by walking through half of the list, which always takes n/2 time.
The WORST time complexity is: n + 1 + n/2.
The third solution is to use a binary search tree. Using a tree, we avoid shifting the array. But using a plain binary search tree to find the median is not very attractive, so I change the binary search tree so that the left and right subtrees are always balanced: at any time, either they have the same number of nodes or the right subtree has one node more than the left. In other words, it is ensured that at any time the root element is the median. Of course this requires changes to the way the tree is built; the technical details are similar to rotating a red-black tree.
If the tree is maintained properly, it is ensured that the WORST time complexity is O(n).
So the three algorithms are all linear in the size of the set. If no sub-linear algorithm exists, the three algorithms can be considered optimal solutions. Since they don't differ much from each other, the best is the easiest to implement, which is the second one, using a linked list.
So what I really wonder is, will there be a sub-linear algorithm for this problem and if so what will it be like. Any ideas guys?
Steve.
Your complexity analysis is confusing. Let's say that n items total are added; we want to output the stream of n medians (where the ith in the stream is the median of the first i items) efficiently.
I believe this can be done in O(n*lg n) time using two priority queues (e.g. binary or fibonacci heap); one queue for the items below the current median (so the largest element is at the top), and the other for items above it (in this heap, the smallest is at the bottom). Note that in fibonacci (and other) heaps, insertion is O(1) amortized; it's only popping an element that's O(lg n).
This would be called an "online median selection" algorithm, although Wikipedia only talks about online min/max selection. Here's an approximate algorithm, and a lower bound on deterministic and approximate online median selection (a lower bound means no faster algorithm is possible!)
If there are a small number of possible values compared to n, you can probably break the comparison-based lower bound just like you can for sorting.
I received the same interview question and came up with the two-heap solution in wrang-wrang's post. As he says, the time per operation is O(log n) worst-case. The expected time is also O(log n) because you have to "pop an element" 1/4 of the time assuming random inputs.
I subsequently thought about it further and figured out how to get constant expected time; indeed, the expected number of comparisons per element becomes 2+o(1). You can see my writeup at http://denenberg.com/omf.pdf .
BTW, the solutions discussed here all require space O(n), since you must save all the elements. A completely different approach, requiring only O(log n) space, gives you an approximation to the median (not the exact median). Sorry I can't post a link (I'm limited to one link per post) but my paper has pointers.
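For illustration, a compact C# sketch of the two-heap scheme discussed above, using .NET 6+'s PriorityQueue with a reversed comparer for the lower (max) heap; keeping the lower heap equal in size to the upper heap or one element larger means its top is always the x-th element for even sizes, matching the question's definition:

class RunningMedian
{
    private readonly PriorityQueue<int, int> lower =      // max-heap: values <= median
        new PriorityQueue<int, int>(Comparer<int>.Create((a, b) => b.CompareTo(a)));
    private readonly PriorityQueue<int, int> upper =      // min-heap: values > median
        new PriorityQueue<int, int>();

    public void Add(int x)
    {
        if (lower.Count == 0 || x <= lower.Peek()) lower.Enqueue(x, x);
        else upper.Enqueue(x, x);
        // Rebalance: lower keeps the same number of items as upper, or one more.
        if (lower.Count > upper.Count + 1) { int v = lower.Dequeue(); upper.Enqueue(v, v); }
        else if (upper.Count > lower.Count) { int v = upper.Dequeue(); lower.Enqueue(v, v); }
    }

    public int Median() => lower.Peek();                  // middle element, or the x-th of 2x elements
}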
Although wrang-wrang already answered, I wish to describe a modification of your binary search tree method that is sub-linear.
We use a binary search tree that is balanced (AVL/Red-Black/etc), but not super-balanced like you described. So adding an item is O(log n)
One modification to the tree: for every node we also store the number of nodes in its subtree. This doesn't change the complexity. (For a leaf this count would be 1, for a node with two leaf children this would be 3, etc)
We can now access the Kth smallest element in O(log n) using these counts:
def get_kth_item(subtree, k):
    left_size = 0 if subtree.left is None else subtree.left.size
    if k < left_size:
        return get_kth_item(subtree.left, k)
    elif k == left_size:
        return subtree.value
    else:  # k > left_size
        return get_kth_item(subtree.right, k - 1 - left_size)
A median is a special case of Kth smallest element (given that you know the size of the set).
So all in all this is another O(log n) solution.
We can define a min-heap and a max-heap to store the numbers. Additionally, we define a class DynamicArray for the number set, with two functions: Insert and GetMedian. The time to insert a new number is O(lg n), while the time to get the median is O(1).
This solution is implemented in C++ as follows:
#include <vector>
#include <algorithm>
#include <functional>
#include <stdexcept>
using namespace std;

template<typename T> class DynamicArray
{
public:
    void Insert(T num)
    {
        if(((minHeap.size() + maxHeap.size()) & 1) == 0)
        {
            if(maxHeap.size() > 0 && num < maxHeap[0])
            {
                maxHeap.push_back(num);
                push_heap(maxHeap.begin(), maxHeap.end(), less<T>());
                num = maxHeap[0];
                pop_heap(maxHeap.begin(), maxHeap.end(), less<T>());
                maxHeap.pop_back();
            }
            minHeap.push_back(num);
            push_heap(minHeap.begin(), minHeap.end(), greater<T>());
        }
        else
        {
            if(minHeap.size() > 0 && minHeap[0] < num)
            {
                minHeap.push_back(num);
                push_heap(minHeap.begin(), minHeap.end(), greater<T>());
                num = minHeap[0];
                pop_heap(minHeap.begin(), minHeap.end(), greater<T>());
                minHeap.pop_back();
            }
            maxHeap.push_back(num);
            push_heap(maxHeap.begin(), maxHeap.end(), less<T>());
        }
    }
    T GetMedian()
    {
        size_t size = minHeap.size() + maxHeap.size();
        if(size == 0)
            throw logic_error("No numbers are available");
        T median = 0;
        if((size & 1) == 1)
            median = minHeap[0];
        else
            median = (minHeap[0] + maxHeap[0]) / 2;
        return median;
    }
private:
    vector<T> minHeap;
    vector<T> maxHeap;
};
For more detailed analysis, please refer to my blog: http://codercareer.blogspot.com/2012/01/no-30-median-in-stream.html.
1) As with the previous suggestions, keep two heaps and cache their respective sizes. The left heap keeps values below the median, the right heap keeps values above the median. If you simply negate the values in the right heap the smallest value will be at the root so there is no need to create a special data structure.
2) When you add a new number, you determine the new median from the size of your two heaps, the current median, and the two roots of the L&R heaps, which just takes constant time.
3) Call a private threaded method to perform the actual work of the insert and update, but return immediately with the new median value. You only need to block until the heap roots are updated. Then the thread doing the insert just needs to maintain a lock on the traversing grandparent node as it traverses the tree; this will ensure that you can insert and rebalance without blocking other inserting threads working on other sub-branches.
Getting the median becomes a constant-time procedure; of course, you may now have to wait on synchronization from further adds.
Rob
A balanced tree (e.g. a red-black tree) with an augmented size field can find the median in lg(n) time in the worst case. I think it is in Chapter 14 of the classic algorithms textbook.
To keep the explanation brief, you can efficiently augment a BST to select a key of a specified rank in O(h) by having each node store the number of nodes in its left subtree. If you can guarantee that the tree is balanced, you can reduce this to O(log(n)). Consider using an AVL which is height-balanced (or red-black tree which is roughly balanced), then you can select any key in O(log(n)). When you insert or delete a node into the AVL you can increment or decrement a variable that keeps track of the total number of nodes in the tree to determine the rank of the median which you can then select in O(log(n)).
In order to find the median in linear time you can try this (it just came to my mind). You need to store some values every time you add a number to your set, and you won't need sorting. Here it goes.
typedef struct
{
    int number;
    int lesser;
    int greater;
} record;

/* numbers[0..count-1] already hold the set (the array must have room for one more); n is the new value */
int median(record numbers[], int count, int n)
{
    int i;
    int m = VERY_BIG_NUMBER;
    int a = 0, b = 0;
    numbers[count].number = n;
    numbers[count].lesser = 0;
    numbers[count].greater = 0;
    for (i = 0; i < count; i++)
    {
        if (n < numbers[i].number)
        {
            numbers[i].lesser++;        /* one more value below numbers[i] */
            numbers[count].greater++;   /* one more value above n */
        }
        else
        {
            numbers[i].greater++;
            numbers[count].lesser++;
        }
    }
    for (i = 0; i < count + 1; i++)
        if (numbers[i].greater - numbers[i].lesser == 0)
            m = numbers[i].number;      /* odd count: the exact middle element */
    if (m == VERY_BIG_NUMBER)           /* even count: average the two middle elements */
    {
        for (i = 0; i < count + 1; i++)
        {
            if (numbers[i].greater - numbers[i].lesser == -1)
                a = numbers[i].number;
            if (numbers[i].greater - numbers[i].lesser == 1)
                b = numbers[i].number;
        }
        m = (a + b) / 2;
    }
    return m;
}
What this does is: for each number in the set you keep track of how many numbers are lesser than it and how many are greater than it. If some number has the same "lesser than" and "greater than" counts, it is the one in the very middle of the set, without having to sort anything. In the case that you have an even amount of numbers there are two candidates for the median, so you just return the mean of those two. BTW, this is C code; I hope this helps.
