Median Algorithm in O(log n)

How can we remove the median of a set with time complexity O(log n)? Any ideas?

If the set is sorted, finding the median requires O(1) item retrievals. If the items are in arbitrary sequence, it will not be possible to identify the median with certainty without examining the majority of the items. Examining most, but not all, of the items only guarantees that the median lies within some range (if the list contains duplicates, the upper and lower bounds may coincide), and examining the majority of the items in a list implies O(n) item retrievals.
If the information is in a collection which is not fully ordered, but where certain ordering relationships are known, then the number of retrievals required may be anywhere between O(1) and O(n), depending upon the nature of the known ordering relations.

For unsorted lists, repeatedly do an O(n) partial sort until the element located at the median position is known. This is at least O(n), though.
Is there any information about the elements being sorted?

For a general, unsorted set, it is impossible to reliably find the median in better than O(n) time. You can find the median of a sorted set in O(1), or you can trivially sort the set yourself in O(n log n) time and then find the median in O(1), giving an O(n log n) algorithm. Or, finally, there are more clever median selection algorithms that work by partitioning instead of sorting and yield O(n) performance.
But if the set has no special properties and you are not allowed any pre-processing step, you will never get below O(n) by the simple fact that you will need to examine all of the elements at least once to ensure that your median is correct.

Here's a solution in Java, based on TreeSet:
import java.util.SortedSet;
import java.util.TreeSet;

public class SetWithMedian {
    private SortedSet<Integer> s = new TreeSet<Integer>();
    private Integer m = null;

    public boolean contains(int e) {
        return s.contains(e);
    }

    public Integer getMedian() {
        return m;
    }

    public void add(int e) {
        s.add(e);
        updateMedian();
    }

    public void remove(int e) {
        s.remove(e);
        updateMedian();
    }

    // A single add/remove shifts the median by at most one position,
    // so one adjustment step is enough to restore the invariant.
    private void updateMedian() {
        if (s.size() == 0) {
            m = null;
        } else if (s.size() == 1) {
            m = s.first();
        } else {
            SortedSet<Integer> h = s.headSet(m);      // elements strictly less than m
            SortedSet<Integer> t = s.tailSet(m + 1);  // elements strictly greater than m
            int x = 1 - s.size() % 2;
            if (h.size() < t.size() + x)
                m = t.first();
            else if (h.size() > t.size() + x)
                m = h.last();
        }
    }
}
Removing the median (i.e. "s.remove(s.getMedian())") takes O(log n) time.
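A minimal usage sketch of the class above (the output values shown in the comments follow from the tie-breaking in updateMedian, which keeps the upper of the two middle elements for even sizes):

public class SetWithMedianDemo {
    public static void main(String[] args) {
        SetWithMedian set = new SetWithMedian();
        set.add(5);
        set.add(1);
        set.add(3);
        System.out.println(set.getMedian());   // 3
        set.remove(set.getMedian());           // removes the current median, 3
        System.out.println(set.getMedian());   // 5 (upper middle of {1, 5})
    }
}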
Edit: To help understand the code, here's the invariant condition of the class attributes:
private boolean isGood() {
    if (s.isEmpty()) {
        return m == null;
    } else {
        return s.contains(m)
            && s.headSet(m).size() + s.size() % 2 == s.tailSet(m).size();
    }
}
In human-readable form:
If the set "s" is empty, then "m" must be
null.
If the set "s" is not empty, then it must
contain "m".
Let x be the number of elements
strictly less than "m", and let y be
the number of elements greater than
or equal "m". Then, if the total
number of elements is even, x must be
equal to y; otherwise, x+1 must be
equal to y.

Try a red-black tree. It should work quite well: with a binary search you get your O(log n) lookup, and it has O(log n) insert and remove as well, with rebalancing also done in O(log n).

As mentioned in previous answers, there is no way to find the median without touching every element of the data structure. If the algorithm you are looking for must be executed sequentially, then the best you can do is O(n). The deterministic selection algorithm (median-of-medians), also known as the BFPRT algorithm, solves the problem with a worst case of O(n). You can find more about it here: http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_Median_of_Medians_algorithm
However, the median-of-medians algorithm can be made to run faster than O(n) by making it parallel. Due to its divide-and-conquer nature, the algorithm can be "easily" made parallel. For instance, when dividing the input array into groups of 5 elements, you could potentially launch a thread for each sub-array, sort it and find its median within that thread. When this step finishes, the threads are joined and the algorithm is run again with the newly formed array of medians.
Note that such a design would only be beneficial on really large data sets. The additional overhead of spawning and joining threads makes it unfeasible for smaller sets. This has a bit of insight: http://www.umiacs.umd.edu/research/EXPAR/papers/3494/node18.html
Note that you can find asymptotically faster algorithms out there; however, they are not practical enough for daily use. Your best bet is the already mentioned sequential median-of-medians algorithm.
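As a point of reference, here is a minimal sequential sketch (in Java, with my own naming) of the grouping-into-fives step described above; the full BFPRT algorithm would then partition around the returned pivot and recurse on one side. In the parallel variant, the per-group sorting and median extraction is exactly the work each thread would do before joining:

import java.util.Arrays;

public class MedianOfMedians {
    // Approximate median of a[lo..hi): take the median of each group of 5,
    // then recurse on the array of group medians.
    static int pivot(int[] a, int lo, int hi) {
        int n = hi - lo;
        if (n <= 5) {
            Arrays.sort(a, lo, hi);                   // sorting at most 5 elements is O(1)
            return a[lo + n / 2];
        }
        int[] medians = new int[(n + 4) / 5];
        for (int g = 0; g < medians.length; g++) {
            int gLo = lo + g * 5;
            int gHi = Math.min(gLo + 5, hi);
            Arrays.sort(a, gLo, gHi);
            medians[g] = a[gLo + (gHi - gLo) / 2];
        }
        return pivot(medians, 0, medians.length);     // median of the group medians
    }
}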

Master Yoda's randomized algorithm has, of course, a minimum complexity of O(n) like any other, an expected complexity of O(n) (not O(log n)), and a maximum complexity of O(n^2), like Quicksort. It's still very good.
In practice, the "random" pivot choice might sometimes be a fixed location (without involving an RNG), because the initial array elements are known to be random enough (e.g. a random permutation of distinct values, or independent and identically distributed) or deduced from an approximate or exactly known distribution of input values.

I know a randomized algorithm with expected time complexity O(n).
Here is the algorithm:
Input: an array of n numbers A[1...n] (without loss of generality we can assume n is even).
Output: the n/2-th element of the sorted array.
Algorithm(A[1..n], k = n/2):
Pick a pivot index p uniformly at random from 1...n.
Partition the array around A[p] into two parts:
L - the elements <= A[p] (excluding the pivot itself)
R - the elements > A[p]
if (k == |L| + 1), the pivot (the element at position |L| + 1 after partitioning) is the median; stop
if (k <= |L|), recurse on (L, k)
else recurse on (R, k - (|L| + 1))
Complexity:
O(n) in expectation.
The proof is purely mathematical and about one page long. If you are interested, ping me.
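A minimal Java sketch of this kind of randomized selection (the structure and names are mine, not the answerer's; k is a 0-based rank here). Expected time is O(n), worst case O(n^2):

import java.util.Random;

public class RandomizedMedian {
    private static final Random RNG = new Random();

    // Returns the element of rank k (0-based) in a[lo..hi].
    static int select(int[] a, int lo, int hi, int k) {
        while (true) {
            if (lo == hi) return a[lo];
            int p = lo + RNG.nextInt(hi - lo + 1);    // pick a pivot index uniformly at random
            int pivot = a[p];
            swap(a, p, hi);
            int store = lo;
            for (int i = lo; i < hi; i++)             // Lomuto partition: <= pivot goes left
                if (a[i] <= pivot) swap(a, i, store++);
            swap(a, store, hi);                       // the pivot lands in its final position
            if (k == store) return a[store];
            else if (k < store) hi = store - 1;
            else lo = store + 1;
        }
    }

    static int median(int[] a) {
        return select(a.clone(), 0, a.length - 1, a.length / 2);   // upper median for even n
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}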

To expand on rwong's answer: here is some example code.
// partial_sort example
#include <iostream>
#include <algorithm>
#include <vector>
using namespace std;

int main () {
    int myints[] = {9,8,7,6,5,4,3,2,1};
    vector<int> myvector (myints, myints+9);
    vector<int>::iterator it;

    partial_sort (myvector.begin(), myvector.begin()+5, myvector.end());

    // print out content:
    cout << "myvector contains:";
    for (it=myvector.begin(); it!=myvector.end(); ++it)
        cout << " " << *it;
    cout << endl;

    return 0;
}
Output:
myvector contains: 1 2 3 4 5 9 8 7 6
The element in the middle (here myvector[4], which is 5) is the median.

Related

Find an algorithm for sorting integers with time complexity O(n + k*log(k))

Design an algorithm that sorts n integers where there are duplicates. The total number of different numbers is k. Your algorithm should have time complexity O(n + k*log(k)). The expected time is enough. For which values of k does the algorithm become linear?
I am not able to come up with a sorting algorithm for integers which satisfies the condition that it must be O(n + k*log(k)). I am not a very advanced programmer, but in the problem before this one I was supposed to come up with an algorithm for all numbers xi in a list, 0 ≤ xi ≤ m, that ran in O(n + m), where n was the number of elements in the list and m was the value of the biggest integer in the list. I solved that problem easily with counting sort, but I struggle with this one. The condition that makes it most difficult for me is the k*log(k) term inside the big-O notation; if that were n*log(n) instead, I would be able to use merge sort, right? But that's not possible now, so any ideas would be very helpful.
Thanks in advance!
Here is a possible solution:
Using a hash table, count the number of unique values and the number of duplicates of each value. This should have a complexity of O(n).
Enumerate the hashtable, storing the unique values into a temporary array. Complexity is O(k).
Sort this array with a standard algorithm such as mergesort: complexity is O(k.log(k)).
Create the resulting array by replicating each element of the sorted array of unique values the number of times stored in the hash table. Complexity is O(n) + O(k).
Combined complexity is O(n + k.log(k)).
For example, if k is a small constant, sorting an array of n values converges toward linear time as n becomes larger and larger.
If during the first phase, where k is computed incrementally, it appears that k is not significantly smaller than n, drop the hash table and just sort the original array with a standard algorithm.
The runtime of O(n + k*log(k)) indicates (like addition in runtimes often does) that you have 2 subroutines, one which runs in O(n) and the other that runs in O(k*log(k)).
You can first count the frequency of the elements in O(n) (for example in a HashMap; look this up if you're not familiar with it, it's very useful).
Then you just sort the unique elements, of which there are k. This sorting runs in O(k*log(k)); use any sorting algorithm you want.
At the end, replace each unique element with as many copies as it actually appeared, by looking this up in the map you created in step 1.
A possible Java solution can look like this (it uses HashSet, HashMap, ArrayList and Collections from java.util):
public List<Integer> sortArrayWithDuplicates(List<Integer> arr) {
    // Collect the distinct values and count frequencies: O(n)
    Set<Integer> set = new HashSet<>(arr);
    Map<Integer, Integer> freqMap = new HashMap<>();
    for (Integer i : arr) {
        freqMap.put(i, freqMap.getOrDefault(i, 0) + 1);
    }
    List<Integer> withoutDups = new ArrayList<>(set);

    // Sorting => O(k*log(k)), as there are k different elements
    Collections.sort(withoutDups);

    // Expand each distinct value by its frequency: O(n)
    List<Integer> result = new ArrayList<>();
    for (Integer i : withoutDups) {
        int c = freqMap.get(i);
        for (int j = 0; j < c; j++) {
            result.add(i);
        }
    }
    return result;
}
The time complexity of the above code is O(n + k*log(k)), and the solution is along the same lines as the answer above.

Big O - is n always the size of the input?

I made up my own interview-style problem, and have a question about the big O of my solution. I will state the problem and my solution below, but first let me say that the obvious solution involves a nested loop and is O(n^2). I believe I found an O(n) solution, but then I realized it depends not only on the size of the input, but on the largest value of the input. It seems like my running time of O(n) is only a technicality, and that it could easily run in O(n^2) time or worse in real life.
The problem is:
For each item in a given array of positive integers, print all the other items in the array that are multiples of the current item.
Example Input:
[2 9 6 8 3]
Example Output:
2: 6 8
9:
6:
8:
3: 9 6
My solution (in C#):
private static void PrintAllDivisibleBy(int[] arr)
{
    Dictionary<int, bool> dic = new Dictionary<int, bool>();
    if (arr == null || arr.Length < 2)
        return;

    int max = arr[0];
    for (int i = 0; i < arr.Length; i++)
    {
        if (arr[i] > max)
            max = arr[i];
        dic[arr[i]] = true;
    }

    for (int i = 0; i < arr.Length; i++)
    {
        Console.Write("{0}: ", arr[i]);
        int multiplier = 2;
        while (true)
        {
            int product = multiplier * arr[i];
            if (dic.ContainsKey(product))
                Console.Write("{0} ", product);
            if (product >= max)
                break;
            multiplier++;
        }
        Console.WriteLine();
    }
}
So, if two of the array items are 1 and n, where n is the array length, the inner while loop will run n times, making this equivalent to O(n^2). But, since the performance is dependent on the size of the input values, not the length of the list, that makes it O(n), right?
Would you consider this a true O(n) solution? Is it only O(n) due to technicalities, but slower in real life?
Good question! The answer is that, no, n is not always the size of the input: You can't really talk about O(n) without defining what the n means, but often people use imprecise language and imply that n is "the most obvious thing that scales here". Technically we should usually say things like "This sort algorithm performs a number of comparisons that is O(n) in the number of elements in the list": being specific about both what n is, and what quantity we are measuring (comparisons).
If you have an algorithm that depends on the product of two different things (here, the length of the list and the largest element in it), the proper way to express that is in the form O(m*n), and then define what m and n are for your context. So, we could say that your algorithm performs O(m*n) multiplications, where m is the length of the list and n is the largest item in the list.
An algorithm is O(n) when you have to iterate over n elements and perform some constant time operation in each iteration. The inner while loop of your algorithm is not constant time as it depends on the hugeness of the biggest number in your array.
Your algorithm's best case run-time is O(n). This is the case when all the n numbers are same.
Your algorithm's worst case run-time is O(k*n), where k is the maximum possible int value on your machine, if you really insist on putting an upper bound on k's value. For a 32-bit int the max value is 2,147,483,647. You can argue that this k is a constant, but this constant is clearly:
not fixed for every input array; and
not negligible.
Would you consider this a true O(n) solution?
The runtime actually is O(nm), where m is the maximum element of arr. If the elements in your array are bounded by a constant, you can consider the algorithm to be O(n).
Can you improve the runtime? Here's what else you can do. First notice that you can ensure that the elements are distinct (compress the array into a hashmap which stores how many times each element occurs in the array). Then your runtime would be max/a[0] + max/a[1] + max/a[2] + ... <= max/1 + max/2 + ... + max/max = O(max log(max)) (assuming your array arr is sorted and its elements are distinct). If you combine this with the obvious O(n^2) algorithm, you'd get an O(min(n^2, max*log(max))) algorithm.
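A rough Java sketch of that improvement (compress into a hash map, then walk the multiples of each distinct value up to the maximum). The output ordering and duplicate handling differ from the C# version above, so treat it as illustrative only:

import java.util.HashMap;
import java.util.Map;

public class Multiples {
    // For each distinct value v, print the distinct values in arr that are multiples of v.
    // Total work is the sum over distinct v of (max / v) <= max * (1 + 1/2 + ... + 1/max) = O(max log max).
    static void printMultiples(int[] arr) {
        Map<Integer, Integer> count = new HashMap<>();
        int max = 0;
        for (int x : arr) {                           // compress the array into a hash map
            count.merge(x, 1, Integer::sum);
            max = Math.max(max, x);
        }
        for (int v : count.keySet()) {
            StringBuilder line = new StringBuilder(v + ":");
            for (int product = 2 * v; product <= max; product += v)
                if (count.containsKey(product))
                    line.append(' ').append(product);
            System.out.println(line);
        }
    }
}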

Find the k largest elements in order

What is the fastest way to find the k largest elements in an array in order (i.e. starting from the largest element to the kth largest element)?
One option would be the following:
Using a linear-time selection algorithm like median-of-medians or introselect, find the kth largest element and rearrange the elements so that the first k positions contain the k largest elements.
Sort those first k elements using a fast sorting algorithm like heapsort or quicksort.
Step (1) takes time O(n), and step (2) takes time O(k log k). Overall, the algorithm runs in time O(n + k log k), which is very, very fast.
Hope this helps!
C++ also provides the partial_sort algorithm, which solves the problem of selecting the smallest k elements (sorted), with a time complexity of O(n log k). No algorithm is provided for selecting the greatest k elements since this should be done by inverting the ordering predicate.
For Perl, the module Sort::Key::Top, available from CPAN, provides a set of functions to select the top n elements from a list using several orderings and custom key extraction procedures. Furthermore, the Statistics::CaseResampling module provides a function to calculate quantiles using quickselect.
Python's standard library (since 2.4) includes heapq.nsmallest() and nlargest(), returning sorted lists, the former in O(n + k log n) time, the latter in O(n log k) time.
Radix sort solution:
Sort the array in descending order, using radix sort;
Print first K elements.
Time complexity: O(N*L), where L is the number of digits of the largest element; L can be assumed to be O(1).
Space used: O(N) for radix sort.
However, I think radix sort has costly overhead, making its linear time complexity less attractive.
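A possible Java sketch of this approach, assuming non-negative keys and 8-bit digits; it sorts ascending and then prints the last K entries in reverse, which is equivalent to sorting descending and printing the first K:

public class RadixTopK {
    // LSD radix sort (ascending) over 8-bit digits; assumes all values are non-negative.
    static void radixSort(int[] a) {
        int[] buf = new int[a.length];
        for (int shift = 0; shift < 32; shift += 8) {
            int[] count = new int[257];
            for (int x : a) count[((x >>> shift) & 0xFF) + 1]++;      // histogram of this digit
            for (int i = 0; i < 256; i++) count[i + 1] += count[i];   // prefix sums -> start offsets
            for (int x : a) buf[count[(x >>> shift) & 0xFF]++] = x;   // stable placement
            System.arraycopy(buf, 0, a, 0, a.length);
        }
    }

    // Print the K largest elements, largest first (assumes k <= a.length).
    static void printTopK(int[] a, int k) {
        radixSort(a);
        for (int i = a.length - 1; i >= a.length - k; i--)
            System.out.print(a[i] + " ");
        System.out.println();
    }
}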
1) Build a max-heap in O(n)
2) Use extract-max k times to get the k maximum elements from the max-heap: O(k log n)
Time complexity: O(n + k log n)
A C++ implementation using STL is given below:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int main() {
    int arr[] = {4,3,7,12,23,1,8,5,9,2};
    // Let's extract 3 maximum elements
    int k = 3;

    // First convert the array to a vector to use the STL
    vector<int> vec;
    for (int i = 0; i < 10; i++) {
        vec.push_back(arr[i]);
    }

    // Build a max-heap in O(n)
    make_heap(vec.begin(), vec.end());

    // Extract the max k times
    for (int i = 0; i < k; i++) {
        cout << vec.front() << " ";
        pop_heap(vec.begin(), vec.end());
        vec.pop_back();
    }

    return 0;
}
templatetypedef's solution is probably the fastest one, assuming you can modify or copy the input.
Alternatively, you can use a heap or a BST (a set in C++) to store the k largest elements at any given moment, then read the array's elements one by one. While this is O(n lg k), it doesn't modify the input and only uses O(k) additional memory. It also works on streams (when you don't know all the data from the beginning).
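A minimal Java sketch of that bounded-heap variant (a min-heap of size k; O(n log k) time, O(k) extra space, and the input is never modified):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

public class TopKHeap {
    // Returns the k largest elements of a, in descending order.
    static List<Integer> kLargestDescending(int[] a, int k) {
        PriorityQueue<Integer> heap = new PriorityQueue<>();   // min-heap holding the current top k
        for (int x : a) {
            if (heap.size() < k) {
                heap.offer(x);
            } else if (x > heap.peek()) {
                heap.poll();                                   // drop the smallest of the top k
                heap.offer(x);
            }
        }
        List<Integer> result = new ArrayList<>(heap);
        result.sort(Collections.reverseOrder());               // the final ordering costs O(k log k)
        return result;
    }
}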
Here's a solution with O(N + k lg k) complexity.
int[] kLargest_Dremio(int[] A, int k) {
    int[] result = new int[k];
    shouldGetIndex = true;
    int q = AreIndicesValid(0, A.Length - 1)
        ? RandomizedSelet(0, A.Length - 1, A.Length - k + 1)
        : -1;
    Array.Copy(A, q, result, 0, k);
    Array.Sort(result, (a, b) => b.CompareTo(a));   // sort the k largest in descending order
    return result;
}
AreIndicesValid and RandomizedSelet are defined in this github source file.
There was a question on performance & restricted resources.
Make a value class for the top 3 values. Use such an accumulator for reduction in a parallel stream. Limit the parallelism according to the context (memory, power).
class BronzeSilverGold {
    int[] values = new int[] {Integer.MIN_VALUE, Integer.MIN_VALUE, Integer.MIN_VALUE};

    // For reduction
    void add(int x) {
        ...
    }

    // For combining the results of two threads.
    void merge(BronzeSilverGold other) {
        ...
    }
}
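Purely as an illustration of what the elided bodies might look like (this is my guess, not the answerer's code), keeping values sorted in ascending order so that values[2] is the current maximum:

    void add(int x) {
        if (x > values[0]) {               // x beats the smallest of the current top three
            values[0] = x;
            if (values[0] > values[1]) { int t = values[0]; values[0] = values[1]; values[1] = t; }
            if (values[1] > values[2]) { int t = values[1]; values[1] = values[2]; values[2] = t; }
        }
    }

    void merge(BronzeSilverGold other) {
        for (int v : other.values) add(v);
    }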
The parallelism must be restricted in your setup, hence specify an N_THREADS in:
try {
    ForkJoinPool threadPool = new ForkJoinPool(N_THREADS);
    threadPool.submit(() -> {
        BronzeSilverGold result = IntStream.of(...).parallel().collect(
            BronzeSilverGold::new,
            BronzeSilverGold::add,
            (bsg1, bsg2) -> bsg1.merge(bsg2));
        ...
    }).get();
} catch (InterruptedException | ExecutionException e) {
    prrtl();
}

Sort name & time complexity

I "invented" "new" sort algorithm. Well, I understand that I can't invent something good, so I tried to search it on wikipedia, but all sort algorithms seems like not my. So I have three questions:
What is name of this algorithm?
Why it sucks? (best, average and worst time complexity)
Can I make it more better still using this idea?
So, idea of my algorithm: if we have an array, we can count number of sorted elements and if this number is less that half of length we can reverse array to make it more sorted. And after that we can sort first half and second half of array. In best case, we need only O(n) - if array is totally sorted in good or bad direction. I have some problems with evaluation of average and worst time complexity.
Code on C#:
public static void Reverse(int[] array, int begin, int end) {
    int length = end - begin;
    for (int i = 0; i < length / 2; i++)
        Algorithms.Swap(ref array[begin + i], ref array[begin + length - i - 1]);
}

public static bool ReverseIf(int[] array, int begin, int end) {
    int countSorted = 1;
    for (int i = begin + 1; i < end; i++)
        if (array[i - 1] <= array[i])
            countSorted++;
    int length = end - begin;
    if (countSorted <= length / 2)
        Reverse(array, begin, end);
    if (countSorted == 1 || countSorted == (end - begin))
        return true;
    else
        return false;
}

public static void ReverseSort(int[] array, int begin, int end) {
    if (begin == end || begin == end + 1)
        return;

    // if we use an if-statement (not while), then the array {2,3,1} transforms
    // into the array {2,1,3} and the algorithm stops
    while (!ReverseIf(array, begin, end)) {
        int pivot = begin + (end - begin) / 2;
        ReverseSort(array, begin, pivot + 1);
        ReverseSort(array, pivot, end);
    }
}

public static void ReverseSort(int[] array) {
    ReverseSort(array, 0, array.Length);
}
P.S.: Sorry for my English.
The best case is Theta(n), for, e.g., a sorted array. The worst case is Theta(n^2 log n).
Upper bound
Secondary subproblems have a sorted array preceded or succeeded by an arbitrary element. These are O(n log n). If preceded, we do O(n) work, solve a secondary subproblem on the first half and then on the second half, and then do O(n) more work – O(n log n). If succeeded, do O(n) work, sort the already sorted first half (O(n)), solve a secondary subproblem on the second half, do O(n) work, solve a secondary subproblem on the first half, sort the already sorted second half (O(n)), do O(n) work – O(n log n).
Now, in the general case, we solve two primary subproblems on the two halves and then slowly exchange elements over the pivot using secondary invocations. There are O(n) exchanges necessary, so a straightforward application of the Master Theorem yields a bound of O(n^2 log n).
Lower bound
For k >= 3, we construct an array A(k) of size 2^k recursively using the above analysis as a guide. The bad cases are the arrays [2^k + 1] + A(k).
Let A(3) = [1, ..., 8]. This sorted base case keeps Reverse from being called.
For k > 3, let A(k) = [2^(k-1) + A(k-1)[1], ..., 2^(k-1) + A(k-1)[2^(k-1)]] + A(k-1). Note that the primary subproblems of [2^k + 1] + A(k) are equivalent to [2^(k-1) + 1] + A(k-1).
After the primary recursive invocations, the array is [2^(k-1) + 1, ..., 2^k, 1, ..., 2^(k-1), 2^k + 1]. There are Omega(2^k) elements that have to move Omega(2^k) positions, and each of the secondary invocations that moves an element so far has O(1) sorted subproblems and thus is Omega(n log n).
Clearly more coffee is required – the primary subproblems don't matter. This makes it not too bad to analyze the average case, which is Theta(n^2 log n) as well.
With constant probability, the first half of the array contains at least half of the least quartile and at least half of the greatest quartile. In this case, regardless of whether Reverse happens, there are Omega(n) elements that have to move Omega(n) positions via secondary invocations.
It seems this algorithm, even if it performs horribly with "random" data (as demonstrated by Per in their answer), is quite efficient for "fixing up" arrays which are "nearly sorted". Thus if you choose to develop this idea further (I personally wouldn't, but if you wanted to think about it as an exercise), you would do well to focus on this strength.
This reference on Wikipedia, in the Inversion article, alludes to the issue very well. Mahmoud's book is quite insightful, noting that there are various ways to measure "sortedness". For example, if we use the number of inversions to characterize a "nearly sorted" array, then we can use insertion sort to sort it extremely quickly. However, if your arrays are "nearly sorted" in a slightly different way (e.g. a deck of cards which is cut or loosely shuffled), then insertion sort will not be the best sort to "fix up" the list.
Input: an array of size N that has already been sorted, with roughly N/k inversions.
I might do something like this for an algorithm:
Calculate the number of inversions (O(N lg(lg(N))), or assume it is small and skip this step).
If the number of inversions is < [threshold], sort the array using insertion sort (it will be fast; see the sketch after this list).
Otherwise the array is not close to being sorted; fall back to your favorite comparison (or better) sorting algorithm.
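For reference, a minimal Java sketch of step 2; insertion sort runs in O(N + d) time, where d is the number of inversions, so it is nearly linear on a nearly-sorted array:

public class NearlySorted {
    // Straight insertion sort: O(N + d), where d is the number of inversions.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {    // shift larger elements one slot to the right
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }
}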
There are better ways to do this, though; one can "fix up" such an array in as little as O(log(N)*(# new elements)) time if you preprocess your array enough or use the right data structure, such as an array with linked-list properties or similar that supports binary search.
You can generalize this idea even further. Whether "fixing up" an array will work depends on the kind of fixing-up that is required. Thus if you update these statistics whenever you add an element to the list or modify it, you can dispatch onto a good "fix-it-up" algorithm.
But unfortunately this would all be a pain to code. You might just be able to get away with a priority queue.

Algorithm for sum-up to 0 from 4 set

I have 4 arrays A, B, C, D of size n. n is at most 4000. The elements of each array are 30-bit (positive/negative) numbers. I want to know the number of ways A[i]+B[j]+C[k]+D[l] = 0 can be formed, where 0 <= i,j,k,l < n.
The best algorithm I derived is O(n^2 lg n); is there a faster algorithm?
OK, here is my O(n^2 lg(n^2)) algorithm.
Suppose there are four arrays A[], B[], C[], D[]. We want to find the number of ways A[i]+B[j]+C[k]+D[l] = 0 can be made, where 0 <= i,j,k,l < n.
So compute all possible pairwise sums of A[] and B[] and place them in another array E[] that contains n*n elements.
int k = 0;
for (i = 0; i < n; i++)
{
    for (j = 0; j < n; j++)
    {
        E[k++] = A[i] + B[j];
    }
}
The complexity of above code is O(n^2).
Do the same thing for C[] and D[].
int l = 0;
for (i = 0; i < n; i++)
{
    for (j = 0; j < n; j++)
    {
        AUX[l++] = C[i] + D[j];
    }
}
The complexity of above code is O(n^2).
Now sort AUX[] so that you can easily find the number of occurrences of each unique element in AUX[].
Sorting complexity of AUX[] is O(n^2 lg(n^2)).
Now declare a structure:
struct myHash
{
    int value;
    int valueOccuredNumberOfTimes;
} F[];
Now place in F[] each unique element of AUX[] together with the number of times it appears.
The complexity of this step is O(n^2).
possibleQuadruple = 0;
Now for each item of E[], do the following:
for (i = 0; i < k; i++)
{
    x = E[i];
    // find -x in structure F[] using binary search
    if (-x is found at position j)
    {
        possibleQuadruple += F[j].valueOccuredNumberOfTimes;
    }
}
For the loop over i, a total of n^2 iterations are performed, and in each
iteration the binary search does lg(n^2) comparisons. So the overall
complexity is O(n^2 lg(n^2)).
The number of ways 0 can be reached is possibleQuadruple.
Now you can use an STL map or binary search. But an STL map is slow, so it's better to use binary search.
Hope my explanation is clear enough to understand.
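For comparison, here is a compact Java sketch of the same pair-sum idea, using a hash map of the C+D sums instead of sorting AUX[] and binary searching; the names are mine, and the trade-off is roughly O(n^2) expected time against a large hash map:

import java.util.HashMap;
import java.util.Map;

public class FourArraySumCount {
    // Counts tuples (i, j, k, l) with A[i] + B[j] + C[k] + D[l] == 0.
    static long countZeroSums(int[] A, int[] B, int[] C, int[] D) {
        Map<Long, Integer> pairSums = new HashMap<>();          // each C[k] + D[l] sum and its multiplicity
        for (int c : C)
            for (int d : D)
                pairSums.merge((long) c + d, 1, Integer::sum);

        long count = 0;
        for (int a : A)
            for (int b : B)
                count += pairSums.getOrDefault(-((long) a + b), 0);
        return count;
    }
}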
I disagree that your solution is in fact as efficient as you say. In your solution, populating E[] and AUX[] is O(N^2) each, so 2*N^2. These will each have N^2 elements.
Generating x = O(N)
Sorting AUX = O((2N)*log((2N)))
The binary search for E[i] in AUX[] is based on N^2 elements to be found among N^2 elements.
Thus you are still doing N^4 work, plus the extra work of generating the intermediate arrays and sorting the N^2 elements in AUX[].
I have a solution (work in progress), but I find it very difficult to calculate how much work it is. I deleted my previous answer. I will post something when I am more sure of myself.
I need to find a way to compare O(X)+O(Z)+O(X^3)+O(X^2)+O(Z^3)+O(Z^2)+X.log(X)+Z.log(Z) to O(N^4), where X+Z = N.
It is clearly less than O(N^4) ... but by how much? My math is failing me here...
The judgement is wrong. The supplied solution generates arrays of size N^2. It then operates on these arrays (sorting, etc.).
Therefore the order of work, which would normally be O(n^2.log(n)), should have n substituted with n^2. The result is therefore O((n^2)^2.log(n^2)).
