Explanation and complexity for visible points in CodeFights - C++11

I came across this question on CodeFights/CodeSignal:
Given an array of points on a plane, find the maximum number of points that are visible from the origin within a viewing angle of 45 degrees.
int visiblePoints(std::vector<std::vector<int>> points) {
    const double pi = M_PI, pi_45 = M_PI_4, pi_360 = M_PI * 2.0;
    const double epsilon = 1e-10;
    int n = points.size(), result = 0;
    vector<double> angles(n);
    for (int i = 0; i < n; i++) {
        double angle = atan2(points[i][1], points[i][0]);
        angles[i] = angle;
        if (angle < pi_45 - pi) {
            // duplicate angles near -pi (shifted by 2*pi) so windows wrapping around +/-pi are counted
            angles.push_back(angle + pi_360);
        }
    }
    sort(angles.begin(), angles.end());
    for (auto it = angles.begin(); it != angles.begin() + n; ++it) {
        auto bound = upper_bound(it, angles.end(), *it + (pi_45 + epsilon));
        int curr = distance(it, bound);
        if (curr > result) {
            result = curr;
        }
    }
    return result;
}
So the code is fine; I can figure out what is happening here. I just wanted to check: is the time complexity O(N log N)?
The first for loop takes O(N). points is an array of several points in 2D, for example points = [[1,1],[3,1],.....].
Then we have the sorting part. I am assuming that sort takes O(N log N). Of course, quicksort in the worst case takes O(N^2), but for now I will ignore that fact (std::sort is guaranteed O(N log N) in C++11 anyway).
And then the last loop runs N times, with an O(log N) upper_bound in each iteration.
Also, will the space complexity in this scenario be O(1) or O(N) (due to the sorting)?
Thank you

You can use two pointers, so the scan is just O(N), not counting the sort:
int l = 0, r = 0, res = 0;
while (l < N) {
    while (r < N + l && angles[r] - angles[l] < M_PI_4 + eps) ++r;
    res = max(res, r - l);
    ++l;
}
return res;
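For completeness, here is what the whole solution might look like with that scan (a sketch under the assumption that every angle is duplicated shifted by 2*pi, so windows that wrap around +/-pi are still counted; both pointers only ever move forward, hence the linear scan):
#include <algorithm>
#include <cmath>
#include <vector>

int visiblePointsTwoPointer(const std::vector<std::vector<int>>& points) {
    const double eps = 1e-10;
    const int N = points.size();
    std::vector<double> angles;
    angles.reserve(2 * N);
    for (const auto& p : points)
        angles.push_back(std::atan2(p[1], p[0]));
    std::sort(angles.begin(), angles.end());
    for (int i = 0; i < N; ++i)                  // duplicate each angle shifted by 2*pi
        angles.push_back(angles[i] + 2.0 * M_PI);
    int l = 0, r = 0, res = 0;
    while (l < N) {                              // O(N): l and r never move backwards
        while (r < N + l && angles[r] - angles[l] < M_PI_4 + eps) ++r;
        res = std::max(res, r - l);
        ++l;
    }
    return res;
}
To answer the space part of the question: it is O(N) either way, since the angle array itself is Θ(N), and std::sort may use up to O(log N) auxiliary space on top of that.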

Related

Complexity of backtracking algorithm

I tried to solve this problem using backtracking, but I am not sure about the complexity of the algorithm (or whether the algorithm is correct), or what an algorithm with a better complexity would be.
Given 2 positive integers n and m, we call legal a sequence of integers if:
the length of the sequence is n
the elements in the sequence are between 1 and m
the element in position i of the sequence, 1 < i <= n, is a divisor of the element in position i-1
Count the number of legal sequences. Expected complexity of the algorithm is O(m² + nm)
This is my algorithm in C:
#include <stdio.h>

// n length of the sequence
// m maximum valid number
// l number of remaining positions in the sequence
// p previous number in the sequence
int legal(int n, int m, int l, int p) {
    if (l == 0)
        return 1;
    int q = 0;
    for (int i = 1; i <= m; i++) {
        if (p % i == 0 || l == n)
            q += legal(n, m, l - 1, i);
    }
    return q;
}

int main() {
    int n, m;
    scanf("%d", &n);
    scanf("%d", &m);
    printf("%d\n", legal(n, m, n, 0));
}
I think the complexity of my algorithm is O(nmS(n)) with S(n) = the number of legal sequences
You are correct that your program runs in time proportional to the solution space of the problem. For this type of problem, your solution is sub-optimal for large input (say n = m = 100). That is because the solution space grows exponentially in relation to m and n. Here is a solution that uses memoization to avoid re-computations:
#include <cstdio>
#define LIMIT 101
#define DIRTY -1

long long cache[LIMIT][LIMIT];

void clear_cache() {
    for (int i = 0; i < LIMIT; i++) {
        for (int j = 0; j < LIMIT; j++) {
            // mark all entries in cache as dirty
            cache[i][j] = DIRTY;
        }
    }
}

long long legal_seqs(int curr_len, int prev_num, int seq_len, int max_num) {
    // base case
    if (curr_len == seq_len) return 1;
    // if we haven't seen this sub-problem, compute it!
    // this is called memoization
    if (cache[curr_len][prev_num] == DIRTY) {
        long long ways = 0;
        // try every next_num that divides prev_num
        for (int next_num = 1; next_num <= max_num; next_num++) {
            if (prev_num % next_num == 0) {
                ways += legal_seqs(curr_len + 1, next_num, seq_len, max_num);
            }
        }
        cache[curr_len][prev_num] = ways;
    }
    return cache[curr_len][prev_num];
}

int main() {
    int n, m;
    scanf("%d%d", &n, &m);
    clear_cache();
    printf("%lld\n", legal_seqs(0, 0, n, m));
}
The code above avoids the exponential blow-up: there are O(nm) cache states and each is computed once with an O(m) loop, so it runs in O(nm^2) time. To get close to the expected O(m^2 + nm) bound, don't test divisibility against every candidate; iterate over multiples instead, so each state only visits its actual transitions. The per-position transition work is then m/1 + m/2 + ... + m/m = O(m log m), for O(nm log m) overall, within a logarithmic factor of the expected bound.
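Here is a minimal bottom-up sketch of that variant (my own illustration, not part of the original answer). f[v] counts legal prefixes of the current length whose last element is v; extending a prefix means the next element v must divide the previous element u, so we iterate over the multiples u of each v:
#include <cstdio>
#include <vector>

int main() {
    int n, m;
    if (std::scanf("%d%d", &n, &m) != 2) return 1;
    // length-1 prefixes: exactly one per value 1..m
    std::vector<long long> f(m + 1, 1), g(m + 1);
    for (int len = 2; len <= n; ++len) {
        g.assign(m + 1, 0);
        for (int v = 1; v <= m; ++v)           // candidate next element v ...
            for (int u = v; u <= m; u += v)    // ... divides every previous element u
                g[v] += f[u];
        f.swap(g);
    }
    long long total = 0;   // like the original cache, counts can overflow for large n, m
    for (int v = 1; v <= m; ++v) total += f[v];
    std::printf("%lld\n", total);
    return 0;
}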

What is the total running time of the following code:

I concluded that this code takes O(N log N) time to run: when the object is created, the constructor's loop takes O(N) time, and the for-loop below takes log N time per update. But I am not completely sure about it; that's why I am asking here.
Z z = new Z(N);
for (int i = 0; i < N-1; i = i+2)
    z.update(i, i+1);
Class Z:
public class Z
{
    int[] next, prev;

    Z(int N)
    // put each element i in a list of its own
    {
        prev = new int[N];
        next = new int[N];
        for (int i = 0; i < N; ++i)
        {
            next[i] = i;
            prev[i] = i;
        }
    }

    int first(int i)
    // return first element of list containing i
    {
        while (i != prev[i]) i = prev[i];
        return i;
    }

    int last(int i)
    // return last element of list containing i
    {
        while (i != next[i]) i = next[i];
        return i;
    }

    void update(int i, int j)
    {
        int f = first(j);
        int l = last(i);
        next[l] = f;
        prev[f] = l;
    }

    boolean query(int i, int j)
    {
        return last(i) == last(j);
    }
}
The total running time is only O(N).
The constructor's loop has O(N) steps.
It creates the next/prev arrays as [0, 1, ..., N-1].
z.update(i, i+1) takes only O(1) time. Since you only call update() once for each i = i and j = i+1, first(j) and last(i) will return j and i, respectively.
It is not possible to bound the complexity of first() and last() under general conditions, as they could easily contain infinite loops (for instance, last(0) when next = [1, 0] cycles between 0 and 1 forever). However, in the example given they always skip the while loop entirely, as each call to these functions is made on an index that has not yet been modified.
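A short trace makes this concrete. With N = 4 the constructor yields next = prev = [0, 1, 2, 3]. The call update(0, 1) evaluates first(1) (prev[1] == 1, so it returns immediately) and last(0) (next[0] == 0, likewise), then links the lists: next = [1, 1, 2, 3], prev = [0, 0, 2, 3]. The next call, update(2, 3), only touches indices 2 and 3, which are still in their initial single-element state, so again both while loops are skipped.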
In the general case each update() call can take O(N) time, because first() and last() may walk the whole list. You run the loop N/2 times, and because you ignore the constant this is N calls, giving a total runtime of O(N^2) in the general case. There is no logarithm.
Here is my analysis:
Z z = new Z(N);                     // O(n)
for (int i = 0; i < N-1; i = i+2)   // O(n) iterations
    z.update(i, i+1);               // O(1) per call
Hence, the total running time will be O(n).
int first(int i)
{
    while (i != prev[i]) i = prev[i];  // O(1): i always equals prev[i]
                                       // for this call pattern
    return i;
}
int last(int i)
{
    while (i != next[i]) i = next[i];  // O(1): i always equals next[i]
                                       // for this call pattern
    return i;
}
void update(int i, int j)
{
    int f = first(j);  // O(1)
    int l = last(i);   // O(1)
    next[l] = f;       // O(1)
    prev[f] = l;       // O(1)
}

How to find all taxicab numbers less than N?

A taxicab number is an integer that can be expressed as the sum of two cubes of integers in two different ways: a^3 + b^3 = c^3 + d^3. Design an algorithm to find all taxicab numbers with a, b, c, and d less than N.
Please give both the space and time complexity in terms of N.
I could do it in O(N^2 log N) time with O(N^2) space.
Best algorithm I've found so far:
Form all pairs: N^2
Sort the sums: N^2 log N
Find duplicates less than N
But this takes N^2 space. Can we do better?
There exists an O(N) space solution based on a priority queue. Time complexity is O(N^2 log N). To sketch out the idea of the algorithm, here is the matrix M such that M[i][j] = i^3 + j^3 (of course, the matrix is never created in memory):
0 1 8 27 64 125
1 2 9 28 65 126
8 9 16 35 72 133
27 28 35 54 91 152
64 65 72 91 128 189
125 126 133 152 189 250
Observe that every line and every row is sorted in ascending order. Let PQ be the priority queue. First we put the biggest element in the priority queue. Then perform the following, as long as the PQ is not empty:
Pop the biggest element from PQ
add the adjacent element above if the PQ doesn't have any element from that row
add the adjacent element on the left if the PQ doesn't have any element from that column, and if it is not under the diagonal of the matrix (to avoid redundant elements)
Note that
You don't need to create the matrix in memory to implement the algorithm
The elements will be popped from the PQ in descending order, from the biggest element of the matrix to its smallest one (avoiding elements from the redundant half of the matrix).
Every time the PQ issues the same value twice, we have found a taxicab number.
As an illustration, here is an implementation in C++. The time complexity is O(N^2 log N) and the space complexity O(N).
#include <iostream>
#include <cassert>
#include <queue>
using namespace std;

// 64-bit: 2000^3 does not fit in 32 bits
typedef unsigned long long value_type;

struct Square
{
    value_type i;
    value_type j;
    value_type sum_of_cubes;

    Square(value_type i, value_type j) : i(i), j(j), sum_of_cubes(i*i*i + j*j*j) {}

    friend class SquareCompare;

    bool taxicab(const Square& sq) const
    {
        return sum_of_cubes == sq.sum_of_cubes && i != sq.i && i != sq.j;
    }

    friend ostream& operator<<(ostream& os, const Square& sq);
};

class SquareCompare
{
public:
    bool operator()(const Square& a, const Square& b)
    {
        return a.sum_of_cubes < b.sum_of_cubes;
    }
};

ostream& operator<<(ostream& os, const Square& sq)
{
    return os << sq.i << "^3 + " << sq.j << "^3 = " << sq.sum_of_cubes;
}

int main()
{
    const value_type N = 2001;
    value_type count = 0;

    bool in_i[N];
    bool in_j[N];
    for (value_type i = 0; i < N; i++) {
        in_i[i] = false;
        in_j[i] = false;
    }

    priority_queue<Square, vector<Square>, SquareCompare> p_queue;
    p_queue.push(Square(N-1, N-1));
    in_i[N-1] = true;
    in_j[N-1] = true;

    while (!p_queue.empty()) {
        Square sq = p_queue.top();
        p_queue.pop();
        in_i[sq.i] = false;
        in_j[sq.j] = false;
        // cout << "pop " << sq.i << " " << sq.j << endl;
        if (sq.i > 0 && !in_i[sq.i - 1] && sq.i-1 >= sq.j) {
            p_queue.push(Square(sq.i-1, sq.j));
            in_i[sq.i-1] = true;
            in_j[sq.j] = true;
            // cout << "push " << sq.i-1 << " " << sq.j << endl;
        }
        if (sq.j > 0 && !in_j[sq.j-1] && sq.i >= sq.j - 1) {
            p_queue.push(Square(sq.i, sq.j-1));
            in_i[sq.i] = true;
            in_j[sq.j - 1] = true;
            // cout << "push " << sq.i << " " << sq.j-1 << endl;
        }
        // guard against calling top() on an empty queue (last iteration)
        if (!p_queue.empty() && sq.taxicab(p_queue.top())) {
            /* taxicab number */
            cout << sq << " " << p_queue.top() << endl;
            count++;
        }
    }
    cout << endl;
    cout << "there are " << count << " taxicab numbers with a, b, c, d < " << N << endl;
    return 0;
}
The answers given by Novneet Nov and user3017842 are both correct ideas for finding the taxicab numbers with O(N) storage using a minHeap.
Just a little bit more explanation of why the minHeap of size N works.
First, if you had all the sums (O(N^2)) and could sort them (O(N^2 log N)), you would just pick the duplicates as you traverse the sorted array. Well, in our case, using a minHeap we can traverse all the sums in order: we just need to ensure that the minHeap always contains the minimum unprocessed sum.
Now, we have a huge number of sums (O(N^2)). But notice that this number can be split into N groups, each of which has an easily defined minimum!
(Fix a, change b from 0 to N-1 => here are your N groups. The sum in one group with a smaller b is smaller than one with a bigger b in the same group, because a is the same.)
The minimum of the union of these groups is in the union of the minimums of these groups. Therefore, if you keep all the group minimums in the minHeap, you are guaranteed to have the total minimum in the minHeap.
Now, when you extract the min from the heap, you just add the next smallest element from the group of the extracted min (so if you extracted (a, b) you add (a, b+1)), and you are guaranteed that your minHeap still contains the next unprocessed min of all the sums.
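A minimal sketch of this scheme (my own illustration, assuming the heap is seeded with the group minimums (a, a) for a = 1..N-1, and that two consecutive equal sums signal a taxicab number):
#include <cstdio>
#include <queue>
#include <vector>

struct Pair {
    long long sum;   // a^3 + b^3
    int a, b;
    bool operator>(const Pair& o) const { return sum > o.sum; }
};

static Pair makePair(int a, int b) {
    return { (long long)a * a * a + (long long)b * b * b, a, b };
}

int main() {
    const int N = 2001;   // a, b, c, d < N
    std::priority_queue<Pair, std::vector<Pair>, std::greater<Pair>> pq;
    for (int a = 1; a < N; ++a) pq.push(makePair(a, a));   // one minimum per group: O(N) entries
    long long prevSum = -1;
    while (!pq.empty()) {
        Pair p = pq.top(); pq.pop();                        // smallest unprocessed sum
        if (p.sum == prevSum)
            std::printf("%lld\n", p.sum);                   // two pairs share the sum: taxicab number
        prevSum = p.sum;
        if (p.b + 1 < N) pq.push(makePair(p.a, p.b + 1));   // next member of the popped group
    }
    return 0;
}
The heap never holds more than N entries, and each of the O(N^2) sums passes through it once, giving the O(N^2 log N) time / O(N) space mentioned above.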
I found the solution/code here: Time complexity O(N^2 log N), space complexity O(N).
The solution is implemented with the help of priority queues.
Working backwards from the code makes it easy to see: it can be done in an array of size N because each minimum sum is deleted from the array after being compared to the next minimum, and the array is then topped back up to size N by adding a new sum, (i^3 + (j+1)^3).
An intuitive proof:
Initially, we have added (1,1),(2,2),(3,3),...,(N,N) to the min-priority queue.
Suppose a^3 + b^3 = c^3 + d^3, and (a,b) is the minimum that will be taken out of the priority queue next. To be able to detect this taxicab number, (c,d) must also be in the priority queue, to be taken out after (a,b).
Note: We would be adding (a,b+1) after extracting (a,b), so there is no way that the extraction of (a,b) would result in the addition of (c,d) to the priority queue; it must already exist in the priority queue.
Now let's assume that (c,d) is not in the priority queue, because we haven't gotten to it yet. Instead, there is some (c,d−k) in the priority queue where k > 0.
Since (a,b) is being taken out,
a^3 + b^3 ≤ c^3 + (d−k)^3
However, a^3 + b^3 = c^3 + d^3. Therefore,
c^3 + d^3 ≤ c^3 + (d−k)^3
d ≤ d − k
k ≤ 0
Since k>0, this is impossible. Thus our assumption can never come to pass.
Thus for every (a,b) which is being removed from the min-PQ, (c,d) is already in the min-PQ (or was just removed) if a^3+b^3=c^3+d^3
The time complexity of the algorithm can't be less than O(N^2) in any case, since you might print up to O(N^2) taxicab numbers.
To reduce space usage you could, in theory, use the suggestion mentioned here: little link. Basically, the idea is that first you try all possible pairs a, b and find the solution to this:
a = 1 − (p − 3q)(p^2 + 3q^2)
b = −1 + (p + 3q)(p^2 + 3q^2)
Then you can find the appropriate c, d pair using:
c = (p + 3q) − (p^2 + 3q^2)
d = −(p − 3q) + (p^2 + 3q^2)
and check whether they are both less than N. The issue here is that solving that system of equations might get a bit messy (by 'a bit' I mean very tedious).
The O(N^2) space solution is much simpler, and it'd probably be efficient enough, since anything of quadratic time complexity that can run in reasonable time limits will probably be fine with quadratic space usage.
I hope that helped!
version1 uses List and sorting
O(n^2*logn) time and O(n^2) space
public static void Taxicab1(int n)
{
    // O(n^2) time and O(n^2) space
    var list = new List<int>();
    for (int i = 1; i <= n; i++)
    {
        for (int j = i; j <= n; j++)
        {
            list.Add(i * i * i + j * j * j);
        }
    }
    // O(n^2*log(n^2)) time
    list.Sort();
    // O(n^2) time
    int prev = -1;
    foreach (var next in list)
    {
        if (prev == next)
        {
            Console.WriteLine(prev);
        }
        prev = next;
    }
}
version2 uses HashSet
O(n^2) time and O(n^2) space
public static void Taxicab2(int n)
{
    // O(n^2) time and O(n^2) space
    var set = new HashSet<int>();
    for (int i = 1; i <= n; i++)
    {
        for (int j = i; j <= n; j++)
        {
            int x = i * i * i + j * j * j;
            if (!set.Add(x))
            {
                Console.WriteLine(x);
            }
        }
    }
}
version3 uses min oriented Priority Queue
O(n^2*logn) time and O(n) space
public static void Taxicab3(int n)
{
    // O(n) time and O(n) space
    var pq = new MinPQ<SumOfCubes>();
    for (int i = 1; i <= n; i++)
    {
        pq.Push(new SumOfCubes(i, i));
    }
    // O(n^2*logn) time
    var sentinel = new SumOfCubes(0, 0);
    while (pq.Count > 0)
    {
        var current = pq.Pop();
        if (current.Result == sentinel.Result)
            Console.WriteLine($"{sentinel.A}^3+{sentinel.B}^3 = {current.A}^3+{current.B}^3 = {current.Result}");
        if (current.B < n) // keep b within 1..n (pushing at B == n would create b = n+1)
            pq.Push(new SumOfCubes(current.A, current.B + 1));
        sentinel = current;
    }
}
where SumOfCubes is:
public class SumOfCubes : IComparable<SumOfCubes>
{
    public int A { get; private set; }
    public int B { get; private set; }
    public int Result { get; private set; }

    public SumOfCubes(int a, int b)
    {
        A = a;
        B = b;
        Result = a * a * a + b * b * b;
    }

    public int CompareTo(SumOfCubes other)
    {
        return Result.CompareTo(other.Result);
    }
}
Create an array: 1^3, 2^3, 3^3, ....... k^3, such that k^3 < N and (k+1)^3 > N. The array size would be ~N^(1/3), and the array is in sorted order.
Use the 2-sum technique (link) in linear time proportional to the array size. If we find 2 pairs of numbers, that is a hit.
Loop through step 2, decreasing N by 1 each time.
This will use O(N^(1/3)) extra space and ~O(N^(4/3)) time; a sketch follows.
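Here is a rough sketch of those three steps (my own illustration of the idea, with the details assumed): cubes[] holds 1^3..k^3 with k^3 < N, and for each candidate T below N a two-pointer scan counts the pairs summing to T.
#include <cstdio>
#include <vector>

int main() {
    const long long N = 100000;
    std::vector<long long> cubes;                   // step 1: all cubes below N, sorted
    for (long long i = 1; i * i * i < N; ++i) cubes.push_back(i * i * i);
    for (long long T = N - 1; T >= 2; --T) {        // step 3: decrease the target by 1 each time
        int lo = 0, hi = (int)cubes.size() - 1, pairs = 0;
        while (lo <= hi) {                          // step 2: 2-sum scan, linear in the array size
            long long s = cubes[lo] + cubes[hi];
            if (s == T)      { ++pairs; ++lo; --hi; }
            else if (s < T)  ++lo;
            else             --hi;
        }
        if (pairs >= 2) std::printf("%lld\n", T);   // two distinct representations: taxicab
    }
    return 0;
}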
An easy way of understanding the "time complexity O(N^2 log N), space complexity O(N)" solution is to think of it as a merge of N sorted arrays plus a bookkeeping of the previously merged element.
It seems like a simple brute-force algorithm with proper bounds solves it in time proportional to n^1.33 and space proportional to n. Or could anyone point me to the place where I'm mistaken?
Consider 4 nested loops, each running from 1 to the cube root of n. Using these loops we can go over all possible combinations of 4 values and find the pairs forming taxicab numbers. Each loop takes time proportional to the cube root of n, or n^(1/3). Multiplying this value by itself 4 times we get:
(n^(1/3))^4 = n^(4/3) = n^1.33
I wrote a solution in JavaScript and benchmarked it, and it seems to be working. One caveat is that the result is only partially sorted.
Here is my JavaScript code (it's not optimal yet, could be optimized even more):
function taxicab(n) {
    let a = 1, b = 1, c = 1, d = 1,
        cubeA = a**3 + b**3,
        cubeB = c**3 + d**3,
        results = [];
    while (cubeA < n) { // loop over a
        while (cubeA < n) { // loop over b
            // avoid running nested loops if this number is already in results
            if (results.indexOf(cubeA) === -1) {
                while (cubeB <= cubeA) { // loop over c
                    while (cubeB <= cubeA) { // loop over d
                        if (cubeB === cubeA && a != c && a != d) { // found a taxicab number!
                            results.push(cubeA);
                        }
                        d++;
                        cubeB = c**3 + d**3;
                    } // end loop over d
                    c++;
                    d = c;
                    cubeB = c**3 + d**3;
                } // end loop over c
            }
            b++;
            cubeA = a**3 + b**3;
            c = d = 1;
            cubeB = c**3 + d**3;
        } // end loop over b
        a++;
        b = a;
        cubeA = a**3 + b**3;
    } // end loop over a
    return results;
}
Running taxicab(1E8) takes around 30 seconds in a browser console and yields 485 numbers as a result. A ten times smaller value, taxicab(1E7) (10 million), takes almost 1.4 seconds and yields 150 numbers. 10^1.33 * 1.4 = 29.9, i.e. multiplying n by 10 increases the running time by a factor of 10^1.33. The result array is unsorted, but after quickly sorting it we get the correct result, as it seems:
[1729, 4104, 13832, 20683, 32832, 39312, 40033, 46683, 64232, 65728,
110656, 110808, 134379, 149389, 165464, 171288, 195841, 216027, 216125,
262656, 314496, 320264, 327763, 373464, 402597, 439101, 443889, 513000,
513856, 515375, 525824, 558441, 593047, 684019, 704977, 805688, 842751,
885248, 886464, 920673, 955016, 984067, 994688, 1009736, 1016496, 1061424,
1073375, 1075032, 1080891, 1092728, 1195112, 1260441, 1323712, 1331064,
1370304, 1407672, 1533357, 1566728, 1609272, 1728216, 1729000, 1734264,
1774656, 1845649, 2048391, 2101248, 2301299, 2418271, 2515968, 2562112,
2585375, 2622104, 2691451, 2864288, 2987712, 2991816, 3220776, 3242197,
3375001, 3375008, 3511872, 3512808, 3551112, 3587409, 3628233, 3798613,
3813992, 4033503, 4104000, 4110848, 4123000, 4174281, 4206592, 4342914,
4467528, 4505949, 4511808, 4607064, 4624776, 4673088, …]
Here is the code for benchmarking:
// run taxicab(n) for k trials and return the average running time
function benchmark(n, k) {
    let t = 0;
    k = k || 1; // how many times to repeat the trial to get an averaged result
    for (let i = 0; i < k; i++) {
        let t1 = new Date();
        taxicab(n);
        let t2 = new Date();
        t += t2 - t1;
    }
    return Math.round(t / k);
}
Finally, I tested it:
let T = benchmark(1E7, 3); // 1376 - running time for n = 10 million
let T2 = benchmark(2E7, 3);// 4821 - running time for n = 20 million
let powerLaw = Math.log2(T2/T); // 1.3206693816701993
So it means time is proportional to n^1.32 in this test. Repeating this many times with different values always yields around the same result: from 1.3 to 1.4.
First of all, we will construct the taxicab numbers instead of searching for them. The range we will use to construct a taxicab number, i.e. Ta(2), will go up to n^(1/3), not n, because if you cube a number bigger than n^(1/3) it will be bigger than n (and we can't cube negative numbers, which are excluded by definition). We will use a HashSet to remember the sums of two cubed numbers in the algorithm. This lets us look up previous cube sums in O(1) time while we iterate over every possible pair of numbers in the range mentioned earlier.
Time complexity: O(n^(2/3))
Space complexity: O(n^(1/3))
import math

def taxicab_numbers(n: int) -> list[int]:
    taxicab_numbers = []
    max_num = math.floor(n ** (1. / 3.))
    seen_sums = set()
    for i in range(1, max_num + 1):
        for j in range(i, max_num + 1):
            cube_sum = i ** 3 + j ** 3
            if cube_sum in seen_sums:
                taxicab_numbers.append(cube_sum)
            else:
                seen_sums.add(cube_sum)
    return taxicab_numbers
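As a quick sanity check (under these assumptions), taxicab_numbers(50000) should produce 1729, 4104, 13832, 20683, 32832, 39312, 40033 and 46683, matching the start of the sorted list shown in the JavaScript answer above (the order of discovery may differ slightly).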
import java.util.*;

public class A5Q24 {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter number:");
        int n = sc.nextInt();
        // check every int less than the input
        for (int a = 2; a <= n; a++) {
            int count = 0;
            // count the ways the number can be expressed as a sum of two cubes
            for (int i = 1; Math.pow(i, 3) < a; i++) {
                // the outer loop stops once the cube of the smaller number exceeds the number
                for (int j = 1; j <= i; j++) {
                    if (Math.pow(i, 3) + Math.pow(j, 3) == a)
                        count++;
                }
            }
            if (count == 2)
                System.out.println(a);
        }
        sc.close();
    }
}
I think we can also do better on time, O(N^2), with O(N^2) memory, using a hashmap to check whether a pair of cubes has already been seen. In Python:
from typing import List, Tuple

def find_taxicab_numbers(n: int) -> List[Tuple[int, int, int, int, int]]:
    """
    find all taxicab numbers smaller than n, i.e. integers that can be expressed as the sum of two cubes of positive
    integers in two different ways so that a^3 + b^3 = c^3 + d^3.
    Time: O(n ^ 2) (two loops, one dict lookup). Space: O(n ^ 2) (all possible cube sums)
    :param n: upper bound for a, b, c, d
    :return: list of tuples of int: a, b, c, d, and taxicab numbers
    """
    cubes = [i ** 3 for i in range(n)]
    seen_sum_cubes = dict()  # mapping sum of cubes -> (a, b)
    taxicabs = list()  # list of (a, b, c, d, taxicab)
    # check all possible sums of cubes
    for i in range(n):
        for j in range(i):
            sum_cubes = cubes[i] + cubes[j]
            if sum_cubes in seen_sum_cubes:
                prev_i, prev_j = seen_sum_cubes[sum_cubes]
                taxicabs.append((i, j, prev_i, prev_j, sum_cubes))
            else:
                seen_sum_cubes[sum_cubes] = (i, j)
    return taxicabs

A data structure problem

Given a sequence of integers, there are a number of queries.
Each query has a range [l, r], and you are to find the median of the given range [l, r].
The number of queries can be as large as 100,000.
The length of the sequence can be as large as 100,000.
I wonder if there is any data structure that can support such queries.
My solution:
I consulted my partner today and he told me to use a partition tree.
We can build a partition tree in O(n log n) time and answer each query in O(log n) time.
The partition tree mirrors the process of merge sort, but each node in the tree saves the number of integers that go to the left subtree. Thus, we can use this information to answer the query.
Here is my code. This program finds the x in a given interval [l, r] that minimizes the following equation:
(equation image: http://acm.tju.edu.cn/toj/3556_01.jpg)
Explanation:
seq saves the sequence
pos saves the position after sorting
ind saves the index
cntL saves the number of integers that go to the left subtree in a given range
#include <cstdio>
#include <cstring>
#include <algorithm>
using namespace std;

#define N 100008
typedef long long LL;

int n, m, seq[N], ind[N], pos[N], nxt[N];   // 'nxt' avoids a clash with std::next under 'using namespace std'
int cntL[20][N];
LL sum[20][N], sumL, subSum[N];

void build(int l, int r, int head, int dep)
{
    if (l == r)
    {
        cntL[dep][l] = cntL[dep][l-1];
        sum[dep][l] = sum[dep][l-1];
        return;
    }
    int mid = (l+r)>>1;
    int hl = 0, hr = 0, tl = 0, tr = 0;
    for (int i = head, j = l; i != -1; i = nxt[i], j++)
    {
        cntL[dep][j] = cntL[dep][j-1];
        sum[dep][j] = sum[dep][j-1];
        if (pos[i] <= mid)
        {
            nxt[tl] = i;
            tl = i;
            if (hl == 0) hl = i;
            cntL[dep][j]++;
            sum[dep][j] += seq[i];
        }
        else
        {
            nxt[tr] = i;
            tr = i;
            if (hr == 0) hr = i;
        }
    }
    nxt[tl] = -1;
    nxt[tr] = -1;
    build(l, mid, hl, dep+1);
    build(mid+1, r, hr, dep+1);
}

int query(int left, int right, int ql, int qr, int kth, int dep)
{
    if (left == right)
    {
        return ind[left];
    }
    int mid = (left+right)>>1;
    if (cntL[dep][qr] - cntL[dep][ql-1] >= kth)
    {
        return query(left, mid, left+cntL[dep][ql-1]-cntL[dep][left-1], left+cntL[dep][qr]-cntL[dep][left-1]-1, kth, dep+1);
    }
    else
    {
        sumL += sum[dep][qr]-sum[dep][ql-1];
        return query(mid+1, right, mid+1+ql-left-(cntL[dep][ql-1]-cntL[dep][left-1]), mid+qr+1-left-(cntL[dep][qr]-cntL[dep][left-1]),
                     kth-(cntL[dep][qr]-cntL[dep][ql-1]), dep+1);
    }
}

inline int cmp(int x, int y)
{
    return seq[x] < seq[y];
}

int main()
{
    int ca, t, i, middle, ql, qr, id, tot;
    LL ans;
    scanf("%d", &ca);
    for (t = 1; t <= ca; t++)
    {
        scanf("%d", &n);
        subSum[0] = 0;
        for (i = 1; i <= n; i++)
        {
            scanf("%d", seq+i);
            ind[i] = i;
            subSum[i] = subSum[i-1]+seq[i];
        }
        sort(ind+1, ind+1+n, cmp);
        for (i = 1; i <= n; i++)
        {
            pos[ind[i]] = i;
            nxt[i] = i+1;
        }
        nxt[n] = -1;
        build(1, n, 1, 0);
        printf("Case #%d:\n", t);
        scanf("%d", &m);
        while (m--)
        {
            scanf("%d%d", &ql, &qr);
            ql++, qr++;
            middle = (qr-ql+2)/2;
            sumL = 0;
            id = query(1, n, ql, qr, middle, 0);
            ans = subSum[qr]-subSum[ql-1]-sumL;
            tot = qr-ql+1;
            ans = ans-(tot-middle+1)*1ll*seq[id]+(middle-1)*1ll*seq[id]-sumL;
            printf("%lld\n", ans);
        }
        puts("");
    }
}
This is called the Range Median Query problem. The following paper might be relevant: Towards Optimal Range Medians. (Free link, thanks to belisarius).
From the abstract of the paper:
We consider the following problem: Given an unsorted array of n elements, and a sequence of intervals in the array, compute the median in each of the subarrays defined by the intervals. We describe a simple algorithm which needs O(n log k + k log n) time to answer k such median queries. This improves previous algorithms by a logarithmic factor and matches a comparison lower bound for k = O(n). The space complexity of our simple algorithm is O(n log n) in the pointer machine model, and O(n) in the RAM model. In the latter model, a more involved O(n) space data structure can be constructed in O(n log n) time where the time per query is reduced to O(log n / log log n). We also give efficient dynamic variants of both data structures, achieving O(log^2 n) query time using O(n log n) space in the comparison model and O((log n / log log n)^2) query time using O(n log n / log log n) space in the RAM model, and show that in the cell-probe model, any data structure which supports updates in O(log^O(1) n) time must have Ω(log n / log log n) query time. Our approach naturally generalizes to higher-dimensional range median problems, where element positions and query ranges are multidimensional; it reduces a range median query to a logarithmic number of range counting queries.
Of course, you could preprocess the whole array in O(n^3) time (or perhaps even O(n^2 log n) time) and O(n^2) space to be able to return the median in O(1) time.
Additional constraints might help simplify the solution. For instance, do we know that r − l will be less than a known constant? etc...
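A sketch of that quadratic-space idea (my own, returning the lower median for even-length ranges): fix each left endpoint l and sweep r rightwards, maintaining the running median with the classic two-heap trick. This gives O(n^2 log n) preprocessing, O(n^2) space, and O(1) per query.
#include <functional>
#include <queue>
#include <vector>

// med[l][r] = (lower) median of a[l..r], for all 0 <= l <= r < n
std::vector<std::vector<int>> allMedians(const std::vector<int>& a) {
    int n = a.size();
    std::vector<std::vector<int>> med(n, std::vector<int>(n));
    for (int l = 0; l < n; ++l) {
        std::priority_queue<int> lo;                                        // max-heap: lower half
        std::priority_queue<int, std::vector<int>, std::greater<int>> hi;  // min-heap: upper half
        for (int r = l; r < n; ++r) {
            if (lo.empty() || a[r] <= lo.top()) lo.push(a[r]); else hi.push(a[r]);
            if (lo.size() > hi.size() + 1) { hi.push(lo.top()); lo.pop(); }      // rebalance
            else if (hi.size() > lo.size()) { lo.push(hi.top()); hi.pop(); }
            med[l][r] = lo.top();   // invariant: lo holds the lower half, so its top is the median
        }
    }
    return med;   // any query [l, r] is then a single table lookup
}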

How to find the kth largest element in an unsorted array of length n in O(n)?

I believe there's a way to find the kth largest element in an unsorted array of length n in O(n). Or perhaps it's "expected" O(n) or something. How can we do this?
This is called finding the k-th order statistic. There's a very simple randomized algorithm (called quickselect) taking O(n) average time and O(n^2) worst case time, and a pretty complicated non-randomized algorithm (median of medians, which introselect uses as its fallback) taking O(n) worst case time. There's some info on Wikipedia, but it's not very good.
Everything you need is in these powerpoint slides. Just to extract the basic algorithm of the O(n) worst-case (median of medians) algorithm:
Select(A, n, i):
    Divide input into ⌈n/5⌉ groups of size 5.

    /* Partition on median-of-medians */
    medians = array of each group's median.
    pivot = Select(medians, ⌈n/5⌉, ⌈n/10⌉)
    Left Array L and Right Array G = partition(A, pivot)

    /* Find ith element in L, pivot, or G */
    k = |L| + 1
    If i = k, return pivot
    If i < k, return Select(L, k - 1, i)
    If i > k, return Select(G, n - k, i - k)
It's also very nicely detailed in the Introduction to Algorithms book by Cormen et al.
If you want a true O(n) algorithm, as opposed to O(kn) or something like that, then you should use quickselect (it's basically quicksort where you throw out the partition that you're not interested in). My prof has a great writeup, with the runtime analysis: (reference)
The QuickSelect algorithm quickly finds the k-th smallest element of an unsorted array of n elements. It is a RandomizedAlgorithm, so we compute the worst-case expected running time.
Here is the algorithm.
QuickSelect(A, k)
  let r be chosen uniformly at random in the range 1 to length(A)
  let pivot = A[r]
  let A1, A2 be new arrays
  # split into a pile A1 of small elements and A2 of big elements
  for i = 1 to n
    if A[i] < pivot then
      append A[i] to A1
    else if A[i] > pivot then
      append A[i] to A2
    else
      # do nothing
  end for
  if k <= length(A1):
    # it's in the pile of small elements
    return QuickSelect(A1, k)
  else if k > length(A) - length(A2)
    # it's in the pile of big elements
    return QuickSelect(A2, k - (length(A) - length(A2)))
  else
    # it's equal to the pivot
    return pivot
What is the running time of this algorithm? If the adversary flips coins for us, we may find that the pivot is always the largest element and k is always 1, giving a running time of
T(n) = Theta(n) + T(n-1) = Theta(n^2)
But if the choices are indeed random, the expected running time is given by
T(n) <= Theta(n) + (1/n) ∑_{i=1..n} T(max(i-1, n-i))
where we are making the not entirely reasonable assumption that the recursion always lands in the larger of A1 or A2.
Let's guess that T(n) <= an for some a. Then we get
T(n)
 <= cn + (1/n) ∑_{i=1..n} T(max(i-1, n-i))
 = cn + (1/n) ∑_{i=1..floor(n/2)} T(n-i) + (1/n) ∑_{i=floor(n/2)+1..n} T(i)
 <= cn + 2 (1/n) ∑_{i=floor(n/2)..n} T(i)
 <= cn + 2 (1/n) ∑_{i=floor(n/2)..n} ai
and now somehow we have to get the horrendous sum on the right of the plus sign to absorb the cn on the left. If we just bound it as 2 (1/n) ∑_{i=n/2..n} an, we get roughly 2 (1/n)(n/2) an = an. But this is too big - there's no room to squeeze in an extra cn. So let's expand the sum using the arithmetic series formula:
∑_{i=floor(n/2)..n} i
 = ∑_{i=1..n} i - ∑_{i=1..floor(n/2)} i
 = n(n+1)/2 - floor(n/2)(floor(n/2)+1)/2
 <= n^2/2 - (n/4)^2/2
 = (15/32) n^2
where we take advantage of n being "sufficiently large" to replace the ugly floor(n/2) factors with the much cleaner (and smaller) n/4. Now we can continue with
cn + 2 (1/n) ∑_{i=floor(n/2)..n} ai
 <= cn + (2a/n)(15/32) n^2
 = n (c + (15/16) a)
 <= an
provided a >= 16c.
This gives T(n) = O(n). It's clearly Omega(n), so we get T(n) = Theta(n).
A quick Google on that ('kth largest element array') returned this: http://discuss.joelonsoftware.com/default.asp?interview.11.509587.17
"Make one pass through tracking the three largest values so far."
(it was specifically for the 3rd largest)
and this answer:
Build a heap/priority queue. O(n)
Pop top element. O(log n)
Pop top element. O(log n)
Pop top element. O(log n)
Total = O(n) + 3 O(log n) = O(n)
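In C++ that heap answer is just a couple of standard calls (a small sketch of mine, not from the quoted answer; k is 1-based):
#include <algorithm>
#include <vector>

// kth largest: build a max-heap in O(n), then pop k-1 times at O(log n) each
int kthLargestViaHeap(std::vector<int> a, int k) {
    std::make_heap(a.begin(), a.end());            // O(n) heap construction
    for (int i = 0; i < k - 1; ++i)
        std::pop_heap(a.begin(), a.end() - i);     // moves the current max to the back
    return a[0];                                   // top of the remaining heap
}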
You do it like quicksort. Pick an element at random and shove everything either higher or lower. At this point you'll know which element you actually picked, and if it is the kth element you're done; otherwise you repeat with the bin (higher or lower) that the kth element would fall in. Statistically speaking, the time it takes to find the kth element grows with n: O(n).
A Programmer's Companion to Algorithm Analysis gives a version that is O(n), although the author states that the constant factor is so high, you'd probably prefer the naive sort-the-list-then-select method.
I answered the letter of your question :)
The C++ standard library has almost exactly that function call nth_element, although it does modify your data. It has expected linear run-time, O(N), and it also does a partial sort.
const int N = ...;
double a[N];
// ...
const int m = ...; // m < N
nth_element (a, a + m, a + N);
// a[m] contains the mth element in a
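If you want the kth largest rather than smallest, the same call takes a comparator (a small addition of mine; needs <functional>):
// kth largest, k 1-based, via the same partial-sort call
std::nth_element(a, a + k - 1, a + N, std::greater<double>());
// a[k-1] now contains the kth largest element in a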
You can do it in O(n + kn) = O(n) (for constant k) for time and O(k) for space, by keeping track of the k largest elements you've seen.
For each element in the array you can scan the list of k largest and replace the smallest element with the new one if it is bigger.
Warren's priority heap solution is neater though.
Although I am not very sure about O(n) complexity, it is sure to be between O(n) and O(n log n), and also surely closer to O(n) than to O(n log n). The function is written in Java:
public int quickSelect(ArrayList<Integer> list, int nthSmallest){
    //Choose a random index in the range 0 to list length - 1
    Random random = new Random();
    int pivotIndex = random.nextInt(list.size()); // nextInt(size) returns 0..size-1; nextInt(size - 1) would crash on a single-element list
    int pivot = list.get(pivotIndex);
    ArrayList<Integer> smallerNumberList = new ArrayList<Integer>();
    ArrayList<Integer> greaterNumberList = new ArrayList<Integer>();
    //Split list into two.
    //Values smaller than the pivot go to smallerNumberList
    //Values greater than the pivot go to greaterNumberList
    //Do nothing for values equal to the pivot
    for(int i=0; i<list.size(); i++){
        if(list.get(i)<pivot){
            smallerNumberList.add(list.get(i));
        }
        else if(list.get(i)>pivot){
            greaterNumberList.add(list.get(i));
        }
        else{
            //Do nothing
        }
    }
    //If smallerNumberList size is greater than nthSmallest value, the nthSmallest number must be in this list
    if(nthSmallest < smallerNumberList.size()){
        return quickSelect(smallerNumberList, nthSmallest);
    }
    //If nthSmallest is greater than [ list.size() - greaterNumberList.size() ], the nthSmallest number must be in this list
    //The step is a bit tricky. If confusing, please see the above loop once again for clarification.
    else if(nthSmallest > (list.size() - greaterNumberList.size())){
        //nthSmallest has to be changed here. [ list.size() - greaterNumberList.size() ] elements are already in
        //smallerNumberList
        nthSmallest = nthSmallest - (list.size() - greaterNumberList.size());
        return quickSelect(greaterNumberList, nthSmallest);
    }
    else{
        return pivot;
    }
}
I implemented finding the kth minimum in n unsorted elements using dynamic programming, specifically the tournament method. The execution time is O(n + k log(n)). The mechanism used is listed as one of the methods on the Wikipedia page about selection algorithms (as indicated in one of the postings above). You can read about the algorithm and also find code (Java) on my blog page Finding Kth Minimum. In addition, the logic can do partial ordering of the list: return the first K min (or max) in O(k log(n)) time.
Though the code provided finds the kth minimum, similar logic can be employed to find the kth maximum in O(k log(n)), ignoring the pre-work done to create the tournament tree.
Sexy quickselect in Python
import random

def quickselect(arr, k):
    '''
    k = 1 returns first element in ascending order.
    can be easily modified to return first element in descending order
    '''
    r = random.randrange(0, len(arr))
    a1 = [i for i in arr if i < arr[r]]  # partition: smaller than the pivot
    a2 = [i for i in arr if i > arr[r]]  # partition: greater than the pivot
    if k <= len(a1):
        return quickselect(a1, k)
    elif k > len(arr) - len(a2):
        return quickselect(a2, k - (len(arr) - len(a2)))
    else:
        return arr[r]
As per this paper, Finding the Kth largest item in a list of n items, the following algorithm will take O(n) time in the worst case.
Divide the array into n/5 lists of 5 elements each.
Find the median of each sub-array of 5 elements.
Recursively find the median of all the medians; let's call it M.
Partition the array into two sub-arrays: the first sub-array contains the elements larger than M (let's call it a1), while the other contains the elements smaller than M (call it a2).
If k <= |a1|, return selection(a1, k).
If k − 1 = |a1|, return M.
If k > |a1| + 1, return selection(a2, k − |a1| − 1).
Analysis: As suggested in the original paper:
We use the median to partition the list into two halves (the first half, if k <= n/2, and the second half otherwise). This algorithm takes time cn at the first level of recursion for some constant c, cn/2 at the next level (since we recurse in a list of size n/2), cn/4 at the third level, and so on. The total time taken is cn + cn/2 + cn/4 + .... = 2cn = O(n).
Why is the partition size taken to be 5 and not 3?
As mentioned in the original paper:
Dividing the list by 5 assures a worst-case split of 70-30. At least half of the medians are greater than the median-of-medians, hence at least half of the n/5 blocks have at least 3 elements above it, and this gives a 3n/10 split, which means the other partition is 7n/10 in the worst case. That gives T(n) = T(n/5) + T(7n/10) + O(n). Since n/5 + 7n/10 = 9n/10 < n, the worst-case running time is O(n).
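To see why that recurrence solves to O(n), guess T(n) <= an and substitute:
T(n) <= a(n/5) + a(7n/10) + cn = (9/10)an + cn <= an, whenever a >= 10c.
The essential fact is that n/5 + 7n/10 = 9n/10 < n, so the work shrinks geometrically from one recursion level to the next.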
Now I have tried to implement the above algorithm as:
public static int findKthLargestUsingMedian(Integer[] array, int k) {
    // Step 1: Divide the list into n/5 lists of 5 elements each.
    int noOfRequiredLists = (int) Math.ceil(array.length / 5.0);
    // Step 2: Find the pivotal element, aka the median of medians.
    int medianOfMedian = findMedianOfMedians(array, noOfRequiredLists);
    // Now we need two lists split using medianOfMedian as pivot. All elements in listWithGreaterNumbers
    // will be greater than medianOfMedian and listWithSmallerNumbers will have elements smaller than it.
    List<Integer> listWithGreaterNumbers = new ArrayList<>(); // elements greater than medianOfMedian
    List<Integer> listWithSmallerNumbers = new ArrayList<>(); // elements less than medianOfMedian
    for (Integer element : array) {
        if (element < medianOfMedian) {
            listWithSmallerNumbers.add(element);
        } else if (element > medianOfMedian) {
            listWithGreaterNumbers.add(element);
        }
    }
    // Next step.
    if (k <= listWithGreaterNumbers.size())
        return findKthLargestUsingMedian(listWithGreaterNumbers.toArray(new Integer[listWithGreaterNumbers.size()]), k);
    else if ((k - 1) == listWithGreaterNumbers.size())
        return medianOfMedian;
    else if (k > (listWithGreaterNumbers.size() + 1))
        return findKthLargestUsingMedian(listWithSmallerNumbers.toArray(new Integer[listWithSmallerNumbers.size()]), k - listWithGreaterNumbers.size() - 1);
    return -1;
}

public static int findMedianOfMedians(Integer[] mainList, int noOfRequiredLists) {
    int[] medians = new int[noOfRequiredLists];
    for (int count = 0; count < noOfRequiredLists; count++) {
        int startOfPartialArray = 5 * count;
        int endOfPartialArray = Math.min(startOfPartialArray + 5, mainList.length); // don't read past the end
        Integer[] partialArray = Arrays.copyOfRange(mainList, startOfPartialArray, endOfPartialArray);
        // Step 2: Find the median of each of these sublists (the group must be sorted first).
        Arrays.sort(partialArray);
        int medianIndex = partialArray.length / 2;
        medians[count] = partialArray[medianIndex];
    }
    // Step 3: Find the median of the medians.
    Arrays.sort(medians);
    return medians[medians.length / 2];
}
Just for the sake of completeness, another algorithm makes use of a priority queue and takes O(n log n) time.
public static int findKthLargestUsingPriorityQueue(Integer[] nums, int k) {
    int p = 0;
    int numElements = nums.length;
    // create a min-oriented priority queue holding all the elements of nums
    PriorityQueue<Integer> pq = new PriorityQueue<Integer>();
    // place all the elements of the array into this priority queue
    for (int n : nums) {
        pq.add(n);
    }
    // poll numElements - k + 1 times; the last polled element is the kth largest
    while (numElements - k + 1 > 0) {
        p = pq.poll();
        k++;
    }
    return p;
}
Both of these algorithms can be tested as:
public static void main(String[] args) throws IOException {
    Integer[] numbers = new Integer[]{2, 3, 5, 4, 1, 12, 11, 13, 16, 7, 8, 6, 10, 9, 17, 15, 19, 20, 18, 23, 21, 22, 25, 24, 14};
    System.out.println(findKthLargestUsingMedian(numbers, 8));
    System.out.println(findKthLargestUsingPriorityQueue(numbers, 8));
}
As expected output is:
18
18
Find the median of the array in linear time, then use the partition procedure exactly as in quicksort to divide the array into two parts, values to the left of the median less than (<) the median and to the right greater than (>) the median; that too can be done in linear time. Now go to the part of the array where the kth element lies.
The recurrence becomes:
T(n) = T(n/2) + cn
which expands to cn + cn/2 + cn/4 + ... <= 2cn, giving O(n) overall.
Below is a link to a full implementation, with quite an extensive explanation of how the algorithm for finding the Kth element in an unsorted array works. The basic idea is to partition the array like in QuickSort. But in order to avoid extreme cases (e.g. when the smallest element is chosen as the pivot in every step, so that the algorithm degenerates into O(n^2) running time), special pivot selection is applied, called the median-of-medians algorithm. The whole solution runs in O(n) time in the worst and in the average case.
Here is a link to the full article (it is about finding the Kth smallest element, but the principle is the same for finding the Kth largest):
Finding Kth Smallest Element in an Unsorted Array
How about this kind of approach:
Maintain a buffer of length k and a tmp_max; getting tmp_max is O(k) and is done n times, so something like O(kn).
Is that right, or am I missing something?
Although it doesn't beat the average case of quickselect or the worst case of the median-statistics method, it's pretty easy to understand and implement.
There is also an algorithm that outperforms the quickselect algorithm. It's called the Floyd-Rivest (FR) algorithm.
Original article: https://doi.org/10.1145/360680.360694
Downloadable version: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.309.7108&rep=rep1&type=pdf
Wikipedia article: https://en.wikipedia.org/wiki/Floyd%E2%80%93Rivest_algorithm
I tried to implement quickselect and the FR algorithm in C++, and compared them to the standard C++ library implementation std::nth_element (which is basically an introselect hybrid of quickselect and heapselect). The result: quickselect and nth_element ran comparably on average, but the FR algorithm ran approximately twice as fast.
Sample code that I used for the FR algorithm:
#include <cmath>
#include <vector>
#include <algorithm>

template <typename T> int sgn(T val);
template <typename T> T _FRselect(std::vector<T>& data, const size_t& left, const size_t& right, const size_t& n);

template <typename T>
T FRselect(std::vector<T>& data, const size_t& n)
{
    if (n == 0)
        return *(std::min_element(data.begin(), data.end()));
    else if (n == data.size() - 1)
        return *(std::max_element(data.begin(), data.end()));
    else
        return _FRselect(data, 0, data.size() - 1, n);
}

template <typename T>
T _FRselect(std::vector<T>& data, const size_t& left, const size_t& right, const size_t& n)
{
    size_t leftIdx = left;
    size_t rightIdx = right;

    while (rightIdx > leftIdx)
    {
        if (rightIdx - leftIdx > 600)
        {
            size_t range = rightIdx - leftIdx + 1;
            long long i = n - (long long)leftIdx + 1;
            long long z = log(range);
            long long s = 0.5 * exp(2 * z / 3);
            long long sd = 0.5 * sqrt(z * s * (range - s) / range) * sgn(i - (long long)range / 2);

            size_t newLeft = fmax(leftIdx, n - i * s / range + sd);
            size_t newRight = fmin(rightIdx, n + (range - i) * s / range + sd);

            _FRselect(data, newLeft, newRight, n);
        }
        T t = data[n];
        size_t i = leftIdx;
        size_t j = rightIdx;
        // arrange pivot and right index
        std::swap(data[leftIdx], data[n]);
        if (data[rightIdx] > t)
            std::swap(data[rightIdx], data[leftIdx]);

        while (i < j)
        {
            std::swap(data[i], data[j]);
            ++i; --j;
            while (data[i] < t) ++i;
            while (data[j] > t) --j;
        }

        if (data[leftIdx] == t)
            std::swap(data[leftIdx], data[j]);
        else
        {
            ++j;
            std::swap(data[j], data[rightIdx]);
        }
        // adjust left and right towards the boundaries of the subset
        // containing the (k - left + 1)th smallest element
        if (j <= n)
            leftIdx = j + 1;
        if (n <= j)
            rightIdx = j - 1;
    }

    return data[leftIdx];
}

template <typename T>
int sgn(T val) {
    return (T(0) < val) - (val < T(0));
}
Iterate through the list. If the current value is larger than the stored largest value, store it as the largest value and bump values 1-4 down one slot, so that 5 drops off the list. If not, compare it to number 2 and do the same thing. Repeat, checking it against all 5 stored values. This should do it in O(n).
I would like to suggest one answer:
If we take the first k elements and sort them into a linked list of k values, then for every other value, even in the worst case, if we do insertion sort for the remaining n-k values, the number of comparisons will be k*(n-k), and for the previous k values to be sorted, let it be k*(k-1); so it comes out to be (nk − k), which is O(nk): O(n) for constant k.
Cheers!
An explanation of the median-of-medians algorithm for finding the k-th largest integer out of n can be found here:
http://cs.indstate.edu/~spitla/presentation.pdf
An implementation in C++ is below:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int findMedian(vector<int> vec) {
    // Find the median of a vector: sort the (copied) group first,
    // otherwise the middle element is not the median
    sort(vec.begin(), vec.end());
    return vec[vec.size() / 2];
}

int findMedianOfMedians(vector<vector<int> > values) {
    vector<int> medians;
    for (int i = 0; i < values.size(); i++) {
        int m = findMedian(values[i]);
        medians.push_back(m);
    }
    return findMedian(medians);
}

void selectionByMedianOfMedians(const vector<int> values, int k) {
    // Divide the list into n/5 lists of 5 elements each
    vector<vector<int> > vec2D;
    int count = 0;
    while (count != values.size()) {
        int countRow = 0;
        vector<int> row;
        while ((countRow < 5) && (count < values.size())) {
            row.push_back(values[count]);
            count++;
            countRow++;
        }
        vec2D.push_back(row);
    }

    cout << endl << endl << "Printing 2D vector : " << endl;
    for (int i = 0; i < vec2D.size(); i++) {
        for (int j = 0; j < vec2D[i].size(); j++) {
            cout << vec2D[i][j] << " ";
        }
        cout << endl;
    }
    cout << endl;

    // Calculating a new pivot for making splits
    int m = findMedianOfMedians(vec2D);
    cout << "Median of medians is : " << m << endl;

    // Partition the list into elements larger than 'm' (call this sublist L1) and
    // those smaller than 'm' (call this sublist L2)
    vector<int> L1, L2;
    for (int i = 0; i < vec2D.size(); i++) {
        for (int j = 0; j < vec2D[i].size(); j++) {
            if (vec2D[i][j] > m) {
                L1.push_back(vec2D[i][j]);
            } else if (vec2D[i][j] < m) {
                L2.push_back(vec2D[i][j]);
            }
        }
    }

    // Checking the splits as per the new pivot 'm'
    cout << endl << "Printing L1 : " << endl;
    for (int i = 0; i < L1.size(); i++) {
        cout << L1[i] << " ";
    }
    cout << endl << endl << "Printing L2 : " << endl;
    for (int i = 0; i < L2.size(); i++) {
        cout << L2[i] << " ";
    }

    // Recursive calls
    if ((k - 1) == L1.size()) {
        cout << endl << endl << "Answer :" << m;
    } else if (k <= L1.size()) {
        return selectionByMedianOfMedians(L1, k);
    } else if (k > (L1.size() + 1)) {
        return selectionByMedianOfMedians(L2, k - ((int)L1.size()) - 1);
    }
}

int main()
{
    int values[] = {2, 3, 5, 4, 1, 12, 11, 13, 16, 7, 8, 6, 10, 9, 17, 15, 19, 20, 18, 23, 21, 22, 25, 24, 14};
    vector<int> vec(values, values + 25);
    cout << "The given array is : " << endl;
    for (int i = 0; i < vec.size(); i++) {
        cout << vec[i] << " ";
    }
    selectionByMedianOfMedians(vec, 8);
    return 0;
}
There is also Wirth's selection algorithm, which has a simpler implementation than QuickSelect. Wirth's selection algorithm is slower than QuickSelect, but with some improvements it becomes faster.
In more detail: using Vladimir Zabrodsky's MODIFIND optimization and the median-of-3 pivot selection, and paying some attention to the final steps of the partitioning part of the algorithm, I've come up with the following algorithm (imaginably named "LefSelect"):
#define F_SWAP(a,b) { float temp=(a);(a)=(b);(b)=temp; }

// Note: The code needs more than 2 elements to work
float lefselect(float a[], const int n, const int k) {
    int l = 0, m = n - 1, i = l, j = m;
    float x;
    while (l < m) {
        if (a[k] < a[i]) F_SWAP(a[i], a[k]);
        if (a[j] < a[i]) F_SWAP(a[i], a[j]);
        if (a[j] < a[k]) F_SWAP(a[k], a[j]);

        x = a[k];
        while (j > k && i < k) {
            do i++; while (a[i] < x);
            do j--; while (a[j] > x);

            F_SWAP(a[i], a[j]);
        }
        i++; j--;

        if (j < k) {
            while (a[i] < x) i++;
            l = i; j = m;
        }
        if (k < i) {
            while (x < a[j]) j--;
            m = j; i = l;
        }
    }
    return a[k];
}
In benchmarks that I did here, LefSelect is 20-30% faster than QuickSelect.
Haskell Solution:
kthElem index list = sort list !! index

withShape ~[] [] = []
withShape ~(x:xs) (y:ys) = x : withShape xs ys

sort [] = []
sort (x:xs) = (sort ls `withShape` ls) ++ [x] ++ (sort rs `withShape` rs)
  where
    ls = filter (< x) xs
    rs = filter (>= x) xs
This implements the median-of-medians solution by using the withShape method to discover the size of a partition without actually computing it.
Here is a C++ implementation of Randomized QuickSelect. The idea is to randomly pick a pivot element. To implement randomized partition, we use a random function, rand() to generate index between l and r, swap the element at randomly generated index with the last element, and finally call the standard partition process which uses last element as pivot.
#include <iostream>
#include <climits>
#include <cstdlib>
using namespace std;

int randomPartition(int arr[], int l, int r);

// This function returns k'th smallest element in arr[l..r] using
// QuickSort based method. ASSUMPTION: ALL ELEMENTS IN ARR[] ARE DISTINCT
int kthSmallest(int arr[], int l, int r, int k)
{
    // If k is smaller than number of elements in array
    if (k > 0 && k <= r - l + 1)
    {
        // Partition the array around a random element and
        // get position of pivot element in sorted array
        int pos = randomPartition(arr, l, r);

        // If position is same as k
        if (pos-l == k-1)
            return arr[pos];
        if (pos-l > k-1)  // If position is more, recur for left subarray
            return kthSmallest(arr, l, pos-1, k);

        // Else recur for right subarray
        return kthSmallest(arr, pos+1, r, k-pos+l-1);
    }

    // If k is more than number of elements in array
    return INT_MAX;
}

void swap(int *a, int *b)
{
    int temp = *a;
    *a = *b;
    *b = temp;
}

// Standard partition process of QuickSort(). It considers the last
// element as pivot and moves all smaller elements to its left and
// greater elements to its right. This function is used by randomPartition()
int partition(int arr[], int l, int r)
{
    int x = arr[r], i = l;
    for (int j = l; j <= r - 1; j++)
    {
        if (arr[j] <= x)  // arr[j] belongs to the left of the pivot, so swap it into place
        {
            swap(&arr[i], &arr[j]);
            i++;
        }
    }
    swap(&arr[i], &arr[r]);  // swap the pivot into its final position
    return i;
}

// Picks a random pivot element between l and r and partitions
// arr[l..r] around the randomly picked element using partition()
int randomPartition(int arr[], int l, int r)
{
    int n = r-l+1;
    int pivot = rand() % n;
    swap(&arr[l + pivot], &arr[r]);
    return partition(arr, l, r);
}

// Driver program to test above methods
int main()
{
    int arr[] = {12, 3, 5, 7, 4, 19, 26};
    int n = sizeof(arr)/sizeof(arr[0]), k = 3;
    cout << "K'th smallest element is " << kthSmallest(arr, 0, n-1, k);
    return 0;
}
The worst case time complexity of the above solution is still O(n^2): in the worst case, the randomized function may always pick a corner element. The expected time complexity of the above randomized QuickSelect, however, is Θ(n).
Have a priority queue created.
Insert all the elements into the heap.
Call poll() k times.
public static int getKthLargestElements(int[] arr, int k)
{
    PriorityQueue<Integer> pq = new PriorityQueue<>((x, y) -> (y - x));  // max-heap
    // insert all the elements into the heap
    for (int ele : arr)
        pq.offer(ele);
    // call poll() k times; the kth poll returns the kth largest
    int result = 0;
    for (int i = 0; i < k; i++)
        result = pq.poll();
    return result;
}
This is an implementation in Javascript.
If you release the constraint that you cannot modify the array, you can prevent the use of extra memory using two indexes to identify the "current partition" (in classic quicksort style - http://www.nczonline.net/blog/2012/11/27/computer-science-in-javascript-quicksort/).
function kthMax(a, k) {
    var size = a.length;
    var pivot = a[parseInt(Math.random() * size)]; // Another choice could have been (size / 2)

    // Create an array with all elements lower than the pivot and an array with all elements higher than the pivot
    var i, lowerArray = [], upperArray = [];
    for (i = 0; i < size; i++) {
        var current = a[i];
        if (current < pivot) {
            lowerArray.push(current);
        } else if (current > pivot) {
            upperArray.push(current);
        }
    }

    // Which one should I continue with?
    if (k <= upperArray.length) {
        // Upper
        return kthMax(upperArray, k);
    } else {
        var newK = k - (size - lowerArray.length);
        if (newK > 0) {
            // Lower
            return kthMax(lowerArray, newK);
        } else {
            // None ... it's the current pivot!
            return pivot;
        }
    }
}
If you want to test how it performs, you can use this variation:
function kthMax(a, k, logging) {
    var comparisonCount = 0; // Number of comparisons that the algorithm uses
    var memoryCount = 0;     // Number of integers in memory that the algorithm uses
    var _log = logging;

    if (k < 0 || k >= a.length) {
        if (_log) console.log("k is out of range");
        return false;
    }

    function _kthmax(a, k) {
        var size = a.length;
        var pivot = a[parseInt(Math.random() * size)];
        if (_log) console.log("Inputs:", a, "size=" + size, "k=" + k, "pivot=" + pivot);

        // This should never happen. Just a nice check in this exercise
        // if you are playing with the code to avoid never ending recursion
        if (typeof pivot === "undefined") {
            if (_log) console.log("Ops...");
            return false;
        }

        var i, lowerArray = [], upperArray = [];
        for (i = 0; i < size; i++) {
            var current = a[i];
            if (current < pivot) {
                comparisonCount += 1;
                memoryCount++;
                lowerArray.push(current);
            } else if (current > pivot) {
                comparisonCount += 2;
                memoryCount++;
                upperArray.push(current);
            }
        }
        if (_log) console.log("Pivoting:", lowerArray, "*" + pivot + "*", upperArray);

        if (k <= upperArray.length) {
            comparisonCount += 1;
            return _kthmax(upperArray, k);
        } else if (k > size - lowerArray.length) {
            comparisonCount += 2;
            return _kthmax(lowerArray, k - (size - lowerArray.length));
        } else {
            comparisonCount += 2;
            return pivot;
        }
        /*
         * BTW, this is the logic for kthMin if we want to implement that... ;-)
         *
        if (k <= lowerArray.length) {
            return kthMin(lowerArray, k);
        } else if (k > size - upperArray.length) {
            return kthMin(upperArray, k - (size - upperArray.length));
        } else
            return pivot;
        */
    }

    var result = _kthmax(a, k);
    return {result: result, iterations: comparisonCount, memory: memoryCount};
}
The rest of the code is just to create some playground:
function getRandomArray(n) {
    var ar = [];
    for (var i = 0, l = n; i < l; i++) {
        ar.push(Math.round(Math.random() * l));
    }
    return ar;
}

//Create a random array of 50 numbers
var ar = getRandomArray(50);
Now, run your tests a few times.
Because of Math.random() it will produce different results every time:
kthMax(ar, 2, true);
kthMax(ar, 2);
kthMax(ar, 2);
kthMax(ar, 2);
kthMax(ar, 2);
kthMax(ar, 2);
kthMax(ar, 34, true);
kthMax(ar, 34);
kthMax(ar, 34);
kthMax(ar, 34);
kthMax(ar, 34);
kthMax(ar, 34);
If you test it a few times you can see even empirically that the number of iterations is, on average, O(n) ~= constant * n and the value of k does not affect the algorithm.
I came up with this algorithm and it seems to be O(n):
Let's say k=3 and we want to find the 3rd largest item in the array. I would create three variables and compare each item of the array with the minimum of these three variables. If the array item is greater than our minimum, we would replace the min variable with the item value. We continue the same thing until the end of the array. The minimum of our three variables is the 3rd largest item in the array.
define variables a=0, b=0, c=0
iterate through the array items
    find minimum of a,b,c
    if item > min then replace the min variable with item value
    continue until end of array
the minimum of a,b,c is our answer
And, to find the Kth largest item we need K variables.
Example: (k=3)
[1,2,4,1,7,3,9,5,6,2,9,8]
Final variable values:
a=8 (answer)
b=9
c=9
Can someone please review this and let me know what I am missing?
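The bookkeeping above works out; a compact sketch of the same idea (my own illustration) replaces the K variables with a size-k min-heap, so the "find the minimum of the K variables" step costs O(log k) instead of O(k):
#include <queue>
#include <vector>

// kth largest: keep the k largest values seen so far in a min-heap;
// the heap top plays the role of "the minimum of the k variables"
int kthLargestTracking(const std::vector<int>& a, int k) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> window;
    for (int x : a) {
        if ((int)window.size() < k) window.push(x);
        else if (x > window.top()) { window.pop(); window.push(x); }  // replace the minimum
    }
    return window.top();  // minimum of the k largest = kth largest
}
For the example above, kthLargestTracking({1,2,4,1,7,3,9,5,6,2,9,8}, 3) returns 8.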
Here is the implementation of the algorithm eladv suggested (I also put here the implementation with a random pivot):
import java.util.Arrays;

public class Median {
    public static void main(String[] s) {
        int[] test = {4,18,20,3,7,13,5,8,2,1,15,17,25,30,16};
        System.out.println(selectK(test, 8));

        /*
        int n = 100000000;
        int[] test = new int[n];
        for (int i = 0; i < test.length; i++)
            test[i] = (int) (Math.random() * test.length);

        long start = System.currentTimeMillis();
        random_selectK(test, test.length / 2);
        long end = System.currentTimeMillis();
        System.out.println(end - start);
        */
    }

    public static int random_selectK(int[] a, int k) {
        if (a.length <= 1)
            return a[0];

        int r = (int) (Math.random() * a.length);
        int p = a[r];

        int small = 0, equal = 0, big = 0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] < p) small++;
            else if (a[i] == p) equal++;
            else if (a[i] > p) big++;
        }

        if (k <= small) {
            int[] temp = new int[small];
            for (int i = 0, j = 0; i < a.length; i++)
                if (a[i] < p)
                    temp[j++] = a[i];
            return random_selectK(temp, k);
        }
        else if (k <= small + equal)
            return p;
        else {
            int[] temp = new int[big];
            for (int i = 0, j = 0; i < a.length; i++)
                if (a[i] > p)
                    temp[j++] = a[i];
            return random_selectK(temp, k - small - equal);
        }
    }

    public static int selectK(int[] a, int k) {
        if (a.length <= 5) {
            Arrays.sort(a);
            return a[k - 1];
        }

        int p = median_of_medians(a);

        int small = 0, equal = 0, big = 0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] < p) small++;
            else if (a[i] == p) equal++;
            else if (a[i] > p) big++;
        }

        if (k <= small) {
            int[] temp = new int[small];
            for (int i = 0, j = 0; i < a.length; i++)
                if (a[i] < p)
                    temp[j++] = a[i];
            return selectK(temp, k);
        }
        else if (k <= small + equal)
            return p;
        else {
            int[] temp = new int[big];
            for (int i = 0, j = 0; i < a.length; i++)
                if (a[i] > p)
                    temp[j++] = a[i];
            return selectK(temp, k - small - equal);
        }
    }

    private static int median_of_medians(int[] a) {
        int[] b = new int[a.length / 5];
        int[] temp = new int[5];
        for (int i = 0; i < b.length; i++) {
            for (int j = 0; j < 5; j++)
                temp[j] = a[5 * i + j];
            Arrays.sort(temp);
            b[i] = temp[2];
        }
        return selectK(b, b.length / 2 + 1);
    }
}
It is similar to the quickSort strategy, where we pick an arbitrary pivot and bring the smaller elements to its left and the larger to the right.
public static int kthElInUnsortedList(List<int> list, int k)
{
    if (list.Count == 1)
        return list[0];

    List<int> left = new List<int>();
    List<int> right = new List<int>();

    int pivotIndex = list.Count / 2;
    int pivot = list[pivotIndex]; // arbitrary

    for (int i = 0; i < list.Count; i++)
    {
        if (i == pivotIndex) // skip the pivot itself
            continue;
        int currentEl = list[i];
        if (currentEl < pivot)
            left.Add(currentEl);
        else
            right.Add(currentEl);
    }

    if (k == left.Count + 1)
        return pivot;

    if (left.Count < k)
        return kthElInUnsortedList(right, k - left.Count - 1);
    else
        return kthElInUnsortedList(left, k);
}
Go to the end of this link:
http://www.geeksforgeeks.org/kth-smallestlargest-element-unsorted-array-set-3-worst-case-linear-time/
You can find the kth smallest element in O(n log D) time (where D = max_value − min_value, since each of the O(log D) binary-search steps scans the array once) and constant extra space, if we consider the array to be integers only.
The approach is to do a binary search on the range of array values: we have a min_val and a max_val, both in integer range, and we can binary search on that range.
We can write a comparator function which will tell us whether any value is the kth smallest, smaller than the kth smallest, or bigger than the kth smallest.
Do the binary search until you reach the kth smallest number.
Here is the code for that:
class Solution:
    def _iskthsmallest(self, A, val, k):
        less_count, equal_count = 0, 0
        for i in range(len(A)):
            if A[i] == val: equal_count += 1
            if A[i] < val: less_count += 1
        if less_count >= k: return 1
        if less_count + equal_count < k: return -1
        return 0

    def kthsmallest_binary(self, A, min_val, max_val, k):
        if min_val == max_val:
            return min_val
        mid = (min_val + max_val) // 2   # integer midpoint
        iskthsmallest = self._iskthsmallest(A, mid, k)
        if iskthsmallest == 0: return mid
        if iskthsmallest > 0: return self.kthsmallest_binary(A, min_val, mid, k)
        return self.kthsmallest_binary(A, mid+1, max_val, k)

    # @param A : tuple of integers
    # @param k : integer
    # @return an integer
    def kthsmallest(self, A, k):
        if not A: return 0
        if k > len(A): return 0
        min_val, max_val = min(A), max(A)
        return self.kthsmallest_binary(A, min_val, max_val, k)
What I would do is this:
initialize empty doubly linked list l
for each element e in array
    if e larger than head(l)
        make e the new head of l
    if size(l) > k
        remove last element from l
the last element of l should now be the kth largest element
You can simply store pointers to the first and last element in the linked list. They only change when updates to the list are made.
Update:
initialize empty sorted tree l
for each element e in array
    if e between head(l) and tail(l)
        insert e into l // O(log k)
    if size(l) > k
        remove last element from l
the last element of l should now be the kth largest element
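A sketch of that updated version, with std::multiset standing in for the sorted tree (my choice of container); inserting unconditionally and then trimming back to k elements is equivalent to the guarded insert above:
#include <set>
#include <vector>

// keep a sorted tree of the k largest elements seen so far
int kthLargestSortedTree(const std::vector<int>& a, int k) {
    std::multiset<int> tree;
    for (int x : a) {
        tree.insert(x);                  // O(log k): the tree never exceeds k+1 elements
        if ((int)tree.size() > k)
            tree.erase(tree.begin());    // drop the smallest element
    }
    return *tree.begin();                // smallest of the k largest = kth largest
}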
First we can build a BST from the unsorted array, which takes O(n log n) time, and from a BST augmented with subtree sizes we can find the kth smallest element in O(log n), which overall counts to an order of O(n log n).
