Looking for an algorithm to calculate h-index fast

http://en.wikipedia.org/wiki/H-index
This wiki page defines the h-index.
Basically, if I have the array [0 3 4 7 8 9 10], my h-index is 4, since I have 4 numbers bigger than 4; it would have been 5 if I had 5 numbers bigger than 5, and so on. Given an array of integers greater than or equal to 0, what are the ways of calculating the h-index efficiently?
edit: the array is not necessarily sorted

Here is my O(N) implementation with tabulation; it is simple and blazing fast:
private static int GetHIndex(int[] m)
{
    int[] s = new int[m.Length + 1];
    for (int i = 0; i < m.Length; i++) s[Math.Min(m.Length, m[i])]++;

    int sum = 0;
    for (int i = s.Length - 1; i >= 0; i--)
    {
        sum += s[i];
        if (sum >= i)
            return i;
    }

    return 0;
}

This can be done in O(n) time.
Find the median of the array.
If the median > (n-1)/2, then the answer comes before the median; find it iteratively in that part.
If the median < (n-1)/2, then the answer comes after the median; find it iteratively in that part.
If the median == (n-1)/2, then the median is the solution.
Here I am assuming that n is odd; for even n, change the algorithm slightly (use n/2 in place of (n-1)/2, taking the rank of the median to be n/2). Also, finding the actual median in O(n) time is complicated, so use a good pivot instead (as in quicksort).
Complexity: n + n/2 + n/4 + ... = O(n)
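For reference, here is a minimal Java sketch of the same monotonicity idea in a simpler O(n log n) form: instead of recursing on a shrinking half as described above, it binary-searches the answer h directly and recounts the whole array on each probe (the method name is illustrative):

static int hIndexByBinarySearch(int[] m) {
    // count(>= h) is non-increasing in h; the h-index is the largest h with count(>= h) >= h
    int lo = 0, hi = m.length;            // h always lies in [0, n]
    while (lo < hi) {
        int mid = lo + (hi - lo + 1) / 2; // candidate h, biased upward so the loop terminates
        int count = 0;
        for (int v : m) if (v >= mid) count++;
        if (count >= mid) lo = mid;       // mid is achievable, try a larger h
        else hi = mid - 1;                // mid is not achievable, try a smaller h
    }
    return lo;
}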

Answer in C#, but easily convertible to Java as well.
public int HIndex(int[] citations) {
    Array.Sort(citations);
    var currentCount = 0;
    var length = citations.Length;
    for (var i = citations.Length - 1; i >= 0; i--)
    {
        currentCount = length - i;
        // if the count of items to the right is larger than the current value,
        // that is the max we can expect for the h-index
        if (currentCount - 1 >= citations[i])
        {
            return currentCount - 1;
        }
    }
    return currentCount;
}

This is one solution I could think of; I'm not sure if it's the best.
Sort the array in ascending order: complexity O(n log n).
Iterate through the array from index 0 to n: complexity O(n).
For each iteration, suppose the index is i:
if (arr[i] == arr.length - (i + 1))
    return arr[i]
e.g.,
arr = [0 3 4 7 8 9 10]
arr[2] = 4
i = 2
arr.length = 7
4 == 7 - (2 + 1)

This is O(n log n) time because of the sort, but short and concise.
public static int hindex(int[] array) {
    Arrays.sort(array);
    int n = array.length;
    int pos = 0;
    // advance while array[pos] is too small to support an h-index of n - pos
    while (pos < n && array[pos] < n - pos) {
        pos++;
    }
    return n - pos;   // the remaining n - pos values are all >= n - pos
}

Let n = size of the array.
Sort the array; then h-index = max(min(f(i), i)) for i = 1..n, where f(i) is the i-th largest citation count.
Since the h-index can never exceed n, replace all numbers in the array greater than n with n. Now use counting sort to sort the array.
Time complexity: O(n).
Space complexity: O(n).
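As a hedged illustration of the max(min(f(i), i)) formula from this answer, here is a minimal Java sketch that uses a plain sort instead of the counting sort (so O(n log n); the method name is illustrative):

static int hIndexByFormula(int[] citations) {
    int[] a = citations.clone();
    java.util.Arrays.sort(a);                    // ascending
    int n = a.length, h = 0;
    for (int i = 1; i <= n; i++) {
        h = Math.max(h, Math.min(a[n - i], i));  // a[n - i] is the i-th largest value f(i)
    }
    return h;
}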

I was not happy with my previous implementation, so I replaced it with a faster solution written in Java.
public int hIndex(int[] citations) {
    if (citations == null || citations.length == 0)
    {
        return 0;
    }
    Arrays.sort(citations);
    int hIndex = 0;
    for (int i = 0; i < citations.length; i++)
    {
        int hNew;
        if (citations[i] < citations.length - i)
        {
            hNew = citations[i];
            if (hNew > hIndex)
            {
                hIndex = hNew;
            }
        }
        else if (citations[i] >= citations.length - i)
        {
            hNew = citations.length - i;
            if (hNew > hIndex)
            {
                hIndex = hNew;
            }
            break;
        }
    }
    return hIndex;
}


My solution only uses one for loop, so why is its performance low?

The Question:
Write a function:
class Solution {
public int solution(int[] A) {...}
}
that, given an array A of N integers, returns the smallest positive integer (greater than 0) that does not occur in A.
For example:
Given A = [1, 3, 6, 4, 1, 2], the function should return 5.
Given A = [1, 2, 3], the function should return 4.
Given A = [−1, −3], the function should return 1.
Assume that:
N is an integer within the range [1..100,000]; each element of array A is an integer within the range [−1,000,000..1,000,000].
Complexity:
expected worst-case time complexity is O(N); expected worst-case space complexity is O(N) (not counting the storage required for input arguments).
May I know why my answer gets such a low score?
My solution below:
public static int solution(int[] A) {
    int returnInt = 1;
    int maxInt = 0;
    if (A.length == 0)
        return returnInt;
    for (int i = 0; i < A.length; i++)
    {
        if (A[i] > maxInt)
            maxInt = A[i];
    }
    if (maxInt < returnInt)
        return returnInt;
    return maxInt % 2 == 0
        ? maxInt - 1
        : maxInt + 1;
}
The solution has only one for loop; I do not understand why I got a very low score.
You can use a HashSet<int> exists to store all positive items of A; then check whether each number 1..exists.Count is in exists.
C# code:
public static int solution(int[] A) {
    if (A is null || A.Length <= 0)
        return 1;
    var exists = new HashSet<int>();
    foreach (int item in A)
        if (item > 0)
            exists.Add(item);
    for (int i = 1; i <= exists.Count; ++i)
        if (!exists.Contains(i))
            return i;
    return exists.Count + 1;
}
In the worst case we have:
Time complexity: O(n), provided that we have a good hash function: the foreach loop is O(n) (adding to a hash set is O(1)), and the for (int i = 1; i <= exists.Count; ++i) loop is O(n) as well (Contains is O(1) for a hash set).
Space complexity: O(n) (the hash set).
If we can accept a slightly worse time complexity - O(n * log(n)) - we can get O(1) space complexity:
C# code:
public static int solution(int[] A) {
    if (A is null || A.Length <= 0)
        return 1;
    Array.Sort(A);
    // skip non-positives and duplicates; report the first gap in the run of positives
    for (int i = 0, prior = 0; i < A.Length; prior = Math.Clamp(A[i++], 0, A.Length))
        if (A[i] > prior && A[i] != prior + 1)   // A[i] > prior ignores duplicates of the prior value
            return prior + 1;
    return Math.Clamp(A[A.Length - 1] + 1, 1, A.Length + 1);   // the answer can be as large as N + 1
}
OP's performance is "low" certainly because the code produces wrong answers.
return maxInt % 2 == 0 ? maxInt - 1 : maxInt + 1; makes little sense for this problem.
Simplify algorithm.
given an array A of N integers, returns the smallest positive integer (greater than 0) that does not occur in A.
Recognize that among the values [1...N+1], there must be at least 1 value not in A[], since A[] has at most N different values (pigeonhole principle).
Cost O(N) time, O(N) more space solution, no hash table, no BST:
Form an array B[N+1] of T/F values - set all to false. Index this array [1...N+1]. Cost O(N) time, O(N) more space.
Walk array A. For each A[i], test if A[i] <= N (and A[i] >= 1). If A[i] in range, set B[A[i]] = true. Cost O(N) time.
Walk array B. Find the first B[i] that is false, i is the answer. Cost O(N) time.
Sample C code:
#include <stddef.h>
#include <string.h>

size_t LowestMissingPositive(size_t N, const int A[N]) {
    unsigned char Used[N + 1];      // Used[i] covers the value i + 1
    memset(Used, 0, sizeof Used);
    for (size_t i = 0; i < N; i++) {
        if (A[i] >= 1 && (unsigned) A[i] <= N) {
            Used[A[i] - 1] = 1;
        }
    }
    for (size_t i = 0; i <= N; i++) {
        if (!Used[i]) {
            return i + 1;
        }
    }
    // Code never expected to get here.
    return N + 1;
}
Note: "each element of array A is an integer within the range [−1,000,000..1,000,000]" is not really an important stipulation other than the type of A[] needs to handle the range. E.g. at least a 21-bit wide integer type.
Create a list L with all integers from A which are bigger than 0: O(n).
Sort L: O(n lg(n)).
If L is empty, or L[0] is not 1, return 1.
Iterate through L, skipping duplicates; if the value at (1-based) position i is not i, return i: O(n).
Total complexity = O(n + n lg(n)) = O(n lg(n)). (A sketch of this approach follows.)
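A minimal Java sketch of this sort-based approach, with the duplicate handling made explicit (the method name is illustrative):

static int firstMissingPositive(int[] A) {
    int[] a = A.clone();
    java.util.Arrays.sort(a);            // O(n lg(n))
    int expected = 1;                    // the next positive value we hope to see
    for (int v : a) {
        if (v < expected) continue;      // skips negatives, zeros, and duplicates
        if (v == expected) expected++;   // found it; look for the next one
        else return expected;            // gap: 'expected' does not occur
    }
    return expected;
}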

Looking for largest sum inside array

I have a given array [-2 -3 4 -1 -2 1 5 -3], so the largest sum would be 7 (the numbers from the 3rd to the 7th position). This array is just a simple example; the program should take the elements and the length of the array as user input.
My question is: how do I determine which sum is the largest?
I computed the sum of all numbers and the sum of only the positive numbers; the positive sum is bigger (10), but it skips the -1 and -2 after that 3rd position because of the if statement, so that solution is not correct.
I assume your question is to find the contiguous subarray (containing at least one number) which has the largest sum. Otherwise, the problem is pretty trivial: you can just pick all the positive numbers.
There are 3 solutions that are better than the O(N^2) brute force solution, where N is the length of the input array.
Dynamic programming. O(N) runtime, O(N) space
Since the subarray contains at least one number, we know that there are only N possible candidates: the subarrays that end at A[0], A[1], ..., A[N - 1].
For the subarray that ends at A[i], we have the following optimal substructure:
maxSum[i] = max of {maxSum[i - 1] + A[i], A[i]};
class Solution {
    public int maxSubArray(int[] nums) {
        int max = Integer.MIN_VALUE;
        if(nums == null || nums.length == 0) {
            return max;
        }
        int[] maxSum = new int[nums.length + 1];
        for(int i = 1; i < maxSum.length; i++) {
            maxSum[i] = Math.max(maxSum[i - 1] + nums[i - 1], nums[i - 1]);
        }
        for(int i = 1; i < maxSum.length; i++) {
            max = Math.max(maxSum[i], max);
        }
        return max;
    }
}
Prefix sum, O(N) runtime, O(1) space
Maintain a minimum sum variable as you iterate through the entire array. When visiting each number in the input array, update the prefix sum variable currSum; then update the maximum sum and minimum sum as shown in the following code.
class Solution {
    public int maxSubArray(int[] nums) {
        if(nums == null || nums.length == 0) {
            return 0;
        }
        int maxSum = Integer.MIN_VALUE, currSum = 0, minSum = 0;
        for(int i = 0; i < nums.length; i++) {
            currSum += nums[i];
            maxSum = Math.max(maxSum, currSum - minSum);
            minSum = Math.min(minSum, currSum);
        }
        return maxSum;
    }
}
Divide and conquer, O(N * logN) runtime
Divide the original problem into two subproblems and apply this principle recursively using the following formula.
Let A[0,.... midIdx] be the left half of A, A[midIdx + 1, ..... A.length - 1] be the right half of A. leftSumMax is the answer of the left subproblem, rightSumMax is the answer of the right subproblem.
The final answer will be one of the following 3:
1. only uses numbers from the left half (solved by the left subproblem)
2. only uses numbers from the right half (solved by the right subproblem)
3. uses numbers from both left and right halves (solved in O(n) time)
class Solution {
    public int maxSubArray(int[] nums) {
        if(nums == null || nums.length == 0)
        {
            return 0;
        }
        return maxSubArrayHelper(nums, 0, nums.length - 1);
    }

    private int maxSubArrayHelper(int[] nums, int startIdx, int endIdx){
        if(startIdx == endIdx){
            return nums[startIdx];
        }
        int midIdx = startIdx + (endIdx - startIdx) / 2;
        int leftMax = maxSubArrayHelper(nums, startIdx, midIdx);
        int rightMax = maxSubArrayHelper(nums, midIdx + 1, endIdx);

        int leftIdx = midIdx, rightIdx = midIdx + 1;
        int leftSumMax = nums[leftIdx], rightSumMax = nums[rightIdx];
        int leftSum = nums[leftIdx], rightSum = nums[rightIdx];
        for(int i = leftIdx - 1; i >= startIdx; i--){
            leftSum += nums[i];
            leftSumMax = Math.max(leftSumMax, leftSum);
        }
        for(int j = rightIdx + 1; j <= endIdx; j++){
            rightSum += nums[j];
            rightSumMax = Math.max(rightSumMax, rightSum);
        }
        return Math.max(Math.max(leftMax, rightMax), leftSumMax + rightSumMax);
    }
}
Try this:
locate the first positive number, at offset i.
add the following positive numbers, giving a sum sum; the last offset is j. If this sum is greater than your current best sum, it becomes the current best sum with offsets i to j.
add the negative numbers that follow until you get another positive number. If this negative sum is greater in absolute value than sum, start a new sum at this offset; otherwise continue with the current sum.
go back to step 2.
Stop when you get to the end of the array; the best positive sum has been found.
If no positive sum can be found, locate the least negative value; this single entry would be your best non-trivial sum. (A compact version of this scan is sketched below.)
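For reference, here is a compact Java sketch of this scanning idea, which is essentially Kadane's algorithm (assuming the array has at least one element):

static int maxSubarraySum(int[] a) {
    int best = a[0];       // best sum found so far
    int current = a[0];    // best sum of a subarray ending at the current index
    for (int i = 1; i < a.length; i++) {
        current = Math.max(a[i], current + a[i]); // extend the current run or start a new one
        best = Math.max(best, current);
    }
    return best;           // for an all-negative array this is the least negative entry
}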

Find 2 numbers in an unsorted array equal to a given sum

We need to find pair of numbers in an array whose sum is equal to a given value.
A = {6,4,5,7,9,1,2}
Sum = 10
Then the pairs are - {6,4} , {9,1}
I have two solutions for this:
an O(n log n) solution: sort + check the sum with 2 iterators (beginning and end);
an O(n) solution: hash the array, then check whether sum - hash[i] exists in the hash table.
But the problem is that although the second solution is O(n) time, it uses O(n) space as well.
So I was wondering if we could do it in O(n) time and O(1) space. And this is NOT homework!
Use in-place radix sort and OP's first solution with 2 iterators, coming towards each other.
If numbers in the array are not some sort of multi-precision numbers and are, for example, 32-bit integers, you can sort them in 2*32 passes using practically no additional space (1 bit per pass). Or 2*8 passes and 16 integer counters (4 bits per pass).
Details for the 2 iterators solution:
First iterator initially points to first element of the sorted array and advances forward. Second iterator initially points to last element of the array and advances backward.
If sum of elements, referenced by iterators, is less than the required value, advance first iterator. If it is greater than the required value, advance second iterator. If it is equal to the required value, success.
Only one pass is needed, so time complexity is O(n). Space complexity is O(1). If radix sort is used, complexities of the whole algorithm are the same.
If you are interested in related problems (with sum of more than 2 numbers), see "Sum-subset with a fixed subset size" and "Finding three elements in an array whose sum is closest to an given number".
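A minimal Java sketch of the two-iterator scan described above, assuming the array has already been sorted (the method name is illustrative):

static int[] findPairWithSum(int[] sorted, int target) {
    int i = 0, j = sorted.length - 1;
    while (i < j) {
        int s = sorted[i] + sorted[j];
        if (s == target) return new int[]{sorted[i], sorted[j]}; // success
        if (s < target) i++;   // sum too small: advance the first iterator
        else j--;              // sum too large: advance the second iterator
    }
    return null;               // no such pair exists
}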
This is a classic interview question from Microsoft Research Asia:
How to find 2 numbers in an unsorted array equal to a given sum.
[1] Brute force solution
This algorithm is very simple. The time complexity is O(N^2).
[2] Using binary search
Use binary search to find sum - arr[i] for every arr[i]; the time complexity can be reduced to O(N*logN).
[3] Using a hash
Based on algorithm [2], use a hash instead; the time complexity can be reduced to O(N), but this solution adds O(N) space for the hash table.
[4] Optimal algorithm (on a sorted array):
Pseudo-code:
i = 0; j = n - 1;
while (i < j) {
    if (arr[i] + arr[j] == sum) return (i, j);
    else if (arr[i] + arr[j] < sum) i++;
    else j--;
}
return (-1, -1);
or
If a[M] + a[m] > I then M--
If a[M] + a[m] < I then m++
If a[M] + a[m] == I you have found it
If m > M, no such numbers exist.
And is this question completely solved? No. If the number of addends is N, this problem becomes very complex.
The question then:
How can I find all the combination cases with a given number?
This is a classic NP-complete problem called subset-sum.
To understand NP/NPC/NP-hard, you'd better read some professional books.
References:
[1]http://www.quora.com/Mathematics/How-can-I-find-all-the-combination-cases-with-a-given-number
[2]http://en.wikipedia.org/wiki/Subset_sum_problem
for (int i=0; i < array.size(); i++){
int value = array[i];
int diff = sum - value;
if (! hashSet.contains(diffvalue)){
hashSet.put(value,value);
} else{
printf(sum = diffvalue + hashSet.get(diffvalue));
}
}
--------
Sum being sum of 2 numbers.
public void printPairsOfNumbers(int[] a, int sum){
    // O(n^2)
    for (int i = 0; i < a.length; i++) {
        for (int j = i + 1; j < a.length; j++) {
            if(sum - a[i] == a[j]){
                // match..
                System.out.println(a[i] + "," + a[j]);
            }
        }
    }

    // O(n) time and O(n) space
    Set<Integer> cache = new HashSet<Integer>();
    cache.add(a[0]);
    for (int i = 1; i < a.length; i++) {
        if(cache.contains(sum - a[i])){
            // match
            System.out.println(a[i] + "," + (sum - a[i]));
        }else{
            cache.add(a[i]);
        }
    }
}
Create a dictionary whose keys are the numbers from the list and whose values are the complements needed to reach the desired sum. Then check for the presence of each pair of numbers in the list.
def check_sum_in_list(p_list, p_check_sum):
    l_dict = {i: (p_check_sum - i) for i in p_list}
    for key, value in l_dict.items():
        if key in p_list and value in p_list:
            return True
    return False

if __name__ == '__main__':
    l1 = [1, 3, 7, 12, 72, 2, 8]
    l2 = [1, 2, 2, 4, 7, 4, 13, 32]
    print(check_sum_in_list(l1, 10))
    print(check_sum_in_list(l2, 99))
Output:
True
False
version 2
import random

def check_sum_in_list(p_list, p_searched_sum):
    print(list(p_list))
    l_dict = {i: p_searched_sum - i for i in set(p_list)}
    for key, value in l_dict.items():
        if key in p_list and value in p_list:
            if p_list.index(key) != p_list.index(value):
                print(key, value)
                return True
    return False

if __name__ == '__main__':
    l1 = []
    for i in range(1, 2000000):
        l1.append(random.randrange(1, 1000))
    j = 0
    i = 9
    while i < len(l1):
        if check_sum_in_list(l1[j:i], 100):
            print('Found')
            break
        else:
            print('Continue searching')
            j = i
            i = i + 10
Output:
...
[154, 596, 758, 924, 797, 379, 731, 278, 992, 167]
Continue searching
[808, 730, 216, 15, 261, 149, 65, 386, 670, 770]
Continue searching
[961, 632, 39, 888, 61, 18, 166, 167, 474, 108]
39 61
Found
[Finished in 3.9s]
If you assume that the value M to which the pairs are supposed to sum is constant and that the entries in the array are positive, then you can do this in one pass (O(n) time) using M/2 pointers (O(1) space for fixed M) as follows. The pointers are labeled P[1], P[2], ..., P[k], where k = floor(M/2). Then do something like this:
for (int i = 0; i < N; ++i) {
    int j = array[i];
    if (j < M/2) {
        if (P[j] == 0)
            P[j] = -(i+1);         // found smaller element of a pair, unpaired so far
        else if (P[j] > 0) {
            print(P[j]-1, i);      // found a pair (the larger element was seen first)
            P[j] = 0;
        }
    } else {                       // j >= M/2, so index the pointer by the partner M-j
        if (P[M-j] == 0)
            P[M-j] = (i+1);        // found larger element, unpaired so far
        else if (P[M-j] < 0) {
            print(-P[M-j]-1, i);   // found a pair (the smaller element was seen first)
            P[M-j] = 0;
        }
    }
}
You can handle repeated entries (e.g. two 6's) by storing the indices as digits in base N, for example. For the middle value j == M/2 (when M is even), you can add the conditional:
if (j == M/2) {
    if (P[j] == 0)
        P[j] = i+1;        // found unpaired middle element
    else {
        print(P[j]-1, i);  // found a pair of two middle elements
        P[j] = 0;
    }
}
But now you have the problem of putting the pairs together.
Does the obvious solution not work (iterating over every consecutive pair), or can the two numbers be in any order?
In that case, you could sort the list of numbers and use random sampling to partition the sorted list until you have a sublist that is small enough to iterate over.
public static ArrayList<Integer> find(int[] A, int target){
    HashSet<Integer> set = new HashSet<Integer>();
    ArrayList<Integer> list = new ArrayList<Integer>();
    int difference = 0;
    for(Integer i : A){
        set.add(i);
    }
    for(int i = 0; i < A.length; i++){
        difference = target - A[i];
        // A[i] != difference also rejects pairs of two equal values (e.g. 5 + 5),
        // even when that value occurs twice in the array
        if(set.contains(difference) && A[i] != difference){
            list.add(A[i]);
            list.add(difference);
            return list;
        }
    }
    return null;
}
package algorithmsDesignAnalysis;

public class USELESStemp {
    public static void main(String[] args){
        int A[] = {6, 8, 7, 5, 3, 11, 10};
        int sum = 12;
        int[] B = new int[A.length];
        int Max = A.length;

        for(int i = 0; i < A.length; i++){
            B[i] = sum - A[i];
            if(B[i] > Max)
                Max = B[i];
            if(A[i] > Max)
                Max = A[i];
            System.out.print(" " + B[i] + "");
        } // O(n) here;

        System.out.println("\n Max = " + Max);
        int[] Array = new int[Max + 1];

        for(int i = 0; i < B.length; i++){
            if(B[i] >= 0)                  // guard against negative complements
                Array[B[i]] = B[i];
        } // O(n) here;

        for(int i = 0; i < A.length; i++){
            if (Array[A[i]] > 0)           // a zero entry means A[i] is no one's complement
                System.out.println("We got one: " + A[i] + " and " + (sum - A[i]));
        } // O(n) here;
    } // end main();

    /******
     Running time: 3*O(n)
    *******/
}
The code below takes the array and the number N as the target sum.
First the array is sorted; then a new array containing the remaining complements is built, and the complements and the array are scanned simultaneously - not with binary search, but with a simple linear scan.
public static int solution(int[] a, int N) {
    quickSort(a, 0, a.length - 1);               // n log(n)
    int[] remainders = new int[a.length];
    for (int i = 0; i < a.length; i++) {
        remainders[a.length - 1 - i] = N - a[i]; // n
    }
    int previous = 0;
    for (int j = 0; j < a.length; j++) {         // ~~ n
        int k = previous;
        while (k < remainders.length && remainders[k] < a[j]) {
            k++;
        }
        if (k < remainders.length && remainders[k] == a[j]) {
            return 1;
        }
        previous = k;
    }
    return 0;
}
Shouldn't iterating from both ends just solve the problem?
Sort the array, and start comparing from both ends:
if ((arr[start] + arr[end]) < sum) start++;
if ((arr[start] + arr[end]) > sum) end--;
if ((arr[start] + arr[end]) == sum) { print arr[start] "," arr[end]; start++; }
if (start > end) break;
Time complexity: O(n log n)
If it's a sorted array and we need only one pair of numbers, not all the pairs, we can do it like this:
public void sums(int a[], int x){   // a = 1,2,3,9,11,20   x = 11
    int i = 0, j = a.length - 1;
    while(i < j){
        if(a[i] + a[j] == x){
            System.out.println("the numbers : " + a[i] + " " + a[j]);
            return;   // stop once the pair is found
        }
        else if(a[i] + a[j] < x) i++;
        else j--;
    }
}
1 2 3 9 11 20 || i=0 , j=5 sum=21 x=11
1 2 3 9 11 20 || i=0 , j=4 sum=12 x=11
1 2 3 9 11 20 || i=0 , j=3 sum=10 x=11
1 2 3 9 11 20 || i=1 , j=3 sum=11 x=11
END
The following code returns true if two integers in an array sum to a given compare value.
function compareArraySums(array, compare){
    var candidates = [];
    function compareAdditions(element, index, array){
        if(element <= compare){   // values above the target can never be part of a pair (assuming non-negative values)
            candidates.push(element);
        }
    }
    array.forEach(compareAdditions);
    for(var i = 0; i < candidates.length; i++){
        for(var j = i + 1; j < candidates.length; j++){
            if (candidates[i] + candidates[j] === compare){
                return true;
            }
        }
    }
    return false;
}
Python 2.7 Implementation:
import itertools
list = [1, 1, 2, 3, 4, 5,]
uniquelist = set(list)
targetsum = 5
for n in itertools.combinations(uniquelist, 2):
    if n[0] + n[1] == targetsum:
        print str(n[0]) + " + " + str(n[1])
Output:
1 + 4
2 + 3
https://github.com/clockzhong/findSumPairNumber
#!/usr/bin/env python
import sys
import os
import re

# get the number list
numberListStr = raw_input("Please input your number list (separated by spaces)...\n")
numberList = [int(i) for i in numberListStr.split()]
print 'you have input the following number list:'
print numberList

# get the sum target value
sumTargetStr = raw_input("Please input your target number:\n")
sumTarget = int(sumTargetStr)
print 'your target is: '
print sumTarget

def generatePairsWith2IndexLists(list1, list2):
    result = []
    for item1 in list1:
        for item2 in list2:
            # result.append([item1, item2])
            result.append([item1 + 1, item2 + 1])
    # print result
    return result

def generatePairsWithOneIndexLists(list1):
    result = []
    index = 0
    while index < (len(list1) - 1):
        index2 = index + 1
        while index2 < len(list1):
            # result.append([list1[index], list1[index2]])
            result.append([list1[index] + 1, list1[index2] + 1])
            index2 += 1
        index += 1
    return result

def getPairs(numList, target):
    pairList = []
    candidateSlots = []  # we have (target + 1) slots
    # init the candidateSlots list
    index = 0
    while index < target + 1:
        candidateSlots.append(None)
        index += 1

    # generate the candidateSlots; contributes O(n) complexity
    index = 0
    while index < len(numList):
        if numList[index] <= target and numList[index] >= 0:
            if candidateSlots[numList[index]] == None:
                candidateSlots[numList[index]] = [index]
            else:
                candidateSlots[numList[index]].append(index)
        index += 1
    # print candidateSlots

    # generate the pairs list based on the candidateSlots[] we just created;
    # contributes O(target) complexity
    index = 0
    while index <= (target / 2):
        if candidateSlots[index] != None and candidateSlots[target - index] != None:
            if index != (target - index):
                newPairList = generatePairsWith2IndexLists(candidateSlots[index], candidateSlots[target - index])
            else:
                newPairList = generatePairsWithOneIndexLists(candidateSlots[index])
            pairList += newPairList
        index += 1
    return pairList

print getPairs(numberList, sumTarget)
I've successfully implemented one solution with Python with O(n+m) time and space cost.
Here "m" means the target value which the two numbers' sum must equal.
I believe this is the lowest cost achievable. Erict2k used itertools.combinations; it costs similar or higher time and space compared to this algorithm.
If the numbers aren't very big, you can use the fast Fourier transform to multiply two polynomials built from the array and then check in O(1) whether the coefficient of x^(needed sum) is greater than zero. O(n log n) total!
// Java implementation using hashing
import java.io.*;

class PairSum
{
    private static final int MAX = 100000; // max size of the flag array

    static void printpairs(int arr[], int sum)
    {
        // declares and initializes the whole array as false
        boolean[] binmap = new boolean[MAX];
        for (int i = 0; i < arr.length; ++i)
        {
            int temp = sum - arr[i];
            // checking for condition
            if (temp >= 0 && binmap[temp])
            {
                System.out.println("Pair with given sum " + sum +
                                   " is (" + arr[i] + ", " + temp + ")");
            }
            binmap[arr[i]] = true;
        }
    }

    // main to test the above function
    public static void main(String[] args)
    {
        int A[] = {1, 4, 45, 6, 10, 8};
        int n = 16;
        printpairs(A, n);
    }
}
public static void Main(string[] args)
{
    int[] myArray = {1, 2, 3, 4, 5, 6, 1, 4, 2, 2, 7};
    int Sum = 9;
    // note: this only checks consecutive pairs, not all pairs
    for (int j = 1; j < myArray.Length; j++)
    {
        if (myArray[j - 1] + myArray[j] == Sum)
        {
            Console.WriteLine("{0}, {1}", myArray[j - 1], myArray[j]);
        }
    }
    Console.ReadLine();
}

Finding sum of N largest elements of array of single-digit values [duplicate]

Possible Duplicate:
Retrieving the top 100 numbers from one hundred million of numbers
I have an array which consists of positive numbers between 0 and 9 (digits can repeat). I want to find the sum of the N largest elements.
For example array = 5 1 2 4 and N=2
ans = 5+4 = 9
Simple approach: sort the array and find the sum of the n largest elements. But I don't want to use that.
The simplest O(n) solution is the following:
Run through array a and increase b[a[i]], where b is a zero-initialized array of 10 integers.
Run through b starting from the end (9th position): if b[i] is lower than N, add b[i] * i to your answer and decrease N by b[i]; otherwise, if b[i] is greater than or equal to N, add N * i to the answer and end the loop.
Edit: code
vector<int> b(10, 0);
for(int i = 0; i < a.size(); ++i) {
    b[a[i]]++;
}
int sum = 0;
for(int i = 9; i >= 0; --i) {
    if(b[i] < n) {
        sum += b[i] * i;
        n -= b[i];
    } else {
        sum += n * i;
        n = 0;
        break;
    }
}
if(n != 0) {
    // not enough elements in the array
}
Insert everything into a heap, then delete (and sum) N elements.
Complexity: O(n + N log n), because creating a heap is O(n), each delete is O(log n), and you delete N times; total O(n + N log n), where n is the number of elements in your array (a minimal sketch of this is below).
EDIT: I missed it at first, but all your numbers are digits, so the simplest solution is to use radix sort or bucket sort and then sum the N biggest elements; that solution is O(n).
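A minimal Java sketch of the heap idea above (the method name is illustrative; note that building the heap by repeated offer() is O(n log n), while a heapify-style construction would give the O(n) term quoted above):

import java.util.PriorityQueue;

static int sumOfNLargest(int[] a, int N) {
    PriorityQueue<Integer> maxHeap = new PriorityQueue<>((x, y) -> y - x);
    for (int v : a) maxHeap.offer(v);    // insert everything
    int sum = 0;
    for (int i = 0; i < N && !maxHeap.isEmpty(); i++) {
        sum += maxHeap.poll();           // delete (and sum) N elements
    }
    return sum;
}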
I am a bit slow today; I should code faster, hehe ;-)
There are multiple answers already, but I want to share my pseudo-code with you anyway; hope it helps!
public class LargestSumAlgorithm
{
    private ArrayList arValues;

    public void AddValueToArray(int p_iValue)
    {
        arValues.Add(p_iValue);
    }

    public int ComputeMaxSum(int p_iNumOfElementsToCompute)
    {
        // check if there are n elements in the array
        int iNumOfItemsInArray = arValues.Size;
        int iComputedValue = 0;
        if (iNumOfItemsInArray >= p_iNumOfElementsToCompute)
        {
            // order the ArrayList descending - largest values first
            arValues.Sort(SortingEnum.Descending);
            // iterate over the first p_iNumOfElementsToCompute in the zero-index-based ArrayList
            for (int iPositionInValueArray = 0; iPositionInValueArray < p_iNumOfElementsToCompute; iPositionInValueArray++)
            {
                iComputedValue += arValues[iPositionInValueArray];
            }
        }
        else
        {
            throw new ArgumentOutOfRangeException;
        }
        return iComputedValue;
    }

    public LargestSumAlgorithm()
    {
        arValues = new ArrayList();
    }
}

public class Example
{
    public static void Main()
    {
        LargestSumAlgorithm theAlgorithm = new LargestSumAlgorithm();
        theAlgorithm.AddValueToArray(1);
        theAlgorithm.AddValueToArray(2);
        theAlgorithm.AddValueToArray(3);
        theAlgorithm.AddValueToArray(4);
        theAlgorithm.AddValueToArray(5);
        int iResult = theAlgorithm.ComputeMaxSum(3);
    }
}
If you are using C++, use std::nth_element() to partition the array into two sets, one of them containing the N largest elements (unordered). The selection algorithm runs in O(n) time.

How to find the kth largest element in an unsorted array of length n in O(n)?

I believe there's a way to find the kth largest element in an unsorted array of length n in O(n). Or perhaps it's "expected" O(n) or something. How can we do this?
This is called finding the k-th order statistic. There's a very simple randomized algorithm (called quickselect) taking O(n) average time and O(n^2) worst case time, and a pretty complicated non-randomized algorithm (called median of medians, the worst-case fallback used by introselect) taking O(n) worst case time. There's some info on Wikipedia, but it's not very good.
Everything you need is in these powerpoint slides. Just to extract the basic O(n) worst-case algorithm (median of medians):
Select(A, n, i):
    Divide input into ⌈n/5⌉ groups of size 5.

    /* Partition on median-of-medians */
    medians = array of each group's median.
    pivot = Select(medians, ⌈n/5⌉, ⌈n/10⌉)
    Left Array L and Right Array G = partition(A, pivot)

    /* Find ith element in L, pivot, or G */
    k = |L| + 1
    If i = k, return pivot
    If i < k, return Select(L, k-1, i)
    If i > k, return Select(G, n-k, i-k)
It's also very nicely detailed in the Introduction to Algorithms book by Cormen et al.
If you want a true O(n) algorithm, as opposed to O(kn) or something like that, then you should use quickselect (it's basically quicksort where you throw out the partition that you're not interested in). My prof has a great writeup, with the runtime analysis: (reference)
The QuickSelect algorithm quickly finds the k-th smallest element of an unsorted array of n elements. It is a RandomizedAlgorithm, so we compute the worst-case expected running time.
Here is the algorithm.
QuickSelect(A, k)
    let r be chosen uniformly at random in the range 1 to length(A)
    let pivot = A[r]
    let A1, A2 be new arrays
    # split into a pile A1 of small elements and A2 of big elements
    for i = 1 to n
        if A[i] < pivot then
            append A[i] to A1
        else if A[i] > pivot then
            append A[i] to A2
        else
            # do nothing
    end for
    if k <= length(A1):
        # it's in the pile of small elements
        return QuickSelect(A1, k)
    else if k > length(A) - length(A2)
        # it's in the pile of big elements
        return QuickSelect(A2, k - (length(A) - length(A2)))
    else
        # it's equal to the pivot
        return pivot
What is the running time of this algorithm? If the adversary flips coins for us, we may find that the pivot is always the largest element and k is always 1, giving a running time of
T(n) = Theta(n) + T(n-1) = Theta(n^2)
But if the choices are indeed random, the expected running time is given by
T(n) <= Theta(n) + (1/n) ∑_{i=1..n} T(max(i, n-i-1))
where we are making the not entirely reasonable assumption that the recursion always lands in the larger of A1 or A2.
Let's guess that T(n) <= an for some constant a. Then we get
T(n)
  <= cn + (1/n) ∑_{i=1..n} T(max(i-1, n-i))
   = cn + (1/n) ∑_{i=1..floor(n/2)} T(n-i) + (1/n) ∑_{i=floor(n/2)+1..n} T(i)
  <= cn + 2 (1/n) ∑_{i=floor(n/2)..n} T(i)
  <= cn + 2 (1/n) ∑_{i=floor(n/2)..n} a*i
and now somehow we have to get the horrendous sum on the right of the plus sign to absorb the cn on the left. If we just bound it as 2 (1/n) ∑_{i=n/2..n} a*n, we get roughly 2 (1/n) (n/2) a*n = a*n. But this is too big - there's no room to squeeze in an extra cn. So let's expand the sum using the arithmetic series formula:
∑_{i=floor(n/2)..n} i
   = ∑_{i=1..n} i - ∑_{i=1..floor(n/2)} i
   = n(n+1)/2 - floor(n/2)(floor(n/2)+1)/2
  <= n^2/2 - (n/4)^2/2
   = (15/32) n^2
where we take advantage of n being "sufficiently large" to replace the ugly floor(n/2) factors with the much cleaner (and smaller) n/4. Now we can continue with
cn + 2 (1/n) ∑_{i=floor(n/2)..n} a*i
  <= cn + (2a/n) (15/32) n^2
   = n (c + (15/16) a)
  <= an
provided a > 16c.
This gives T(n) = O(n). It's clearly Omega(n), so we get T(n) = Theta(n).
A quick Google on that ('kth largest element array') returned this: http://discuss.joelonsoftware.com/default.asp?interview.11.509587.17
"Make one pass through tracking the three largest values so far."
(it was specifically for 3d largest)
and this answer:
Build a heap/priority queue. O(n)
Pop top element. O(log n)
Pop top element. O(log n)
Pop top element. O(log n)
Total = O(n) + 3 O(log n) = O(n)
You do it like quicksort: pick an element at random and shove everything either higher or lower. At this point you'll know which element you actually picked, and if it is the kth element you're done; otherwise you repeat with the bin (higher or lower) that the kth element would fall in. Statistically speaking, the time it takes to find the kth element grows with n: O(n) on average.
A Programmer's Companion to Algorithm Analysis gives a version that is O(n), although the author states that the constant factor is so high, you'd probably prefer the naive sort-the-list-then-select method.
I answered the letter of your question :)
The C++ standard library has almost exactly that function call nth_element, although it does modify your data. It has expected linear run-time, O(N), and it also does a partial sort.
const int N = ...;
double a[N];
// ...
const int m = ...; // m < N
nth_element (a, a + m, a + N);
// a[m] contains the mth element in a
You can do it in O(n + kn) = O(n) (for constant k) time and O(k) space by keeping track of the k largest elements you've seen.
For each element in the array, scan the list of k largest and replace the smallest of them with the new element if the new one is bigger.
Warren's priority heap solution is neater though.
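For reference, a minimal Java sketch of this O(kn) scan (the method name is illustrative; it assumes the array has at least k elements and no Integer.MIN_VALUE entries):

static int kthLargest(int[] a, int k) {
    int[] top = new int[k];                         // the k largest seen so far
    java.util.Arrays.fill(top, Integer.MIN_VALUE);
    for (int v : a) {
        int min = 0;                                // index of the smallest tracked value
        for (int i = 1; i < k; i++) if (top[i] < top[min]) min = i;
        if (v > top[min]) top[min] = v;             // replace it if the new value is bigger
    }
    int min = 0;
    for (int i = 1; i < k; i++) if (top[i] < top[min]) min = i;
    return top[min];                                // the kth largest is the smallest tracked value
}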
Although I'm not very sure about the O(n) complexity, it is sure to be between O(n) and O(n log n), and closer to O(n). The function is written in Java.
public int quickSelect(ArrayList<Integer> list, int nthSmallest){
    //Choose a random pivot index in the range 0 to list size - 1
    Random random = new Random();
    int pivotIndex = random.nextInt(list.size());
    int pivot = list.get(pivotIndex);

    ArrayList<Integer> smallerNumberList = new ArrayList<Integer>();
    ArrayList<Integer> greaterNumberList = new ArrayList<Integer>();

    //Split the list into two.
    //Values smaller than the pivot go to smallerNumberList,
    //values greater than the pivot go to greaterNumberList,
    //and nothing is done for values equal to the pivot.
    for(int i=0; i<list.size(); i++){
        if(list.get(i)<pivot){
            smallerNumberList.add(list.get(i));
        }
        else if(list.get(i)>pivot){
            greaterNumberList.add(list.get(i));
        }
        else{
            //Do nothing
        }
    }

    //If smallerNumberList has at least nthSmallest elements, the nthSmallest number must be in this list
    if(nthSmallest <= smallerNumberList.size()){
        return quickSelect(smallerNumberList, nthSmallest);
    }
    //If nthSmallest is greater than [ list.size() - greaterNumberList.size() ], the nthSmallest number must be in greaterNumberList
    //The step is a bit tricky. If it is confusing, please see the above loop once again for clarification.
    else if(nthSmallest > (list.size() - greaterNumberList.size())){
        //nthSmallest has to be adjusted here: [ list.size() - greaterNumberList.size() ] elements
        //(smallerNumberList plus the pivot copies) are already accounted for
        nthSmallest = nthSmallest - (list.size() - greaterNumberList.size());
        return quickSelect(greaterNumberList, nthSmallest);
    }
    else{
        return pivot;
    }
}
I implemented finding the kth minimum in n unsorted elements using dynamic programming, specifically the tournament method. The execution time is O(n + k log(n)). The mechanism used is listed as one of the methods on the Wikipedia page about selection algorithms (as indicated in one of the postings above). You can read about the algorithm and also find code (Java) on my blog page Finding Kth Minimum. In addition, the logic can do partial ordering of the list - return the first K min (or max) in O(k log(n)) time.
Though the code provided results in the kth minimum, similar logic can be employed to find the kth maximum in O(k log(n)), ignoring the pre-work done to create the tournament tree.
Sexy quickselect in Python
import random

def quickselect(arr, k):
    '''
    k = 1 returns the first element in ascending order.
    Can easily be modified to return the first element in descending order.
    '''
    r = random.randrange(0, len(arr))
    a1 = [i for i in arr if i < arr[r]]  # partition
    a2 = [i for i in arr if i > arr[r]]
    if k <= len(a1):
        return quickselect(a1, k)
    elif k > len(arr) - len(a2):
        return quickselect(a2, k - (len(arr) - len(a2)))
    else:
        return arr[r]
As per the paper Finding the Kth largest item in a list of n items, the following algorithm will take O(n) time in the worst case.
Divide the array into n/5 lists of 5 elements each.
Find the median in each subarray of 5 elements.
Recursively find the median of all the medians; let's call it M.
Partition the array into two subarrays: the 1st subarray contains the elements larger than M (let's say this subarray is a1), while the other subarray contains the elements smaller than M (let's call this subarray a2).
If k <= |a1|, return selection(a1, k).
If k - 1 = |a1|, return M.
If k > |a1| + 1, return selection(a2, k - |a1| - 1).
Analysis: As suggested in the original paper:
We use the median to partition the list into two halves(the first half,
if k <= n/2 , and the second half otherwise). This algorithm takes
time cn at the first level of recursion for some constant c, cn/2 at
the next level (since we recurse in a list of size n/2), cn/4 at the
third level, and so on. The total time taken is cn + cn/2 + cn/4 +
.... = 2cn = O(n).
Why is the partition size taken as 5 and not 3?
As mentioned in the original paper:
Dividing the list by 5 assures a worst-case split of 70 - 30. At least
half of the medians are greater than the median-of-medians, hence at least
half of the n/5 blocks have at least 3 elements, and this gives a
3n/10 split, which means the other partition is 7n/10 in the worst case.
That gives T(n) = T(n/5) + T(7n/10) + O(n). Since n/5 + 7n/10 < n, the
worst-case running time is O(n).
Now I have tried to implement the above algorithm as:
public static int findKthLargestUsingMedian(Integer[] array, int k) {
    // Step 1: Divide the list into n/5 lists of 5 elements each.
    int noOfRequiredLists = (int) Math.ceil(array.length / 5.0);
    // Step 2: Find the pivotal element, aka the median of medians.
    int medianOfMedian = findMedianOfMedians(array, noOfRequiredLists);
    // Now we need two lists split using medianOfMedian as pivot. All elements in listWithGreaterNumbers
    // will be greater than medianOfMedian and listWithSmallerNumbers will have elements smaller than medianOfMedian.
    List<Integer> listWithGreaterNumbers = new ArrayList<>(); // elements greater than medianOfMedian
    List<Integer> listWithSmallerNumbers = new ArrayList<>(); // elements less than medianOfMedian
    for (Integer element : array) {
        if (element < medianOfMedian) {
            listWithSmallerNumbers.add(element);
        } else if (element > medianOfMedian) {
            listWithGreaterNumbers.add(element);
        }
    }
    // Next step.
    if (k <= listWithGreaterNumbers.size())
        return findKthLargestUsingMedian(listWithGreaterNumbers.toArray(new Integer[listWithGreaterNumbers.size()]), k);
    else if ((k - 1) == listWithGreaterNumbers.size())
        return medianOfMedian;
    else if (k > (listWithGreaterNumbers.size() + 1))
        return findKthLargestUsingMedian(listWithSmallerNumbers.toArray(new Integer[listWithSmallerNumbers.size()]), k - listWithGreaterNumbers.size() - 1);
    return -1;
}

public static int findMedianOfMedians(Integer[] mainList, int noOfRequiredLists) {
    int[] medians = new int[noOfRequiredLists];
    for (int count = 0; count < noOfRequiredLists; count++) {
        int startOfPartialArray = 5 * count;
        int endOfPartialArray = Math.min(startOfPartialArray + 5, mainList.length); // the last chunk may hold fewer than 5 elements
        Integer[] partialArray = Arrays.copyOfRange(mainList, startOfPartialArray, endOfPartialArray);
        // Step 2: Find the median of each of these sublists (each chunk must be sorted for its middle element to be the median).
        Arrays.sort(partialArray);
        int medianIndex = partialArray.length / 2;
        medians[count] = partialArray[medianIndex];
    }
    // Step 3: Find the median of the medians.
    Arrays.sort(medians);
    return medians[medians.length / 2];
}
Just for the sake of completeness, another algorithm makes use of a priority queue and takes time O(n log n).
public static int findKthLargestUsingPriorityQueue(Integer[] nums, int k) {
    int p = 0;
    int numElements = nums.length;
    // create a priority queue where all the elements of nums will be stored
    PriorityQueue<Integer> pq = new PriorityQueue<Integer>();
    // place all the elements of the array into this priority queue
    for (int n : nums) {
        pq.add(n);
    }
    // extract the kth largest element by polling the min-heap numElements - k + 1 times
    while (numElements - k + 1 > 0) {
        p = pq.poll();
        k++;
    }
    return p;
}
Both of these algorithms can be tested as:
public static void main(String[] args) throws IOException {
    Integer[] numbers = new Integer[]{2, 3, 5, 4, 1, 12, 11, 13, 16, 7, 8, 6, 10, 9, 17, 15, 19, 20, 18, 23, 21, 22, 25, 24, 14};
    System.out.println(findKthLargestUsingMedian(numbers, 8));
    System.out.println(findKthLargestUsingPriorityQueue(numbers, 8));
}
As expected output is:
18
18
Find the median of the array in linear time, then use the partition procedure, exactly as in quicksort, to divide the array in two parts: values to the left of the median lesser (<) than the median, and values to the right greater (>) than it. That too can be done in linear time. Now go to the part of the array where the kth element lies.
The recurrence becomes:
T(n) = T(n/2) + cn
which gives O(n) overall.
Below is a link to a full implementation with a fairly extensive explanation of how the algorithm for finding the Kth element in an unsorted array works. The basic idea is to partition the array like in quicksort. But in order to avoid extreme cases (e.g. when the smallest element is chosen as pivot in every step, so that the algorithm degenerates into O(n^2) running time), special pivot selection is applied, called the median-of-medians algorithm. The whole solution runs in O(n) time in the worst and in the average case.
Here is a link to the full article (it is about finding the Kth smallest element, but the principle is the same for finding the Kth largest):
Finding Kth Smallest Element in an Unsorted Array
How about this kind of approach:
Maintain a buffer of length k and a tmp_max. Getting tmp_max is O(k), and it is done n times, so something like O(kn).
Is it right, or am I missing something?
Although it doesn't beat the average case of quickselect or the worst case of the median-statistics method, it's pretty easy to understand and implement.
There is also an algorithm that outperforms the quickselect algorithm. It's called the Floyd-Rivest (FR) algorithm.
Original article: https://doi.org/10.1145/360680.360694
Downloadable version: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.309.7108&rep=rep1&type=pdf
Wikipedia article: https://en.wikipedia.org/wiki/Floyd%E2%80%93Rivest_algorithm
I tried to implement the quickselect and FR algorithms in C++, and compared them to the standard C++ library implementation std::nth_element (which is basically an introselect hybrid of quickselect and heapselect). The result: quickselect and nth_element ran comparably on average, but the FR algorithm ran approximately twice as fast compared to them.
Sample code that I used for FR algorithm:
#include <algorithm>
#include <cmath>
#include <vector>

template <typename T> int sgn(T val);
template <typename T> T _FRselect(std::vector<T>& data, const size_t& left, const size_t& right, const size_t& n);

template <typename T>
T FRselect(std::vector<T>& data, const size_t& n)
{
    if (n == 0)
        return *(std::min_element(data.begin(), data.end()));
    else if (n == data.size() - 1)
        return *(std::max_element(data.begin(), data.end()));
    else
        return _FRselect(data, 0, data.size() - 1, n);
}

template <typename T>
T _FRselect(std::vector<T>& data, const size_t& left, const size_t& right, const size_t& n)
{
    size_t leftIdx = left;
    size_t rightIdx = right;

    while (rightIdx > leftIdx)
    {
        if (rightIdx - leftIdx > 600)
        {
            size_t range = rightIdx - leftIdx + 1;
            long long i = n - (long long)leftIdx + 1;
            long long z = log(range);
            long long s = 0.5 * exp(2 * z / 3);
            long long sd = 0.5 * sqrt(z * s * (range - s) / range) * sgn(i - (long long)range / 2);
            size_t newLeft = fmax(leftIdx, n - i * s / range + sd);
            size_t newRight = fmin(rightIdx, n + (range - i) * s / range + sd);
            _FRselect(data, newLeft, newRight, n);
        }
        T t = data[n];
        size_t i = leftIdx;
        size_t j = rightIdx;
        // arrange pivot and right index
        std::swap(data[leftIdx], data[n]);
        if (data[rightIdx] > t)
            std::swap(data[rightIdx], data[leftIdx]);

        while (i < j)
        {
            std::swap(data[i], data[j]);
            ++i; --j;
            while (data[i] < t) ++i;
            while (data[j] > t) --j;
        }

        if (data[leftIdx] == t)
            std::swap(data[leftIdx], data[j]);
        else
        {
            ++j;
            std::swap(data[j], data[rightIdx]);
        }

        // adjust left and right towards the boundaries of the subset
        // containing the (k - left + 1)th smallest element
        if (j <= n)
            leftIdx = j + 1;
        if (n <= j)
            rightIdx = j - 1;
    }

    return data[leftIdx];
}

template <typename T>
int sgn(T val) {
    return (T(0) < val) - (val < T(0));
}
Iterate through the list. If the current value is larger than the stored largest value, store it as the largest value and bump values 1-4 down, so that 5 drops off the list. If not, compare it to number 2 and do the same thing. Repeat, checking it against all 5 stored values. This should do it in O(n) for fixed k = 5.
I would like to suggest one answer:
If we take the first k elements and sort them into a linked list of k values, then for every other value, even in the worst case, if we do insertion sort for the remaining n - k values, the number of comparisons will be k * (n - k), and for the previous k values to be sorted, let it be k * (k - 1); it comes out to be O(nk), which is O(n) for constant k.
Cheers
An explanation of the median-of-medians algorithm to find the k-th largest integer out of n can be found here:
http://cs.indstate.edu/~spitla/presentation.pdf
The implementation in C++ is below:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int findMedian(vector<int> vec){
    // Find median of a vector (sort the chunk first so its middle element is the median)
    sort(vec.begin(), vec.end());
    int median;
    size_t size = vec.size();
    median = vec[(size/2)];
    return median;
}

int findMedianOfMedians(vector<vector<int> > values){
    vector<int> medians;
    for (int i = 0; i < values.size(); i++) {
        int m = findMedian(values[i]);
        medians.push_back(m);
    }
    return findMedian(medians);
}

void selectionByMedianOfMedians(const vector<int> values, int k){
    // Divide the list into n/5 lists of 5 elements each
    vector<vector<int> > vec2D;
    int count = 0;
    while (count != values.size()) {
        int countRow = 0;
        vector<int> row;
        while ((countRow < 5) && (count < values.size())) {
            row.push_back(values[count]);
            count++;
            countRow++;
        }
        vec2D.push_back(row);
    }

    cout<<endl<<endl<<"Printing 2D vector : "<<endl;
    for (int i = 0; i < vec2D.size(); i++) {
        for (int j = 0; j < vec2D[i].size(); j++) {
            cout<<vec2D[i][j]<<" ";
        }
        cout<<endl;
    }
    cout<<endl;

    // Calculating a new pivot for making splits
    int m = findMedianOfMedians(vec2D);
    cout<<"Median of medians is : "<<m<<endl;

    // Partition the list into unique elements larger than 'm' (call this sublist L1) and
    // those smaller than 'm' (call this sublist L2)
    vector<int> L1, L2;
    for (int i = 0; i < vec2D.size(); i++) {
        for (int j = 0; j < vec2D[i].size(); j++) {
            if (vec2D[i][j] > m) {
                L1.push_back(vec2D[i][j]);
            } else if (vec2D[i][j] < m){
                L2.push_back(vec2D[i][j]);
            }
        }
    }

    // Checking the splits as per the new pivot 'm'
    cout<<endl<<"Printing L1 : "<<endl;
    for (int i = 0; i < L1.size(); i++) {
        cout<<L1[i]<<" ";
    }
    cout<<endl<<endl<<"Printing L2 : "<<endl;
    for (int i = 0; i < L2.size(); i++) {
        cout<<L2[i]<<" ";
    }

    // Recursive calls
    if ((k - 1) == L1.size()) {
        cout<<endl<<endl<<"Answer :"<<m;
    } else if (k <= L1.size()) {
        return selectionByMedianOfMedians(L1, k);
    } else if (k > (L1.size() + 1)){
        return selectionByMedianOfMedians(L2, k-((int)L1.size())-1);
    }
}

int main()
{
    int values[] = {2, 3, 5, 4, 1, 12, 11, 13, 16, 7, 8, 6, 10, 9, 17, 15, 19, 20, 18, 23, 21, 22, 25, 24, 14};
    vector<int> vec(values, values + 25);
    cout<<"The given array is : "<<endl;
    for (int i = 0; i < vec.size(); i++) {
        cout<<vec[i]<<" ";
    }
    selectionByMedianOfMedians(vec, 8);
    return 0;
}
There is also Wirth's selection algorithm, which has a simpler implementation than quickselect. Wirth's selection algorithm is slower than quickselect, but with some improvements it becomes faster.
In more detail: using Vladimir Zabrodsky's MODIFIND optimization and median-of-3 pivot selection, and paying some attention to the final steps of the partitioning part of the algorithm, I've come up with the following algorithm (imaginably named "LefSelect"):
#define F_SWAP(a,b) { float temp=(a);(a)=(b);(b)=temp; }

// Note: The code needs more than 2 elements to work
float lefselect(float a[], const int n, const int k) {
    int l=0, m = n-1, i=l, j=m;
    float x;

    while (l < m) {
        if( a[k] < a[i] ) F_SWAP(a[i],a[k]);
        if( a[j] < a[i] ) F_SWAP(a[i],a[j]);
        if( a[j] < a[k] ) F_SWAP(a[k],a[j]);

        x = a[k];
        while (j > k && i < k) {
            do i++; while (a[i] < x);
            do j--; while (a[j] > x);

            F_SWAP(a[i],a[j]);
        }
        i++; j--;

        if (j < k) {
            while (a[i] < x) i++;
            l = i; j = m;
        }
        if (k < i) {
            while (x < a[j]) j--;
            m = j; i = l;
        }
    }
    return a[k];
}
In benchmarks that I did here, LefSelect is 20-30% faster than quickselect.
Haskell Solution:
kthElem index list = sort list !! index

withShape ~[] [] = []
withShape ~(x:xs) (y:ys) = x : withShape xs ys

sort [] = []
sort (x:xs) = (sort ls `withShape` ls) ++ [x] ++ (sort rs `withShape` rs)
  where
    ls = filter (< x) xs
    rs = filter (>= x) xs
This implements a lazy quickselect-style solution: the withShape method discovers the shape of a partition without actually computing its contents, so only the partitions the requested index falls into are ever sorted.
Here is a C++ implementation of randomized quickselect. The idea is to randomly pick a pivot element. To implement randomized partition, we use a random function, rand(), to generate an index between l and r, swap the element at the randomly generated index with the last element, and finally call the standard partition process which uses the last element as pivot.
#include <iostream>
#include <climits>
#include <cstdlib>
using namespace std;

int randomPartition(int arr[], int l, int r);

// This function returns k'th smallest element in arr[l..r] using
// QuickSort based method. ASSUMPTION: ALL ELEMENTS IN ARR[] ARE DISTINCT
int kthSmallest(int arr[], int l, int r, int k)
{
    // If k is smaller than the number of elements in the array
    if (k > 0 && k <= r - l + 1)
    {
        // Partition the array around a random element and
        // get the position of the pivot element in the sorted array
        int pos = randomPartition(arr, l, r);

        // If position is the same as k
        if (pos - l == k - 1)
            return arr[pos];
        if (pos - l > k - 1)  // If position is more, recur for the left subarray
            return kthSmallest(arr, l, pos - 1, k);

        // Else recur for the right subarray
        return kthSmallest(arr, pos + 1, r, k - pos + l - 1);
    }

    // If k is more than the number of elements in the array
    return INT_MAX;
}

void swap(int *a, int *b)
{
    int temp = *a;
    *a = *b;
    *b = temp;
}

// Standard partition process of QuickSort(). It considers the last
// element as pivot and moves all smaller elements to its left and
// greater elements to its right. This function is used by randomPartition()
int partition(int arr[], int l, int r)
{
    int x = arr[r], i = l;
    for (int j = l; j <= r - 1; j++)
    {
        if (arr[j] <= x)  // arr[j] is not bigger than the pivot, so move it to the left side
        {
            swap(&arr[i], &arr[j]);
            i++;
        }
    }
    swap(&arr[i], &arr[r]);  // swap the pivot into place
    return i;
}

// Picks a random pivot element between l and r and partitions
// arr[l..r] around the randomly picked element using partition()
int randomPartition(int arr[], int l, int r)
{
    int n = r - l + 1;
    int pivot = rand() % n;
    swap(&arr[l + pivot], &arr[r]);
    return partition(arr, l, r);
}

// Driver program to test the above methods
int main()
{
    int arr[] = {12, 3, 5, 7, 4, 19, 26};
    int n = sizeof(arr)/sizeof(arr[0]), k = 3;
    cout << "K'th smallest element is " << kthSmallest(arr, 0, n - 1, k);
    return 0;
}
The worst case time complexity of the above solution is still O(n^2): in the worst case, the randomized function may always pick a corner element. The expected time complexity of the above randomized quickselect is Θ(n).
Have a priority queue created.
Insert all the elements into the heap.
Call poll() k times.
public static int getKthLargestElements(int[] arr, int k)
{
    PriorityQueue<Integer> pq = new PriorityQueue<>((x, y) -> (y - x));  // max-heap
    // insert all the elements into the heap
    for (int ele : arr)
        pq.offer(ele);
    // call poll() k times
    int result = 0;
    for (int i = 0; i < k; i++)
        result = pq.poll();
    return result;
}
This is an implementation in Javascript.
If you release the constraint that you cannot modify the array, you can prevent the use of extra memory using two indexes to identify the "current partition" (in classic quicksort style - http://www.nczonline.net/blog/2012/11/27/computer-science-in-javascript-quicksort/).
function kthMax(a, k){
    var size = a.length;
    var pivot = a[ parseInt(Math.random()*size) ]; // another choice could have been (size / 2)

    // create an array with all elements lower than the pivot and an array with all elements higher than the pivot
    var i, lowerArray = [], upperArray = [];
    for (i = 0; i < size; i++){
        var current = a[i];
        if (current < pivot) {
            lowerArray.push(current);
        } else if (current > pivot) {
            upperArray.push(current);
        }
    }

    // which one should I continue with?
    if(k <= upperArray.length) {
        // upper
        return kthMax(upperArray, k);
    } else {
        var newK = k - (size - lowerArray.length);
        if (newK > 0) {
            // lower
            return kthMax(lowerArray, newK);
        } else {
            // none ... it's the current pivot!
            return pivot;
        }
    }
}
If you want to test how it performs, you can use this variation:
function kthMax (a, k, logging) {
    var comparisonCount = 0; // number of comparisons that the algorithm uses
    var memoryCount = 0;     // number of integers in memory that the algorithm uses
    var _log = logging;

    if(k < 0 || k >= a.length) {
        if (_log) console.log ("k is out of range");
        return false;
    }

    function _kthmax(a, k){
        var size = a.length;
        var pivot = a[parseInt(Math.random()*size)];
        if(_log) console.log("Inputs:", a, "size="+size, "k="+k, "pivot="+pivot);

        // This should never happen. Just a nice check in this exercise
        // if you are playing with the code to avoid never-ending recursion
        if(typeof pivot === "undefined") {
            if (_log) console.log ("Ops...");
            return false;
        }

        var i, lowerArray = [], upperArray = [];
        for (i = 0; i < size; i++){
            var current = a[i];
            if (current < pivot) {
                comparisonCount += 1;
                memoryCount++;
                lowerArray.push(current);
            } else if (current > pivot) {
                comparisonCount += 2;
                memoryCount++;
                upperArray.push(current);
            }
        }
        if(_log) console.log("Pivoting:", lowerArray, "*"+pivot+"*", upperArray);

        if(k <= upperArray.length) {
            comparisonCount += 1;
            return _kthmax(upperArray, k);
        } else if (k > size - lowerArray.length) {
            comparisonCount += 2;
            return _kthmax(lowerArray, k - (size - lowerArray.length));
        } else {
            comparisonCount += 2;
            return pivot;
        }
        /*
         * BTW, this is the logic for kthMin if we want to implement that... ;-)
         *
        if(k <= lowerArray.length) {
            return kthMin(lowerArray, k);
        } else if (k > size - upperArray.length) {
            return kthMin(upperArray, k - (size - upperArray.length));
        } else
            return pivot;
        */
    }

    var result = _kthmax(a, k);
    return {result: result, iterations: comparisonCount, memory: memoryCount};
}
The rest of the code is just to create some playground:
function getRandomArray (n){
    var ar = [];
    for (var i = 0, l = n; i < l; i++) {
        ar.push(Math.round(Math.random() * l));
    }
    return ar;
}

// create a random array of 50 numbers
var ar = getRandomArray (50);
Now, run your tests a few times.
Because of the Math.random() it will produce different results every time:
kthMax(ar, 2, true);
kthMax(ar, 2);
kthMax(ar, 2);
kthMax(ar, 2);
kthMax(ar, 2);
kthMax(ar, 2);
kthMax(ar, 34, true);
kthMax(ar, 34);
kthMax(ar, 34);
kthMax(ar, 34);
kthMax(ar, 34);
kthMax(ar, 34);
If you test it a few times, you can see even empirically that the number of iterations is, on average, O(n) ~= constant * n, and that the value of k does not affect the algorithm.
I came up with this algorithm, and it seems to be O(n) for constant k:
Let's say k=3 and we want to find the 3rd largest item in the array. I would create three variables and compare each item of the array with the minimum of these three variables. If the array item is greater than our minimum, we replace the min variable with the item value. We continue the same thing until the end of the array. The minimum of our three variables is the 3rd largest item in the array.
define variables a=0, b=0, c=0
iterate through the array items
    find the minimum of a, b, c
    if item > min then replace the min variable with the item value
continue until the end of the array
the minimum of a, b, c is our answer
And, to find the Kth largest item we need K variables.
Example: (k=3)
[1,2,4,1,7,3,9,5,6,2,9,8]
Final variable values:
a=8 (answer)
b=9
c=9
Can someone please review this and let me know what I am missing?
Here is the implementation of the algorithm eladv suggested (I also put here the implementation with a random pivot):
public class Median {

    public static void main(String[] s) {
        int[] test = {4,18,20,3,7,13,5,8,2,1,15,17,25,30,16};
        System.out.println(selectK(test, 8));

        /*
        int n = 100000000;
        int[] test = new int[n];
        for(int i=0; i<test.length; i++)
            test[i] = (int)(Math.random()*test.length);

        long start = System.currentTimeMillis();
        random_selectK(test, test.length/2);
        long end = System.currentTimeMillis();
        System.out.println(end - start);
        */
    }

    public static int random_selectK(int[] a, int k) {
        if(a.length <= 1)
            return a[0];

        int r = (int)(Math.random() * a.length);
        int p = a[r];

        int small = 0, equal = 0, big = 0;
        for(int i=0; i<a.length; i++) {
            if(a[i] < p) small++;
            else if(a[i] == p) equal++;
            else if(a[i] > p) big++;
        }

        if(k <= small) {
            int[] temp = new int[small];
            for(int i=0, j=0; i<a.length; i++)
                if(a[i] < p)
                    temp[j++] = a[i];
            return random_selectK(temp, k);
        }
        else if (k <= small+equal)
            return p;
        else {
            int[] temp = new int[big];
            for(int i=0, j=0; i<a.length; i++)
                if(a[i] > p)
                    temp[j++] = a[i];
            return random_selectK(temp, k-small-equal);
        }
    }

    public static int selectK(int[] a, int k) {
        if(a.length <= 5) {
            Arrays.sort(a);
            return a[k-1];
        }

        int p = median_of_medians(a);

        int small = 0, equal = 0, big = 0;
        for(int i=0; i<a.length; i++) {
            if(a[i] < p) small++;
            else if(a[i] == p) equal++;
            else if(a[i] > p) big++;
        }

        if(k <= small) {
            int[] temp = new int[small];
            for(int i=0, j=0; i<a.length; i++)
                if(a[i] < p)
                    temp[j++] = a[i];
            return selectK(temp, k);
        }
        else if (k <= small+equal)
            return p;
        else {
            int[] temp = new int[big];
            for(int i=0, j=0; i<a.length; i++)
                if(a[i] > p)
                    temp[j++] = a[i];
            return selectK(temp, k-small-equal);
        }
    }

    private static int median_of_medians(int[] a) {
        int[] b = new int[a.length/5];
        int[] temp = new int[5];
        for(int i=0; i<b.length; i++) {
            for(int j=0; j<5; j++)
                temp[j] = a[5*i + j];
            Arrays.sort(temp);
            b[i] = temp[2];
        }
        return selectK(b, b.length/2 + 1);
    }
}
It is similar to the quicksort strategy, where we pick an arbitrary pivot and bring the smaller elements to its left and the larger ones to its right.
public static int kthElInUnsortedList(List<int> list, int k)
{
    if (list.Count == 1)
        return list[0];

    List<int> left = new List<int>();
    List<int> right = new List<int>();
    int pivotIndex = list.Count / 2;
    int pivot = list[pivotIndex]; //arbitrary

    for (int i = 0; i < list.Count; i++)
    {
        if (i == pivotIndex)    // skip the pivot itself
            continue;
        int currentEl = list[i];
        if (currentEl < pivot)
            left.Add(currentEl);
        else
            right.Add(currentEl);
    }

    if (k == left.Count + 1)
        return pivot;

    if (left.Count < k)
        return kthElInUnsortedList(right, k - left.Count - 1);
    else
        return kthElInUnsortedList(left, k);
}
Go to the end of this link:
http://www.geeksforgeeks.org/kth-smallestlargest-element-unsorted-array-set-3-worst-case-linear-time/
You can find the kth smallest element in O(n log(max_value - min_value)) time and constant space, if the array holds only integers.
The approach is to do a binary search on the range of array values: if we have a min_value and a max_value, both in integer range, we can do a binary search on that range.
We can write a comparator function which tells us whether any value is the kth-smallest, smaller than the kth-smallest, or bigger than the kth-smallest.
Do the binary search until you reach the kth-smallest number.
Here is the code for that
class Solution:
    def _iskthsmallest(self, A, val, k):
        less_count, equal_count = 0, 0
        for i in range(len(A)):
            if A[i] == val: equal_count += 1
            if A[i] < val: less_count += 1
        if less_count >= k: return 1
        if less_count + equal_count < k: return -1
        return 0

    def kthsmallest_binary(self, A, min_val, max_val, k):
        if min_val == max_val:
            return min_val
        mid = (min_val + max_val)/2
        iskthsmallest = self._iskthsmallest(A, mid, k)
        if iskthsmallest == 0: return mid
        if iskthsmallest > 0: return self.kthsmallest_binary(A, min_val, mid, k)
        return self.kthsmallest_binary(A, mid+1, max_val, k)

    # @param A : tuple of integers
    # @param k : integer
    # @return an integer
    def kthsmallest(self, A, k):
        if not A: return 0
        if k > len(A): return 0
        min_val, max_val = min(A), max(A)
        return self.kthsmallest_binary(A, min_val, max_val, k)
What I would do is this:
initialize empty doubly linked list l
for each element e in array
    if e larger than head(l)
        make e the new head of l
        if size(l) > k
            remove last element from l
the last element of l should now be the kth largest element
You can simply store pointers to the first and last element in the linked list. They only change when updates to the list are made.
Update:
initialize empty sorted tree l
for each element e in array
    if e between head(l) and tail(l)
        insert e into l // O(log k)
        if size(l) > k
            remove last element from l
the last element of l should now be the kth largest element
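A hedged Java sketch of this updated idea, using a TreeMap as the sorted structure of size k (so O(n log k) overall; the method name is illustrative and the array is assumed to have at least k elements):

import java.util.TreeMap;

static int kthLargestByTree(int[] a, int k) {
    TreeMap<Integer, Integer> tree = new TreeMap<>();  // value -> multiplicity, smallest key first
    int size = 0;
    for (int v : a) {
        tree.merge(v, 1, Integer::sum);                // insert e into l
        if (++size > k) {                              // size(l) > k: evict the current minimum
            int min = tree.firstKey();
            if (tree.merge(min, -1, Integer::sum) == 0) tree.remove(min);
            size--;
        }
    }
    return tree.firstKey();                            // the smallest of the k largest
}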
First we can build a BST from the unsorted array, which takes O(n log n) time, and from a size-augmented BST we can find the kth smallest element in O(log n); overall that counts to O(n log n).
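The O(log n) query relies on a size-augmented tree. Here is a hedged, unbalanced Java sketch of that idea (a balanced variant would be needed to guarantee the bounds; all names are illustrative):

// BST where each node stores its subtree size, so the kth smallest is found
// by walking down from the root: O(height) per query.
class OrderStatBst {
    private static class Node {
        int val, size = 1;   // size = number of nodes in this subtree
        Node left, right;
        Node(int v) { val = v; }
    }
    private Node root;

    void insert(int v) { root = insert(root, v); }

    private Node insert(Node n, int v) {
        if (n == null) return new Node(v);
        if (v < n.val) n.left = insert(n.left, v);
        else n.right = insert(n.right, v);
        n.size++;
        return n;
    }

    int kthSmallest(int k) { // 1-based rank
        Node n = root;
        while (n != null) {
            int leftSize = (n.left == null) ? 0 : n.left.size;
            if (k == leftSize + 1) return n.val;
            if (k <= leftSize) n = n.left;            // answer lies in the left subtree
            else { k -= leftSize + 1; n = n.right; }  // skip the left subtree and this node
        }
        throw new IllegalArgumentException("k out of range");
    }
}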
