Algorithm's Running Time Optimization

I'm trying to find a way to optimize my algorithm such that the running time is O(n²) (Big O Notation).
The input is an array with n elements, of only positive and negative integers. We can assume that the array is already sorted.
I have to determine: for each r (element of the array), whether r = s + t, where s and t are also elements of the array, and can be the same (s == t), or also zero.
I tried to reduce the number of elements I have to check by checking if the current number is positive or negative, but the running time is still too long. The problem is that I'm using 3 while loops which already means a running time of O(n³) for the worst case.
Here is my code:
public static void Checker(int[] array) {
    List<Integer> testlist = new ArrayList<Integer>();
    int i = 0;
    while (i < array.length) {
        int current = array[i];
        if (attached(current, array)) {
            testlist.add(current);
        }
        i++;
    }
}
public static boolean attached(int current, int[] array) {
    boolean result = false;
    int i = 0;
    while (i < array.length && !result) {
        int number1 = array[i];
        int j = 0;
        while (j < array.length && !result) {
            int number2 = array[j];
            if (number1 + number2 == current) {
                result = true;
            }
            j++;
        }
        i++;
    }
    return result;
}

You can start by sorting the array in O(n log n) (if it isn't already sorted); then, for each element in the array, you can check in O(n) whether there are two elements whose sum equals that number, which is O(n²) overall.
The code is in C#:
public static bool Solve(int[] arr)
{
    Array.Sort(arr); // If not already sorted
    foreach (var num in arr)
        if (!FindTwoThatSumN(arr, num))
            return false;
    return true;
}
public static bool FindTwoThatSumN(int[] arr, int num)
{
    int min = 0;
    int max = arr.Length - 1;
    while (true)
    {
        if (min == max) break;
        int sum = arr[min] + arr[max];
        if (sum < num) min++;
        if (sum > num) max--;
        if (sum == num) return true;
    }
    return false;
}
The idea to check whether two numbers in an array (which must be sorted) sum to a specific value is to start from the minimum (min = 0) and the maximum (max = arr.Length - 1), then in each iteration:
If the sum is lower than the number, increase the min index.
If the sum is greater than the number, decrease the max index.
If the sum is equal to the number, you have found a solution.
If the min index reaches max, there is no solution.
You can refer to this question/answers for more details and proof.
Time complexity for overall solution is O(n²):
Sort the array: O(nlogn).
Iterate over the sorted array: O(n).
Find two numbers that sum the value: O(n).
So, it is O(n²) due to the n calls to FindTwoThatSumN.
If you want, you can pass the index instead of the number to the FindTwoThatSumN method so that, with an additional check, you avoid using the number itself as part of the solution.
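For illustration, here is a minimal Java sketch of that variant (my own code, not the answer's C#): the caller passes the index of the current element, and the scan skips that index so the element is not paired with itself.

static boolean findTwoThatSumN(int[] arr, int skipIndex) {
    int num = arr[skipIndex];
    int min = 0, max = arr.length - 1;
    while (min < max) {
        if (min == skipIndex) { min++; continue; } // the additional check
        if (max == skipIndex) { max--; continue; }
        int sum = arr[min] + arr[max];
        if (sum == num) return true;
        if (sum < num) min++;
        else max--;
    }
    return false;
}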

Calculate all the possible sums of s + t and put the results in a set => O(n²).
Iterate over each r and check whether there is a sum that matches r => O(n), since set.contains runs in constant time.
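A minimal Java sketch of this set-based idea (my code; names are illustrative):

import java.util.HashSet;
import java.util.Set;

static void reportReachable(int[] array) {
    Set<Integer> sums = new HashSet<>();
    for (int s : array)        // O(n²): all pairwise sums, s == t allowed
        for (int t : array)
            sums.add(s + t);
    for (int r : array)        // O(n): each lookup is O(1) on average
        System.out.println(r + " = s + t ? " + sums.contains(r));
}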


longest nondecreasing subsequence in O(nlgn)

I have the following algorithm which works well
I tried explaining it for myself at http://nemo.la/?p=943, and it is explained at http://www.geeksforgeeks.org/longest-monotonically-increasing-subsequence-size-n-log-n/ and on Stack Overflow as well.
I want to modify it to produce the longest nondecreasing (non-strictly increasing) subsequence.
for the sequence 30 20 20 10 10 10 10
the answer should be 4: "10 10 10 10"
But with the nlgn version of the algorithm it isn't working. Initializing s to contain the first element "30" and starting at the second element = 20, this is what happens:
The first step: 20 is not greater than or equal to 30. We find the smallest element greater than 20. The new s becomes "20"
The second step: 20 is greater than or equal to 20. We extend the sequence and s now contains "20 20"
The third step: 10 is not greater than or equal to 20. We find the smallest element greater than 10 which is "20". The new s becomes "10 20"
and s will never grow after that and the algorithm will return 2 instead of 4
int height[100];
int s[100];
int binary_search(int first, int last, int x) {
    int mid;
    while (first < last) {
        mid = (first + last) / 2;
        if (height[s[mid]] == x)
            return mid;
        else if (height[s[mid]] >= x)
            last = mid;
        else
            first = mid + 1;
    }
    return first; /* or last */
}
int longest_increasing_subsequence_nlgn(int n) {
    int i, k, index;
    memset(s, 0, sizeof(s));
    index = 1;
    s[1] = 0; /* s[i] = 0 is the index of the element that ends an increasing sequence of length i = 1 */
    for (i = 1; i < n; i++) {
        if (height[i] >= height[s[index]]) { /* larger element, extend the sequence */
            index++;      /* increase the length of my subsequence */
            s[index] = i; /* the current element ends my subsequence */
        }
        /* else find the smallest element in s >= a[i], basically insert a[i] in s such that s stays sorted */
        else {
            k = binary_search(1, index, height[i]);
            if (height[s[k]] >= height[i]) { /* if truly >= */
                s[k] = i;
            }
        }
    }
    return index;
}
To find the longest non-strictly increasing subsequence, change these conditions:
If A[i] is smallest among all end candidates of active lists, we will start new active list of length 1.
If A[i] is largest among all end candidates of active lists, we will clone the largest active list, and extend it by A[i].
If A[i] is in between, we will find a list with largest end element that is smaller than A[i]. Clone and extend this list by A[i]. We will discard all other lists of same length as that of this modified list.
to:
If A[i] is smaller than the smallest of all end candidates of active lists, we will start new active list of length 1.
If A[i] is largest among all end candidates of active lists, we will clone the largest active list, and extend it by A[i].
If A[i] is in between, we will find a list with largest end element that is smaller than or equal to A[i]. Clone and extend this list by A[i]. We will discard all other lists of same length as that of this modified list.
The fourth step for your example sequence should be:
10 is not less than 10 (the smallest element). We find the largest element that is smaller than or equal to 10 (that would be s[0]==10). Clone and extend this list by 10. Discard the existing list of length 2. The new s becomes {10 10}.
Your code nearly works, except for a problem in your binary_search() function: since you want the longest non-decreasing sequence, it should return the index of the first element that is greater than the target element x. Modify it as follows and it'll be OK.
If you use C++, std::lower_bound() and std::upper_bound() will help you avoid this confusing problem. By the way, the if statement if (height[s[k]] >= height[i]) is superfluous.
int binary_search(int first, int last, int x) {
    while (last > first)
    {
        int mid = first + (last - first) / 2;
        if (height[s[mid]] > x)
            last = mid;
        else
            first = mid + 1;
    }
    return first; /* or last */
}
Just apply the longest increasing subsequence algorithm to the ordered pairs (A[i], i), using a lexicographic compare.
My Java version:
public static int longestNondecreasingSubsequenceLength(List<Integer> A) {
    int n = A.size();
    int[] dp = new int[n];
    int max = 0;
    for (int i = 0; i < n; i++) {
        int el = A.get(i);
        int idx = Arrays.binarySearch(dp, 0, max, el);
        if (idx < 0) {
            idx = -(idx + 1);
        }
        if (dp[idx] == el) { // duplicate found, let's find the last one
            idx = Arrays.binarySearch(dp, 0, max, el + 1);
            if (idx < 0) {
                idx = -(idx + 1);
            }
        }
        dp[idx] = el;
        if (idx == max) {
            max++;
        }
    }
    return max;
}
A completely different solution to this problem is the following. Make a copy of the array and sort it. Then, compute the minimum nonzero difference between any two elements of the array (this will be the minimum nonzero difference between two adjacent array elements) and call it δ. This step takes time O(n log n).
The key observation is that if you add 0 to element 0 of the original array, δ/n to the second element of the original array, 2δ/n to the third element of the array, etc., then any nondecreasing sequence in the original array becomes a strictly increasing sequence in the new array and vice-versa. Therefore, you can transform the array this way, then run a standard longest increasing subsequence solver, which runs in time O(n log n). The net result of this process is an O(n log n) algorithm for finding the longest nondecreasing subsequence.
For example, consider 30, 20, 20, 10, 10, 10, 10. In this case δ = 10 and n = 7, so δ/n ≈ 1.43. The new array is then
30, 21.43, 22.86, 14.29, 15.71, 17.14, 18.57
Here, the LIS is 14.29, 15.71, 17.14, 18.57, which maps back to 10, 10, 10, 10 in the original array.
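A Java sketch of this transform (my code, not the answerer's; it uses an ordinary strict-LIS patience routine, and assumes the values are small enough that the fractional offsets survive double precision):

import java.util.Arrays;

static int longestNondecreasing(int[] a) {
    int n = a.length;
    if (n == 0) return 0;
    int[] sorted = a.clone();
    Arrays.sort(sorted);
    double delta = Double.POSITIVE_INFINITY;   // minimum nonzero adjacent difference
    for (int i = 1; i < n; i++)
        if (sorted[i] != sorted[i - 1])
            delta = Math.min(delta, sorted[i] - sorted[i - 1]);
    if (Double.isInfinite(delta)) return n;    // all elements equal
    double[] b = new double[n];
    for (int i = 0; i < n; i++) b[i] = a[i] + i * delta / n;   // the transform
    double[] tails = new double[n];            // patience: smallest tail of each length
    int len = 0;
    for (double x : b) {
        int lo = 0, hi = len;
        while (lo < hi) {                      // find first tail >= x (strict LIS)
            int mid = (lo + hi) >>> 1;
            if (tails[mid] < x) lo = mid + 1; else hi = mid;
        }
        tails[lo] = x;
        if (lo == len) len++;
    }
    return len;
}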
Hope this helps!
I have my simple solution for the longest non-decreasing subsequence using the upper_bound function in C++.
Time complexity: O(n log n)
int longest(vector<long long> a) {
    vector<long long> s;
    s.push_back(a[0]);
    int n = a.size();
    for (int i = 1; i < n; i++) {
        int idx = upper_bound(s.begin(), s.end(), a[i]) - s.begin();
        int m = s.size();
        if (m > idx) {
            s[idx] = a[i];
        } else {
            s.push_back(a[i]);
        }
    }
    return s.size();
}
If you know the algorithm for LIS, then changing inequalities in the code, gives the Longest Non-Decreasing subsequence.
Code for LIS:
public int ceilIndex(int[] a, int n, int[] t, int ele) {
    int l = -1, r = n + 1;
    while (r - l > 1) {
        int mid = l + (r - l) / 2;
        if (a[t[mid]] < ele) l = mid;
        else r = mid;
    }
    return r;
}
public int lengthOfLIS(int[] a) {
    int n = a.length;
    int[] index = new int[n];
    int len = 0;
    index[len] = 0;
    int[] reversePath = new int[n];
    for (int i = 0; i < n; i++) reversePath[i] = -1;
    for (int i = 1; i < n; i++) {
        if (a[index[0]] >= a[i]) {
            index[0] = i;
            reversePath[i] = -1;
        } else if (a[index[len]] < a[i]) {
            reversePath[i] = index[len];
            len++;
            index[len] = i;
        } else {
            int idx = ceilIndex(a, len, index, a[i]);
            reversePath[i] = index[idx - 1];
            index[idx] = i;
        }
    }
    for (int i = 0; i < n; i++) System.out.print(reversePath[i] + " ");
    System.out.println();
    // printing the LIS in reverse fashion
    // we iterate the indexes in reverse
    int idx = index[len];
    while (idx != -1) {
        System.out.print(a[idx] + " ");
        idx = reversePath[idx];
    }
    return len + 1;
}
Code for Longest Non-Decreasing subsequence:
public int ceilIndex(int[] a, int n, int[] t, int ele) {
    int l = -1, r = n + 1;
    while (r - l > 1) {
        int mid = l + (r - l) / 2;
        if (a[t[mid]] <= ele) l = mid;
        else r = mid;
    }
    return r;
}
public int lengthOfLongestNonDecreasingSubsequence(int[] a) {
    int n = a.length;
    int[] index = new int[n];
    int len = 0;
    index[len] = 0;
    int[] reversePath = new int[n];
    for (int i = 0; i < n; i++) reversePath[i] = -1;
    for (int i = 1; i < n; i++) {
        if (a[index[0]] > a[i]) {
            index[0] = i;
            reversePath[i] = -1;
        } else if (a[index[len]] <= a[i]) {
            reversePath[i] = index[len];
            len++;
            index[len] = i;
        } else {
            int idx = ceilIndex(a, len, index, a[i]);
            reversePath[i] = index[idx - 1];
            index[idx] = i;
        }
    }
    for (int i = 0; i < n; i++) System.out.print(reversePath[i] + " ");
    System.out.println();
    // printing the subsequence in reverse fashion
    // we iterate the indexes in reverse
    int idx = index[len];
    while (idx != -1) {
        System.out.print(a[idx] + " ");
        idx = reversePath[idx];
    }
    return len + 1;
}

Linear time algorithm for 2-SUM

Given an integer x and a sorted array a of N distinct integers, design a linear-time algorithm to determine if there exist two distinct indices i and j such that a[i] + a[j] == x
This is a type of subset sum problem
Here is my solution. I don't know if it was known earlier or not. Imagine a 3D plot of the function of two variables i and j:
sum(i, j) = a[i] + a[j]
For every i there is a j such that a[i] + a[j] is closest to x. All these (i, j) pairs form the closest-to-x line. We just need to walk along this line and look for a[i] + a[j] == x:
int i = 0;
int j = lower_bound(a.begin(), a.end(), x) - a.begin();
if (j == (int) a.size()) j--;  // clamp: start at the last element when every value is < x
while (j >= 0 && j < a.size() && i < a.size()) {
    int sum = a[i] + a[j];
    if (sum == x) {
        cout << "found: " << i << " " << j << endl;
        return;
    }
    if (sum > x) j--;
    else i++;
    if (i > j) break;
}
cout << " not found\n";
Complexity: O(n)
Think in terms of complements.
Iterate over the list and figure out, for each item, the complement needed to reach x. While iterating, check whether the complement of the current number is already in the hash set; if so, a pair is found. Otherwise add the current number to the set.
edit: and as I have some time, some pseudo'ish code.
boolean find(int[] array, int x) {
    HashSet<Integer> s = new HashSet<Integer>();
    for (int i = 0; i < array.length; i++) {
        if (s.contains(x - array[i])) { // a previously seen element completes the pair
            return true;
        }
        s.add(array[i]);
    }
    return false;
}
Given that the array is sorted (WLOG in ascending order), we can do the following:
Algorithm A_1:
We are given (a_1, ..., a_n, m), with a_1 < ... < a_n.
Put a pointer at the top of the list and one at the bottom.
Compute the sum where both pointers are.
If the sum is greater than m, move the top pointer down.
If the sum is less than m, move the lower pointer up.
If the pointers meet (here we assume each number can be employed only once), report unsat.
Otherwise (a matching sum will be found), report sat.
It is clear that this is O(n) since the maximum number of sums computed is exactly n. The proof of correctness is left as an exercise.
This is merely a subroutine of the Horowitz and Sahni (1974) algorithm for SUBSET-SUM. (However, note that almost all general purpose SS algorithms contain such a routine, Schroeppel, Shamir (1981), Howgrave-Graham_Joux (2010), Becker-Joux (2011).)
If we were given an unordered list, implementing this algorithm would be O(nlogn) since we could sort the list using Mergesort, then apply A_1.
First pass: search for the first value that is > ceil(x/2). Let's call this value L.
From the index of L, search backwards till you find the other operand that matches the sum.
It is 2*n ~ O(n).
This we can extend to binary search.
Search for an element using binary search such that we find L, where L is min(elements in a > ceil(x/2)).
Do the same for R, but now with L as the max index of searchable elements in the array.
This approach is 2*log(n).
Here's a Python version using a dictionary data structure and the number complement. This has linear running time (order of N: O(N)):
def twoSum(N, x):
    seen = {}
    for i in range(len(N)):
        complement = x - N[i]
        if complement in seen:
            return True
        seen[N[i]] = i
    return False

# Test
print(twoSum([2, 7, 11, 15], 9))  # True
print(twoSum([2, 7, 11, 15], 3))  # False
Iterate over the array and save the qualified numbers and their indices into the map. The time complexity of this algorithm is O(n).
vector<int> twoSum(vector<int> &numbers, int target) {
    map<int, int> summap;
    vector<int> result;
    for (int i = 0; i < numbers.size(); i++) {
        summap[numbers[i]] = i;
    }
    for (int i = 0; i < numbers.size(); i++) {
        int searched = target - numbers[i];
        if (summap.find(searched) != summap.end() && summap[searched] != i) { // don't reuse the same index
            result.push_back(i + 1);
            result.push_back(summap[searched] + 1);
            break;
        }
    }
    return result;
}
I would just add the difference to a HashSet<T> like this:
public static bool Find(int[] array, int toReach)
{
    HashSet<int> hashSet = new HashSet<int>();
    foreach (int current in array)
    {
        if (hashSet.Contains(current))
        {
            return true;
        }
        hashSet.Add(toReach - current);
    }
    return false;
}
Note: The code is mine but the test file was not. Also, this idea for the hash function comes from various readings on the net.
An implementation in Scala. It uses a hashMap and a custom (yet simple) mapping for the values. I agree that it does not make use of the sorted nature of the initial array.
The hash function
I fix the bucket size by dividing each value by 10000. That number could vary, depending on the size you want for the buckets, which can be made optimal depending on the input range.
So, for example, key 1 is responsible for all the integers from 10000 to 19999.
Impact on search scope
What that means is that for a current value n, for which you're looking to find a complement c such that n + c = x (x being the element you're trying to find a 2-SUM of), there are only 3 possible buckets in which the complement can be:
-key
-key + 1
-key - 1
Let's say that your numbers are in a file of the following form:
0
1
10
10
-10
10000
-10000
10001
9999
-10001
-9999
10000
5000
5000
-5000
-1
1000
2000
-1000
-2000
Here's the implementation in Scala
import scala.collection.mutable
import scala.io.Source

object TwoSumRed {
  val usage = """
    Usage: scala TwoSumRed.scala [filename]
  """

  def main(args: Array[String]) {
    val carte = createMap(args) match {
      case None => return
      case Some(m) => m
    }
    var t: Int = 1
    carte.foreach {
      case (bucket, values) => {
        var toCheck: Array[Long] = Array[Long]()
        if (carte.contains(-bucket)) {
          toCheck = toCheck ++: carte(-bucket)
        }
        if (carte.contains(-bucket - 1)) {
          toCheck = toCheck ++: carte(-bucket - 1)
        }
        if (carte.contains(-bucket + 1)) {
          toCheck = toCheck ++: carte(-bucket + 1)
        }
        values.foreach { v =>
          toCheck.foreach { c =>
            if ((c + v) == t) {
              println(s"$c and $v forms a 2-sum for $t")
              return
            }
          }
        }
      }
    }
  }

  def createMap(args: Array[String]): Option[mutable.HashMap[Int, Array[Long]]] = {
    var carte: mutable.HashMap[Int, Array[Long]] = mutable.HashMap[Int, Array[Long]]()
    if (args.length == 1) {
      val filename = args.toList(0)
      val lines: List[Long] = Source.fromFile(filename).getLines().map(_.toLong).toList
      lines.foreach { l =>
        val idx: Int = math.floor(l / 10000).toInt
        if (carte.contains(idx)) {
          carte(idx) = carte(idx) :+ l
        } else {
          carte += (idx -> Array[Long](l))
        }
      }
      Some(carte)
    } else {
      println(usage)
      None
    }
  }
}
// Build b as the mirror of a around x; b is sorted ascending because a is.
vector<int> b(N);
for (int i = 0; i < N; i++)
{
    b[i] = x - a[N - 1 - i];
}
// Intersect the two sorted sequences: a common value means a[i] == x - a[k].
for (int i = 0, j = 0; i < N && j < N;)
    if (a[i] == b[j])
    {
        cout << "found";
        return;
    }
    else if (a[i] < b[j])
        i++;
    else
        j++;
cout << "not found";
Here is a linear time complexity solution: O(n) time, O(1) space.
public void twoSum(int[] arr) {
    if (arr.length < 2) return;
    int max = arr[0] + arr[1];
    int bigger = Math.max(arr[0], arr[1]);
    int smaller = Math.min(arr[0], arr[1]);
    int biggerIndex = arr[0] >= arr[1] ? 0 : 1; // index of the bigger of the first two
    int smallerIndex = 1 - biggerIndex;
    for (int i = 2; i < arr.length; i++) {
        if (arr[i] + bigger <= max) {
            continue;
        } else {
            if (arr[i] > bigger) {
                smaller = bigger;
                smallerIndex = biggerIndex; // keep the index in sync with the value
                bigger = arr[i];
                biggerIndex = i;
            } else if (arr[i] > smaller) {
                smaller = arr[i];
                smallerIndex = i;
            }
            max = bigger + smaller;
        }
    }
    System.out.println("Biggest sum is: " + max + " with indices [" + biggerIndex + "," + smallerIndex + "]");
}
Solution
We need an array to store the indices.
Check if the array is empty or contains less than 2 elements.
Define the start and the end pointers of the array.
Iterate till the condition is met.
Check if the sum is equal to the target. If yes, get the indices.
If the condition is not met, then traverse left or right based on the sum value:
Traverse to the right.
Traverse to the left.
For more info: http://www.prathapkudupublog.com/2017/05/two-sum-ii-input-array-is-sorted.html
Credit to leonid
His solution in Java, if you want to give it a shot.
I removed the return, so if the array is sorted but DOES allow duplicates, it still gives all the pairs:
static boolean cpp(int[] a, int x) {
    int i = 0;
    int j = a.length - 1;
    while (j >= 0 && j < a.length && i < a.length) {
        int sum = a[i] + a[j];
        if (sum == x) {
            System.out.printf("found %s, %s \n", i, j);
            // return true;
        }
        if (sum > x) j--;
        else i++;
        if (i > j) break;
    }
    System.out.println("not found");
    return false;
}
The classic linear-time two-pointer solution does not require hashing, so it can also solve related problems such as approximate sum (find the closest pair sum to a target).
First, a simple n log n solution: walk through array elements a[i], and use binary search to find the best a[j].
To get rid of the log factor, use the following observation: as the list is sorted, iterating through indices i gives a[i] is increasing, so any corresponding a[j] is decreasing in value and in index j. This gives the two-pointer solution: start with indices lo = 0, hi = N-1 (pointing to a[0] and a[N-1]). For a[0], find the best a[hi] by decreasing hi. Then increment lo and for each a[lo], decrease hi until a[lo] + a[hi] is the best. The algorithm can stop when it reaches lo == hi.
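A Java sketch of the approximate-sum variant mentioned above (my code, assuming a sorted array with at least two elements): the same lo/hi walk, but tracking the pair whose sum is closest to the target.

static int[] closestPairSum(int[] a, int target) {
    int lo = 0, hi = a.length - 1;
    int bestLo = lo, bestHi = hi;
    while (lo < hi) {
        int sum = a[lo] + a[hi];
        if (Math.abs(sum - target) < Math.abs(a[bestLo] + a[bestHi] - target)) {
            bestLo = lo;
            bestHi = hi;
        }
        if (sum == target) break;  // an exact match cannot be beaten
        if (sum < target) lo++;
        else hi--;
    }
    return new int[]{bestLo, bestHi};  // indices of the closest pair
}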

How to write an algorithm to check if the sum of any two numbers in an array/list matches a given number?

How can I write an algorithm to check if the sum of any two numbers in an array/list matches a given number
with a complexity of nlogn?
I'm sure there's a better way, but here's an idea:
Sort array
For every element e in the array, binary search for the complement (sum - e)
Both these operations are O(n log n).
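A minimal Java sketch of this idea (my code):

import java.util.Arrays;

static boolean hasPairSumSorted(int[] a, int sum) {
    int[] b = a.clone();
    Arrays.sort(b);                          // O(n log n)
    for (int i = 0; i < b.length; i++) {     // n binary searches: O(n log n)
        int j = Arrays.binarySearch(b, sum - b[i]);
        if (j >= 0 && j != i) return true;   // complement found at another index
    }
    return false;
}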
This can be done in O(n) using a hash table. Initialize the table with all numbers in the array, with number as the key, and frequency as the value. Walk through each number in the array, and see if (sum - number) exists in the table. If it does, you have a match. After you've iterated through all numbers in the array, you should have a list of all pairs that sum up to the desired number.
array = initial array
table = hash(array)
S = sum

for each n in array
    if table[S-n] exists
        print "found numbers" n, S-n
The case where n and table[S-n] refer to the same number twice can be dealt with an extra check, but the complexity remains O(n).
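A Java sketch of that extra check (my code): with frequencies stored, a number is paired with itself only if it occurs at least twice.

import java.util.HashMap;
import java.util.Map;

static boolean hasPairSum(int[] array, int S) {
    Map<Integer, Integer> table = new HashMap<>();
    for (int n : array) table.merge(n, 1, Integer::sum);  // number -> frequency
    for (int n : array) {
        Integer count = table.get(S - n);
        if (count == null) continue;
        if (S - n != n || count > 1) return true;  // need two copies when S - n == n
    }
    return false;
}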
Use a hash table. Insert every number into your hash table, along with its index. Then, let S be your desired sum. For every number array[i] in your initial array, see if S - array[i] exists in your hash table with an index different than i.
Average case is O(n), worst case is O(n^2), so use the binary search solution if you're afraid of the worst case.
Let us say that we want to find two numbers in the array A that when added together equal N.
Sort the array.
Find the largest number in the array that is less than N/2. Set the index of that number as lower.
Initialize upper to be lower + 1.
Set sum = A[lower] + A[upper].
If sum == N, done.
If sum < N, increment upper.
If sum > N, decrement lower.
If either lower or upper is outside the array, done without any matches.
Go back to 4.
The sort can be done in O(n log n). The search is done in linear time.
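A Java sketch of these steps (my code). One caveat I handle up front: when two copies of N/2 exist, the pair sits exactly on the boundary, so it is checked before the walk, which otherwise starts strictly below N/2.

static boolean middleOutPairSum(int[] a, int N) {  // a must be sorted ascending
    int halves = 0, lower = -1;
    for (int i = 0; i < a.length; i++) {           // a binary search would also do
        if (2 * a[i] < N) lower = i;               // largest index with a[i] < N/2
        if (2 * a[i] == N) halves++;
    }
    if (halves >= 2) return true;                  // N/2 + N/2 == N
    int upper = lower + 1;
    while (lower >= 0 && upper < a.length) {
        int sum = a[lower] + a[upper];
        if (sum == N) return true;
        if (sum < N) upper++;
        else lower--;
    }
    return false;
}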
This is in Java; this even removes the possible duplicates. Runtime: O(n²)
private static int[] intArray = {15, 5, 10, 20, 25, 30};
private static int sum = 35;

private static void algorithm()
{
    Map<Integer, Integer> intMap = new Hashtable<Integer, Integer>();
    for (int i = 0; i < intArray.length; i++)
    {
        intMap.put(i, intArray[i]);
        if (intMap.containsValue(sum - intArray[i]))
            System.out.println("Found numbers : " + intArray[i] + " and " + (sum - intArray[i]));
    }
    System.out.println(intMap);
}
def sum_in(numbers, sum_):
    """whether any two numbers from `numbers` form `sum_`."""
    a = set(numbers)  # O(n)
    return any((sum_ - n) in a for n in a)  # O(n)
Example:
>>> sum_in([200, -10, -100], 100)
True
Here's a try in C. This isn't marked homework.
// Assumes a sorted integer array with no duplicates
void printMatching(int array[], int size, int sum)
{
    int i = 0, k = size - 1;
    int curSum;
    while (i < k)
    {
        curSum = array[i] + array[k];
        if (curSum == sum)
        {
            printf("Found match at indices %d, %d\n", i, k);
            i++; k--;
        }
        else if (curSum < sum)
        {
            i++;
        }
        else
        {
            k--;
        }
    }
}
Here is some test output using int a[] = { 3, 5, 6, 7, 8, 9, 13, 15, 17 };
Searching for 12..
Found match at indices 0, 5
Found match at indices 1, 3
Searching for 22...
Found match at indices 1, 8
Found match at indices 3, 7
Found match at indices 5, 6
Searching for 4..
Searching for 50..
The search is linear, so O(n). The sort that takes place behind the scenes is going to be O(n log n) if you use one of the good sorts.
Because of the math behind Big-O, the smaller term in additive terms will effectively drop out of your calculation, and you end up with O(n log n).
This one is O(n)
public static bool doesTargetExistsInList(int Target, int[] inputArray)
{
    if (inputArray != null && inputArray.Length > 0)
    {
        Hashtable inputHashTable = new Hashtable();
        // This hash table will have all the items in the input array and how many times they appeared
        Hashtable duplicateItems = new Hashtable();
        foreach (int i in inputArray)
        {
            if (!inputHashTable.ContainsKey(i))
            {
                inputHashTable.Add(i, Target - i);
                duplicateItems.Add(i, 1);
            }
            else
            {
                duplicateItems[i] = (int)duplicateItems[i] + 1;
            }
        }
        foreach (DictionaryEntry de in inputHashTable)
        {
            if ((int)de.Key == (int)de.Value)
            {
                if ((int)duplicateItems[de.Key] > 1)
                    return true;
            }
            else if (inputHashTable.ContainsKey(de.Value))
            {
                return true;
            }
        }
    }
    return false;
}
Here is an algorithm that runs in O(n) if the array is already sorted, or O(n log n) if it isn't. It takes cues from a lot of the other answers here. The code is in Java, but here is pseudocode as well, derived from a lot of the existing answers but optimized for duplicates:
Lucky guess if first and last elements are equal to target
Create a Map with current value and its occurrences
Create a visited Set which contains items we already saw; this optimizes for duplicates such that, say, with an input of (1,1,1,1,1,1,2) and target 4, we only ever compute for the 0th and the last element and not all the 1's in the array.
Use these variables to compute existence of target in the array;
set the currentValue to array[ith];
set newTarget to target - currentValue;
set expectedCount to 2 if currentValue equals newTarget or 1 otherwise
AND return true only if:
a. we never saw this integer before, AND
b. we have some value for the newTarget in the map we created, AND
c. the count for the newTarget is equal to or greater than the expectedCount.
OTHERWISE
repeat step 4 till we reach the end of the array, then return false.
Like I mentioned, the best possible use for a visited store is when we have duplicates; it would never help if none of the elements are duplicates.
Java Code at https://gist.github.com/eded5dbcee737390acb4
It depends: if you want only one sum, O(N) or O(N log N); if you want all sums, O(N²) or O(N² log N). In the latter case, better to use an FFT.
Step 1: Sort the array in O(n log n).
Step 2: Find two indices 0 <= i < j <= n-1 in a[0..n-1] such that a[i] + a[j] == k, where k is the given key.
int i = 0, j = n - 1;
while (i < j) {
    int sum = a[i] + a[j];
    if (sum == k) {
        print(i, j);
        break;  // stop (or advance both pointers); otherwise the loop never terminates
    } else if (sum < k)
        i++;
    else
        j--;
}
public void sumOfTwoQualToTargetSum()
{
    List<int> list = new List<int>();
    list.Add(1);
    list.Add(3);
    list.Add(5);
    list.Add(7);
    list.Add(9);
    int targetsum = 12;
    int[] arr = list.ToArray();
    for (int i = 0; i < arr.Length; i++)
    {
        for (int j = 0; j < arr.Length; j++)
        {
            if ((i != j) && ((arr[i] + arr[j]) == targetsum))
            {
                Console.Write("i =" + i);
                Console.WriteLine("j =" + j);
            }
        }
    }
}
Solved the question in Swift 4.0.
Solved in 3 different ways (with 2 different types of return value -> Boolean and indexes).
A) Time complexity => O(n log n), space complexity => O(n).
B) Time complexity => O(n²), space complexity => O(1).
C) Time complexity => O(n), space complexity => O(n).
Choose solution A, B or C depending on the trade-off.
//***********************Solution A*********************//
//This solution returns TRUE if any such two pairs exist in the array

func binarySearch(list: [Int], key: Int, start: Int, end: Int) -> Int? { //Helper Function
    if end < start {
        return -1
    } else {
        let midIndex = (start + end) / 2
        if list[midIndex] > key {
            return binarySearch(list: list, key: key, start: start, end: midIndex - 1)
        } else if list[midIndex] < key {
            return binarySearch(list: list, key: key, start: midIndex + 1, end: end)
        } else {
            return midIndex
        }
    }
}

func twoPairSum(sum: Int, inputArray: [Int]) -> Bool {
    //Do this only if the array isn't sorted!
    let sortedArray = inputArray.sorted()
    for (currentIndex, value) in sortedArray.enumerated() {
        if let indexReturned = binarySearch(list: sortedArray, key: sum - value, start: 0, end: sortedArray.count - 1) {
            if indexReturned != -1 && (indexReturned != currentIndex) {
                return true
            }
        }
    }
    return false
}
//***********************Solution B*********************//
//This solution returns the indexes of the two pair elements if any such two pairs exist in the array

func twoPairSum(_ nums: [Int], _ target: Int) -> [Int] {
    for currentIndex in 0..<nums.count {
        for nextIndex in currentIndex+1..<nums.count {
            if calculateSum(firstElement: nums[currentIndex], secondElement: nums[nextIndex], target: target) {
                return [currentIndex, nextIndex]
            }
        }
    }
    return []
}

func calculateSum(firstElement: Int, secondElement: Int, target: Int) -> Bool { //Helper Function
    return (firstElement + secondElement) == target
}
//*******************Solution C*********************//
//This solution returns the indexes of the two pair elements if any such two pairs exist in the array

func twoPairSum(_ nums: [Int], _ target: Int) -> [Int] {
    var dict = [Int: Int]()
    for (index, value) in nums.enumerated() {
        dict[value] = index
    }
    for (index, value) in nums.enumerated() {
        let otherIndex = dict[(target - value)]
        if otherIndex != nil && otherIndex != index {
            return [index, otherIndex!]
        }
    }
    return []
}
This question is missing some details, like what the return value should be and the limitations on the input.
I have seen some related questions, which can be this question with the extra requirement to return the actual elements that produce the sum.
Here is my version of the solution; it should be O(n).
import java.util.*;

public class IntegerSum {
    private static final int[] NUMBERS = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};

    public static void main(String[] args) {
        int[] result = IntegerSum.isSumExist(7);
        System.out.println(Arrays.toString(result));
    }

    /**
     * n = x + y
     * 7 = 1 + 6
     * 7 - 1 = 6
     * 7 - 6 = 1
     * The subtraction of one element in the array should result in the value of the other element, if it exists.
     */
    public static int[] isSumExist(int n) {
        // validate the input, based on the question
        // This returns the values that actually result in the sum, which is even more tricky
        int[] output = new int[2];
        Map<Integer, Integer> resultsMap = new HashMap<Integer, Integer>();
        // O(n)
        for (int number : NUMBERS) {
            if (number > n)
                throw new IllegalStateException("The number is not in the array.");
            if (resultsMap.containsKey(number)) {
                output[0] = number;
                output[1] = resultsMap.get(number);
                return output;
            }
            resultsMap.put(n - number, number);
        }
        throw new IllegalStateException("The number is not in the array.");
    }
}

How to find pythagorean triplets in an array faster than O(N^2)?

Can someone suggest an algorithm that finds all Pythagorean triplets among numbers in a given array? If possible, please suggest an algorithm faster than O(n²).
A Pythagorean triplet is a set {a, b, c} such that a² = b² + c². Example: for the array [9, 2, 3, 4, 8, 5, 6, 10] the output of the algorithm should be {3, 4, 5} and {6, 8, 10}.
I understand this question as
Given an array, find all such triplets i, j and k, such that a[i]² = a[j]² + a[k]²
The key idea of the solution is:
Square each element (this takes O(n) time). This reduces the original task to "find three numbers in the array, one of which is the sum of the other two".
Now, if you know how to solve that task in less than O(n²) time, use such an algorithm. Off the top of my head comes only the following O(n²) solution:
Sort the array in ascending order. This takes O(n log n).
Now consider each element a[i]. If a[i]=a[j]+a[k], then, since numbers are positive and array is now sorted, k<i and j<i.
To find such indexes, run a loop that increases j from 1 to i, and decreases k from i to 0 at the same time, until they meet. Increase j if a[j]+a[k] < a[i], and decrease k if the sum is greater than a[i]. If the sum is equal, that's one of the answers, print it, and shift both indexes.
This takes O(i) operations.
Repeat step 2 for each index i. This way you'll need O(n²) operations in total, which will be the final estimate.
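A compact Java sketch of those steps (my code): square, sort, then run the inward two-index scan for each candidate hypotenuse.

import java.util.Arrays;

static void printTriplets(int[] nums) {
    long[] sq = new long[nums.length];
    for (int i = 0; i < nums.length; i++)
        sq[i] = (long) nums[i] * nums[i];          // square each element: O(n)
    Arrays.sort(sq);                               // O(n log n)
    for (int i = sq.length - 1; i >= 2; i--) {     // sq[i] plays the role of a[i]²
        int j = 0, k = i - 1;
        while (j < k) {                            // O(i) per element, O(n²) total
            long sum = sq[j] + sq[k];
            if (sum == sq[i]) {
                System.out.println("{" + Math.round(Math.sqrt(sq[j])) + ", "
                        + Math.round(Math.sqrt(sq[k])) + ", " + Math.round(Math.sqrt(sq[i])) + "}");
                j++; k--;                          // shift both indexes
            } else if (sum < sq[i]) j++;
            else k--;
        }
    }
}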
No one knows how to do significantly better than quadratic for the closely related 3SUM problem ( http://en.wikipedia.org/wiki/3SUM ). I'd rate the possibility of a fast solution to your problem as unlikely.
The 3SUM problem is finding a + b + c = 0. Let PYTHTRIP be the problem of finding a^2 + b^2 = c^2 when the inputs are real algebraic numbers. Here is the O(n log n)-time reduction from 3SUM to PYTHTRIP. As ShreevatsaR points out, this doesn't exclude the possibility of a number-theoretic trick (or a solution to 3SUM!).
First we reduce 3SUM to a problem I'll call 3SUM-ALT. In 3SUM-ALT, we want to find a + b = c where all array entries are nonnegative. The finishing reduction from 3SUM-ALT to PYTHTRIP is just taking square roots.
To solve 3SUM using 3SUM-ALT, first eliminate the possibility of triples where one of a, b, c is zero (O(n log n)). Now, any satisfying triple has two positive numbers and one negative, or two negative and one positive. Let w be a number greater than three times the absolute value of any input number. Solve two instances of 3SUM-ALT: one where all negative x are mapped to w - x and all positive x are mapped to 2w + x; one where all negative x are mapped to 2w - x and all positive x are mapped to w + x. The rest of the proof is straightforward.
I have one more solution,
//sort the array in ascending order
//find the square of each element in the array
//let 'a' be the array containing the square of each element in ascending order
for (i -> 0 to (a.length - 1))
    for (j -> i+1 to (a.length - 1))
        //search for a[i]+a[j] ahead in the array, from j+1 to the end of the array
        //if found, get the triplet as sqrt(a[i]), sqrt(a[j]) & sqrt(a[i]+a[j])
    endfor
endfor
Not sure if this is any better, but you can compute them in time proportional to the maximum value in the list by just computing all possible triples less than or equal to it. The following Perl code does this. The time complexity of the algorithm is proportional to the maximum value, since the sum of inverse squares 1 + 1/2² + 1/3² + ... is equal to π²/6, a constant.
I just used the formula from the Wikipedia page for generating (not necessarily unique) triples.
my $list = [9, 2, 3, 4, 8, 5, 6, 10];
pythagoreanTriplets($list);

sub pythagoreanTriplets
{
    my $list = $_[0];
    my %hash;
    my $max = 0;
    foreach my $value (@$list)
    {
        $hash{$value} = 1;
        $max = $value if ($value > $max);
    }
    my $sqrtMax = 1 + int sqrt $max;
    for (my $n = 1; $n <= $sqrtMax; $n++)
    {
        my $n2 = $n * $n;
        for (my $m = $n + 1; $m <= $sqrtMax; $m++)
        {
            my $m2 = $m * $m;
            my $maxK = 1 + int ($max / ($m2 + $n2));
            for (my $k = 1; $k <= $maxK; $k++)
            {
                my $a = $k * ($m2 - $n2);
                my $b = $k * (2 * $m * $n);
                my $c = $k * ($m2 + $n2);
                print "$a $b $c\n" if (exists ($hash{$a}) && exists ($hash{$b}) && exists ($hash{$c}));
            }
        }
    }
}
Here's a solution which might scale better for large lists of small numbers. At least it's different ;v) .
According to http://en.wikipedia.org/wiki/Pythagorean_triple#Generating_a_triple,
a = m^2 - n^2, b = 2mn, c = m^2 + n^2
b looks nice, eh?
Sort the array in O(N log N) time.
For each element b, find the prime factorization. Naively using a table of primes up to the square root of the largest input value M would take O(sqrt M/log M) time and space* per element.
For each pair (m,n), m > n, b = 2mn (skip odd b), search for m^2-n^2 and m^2+n^2 in the sorted array. O(log N) per pair, O(2^(Ω(M))) = O(log M)** pairs per element, O(N (log N) (log M)) total.
Final analysis: O( N ( (sqrt M/log M) + (log N * log M) ) ), N = array size, M = magnitude of values.
(* To accept 64-bit input, there are about 203M 32-bit primes, but we can use a table of differences at one byte per prime, since the differences are all even, and perhaps also generate large primes in sequence on demand. To accept 32-bit input, a table of 16-bit primes is needed, which is small enough to fit in L1 cache. Time here is an overestimate assuming all prime factors are just less than the square root.)
(** Actual bound lower because of duplicate prime factors.)
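A simplified Java sketch of the b = 2mn search (my code): it enumerates divisor pairs of b/2 by plain trial division instead of a prime table, which is O(sqrt b) per element rather than the bound derived above, but it shows the structure of the search for m² - n² and m² + n².

import java.util.HashSet;
import java.util.Set;

static void tripletsFromEvenLeg(int[] nums) {
    Set<Integer> present = new HashSet<>();
    for (int v : nums) present.add(v);
    for (int b : nums) {
        if (b <= 0 || b % 2 != 0) continue;        // b = 2mn is even and positive
        int half = b / 2;                          // half = m * n
        for (int n = 1; (long) n * n < half; n++) {
            if (half % n != 0) continue;
            int m = half / n;                      // m > n by construction
            int a = m * m - n * n, c = m * m + n * n;
            if (present.contains(a) && present.contains(c))
                System.out.println(a + " " + b + " " + c);
        }
    }
}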
Solution in O(N):
Find the minimum element in the array: min. O(n)
Find the maximum element in the array: max. O(n)
Make a hashtable of the elements so that an element can be searched in O(1).
m^2 - 1 = min .... plug in min from step 1 and find m from this equation. O(1)
2m = min .... plug in min from step 1 and find m from this equation. O(1)
m^2 + 1 = max .... plug in max from step 2 and find m from this equation. O(1)
Choose the floor of the min of (steps 4, 5, 6); let's say minValue. O(1)
Choose the ceil of the max of (steps 4, 5, 6); let's say maxValue. O(1)
Loop from j = minValue to maxValue (maxValue - minValue will be less than the root of N):
9a. calculate the three numbers j^2 - 1, 2j, j^2 + 1;
9b. search these numbers in the hashtable; if found, return success.
Return failure.
A few of my co-workers were asked this very same problem in a Java cert course they were taking. The solution we came up with was O(N²). We shaved off as much of the problem space as we could, but we could not find a way to drop the complexity to N log N or better.
public static List<int[]> pythagoreanTripplets(int[] input) {
    List<int[]> answers = new ArrayList<int[]>();
    Map<Long, Integer> map = new HashMap<Long, Integer>();
    for (int i = 0; i < input.length; i++) {
        map.put((long) input[i] * (long) input[i], input[i]);
    }
    Long[] unique = (Long[]) map.keySet().toArray(new Long[0]);
    Arrays.sort(unique);
    for (int i = 1; i < unique.length; i++) {
        Long halfC = unique[i] / 2;
        for (int j = i - 1; j >= 0; j--) {
            if (unique[j] < halfC) break;
            if (map.containsKey(unique[i] - unique[j])) {
                answers.add(new int[]{map.get(unique[i] - unique[j]), map.get(unique[j]), map.get(unique[i])});
            }
        }
    }
    return answers;
}
If (a, b, c) is a Pythagorean triple, then so is (ka, kb, kc) for any positive integer k.
So simply find one set of values for a, b, and c, and then you can calculate as many new ones as you want.
Pseudo code:
a = 3
b = 4
c = 5
for k in 1..N:
    P[k] = (ka, kb, kc)
Let me know if this is not exactly what you're looking for.
It can be done in O(n) time: first hash the elements into a map for existence checks, then apply the algorithm below.
Scan the array; if an element n is an even number, then (n, (n/2)² - 1, (n/2)² + 1) is the triplet to look for. Just check for its existence using hash map lookups; if the whole triplet exists, print it.
This is the one I had implemented ...
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * @author Pranali Choudhari (pranali_choudhari@persistent.co.in)
 */
public class PythagoreanTriple {

    //I hope this is optimized
    public static void main(String[] args) {
        Map<Long, Set<Long>> triples = new HashMap<Long, Set<Long>>();
        List<Long> l1 = new ArrayList<Long>();
        addValuesToArrayList(l1);
        long n = 0;
        for (long i : l1) {
            //if it's side a
            n = (i - 1L) / 2L;
            if (n != 0 && n > 0) {
                putInMap(triples, n, i);
                n = 0;
            }
            //if it's side b
            n = ((-1 + Math.round(Math.sqrt(2 * i + 1))) / 2);
            if (n != 0 && n > 0) {
                putInMap(triples, n, i);
                n = 0;
            }
            n = ((-1 - Math.round(Math.sqrt(2 * i + 1))) / 2);
            if (n != 0 && n > 0) {
                putInMap(triples, n, i);
                n = 0;
            }
            //if it's side c
            n = ((-1 + Math.round(Math.sqrt(2 * i - 1))) / 2);
            if (n != 0 && n > 0) {
                putInMap(triples, n, i);
                n = 0;
            }
            n = ((-1 - Math.round(Math.sqrt(2 * i - 1))) / 2);
            if (n != 0 && n > 0) {
                putInMap(triples, n, i);
                n = 0;
            }
        }
        for (Map.Entry<Long, Set<Long>> e : triples.entrySet()) {
            if (e.getValue().size() == 3) {
                System.out.println("Triples" + e.getValue());
            }
            //need to handle the scenario when size() > 3
            //even those are triples but we need to filter the wrong ones
        }
    }

    private static void putInMap(Map<Long, Set<Long>> triples, long n, Long i) {
        Set<Long> set = triples.get(n);
        if (set == null) {
            set = new HashSet<Long>();
            triples.put(n, set);
        }
        set.add(i);
    }

    //add values here
    private static void addValuesToArrayList(List<Long> l1) {
        l1.add(1L);
        l1.add(2L);
        l1.add(3L);
        l1.add(4L);
        l1.add(5L);
        l1.add(12L);
        l1.add(13L);
    }
}
Here's the implementation in Java:
/**
 * Step 1: Square each of the elements in the array [O(n)]
 * Step 2: Sort the array [O(n log n)]
 * Step 3: For each element in the array, find all the pairs in the array whose sum is equal to that element [O(n²)]
 *
 * Time Complexity: O(n²)
 */
public static Set<Set<Integer>> findAllPythogoreanTriplets(int[] unsortedData) {
    // O(n) - Square all the elements in the array
    for (int i = 0; i < unsortedData.length; i++)
        unsortedData[i] *= unsortedData[i];

    // O(n log n) - Sort
    int[] sortedSquareData = QuickSort.sort(unsortedData);

    // O(n²)
    Set<Set<Integer>> triplets = new HashSet<Set<Integer>>();
    for (int i = 0; i < sortedSquareData.length; i++) {
        Set<Set<Integer>> pairs = findAllPairsThatSumToAConstant(sortedSquareData, sortedSquareData[i]);
        for (Set<Integer> pair : pairs) {
            Set<Integer> triplet = new HashSet<Integer>();
            for (Integer n : pair) {
                triplet.add((int) Math.sqrt(n));
            }
            triplet.add((int) Math.sqrt(sortedSquareData[i])); // adding the third element to the pair to make it a triplet
            triplets.add(triplet);
        }
    }
    return triplets;
}

public static Set<Set<Integer>> findAllPairsThatSumToAConstant(int[] sortedData, int constant) {
    // O(n)
    Set<Set<Integer>> pairs = new HashSet<Set<Integer>>();
    int p1 = 0; // pointing to the first element
    int p2 = sortedData.length - 1; // pointing to the last element
    while (p1 < p2) {
        int pointersSum = sortedData[p1] + sortedData[p2];
        if (pointersSum > constant)
            p2--;
        else if (pointersSum < constant)
            p1++;
        else {
            Set<Integer> set = new HashSet<Integer>();
            set.add(sortedData[p1]);
            set.add(sortedData[p2]);
            pairs.add(set);
            p1++;
            p2--;
        }
    }
    return pairs;
}
If the problem is the one "For an array of integers, find all triples such that a² + b² = c²":
Sort the array into ascending order.
Set three pointers p1, p2, p3 at entries 0, 1, 2.
Set pEnd to past the last entry in the array.
while (p2 < pEnd - 2)
{
    sum = (*p1 * *p1 + *p2 * *p2);
    while ((*p3 * *p3) < sum && p3 < pEnd - 1)
        p3++;
    if ((*p3 * *p3) == sum)
        output_triple(*p1, *p2, *p3);
    p1++;
    p2++;
}
It's moving 3 pointers up the array, so it's O(sort(n) + n).
It's not n² because the next pass starts at the next largest number and doesn't reset.
If the last number was too small for the triple, it's still too small when you go to the next bigger a and b.
public class FindPythagorusCombination {

    public static void main(String[] args) {
        int[] no = {1, 5, 3, 4, 8, 10, 6};
        int[] sortedno = sorno(no);
        findPythaComb(sortedno);
    }

    private static void findPythaComb(int[] sortedno) {
        for (int i = 0; i < sortedno.length; i++) {
            int lSum = 0, rSum = 0;
            lSum = sortedno[i] * sortedno[i];
            for (int j = i + 1; j < sortedno.length; j++) {
                for (int k = j + 1; k < sortedno.length; k++) {
                    rSum = (sortedno[j] * sortedno[j]) + (sortedno[k] * sortedno[k]);
                    if (lSum == rSum) {
                        System.out.println("Pythagorus combination found: " + sortedno[i] + " " + sortedno[j] + " " + sortedno[k]);
                    } else
                        rSum = 0;
                }
            }
        }
    }

    private static int[] sorno(int[] no) {
        for (int i = 0; i < no.length; i++) {
            for (int j = i + 1; j < no.length; j++) {
                if (no[i] < no[j]) {
                    int temp = no[i];
                    no[i] = no[j];
                    no[j] = temp;
                }
            }
        }
        return no;
    }
}
import java.io.*;
import java.lang.*;
import java.util.*;

class PythagoreanTriplets {
    public static void main(String args[]) throws IOException {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        int n = Integer.parseInt(br.readLine());
        int arr[] = new int[n];
        int i, j, k, sum;
        System.out.println("Enter the numbers ");
        for (i = 0; i < n; i++) {
            arr[i] = Integer.parseInt(br.readLine());
            arr[i] = arr[i] * arr[i];
        }
        Arrays.sort(arr);
        for (i = n - 1; i >= 0; i--) {
            for (j = 0, k = i - 1; j < k; ) {
                sum = arr[j] + arr[k];
                if (sum == arr[i]) {
                    System.out.println((int) Math.sqrt(arr[i]) + "," + (int) Math.sqrt(arr[j]) + "," + (int) Math.sqrt(arr[k]));
                    break;
                } else if (sum > arr[i]) k--;
                else j++;
            }
        }
    }
}
Finding Pythagorean triplets in O(n)
Algorithm:
For each element in the array, check whether it is prime or not.
If it is prime, calculate the other two numbers as ((n²)+1)/2 and ((n²)-1)/2, and check whether these two calculated numbers are in the array.
If it is not prime, calculate the other two numbers as mentioned in the else case in the code given below.
Repeat until the end of the array is reached.
int arr[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 11, 60, 61};
int prim[] = {3, 5, 7, 11}; // store all the prime numbers
int r, l;
List<Integer> prime = new ArrayList<Integer>(); // storing in a list, so that it is easy to search
for (int i = 0; i < 4; i++) {
    prime.add(prim[i]);
}
List<Integer> n = new ArrayList<Integer>();
for (int i = 0; i < arr.length; i++) {
    n.add(arr[i]);
}
double v1, v2, v3;
int dummy[] = new int[arr.length];
for (int i = 0; i < arr.length; i++)
    dummy[i] = arr[i];
Integer x = 0, y = 0, z = 0;
List<Integer> temp = new ArrayList<Integer>();
for (int i = 0; i < arr.length; i++) {
    temp.add(arr[i]);
}
for (int j : n) {
    if (prime.contains(j)) { // if it is prime
        double a, b;
        v1 = (double) j;
        v2 = Math.ceil(((j * j) + 1) / 2);
        v3 = Math.ceil(((j * j) - 1) / 2);
        if (n.contains((int) v2) && n.contains((int) v3)) {
            System.out.println((int) v1 + " " + (int) v2 + " " + (int) v3);
        }
    } else { // if it is not prime
        if (j % 3 == 0) {
            x = j;
            y = 4 * (j / 3);
            z = 5 * (j / 3);
            if (temp.contains(y) && temp.contains(z)) {
                System.out.println(x + " " + y + " " + z);
                // replacing those three elements with 0
                dummy[temp.indexOf(x) - 1] = 0;
                dummy[temp.indexOf(y) - 1] = 0;
                dummy[temp.indexOf(z) - 1] = 0;
            }
        }
    } // else end
} // for end
Complexity: O(n)
Take a look at the following code that I wrote.
#include <iostream>
#include <vector>
#include <algorithm> // for std::sort
using namespace std;
typedef long long ll;

bool existTriplet(vector<ll> &vec)
{
    for (size_t i = 0; i < vec.size(); i++)
    {
        vec[i] = vec[i] * vec[i]; // Square all the array elements
    }
    sort(vec.begin(), vec.end()); // Sort it
    for (ll i = (ll) vec.size() - 1; i >= 2; i--) // signed index avoids unsigned wrap-around
    {
        ll l = 0;
        ll r = i - 1;
        while (l < r)
        {
            if (vec[l] + vec[r] == vec[i])
                return true;
            vec[l] + vec[r] < vec[i] ? l++ : r--;
        }
    }
    return false;
}

int main() {
    int T;
    cin >> T;
    while (T--)
    {
        ll n;
        cin >> n;
        vector<ll> vec(n);
        for (ll i = 0; i < n; i++)
        {
            cin >> vec[i];
        }
        if (existTriplet(vec))
            cout << "Yes";
        else
            cout << "No";
        cout << endl;
    }
    return 0;
}
Plato's formula for Pythagorean Triples:
Plato, a Greek Philosopher, came up with a great formula for finding Pythagorean triples.
(2m)^2 + (m^2 - 1)^2 = (m^2 + 1)^2
bool checkperfectSquare(int num) {
    int sq = (int) round(sqrt(num));
    if (sq == num / sq) {
        return true;
    } else {
        return false;
    }
}

void solve() {
    int i, j, k, n;
    // length of the array
    cin >> n;
    int ar[n];
    // reading all the numbers into the array
    for (i = 0; i < n; i++) {
        cin >> ar[i];
    }
    // sort the array
    sort(ar, ar + n);
    for (i = 0; i < n; i++) {
        if (ar[i] <= 2) {
            continue;
        } else {
            int tmp1 = ar[i] + 1;
            int m;
            if (checkperfectSquare(tmp1)) {
                m = (int) round(sqrt(tmp1));
                int b = 2 * m, c = (m * m) + 1;
                if (binary_search(ar, ar + n, b) && binary_search(ar, ar + n, c)) {
                    cout << ar[i] << " " << b << " " << c << endl;
                    break;
                }
            }
            if (ar[i] % 2 == 0) {
                m = ar[i] / 2;
                int b = (m * m - 1), c = (m * m + 1);
                if (binary_search(ar, ar + n, b) && binary_search(ar, ar + n, c)) {
                    cout << ar[i] << " " << b << " " << c << endl;
                    break;
                }
            }
        }
    }
}

How to find the kth largest element in an unsorted array of length n in O(n)?

I believe there's a way to find the kth largest element in an unsorted array of length n in O(n). Or perhaps it's "expected" O(n) or something. How can we do this?
This is called finding the k-th order statistic. There's a very simple randomized algorithm (called quickselect) taking O(n) average time, O(n^2) worst case time, and a pretty complicated non-randomized algorithm (called introselect) taking O(n) worst case time. There's some info on Wikipedia, but it's not very good.
Everything you need is in these powerpoint slides. Just to extract the basic algorithm of the O(n) worst-case algorithm (introselect):
Select(A, n, i):
    Divide input into ⌈n/5⌉ groups of size 5.

    /* Partition on median-of-medians */
    medians = array of each group's median.
    pivot = Select(medians, ⌈n/5⌉, ⌈n/10⌉)
    Left Array L and Right Array G = partition(A, pivot)

    /* Find ith element in L, pivot, or G */
    k = |L| + 1
    If i = k, return pivot
    If i < k, return Select(L, k-1, i)
    If i > k, return Select(G, n-k, i-k)
It's also very nicely detailed in the Introduction to Algorithms book by Cormen et al.
If you want a true O(n) algorithm, as opposed to O(kn) or something like that, then you should use quickselect (it's basically quicksort where you throw out the partition that you're not interested in). My prof has a great writeup, with the runtime analysis: (reference)
The QuickSelect algorithm quickly finds the k-th smallest element of an unsorted array of n elements. It is a RandomizedAlgorithm, so we compute the worst-case expected running time.
Here is the algorithm.
QuickSelect(A, k)
  let r be chosen uniformly at random in the range 1 to length(A)
  let pivot = A[r]
  let A1, A2 be new arrays
  # split into a pile A1 of small elements and A2 of big elements
  for i = 1 to n
    if A[i] < pivot then
      append A[i] to A1
    else if A[i] > pivot then
      append A[i] to A2
    else
      # do nothing
  end for
  if k <= length(A1):
    # it's in the pile of small elements
    return QuickSelect(A1, k)
  else if k > length(A) - length(A2)
    # it's in the pile of big elements
    return QuickSelect(A2, k - (length(A) - length(A2)))
  else
    # it's equal to the pivot
    return pivot
What is the running time of this algorithm? If the adversary flips coins for us, we may find that the pivot is always the largest element and k is always 1, giving a running time of
T(n) = Theta(n) + T(n-1) = Theta(n^2)
But if the choices are indeed random, the expected running time is given by
T(n) <= Theta(n) + (1/n) ∑_{i=1}^{n} T(max(i, n-i-1))
where we are making the not entirely reasonable assumption that the recursion always lands in the larger of A1 or A2.
Let's guess that T(n) <= an for some a. Then we get
T(n)
  <= cn + (1/n) ∑_{i=1}^{n} T(max(i-1, n-i))
   = cn + (1/n) ∑_{i=1}^{floor(n/2)} T(n-i) + (1/n) ∑_{i=floor(n/2)+1}^{n} T(i)
  <= cn + 2 (1/n) ∑_{i=floor(n/2)}^{n} T(i)
  <= cn + 2 (1/n) ∑_{i=floor(n/2)}^{n} ai
and now somehow we have to get the horrendous sum on the right of the plus sign to absorb the cn on the left. If we just bound it as 2(1/n) ∑_{i=n/2}^{n} an, we get roughly 2(1/n)(n/2)an = an. But this is too big - there's no room to squeeze in an extra cn. So let's expand the sum using the arithmetic series formula:
∑_{i=floor(n/2)}^{n} i
   = ∑_{i=1}^{n} i - ∑_{i=1}^{floor(n/2)} i
   = n(n+1)/2 - floor(n/2)(floor(n/2)+1)/2
  <= n^2/2 - (n/4)^2/2
   = (15/32)n^2
where we take advantage of n being "sufficiently large" to replace the ugly floor(n/2) factors with the much cleaner (and smaller) n/4. Now we can continue with
cn + 2 (1/n) ∑_{i=floor(n/2)}^{n} ai
  <= cn + (2a/n)(15/32)n^2
   = n(c + (15/16)a)
  <= an
provided a > 16c.
This gives T(n) = O(n). It's clearly Omega(n), so we get T(n) = Theta(n).
A quick Google on that ('kth largest element array') returned this: http://discuss.joelonsoftware.com/default.asp?interview.11.509587.17
"Make one pass through tracking the three largest values so far."
(it was specifically for the 3rd largest)
and this answer:
Build a heap/priority queue. O(n)
Pop top element. O(log n)
Pop top element. O(log n)
Pop top element. O(log n)
Total = O(n) + 3 O(log n) = O(n)
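A Java sketch of that answer (my code). One caveat: Java's PriorityQueue heapifies in O(n) only through its plain Collection constructor, which uses natural min-first order; with a reverse comparator the elements are added one by one, so treat the O(n) build as conceptual.

import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

static int kthLargestViaHeap(List<Integer> nums, int k) {
    PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Collections.reverseOrder());
    maxHeap.addAll(nums);                          // build the heap
    for (int i = 1; i < k; i++) maxHeap.poll();    // pop the top k-1 elements
    return maxHeap.poll();                         // the kth pop is the kth largest
}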
You do like quicksort. Pick an element at random and shove everything either higher or lower. At this point you'll know which element you actually picked, and if it is the kth element you're done, otherwise you repeat with the bin (higher or lower), that the kth element would fall in. Statistically speaking, the time it takes to find the kth element grows with n, O(n).
A Programmer's Companion to Algorithm Analysis gives a version that is O(n), although the author states that the constant factor is so high, you'd probably prefer the naive sort-the-list-then-select method.
I answered the letter of your question :)
The C++ standard library has almost exactly that function call nth_element, although it does modify your data. It has expected linear run-time, O(N), and it also does a partial sort.
const int N = ...;
double a[N];
// ...
const int m = ...; // m < N
nth_element (a, a + m, a + N);
// a[m] contains the mth element in a
You can do it in O(n + kn) = O(n) (for constant k) for time and O(k) for space, by keeping track of the k largest elements you've seen.
For each element in the array you can scan the list of k largest and replace the smallest element with the new one if it is bigger.
Warren's priority heap solution is neater though.
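For reference, a small Java sketch of that O(kn) scan (my code, assuming the array has at least k elements):

import java.util.Arrays;

static int kthLargestScan(int[] a, int k) {
    int[] top = new int[k];                        // the k largest seen so far
    Arrays.fill(top, Integer.MIN_VALUE);
    for (int x : a) {
        int min = 0;                               // index of the smallest of the k
        for (int i = 1; i < k; i++)
            if (top[i] < top[min]) min = i;
        if (x > top[min]) top[min] = x;            // replace it if the new element is bigger
    }
    int min = 0;
    for (int i = 1; i < k; i++)
        if (top[i] < top[min]) min = i;
    return top[min];                               // the kth largest
}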
Although I am not very sure about the O(n) complexity, it is sure to be between O(n) and O(n log n), and closer to O(n) than O(n log n). The function is written in Java.
public int quickSelect(ArrayList<Integer> list, int nthSmallest) {
    //Choose a random pivot index in the range 0 to list.size() - 1
    Random random = new Random();
    int pivotIndex = random.nextInt(list.size());
    int pivot = list.get(pivotIndex);
    ArrayList<Integer> smallerNumberList = new ArrayList<Integer>();
    ArrayList<Integer> greaterNumberList = new ArrayList<Integer>();
    //Split the list into two.
    //Values smaller than the pivot go to smallerNumberList
    //Values greater than the pivot go to greaterNumberList
    //Do nothing for values equal to the pivot
    for (int i = 0; i < list.size(); i++) {
        if (list.get(i) < pivot) {
            smallerNumberList.add(list.get(i));
        } else if (list.get(i) > pivot) {
            greaterNumberList.add(list.get(i));
        } else {
            //Do nothing
        }
    }
    //If nthSmallest (1-based) falls within smallerNumberList, the answer must be in that list
    if (nthSmallest <= smallerNumberList.size()) {
        return quickSelect(smallerNumberList, nthSmallest);
    }
    //If nthSmallest is greater than [ list.size() - greaterNumberList.size() ], it must be in greaterNumberList
    //The step is a bit tricky. If confusing, please see the above loop once again for clarification.
    else if (nthSmallest > (list.size() - greaterNumberList.size())) {
        //nthSmallest has to be shifted here: [ list.size() - greaterNumberList.size() ] elements
        //are already accounted for by smallerNumberList and the pivot-equal elements
        nthSmallest = nthSmallest - (list.size() - greaterNumberList.size());
        return quickSelect(greaterNumberList, nthSmallest);
    } else {
        return pivot;
    }
}
I implemented finding the kth minimum in n unsorted elements using dynamic programming, specifically the tournament method. The execution time is O(n + k log(n)). The mechanism used is listed as one of the methods on the Wikipedia page about selection algorithms (as indicated in one of the postings above). You can read about the algorithm and also find code (Java) on my blog page Finding Kth Minimum. In addition, the logic can do partial ordering of the list - return the first K min (or max) in O(k log(n)) time.
Though the code provided returns the kth minimum, similar logic can be employed to find the kth maximum in O(k log(n)), ignoring the pre-work done to create the tournament tree.
Sexy quickselect in Python
import random

def quickselect(arr, k):
    '''
    k = 1 returns the first element in ascending order.
    Can easily be modified to return the first element in descending order.
    '''
    r = random.randrange(0, len(arr))
    a1 = [i for i in arr if i < arr[r]]  # partition: elements below the pivot
    a2 = [i for i in arr if i > arr[r]]  # elements above the pivot
    if k <= len(a1):
        return quickselect(a1, k)
    elif k > len(arr) - len(a2):
        return quickselect(a2, k - (len(arr) - len(a2)))
    else:
        return arr[r]
As per the paper Finding the Kth largest item in a list of n items, the following algorithm will take O(n) time in the worst case.
Divide the array into n/5 lists of 5 elements each.
Find the median in each sub-array of 5 elements.
Recursively find the median of all the medians, let's call it M.
Partition the array into two sub-arrays: the 1st sub-array contains the elements larger than M (let's say this sub-array is a1), while the other sub-array contains the elements smaller than M (let's call this sub-array a2).
If k <= |a1|, return selection(a1, k).
If k - 1 = |a1|, return M.
If k > |a1| + 1, return selection(a2, k - |a1| - 1).
Analysis: As suggested in the original paper:
We use the median to partition the list into two halves (the first half, if k <= n/2, and the second half otherwise). This algorithm takes time cn at the first level of recursion for some constant c, cn/2 at the next level (since we recurse in a list of size n/2), cn/4 at the third level, and so on. The total time taken is cn + cn/2 + cn/4 + ... = 2cn = O(n).
Why is the partition size taken as 5 and not 3?
As mentioned in the original paper:
Dividing the list by 5 assures a worst-case split of 70 - 30. At least half of the medians are greater than the median-of-medians, hence at least half of the n/5 blocks have at least 3 elements, and this gives a 3n/10 split, which means the other partition is 7n/10 in the worst case. That gives T(n) = T(n/5) + T(7n/10) + O(n). Since 1/5 + 7/10 < 1, the worst-case running time is O(n).
Now I have tried to implement the above algorithm as:
public static int findKthLargestUsingMedian(Integer[] array, int k) {
// Step 1: Divide the list into n/5 lists of 5 element each.
int noOfRequiredLists = (int) Math.ceil(array.length / 5.0);
// Step 2: Find pivotal element aka median of medians.
int medianOfMedian = findMedianOfMedians(array, noOfRequiredLists);
//Now we need two lists split using medianOfMedian as pivot. All elements in list listOne will be grater than medianOfMedian and listTwo will have elements lesser than medianOfMedian.
List<Integer> listWithGreaterNumbers = new ArrayList<>(); // elements greater than medianOfMedian
List<Integer> listWithSmallerNumbers = new ArrayList<>(); // elements less than medianOfMedian
for (Integer element : array) {
if (element < medianOfMedian) {
listWithSmallerNumbers.add(element);
} else if (element > medianOfMedian) {
listWithGreaterNumbers.add(element);
}
}
// Next step.
if (k <= listWithGreaterNumbers.size()) return findKthLargestUsingMedian((Integer[]) listWithGreaterNumbers.toArray(new Integer[listWithGreaterNumbers.size()]), k);
else if ((k - 1) == listWithGreaterNumbers.size()) return medianOfMedian;
else if (k > (listWithGreaterNumbers.size() + 1)) return findKthLargestUsingMedian((Integer[]) listWithSmallerNumbers.toArray(new Integer[listWithSmallerNumbers.size()]), k-listWithGreaterNumbers.size()-1);
return -1;
}
public static int findMedianOfMedians(Integer[] mainList, int noOfRequiredLists) {
    int[] medians = new int[noOfRequiredLists];
    for (int count = 0; count < noOfRequiredLists; count++) {
        int startOfPartialArray = 5 * count;
        int endOfPartialArray = Math.min(startOfPartialArray + 5, mainList.length); // don't run past the end
        Integer[] partialArray = Arrays.copyOfRange(mainList, startOfPartialArray, endOfPartialArray);
        // Step 2: Find median of each of these sublists (sort the few elements first;
        // the middle element of an unsorted sublist is not its median).
        Arrays.sort(partialArray);
        medians[count] = partialArray[partialArray.length / 2];
    }
    // Step 3: Find median of the medians (again, sort first).
    Arrays.sort(medians);
    return medians[medians.length / 2];
}
Just for the sake of completeness, another algorithm makes use of a priority queue and takes time O(n log n).
public static int findKthLargestUsingPriorityQueue(Integer[] nums, int k) {
int p = 0;
int numElements = nums.length;
// create priority queue where all the elements of nums will be stored
PriorityQueue<Integer> pq = new PriorityQueue<Integer>();
// place all the elements of the array to this priority queue
for (int n : nums) {
pq.add(n);
}
// poll n-k+1 times from the min-heap; the last value polled is the kth largest
while (numElements - k + 1 > 0) {
p = pq.poll();
k++;
}
return p;
}
Both of these algorithms can be tested as:
public static void main(String[] args) throws IOException {
Integer[] numbers = new Integer[]{2, 3, 5, 4, 1, 12, 11, 13, 16, 7, 8, 6, 10, 9, 17, 15, 19, 20, 18, 23, 21, 22, 25, 24, 14};
System.out.println(findKthLargestUsingMedian(numbers, 8));
System.out.println(findKthLargestUsingPriorityQueue(numbers, 8));
}
As expected, the output is:
18
18
Find the median of the array in linear time, then use the partition procedure, exactly as in quicksort, to divide the array in two parts (values to the left of the median less than (<) the median, and values to the right greater than (>) it); that too can be done in linear time. Now recurse into the part of the array where the kth element lies.
The recurrence becomes:
T(n) = T(n/2) + cn
which expands to cn + cn/2 + cn/4 + ... <= 2cn, giving O(n) overall.
Below is a link to a full implementation with quite an extensive explanation of how the algorithm for finding the kth element in an unsorted array works. The basic idea is to partition the array as in QuickSort. But in order to avoid extreme cases (e.g. when the smallest element is chosen as pivot in every step, so that the algorithm degenerates into O(n²) running time), special pivot selection is applied, called the median-of-medians algorithm. The whole solution runs in O(n) time in the worst and in the average case.
Here is a link to the full article (it is about finding the kth smallest element, but the principle is the same for finding the kth largest):
Finding Kth Smallest Element in an Unsorted Array
How about this kind of approach:
Maintain a buffer of length k and a tmp_max. Getting tmp_max is O(k), and this is done n times, so something like O(kn).
Is that right, or am I missing something?
Although it doesn't beat the average case of quickselect or the worst case of the median-of-medians method, it's pretty easy to understand and implement.
There is also one algorithm that outperforms the quickselect algorithm. It's called the Floyd-Rivest (FR) algorithm.
Original article: https://doi.org/10.1145/360680.360694
Downloadable version: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.309.7108&rep=rep1&type=pdf
Wikipedia article https://en.wikipedia.org/wiki/Floyd%E2%80%93Rivest_algorithm
I tried to implement quickselect and the FR algorithm in C++, and compared them to the standard C++ library implementation std::nth_element (which is basically an introselect hybrid of quickselect and heapselect). The result: quickselect and nth_element ran comparably on average, but the FR algorithm ran approx. twice as fast compared to them.
Sample code that I used for FR algorithm:
#include <vector>
#include <algorithm>
#include <cmath>

// forward declarations: the templates below refer to each other
template <typename T>
T _FRselect(std::vector<T>& data, const size_t& left, const size_t& right, const size_t& n);
template <typename T>
int sgn(T val);

template <typename T>
T FRselect(std::vector<T>& data, const size_t& n)
{
    if (n == 0)
        return *(std::min_element(data.begin(), data.end()));
    else if (n == data.size() - 1)
        return *(std::max_element(data.begin(), data.end()));
    else
        return _FRselect(data, 0, data.size() - 1, n);
}
template <typename T>
T _FRselect(std::vector<T>& data, const size_t& left, const size_t& right, const size_t& n)
{
size_t leftIdx = left;
size_t rightIdx = right;
while (rightIdx > leftIdx)
{
if (rightIdx - leftIdx > 600)
{
size_t range = rightIdx - leftIdx + 1;
long long i = n - (long long)leftIdx + 1;
// keep z, s and sd in floating point: truncating them to integers
// weakens the sampling bounds of the original algorithm
double z = log((double)range);
double s = 0.5 * exp(2 * z / 3);
double sd = 0.5 * sqrt(z * s * (range - s) / range) * sgn(i - (long long)range / 2);
size_t newLeft = fmax(leftIdx, n - i * s / range + sd);
size_t newRight = fmin(rightIdx, n + (range - i) * s / range + sd);
_FRselect(data, newLeft, newRight, n);
}
T t = data[n];
size_t i = leftIdx;
size_t j = rightIdx;
// arrange pivot and right index
std::swap(data[leftIdx], data[n]);
if (data[rightIdx] > t)
std::swap(data[rightIdx], data[leftIdx]);
while (i < j)
{
std::swap(data[i], data[j]);
++i; --j;
while (data[i] < t) ++i;
while (data[j] > t) --j;
}
if (data[leftIdx] == t)
std::swap(data[leftIdx], data[j]);
else
{
++j;
std::swap(data[j], data[rightIdx]);
}
// adjust left and right towards the boundaries of the subset
// containing the (k - left + 1)th smallest element
if (j <= n)
leftIdx = j + 1;
if (n <= j)
rightIdx = j - 1;
}
return data[leftIdx];
}
template <typename T>
int sgn(T val) {
return (T(0) < val) - (val < T(0));
}
Iterate through the list. If the current value is larger than the stored largest value, store it as the largest value, bump values 1-4 down, and drop 5 off the list. If not, compare it to number 2 and do the same thing. Repeat, checking it against all 5 stored values. This should do it in O(n) for the fixed k = 5 (in general, the approach is O(kn)).
I would like to suggest one answer.
If we take the first k elements and sort them into a linked list of k values, then for each of the remaining n-k values, even in the worst case, an insertion costs at most k comparisons, i.e. k*(n-k) in total; sorting the first k values costs at most k*(k-1). So it comes out to roughly nk comparisons, which is O(nk) - linear in n for a fixed k. A sketch follows below.
cheers
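A minimal sketch of that idea in Java (my own illustration, assuming k <= n), using a sorted array in place of the linked list, since the per-insertion bound is the same O(k):
import java.util.Arrays;
public class KthByInsertion {
    // Keep the k smallest values seen so far in sorted order; each later
    // element is insertion-sorted in, costing at most k shifts.
    static int kthSmallest(int[] a, int k) {
        int[] best = Arrays.copyOf(a, k);
        Arrays.sort(best);                        // sort the first k values
        for (int i = k; i < a.length; i++) {
            if (a[i] >= best[k - 1]) continue;    // not among the k smallest
            int j = k - 1;
            while (j > 0 && best[j - 1] > a[i]) { // shift larger values right
                best[j] = best[j - 1];
                j--;
            }
            best[j] = a[i];
        }
        return best[k - 1];                       // the kth smallest
    }
    public static void main(String[] args) {
        System.out.println(kthSmallest(new int[]{4, 18, 20, 3, 7}, 2)); // prints 4
    }
}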
An explanation of the median-of-medians algorithm to find the kth largest integer out of n can be found here:
http://cs.indstate.edu/~spitla/presentation.pdf
Implementation in c++ is below:
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
int findMedian(vector<int> vec){
    // Find the median of a vector: the sublists arrive unsorted, so sort
    // the (by-value) copy first, then take the middle element
    sort(vec.begin(), vec.end());
    return vec[vec.size() / 2];
}
int findMedianOfMedians(vector<vector<int> > values){
vector<int> medians;
for (int i = 0; i < values.size(); i++) {
int m = findMedian(values[i]);
medians.push_back(m);
}
return findMedian(medians);
}
void selectionByMedianOfMedians(const vector<int> values, int k){
// Divide the list into n/5 lists of 5 elements each
vector<vector<int> > vec2D;
int count = 0;
while (count != values.size()) {
int countRow = 0;
vector<int> row;
while ((countRow < 5) && (count < values.size())) {
row.push_back(values[count]);
count++;
countRow++;
}
vec2D.push_back(row);
}
cout<<endl<<endl<<"Printing 2D vector : "<<endl;
for (int i = 0; i < vec2D.size(); i++) {
for (int j = 0; j < vec2D[i].size(); j++) {
cout<<vec2D[i][j]<<" ";
}
cout<<endl;
}
cout<<endl;
// Calculating a new pivot for making splits
int m = findMedianOfMedians(vec2D);
cout<<"Median of medians is : "<<m<<endl;
// Partition the list into unique elements larger than 'm' (call this sublist L1) and
// those smaller than 'm' (call this sublist L2)
vector<int> L1, L2;
for (int i = 0; i < vec2D.size(); i++) {
for (int j = 0; j < vec2D[i].size(); j++) {
if (vec2D[i][j] > m) {
L1.push_back(vec2D[i][j]);
}else if (vec2D[i][j] < m){
L2.push_back(vec2D[i][j]);
}
}
}
// Checking the splits as per the new pivot 'm'
cout<<endl<<"Printing L1 : "<<endl;
for (int i = 0; i < L1.size(); i++) {
cout<<L1[i]<<" ";
}
cout<<endl<<endl<<"Printing L2 : "<<endl;
for (int i = 0; i < L2.size(); i++) {
cout<<L2[i]<<" ";
}
// Recursive calls
if ((k - 1) == L1.size()) {
cout<<endl<<endl<<"Answer :"<<m;
}else if (k <= L1.size()) {
return selectionByMedianOfMedians(L1, k);
}else if (k > (L1.size() + 1)){
return selectionByMedianOfMedians(L2, k-((int)L1.size())-1);
}
}
int main()
{
int values[] = {2, 3, 5, 4, 1, 12, 11, 13, 16, 7, 8, 6, 10, 9, 17, 15, 19, 20, 18, 23, 21, 22, 25, 24, 14};
vector<int> vec(values, values + 25);
cout<<"The given array is : "<<endl;
for (int i = 0; i < vec.size(); i++) {
cout<<vec[i]<<" ";
}
selectionByMedianOfMedians(vec, 8);
return 0;
}
There is also Wirth's selection algorithm, which has a simpler implementation than QuickSelect. Wirth's selection algorithm is slower than QuickSelect, but with some improvements it becomes faster.
In more detail: using Vladimir Zabrodsky's MODIFIND optimization and median-of-3 pivot selection, and paying some attention to the final steps of the partitioning part of the algorithm, I've come up with the following algorithm (imaginably named "LefSelect"):
#define F_SWAP(a,b) { float temp=(a);(a)=(b);(b)=temp; }
// Note: The code needs more than 2 elements to work
float lefselect(float a[], const int n, const int k) {
int l=0, m = n-1, i=l, j=m;
float x;
while (l<m) {
if( a[k] < a[i] ) F_SWAP(a[i],a[k]);
if( a[j] < a[i] ) F_SWAP(a[i],a[j]);
if( a[j] < a[k] ) F_SWAP(a[k],a[j]);
x=a[k];
while (j>k && i<k) {
do i++; while (a[i]<x);
do j--; while (a[j]>x);
F_SWAP(a[i],a[j]);
}
i++; j--;
if (j<k) {
while (a[i]<x) i++;
l=i; j=m;
}
if (k<i) {
while (x<a[j]) j--;
m=j; i=l;
}
}
return a[k];
}
In benchmarks that I did here, LefSelect is 20-30% faster than QuickSelect.
Haskell Solution:
kthElem index list = sort list !! index
withShape ~[] [] = []
withShape ~(x:xs) (y:ys) = x : withShape xs ys
sort [] = []
sort (x:xs) = (sort ls `withShape` ls) ++ [x] ++ (sort rs `withShape` rs)
  where
    ls = filter (< x) xs
    rs = filter (>= x) xs
This implements the median of median solutions by using the withShape method to discover the size of a partition without actually computing it.
Here is a C++ implementation of Randomized QuickSelect. The idea is to randomly pick a pivot element. To implement randomized partition, we use a random function, rand(), to generate an index between l and r, swap the element at the randomly generated index with the last element, and finally call the standard partition process which uses the last element as pivot.
#include<iostream>
#include<climits>
#include<cstdlib>
using namespace std;
int randomPartition(int arr[], int l, int r);
// This function returns k'th smallest element in arr[l..r] using
// QuickSort based method. ASSUMPTION: ALL ELEMENTS IN ARR[] ARE DISTINCT
int kthSmallest(int arr[], int l, int r, int k)
{
// If k is smaller than number of elements in array
if (k > 0 && k <= r - l + 1)
{
// Partition the array around a random element and
// get position of pivot element in sorted array
int pos = randomPartition(arr, l, r);
// If position is same as k
if (pos-l == k-1)
return arr[pos];
if (pos-l > k-1) // If position is more, recur for left subarray
return kthSmallest(arr, l, pos-1, k);
// Else recur for right subarray
return kthSmallest(arr, pos+1, r, k-pos+l-1);
}
// If k is more than number of elements in array
return INT_MAX;
}
void swap(int *a, int *b)
{
int temp = *a;
*a = *b;
*b = temp;
}
// Standard partition process of QuickSort(). It considers the last
// element as pivot and moves all smaller element to left of it and
// greater elements to right. This function is used by randomPartition()
int partition(int arr[], int l, int r)
{
int x = arr[r], i = l;
for (int j = l; j <= r - 1; j++)
{
if (arr[j] <= x) // arr[j] is not greater than the pivot, so swap it into the left part
{
swap(&arr[i], &arr[j]);
i++;
}
}
swap(&arr[i], &arr[r]); // swap the pivot
return i;
}
// Picks a random pivot element between l and r and partitions
// arr[l..r] around the randomly picked element using partition()
int randomPartition(int arr[], int l, int r)
{
int n = r-l+1;
int pivot = rand() % n;
swap(&arr[l + pivot], &arr[r]);
return partition(arr, l, r);
}
// Driver program to test above methods
int main()
{
int arr[] = {12, 3, 5, 7, 4, 19, 26};
int n = sizeof(arr)/sizeof(arr[0]), k = 3;
cout << "K'th smallest element is " << kthSmallest(arr, 0, n-1, k);
return 0;
}
The worst-case time complexity of the above solution is still O(n²): in the worst case, the randomized function may always pick a corner element. The expected time complexity of the above randomized QuickSelect is Θ(n).
Have a priority queue (max-heap) created.
Insert all the elements into the heap.
Call poll() k times; the kth value polled is the answer.
public static int getKthLargestElements(int[] arr, int k)
{
    PriorityQueue<Integer> pq = new PriorityQueue<>((x, y) -> (y - x)); // max-heap
    // insert all the elements into the heap
    for (int ele : arr)
        pq.offer(ele);
    // call poll() k times; the kth poll returns the kth largest
    int result = 0;
    for (int i = 0; i < k; i++)
        result = pq.poll();
    return result;
}
This is an implementation in Javascript.
If you release the constraint that you cannot modify the array, you can avoid the use of extra memory by using two indexes to identify the "current partition" (in classic quicksort style - http://www.nczonline.net/blog/2012/11/27/computer-science-in-javascript-quicksort/).
function kthMax(a, k){
var size = a.length;
var pivot = a[ parseInt(Math.random()*size) ]; //Another choice could have been (size / 2)
//Create an array with all element lower than the pivot and an array with all element higher than the pivot
var i, lowerArray = [], upperArray = [];
for (i = 0; i < size; i++){
var current = a[i];
if (current < pivot) {
lowerArray.push(current);
} else if (current > pivot) {
upperArray.push(current);
}
}
//Which one should I continue with?
if(k <= upperArray.length) {
//Upper
return kthMax(upperArray, k);
} else {
var newK = k - (size - lowerArray.length);
if (newK > 0) {
///Lower
return kthMax(lowerArray, newK);
} else {
//None ... it's the current pivot!
return pivot;
}
}
}
If you want to test how it perform, you can use this variation:
function kthMax (a, k, logging) {
var comparisonCount = 0; //Number of comparison that the algorithm uses
var memoryCount = 0; //Number of integers in memory that the algorithm uses
var _log = logging;
if(k < 0 || k >= a.length) {
if (_log) console.log ("k is out of range");
return false;
}
function _kthmax(a, k){
var size = a.length;
var pivot = a[parseInt(Math.random()*size)];
if(_log) console.log("Inputs:", a, "size="+size, "k="+k, "pivot="+pivot);
// This should never happen. Just a nice check in this exercise
// if you are playing with the code to avoid never ending recursion
if(typeof pivot === "undefined") {
if (_log) console.log ("Ops...");
return false;
}
var i, lowerArray = [], upperArray = [];
for (i = 0; i < size; i++){
var current = a[i];
if (current < pivot) {
comparisonCount += 1;
memoryCount++;
lowerArray.push(current);
} else if (current > pivot) {
comparisonCount += 2;
memoryCount++;
upperArray.push(current);
}
}
if(_log) console.log("Pivoting:",lowerArray, "*"+pivot+"*", upperArray);
if(k <= upperArray.length) {
comparisonCount += 1;
return _kthmax(upperArray, k);
} else if (k > size - lowerArray.length) {
comparisonCount += 2;
return _kthmax(lowerArray, k - (size - lowerArray.length));
} else {
comparisonCount += 2;
return pivot;
}
/*
* BTW, this is the logic for kthMin if we want to implement that... ;-)
*
if(k <= lowerArray.length) {
return kthMin(lowerArray, k);
} else if (k > size - upperArray.length) {
return kthMin(upperArray, k - (size - upperArray.length));
} else
return pivot;
*/
}
var result = _kthmax(a, k);
return {result: result, iterations: comparisonCount, memory: memoryCount};
}
The rest of the code is just to create some playground:
function getRandomArray (n){
var ar = [];
for (var i = 0, l = n; i < l; i++) {
ar.push(Math.round(Math.random() * l))
}
return ar;
}
//Create a random array of 50 numbers
var ar = getRandomArray (50);
Now, run your tests a few times.
Because of Math.random() it will produce different results every time:
kthMax(ar, 2, true);
kthMax(ar, 2);
kthMax(ar, 2);
kthMax(ar, 2);
kthMax(ar, 2);
kthMax(ar, 2);
kthMax(ar, 34, true);
kthMax(ar, 34);
kthMax(ar, 34);
kthMax(ar, 34);
kthMax(ar, 34);
kthMax(ar, 34);
If you test it a few times, you can see even empirically that the number of iterations is, on average, O(n) ~= constant * n, and that the value of k does not affect the algorithm.
I came up with this algorithm, and it seems to be O(n):
Let's say k=3 and we want to find the 3rd largest item in the array. I would create three variables and compare each item of the array with the minimum of these three variables. If the array item is greater than our minimum, we would replace the min variable with the item value. We continue the same thing until the end of the array. The minimum of our three variables is the 3rd largest item in the array.
define variables a=0, b=0, c=0
iterate through the array items
find minimum a,b,c
if item > min then replace the min variable with item value
continue until end of array
the minimum of a,b,c is our answer
And, to find Kth largest item we need K variables.
Example: (k=3)
[1,2,4,1,7,3,9,5,6,2,9,8]
Final variable values:
a=7 (answer)
b=8
c=9
Can someone please review this and let me know what I am missing?
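In review, two things stand out: finding the minimum of the k variables costs O(k) per array element, so the approach is O(nk) rather than O(n) (linear only for a constant k); and tracing the example above, the slots actually finish as 8, 9, 9, so the 3rd largest (counting duplicates) is 8, not 7. A minimal sketch of the idea generalized to k slots (my own illustration, hypothetical names):
import java.util.Arrays;
public class KthBySlots {
    // k slots; each element displaces the minimum slot if larger, so the
    // minimum slot ends up holding the kth largest value. Cost is O(n*k).
    static int kthLargest(int[] a, int k) {
        int[] slots = new int[k];
        Arrays.fill(slots, Integer.MIN_VALUE);
        for (int item : a) {
            int min = 0;                              // index of the smallest slot
            for (int i = 1; i < k; i++)
                if (slots[i] < slots[min]) min = i;
            if (item > slots[min]) slots[min] = item; // displace it
        }
        int min = 0;
        for (int i = 1; i < k; i++)
            if (slots[i] < slots[min]) min = i;
        return slots[min];                            // the kth largest
    }
    public static void main(String[] args) {
        int[] a = {1, 2, 4, 1, 7, 3, 9, 5, 6, 2, 9, 8};
        System.out.println(kthLargest(a, 3)); // prints 8 (slots end as 8, 9, 9)
    }
}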
Here is the implementation of the algorithm eladv suggested (I also put here the implementation with random pivot):
public class Median {
public static void main(String[] s) {
int[] test = {4,18,20,3,7,13,5,8,2,1,15,17,25,30,16};
System.out.println(selectK(test,8));
/*
int n = 100000000;
int[] test = new int[n];
for(int i=0; i<test.length; i++)
test[i] = (int)(Math.random()*test.length);
long start = System.currentTimeMillis();
random_selectK(test, test.length/2);
long end = System.currentTimeMillis();
System.out.println(end - start);
*/
}
public static int random_selectK(int[] a, int k) {
if(a.length <= 1)
return a[0];
int r = (int)(Math.random() * a.length);
int p = a[r];
int small = 0, equal = 0, big = 0;
for(int i=0; i<a.length; i++) {
if(a[i] < p) small++;
else if(a[i] == p) equal++;
else if(a[i] > p) big++;
}
if(k <= small) {
int[] temp = new int[small];
for(int i=0, j=0; i<a.length; i++)
if(a[i] < p)
temp[j++] = a[i];
return random_selectK(temp, k);
}
else if (k <= small+equal)
return p;
else {
int[] temp = new int[big];
for(int i=0, j=0; i<a.length; i++)
if(a[i] > p)
temp[j++] = a[i];
return random_selectK(temp,k-small-equal);
}
}
public static int selectK(int[] a, int k) {
if(a.length <= 5) {
Arrays.sort(a);
return a[k-1];
}
int p = median_of_medians(a);
int small = 0, equal = 0, big = 0;
for(int i=0; i<a.length; i++) {
if(a[i] < p) small++;
else if(a[i] == p) equal++;
else if(a[i] > p) big++;
}
if(k <= small) {
int[] temp = new int[small];
for(int i=0, j=0; i<a.length; i++)
if(a[i] < p)
temp[j++] = a[i];
return selectK(temp, k);
}
else if (k <= small+equal)
return p;
else {
int[] temp = new int[big];
for(int i=0, j=0; i<a.length; i++)
if(a[i] > p)
temp[j++] = a[i];
return selectK(temp,k-small-equal);
}
}
private static int median_of_medians(int[] a) {
int[] b = new int[a.length/5];
int[] temp = new int[5];
for(int i=0; i<b.length; i++) {
for(int j=0; j<5; j++)
temp[j] = a[5*i + j];
Arrays.sort(temp);
b[i] = temp[2];
}
return selectK(b, b.length/2 + 1);
}
}
It is similar to the quicksort strategy, where we pick an arbitrary pivot and bring the smaller elements to its left and the larger to its right.
public static int kthElInUnsortedList(List<int> list, int k)
{
if (list.Count == 1)
return list[0];
List<int> left = new List<int>();
List<int> right = new List<int>();
int pivotIndex = list.Count / 2;
int pivot = list[pivotIndex]; //arbitrary
for (int i = 0; i < list.Count; i++)
{
    if (i == pivotIndex) continue; // skip the pivot itself
    int currentEl = list[i];
    if (currentEl < pivot)
        left.Add(currentEl);
    else
        right.Add(currentEl);
}
if (k == left.Count + 1)
return pivot;
if (left.Count < k)
return kthElInUnsortedList(right, k - left.Count - 1);
else
return kthElInUnsortedList(left, k);
}
See the explanation at the end of this link:
http://www.geeksforgeeks.org/kth-smallestlargest-element-unsorted-array-set-3-worst-case-linear-time/
You can find the kth smallest element in O(n log(max_value - min_value)) time and constant extra space, if we consider an array of integers only (each scan is O(n), and the value range is halved log(max_value - min_value) times).
The approach is to do a binary search on the range of array values: with the min_value and max_value of the array, both in integer range, we can do a binary search on that range.
We can write a comparator function which tells us, for any candidate value, whether it is the kth smallest, smaller than the kth smallest, or bigger than the kth smallest.
Do the binary search until you reach the kth smallest number.
Here is the code for that
class Solution:
    def _iskthsmallest(self, A, val, k):
        less_count, equal_count = 0, 0
        for i in range(len(A)):
            if A[i] == val: equal_count += 1
            if A[i] < val: less_count += 1
        if less_count >= k: return 1
        if less_count + equal_count < k: return -1
        return 0

    def kthsmallest_binary(self, A, min_val, max_val, k):
        if min_val == max_val:
            return min_val
        mid = (min_val + max_val) // 2  # integer midpoint (floor division)
        iskthsmallest = self._iskthsmallest(A, mid, k)
        if iskthsmallest == 0: return mid
        if iskthsmallest > 0: return self.kthsmallest_binary(A, min_val, mid, k)
        return self.kthsmallest_binary(A, mid+1, max_val, k)

    # @param A : tuple of integers
    # @param k : integer
    # @return an integer
    def kthsmallest(self, A, k):
        if not A: return 0
        if k > len(A): return 0
        min_val, max_val = min(A), max(A)
        return self.kthsmallest_binary(A, min_val, max_val, k)
What I would do is this:
initialize empty doubly linked list l
for each element e in array
if e larger than head(l)
make e the new head of l
if size(l) > k
remove last element from l
the last element of l should now be the kth largest element
You can simply store pointers to the first and last element in the linked list. They only change when updates to the list are made.
Update:
initialize empty sorted tree l
for each element e in array
if e between head(l) and tail(l)
insert e into l // O(log k)
if size(l) > k
remove last element from l
the last element of l should now be the kth largest element
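A minimal sketch of this bounded-structure idea in Java (my own illustration), using a min-heap of size k in place of the list/tree, which gives the same O(n log k) behavior:
import java.util.PriorityQueue;
public class KthByBoundedHeap {
    // Keep only the k largest values seen so far in a min-heap; evict the
    // smallest whenever the heap grows past k. Total cost is O(n log k).
    static int kthLargest(int[] a, int k) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(); // min-heap
        for (int e : a) {
            heap.offer(e);
            if (heap.size() > k)
                heap.poll();       // evict the smallest of the k+1
        }
        return heap.peek();        // smallest of the k largest kept
    }
    public static void main(String[] args) {
        System.out.println(kthLargest(new int[]{3, 1, 4, 1, 5, 9, 2, 6}, 3)); // prints 5
    }
}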
First we can build a BST from the unsorted array, which takes O(n log n) time on average (not O(n)), and from the BST, if each node is augmented with its subtree size, we can find the kth smallest element in O(log n), which overall counts to an order of O(n log n); a sketch follows below.
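A sketch of this BST idea (my own illustration, hypothetical names; no balancing, so the O(log n) select holds only while the tree stays balanced, e.g. for random input order):
public class OrderStatBST {
    static class Node {
        int val, size = 1;   // size counts the nodes in this subtree
        Node left, right;
        Node(int v) { val = v; }
    }
    static Node insert(Node root, int v) {
        if (root == null) return new Node(v);
        if (v < root.val) root.left = insert(root.left, v);
        else root.right = insert(root.right, v);
        root.size++;
        return root;
    }
    static int kthSmallest(Node root, int k) {      // 1-based rank
        int leftSize = (root.left == null) ? 0 : root.left.size;
        if (k <= leftSize) return kthSmallest(root.left, k);
        if (k == leftSize + 1) return root.val;
        return kthSmallest(root.right, k - leftSize - 1);
    }
    public static void main(String[] args) {
        Node root = null;
        for (int v : new int[]{12, 3, 5, 7, 4, 19, 26}) root = insert(root, v);
        System.out.println(kthSmallest(root, 3)); // prints 5
    }
}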