Get min and max of unsorted list - sorting

I am hypothetically given an unsorted list of x numbers and I would like to find the minimum and maximum numbers in the list.
Based on the research I have done, the most efficient sorting method seems to be quicksort.

Sorting is at best an O(N log N) operation, and just to sort a list you already have to look at each value at least once, which is O(N).
It would be faster to simply loop through the array once and keep track of the smallest and biggest numbers seen so far:
int[] unordered = { 5, 9, 4, 2, 6, 4 };
// start from the first element rather than 0, so this also works
// when every value is positive (or every value is negative)
int min = unordered[0], max = unordered[0];
for (int i = 0; i < unordered.Length; i++)
{
    if (min > unordered[i]) min = unordered[i];
    if (max < unordered[i]) max = unordered[i];
}

You don't need to sort the values and take the first and last indexed values. The following simply iterates over the array of numbers, checks each one against the minNum and maxNum values, and sets/resets those accordingly.
let numbers = [10, 7, 4, 6, 3, 77, 232, 56, 99];
let minNum = numbers[0];
let maxNum = numbers[0];
numbers.forEach(function(number) {
    if (number < minNum) { minNum = number; }
    if (number > maxNum) { maxNum = number; }
});
console.log(minNum) // gives 3
console.log(maxNum) // gives 232

Related

How to solve "fixed size maximum subarray" using divide and conquer approach?

Disclaimer: I know this problem can be solved very efficiently with a single pass over the array, but I am interested in doing it with divide and conquer because it is a bit different from the typical problems we tackle with divide and conquer.
Suppose you are given a floating point array X[1:n] of size n and interval length l. The problem is to design a divide and conquer algorithm to find the sub-array of length l from the array that has the maximum sum.
Here is what I came up with. For an array of length n there are n-l+1 sub-arrays of l consecutive elements. For example, for an array of length n = 10 and l = 3, there will be 8 sub-arrays of length 3.
Now, to divide the problem into two halves, I decided to break the array at (n-l+1)/2 so that an equal number of sub-arrays is distributed to both halves of my division, as depicted in the algorithm below. Again, for n = 10, l = 3, n-l+1 = 8, so I divided the problem at (n-l+1)/2 = 4. But for the 4th sub-array I need array elements up to index 6, i.e. (n+l-1)/2.
void FixedLengthMS(input: X[1:n], l, output: k, max_sum)
{
    if (l == n) {   // only one sub-array
        max_sum = Sumof(X[1:n]);
        k = 1;
        return;
    }
    int kl, kr;
    float sum_l, sum_r;
    FixedLengthMS(X[1:(n+l-1)/2], l, kl, sum_l);
    FixedLengthMS(X[(n-l+3)/2:n], l, kr, sum_r);
    if (sum_l >= sum_r) {
        max_sum = sum_l;
        k = kl;
    } else {
        max_sum = sum_r;
        k = (n-l+1)/2 + kr;
    }
}
Note, to clarify the array indexing:
for the sub-array starting at (n-l+1)/2 we need array elements up to (n-l+1)/2 + l - 1 = (n+l-1)/2.
My concern:
To apply divide and conquer I have used some data elements in both halves, so I am looking for another method that avoids the extra storage.
A faster method would also be appreciated.
Please ignore the syntax of the code section; I am just trying to give an overview of the algorithm.
You don't need divide and conquer. A simple one-pass algorithm can be used for the task. Let's suppose that the array is big enough. Then:
double sum = 0;
for (size_t i = 0; i < l; ++i)        // sum of the first window X[0..l-1]
    sum += X[i];
size_t max_index = 0;
double max_sum = sum;
for (size_t i = 0; i + l < n; ++i) {  // slide the window one step at a time
    sum += X[i + l] - X[i];           // the window now starts at i + 1
    if (sum > max_sum) {
        max_sum = sum;
        max_index = i + 1;
    }
}

Any faster way to find the number of "lucky triples"?

I am working on a code challenge problem -- "find lucky triples". A "lucky triple" is defined as a combination (lst[i], lst[j], lst[k]) in a list lst with i < j < k, where lst[i] divides lst[j] and lst[j] divides lst[k].
My task is to find the number of lucky triples in a given list. The brute force way is to use three loops, but it takes too much time to solve the problem. I wrote such a solution and the system responded "time exceeded". The problem looks silly and easy, but the array is unsorted, so general methods like binary search do not work. I have been stuck on the problem for a day and hope someone can give me a hint. I am seeking a way to solve the problem faster; at least the time complexity should be lower than O(N^3).
A simple dynamic programming-like algorithm will do this in quadratic time and linear space. You just have to maintain a counter c[i] for each item in the list that represents the number of previous integers that divide L[i].
Then, as you go through the list and test each integer L[k] against all previous items L[j], if L[j] divides L[k], you just add c[j] (which could be 0) to your global counter of triples, because that also implies that there exist exactly c[j] items L[i] such that L[i] divides L[j] and i < j.
int c[n] = {0}   // c[i] = number of previous values that divide L[i]
int nbTriples = 0
for k = 0 to n-1
    for j = 0 to k-1
        if (L[k] % L[j] == 0)
            c[k]++
            nbTriples += c[j]
return nbTriples
There may be some better algorithm that uses fancy discrete maths to do it faster, but if O(n^2) is ok, this will do just fine.
In regard to your comment:
Why DP? We have something that can clearly be modeled as having a left to right order (DP orange flag), and it feels like reusing previously computed values could be interesting, because the brute force algorithm does the exact same computations a lot of times.
How to get from that to a solution? Run a simple example (hint: it should better be by treating input from left to right). At step i, compute what you can compute from this particular point (ignoring everything on the right of i), and try to pinpoint what you compute over and over again for different i's: this is what you want to cache. Here, when you see a potential triple at step k (L[k] % L[j] == 0), you have to consider what happens on L[j]: "does it have some divisors on its left too? Each of these would give us a new triple. Let's see... But wait! We already computed that on step j! Let's cache this value!" And this is when you jump on your seat.
Full working solution in python:
c = [0] * len(l)   # l is the input list
count = 0
for i in range(len(l)):
    for j in range(i):
        if l[i] % l[j] == 0:
            c[i] += 1
            count += c[j]
print(c)
print(count)
Read up on the Sieve of Eratosthenes, a common technique for finding prime numbers, which can be adapted to find your 'lucky triples'. Essentially, you iterate your list in increasing value order, and for each value you multiply it by an increasing factor until the product is larger than the largest list element; each time one of these multiples equals another value in the list, that value is divisible by the base number. If the list is sorted when given to you, then the i < j < k requirement is also satisfied.
e.g. Given the list [3, 4, 8, 15, 16, 20, 40]:
Start at 3, which has multiples [6, 9, 12, 15, 18 ... 39] within the range of the list. Of those multiples, only 15 is contained in the list, so record under 15 that it has a factor 3.
Proceed to 4, which has multiples [8, 12, 16, 20, 24, 28, 32, 36, 40]. Mark those as having a factor 4.
Continue through the list. When you reach an element that has an existing known factor, then if you find any multiples of that number in the list, then you have a triple. In this case, for 16, this has a multiple 32 which is in the list. So now you know that 32 is divisible by 16, which is divisible by 4. Whereas for 15, that has no multiples in the list, so there is no value that can form a triplet with 3 and 15.
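For what it's worth, here is a rough Python sketch of that multiples idea. The function name is mine, and it assumes the list is already sorted ascending and contains distinct positive integers; duplicates would need the index-based bookkeeping from the other answers.
def count_lucky_triples_by_multiples(lst):
    # assumes lst is sorted ascending, non-empty, has no duplicates, and all values are positive
    in_list = set(lst)
    hi = lst[-1]
    divisors_seen = {v: 0 for v in lst}       # smaller list values that divide v
    triples = 0
    for v in lst:                             # increasing order, so divisors_seen[v] is already final
        m = 2 * v
        while m <= hi:
            if m in in_list:
                triples += divisors_seen[v]   # v is the middle of that many triples ending at m
                divisors_seen[m] += 1
            m += v
    return triples

# count_lucky_triples_by_multiples([1, 2, 3, 4, 5, 6])  ->  3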
A precomputation step to the problem can help reduce time complexity.
Precomputation Step:
For every element lst[i], iterate over the rest of the array to find the elements lst[j] (j > i) such that lst[j] % lst[i] == 0:
for (i = 0; i < n; i++)
{
    for (j = i + 1; j < n; j++)
    {
        if (a[j] % a[i] == 0)
            // mark those j's. You decide how to store this data
    }
}
This precomputation step takes O(n^2) time.
In the final step, use the data from the precomputation step to help find the triplets.
Form a graph: for each index, keep an array of the later indices whose values are multiples of the value at the current index. Then, for each index, add up the sizes of the multiple-lists of the indices it points to in the graph. This has a complexity of O(n^2).
For example, for a list {1,2,3,4,5,6} there will be an array of the multiples. The graph will look like
{ 0:[1,2,3,4,5], 1:[3,5], 2: [5], 3:[],4:[], 5:[]}
So the total triplets will be {0->1->3}, {0->1->5} and {0->2->5}, i.e. 3.
package com.welldyne.mx.dao.core;

import java.util.LinkedList;
import java.util.List;

public class LuckyTriplets {

    public static void main(String[] args) {
        int[] integers = new int[2000];
        for (int i = 1; i < 2001; i++) {
            integers[i - 1] = i;
        }
        long start = System.currentTimeMillis();
        int n = findLuckyTriplets(integers);
        long end = System.currentTimeMillis();
        System.out.println((end - start) + " ms");
        System.out.println(n);
    }

    private static int findLuckyTriplets(int[] integers) {
        List<Integer>[] indexMultiples = new LinkedList[integers.length];
        for (int i = 0; i < integers.length; i++) {
            indexMultiples[i] = getMultiples(integers, i);
        }
        int luckyTriplets = 0;
        for (int i = 0; i < integers.length - 1; i++) {
            luckyTriplets += getLuckyTripletsFromMultiplesMap(indexMultiples, i);
        }
        return luckyTriplets;
    }

    private static int getLuckyTripletsFromMultiplesMap(List<Integer>[] indexMultiples, int n) {
        int sum = 0;
        for (int i = 0; i < indexMultiples[n].size(); i++) {
            sum += indexMultiples[(indexMultiples[n].get(i))].size();
        }
        return sum;
    }

    private static List<Integer> getMultiples(int[] integers, int n) {
        List<Integer> multiples = new LinkedList<>();
        for (int i = n + 1; i < integers.length; i++) {
            if (isMultiple(integers[n], integers[i])) {
                multiples.add(i);
            }
        }
        return multiples;
    }

    /*
     * if b is the multiple of a
     */
    private static boolean isMultiple(int a, int b) {
        return b % a == 0;
    }
}
I just wanted to share my solution, which passed. Basically, the problem can be condensed to a tree problem. You need to pay attention to the wording of the question: it treats numbers as different based on index, not value, so {1,1,1} has only 1 triple, but {1,1,1,1} has 4. The constraint is {l[i], l[j], l[k]} such that l[i] divides l[j], l[j] divides l[k], and i < j < k.
def solution(l):
    count = 0
    data = l
    max_element = max(data)
    tree_list = []
    for p, element in enumerate(data):
        if element == 0:
            tree_list.append([])
        else:
            temp = []
            for el in data[p+1:]:
                if el % element == 0:
                    temp.append(el)
            tree_list.append(temp)
    for p, element_list in enumerate(tree_list):
        data[p] = 0
        temp = data[:]
        for element in element_list:
            pos_element = temp.index(element)
            count += len(tree_list[pos_element])
            temp[pos_element] = 0
    return count

count the sequence that has the max sum in array O(N)

If I want to count how many numbers are in the sequence of the array that has the max sum, how can I do it within a limit of O(n) time complexity?
For example : {1,2,3,4,-3} the output will be 4 because the sum of 1+2+3+4 is the maximum sum and there are 4 numbers in that sequence
I know how to do it with O(N^2) time complexity but not with O(n). Help? :)
I think you can iterate like this:
MaxSum = 0;
CurrentSum = 0;
MaxLen = 0;
CurrentLen = 0;
Index = GetFirstPositiveValue();
// This function returns the first Index where Array[Index] > 0
// O(n)

while (Index < Array.Length()) {
    // general loop to parse the whole array
    while (Index < Array.Length() && Array[Index] > 0) {
        CurrentSum += Array[Index];
        CurrentLen++;
        Index++;
    }
    // We computed a sum of positive integers; we store the values
    // if it is higher than the current max
    if (CurrentSum > MaxSum) {
        MaxSum = CurrentSum;
        MaxLen = CurrentLen;
    }
    // We keep summing through the negative values (zeros are absorbed here too)
    while (Index < Array.Length() && Array[Index] <= 0) {
        CurrentSum += Array[Index];
        CurrentLen++;
        Index++;
    }
    // We encountered a positive value (or the end of the array).
    // We reset the current values only if we got to a negative sum
    if (CurrentSum < 0) {
        CurrentSum = 0;
        CurrentLen = 0;
    }
}
// At this point, MaxLen is what you want, and we only went through
// the array once in the while loop.
Start on the first positive element. If every element is negative, then just pick the highest and the problem is over, this is a 1 element sequence.
We keep on summing as long as we have positive values, so we have a current max value. When we have a negative, we check if the current max is higher than the stored max. If so, we replace the stored max and sequence length by the new values.
Now, we sum negative numbers. When we find another positive, we have to check something:
If the current sum is positive, then we can still have a max sum with this sequence. If it's negative, then we can throw the current sum away, because the max sum won't contain it:
In {1,-2,3,4}, 3+4 is greater than 1-2+3+4
As long as we haven't been through the entire array, we restart this process. We only reset the sequence when we have a subsequence generating a negative sum, and we store the max values only if we have a greater value.
I think this works as intended, and we only go through the array one or two times. So it's O(n)
I hope that's understandable, I have trouble making my thoughts clear. Executing this algorithm with small examples such as {1,2,3,-4,5} / {1,2,3,-50,5} / {1,2,3,-50,4,5} may help if I'm not clear enough :)
If you know the maximum sum of a subarray at the end of an array of length N, you can trivially calculate it for one of length N+1:
[..., X] has max subsum S
[..., X, Y] has max subsum max(0, S + Y)
since either you include Y or you have an empty subarray (since the subarray is at the end of the list).
You can find all maximum sums for subarrays ending at any position by building this from an empty list:
[] S = 0
[1] S = 1
[1, 2] S = 3
[1, 2, -4] S = 0
[1, 2, -4, 5] S = 5
You then only need to keep track of the maximum and its width. Here is some Python code demonstrating the algorithm.
def ranges(values):
    width = cum_sum = 0
    for value in values:
        cum_sum += value
        width += 1
        if cum_sum < 0:
            width = cum_sum = 0
        yield (cum_sum, width)

total, width = max(ranges([-2, 1, 2, 3, -8, 4, -3]))
total, width
#>>> (6, 3)

checking if 2 numbers of array add up to I

I saw a interview question as follows:
Given an unsorted array of integers A and an integer I, find out if any two members of A add up to I.
Any clues?
The time complexity should be as low as possible.
Insert the elements into a hashtable.
While inserting x, check if I-x already exists. O(n) expected time.
Otherwise, sort the array ascending (from index 0 to n-1). Have two pointers, one at max and one at min (call them M and m respectively).
If a[M] + a[m] > I then M--
If a[M] + a[m] < I then m++
If a[M] + a[m] == I you have found it
If m > M, no such numbers exist.
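Minimal Python sketches of both ideas (the function names are mine; this is just the approach above written out, not any particular library API):
def has_pair_hash(a, target):
    seen = set()
    for x in a:
        if target - x in seen:      # check for I - x before inserting x
            return True
        seen.add(x)
    return False

def has_pair_two_pointer(a, target):
    a = sorted(a)                   # ascending, index 0 to n-1
    m, M = 0, len(a) - 1
    while m < M:
        s = a[m] + a[M]
        if s == target:
            return True
        if s > target:
            M -= 1                  # sum too big: drop the largest remaining value
        else:
            m += 1                  # sum too small: drop the smallest remaining value
    return False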
If you know the range the integers are within, you can use a counting-sort-like solution where you scan over the array and count occurrences into an auxiliary array. E.g. you have the integers
input = [0,1,5,2,6,4,2]
And you create an array like this:
count = int[7]
which (in Java, C#, etc.) is suited for counting integers between 0 and 6.
foreach integer in input
    count[integer] = count[integer] + 1
This will give you the array [1,1,2,0,1,1,1]. Now you can scan over this array (half of it) and check whether there are integers which add up to I, like
for j = 0 to count.length - 1
    if count[j] != 0 and count[I - j] != 0 then // check for array out-of-bounds here,
                                                // and require count[j] >= 2 when j == I - j
        WUHUU! the integers j and I - j add up
Overall this algorithm gives you O(n + k), where n comes from the scan over the input of length n and k from the scan over the count array of length k (integers between 0 and k - 1). This means that if n > k then you have a guaranteed O(n) solution.
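A runnable sketch of that counting idea, assuming all values lie in 0..max_value (the explicit max_value parameter and the handling of the j == I - j case are my additions):
def has_pair_counting(nums, target, max_value):
    count = [0] * (max_value + 1)
    for v in nums:
        count[v] += 1
    for j in range(min(target, max_value) + 1):
        r = target - j
        if r > max_value:
            continue
        if j == r:
            if count[j] >= 2:        # need two copies of the same value
                return True
        elif count[j] and count[r]:
            return True
    return False

# has_pair_counting([0, 1, 5, 2, 6, 4, 2], 6, 6)  ->  True  (0 + 6, 1 + 5, 2 + 4)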
For example, loop over the numbers, adding each number's needed complement to a set or hash; if a number is found already in the set, just return it.
>>> A = [11,3,2,9,12,15]
>>> I = 14
>>> S = set()
>>> for x in A:
... if x in S:
... print I-x, x
... S.add(I-x)
...
11 3
2 12
>>>
sort the array
for each element X in A, perform a binary search for I-X. If I-X is in A (at a different index than X itself), we have a solution.
This is O(n log n).
If A contains integers in a given (small enough) range, we can use a trick to make it O(n):
we have an array V. For each element X in A, we first check if V[I-X] is > 0; if it is, we have a solution. Then we increment V[X].
(Checking before incrementing avoids matching X with itself when 2X = I.)
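A quick sketch of the binary-search variant using Python's bisect; the extra checks make sure I-X is found at a different index than X itself:
import bisect

def has_pair_binary_search(a, target):
    a = sorted(a)
    for i, x in enumerate(a):
        y = target - x
        j = bisect.bisect_left(a, y)
        if j < len(a) and a[j] == y and j != i:
            return True
        # if the search landed on x itself, a duplicate right after it also works
        if j == i and j + 1 < len(a) and a[j + 1] == y:
            return True
    return False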
public static boolean findSum2(int[] a, int sum) {
    if (a.length == 0) {
        return false;
    }
    Arrays.sort(a);
    int i = 0;
    int j = a.length - 1;
    while (i < j) {
        int tmp = a[i] + a[j];
        if (tmp == sum) {
            System.out.println(a[i] + "+" + a[j] + "=" + sum);
            return true;
        } else if (tmp > sum) {
            j--;
        } else {
            i++;
        }
    }
    return false;
}
O(n) time and O(1) space
If the array is sorted there is a solution in O(n) time complexity.
Suppose our array is
array = {0, 1, 3, 5, 8, 10, 14}
and x1 + x2 = k = 13, so the output should be 5, 8.
Take two pointers, one at the start of the array and one at the end of the array.
Add the elements at ptr1 and ptr2: array[ptr1] + array[ptr2].
If the sum > k then decrement ptr2, else increment ptr1.
Repeat the previous two steps while ptr1 < ptr2.
Same thing explained in detail here. Seems like an Amazon interview Question
http://inder-gnu.blogspot.com/2007/10/find-two-nos-in-array-whose-sum-x.html
For O(n log n): sort the array and, for each element A[j] (0 <= j < len(A)), compute I - A[j] and do a binary search for this value in the sorted array.
A hashmap (number as key, index as value) should work in O(n).
for each ele in the array
    if (sum - ele) is hashed and hashed value is not equal to index of ele
        print ele, sum - ele
    end-if
    hash ele as key and index as value
end-for
A Perl implementation to detect if a sorted array contains two integers that sum up to Number:
my @a = (11, 3, 2, 9, 12, 15);
my @b = sort { $a <=> $b } @a;
my %hash;
my $sum = 14;
my $index = 0;
foreach my $ele (@b) {
    my $sum_minus_ele = $sum - $ele;
    print "Trace: $ele :: $index :: $sum_minus_ele\n";
    if (exists($hash{$sum_minus_ele}) && $hash{$sum_minus_ele} != $index) {
        print "\tElement: ".$ele." :: Sum-ele: ".$sum_minus_ele."\n";
    }
    $hash{$ele} = $index;
    $index++;
}
This might be possible in the following way: before putting the elements into the hashmap, you can check if the element is greater than the required sum (assuming the numbers are non-negative). If it is, you can simply skip that element; else you can proceed with putting it into the hashmap. It is a slight improvement on your algorithm, although the overall time complexity remains the same.
This can be solved using a set with UNION-FIND-style operations, which can check in near-constant time whether an element is in the set.
So, the algorithm would be:
found = false;
foreach (x : array) {
    if find(I - x): found = true;
    else union(x);
}
FIND and UNION are effectively constant time.
Here is an O(n) solution in Java using O(n) extra space. It uses a HashSet to implement it:
http://www.dsalgo.com/UnsortedTwoSumToK.php
Here is a solution which takes duplicate entries into account. It is written in JavaScript and assumes the array is sorted. The solution runs in O(n) time and does not use any extra memory aside from a few variables. Choose a sorting algorithm of your choice (radix sort is O(kn)!) and then run the array through this baby.
var count_pairs = function(_arr, x) {
    if (!x) x = 0;
    var pairs = 0;
    var i = 0;
    var k = _arr.length - 1;
    if ((k + 1) < 2) return pairs;
    var halfX = x / 2;
    while (i < k) {
        var curK = _arr[k];
        var curI = _arr[i];
        var pairsThisLoop = 0;
        if (curK + curI == x) {
            // if midpoint and equal find combinations
            if (curK == curI) {
                var comb = 1;
                while (--k >= i) pairs += (comb++);
                break;
            }
            // count pair and k duplicates
            pairsThisLoop++;
            while (_arr[--k] == curK) pairsThisLoop++;
            // add k side pairs to running total for every i side pair found
            pairs += pairsThisLoop;
            while (_arr[++i] == curI) pairs += pairsThisLoop;
        } else {
            // if we are at a mid point
            if (curK == curI) break;
            var distK = Math.abs(halfX - curK);
            var distI = Math.abs(halfX - curI);
            if (distI > distK) while (_arr[++i] == curI);
            else while (_arr[--k] == curK);
        }
    }
    return pairs;
}
I solved this during an interview for a large corporation. They took it but not me.
So here it is for everyone.
Start at both sides of the array and slowly work your way inwards, making sure to count duplicates if they exist.
It only counts pairs but can be reworked to
find the pairs
find pairs < x
find pairs > x
Enjoy, and don't forget to bump if it's the best solution!
Split the array into two groups <= I/2 and > I/2. Then split those into <= I/4, > I/4 and <= 3I/4, > 3I/4.
Repeat for log(I) steps and check the pairs of buckets joining from the outside, e.g. <= I/8 and > 7I/8: if they both contain at least one element, then those elements add up to I.
This will take about n*log(I) + n/2 steps.
An implementation in python
def func(list, k):
    temp = {}  # temporary dictionary
    for i in range(len(list)):
        if list[i] in temp:  # if temp already has the key, just increment its value
            temp[list[i]] += 1
        else:  # else initialize the key in temp with count 0
            temp[list[i]] = 0
        # if the corresponding other value to make the sum k is in the dictionary,
        # and it's either not k/2 or the count for that number is more than 1
        if k - list[i] in temp and ((k / 2 != list[i]) or temp[list[i]] >= 1):
            return True
    return False
Input:
list is a list of numbers (A in the question above)...
k is the sum (I in the question above)....
The function outputs True if there exists a pair in the list whose sum is equal to k, and False otherwise.
I am using a dictionary whose key is the element in the array(list) and value is the count of that element(number of times that element is present in that list).
Average running time complexity is O(n).
This implementation also takes care of two important edge cases:
repeated numbers in the list and
not adding the same number twice.

Find the x smallest integers in a list of length n

You have a list of n integers and you want the x smallest. For example,
x_smallest([1, 2, 5, 4, 3], 3) should return [1, 2, 3].
I'll vote up unique runtimes within reason and will give the green check to the best runtime.
I'll start with O(n * x): Create an array of length x. Iterate through the list x times, each time pulling out the next smallest integer.
Edits
You have no idea how big or small these numbers are ahead of time.
You don't care about the final order, you just want the x smallest.
This is already being handled in some solutions, but let's say that while you aren't guaranteed a unique list, you aren't going to get a degenerate list such as [1, 1, 1, 1, 1] either.
You can find the k-th smallest element in O(n) time. This has been discussed on StackOverflow before. There are relatively simple randomized algorithms, such as QuickSelect, that run in O(n) expected time and more complicated algorithms that run in O(n) worst-case time.
Given the k-th smallest element you can make one pass over the list to find all elements less than the k-th smallest and you are done. (I assume that the result array does not need to be sorted.)
Overall run-time is O(n).
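A rough Python sketch of that approach (the names are mine, the pivot is chosen randomly, and ties with the x-th smallest value are padded in explicitly; a worst-case O(n) version would use median-of-medians instead):
import random

def quickselect(values, k):
    # k-th smallest value, 1-based; expected O(n), assumes 1 <= k <= len(values)
    pivot = random.choice(values)
    smaller = [v for v in values if v < pivot]
    equal = [v for v in values if v == pivot]
    if k <= len(smaller):
        return quickselect(smaller, k)
    if k <= len(smaller) + len(equal):
        return pivot
    return quickselect([v for v in values if v > pivot], k - len(smaller) - len(equal))

def x_smallest_select(values, x):
    kth = quickselect(values, x)
    below = [v for v in values if v < kth]     # one extra pass over the list
    return below + [kth] * (x - len(below))    # pad with copies of the x-th smallest

# x_smallest_select([1, 2, 5, 4, 3], 3)  ->  [1, 2, 3]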
Maintain the list of the x smallest so far in sorted order in a skip-list. Iterate through the array. For each element, find where it would be inserted in the skip list (log x time). If in the interior of the list, it is one of the smallest x so far, so insert it and remove the element at the end of the list. Otherwise do nothing.
Time O(n*log(x))
Alternative implementation: maintain the collection of the x smallest so far in a max-heap, compare each new element with the top element of the heap, and pop + insert the new element only if it is less than the top element. Since comparison with the top element is O(1) and pop/insert is O(log x), this is also O(n log(x)).
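Here is a minimal sketch of that heap variant using Python's heapq (which only provides a min-heap, so values are negated to simulate a max-heap of the x smallest seen so far; the function name is mine):
import heapq

def x_smallest_heap(values, x):
    heap = []                          # negated max-heap of the x smallest so far (assumes x >= 1)
    for v in values:
        if len(heap) < x:
            heapq.heappush(heap, -v)
        elif v < -heap[0]:             # smaller than the largest of the x kept
            heapq.heapreplace(heap, -v)
    return [-v for v in heap]          # the x smallest, in no particular order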
Add all n numbers to a heap and delete x of them. Complexity is O((n + x) log n). Since x is obviously less than n, it's O(n log n).
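In Python that is essentially the following; with heapq's O(n) heapify the bound tightens a little to O(n + x log n):
import heapq

def x_smallest_pop(values, x):
    heap = list(values)
    heapq.heapify(heap)                                # O(n)
    return [heapq.heappop(heap) for _ in range(x)]     # x pops at O(log n) each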
If the range of numbers (L) is known, you can do a modified counting sort.
given L, x, input[]
counts <- array[0..L]
for each number in input
increment counts[number]
next
#populate the output
index <- 0
xIndex <- 0
while xIndex < x and index <= L
if counts[index] > 0 then
decrement counts[index]
output[xIndex] = index
increment xIndex
else
increment index
end if
loop
This has a runtime of O(n + L) (with memory overhead of O(L)) which makes it pretty attractive if the range is small (L < n log n).
def x_smallest(items, x):
    result = sorted(items[:x])
    for i in items[x:]:
        if i < result[-1]:
            result[-1] = i
            j = x - 1
            while j > 0 and result[j] < result[j-1]:
                result[j-1], result[j] = result[j], result[j-1]
                j -= 1
    return result
Worst case is O(x*n), but will typically be closer to O(n).
Pseudocode:
def x_smallest(array<int> arr, int limit)
    array<int> ret = new array[limit]
    fill ret with INT_MAX
    for i in arr
        for j in range(0..limit)
            if (i < ret[j])
                swap(i, ret[j])   // keep the displaced value and let it settle further down
            endif
        endfor
    endfor
    return ret
enddef
In pseudo code:
y = length of list / 2
if (x > y)
iterate and pop off the (length - x) largest
else
iterate and pop off the x smallest
O(n/2 * x) ?
sort array
slice array 0 x
Choose the best sort algorithm and you're done: http://en.wikipedia.org/wiki/Sorting_algorithm#Comparison_of_algorithms
You can sort then take the first x values?
Java: with QuickSort O(n log n)
import java.util.Arrays;
import java.util.Random;

public class Main {

    public static void main(String[] args) {
        Random random = new Random(); // Random number generator
        int[] list = new int[1000];
        int length = 3;

        // Initialize array with positive random values
        for (int i = 0; i < list.length; i++) {
            list[i] = Math.abs(random.nextInt());
        }

        // Solution
        int[] output = findSmallest(list, length);

        // Display results
        for (int x : output)
            System.out.println(x);
    }

    private static int[] findSmallest(int[] list, int length) {
        // A tuned quicksort
        Arrays.sort(list);
        // Send back the correct length
        return Arrays.copyOf(list, length);
    }
}
It's pretty fast.
private static int[] x_smallest(int[] input, int x)
{
    int[] output = new int[x];
    for (int i = 0; i < x; i++) { // O(x)
        output[i] = input[i];
    }
    for (int i = x; i < input.Length; i++) { // + O(n-x)
        int current = input[i];
        int temp;
        for (int j = 0; j < output.Length; j++) { // * O(x)
            if (current < output[j]) {
                temp = output[j];
                output[j] = current;
                current = temp;
            }
        }
    }
    return output;
}
Looking at the complexity:
O(x + (n-x) * x) -- assuming x is some constant, O(n)
What about using a splay tree? Because of the splay tree's unique approach to adaptive balancing, it makes for a slick implementation of the algorithm, with the added benefit of being able to enumerate the x items in order afterwards. Here is some pseudocode.
public SplayTree GetSmallest(int[] array, int x)
{
    var tree = new SplayTree();
    for (int i = 0; i < array.Length; i++)
    {
        int max = tree.GetLargest();
        if (array[i] < max || tree.Count < x)
        {
            if (tree.Count >= x)
            {
                tree.Remove(max);
            }
            tree.Add(array[i]);
        }
    }
    return tree;
}
The GetLargest and Remove operations have an amortized complexity of O(log(n)), but because the last accessed item bubbles to the top, they would normally be O(1). So the space complexity is O(x) and the runtime complexity is O(n*log(x)). If the array happens to already be ordered, then this algorithm achieves its best-case complexity of O(n) with either an ascending or descending ordered array. However, a very odd or peculiar ordering could result in O(n^2) complexity. Can you guess how the array would have to be ordered for that to happen?
In Scala, and probably other functional languages, a no-brainer:
scala> List (1, 3, 6, 4, 5, 1, 2, 9, 4) sortWith ( _<_ ) take 5
res18: List[Int] = List(1, 1, 2, 3, 4)
