I need to evenly select n elements from an array. I guess the best way to explain is by example.
Say I have:
array [0,1,2,3,4], and I need to select 3 numbers: 0, 2, 4.
Of course, if the array length <= n, I just need to return the whole array.
I'm pretty sure there's a well-known algorithm for this. I've been trying to search, and I took a look at Introduction to Algorithms, but couldn't find anything that met my needs (I probably overlooked it).
The problem I'm having is that I can't figure out a way to scale this up to any array [ p..q ], selecting N evenly spaced elements.
Note: I can't just select the even elements, as in the example above.
A couple of other examples:
array[0,1,2,3,4,5,6], 3 elements ; I need to get 0,3,6
array[0,1,2,3,4,5], 3 elements ; I need to get 0, either 2 or 3, and 5
EDIT:
more examples:
array [0,1,2], 2 elems : 0,2
array [0,1,2,3,4,5,6,7], 5 elems : 0,2, either 3 or 4, 5,7
and yes, I'd like to include first and last elements always.
EDIT 2:
What I was thinking was something like: first + last element, then work my way up using the median value. Though I got stuck/confused when trying to do so.
I'll take a look at the algo you're posting. thanks!
EDIT 3:
Here's a souped-up version of incrediman's solution in PHP. It works with associative arrays as well, while retaining the keys.
<?php
/**
 * Selects $x elements (evenly distributed across $set) from $set
 *
 * @param $set array : array set to select from
 * @param $x   int   : number of elements to select. positive integer
 *
 * @return array|bool : selected set, bool false on failure
 */
function select ($set, $x) {
    // check params
    if (!is_array($set) || !is_int($x) || $x < 1)
        return false;
    $n = count($set);
    if ($n <= $x)
        return $set;
    $keys   = array_keys($set);
    $values = array_values($set);
    // $x == 1: return the median entry (also avoids a division by zero below)
    if ($x == 1) {
        $mid = (int) round(($n - 1) / 2);
        return array($keys[$mid] => $values[$mid]);
    }
    $selected = array();
    $step = ($n - 1) / ($x - 1);
    for ($i = 0; $i < $x; $i++) {
        $idx = (int) round($step * $i);
        $selected[$keys[$idx]] = $values[$idx];
    }
    return $selected;
}
?>
You can probably implement an Iterator but I don't need to take it that far.
Pseudo-code:
function Algorithm(int N, array A)
    float step = (A.size - 1) / (N - 1)   //set step size
    array R                               //declare return array
    for (int i = 0; i < N; i++)
        R.push(A[round(step * i)])        //push the element at each rounded multiple
                                          //of step onto R
    return R
Probably the easiest mistake to make here would be to cast step as an integer or round it at the beginning. However, in order to make sure that the correct elements are pulled, you must declare step as a floating point number, and round multiples of step as you are iterating through the array.
Tested example here in php:
<?php
function Algorithm($N, $A) {
    $step = (sizeof($A) - 1) / ($N - 1);
    for ($i = 0; $i < $N; $i++)
        echo $A[round($step * $i)] . " ";
    echo "\n";
}
//some of your test cases:
Algorithm(3, array(1,2,3));
Algorithm(5, array(0,1,2,3,4,5,6,7));
Algorithm(2, array(0,1,2));
Algorithm(3, array(0,1,2,3,4,5,6));
?>
Outputs:
1 2 3
0 2 4 5 7
0 2
0 3 6
(you can see your test cases in action and try new ones here: http://codepad.org/2eZp98eD)
Let n+1 be the number of elements you want, already bounded to the length of the array.
Then you want elements at indices 0/n, 1/n, ..., n/n of the way to the end of the array.
Let m+1 be the length of the array. Then your indices are round(m*i/n) (with the division done with floating point).
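A minimal Python sketch of that formula (variable names are mine; it assumes at least 2 elements are requested, as elsewhere in this thread):
def pick_evenly(arr, count):
    """Return `count` evenly spaced elements of arr, always keeping first and last."""
    if count >= len(arr):
        return list(arr)
    m = len(arr) - 1                 # m + 1 = length of the array
    n = count - 1                    # n + 1 = number of elements wanted
    return [arr[round(m * i / n)] for i in range(count)]

print(pick_evenly([0, 1, 2, 3, 4, 5, 6], 3))     # [0, 3, 6]
print(pick_evenly([0, 1, 2, 3, 4, 5, 6, 7], 5))  # [0, 2, 4, 5, 7]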
Your step size is (ArraySize-1)/(N-1).
Just add the step size to a floating point accumulator, and round off the accumulator to get the array index. Repeat until accumulator > array size.
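A rough Python sketch of that running-accumulator idea (my own code; as above, it assumes at least 2 elements are requested):
def pick_with_accumulator(arr, count):
    if count >= len(arr):
        return list(arr)
    step = (len(arr) - 1) / (count - 1)   # keep the step as a float
    acc = 0.0
    result = []
    while round(acc) <= len(arr) - 1:     # stop once the rounded index passes the end
        result.append(arr[round(acc)])
        acc += step
    return result

print(pick_with_accumulator([0, 1, 2, 3, 4, 5], 3))   # [0, 2, 5]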
It looks like you want to include both the first and last elements in your list.
If you want to pull X items from your list of N items, your step size will be (N-1)/(X-1). Just round however you want as you pull out each one.
Based on @Rex's answer. Pseudocode! Or some might even say it's JS
/// Selects |evenly spaced| elements from any given array. Handles all the edge cases.
function select(array, selectionCount) {
    let lastIndex = array.length - 1;      // highest index in the source array
    let gaps = selectionCount - 1;         // number of gaps between selected elements
    let resultsArray = [];                 // result array
    if (selectionCount < 1 || selectionCount > array.length) {
        console.log("Invalid selection count!");
        return resultsArray;
    }
    if (selectionCount == 1) {
        resultsArray.push(array[0]);       // nothing to space out; just take the first element
        return resultsArray;
    }
    for (let i = 0; i < selectionCount; i++) {
        let index = Math.round(lastIndex * i / gaps);
        if (index < array.length) {
            resultsArray.push(array[index]);
        } else {
            break;                         // index is past the end; do not proceed further
        }
    }
    return resultsArray;
}
I'm practicing for a coding interview. On a website I found some questions the company usually asks new juniors like me, and I would like to know if a better solution exists than this one (it's pseudocode):
"Given an array of N integers, find the X most frequent numbers (the data array may contain duplicates)"
V[N] = {...}  //Data array
C1[N] = {...} //Count array (stores the distinct numbers V[k])
C2[N] = {...} //Count array (stores the frequency of C1[k])
M[X] = {...}  //Most-frequent array
lastFreePositionInC = 0;
//iterate over the data array to count all occurrences of V[k]
for i = 0 to N
    indexOfViInC1 = checkIfViExistInC1(V[i], C1); //This iterates over C1 to find V[i]
    if indexOfViInC1 != -1
        C2[indexOfViInC1]++;
    else //Couldn't find the number, must be added to C1
        C1[lastFreePositionInC] = V[i]
        C2[lastFreePositionInC++] = 1
findXMostFrequent(M, X, C1, C2); //You can sort the C arrays, so this is just a merge sort
And yes, it's "illegal" to sort the data array to solve the challenge.
Note: this is not homework; it's for an internship project.
Situation:
You have a list of n groups of varying sizes,
No group can contain more than X elements,
Say you have a function merge(G1, G2) which adds all elements of group G2 into group G1 and removes G2 from the list of groups.
EDIT: Every element of a group is unique across all groups (i.e., if group 1 has an element a, then a does not exist in any other group).
Problem:
You need to minimize the number of groups by merging groups whose combined size does not exceed X.
My initial thought:
To use a greedy algorithm that functions as follows:
Sort the groups in decreasing order of size,
Then while the list's size > 0:
    Pop the largest group (let's call it GL) from the main list, and add it to a toBig list
    Then loop through the list until you find a group that can be merged with GL
    Merge the groups and add the merged group to a toRemove list
    Keep going, merging any group that fits
    Once the loop ends, remove all groups in toRemove from the main list
    Continue the while loop
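A rough Python sketch of that greedy as I read it (groups are plain lists, X is the size cap; names are mine and the input below is made up):
def greedy_merge(groups, X):
    remaining = sorted(groups, key=len, reverse=True)   # decreasing size
    merged = []
    while remaining:
        GL = remaining.pop(0)            # pop the largest group
        kept = []
        for g in remaining:
            if len(GL) + len(g) <= X:
                GL.extend(g)             # merge g into GL ("toRemove"); mutates the input lists
            else:
                kept.append(g)           # g stays in the main list
        remaining = kept
        merged.append(GL)
    return merged

print(len(greedy_merge([[1, 2, 3], [4, 5, 6], [7, 8], [9], [10]], X=4)))   # 3 groups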
What do you guys think about this approach, will it yield the minimum number of groups (or something close)? Is there a more elegant approach or a more efficient algorithm?
Thank you for the input
P.S. I attempted to search for this problem, but I have no idea what it is called, and searching for a description of it on SE and Google yielded no relevant results.
First, the actual element values don't matter as far as the solution goes. They're merely a merging detail. The only thing that matters is the number of elements in the group. In other words, create a "count list" that just has the counts.
Sort the list. Lowest-to-highest or highest-to-lowest, it doesn't matter. We're going to "pop" off the high end. Let's call the count list ginp and the output list gout. Let's call the popped element gbig.
So, if gbig is of size X, just add it to gout. Keep popping until we get a gbig < X. Now traverse ginp, hi-to-lo; call the current element gtry. If (gtry + gbig) <= X, merge gtry into gbig. If gbig hits X, add it to gout and start again. Note that since gbig will be getting smaller, it becomes easier to merge more and more.
Do this until ginp is exhausted. This is a "first fit" algorithm. It's a baseline to work from. Because of the sort, it may even be the best solution, but I suspect that you'll need a "best fit" which is a bit more complicated.
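A small Python sketch of that first-fit baseline, working on counts only (my own code and example data):
def first_fit(counts, X):
    ginp = sorted(counts)            # we'll pop the largest from the end
    gout = []
    while ginp:
        gbig = ginp.pop()            # largest remaining count
        i = len(ginp) - 1
        while i >= 0 and gbig < X:   # traverse hi-to-lo, merging whatever still fits
            if gbig + ginp[i] <= X:
                gbig += ginp.pop(i)  # merge gtry into gbig
            i -= 1
        gout.append(gbig)
    return gout

print(first_fit([4, 4, 3, 3, 2, 1], 10))   # [10, 7]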
Consider an alternate strategy. With a given gbig and a given ginp, they form a "current state". Suppose [hi-to-lo sort] that, after gbig is popped, ginp[0] will fit. In the first fit, we took it.
But, suppose we skipped it [just for grins] and chose ginp[1] instead as gtry and took that. Now, for the next addition to gbig, we can choose ginp[2] [if it fits], or we can skip. Repeat this for the remaining ginp (either select or skip) until either gbig hits X or we've moved past the end of ginp.
At the end, now pop another gbig and repeat, using only the elements not selected in the previous round. Note that you just keep recursing until ginp is finally exhausted. At each step, we select a fit from the remaining. Doing this recursively forms a [binary] tree (based on take/skip) that will enumerate all possible selections. Note that some nodes will not have a take because the next element in ginp is too big (e.g. gbig would overflow X)
Within this recursion, when ginp is exhausted, keep the minimum value for count of elements in gout (it too just needs to be a count). The path to the root node gives you what you need: the list of merge actions, etc. Save that anytime you get a new minimum gout.
The maximum tree depth will be <= n
UPDATE: Need test data? Here's a generator program:
#!/usr/bin/perl
# grpgen -- generate test data for group reduction problem
#
# arguments:
# "-X" - maximum value for X
# "-n" - maximum value for n
# "-T" - number of tests to generate
# "-O" - output file
# "-f" -- generate full element data
#
# NOTE: with no arguments or missing arguments will prompt
master(@ARGV);
exit(0);
# master -- master control
sub master
{
local(@argv) = @_;
select(STDOUT);
$| = 1;
$Xmax = getstr(2,"-X","maximum for X");
$nmax = getstr(2,"-n","maximum for n");
$tstmax = getstr(2,"-T","number of tests");
$keyf = getstr(1,"-f","generate full element data");
$ofile = getstr(0,"-O","output file name");
open($xfdst,">$ofile") ||
die("master: unable to open '$ofile' -- $!\n");
for ($tstcur = 1; $tstcur <= $tstmax; ++$tstcur) {
gentest();
}
close($xfdst);
}
# getstr -- get a string/number
sub getstr
{
my($numflg,$opt,$prompt) = @_;
my($arg);
my($askflg);
my($val);
{
# search command line for -whatever
foreach $arg (@argv) {
if ($arg =~ /^$opt(.*)$/) {
$val = $1;
$val = 1
if ($numflg && ($val eq ""));
last;
}
}
last if (defined($val));
$askflg = 1;
while (1) {
printf("Enter ")
if ($numflg != 1);
printf("%s",$prompt);
if ($numflg == 1) {
printf(" (0/1)? ");
}
else {
printf(": ");
}
$val = <STDIN>;
chomp($val);
if ($numflg == 0) {
last if ($val ne "");
next;
}
next unless ($val =~ /^\d+$/);
$val += 0;
last if ($val > 0);
last if ($numflg == 1);
}
}
unless ($askflg) {
printf("%s: %s\n",$prompt,$val);
}
$val;
}
# gentest -- generate a test
sub gentest
{
local($lhs);
local($pre);
$Xlim = getrand($Xmax);
$xfmt = fmtof($Xlim);
$nlim = getrand($nmax);
$nfmt = fmtof($nlim);
printf($xfdst "\n");
printf($xfdst "T=%d X=%d n=%d\n",$tstcur,$Xlim,$nlim);
for ($nidx = 1; $nidx <= $nlim; ++$nidx) {
$xcur = getrand($Xmax);
if ($keyf) {
gengrpx();
}
else {
gengrp();
}
}
genout();
}
# gengrp -- generate group (counts only)
sub gengrp
{
my($rhs);
$rhs = sprintf($xfmt,$xcur);
genout($rhs);
}
# gengrpx -- generate group (with element data)
sub gengrpx
{
my($elidx,$rhs);
$pre = sprintf("$nfmt:",$nidx);
# NOTE: this is all irrelevant semi-junk data, so just wing it
for ($elidx = 1; $elidx <= $xcur; ++$elidx) {
$rhs = sprintf($xfmt,$elidx);
genout($rhs);
}
genout();
}
# genout -- output data
sub genout
{
my($rhs) = @_;
{
if (defined($rhs)) {
last if ((length($pre) + length($lhs) + length($rhs)) < 78);
}
last if ($lhs eq "");
print($xfdst $pre,$lhs,"\n");
undef($lhs);
}
$lhs .= $rhs
if (defined($rhs));
}
# getrand -- get random number
sub getrand
{
my($lim) = @_;
my($val);
$val = int(rand($lim));
$val += 1;
$val;
}
# fmtof -- get number format
sub fmtof
{
my($num) = @_;
my($fmt);
$fmt = sprintf("%d",$num);
$fmt = length($fmt);
$fmt = sprintf(" %%%dd",$fmt);
$fmt;
}
In your problem, you never really care about the groups themselves, but rather their sizes. So I suggest we first rewrite the problem as a simpler yet equivalent problem with only integers: We will replace groups by their sizes, and replace the merge function by addition.
I will first give a simpler version of your algorithm (does the same thing if I understood it correctly), just because it ignores the irrelevant stuff like the groups themselves, the implementation details, etc. I will then give an example that shows why the algorithm is not optimal, and finally show how it is in fact within a factor of 2 of the optimal solution.
Algorithm
Input: the integer x>0 and a list of positive integers
Do
Find two numbers whose sum is at most x
Merge them
Repeat the above until no such two numbers exist
Return the final list
Why it is not optimal
Consider the following input:
x = 10
list: (3, 3, 3, 3, 4, 4)
In this case, the optimal solution would be to add ("merge") two 3's and a 4, twice, giving (3+3+4, 3+3+4), i.e., (10, 10).
However, your solution might decide to add the two 4's together first, which will result in the new list (9, 3, 8), which is longer than (10, 10).
In fact, even if you decide to always add the two largest numbers or the two smallest numbers, you would get the same result.
For any similar scheme I could think of, I could always come up with a counter-example.
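To make the counter-example concrete, here is a quick Python check of the sizes involved (just arithmetic; the lists mirror the ones above):
x = 10
sizes = [3, 3, 3, 3, 4, 4]
greedy = [3 + 3 + 3, 3, 4 + 4]       # merging the two 4's first ends with (9, 3, 8): three groups
optimal = [3 + 3 + 4, 3 + 3 + 4]     # (10, 10): two groups
assert sum(greedy) == sum(optimal) == sum(sizes)
assert all(g <= x for g in greedy + optimal)
print(len(greedy), "groups vs", len(optimal), "groups")   # 3 groups vs 2 groups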
Why it is approximately optimal
Your algorithm will always result in a list that contains at most one element that is less than or equal to floor(x/2). Indeed, if there were two such elements, the algorithm will find them and add them.
If your solution list had a size of m, then it follows that the total sum of all the elements is at least m*floor(x/2). Call this number S.
However, any optimal solution must have at least ceil(S/x) elements (otherwise they can't all be at most x). Therefore:
optimal >= ceil(S/x) >= ceil( m * floor(x/2) / x )
>= m*floor(x/2)/x ~ m/2
Thus the algorithm is within a factor of 2 of the optimal solution.
Saw this question recently:
Given 2 arrays, the 2nd array containing some of the elements of the 1st array, return the minimum window in the 1st array which contains all the elements of the 2nd array.
Eg :
Given A={1,3,5,2,3,1} and B={1,3,2}
Output : 3 , 5 (where 3 and 5 are indices in the array A)
Even though the range 0 to 3 also contains all the elements of B, the range 3 to 5 is returned since its length is smaller ( (5 - 3) < (3 - 0) ).
I had devised a solution, but I am not sure if it works correctly, and it is not efficient.
Please give an efficient solution for the problem. Thanks in advance.
A simple solution is to iterate through the list.
Have a left and right pointer, initially both at zero
Move the right pointer forwards until [L..R] contains all the elements (or quit if right reaches the end).
Move the left pointer forwards until [L..R] doesn't contain all the elements. See if [L-1..R] is shorter than the current best.
This is obviously linear time. You'll simply need to keep track of how many of each element of B is in the subarray for checking whether the subarray is a potential solution.
Pseudocode of this algorithm.
size = A.length;
bestL = size + 1;                       // length of the best window found so far
counts = {};                            // counts is a map of (number, occurrences inside the window)
for (b in B) counts.put(b, 0);
needed = counts.size();                 // number of distinct elements of B
found = 0;                              // distinct elements of B currently covered by the window
left = 0;
for (right = 0; right < size; right++) {
    // Increase right bound
    if (!counts.contains(A[right])) continue;
    amt = counts.get(A[right]);
    counts.set(A[right], amt + 1);
    if (amt == 0) found++;
    while (found == needed) {
        // [left..right] contains all of B; record it if best, then shrink the left bound
        if (right - left + 1 < bestL) {
            bestL = right - left + 1;
            bestRange = [left, right];  // inclusive
        }
        if (counts.contains(A[left])) {
            amt = counts.get(A[left]);
            counts.set(A[left], amt - 1);
            if (amt == 1) found--;
        }
        left++;
    }
}
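For what it's worth, here is the same idea as a small runnable Python sketch (my own code, checked against the example in the question):
def min_window(A, B):
    counts = {b: 0 for b in B}       # occurrences of each wanted value inside the window
    needed, found = len(counts), 0
    best = None
    left = 0
    for right, value in enumerate(A):
        if value in counts:
            counts[value] += 1
            if counts[value] == 1:
                found += 1
        while found == needed:
            if best is None or right - left < best[1] - best[0]:
                best = (left, right)
            if A[left] in counts:
                counts[A[left]] -= 1
                if counts[A[left]] == 0:
                    found -= 1
            left += 1
    return best

print(min_window([1, 3, 5, 2, 3, 1], [1, 3, 2]))   # (3, 5)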
Not a homework question, but a possible interview question...
Given an array of integers, write an algorithm that will check if the sum of any two is zero.
What is the Big O of this solution?
Looking for non-brute-force methods.
Use a lookup table: scan through the array, inserting all positive values into the table. If you encounter a negative value of the same magnitude (which you can easily look up in the table), the sum of the two will be zero. The lookup table can be a hashtable to conserve memory.
This solution should be O(N).
Pseudo code:
var table = new HashSet<int>();
var array = // your int array
foreach(int n in array)
{
    if ( table.Contains(-n) )
        return true;    // You found it.
    if ( !table.Contains(n) )
        table.Add(n);
}
The hashtable solution others have mentioned is usually O(n), but it can also degenerate to O(n^2) in theory.
Here's a Theta(n log n) solution that never degenerates:
Sort the array (optimal quicksort, heap sort, merge sort are all Theta(n log n))
for i = 1, array.len - 1
binary search for -array[i] in i+1, array.len
If your binary search ever returns true, then you can stop the algorithm and you have a solution.
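A short Python sketch of that sort-plus-binary-search approach (my own code, using the standard bisect module):
import bisect

def has_zero_sum_pair(nums):
    a = sorted(nums)                        # Theta(n log n)
    for i, v in enumerate(a):
        # binary search for -v strictly to the right of position i
        j = bisect.bisect_left(a, -v, i + 1)
        if j < len(a) and a[j] == -v:
            return True
    return False

print(has_zero_sum_pair([4, -1, 7, 1, 9]))   # True  (-1 + 1 == 0)
print(has_zero_sum_pair([0, 3, 5]))          # False (a single 0 has no partner)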
An O(n log n) solution (i.e., the sort) would be to sort all the data values then run a pointer from lowest to highest at the same time you run a pointer from highest to lowest:
def findmatch(array n):
lo = first_index_of(n)
hi = last_index_of(n)
while true:
if lo >= hi: # Catch where pointers have met.
return false
if n[lo] = -n[hi]: # Catch the match.
return true
if sign(n[lo]) = sign(n[hi]): # Catch where pointers are now same sign.
return false
if -n[lo] > n[hi]: # Move relevant pointer.
lo = lo + 1
else:
hi = hi - 1
An O(n) time complexity solution is to maintain an array of all values met:
def findmatch(array n):
maxval = maximum_value_in(n) # This is O(n).
array b = new array(0..maxval) # This is O(1).
zero_all(b) # This is O(n).
for i in index(n): # This is O(n).
if n[i] = 0:
if b[0] = 1:
return true
b[0] = 1
nextfor
if n[i] < 0:
if -n[i] <= maxval:
if b[-n[i]] = 1:
return true;
b[-n[i]] = -1
nextfor
if b[n[i]] = -1:
return true;
b[n[i]] = 1
This works by simply maintaining a sign for a given magnitude, every possible magnitude between 0 and the maximum value.
So, if at any point we find -12, we set b[12] to -1. Then later, if we find 12, we know we have a pair. Same for finding the positive first except we set the sign to 1. If we find two -12's in a row, that still sets b[12] to -1, waiting for a 12 to offset it.
The only special cases in this code are:
0 is treated specially since we need to detect it despite its somewhat strange properties in this algorithm (I treat it specially so as to not complicate the positive and negative cases).
low negative values whose magnitude is higher than the highest positive value can be safely ignored since no match is possible.
As with most tricky "minimise-time-complexity" algorithms, this one has a trade-off in that it may have a higher space complexity (such as when there's only one element in the array that happens to be positive two billion).
In that case, you would probably revert to the sorting O(n log n) solution but, if you know the limits up front (say if you're restricting the integers to the range [-100,100]), this can be a powerful optimisation.
In retrospect, perhaps a cleaner-looking solution may have been:
def findmatch(array num):
# Array empty means no match possible.
if num.size = 0:
return false
# Find biggest value, no match possible if empty.
max_positive = num[0]
for i = 1 to num.size - 1:
if num[i] > max_positive:
max_positive = num[i]
if max_positive < 0:
return false
# Create and init array of positives.
array found = new array[max_positive+1]
for i = 1 to found.size - 1:
found[i] = false
zero_found = false
# Check every value.
for i = 0 to num.size - 1:
# More than one zero means match is found.
if num[i] = 0:
if zero_found:
return true
zero_found = true
# Otherwise store fact that you found positive.
if num[i] > 0:
found[num[i]] = true
# Check every value again.
for i = 0 to num.size - 1:
# If negative and within positive range and positive was found, it's a match.
if num[i] < 0 and -num[i] <= max_positive:
if found[-num[i]]:
return true
# No matches found, return false.
return false
This makes one full pass and a partial pass (or full on no match) whereas the original made the partial pass only but I think it's easier to read and only needs one bit per number (positive found or not found) rather than two (none, positive or negative found). In any case, it's still very much O(n) time complexity.
I think IVlad's answer is probably what you're after, but here's a slightly more off the wall approach.
If the integers are likely to be small and memory is not a constraint, then you can use a BitArray collection. This is a .NET class in System.Collections, though Microsoft's C++ has a bitset equivalent.
The BitArray class allocates a lump of memory, and fills it with zeroes. You can then 'get' and 'set' bits at a designated index, so you could call myBitArray.Set(18, true), which would set the bit at index 18 in the memory block (which then reads something like 00000000, 00000000, 00100000). The operation to set a bit is an O(1) operation.
So, assuming a 32 bit integer scope, and 1Gb of spare memory, you could do the following approach:
BitArray myPositives = new BitArray(int.MaxValue);
BitArray myNegatives = new BitArray(int.MaxValue);
bool pairIsFound = false;
foreach (int testValue in arrayOfIntegers)
{
    if (testValue < 0)
    {
        // -ve number - have we seen the +ve yet?
        if (myPositives.Get(-testValue))
        {
            pairIsFound = true;
            break;
        }
        // Not seen the +ve, so log that we've seen the -ve.
        myNegatives.Set(-testValue, true);
    }
    else
    {
        // +ve number (inc. zero). Have we seen the -ve yet?
        if (myNegatives.Get(testValue))
        {
            pairIsFound = true;
            break;
        }
        // Not seen the -ve, so log that we've seen the +ve.
        myPositives.Set(testValue, true);
        if (testValue == 0)
        {
            myNegatives.Set(0, true);
        }
    }
}
// query setting of pairIsFound to see if a pair totals to zero.
Now I'm no statistician, but I think this is an O(n) algorithm. There is no sorting required, and the longest duration scenario is when no pairs exist and the whole integer array is iterated through.
Well - it's different, but I think it's the fastest solution posted so far.
Comments?
Maybe stick each number in a hash table, and if you see a negative one check for a collision? O(n). Are you sure the question isn't to find if ANY sum of elements in the array is equal to 0?
Given a sorted array you can find number pairs (-n and +n) by using two pointers:
the first pointer moves forward (over the negative numbers),
the second pointer moves backwards (over the positive numbers),
depending on the values the pointers point at you move one of the pointers (the one where the absolute value is larger)
you stop as soon as the pointers meet or one passed 0
Equal absolute values (one negative, one positive, or both zero) are a match.
Now, this is O(n), but sorting (if necessary) is O(n*log(n)).
EDIT: example code (C#)
// sorted array
var numbers = new[]
{
-5, -3, -1, 0, 0, 0, 1, 2, 4, 5, 7, 10 , 12
};
var npointer = 0; // pointer to negative numbers
var ppointer = numbers.Length - 1; // pointer to positive numbers
while( npointer < ppointer )
{
var nnumber = numbers[npointer];
var pnumber = numbers[ppointer];
// each pointer scans only its number range (neg or pos)
if( nnumber > 0 || pnumber < 0 )
{
break;
}
// Do we have a match?
if( nnumber + pnumber == 0 )
{
Debug.WriteLine( nnumber + " + " + pnumber );
}
// Adjust one pointer
if( -nnumber > pnumber )
{
npointer++;
}
else
{
ppointer--;
}
}
Interesting: we have 0, 0, 0 in the array. The algorithm will output two pairs, but in fact there are three pairs ... we need a more precise specification of what exactly should be output.
Here's a nice mathematical way to do it: keep a table of prime numbers, i.e. construct an array prime[0 .. max(array)], where max(array) is the largest value in the input array, so that prime[i] stands for the i-th prime.
zeros = 0
counter = 1
for i in inputarray:
    if (i > 0):
        counter = counter * prime[i]
    if (i == 0):
        zeros = zeros + 1
if (zeros >= 2):
    return "found"    # two zeros sum to zero
for i in inputarray:
    if (i < 0):
        if (counter % prime[-i] == 0):
            return "found"
return "not found"
However, the problem when it comes to implementation is that storing/multiplying prime numbers is just O(1) in the traditional model, but if the array (i.e., n) is large enough, this model is inappropriate.
Still, it is a theoretical algorithm that does the job.
Here's a slight variation on IVlad's solution which I think is conceptually simpler, and also n log n but with fewer comparisons. The general idea is to start on both ends of the sorted array, and march the indices towards each other. At each step, only move the index whose array value is further from 0 -- in only Theta(n) comparisons, you'll know the answer.
sort the array (n log n)
loop, starting with i=0, j=n-1
if a[i] == -a[j], then stop:
if a[i] != 0 or i != j, report success, else failure
if i >= j, then stop: report failure
if abs(a[i]) > abs(a[j]) then i++ else j--
(Yeah, probably a bunch of corner cases in here I didn't think about. You can thank that pint of homebrew for that.)
e.g.,
[ -4, -3, -1, 0, 1, 2 ]
i=0 (-4), j=5 (2):  a[i] != -a[j], i < j, abs(a[i]) > abs(a[j])  -> i++
i=1 (-3), j=5 (2):  a[i] != -a[j], i < j, abs(a[i]) > abs(a[j])  -> i++
i=2 (-1), j=5 (2):  a[i] != -a[j], i < j, abs(a[i]) < abs(a[j])  -> j--
i=2 (-1), j=4 (1):  a[i] == -a[j]  -> done, report success
The sum of two integers can only be zero if one is the negative of the other, like 7 and -7, or 2 and -2.
Say I've got a set of 10 random numbers between 0 and 100.
An operator also gives me a random number between 0 and 100.
Then I have to find the number in the set that is closest to the number the operator gave me.
example
set = {1,10,34,39,69,89,94,96,98,100}
operator number = 45
return = 39
And how do I translate this into code? (JavaScript or something)
If the set is ordered, do a binary search to find the value (or the 2 values) that are closest. Then work out which of the 2 is closest by ... subtracting?
If the set is not ordered, just iterate through the set (sorting it would itself take more than one pass), and for each member, check to see if the difference is smaller than the smallest difference you have seen so far; if it is, record it as the new smallest difference and that number as the new candidate answer.
public int FindClosest(int targetVal, int[] set)
{
    int dif = int.MaxValue, cand = 0;
    foreach (int x in set)
        if (Math.Abs(x - targetVal) < dif)
        {
            dif = Math.Abs(x - targetVal);
            cand = x;
        }
    return cand;
}
given an array called input, create another array of the same size
each element of this new array is the Math.abs(input[i] - operatorNumber)
select the index of the minimum element (let's call it k)
your answer is input[k]
NB
sorting is not needed
you can do it without the extra array
Sample implementation in JavaScript
function closestTo(number, set) {
var closest = set[0];
var prev = Math.abs(set[0] - number);
for (var i = 1; i < set.length; i++) {
var diff = Math.abs(set[i] - number);
if (diff < prev) {
prev = diff;
closest = set[i];
}
}
return closest;
}
How about this:
1) Put the set into a binary tree.
2) Insert the operator number into the tree.
3) Return the operator number's parent.
order the set
binary search for the input
if you end up between two elements, check the difference, and return the one with the smallest difference.
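A small Python sketch of that idea (my own code, using the standard bisect module for the binary search):
import bisect

def closest(sorted_set, target):
    i = bisect.bisect_left(sorted_set, target)   # binary search for the insertion point
    if i == 0:
        return sorted_set[0]
    if i == len(sorted_set):
        return sorted_set[-1]
    before, after = sorted_set[i - 1], sorted_set[i]
    return before if target - before <= after - target else after

print(closest([1, 10, 34, 39, 69, 89, 94, 96, 98, 100], 45))   # 39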
Someone tagged this question Mathematica, so here's a Mathematica answer:
set = {1,10,34,39,69,89,94,96,98,100};
opno = 45;
set[[Flatten[
Position[set - opno, i_ /; Abs[i] == Min[Abs[set - opno]]]]]]
It works when there are multiple elements of set equally distant from the operator number.
python example:
#!/usr/bin/env python
import random
from operator import itemgetter

sample = random.sample(range(100), 10)
pivot = random.randint(0, 100)
print('sample:', sample)
print('pivot:', pivot)
# pair each index with its distance to the pivot, then take the index with the smallest distance
print('closest:', sample[
    sorted(
        map(lambda i, e: (i, abs(e - pivot)), range(10), sample),
        key=itemgetter(1)
    )[0][0]])
# sample: [61, 2, 3, 85, 15, 18, 19, 8, 66, 4]
# pivot: 51
# closest: 61