Given a list of scalar values, how can we split the list into K evenly-sized groups such that the groups have similar distributions? Note that simplicity is strongly favored over efficiency.
I am currently doing:
sort values
create K empty groups: group_1, ..., group_K
while values is not empty:
    for group in groups:
        group.add(values.pop())
        if values is empty:
            break
This is a variation on what @m.raynal came up with that will work well even when n is just a fairly small multiple of k.
Sort the elements from smallest to largest.
Create k empty groups.
Put them into a priority queue ordered by fewest elements first, then largest sum first. (So the next group taken off the queue is always the one with the largest sum among all of those with the fewest elements.)
For each element, take a group off of the priority queue, add that element, put the group back in the priority queue.
In practice this means that the first k elements go to the groups in some arbitrary order (one per group), and the next k elements go to the groups in the reverse of that order. After that it gets clever about keeping things balanced.
Depending on your application, the fact that the bottom two values are spaced predictably far apart could be a problem. If that is the case then you could complicate this by going "middle out". But that scheme is much more complicated.
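For concreteness, here is a minimal Python sketch of this priority-queue scheme (the use of heapq, the tuple layout, and the function name are mine, not part of the description above):

import heapq

def split_into_groups(values, k):
    # Heap entries are (num_elements, -current_sum, group_id, group_list), so the
    # group popped next is the one with the fewest elements and, among those ties,
    # the largest current sum -- the ordering described above.
    heap = [(0, 0, i, []) for i in range(k)]
    heapq.heapify(heap)
    for v in sorted(values):                  # process elements from smallest to largest
        count, neg_sum, gid, group = heapq.heappop(heap)
        group.append(v)
        heapq.heappush(heap, (count + 1, neg_sum - v, gid, group))
    return [group for _, _, _, group in heap]

# Example: split_into_groups(range(12), 3) returns three groups of four elements,
# each summing to 22.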
Here is a way to (somehow) distribute values evenly.
Let's assume your array of scalars A is of size n, with n being a multiple of k to keep things simple.
One way could then be:
sort(A)
d = n/k
g = 0
for i from 0 to d-1 do {
    for j from 0 to k-1 do {
        group[(j+g) % k].add(A[k*i + j])
    }
    g++
}
The first k elements then go to groups 1, ..., k, the next k to groups 2, ..., k, 1, the next to groups 3, ..., k, 1, 2, and so on.
It would not work well if k² > n; in that case you should not increment g by 1, but by a larger value close to k/d. If k is almost n, then this algorithm becomes simply useless.
This gives absolutely no guarantee about an even distribution of the scalars if some extreme values were to be in A. But in the case where A itself is reasonably well distributed, and n > k², it would spread the values among the k groups reasonably well.
It has at least the advantage of running in O(n) once A is sorted.
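If it helps, here is a direct Python transcription of the pseudo-code above (the function name is mine; the shift g is folded into the outer index i, since it increases by one per iteration):

def distribute(A, k):
    # Assumes len(A) is a multiple of k, as in the description above.
    A = sorted(A)
    d = len(A) // k
    groups = [[] for _ in range(k)]
    for i in range(d):
        for j in range(k):
            groups[(j + i) % k].append(A[k * i + j])
    return groups

# Example: distribute(list(range(12)), 3) gives group sums 21, 22, 23.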
Let a1,...,an be a sequence of real numbers. Let m be the minimum of the sequence, and let M be the maximum of the sequence.
I proved that there exist 2 elements in the sequence, x, y, such that |x-y| <= (M-m)/n.
Now, is there an algorithm that finds two such elements in time complexity O(n)?
I thought about sorting the sequence, but since I don't know anything about M, I cannot use radix/bucket sort or any other linear-time algorithm that I'm familiar with.
I'd appreciate any idea.
Thanks in advance.
First find out n, M, m. If not already given they can be determined in O(n).
Then create a memory storage of n+1 elements; we will use the storage for n+1 buckets with width w=(M-m)/n.
The buckets cover the range of values equally: bucket 1 goes from [m; m+w[, bucket 2 from [m+w; m+2*w[, ..., bucket n from [m+(n-1)*w; m+n*w[ = [M-w; M[, and the (n+1)th bucket from [M; M+w[.
Now we go once through all the values and sort them into the buckets according to these intervals. There should be at most 1 element per bucket. If a bucket is already filled, it means that two elements are closer together than the width of the half-open interval, i.e. we have found elements x, y with |x-y| < w = (M-m)/n.
If no two such elements are found, then afterwards n of the n+1 buckets are filled with exactly one element each. And all those elements are sorted.
We go through all the buckets once more and compare only the contents of neighbouring buckets, checking whether two of them fulfil the condition.
Due to the width of the buckets, the condition cannot hold for buckets that are not adjoining: for those the distance is always |x-y| > w.
(The fulfilment of the last inequality in step 4 is also the reason why the interval is half-open and cannot be closed, and why we need n+1 buckets instead of n. An alternative would be to use n buckets and make the last bucket a special case covering [M; M+w]. But O(n+1) = O(n), and using n+1 buckets is preferable to special-casing the last one.)
The running time is O(n) for step 1, effectively nothing for step 2 (we only reserve the storage there), O(n) for step 3 and O(n) for step 4, as there is only 1 element per bucket. Altogether O(n).
This task shows that sorting elements that are not close together, or coarse sorting that ignores fine distances, can be done in O(n) instead of O(n*log(n)). It has useful applications: numbers on computers are discrete and have finite precision, and I have successfully used this sorting method for signal processing / fast sorting in real-time production code.
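For concreteness, here is a minimal Python sketch of steps 1-4 above (the function name and the exact boundary handling are mine; it assumes the stronger threshold this answer works with, and returns None if the input does not satisfy it):

def find_close_pair(values):
    # Returns a pair x, y with |x - y| <= (M - m)/n, or None.
    n = len(values)
    m, M = min(values), max(values)
    if m == M:
        return values[0], values[1]          # all values identical
    w = (M - m) / n                          # bucket width, step 2
    buckets = [None] * (n + 1)               # n + 1 half-open buckets [m + i*w, m + (i+1)*w)
    for x in values:                         # step 3: sort values into buckets
        i = min(int((x - m) / w), n)         # x == M falls into bucket n
        if buckets[i] is not None:           # bucket already filled: |x - y| < w
            return buckets[i], x
        buckets[i] = x
    # step 4: no collision, so n of the n + 1 buckets hold exactly one value each;
    # only neighbouring buckets can contain a pair within distance w
    prev = None
    for x in buckets:
        if x is None:
            continue
        if prev is not None and x - prev <= w:
            return prev, x
        prev = x
    return None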
About @Damien's remark: the threshold (M-m)/(n-1) is the one that provably holds for every such sequence. So far I assumed that the sequence we are looking at is of a special kind for which the stronger condition holds; or, put differently, that for any sequence where the stronger condition holds, we can find such elements in O(n).
If this was instead a small mistake by the OP (who said they had proven the stronger condition), and we should find two elements x, y with |x-y| <= (M-m)/(n-1), we can simplify:
Steps 1 to 3 are done as above, but with n buckets and the bucket width set to w = (M-m)/(n-1). Bucket n now goes from [M; M+w[.
For step 4 we would do the following alternative:
4. (alternative): n buckets are filled with one element each. The element in bucket n has to be M, sitting at the left boundary of that bucket's interval. The distance of this element y = M to any possible element x in the (n-1)th bucket is |M-x| <= w = (M-m)/(n-1), so we have found x and y which fulfil the condition, q.e.d.
First note that the real threshold should be (M-m)/(n-1).
The first step is to calculate the min m and max M elements, in O(N).
You calculate the value mid = (m + M)/2.
You move the values less than mid to the beginning of the array, and the values greater than mid to the end.
You select the part with the larger number of elements and you iterate, until very few numbers are kept.
If both parts have the same number of elements, you can select either of them. If the remaining part has many more elements than n/2, then in order to maintain O(n) complexity you can keep only n/2 + 1 of them, as the goal is not to find the smallest difference, but only a difference small enough.
As indicated in a comment by @btilly, this solution could fail in some cases, for example with the input [0, 2.1, 2.9, 5]. To handle that, it is necessary to also calculate the max value of the left part and the min value of the right part, and to test whether the answer is right_min - left_max. This doesn't change the O(n) complexity, even if the solution becomes less elegant.
Complexity of the search procedure: O(n) + O(n/2) + O(n/4) + ... + O(2) = O(2n) = O(n).
Damien is correct in his comment that the correct result is that there must be x, y such that |x-y| <= (M-m)/(n-1). If you have the sequence [0, 1, 2, 3, 4] you have 5 elements, but no two elements are closer than (M-m)/n = (4-0)/5 = 4/5.
With the right threshold, the solution is easy - find M and m by scanning through the input once, and then bucket the input into (n-1) buckets of size (M-m)/(n-1), putting values that are on the boundaries of a pair of buckets into both buckets. At least one bucket must have two values in it by the pigeon-hole principle.
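A small Python sketch of this (the function name is mine; instead of placing boundary values into two buckets it clamps M into the last bucket, which proves the same bound):

def find_pair_within_threshold(values):
    # Returns x, y with |x - y| <= (M - m)/(n - 1), guaranteed by the pigeon-hole principle.
    n = len(values)
    m, M = min(values), max(values)
    if m == M:
        return values[0], values[1]              # any two equal values will do
    w = (M - m) / (n - 1)                        # bucket width
    buckets = [None] * (n - 1)                   # n values, n - 1 buckets
    for x in values:
        i = min(int((x - m) / w), n - 2)         # clamp so x == M lands in the last bucket
        if buckets[i] is not None:               # some bucket must receive two values
            return buckets[i], x                 # they differ by at most w = (M - m)/(n - 1)
        buckets[i] = x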
Let's say that you are given n sorted arrays of numbers and you need to pick one number from each array such that the minimum distance between the n chosen elements is maximized.
Example:
arrays:
[0, 500]
[100, 350]
[200]
2<=n<=10 and every array could have ~10^3-10^4 elements.
In this example the optimal solution that maximizes the minimum distance is to pick the numbers 500, 350, 200 (or 0, 200, 350), where the minimum distance is 150, the maximum possible over all combinations.
I am looking for an algorithm to solve this. I know that I could binary-search the maximum minimum distance, but I can't see how to decide whether there is a solution with minimum distance of at least d, which the binary search would need. I am thinking maybe dynamic programming could help, but I haven't managed to find a solution with DP.
Of course, generating all combinations of n elements is not efficient. I have already tried backtracking, but it is slow since it tries every combination.
n ≤ 10 suggests that we can take an exponential dependence on n. Here's an O(2^n * m * n)-time algorithm where m is the total size of the arrays.
The dynamic programming approach I have in mind is, for each subset of arrays, to calculate all of the pairs (maximum number, minimum distance) on the efficient frontier, where we have to choose one number from each of the arrays in the subset. By efficient frontier I mean that if we have two pairs (a, b) ≠ (c, d) with a ≤ c and b ≥ d, then (c, d) is not on the efficient frontier. We'll want to keep these frontiers sorted for fast merges.
The base case with the empty subset is easy: there's one pair, (minimum distance = ∞, maximum number = −∞).
For every nonempty subset of arrays, in some order that extends the inclusion order, we compute a frontier for each array in the subset, representing the subset of solutions where that array contributes the maximum number. Then we merge these frontiers. (Naively this costs us another factor of log n, which maybe isn't worth the hassle to avoid given that n ≤ 10, but we can avoid it by merging the arrays once at the beginning to enable future merges to use bucketing.)
Constructing a new frontier from a subset of arrays and another array also involves a merge. We initialize an iterator at the start of the frontier (i.e., least maximum number) and an iterator at the start of the array (i.e., least number). While neither iterator is past the end:
- Emit a candidate pair (min(minimum distance, array number − maximum number), array number).
- If the min was less than or equal to the minimum distance, increment the frontier iterator. If the min was less than or equal to array number − maximum number, increment the array iterator.
Finally, cull the candidate pairs to leave only the efficient frontier. There is an elegant way to do this in code that is more trouble to explain.
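As an illustration of the culling step only, here is a small Python sketch (the function name and the (maximum number, minimum distance) tuple layout are mine):

def cull_frontier(pairs):
    # pairs are (maximum_number, minimum_distance) candidates; a pair is dominated
    # if some other pair has a maximum number that is <= and a minimum distance that is >=.
    pairs.sort(key=lambda p: (p[0], -p[1]))   # ascending max number, best distance first on ties
    frontier = []
    best = float("-inf")
    for max_num, min_dist in pairs:
        if min_dist > best:                   # strictly better than anything with a smaller-or-equal max
            frontier.append((max_num, min_dist))
            best = min_dist
    return frontier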
I am going to give an algorithm that for a given distance d, will output whether it is possible to make a selection where the distance between any pair of chosen numbers is at least d. Then, you can binary-search the maximum d for which the algorithm outputs "YES", in order to find the answer to your problem.
Assume the minimum distance d is given. Here is the algorithm:
for every permutation p of size n do:
    last := -infinity
    ok := true
    for p_i in p do:
        x := the smallest element greater than or equal to last+d in the p_i-th array (can be found efficiently with binary search)
        if no such x was found then
            ok := false
            break
        end
        last := x
    done
    if ok then
        return "YES"
    end
done
return "NO"
So, we brute-force the order of arrays. Then, for every possible order, we use a greedy method to choose elements from each array, following the order. For example, take the example you gave:
arrays:
[0, 500]
[100, 350]
[200]
and assume d = 150. For the permutation 1 3 2, we first take 0 from the 1st array, then we find the smallest element in the 3rd array that is greater than or equal to 0+150 (it is 200), then we find the smallest element in the 2nd array which is greater than or equal to 200+150 (it is 350). Since we could find an element from every array, the algorithm outputs "YES". But for d = 200 for instance, the algorithm would output "NO" because none of the possible orderings would result in a successful selection.
The complexity of the above algorithm is O(n! * n * log(m)), where m is the maximum number of elements in an array. I believe this is sufficient, since n is very small. (For m = 10^4, 10! * 10 * 13 ≈ 5*10^8, which can be computed in under a second on a modern CPU.)
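For reference, here is a runnable Python sketch of this feasibility check combined with the binary search on d (function names are mine; it assumes integer values and that each array is sorted, and the binary search adds a further logarithmic factor in the value range):

from bisect import bisect_left
from itertools import permutations

def feasible(arrays, d):
    # True if one number per array can be chosen with all pairwise distances >= d.
    for order in permutations(range(len(arrays))):
        last = float("-inf")
        ok = True
        for idx in order:
            arr = arrays[idx]
            pos = bisect_left(arr, last + d)   # smallest element >= last + d
            if pos == len(arr):
                ok = False
                break
            last = arr[pos]
        if ok:
            return True
    return False

def max_min_distance(arrays):
    # Binary search on the answer d over the range of possible distances.
    lo, hi = 0, max(map(max, arrays)) - min(map(min, arrays))
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if feasible(arrays, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# max_min_distance([[0, 500], [100, 350], [200]]) returns 150, as in the example above.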
Let's look at an example with optimal choices, x (horizontal arrays A, B, C, D):
A x
B b x b
C x c
D d x
Our recurrence based on range could be: let f(low, excluded) represent the maximum closest distance between two chosen elements (from arrays 1 to n) of the subset without elements in excluded, where low is the lowest chosen element. Then:
(1)
f(low, excluded) when |excluded| = n-1:
    max(low)
    for low in the only permitted array

(2)
f(low, excluded):
    max(
        min(
            a - low,
            f(a, excluded')
        )
    )
    for a ≥ low, a not in excluded'
    where excluded' = excluded ∪ {low's array}
We can limit a. For one thing the maximum we can achieve is
(3)
m = (highest - low) / (n - |excluded| - 1)
which means a need not go higher than low + m.
Secondly, we can store results for all f(a, excluded'), keyed by excluded' (we have 2^10 possible keys), each in a decorated binary tree ordered by a. The decoration will be the highest result achievable in the right subtree, meaning we can find the max for all f(v, excluded'), v ≥ a in logarithmic time.
The latter establishes a dominance relationship, and clearly we are interested in both a larger a and a larger f(a, excluded') so as to maximise the min function in (2). Picking an a in the middle, we can use a binary search. If we have:
a - low < max(v, excluded'), v ≥ a
where max(v, excluded') is the lookup
for a in the decorated tree
then we look to the right, since max(v, excluded'), v ≥ a, indicates there's a better answer on the right, where a - low is also larger.
And if we have:
a - low ≥ max(v, excluded'), v ≥ a
then we record this candidate and look to the left, since to the right the answer is fixed at max(v, excluded'), given that a - low could not decrease.
In order to conduct the binary search on the range, [low, low + m] (see (3)), rather than merge and label all the arrays at the outset, we can keep them separate and compare the closest candidates to mid out of each array we are currently permitted to choose a from. (The trees have the mixed results, keyed by subset.) (The flow of this part is not completely clear to me.)
Worst case with this method, given that n = C is constant, seems to be
O(C * array_length * 2^C * C * log(array_length) * log(C * array_length))
C * array_length is the iteration on low
Each low can be paired with 2^C inclusions
C * log(array_length) is the separated binary-search
And log(C * array_length) is the tree lookup
Simplifying:
= O(array_length * log^2(array_length))
although in practice, there could be many dead-end branches that exit early where a full selection wouldn't be possible.
In case it wasn't clear, the iteration is on a fixed lowest element in the selection. In other words, we want the best f(low, excluded) for all different lows (and excludeds). For bottom-up, we would iterate from the highest value down so our results for a get stored as we iterate.
I have three different permutations of the set {1,2,..n}, and I would like to write some code to count the number of pairs of numbers that come in the same order in all three permutations.
As an example, take the permutations of {1,2,3}:
(1,2,3)
(1,2,3)
(3,2,1)
Here there are 0 such pairs that come in the same order in all three permutations, because (1,2,3) and (3,2,1) are sorted in increasing and decreasing order respectively.
I want an optimal O(N*logN) solution. A hint was given, in which you have to count the number of inversions of each permutation, i.e
an inversion is a pair (i,j) such that i > j but a[j] > a[i]
I can do this in O(NlogN).
So definitely, if one pair came in increasing order in each of the permutations, it would add 1 to each permutation's inversion count. But that isn't true if i > j and a[j] > a[i] in all of them (the pair came in decreasing order everywhere), as I should be increasing the count but this doesn't contribute anything to the inversion count. Also, even if I can count the number of inversions in each array, I don't see a link between that and the number of same-ordered pairs.
For each permutation you have, you should count the number of cases where a[i] < a[j], for each pair of indices i and j such that i < j. Let's call this non-inversions. Then, you can find out the result by taking the minimum of the non-inversion counts you found.
For instance, in your sample case, the values corresponding to the permutations (1,2,3), (1,2,3) and (3,2,1) are 3, 3, and 0, respectively.
For a different sample case, you can examine (1,2,3,4), (1,2,4,3), (1,3,2,4) and (4,1,3,2). The corresponding counts for these permutations are 6, 5, 5, and 2. The result is min(6,5,5,2) = 2, because the only tuples that remain ordered in every case are (1, 2) and (1, 3).
The key idea behind this solution is based on what the non-inversion count implies. In an ordered array of size N, there are N(N-1)/2 ordered pairs contributing to the non-inversion count. As you introduce some inversions into that array, the relative order of some elements is lost, while some of it remains. By finding the minimum of the non-inversion counts, you can find the number of pairs that preserve their relative ordering in the 'worst' case (even though this alone is not enough to identify them individually).
If you insist on counting the inversions (i.e. as opposed to the non-inversions) the procedure is pretty much the same. Count the inversions for each permutation given to you. Then simply subtract the maximum value you find from N(N-1)/2, and obtain the result.
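For reference, here is one way to count the non-inversions of a single permutation in O(N log N) with a Fenwick (binary indexed) tree; this is a sketch with my own function names, and it assumes the permutation contains the values 1..N:

def count_non_inversions(perm):
    # Count pairs (i, j) with i < j and perm[i] < perm[j].
    n = len(perm)
    tree = [0] * (n + 1)                # Fenwick tree over the values 1..n

    def update(i):                      # record that value i has been seen
        while i <= n:
            tree[i] += 1
            i += i & -i

    def query(i):                       # how many already-seen values are <= i
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & -i
        return s

    count = 0
    for v in perm:                      # scan left to right
        count += query(v - 1)           # earlier values that are smaller than v
        update(v)
    return count

# min(count_non_inversions(p) for p in [(1,2,3,4), (1,2,4,3), (1,3,2,4), (4,1,3,2)]) == 2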
I'm attempting to create a scoring system for a card game which would preclude ties in scoring, by setting the point value of each card such that no two combinations of cards could add up to the same score. (For this particular case, I need a set of 17 integers, since there are 17 scorable cards.)
I've tried several heuristic approaches (various winnowing procedures along the lines of taking an array of integers, iteratively generating random subsets, and discarding those which appear in subsets sharing a common sum), then exhaustively validating the results (by enumerating their subsets).
From what I've seen, the theoretical limit to the size of such a set is near log2(n), where n is the number of members of the superset from which the subset-distinct-sum subset is drawn. However, while I've been able to approach this, I've not been able to match it. My best result so far is a set of 13 integers, drawn from the 250,000 integers between 10,000 and 25,000,000, counting by hundreds (the latter is immaterial to the algorithm, but is a domain constraint of my use case):
[332600,708900,2130500,2435900,5322500,7564200,10594500,12776200,17326700,17925700,22004400,23334700,24764900]
I've hunted around, and most of the SDS generators are sequence generators that make no pretense of creating dense sets, but instead have the ability to be continued indefinitely to larger and larger numbers (e.g. the Conway-Guy Sequence). I have no such constraint, and would prefer a denser set without requiring a sequence relationship with each other.
(I did consider using the Conway-Guy Sequence n=2..18 * 10,000, but the resulting set has a broader range than I would like. I'd also really like a more general algorithm.)
Edit: For clarity, I'm looking for a way (non-deterministic or dynamic-programming methods are fine) to generate an SDS set denser than those provided by simply enumerating exponents or using a sequence like Conway-Guy. I hope, by discarding the "sequence generator" constraint, I can find numbers much closer together than such sequences provide.
For any value of N, it is readily possible to generate up to Floor(Log2(N))-1 numbers (which we'll call the set "S") such that:
All members of S are less than or equal to N, and
No two distinct subsets of S have the same sum, and
All members of S are within a factor of two of each other.
Your suspicions were correct in that S would not be in any sense extensible (you could not add more members to it).
Method:
For N, find T = 2^P , where T is the highest power of two that is less than or equal to N. That is:
P = Floor( Log2(N) ), and
T = 2^P
Then the members of S can be generated as:
for( i=0 to P-2 ): S(i) = 2^i + 2^(P-1)
Or, to put it another way: S(i) = 2^(P-1) + 2^i, for 0 <= i < P-1.
This makes for a total of P-1 (or Floor(Log2(N))-1) members. Can two distinct subsets of S ever sum to the same number? No:
Proof
Let's consider any two distinct subsets of S: U and V. We can take them to be disjoint (having no members in common), since removing any common members from both changes both sums by the same amount. Then the sum of U is:
Sum(U) = O(U)*(T/2) + Sum(2^i| S(i):U)
Where
O(U) is the Order of the set U (how many elements it has),
"S(i):U" means "S(i) is an element of U", and
"|" is the conditioning operator (means "given that.." or "where.."),
So, putting the last two together, Sum(2^i| S(i):U) just means "the sum of 2^i over all i for which S(i) is an element of U" (remembering that S(i) = 2^(P-1) + 2^i).
And likewise, the sum of V is:
Sum(V) = O(V)*(2^(P-1)) + Sum(2^i| S(i):V)
Now, because U and V are disjoint, Sum(2^i| S(i):U) and Sum(2^i| S(i):V) can never be equal, because no two distinct sets of powers of two can ever have the same sum.
Also, because Sum(2^i; 0 <= i < P-1) = 2^(P-1) - 1, these sums of powers of two are always less than 2^(P-1). This means that the sums of U and V could only be equal if:
O(U)*(2^(P-1)) = O(V)*(2^(P-1))
or
O(U) = O(V)
That is, if U and V have the same number of elements, so that the first terms will be equal (because the second terms can never be as large as any differences in the first terms).
In such a case (O(U) = O(V)) the first terms are equal, so Sum(U) would equal Sum(V) iff their second terms (the binary sums) were also equal. However, we already know that those can never be equal; therefore, it can never be true that Sum(U) = Sum(V).
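As an illustration, here is a small Python sketch of this construction together with a brute-force check of the subset-distinct-sum property (the function names are mine; with N = 25,000,000 it yields 23 members, all between roughly 8.4 million and 12.6 million):

from itertools import combinations

def distinct_subset_sum_set(N):
    # S(i) = 2^(P-1) + 2^i for 0 <= i < P-1, with P = floor(log2(N)).
    P = N.bit_length() - 1
    return [(1 << (P - 1)) + (1 << i) for i in range(P - 1)]

def all_subset_sums_distinct(S):
    # Brute-force verification; only practical for small sets.
    sums = set()
    for r in range(1, len(S) + 1):
        for combo in combinations(S, r):
            if sum(combo) in sums:
                return False
            sums.add(sum(combo))
    return True

# S = distinct_subset_sum_set(25_000_000): all members are <= 25,000,000, within a
# factor of two of each other, and no two distinct subsets share a sum (per the proof above).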
It seems like another way of phrasing the problem is to make sure that the previous terms never sum to the current term. If that's never the case, you'll never have two sums that add up to the same value.
Ex: 2, 3, 6, 12, 24, 48, 96, ...
Summing to any single element {i} takes 1 more than the sum of the previous terms, and summing to any multi-element set {i,j} takes more than the sum of previous elements to i and previous elements to j.
More mathematically: (i-1), i, 2i, 4i, 8i, ..., 2^n i should work for any i, n.
The only way this doesn't work is if you're allowed to choose the same number twice in your subset (if that's the case, you should specify it in the problem). But that brings up the issue that Sum{i} = Sum{i} for any number, so that seems like a problem anyway.
Given an unsorted set of integers in the form of an array, find all possible subsets whose sum is greater than or equal to a constant integer k.
e.g. our set is {1,2,3} and k=2
Possible subsets:-
{2},
{3},
{1,2},
{1,3},
{2,3},
{1,2,3}
I can only think of a naive algorithm which lists all the subsets of the set and checks whether the sum of each subset is >= k or not, but it's an exponential algorithm and listing all subsets requires O(2^N) time. Can I use dynamic programming to solve it in polynomial time?
Listing all the subsets is going to be still O(2^N) because in the worst case you may still have to list all subsets apart from the empty one.
Dynamic programming can help you count the number of sets that have sum >= K
You go bottom-up keeping track of how many subsets summed to some value from range [1..K]. An approach like this will be O(N*K) which is going to be only feasible for small K.
The idea with the dynamic programming solution is best illustrated with an example. Consider this situation. Assume you know that out of all the sets composed of the first i elements you know that t1 sum to 2 and t2 sum to 3. Let's say that the next i+1 element is 4. Given all the existing sets we can build all the new sets by either appending the element i+1 or leaving it out. If we leave it out we get t1 subsets that sum to 2 and t2 subsets that sum to 3. If we append it then we obtain t1 subsets that sum to 6 (2 + 4) and t2 that sum to 7 (3 + 4) and one subset which contains just i+1 which sums to 4. That gives us the numbers of subsets that sum to (2,3,4,6,7) consisting of the first i+1 elements. We continue until N.
In pseudo-code this could look something like this:
int DP[N][K];   // initialised to all zeros
int set[N];     // the input elements (assumed positive)

//go through all elements in the set by index
for i in range[0..N-1]
    //count the one-element subset consisting only of set[i] (only if its sum is below K)
    if set[i] < K then DP[i][set[i]] = 1
    if i == 0 then continue
    //case 1. build and count all subsets that don't contain element set[i]
    for k in range[1..K-1]
        DP[i][k] += DP[i-1][k]
    //case 2. build and count subsets that contain element set[i]
    for k in range[1..K-1]
        if k + set[i] >= K then break   //larger sums are not tracked
        DP[i][k+set[i]] += DP[i-1][k]

//result is the number of all subsets minus the number of subsets with sum < K
//the -1 is for the empty subset
return 2^N - sum(DP[N-1][1..K-1]) - 1
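If it helps, here is a small runnable Python version of the same idea (a sketch, assuming positive integers and K >= 1; it uses a compact one-dimensional table instead of the DP[N][K] matrix above):

def count_subsets_with_sum_at_least(values, K):
    # dp[s] = number of subsets of the elements processed so far whose sum is exactly s,
    # for s in [0, K-1]; sums that reach K or more are simply not tracked.
    n = len(values)
    dp = [0] * K
    dp[0] = 1                              # the empty subset
    for v in values:
        for s in range(K - 1, -1, -1):     # iterate downwards, 0/1-knapsack style
            if dp[s] and s + v < K:
                dp[s + v] += dp[s]
    below_k = sum(dp) - 1                  # non-empty subsets with sum < K
    return 2 ** n - 1 - below_k

# count_subsets_with_sum_at_least([1, 2, 3], 2) == 6, matching the example above.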
Can I use dynamic programming to solve it in polynomial time?
No. The problem is even harder than @amit (in the comments) mentions. Finding whether there exists a subset that sums to a specific k is the subset-sum problem, which is NP-hard. Counting how many subsets sum to a specific k is in the much more difficult class #P. In addition, your exact problem is slightly more difficult, since you want to not only count but enumerate all the subsets, for the target k and every larger sum as well.
If k is 0 and every element of the set is positive, then you have no choice but to output every possible subset, so the lower bound for this problem is O(2^N) -- the time taken to produce the output.
Unless you know something more about the value k that you haven't told us, there's no faster general solution than to just check every subset.