Optimum redistribution algorithm

Let's assume I have n identical items distributed among m identical buckets. The given distribution may or may not be fair/uniform. The goal is to write an algorithm that redistributes these items uniformly by transferring some of them. Each transfer has a cost associated with it, so the number of transfers must be minimal.
For example, I have a total of 7 items in 3 buckets. The input is a vector of size m giving the number of items in each bucket: 4, 2, 1.
The solution would involve 1 transfer from bucket 1 to bucket 3, and the resulting distribution would be 3, 2, 2.
Since n (7) is not perfectly divisible by m (3), this is the closest achievable uniform distribution.
Another sample case -
input: {1, 4, 5, 11}
output: {5, 5, 5, 6}
Number of transfers to get to the output: 5
I'm looking for some existing algorithms that can solve this problem statement. Thanks!

While your use case doesn't necessitate putting much thought into the implementation, since the numbers are apparently small, it may sometimes be desirable to use an algorithm whose time complexity does not essentially depend on the number of items, e.g.:
import numpy as np

def dist(inp):
    inp = np.array(inp)
    m = len(inp)            # number of buckets
    n = sum(inp)            # number of items
    l = n//m                # low number in an output bucket
    h = (n+m-1)//m          # high number in an output bucket
    lo = inp < l            # buckets which need transfer to
    hi = inp > h            # buckets which need transfer from
    out = np.empty(m, int)
    out[lo] = l             # fill underfull buckets with low
    out[hi] = h             # fill overfull buckets with high
    out[~lo & ~hi] = inp[~lo & ~hi]   # keep other buckets as is
    o = sum(out)            # check missing or surplus items
    if o < n: out[np.where(out == l)[0][:n-o]] = h   # adjust
    if o > n: out[np.where(out == h)[0][:o-n]] = l   # adjust
    return out
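For reference, a quick usage check (my own addition, not part of the answer) that also counts the transfers, which is simply the total surplus that has to leave the overfull buckets:

for v in ([4, 2, 1], [1, 4, 5, 11]):
    out = dist(v)
    transfers = int(sum(max(x - y, 0) for x, y in zip(v, out)))
    print(v, "->", out.tolist(), "transfers:", transfers)
# [4, 2, 1] -> [3, 2, 2] transfers: 1
# [1, 4, 5, 11] -> [5, 5, 5, 6] transfers: 5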

Related

Sort Thousands of Chuck E. Cheese Tickets

I need to sort an n-thousand size array of random unique positive integers into groups of consecutive integers, each of group size k or larger, and then further group those by some arbitrary positive integer divisor j.
In other words, let's say I work at Chuck E. Cheese and we sometimes give away free tickets. I have a couple hundred thousand tickets on the floor and want to find out which employee handed out what, but only for ticket groupings of consecutive integers that are larger than 500. Each employee has a random number from 0 to 100 assigned to them. That number corresponds to what "batch" of tickets were handed out, i.e. tickets from #000000 to #001499 were handed out by employee 1, tickets from #001500 to #002999 were handed out by employee 2, and so on. A large number of tickets are lost or missing. I only care about groups of consecutive ticket numbers larger than 500.
What is the fastest way for me to sort through this pile?
Edit:
As requested by @trincot, here is a worked out example:
I have 150,000 unique tickets on the floor ranging from ticket #000000 to #200000 (i.e. missing 50,001 random tickets from the pile)
Step 1: sort each ticket from smallest to largest using an introsort algorithm.
Step 2: go through the list of tickets one by one and gather only tickets with "consecutiveness" greater than 500. i.e. I keep a tally of how many consecutive values I have found and only keep those with tallies of 500 or higher. If I have tickets #409 thru #909 but not #408 or #1000, then I would keep that group, but if that group had missed a ticket anywhere from #409 to #909, I would have thrown out the group and moved on.
Step 3: combine all my newly sorted groups together, each of which are size 500 or larger.
Step 4: figure out what tickets belong to who by going through the final numbers one by one again, dividing each by 1500, rounding down to nearest whole number, and putting them in their respective pile where each pile represents an employee.
The end result is a set of piles telling me which employees gave out more than 500 tickets at a time, how many times they did so, and what tickets they did so with.
Sample with numbers:
where k == 3 and j == 1500; k is the minimum consecutive-integer grouping size, j is the final ticket interval grouping size, i.e. 5, 6, and 7 fall into the 0th interval of size 1500, and 5996, 5997, 5998, 5999 fall into the third interval of size 1500.
Input: [5 , 5996 , 8111 , 1000 , 1001, 5999 , 8110 , 7 , 5998 , 2500 , 1250 , 6 , 8109 , 5997]
Output:[ 0:[5, 6, 7] , 3:[5996, 5997, 5998, 5999] , 5:[8109, 8110, 8111] ]
Here is how you could do it in Python:
from collections import defaultdict

def partition(data, k, j):
    data = sorted(data)
    start = data[0]  # assuming data is not an empty list
    count = 0
    output = defaultdict(list)  # to automatically create a partition when referenced
    for value in data:
        bucket = value // j  # integer division
        if value % j == start % j + count:  # in same partition & consecutive?
            count += 1
            if count == k:
                # Add the k entries that we skipped so far:
                output[bucket].extend(list(range(start, start+count)))
            elif count > k:
                output[bucket].append(value)
        else:
            start = value
            count = 1
    return dict(output)
# The example given in the question:
data = [5, 5996, 8111, 1000, 1001, 5999, 8110, 7, 5998, 2500, 1250, 6, 8109, 5997]
print(partition(data, k=3, j=1500))
# outputs {0: [5, 6, 7], 3: [5996, 5997, 5998, 5999], 5: [8109, 8110, 8111]}
Here is untested Python for the fastest approach that I can think of. It will return just pairs of first/last ticket for each range of interest found.
def grouped_tickets(tickets, min_group_size, partition_size):
    tickets = sorted(tickets)
    answer = {}
    min_ticket = -1
    max_ticket = -1
    next_partition = 0
    for ticket in tickets:
        if next_partition <= ticket or max_ticket + 1 < ticket:
            if min_group_size <= max_ticket - min_ticket + 1:
                partition = min_ticket // partition_size
                if partition in answer:
                    answer[partition].append((min_ticket, max_ticket))
                else:
                    answer[partition] = [(min_ticket, max_ticket)]
            # Find where the next partition is.
            next_partition = (ticket // partition_size) * partition_size + partition_size
            min_ticket = ticket
            max_ticket = ticket
        else:
            max_ticket = ticket

    # And don't lose the last group!
    if min_group_size <= max_ticket - min_ticket + 1:
        partition = min_ticket // partition_size
        if partition in answer:
            answer[partition].append((min_ticket, max_ticket))
        else:
            answer[partition] = [(min_ticket, max_ticket)]
    return answer
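Running it on the sample data from the question (my own check; remember this function returns first/last pairs rather than full lists):

data = [5, 5996, 8111, 1000, 1001, 5999, 8110, 7, 5998, 2500, 1250, 6, 8109, 5997]
print(grouped_tickets(data, min_group_size=3, partition_size=1500))
# {0: [(5, 7)], 3: [(5996, 5999)], 5: [(8109, 8111)]}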

How to sort K sorted arrays, with MERGE SORT

I know that this question has been asked, and there is a very nice elegant solution using a min heap.
MY question is how would one do this using the merge function of merge sort.
You already have an array of sorted arrays. So you should be able to merge all of them into one array in O(n log K) time, correct?
I just can't figure out how to do this!
Say I have
[ [5,6], [3,4], [1,2], [0] ]
Step 1: [ [3,4,5,6], [0,1,2] ]
Step2: [ [0,1,2,3,4,5,6] ]
Is there a simple way to do this? Is O(n log K) theoretically achievable with mergesort?
As others have said, using the min heap to hold the next items is the optimal way. It's called an N-way merge. Its complexity is O(n log k).
You can use a 2-way merge algorithm to sort k arrays. Perhaps the easiest way is to modify the standard merge sort so that it uses non-constant partition sizes. For example, imagine that you have 4 arrays with lengths 10, 8, 12, and 33. Each array is sorted. If you concatenated the arrays into one, you would have these partitions (the numbers are indexes into the array, not values):
[0-9][10-17][18-29][30-62]
The first merge of the first pass would use starting indexes 0 and 10. You would merge that into a new array, just as you would with the standard merge sort. The next merge of that same pass would start at positions 18 and 30. When you're done with the first pass, your output array contains:
[0-17][18-62]
Now your partitions start at 0 and 18. You merge those two into a single array and you're done.
The only real difference is that rather than starting with a partition size of 2 and doubling, you have non-constant partition sizes. As you make each pass, the new partition size is the sum of the sizes of the two partitions you used in the previous pass. This really is just a slight modification of the standard merge sort.
It will take log(k) passes to do the sort, and at each pass you look at all n items. The algorithm is O(n log k), but with a much higher constant than the N-way merge.
For implementation, build an array of integers that contains the starting indexes of each of your sub arrays. So in the example above you would have:
int[] partitions = [0, 10, 18, 30];
int numPartitions = 4;
Now you do your standard merge sort. But you select your partitions from the partitions array. So your merge would start with:
merge(inputArray, outputArray, part1Index, part2Index, outputStart)
{
    part1Start = partitions[part1Index];
    part2Start = partitions[part2Index];

    part1Length = part2Start - part1Start;
    // length of partition 2 runs up to the start of the following partition
    // (or to the end of the array for the last partition)
    part2Length = partitions[part2Index+1] - part2Start;

    // now merge part1 and part2 into the output array,
    // starting at outputStart
}
And your main loop would look something like:
while (numPartitions > 1)
{
    for (int p = 0; p < numPartitions; p += 2)
    {
        outputStart = partitions[p];
        merge(inputArray, outputArray, p, p+1, outputStart);
        // update partitions table: the merged partition starts where partition p started
        partitions[p/2] = partitions[p];
    }
    numPartitions /= 2;
}
That's the basic idea. You'll have to do some work to handle the dangling partition when the number is odd, but in general that's how it's done.
You can also do it by maintaining an array of arrays, and merging each two arrays into a new array, adding that to an output array of arrays. Lather, rinse, repeat.
You should note that when we say complexity is O(n log k), we assume that n means TOTAL number of elements in ALL of k arrays, i.e. number of elements in a final merged array.
For example, if you want to merge k arrays that contain n elements each, total number of elements in final array will be nk. So complexity will be O(nk log k).
There are different ways to merge arrays. To accomplish that task in N*Log(K) time you can use a structure called a heap (it is a good structure to implement a priority queue). I suppose that you already have it; if you don't, then pick up any available implementation: http://en.wikipedia.org/wiki/Heap_(data_structure)
Then you can do that like this:
1. We have A[1..K], the array of arrays to sort, Head[1..K], the current pointer for every array, and Count[1..K], the number of items in every array.
2. We have a heap of pairs (Value: int; NumberOfArray: int), empty at the start.
3. We put the first item of every array into the heap - the initialization phase.
4. Then we run a cycle:
5. Get a pair (Value, NumberOfArray) from the heap.
6. Value is the next value to output.
7. NumberOfArray is the array from which we need to take the next item (if any) and place it into the heap.
8. If the heap is not empty, repeat from step 5.
So for every item we operate only with a heap built from at most K items. That means we get the N*Log(K) complexity you asked for.
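For comparison with the implementations below, here is a compact sketch of that heap loop using Python's standard heapq module (my own illustration, not code from the answer above):

import heapq

def merge_k_sorted(arrays):
    # push (value, array index, element index) so ties break deterministically
    heap = [(arr[0], i, 0) for i, arr in enumerate(arrays) if arr]
    heapq.heapify(heap)
    result = []
    while heap:
        value, i, j = heapq.heappop(heap)
        result.append(value)
        if j + 1 < len(arrays[i]):
            heapq.heappush(heap, (arrays[i][j + 1], i, j + 1))
    return result

# merge_k_sorted([[5, 6], [3, 4], [1, 2], [0]]) -> [0, 1, 2, 3, 4, 5, 6]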
I implemented it in Python. The main idea is similar to merge sort. There are k arrays in lists. In the function mainMergeK, just divide lists (k) into left (k/2) and right (k/2). Therefore, the total number of partition levels is log(k). Regarding the function merge, it is easy to see that its runtime is O(n). Finally, we get O(n log k).
By the way, it also can be implemented in min heap, and there is a link: Merging K- Sorted Lists using Priority Queue
def mainMergeK(*lists):
    # implemented by k-way partition
    k = len(lists)
    if k > 1:
        mid = int(k / 2)
        B = mainMergeK(*lists[0: mid])
        C = mainMergeK(*lists[mid:])
        A = merge(B, C)
        print B, ' + ', C, ' = ', A
        return A
    return lists[0]

def merge(B, C):
    A = []
    p = len(B)
    q = len(C)
    i = 0
    j = 0
    while i < p and j < q:
        if B[i] <= C[j]:
            A.append(B[i])
            i += 1
        else:
            A.append(C[j])
            j += 1
    if i == p:
        for c in C[j:]:
            A.append(c)
    else:
        for b in B[i:]:
            A.append(b)
    return A

if __name__ == '__main__':
    x = mainMergeK([1, 3, 5], [2, 4, 6], [7, 8, 10], [9])
    print x
The output looks like this:
[1, 3, 5] + [2, 4, 6] = [1, 2, 3, 4, 5, 6]
[7, 8, 10] + [9] = [7, 8, 9, 10]
[1, 2, 3, 4, 5, 6] + [7, 8, 9, 10] = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Just do it like a 2-way merge except with K items; this results in O(NK). If you want O(N log K) you will need to use a min-heap to keep track of the K pointers (with the source array as metadata) in the algorithm below (a compact Python sketch follows the worked example):
Keep an array of K elements - i.e. K pointers showing the position in each array.
Mark all K elements as valid.
loop:
Compare the values at the K valid pointers. Take the minimum (choosing the lowest-index pointer on ties) and advance that pointer to the next value in its array. If the incremented pointer has run past the end of its array, mark it invalid.
Add the minimum value to the result.
Repeat until all K elements are invalid.
For example:
Positions Arrays
p1:0 Array 1: 0 5 10
p2:3 Array 2: 3 6 9
p3:2 Array 3: 2 4 6
Output (min of 0,3,2)=> 0. So output is {0}
Array
p1:5 0 5 10
p2:3 3 6 9
p3:2 2 4 6
Output (min of 5,3,2)=> 2. So {0,2}
Array
p1:5 0 5 10
p2:3 3 6 9
p3:4 2 4 6
Output (min of 5,3,4)=>3. So {0,2,3}
..and so on..until you come to a state where output is {0,2,3,4,5,6}
Array
p1:5 0 5 10
p2:9 3 6 9
p3:6 2 4 6
Output (min of 5,9,6)=>6. So {0,2,3,4,5,6}+{6}, and you mark p3 as "invalid" as you have exhausted the array. (Or, if you are using a min-heap, you will simply remove the min item, get its source array metadata: in this case array 3, see that it's done, and so not add anything new to the min-heap.)
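A compact sketch of that pointer walk (my own illustration, the O(NK) variant without a heap):

def kway_merge_linear(arrays):
    # one index per array; repeatedly take the smallest current value
    pos = [0] * len(arrays)
    out = []
    while True:
        best = None
        for i, arr in enumerate(arrays):
            if pos[i] < len(arr) and (best is None or arr[pos[i]] < arrays[best][pos[best]]):
                best = i
        if best is None:
            break  # every pointer is exhausted ("invalid")
        out.append(arrays[best][pos[best]])
        pos[best] += 1
    return out

# kway_merge_linear([[0, 5, 10], [3, 6, 9], [2, 4, 6]]) -> [0, 2, 3, 4, 5, 6, 6, 9, 10]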

Allocate an array of integers proportionally compensating for rounding errors

I have an array of non-negative values. I want to build an array of values whose sum is 20, so that they are proportional to the first array.
This would be an easy problem, except that I want the proportional array to sum to exactly
20, compensating for any rounding error.
For example, the array
input = [400, 400, 0, 0, 100, 50, 50]
would yield
output = [8, 8, 0, 0, 2, 1, 1]
sum(output) = 20
However, most cases are going to have a lot of rounding errors, like
input = [3, 3, 3, 3, 3, 3, 18]
naively yields
output = [1, 1, 1, 1, 1, 1, 10]
sum(output) = 16 (ouch)
Is there a good way to apportion the output array so that it adds up to 20 every time?
There's a very simple answer to this question: I've done it many times. After each assignment into the new array, you reduce the values you're working with as follows:
Call the first array A, and the new, proportional array B (which starts out empty).
Call the sum of A elements T
Call the desired sum S.
For each element of the array (i) do the following:
a. B[i] = round(A[i] / T * S). (rounding to nearest integer, penny or whatever is required)
b. T = T - A[i]
c. S = S - B[i]
That's it! Easy to implement in any programming language or in a spreadsheet.
The solution is optimal in that the resulting array's elements will never be more than 1 away from their ideal, non-rounded values. Let's demonstrate with your example:
T = 36, S = 20. B[1] = round(A[1] / T * S) = 2. (ideally, 1.666....)
T = 33, S = 18. B[2] = round(A[2] / T * S) = 2. (ideally, 1.666....)
T = 30, S = 16. B[3] = round(A[3] / T * S) = 2. (ideally, 1.666....)
T = 27, S = 14. B[4] = round(A[4] / T * S) = 2. (ideally, 1.666....)
T = 24, S = 12. B[5] = round(A[5] / T * S) = 2. (ideally, 1.666....)
T = 21, S = 10. B[6] = round(A[6] / T * S) = 1. (ideally, 1.666....)
T = 18, S = 9. B[7] = round(A[7] / T * S) = 9. (ideally, 10)
Notice that, comparing every value in B with its ideal value in parentheses, the difference is never more than 1.
It's also interesting to note that rearranging the elements in the array can result in different corresponding values in the resulting array. I've found that arranging the elements in ascending order is best, because it results in the smallest average percentage difference between actual and ideal.
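A minimal Python sketch of those steps (my own illustration, not code from the answer above):

def apportion(a, s):
    # a: input array, s: desired total; returns integers proportional to a summing to s
    t = sum(a)
    b = []
    for x in a:
        v = round(x / t * s) if t else 0  # nearest-integer rounding
        b.append(v)
        t -= x
        s -= v
    return b

# apportion([3, 3, 3, 3, 3, 3, 18], 20) -> [2, 2, 2, 2, 2, 1, 9]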
Your problem is similar to proportional representation, where you want to share N seats (in your case 20) among parties proportionally to the votes they obtain, in your case [3, 3, 3, 3, 3, 3, 18].
There are several methods used in different countries to handle the rounding problem. My code below uses the Hagenbach-Bischoff quota method used in Switzerland, which basically allocates the seats remaining after an integer division by (N+1) to the parties with the highest remainders:
def proportional(nseats, votes):
    """assign n seats proportionally to votes using Hagenbach-Bischoff quota
    :param nseats: int number of seats to assign
    :param votes: iterable of int or float weighting each party
    :result: list of ints seats allocated to each party
    """
    quota = sum(votes) / (1. + nseats)  # force float
    frac = [vote / quota for vote in votes]
    res = [int(f) for f in frac]
    n = nseats - sum(res)  # number of seats remaining to allocate
    if n == 0: return res  # done
    if n < 0: return [min(x, nseats) for x in res]  # see siamii's comment
    # give the remaining seats to the n parties with the largest remainder
    remainders = [ai - bi for ai, bi in zip(frac, res)]
    limit = sorted(remainders, reverse=True)[n - 1]
    # n parties with remainder larger than limit get an extra seat
    for i, r in enumerate(remainders):
        if r >= limit:
            res[i] += 1
            n -= 1  # attempt to handle perfect equality
            if n == 0: return res  # done
    raise  # should never happen
However this method doesn't always give the same number of seats to parties with perfect equality as in your case:
proportional(20,[3, 3, 3, 3, 3, 3, 18])
[2,2,2,2,1,1,10]
You have set 3 incompatible requirements. An integer-valued array proportional to [1,1,1] cannot be made to sum to exactly 20. You must choose to break one of the "sum to exactly 20", "proportional to input", and "integer values" requirements.
If you choose to break the requirement for integer values, then use floating point or rational numbers. If you choose to break the exact sum requirement, then you've already solved the problem. Choosing to break proportionality is a little trickier. One approach you might take is to figure out how far off your sum is, and then distribute corrections randomly through the output array. For example, if your input is:
[1, 1, 1]
then you could first make it sum as well as possible while still being proportional:
[7, 7, 7]
and since 20 - (7+7+7) = -1, choose one element to decrement at random:
[7, 6, 7]
If the error was 4, you would choose four elements to increment.
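A rough sketch of that correction step (my own illustration, not from the answer):

import random

def correct_sum(arr, target):
    # nudge random positions by 1 until the sum matches the target
    arr = list(arr)
    diff = target - sum(arr)
    step = 1 if diff > 0 else -1
    for _ in range(abs(diff)):
        arr[random.randrange(len(arr))] += step
    return arr

# correct_sum([7, 7, 7], 20) might return [7, 6, 7]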
A naïve solution that doesn't perform well, but will provide the right result...
Write an iterator that, given an array with eight integers (candidate) and the input array, outputs the index of the element that is farthest away from being proportional to the others (pseudocode):
function next_index(candidate, input)
    // Calculate weights
    for i in 1 .. 8
        w[i] = candidate[i] / input[i]
    end for

    // find the smallest weight
    min = infinity
    min_index = 0
    for i in 1 .. 8
        if w[i] < min then
            min = w[i]
            min_index = i
        end if
    end for
    return min_index
end function
Then just do this
result = [0, 0, 0, 0, 0, 0, 0, 0]
result[next_index(result, input)]++ for 1 .. 20
If there is no optimal solution, it'll skew towards the beginning of the array.
Using the approach above, you can reduce the number of iterations by rounding down (as you did in your example) and then just use the approach above to add what has been left out due to rounding errors:
result = <<approach using rounding down>>
while sum(result) < 20
result[next_index(result, input)]++
So the answers and comments above were helpful... particularly the decreasing-sum comment from @Frederik.
The solution I came up with takes advantage of the fact that for an input array v, sum(v_i * 20) is divisible by sum(v). So for each value in v, I multiply by 20 and divide by the sum. I keep the quotient and accumulate the remainder. Whenever the accumulator is greater than sum(v), I add one to the value. That way I'm guaranteed that all the remainders get rolled into the results.
Is that legible? Here's the implementation in Python:
def proportion(values, total):
    # set up by getting the sum of the values and starting
    # with an empty result list and accumulator
    sum_values = sum(values)
    new_values = []
    acc = 0
    for v in values:
        # for each value, find quotient and remainder
        q, r = divmod(v * total, sum_values)
        if acc + r < sum_values:
            # if the accumulator plus remainder is too small, just add and move on
            acc += r
        else:
            # we've accumulated enough to go over sum(values), so add 1 to result
            if acc > r:
                # add to previous
                new_values[-1] += 1
            else:
                # add to current
                q += 1
            acc -= sum_values - r
        # save the new value
        new_values.append(q)
    # accumulator is guaranteed to be zero at the end
    print new_values, sum_values, acc
    return new_values
(I added an enhancement that if the accumulator > remainder, I increment the previous value instead of the current value)
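For example (my own check of the code above, not from the original post), proportion([3, 3, 3, 3, 3, 3, 18], 20) returns [1, 2, 2, 1, 2, 2, 10], which indeed sums to 20.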

Select k random elements from a list whose elements have weights

Selecting without any weights (equal probabilities) is beautifully described here.
I was wondering if there is a way to convert this approach to a weighted one.
I am also interested in other approaches as well.
Update: Sampling without replacement
If the sampling is with replacement, you can use this algorithm (implemented here in Python):
import random

items = [(10, "low"),
         (100, "mid"),
         (890, "large")]

def weighted_sample(items, n):
    total = float(sum(w for w, v in items))
    i = 0
    w, v = items[0]
    while n:
        x = total * (1 - random.random() ** (1.0 / n))
        total -= x
        while x > w:
            x -= w
            i += 1
            w, v = items[i]
        w -= x
        yield v
        n -= 1
This is O(n + m) where m is the number of items.
Why does this work? It is based on the following algorithm:
def n_random_numbers_decreasing(v, n):
    """Like reversed(sorted(v * random() for i in range(n))),
    but faster because we avoid sorting."""
    while n:
        v *= random.random() ** (1.0 / n)
        yield v
        n -= 1
The function weighted_sample is just this algorithm fused with a walk of the items list to pick out the items selected by those random numbers.
This in turn works because the probability that n random numbers 0..v will all happen to be less than z is P = (z/v)^n. Solve for z, and you get z = v * P^(1/n). Substituting a random number for P picks the largest number with the correct distribution; and we can just repeat the process to select all the other numbers.
If the sampling is without replacement, you can put all the items into a binary heap, where each node caches the total of the weights of all items in that subheap. Building the heap is O(m). Selecting a random item from the heap, respecting the weights, is O(log m). Removing that item and updating the cached totals is also O(log m). So you can pick n items in O(m + n log m) time.
(Note: "weight" here means that every time an element is selected, the remaining possibilities are chosen with probability proportional to their weights. It does not mean that elements appear in the output with a likelihood proportional to their weights.)
Here's an implementation of that, plentifully commented:
import random

class Node:
    # Each node in the heap has a weight, value, and total weight.
    # The total weight, self.tw, is self.w plus the weight of any children.
    __slots__ = ['w', 'v', 'tw']
    def __init__(self, w, v, tw):
        self.w, self.v, self.tw = w, v, tw

def rws_heap(items):
    # h is the heap. It's like a binary tree that lives in an array.
    # It has a Node for each pair in `items`. h[1] is the root. Each
    # other Node h[i] has a parent at h[i>>1]. Each node has up to 2
    # children, h[i<<1] and h[(i<<1)+1]. To get this nice simple
    # arithmetic, we have to leave h[0] vacant.
    h = [None]                          # leave h[0] vacant
    for w, v in items:
        h.append(Node(w, v, w))
    for i in range(len(h) - 1, 1, -1):  # total up the tws
        h[i>>1].tw += h[i].tw           # add h[i]'s total to its parent
    return h

def rws_heap_pop(h):
    gas = h[1].tw * random.random()     # start with a random amount of gas

    i = 1                     # start driving at the root
    while gas >= h[i].w:      # while we have enough gas to get past node i:
        gas -= h[i].w         #   drive past node i
        i <<= 1               #   move to first child
        if gas >= h[i].tw:    #   if we have enough gas:
            gas -= h[i].tw    #     drive past first child and descendants
            i += 1            #     move to second child
    w = h[i].w                # out of gas! h[i] is the selected node.
    v = h[i].v

    h[i].w = 0                # make sure this node isn't chosen again
    while i:                  # fix up total weights
        h[i].tw -= w
        i >>= 1
    return v

def random_weighted_sample_no_replacement(items, n):
    heap = rws_heap(items)        # just make a heap...
    for i in range(n):
        yield rws_heap_pop(heap)  # and pop n items off it.
If the sampling is with replacement, use the roulette-wheel selection technique (often used in genetic algorithms):
sort the weights
compute the cumulative weights
pick a random number in [0,1]*totalWeight
find the interval in which this number falls into
select the elements with the corresponding interval
repeat k times
If the sampling is without replacement, you can adapt the above technique by removing the selected element from the list after each iteration, then re-normalizing the weights so that their sum is 1 (valid probability distribution function)
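A minimal sketch of the with-replacement selection step (my own illustration; sorting the weights isn't strictly required for correctness), using cumulative weights and a binary search:

import bisect, itertools, random

def roulette_pick(weights):
    # cumulative weights, then binary-search a uniform draw in [0, total)
    cum = list(itertools.accumulate(weights))
    r = random.random() * cum[-1]
    return bisect.bisect_right(cum, r)  # index of the selected element

# repeat k times for a sample with replacement, e.g.:
# sample = [roulette_pick([10, 100, 890]) for _ in range(3)]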
I know this is a very old question, but I think there's a neat trick to do this in O(n) time if you apply a little math!
The exponential distribution has two very useful properties.
Given n samples from different exponential distributions with different rate parameters, the probability that a given sample is the minimum is equal to its rate parameter divided by the sum of all rate parameters.
It is "memoryless". So if you already know the minimum, then the probability that any of the remaining elements is the 2nd-to-min is the same as the probability that if the true min were removed (and never generated), that element would have been the new min. This seems obvious, but I think because of some conditional probability issues, it might not be true of other distributions.
Using fact 1, we know that choosing a single element can be done by generating these exponential distribution samples with rate parameter equal to the weight, and then choosing the one with minimum value.
Using fact 2, we know that we don't have to re-generate the exponential samples. Instead, just generate one for each element, and take the k elements with lowest samples.
Finding the lowest k can be done in O(n). Use the Quickselect algorithm to find the k-th element, then simply take another pass through all elements and output all lower than the k-th.
A useful note: if you don't have immediate access to a library to generate exponential distribution samples, it can be easily done by: -ln(rand())/weight
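Here's a small sketch of that idea (my own illustration, not from the answer): draw one exponential key per item with rate equal to its weight, then keep the k items with the smallest keys. heapq.nsmallest is used for brevity; the Quickselect route described above would make this strictly O(n).

import heapq, math, random

def weighted_sample_exp(items, k):
    # items: list of (weight, value) pairs with weight > 0;
    # 1.0 - random.random() avoids math.log(0)
    keyed = [(-math.log(1.0 - random.random()) / w, v) for w, v in items]
    return [v for _, v in heapq.nsmallest(k, keyed)]

# weighted_sample_exp([(10, "low"), (100, "mid"), (890, "large")], 2)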
I've done this in Ruby
https://github.com/fl00r/pickup
require 'pickup'
pond = {
"selmon" => 1,
"carp" => 4,
"crucian" => 3,
"herring" => 6,
"sturgeon" => 8,
"gudgeon" => 10,
"minnow" => 20
}
pickup = Pickup.new(pond, uniq: true)
pickup.pick(3)
#=> [ "gudgeon", "herring", "minnow" ]
pickup.pick
#=> "herring"
pickup.pick
#=> "gudgeon"
pickup.pick
#=> "sturgeon"
If you want to generate large arrays of random integers with replacement, you can use piecewise linear interpolation. For example, using NumPy/SciPy:
import numpy
import scipy.interpolate

def weighted_randint(weights, size=None):
    """Given an n-element vector of weights, randomly sample
    integers up to n with probabilities proportional to weights"""
    n = weights.size
    # normalize so that the weights sum to unity
    weights = weights / numpy.linalg.norm(weights, 1)
    # cumulative sum of weights
    cumulative_weights = weights.cumsum()
    # piecewise-linear interpolating function whose domain is
    # the unit interval and whose range is the integers up to n
    f = scipy.interpolate.interp1d(
            numpy.hstack((0.0, cumulative_weights)),
            numpy.arange(n + 1), kind='linear')
    return f(numpy.random.random(size=size)).astype(int)
This is not effective if you want to sample without replacement.
Here's a Go implementation from geodns:
package foo

import (
	"log"
	"math/rand"
)

type server struct {
	Weight int
	data   interface{}
}

func foo(servers []server) []server {
	// servers list is already sorted by the Weight attribute

	// number of items to pick
	max := 4

	result := make([]server, max)

	sum := 0
	for _, r := range servers {
		sum += r.Weight
	}

	for si := 0; si < max; si++ {
		n := rand.Intn(sum + 1)
		s := 0

		for i := range servers {
			s += int(servers[i].Weight)
			if s >= n {
				log.Println("Picked record", i, servers[i])
				sum -= servers[i].Weight
				result[si] = servers[i]

				// remove the server from the list
				servers = append(servers[:i], servers[i+1:]...)
				break
			}
		}
	}

	return result
}
If you want to pick x elements from a weighted set without replacement such that elements are chosen with a probability proportional to their weights:
import random

def weighted_choose_subset(weighted_set, count):
    """Return a random sample of count elements from a weighted set.

    weighted_set should be a sequence of tuples of the form
    (item, weight), for example: [('a', 1), ('b', 2), ('c', 3)]

    Each element from weighted_set shows up at most once in the
    result, and the relative likelihood of two particular elements
    showing up is equal to the ratio of their weights.

    This works as follows:

    1.) Line up the items along the number line from [0, the sum
    of all weights) such that each item occupies a segment of
    length equal to its weight.

    2.) Randomly pick a number "start" in the range [0, total
    weight / count).

    3.) Find all the points "start + n/count" (for all integers n
    such that the point is within our segments) and yield the set
    containing the items marked by those points.

    Note that this implementation may not return each possible
    subset. For example, with the input ([('a': 1), ('b': 1),
    ('c': 1), ('d': 1)], 2), it may only produce the sets ['a',
    'c'] and ['b', 'd'], but it will do so such that the weights
    are respected.

    This implementation only works for nonnegative integral
    weights. The highest weight in the input set must be less
    than the total weight divided by the count; otherwise it would
    be impossible to respect the weights while never returning
    that element more than once per invocation.
    """
    if count == 0:
        return []

    total_weight = 0
    max_weight = 0
    borders = []
    for item, weight in weighted_set:
        if weight < 0:
            raise RuntimeError("All weights must be positive integers")
        # Scale up weights so dividing total_weight / count doesn't truncate:
        weight *= count
        total_weight += weight
        borders.append(total_weight)
        max_weight = max(max_weight, weight)

    step = int(total_weight / count)
    if max_weight > step:
        raise RuntimeError(
            "Each weight must be less than total weight / count")

    next_stop = random.randint(0, step - 1)

    results = []
    current = 0
    for i in range(count):
        while borders[current] <= next_stop:
            current += 1
        results.append(weighted_set[current][0])
        next_stop += step

    return results
In the question you linked to, Kyle's solution would work with a trivial generalization.
Scan the list and sum the total weights. Then the probability to choose an element should be:
1 - (1 - (#needed/(weight left)))/(weight at n). After visiting a node, subtract its weight from the total. Also, if you need n and have n left, you have to stop explicitly.
You can check that with everything having weight 1, this simplifies to Kyle's solution.
Edited: (had to rethink what twice as likely meant)
This one does exactly that with O(n) and no excess memory usage. I believe this is a clever and efficient solution easy to port to any language. The first two lines are just to populate sample data in Drupal.
function getNrandomGuysWithWeight($numitems){
  $q = db_query('SELECT id, weight FROM theTableWithTheData');
  $q = $q->fetchAll();

  $accum = 0;
  foreach($q as $r){
    $accum += $r->weight;
    $r->weight = $accum;
  }

  $out = array();

  while(count($out) < $numitems && count($q)){
    $n = rand(0,$accum);
    $lessaccum = NULL;
    $prevaccum = 0;
    $idxrm = 0;
    foreach($q as $i=>$r){
      if(($lessaccum == NULL) && ($n <= $r->weight)){
        $out[] = $r->id;
        $lessaccum = $r->weight- $prevaccum;
        $accum -= $lessaccum;
        $idxrm = $i;
      }else if($lessaccum){
        $r->weight -= $lessaccum;
      }
      $prevaccum = $r->weight;
    }
    unset($q[$idxrm]);
  }
  return $out;
}
I'm putting here a simple solution for picking 1 item; you can easily expand it for k items (Java style):
double random = Math.random();
double sum = 0;
for (int i = 0; i < items.length; i++) {
    val = items[i];
    sum += val.getValue();
    if (sum > random) {
        selected = val;
        break;
    }
}
I have implemented an algorithm similar to Jason Orendorff's idea in Rust here. My version additionally supports bulk operations: insert and remove (when you want to remove a bunch of items given by their ids, not through the weighted selection path) from the data structure in O(m + log n) time where m is the number of items to remove and n the number of items in stored.
Sampling without replacement with recursion - an elegant and very short solution in C#
//how many ways we can choose 4 out of 60 students, so that every time we choose different 4
class Program
{
    static void Main(string[] args)
    {
        int group = 60;
        int studentsToChoose = 4;

        Console.WriteLine(FindNumberOfStudents(studentsToChoose, group));
    }

    private static int FindNumberOfStudents(int studentsToChoose, int group)
    {
        if (studentsToChoose == group || studentsToChoose == 0)
            return 1;

        return FindNumberOfStudents(studentsToChoose, group - 1) + FindNumberOfStudents(studentsToChoose - 1, group - 1);
    }
}
I just spent a few hours trying to get behind the algorithms underlying sampling without replacement out there, and this topic is more complex than I initially thought. That's exciting! For the benefit of future readers (have a good day!) I document my insights here, including a ready-to-use function which respects the given inclusion probabilities further below. A nice and quick mathematical overview of the various methods can be found here: Tillé: Algorithms of sampling with equal or unequal probabilities. For example, Jason's method can be found on page 46. The caveat with his method is that the weights are not proportional to the inclusion probabilities, as also noted in the document. Actually, the i-th inclusion probability can be recursively computed as follows:
def inclusion_probability(i, weights, k):
    """
    Computes the inclusion probability of the i-th element
    in a randomly sampled k-tuple using Jason's algorithm
    (see https://stackoverflow.com/a/2149533/7729124)
    """
    if k <= 0: return 0
    cum_p = 0
    for j, weight in enumerate(weights):
        # compute the probability of j being selected considering the weights
        p = weight / sum(weights)

        if i == j:
            # if this is the target element, we don't have to go deeper,
            # since we know that i is included
            cum_p += p
        else:
            # if this is not the target element, then we compute the conditional
            # inclusion probability of i under the constraint that j is included
            cond_i = i if i < j else i-1
            cond_weights = weights[:j] + weights[j+1:]
            cond_p = inclusion_probability(cond_i, cond_weights, k-1)

            cum_p += p * cond_p
    return cum_p
And we can check the validity of the function above by comparing
In : for i in range(3): print(i, inclusion_probability(i, [1,2,3], 2))
0 0.41666666666666663
1 0.7333333333333333
2 0.85
to
In : import collections, itertools
In : sample_tester = lambda f: collections.Counter(itertools.chain(*(f() for _ in range(10000))))
In : sample_tester(lambda: random_weighted_sample_no_replacement([(1,'a'),(2,'b'),(3,'c')],2))
Out: Counter({'a': 4198, 'b': 7268, 'c': 8534})
One way - also suggested in the document above - to specify the inclusion probabilities is to compute the weights from them. The whole complexity of the question at hand stems from the fact that one cannot do that directly, since one basically has to invert the recursion formula; symbolically I claim this is impossible. Numerically it can be done using all kinds of methods, e.g. Newton's method. However the complexity of inverting the Jacobian using plain Python becomes unbearable quickly; I really recommend looking into numpy.random.choice in this case.
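For reference, a minimal numpy.random.choice call looks like this (my own illustration; as far as I know its replace=False mode draws sequentially with renormalized weights, so the resulting inclusion probabilities are not necessarily the exact k*w/sum(w) targets discussed here):

import numpy as np

weights = np.array([1.0, 2.0, 3.0])
p = weights / weights.sum()
sample = np.random.choice(['a', 'b', 'c'], size=2, replace=False, p=p)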
Luckily there is a method using plain Python which might or might not be sufficiently performant for your purposes; it works great if there aren't that many different weights. You can find the algorithm on pages 75 & 76. It works by splitting up the sampling process into parts with the same inclusion probabilities, i.e. we can use random.sample again! I am not going to explain the principle here since the basics are nicely presented on page 69. Here is the code with hopefully a sufficient amount of comments:
def sample_no_replacement_exact(items, k, best_effort=False, random_=None, ε=1e-9):
    """
    Returns a random sample of k elements from items, where items is a list of
    tuples (weight, element). The inclusion probability of an element in the
    final sample is given by
       k * weight / sum(weights).

    Note that the function raises if a inclusion probability cannot be
    satisfied, e.g the following call is obviously illegal:
       sample_no_replacement_exact([(1,'a'),(2,'b')],2)
    Since selecting two elements means selecting both all the time,
    'b' cannot be selected twice as often as 'a'. In general it can be hard to
    spot if the weights are illegal and the function does *not* always raise
    an exception in that case. To remedy the situation you can pass
    best_effort=True which redistributes the inclusion probability mass
    if necessary. Note that the inclusion probabilities will change
    if deemed necessary.

    The algorithm is based on the splitting procedure on page 75/76 in:
    http://www.eustat.eus/productosServicios/52.1_Unequal_prob_sampling.pdf
    Additional information can be found here:
    https://stackoverflow.com/questions/2140787/

    :param items: list of tuples of type weight,element
    :param k: length of resulting sample
    :param best_effort: fix inclusion probabilities if necessary,
                        (optional, defaults to False)
    :param random_: random module to use (optional, defaults to the
                    standard random module)
    :param ε: fuzziness parameter when testing for zero in the context
              of floating point arithmetic (optional, defaults to 1e-9)
    :return: random sample set of size k
    :exception: throws ValueError in case of bad parameters,
                throws AssertionError in case of algorithmic impossibilities
    """
    # random_ defaults to the random submodule
    if not random_:
        random_ = random

    # special case empty return set
    if k <= 0:
        return set()

    if k > len(items):
        raise ValueError("resulting tuple length exceeds number of elements (k > n)")

    # sort items by weight
    items = sorted(items, key=lambda item: item[0])

    # extract the weights and elements
    weights, elements = list(zip(*items))

    # compute the inclusion probabilities (short: π) of the elements
    scaling_factor = k / sum(weights)
    π = [scaling_factor * weight for weight in weights]

    # in case of best_effort: if a inclusion probability exceeds 1,
    # try to rebalance the probabilities such that:
    # a) no probability exceeds 1,
    # b) the probabilities still sum to k, and
    # c) the probability masses flow from top to bottom:
    #    [0.2, 0.3, 1.5] -> [0.2, 0.8, 1]
    # (remember that π is sorted)
    if best_effort and π[-1] > 1 + ε:
        # probability mass we still we have to distribute
        debt = 0.
        for i in reversed(range(len(π))):
            if π[i] > 1.:
                # an 'offender', take away excess
                debt += π[i] - 1.
                π[i] = 1.
            else:
                # case π[i] < 1, i.e. 'save' element
                # maximum we can transfer from debt to π[i] and still not
                # exceed 1 is computed by the minimum of:
                # a) 1 - π[i], and
                # b) debt
                max_transfer = min(debt, 1. - π[i])
                debt -= max_transfer
                π[i] += max_transfer
        assert debt < ε, "best effort rebalancing failed (impossible)"

    # make sure we are talking about probabilities
    if any(not (0 - ε <= π_i <= 1 + ε) for π_i in π):
        raise ValueError("inclusion probabilities not satisfiable: {}" \
                         .format(list(zip(π, elements))))

    # special case equal probabilities
    # (up to fuzziness parameter, remember that π is sorted)
    if π[-1] < π[0] + ε:
        return set(random_.sample(elements, k))

    # compute the two possible lambda values, see formula 7 on page 75
    # (remember that π is sorted)
    λ1 = π[0] * len(π) / k
    λ2 = (1 - π[-1]) * len(π) / (len(π) - k)
    λ = min(λ1, λ2)

    # there are two cases now, see also page 69
    # CASE 1
    # with probability λ we are in the equal probability case
    # where all elements have the same inclusion probability
    if random_.random() < λ:
        return set(random_.sample(elements, k))

    # CASE 2:
    # with probability 1-λ we are in the case of a new sample without
    # replacement problem which is strictly simpler,
    # it has the following new probabilities (see page 75, π^{(2)}):
    new_π = [
        (π_i - λ * k / len(π))
        /
        (1 - λ)
        for π_i in π
    ]
    new_items = list(zip(new_π, elements))

    # the first few probabilities might be 0, remove them
    # NOTE: we make sure that floating point issues do not arise
    #       by using the fuzziness parameter
    while new_items and new_items[0][0] < ε:
        new_items = new_items[1:]

    # the last few probabilities might be 1, remove them and mark them as selected
    # NOTE: we make sure that floating point issues do not arise
    #       by using the fuzziness parameter
    selected_elements = set()
    while new_items and new_items[-1][0] > 1 - ε:
        selected_elements.add(new_items[-1][1])
        new_items = new_items[:-1]

    # the algorithm reduces the length of the sample problem,
    # it is guaranteed that:
    # if λ = λ1: the first item has probability 0
    # if λ = λ2: the last item has probability 1
    assert len(new_items) < len(items), "problem was not simplified (impossible)"

    # recursive call with the simpler sample problem
    # NOTE: we have to make sure that the selected elements are included
    return sample_no_replacement_exact(
        new_items,
        k - len(selected_elements),
        best_effort=best_effort,
        random_=random_,
        ε=ε
    ) | selected_elements
Example:
In : sample_no_replacement_exact([(1,'a'),(2,'b'),(3,'c')],2)
Out: {'b', 'c'}
In : import collections, itertools
In : sample_tester = lambda f: collections.Counter(itertools.chain(*(f() for _ in range(10000))))
In : sample_tester(lambda: sample_no_replacement_exact([(1,'a'),(2,'b'),(3,'c'),(4,'d')],2))
Out: Counter({'a': 2048, 'b': 4051, 'c': 5979, 'd': 7922})
The weights sum up to 10, hence the inclusion probabilities compute to: a → 20%, b → 40%, c → 60%, d → 80%. (Sum: 200% = k.) It works!
Just one word of caution for the productive use of this function: it can be very hard to spot illegal inputs for the weights. An obvious illegal example is
In: sample_no_replacement_exact([(1,'a'),(2,'b')],2)
ValueError: inclusion probabilities not satisfiable: [(0.6666666666666666, 'a'), (1.3333333333333333, 'b')]
b cannot appear twice as often as a since both always have to be selected. There are more subtle examples. To avoid an exception in production just use best_effort=True, which rebalances the inclusion probability mass such that we always have a valid distribution. Obviously this might change the inclusion probabilities.
I used an associative map (weight, object). For example:
{
(10,"low"),
(100,"mid"),
(10000,"large")
}
total=10110
Pick a random number between 0 and 'total' and iterate over the keys until this number falls in the corresponding range.

Create many constrained, random permutation of a list

I need to make a random list of permutations. The elements can be anything, but assume that they are the integers 0 through x-1. I want to make y lists, each containing z elements. The rules are that no list may contain the same element twice and that, over all the lists, the number of times each element is used is the same (or as close as possible). For instance, if my elements are 0,1,2,3, y is 6, and z is 2, then one possible solution is:
0,3
1,2
3,0
2,1
0,1
2,3
Each row has only unique elements and no element has been used more than 3 times. If y were 7, then 2 elements would be used 4 times, the rest 3.
This could be improved, but it seems to do the job (Python):
import math, random

def get_pool(items, y, z):
    slots = y*z
    use_each_times = slots/len(items)
    exceptions = slots - use_each_times*len(items)
    if (use_each_times > y or
        exceptions > 0 and use_each_times+1 > y):
        raise Exception("Impossible.")
    pool = {}
    for n in items:
        pool[n] = use_each_times
    for n in random.sample(items, exceptions):
        pool[n] += 1
    return pool

def rebalance(ret, pool, z):
    max_item = None
    max_times = None
    for item, times in pool.items():
        if times > max_times:
            max_item = item
            max_times = times
    next, times = max_item, max_times
    candidates = []
    for i in range(len(ret)):
        item = ret[i]
        if next not in item:
            candidates.append( (item, i) )
    swap, swap_index = random.choice(candidates)
    swapi = []
    for i in range(len(swap)):
        if swap[i] not in pool:
            swapi.append( (swap[i], i) )
    which, i = random.choice(swapi)
    pool[next] -= 1
    pool[swap[i]] = 1
    swap[i] = next
    ret[swap_index] = swap

def plist(items, y, z):
    pool = get_pool(items, y, z)
    ret = []
    while len(pool.keys()) > 0:
        while len(pool.keys()) < z:
            rebalance(ret, pool, z)
        selections = random.sample(pool.keys(), z)
        for i in selections:
            pool[i] -= 1
            if pool[i] == 0:
                del pool[i]
        ret.append( selections )
    return ret

print plist([0,1,2,3], 6, 2)
Ok, one way to approximate that:
1 - shuffle your list
2 - take the first z elements to form the next row
3 - repeat (2) as long as you have numbers in the list
4 - if you don't have enough numbers to finish a row, reshuffle the original list and take the missing elements, making sure you don't retake numbers already in that row
5 - start over at step (2) as long as you need rows
I think this should be as random as you can make it and will for sure follow your criteria. Plus, you have very few tests for duplicate elements. (A rough Python sketch follows below.)
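A rough sketch of that procedure (my own illustration, assuming z does not exceed the number of elements):

import random

def approx_constrained(items, y, z):
    # deal z-sized rows from a shuffled copy; when the copy runs short,
    # top it up with a fresh shuffle, skipping values already left in the deck
    # so that no row can repeat an element
    rows, deck = [], []
    for _ in range(y):
        if len(deck) < z:
            deck = deck + [x for x in random.sample(items, len(items)) if x not in deck]
        row, deck = deck[:z], deck[z:]
        rows.append(row)
    return rows

# approx_constrained([0, 1, 2, 3], 6, 2) might give [[2, 0], [3, 1], [0, 3], [2, 1], [3, 0], [1, 2]]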
First, you can always randomly sort the list in the end, so let's not worry about making "random permutations" (hard); and just worry about 1) making permutations (easy) and 2) randomizing them (easy).
If you want "truly" random groups, you have to accept that randomization by nature doesn't really allow for the constraint of "even distribution" of results -- you may get that or you may get a run of similar-looking ones. If you really want even distribution, first make the sets evenly distributed, and then randomize them as a group.
Do you have to use each element in the set x evenly? It's not clear from the rules that I couldn't just make the following interpretation:
Note the following: "over all the lists, the number of times each elements is used is the same (or as close as possible)"
Based on this criterion, and the rule that z < x*, I postulate that you can simply enumerate all the items over all the lists. So you automatically make y lists of the items enumerated to position z. Your example doesn't fulfill the rule above as closely as my version will. Using your example of x={0,1,2,3}, y=6 and z=2, I get:
0,1 0,1 0,1 0,1 0,1 0,1
Now I didn't use 2 or 3, but you didn't say I had to use them all. If I had to use them all and I don't care to be able to prove that I am "as close as possible" to even usage, I would just enumerate across all the items through the lists, like this:
0,1 2,3 0,1 2,3 0,1 2,3
Finally, suppose I really do have to use all the elements. To calculate how many times each element can repeat, I just take (y*z)/(count of x). That way, I don't have to sit and worry about how to divide up the items in the list. If there is a remainder, or the result is less than 1, then I know that I will not get an exact number of repeats, so in those cases, it doesn't much matter to try to waste computational energy to make it perfect. I contend that the fastest result is still to just enumerate as above, and use the calculation here to show why either a perfect result was or wasn't achieved. A fancy algorithm to extract from this calculation how many positions will be duplicates could be achieved, but "it's too long to fit here in the margin".
*Each list has the same z number of elements, so it will be impossible to make lists where z is greater than x and still fulfill the rule that no list may contain the same element twice. Therefore, this rule demands that z cannot be greater than x.
Based on new details in the comments, the solution may simply be an implementation of a standard random permutation generation algorithm. There is a lengthy discussion of random permutation generation algorithms here:
http://www.techuser.net/randpermgen.html
(From Google search: random permutation generation)
This works in Ruby:
# list is the elements to be permuted
# y is the number of results desired
# z is the number of elements per result
# equalizer keeps track of who got used how many times
def constrained_permutations list, y, z
  list.uniq! # Never trust the user. We want no repetitions.
  equalizer = {}
  list.each { |element| equalizer[element] = 0 }
  results = []
  # Do this until we get as many results as desired
  while results.size < y
    pool = []
    puts pool
    least_used = equalizer.each_value.min
    # Find how used the least used element was
    while pool.size < z
      # Do this until we have enough elements in this resultset
      element = nil
      while element.nil?
        # If we run out of "least used elements", then we need to increment
        # our definition of "least used" by 1 and keep going.
        element = list.shuffle.find do |x|
          !pool.include?(x) && equalizer[x] == least_used
        end
        least_used += 1 if element.nil?
      end
      equalizer[element] += 1
      # This element has now been used one more time.
      pool << element
    end
    results << pool
  end
  return results
end
Sample usage:
constrained_permutations [0,1,2,3,4,5,6], 6, 2
=> [[4, 0], [1, 3], [2, 5], [6, 0], [2, 5], [3, 6]]
constrained_permutations [0,1,2,3,4,5,6], 6, 2
=> [[4, 5], [6, 3], [0, 2], [1, 6], [5, 4], [3, 0]]
http://en.wikipedia.org/wiki/Fisher-Yates_shuffle

Resources