Finding positions of milestones given their pairwise distances - algorithm

There is a straight road with 'n' milestones. You are given
an array with the distances between all pairs of milestones, in
some random order. Find the positions of the milestones.
Example:
Consider a road with 4 milestones (a,b,c,d) :
a ---3Km--- b ---5Km--- c ---2Km--- d
Distance between a and b is 3
Distance between a and c is 8
Distance between a and d is 10
Distance between b and c is 5
Distance between b and d is 7
Distance between c and d is 2
All the above values are given in a random order, say 7, 10, 5, 2, 8, 3.
The output must be 3, 5, 2 or 2, 5, 3.
Assuming the length of the given array is n, my idea is:
Calculate the number of milestones, say x, by solving the quadratic equation x(x-1)/2 = n.
There are P(n, x-1) possibilities.
Validate every possible permutation.
Is there any better solution for this problem?
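For reference, a minimal sketch of that first step, recovering the number of milestones x from the array length n via x(x-1)/2 = n (the helper name is just for illustration):
import math

def count_milestones(dists):
    """Solve x*(x-1)/2 == len(dists) for x, the number of milestones."""
    n = len(dists)
    x = (1 + math.isqrt(1 + 8 * n)) // 2
    assert x * (x - 1) // 2 == n, "array length is not a valid number of pairs"
    return x

print(count_milestones([7, 10, 5, 2, 8, 3]))  # 4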

I can't find an algorithm for this that has good worst-case behaviour. However, the following heuristic may be useful in practice.
Say the first landmark is at position zero; the last landmark is then at position max(distances). Every other landmark's position must appear in the input array (it is that landmark's distance to the first one), and its distance to the last landmark must also appear.
Let's build a graph on these possible landmark positions.
If a and b are two possible landmark positions, then either |a-b| appears in the input array or at least one of a and b isn't a landmark position. Draw an edge between a and b if |a-b| appears in the input array.
Iteratively filter out landmark positions whose degree is too small.
You wind up with something that's almost a clique-finding problem. Find an appropriately large clique; it corresponds to a positioning of the landmarks. Check that this positioning actually gives rise to the right distances.
At worst here, you've narrowed down the possible landmark positions to a more manageable set.
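As a rough sketch of this heuristic (illustrative code only, assuming the first landmark sits at position zero):

def prune_candidates(dists):
    """Candidate positions are 0, max(dists) and every given distance.
    Keep an edge a-b only if |a-b| is itself one of the given distances,
    then repeatedly drop candidates with too few neighbours."""
    dist_set = set(dists)
    candidates = {0, max(dists)} | dist_set
    neighbours = {c: {d for d in candidates if d != c and abs(c - d) in dist_set}
                  for c in candidates}
    # the number of landmarks x satisfies x*(x-1)/2 == len(dists)
    x = int((1 + (1 + 8 * len(dists)) ** 0.5) // 2)
    changed = True
    while changed:
        changed = False
        for c in list(neighbours):
            if len(neighbours[c]) < x - 1:  # a true landmark sees all x-1 other landmarks
                for d in neighbours.pop(c):
                    neighbours[d].discard(c)
                changed = True
    return neighbours  # an x-clique in this graph is a candidate placement

print(sorted(prune_candidates([7, 10, 5, 2, 8, 3])))  # all seven candidates survive on this small example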

OK, I will give my idea, which could reduce the number of permutations.
Finding n is simple; you could even run a reverse factorial: https://math.stackexchange.com/questions/171882/is-there-a-way-to-reverse-factorials
Assumption:
Currently I have no idea how to find the individual gap values, but assume you have found them somehow. After finding n and the elements, we can apply the following to partially reduce the computation.
Consider a problem like,
|<--3-->|<--6-->|<--1-->|<--7-->|
A       B       C       D       E
Now, as you said, the pairwise distances are given (in random order too): 3, 9, 10, 17, 6, 7, 14, 1, 8, 7.
Take any candidate ordering of the gaps (mostly it will be wrong), say:
6-3-1-7 (our chosen ordering)
Now,
6+3 -> 9 is in the list, so Yes // checking whether the two numbers could possibly be adjacent
3+1 -> 4 is NOT in the list, so they cannot be adjacent
1+7 -> 8 is in the list, so Yes
6+7 -> 13 is NOT in the list, so they cannot be adjacent
Heart of the concept:
For two numbers to be adjacent, their sum must be present in the list. If the sum is not in the list, then the numbers are not adjacent.
Optimization :
So 3 and 1 will never be neighbours, and neither will 6 and 7.
Hence, while permuting, we can eliminate the
*31*, *13*, *76* and *67* combinations, where * is zero or more digits preceding or succeeding.
i.e. instead of trying all 4! = 24 permutations, we only need to check 3617, 1637, 3716 and 1736, i.e. only 4 of them, so about 83% of the computation is saved.
Worst case :
Say in your case it is 5,2,3.
Now, we have to perform this operation.
5+2 -> 7 There
2+3 -> 5 There
5+3 -> 8 There
Unfortunately, your example is a worst case: every sum appears in the list, so nothing can be pruned in cases like this.
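A minimal sketch of this pruning idea (illustrative only; it assumes the gap values are known, as stated above, and it enumerates each ordering and its reverse separately):

from itertools import permutations

def prune_orders(gaps, dists):
    """Yield only the gap orderings in which every adjacent pair of gaps
    has its sum present in the list of pairwise distances."""
    dist_set = set(dists)
    for order in permutations(gaps):
        if all(order[i] + order[i + 1] in dist_set for i in range(len(order) - 1)):
            yield order

# For the example above this keeps 3-6-1-7, 1-6-3-7, 3-7-1-6, 1-7-3-6
# and their reversals, instead of all 4! orderings.
print(list(prune_orders((3, 6, 1, 7), [3, 9, 10, 17, 6, 7, 14, 1, 8, 7])))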

Place the milestones one by one
EDIT: See the new implementation below (with timings).
The key idea is the following:
Build a list of milestones one by one, starting with one milestone at 0 and a milestone at max(distances). Let's call them endpoints.
The largest distance that's not yet accounted for has to be measured from one of the endpoints, which leaves at most two positions for the corresponding milestone.
The following Python program simply checks whether the milestone can be placed from the left endpoint and, if not, tries to place it from the right endpoint (always using the largest distance that's not accounted for by the already placed milestones). This has to be done with backtracking, as placements may turn out to be wrong later.
Note that there is another (mirrored) solution that is not output. (I don't think there can be more than 2 solutions (symmetric), but I haven't proven it.)
I consider the positions of the milestones as the solution and use the helper function steps for the output format desired by the OP.
from collections import Counter

def milestones_from_dists(dists, milestones=None):
    if not dists:  # all dists are accounted for: we have a solution!
        return milestones
    if milestones is None:
        milestones = [0]
    max_dist = max(dists)
    solution_from_left = try_milestone(dists, milestones, min(milestones) + max_dist)
    if solution_from_left is not None:
        return solution_from_left
    return try_milestone(dists, milestones, max(milestones) - max_dist)

def try_milestone(dists, milestones, new_milestone):
    unused_dists = Counter(dists)
    for milestone in milestones:
        dist = abs(milestone - new_milestone)
        if unused_dists[dist]:
            unused_dists[dist] -= 1
            if unused_dists[dist] == 0:
                del unused_dists[dist]
        else:
            return None  # no solution
    return milestones_from_dists(unused_dists, milestones + [new_milestone])

def steps(milestones):
    milestones = sorted(milestones)
    return [milestones[i] - milestones[i - 1] for i in range(1, len(milestones))]
Example usage:
>>> print(steps(milestones_from_dists([7, 10, 5, 2, 8, 3])))
[3, 5, 2]
>>> import random
>>> milestones = random.sample(range(1000), 100)
>>> dists = [abs(x - y) for x in milestones for y in milestones if x < y]
>>> solution = sorted(milestones_from_dists(dists))
>>> solution == sorted(milestones)
True
>>> print(solution)
[0, 10, 16, 23, 33, 63, 72, 89, 97, 108, 131, 146, 152, 153, 156, 159, 171, 188, 210, 211, 212, 215, 219, 234, 248, 249, 273, 320, 325, 329, 339, 357, 363, 387, 394, 396, 402, 408, 412, 418, 426, 463, 469, 472, 473, 485, 506, 515, 517, 533, 536, 549, 586, 613, 614, 615, 622, 625, 630, 634, 640, 649, 651, 653, 671, 674, 697, 698, 711, 715, 720, 730, 731, 733, 747, 758, 770, 772, 773, 776, 777, 778, 783, 784, 789, 809, 828, 832, 833, 855, 861, 873, 891, 894, 918, 952, 953, 968, 977, 979]
>>> print(steps(solution))
[10, 6, 7, 10, 30, 9, 17, 8, 11, 23, 15, 6, 1, 3, 3, 12, 17, 22, 1, 1, 3, 4, 15, 14, 1, 24, 47, 5, 4, 10, 18, 6, 24, 7, 2, 6, 6, 4, 6, 8, 37, 6, 3, 1, 12, 21, 9, 2, 16, 3, 13, 37, 27, 1, 1, 7, 3, 5, 4, 6, 9, 2, 2, 18, 3, 23, 1, 13, 4, 5, 10, 1, 2, 14, 11, 12, 2, 1, 3, 1, 1, 5, 1, 5, 20, 19, 4, 1, 22, 6, 12, 18, 3, 24, 34, 1, 15, 9, 2]
New implementation incorporating suggestions from the comments
from collections import Counter

def milestones_from_dists(dists):
    dists = Counter(dists)
    right_end = max(dists)
    milestones = [0, right_end]
    del dists[right_end]
    sorted_dists = sorted(dists)
    add_milestones_from_dists(dists, milestones, sorted_dists, right_end)
    return milestones

def add_milestones_from_dists(dists, milestones, sorted_dists, right_end):
    if not dists:
        return True  # success!
    # find max dist that's not fully used yet
    deleted_dists = []
    while not dists[sorted_dists[-1]]:
        deleted_dists.append(sorted_dists[-1])
        del sorted_dists[-1]
    max_dist = sorted_dists[-1]
    # for both possible positions, check if this fits the already placed milestones
    for new_milestone in [max_dist, right_end - max_dist]:
        used_dists = Counter()  # for backing up
        for milestone in milestones:
            dist = abs(milestone - new_milestone)
            if dists[dist]:  # this distance is still available
                dists[dist] -= 1
                if dists[dist] == 0:
                    del dists[dist]
                used_dists[dist] += 1
            else:  # no solution
                dists.update(used_dists)  # back up
                sorted_dists.extend(reversed(deleted_dists))
                break
        else:  # unbroken
            milestones.append(new_milestone)
            success = add_milestones_from_dists(dists, milestones, sorted_dists, right_end)
            if success:
                return True
            dists.update(used_dists)  # back up
            sorted_dists.extend(reversed(deleted_dists))
            del milestones[-1]
    return False

def steps(milestones):
    milestones = sorted(milestones)
    return [milestones[i] - milestones[i - 1] for i in range(1, len(milestones))]
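Example usage (the new implementation should reproduce the OP's example the same way as the first one, up to mirroring):
>>> print(steps(milestones_from_dists([7, 10, 5, 2, 8, 3])))
[3, 5, 2]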
Timings for random milestones in the range from 0 to 100000:
n = 10: 0.00s
n = 100: 0.05s
n = 1000: 3.20s
n = 10000: still takes too long.

The largest distance in the given set of distances is the distance between the first and the last milestone, i.e. 10 in your example. You can find it in O(n) time.
For every other milestone (every one except the first and the last), you can find its distances from the first and the last milestone by looking for a pair of distances that sums to the maximum distance, i.e. 7+3 = 10 and 8+2 = 10 in your example. You can find these pairs trivially in O(n^2).
Now, if you think of the road as running from east to west, what remains is that for every interior milestone (all but the first and the last), you need to know which of its two distances (e.g. 7 and 3, or 8 and 2) points east (the other then points west).
You can trivially enumerate all the possibilities in O(2^(n-2)) time, and for every possible orientation check that you get the same set of distances as in the problem. This is faster than enumerating through all permutations of the smallest distances in the set.
For example, if you assume both 7 and 8 point west, then the distance between the two interior milestones is 1 mile, which is not in the problem set. So it must be 7 towards the west and 8 towards the east, leading to the solution (or its mirror):
WEST | -- 2 -- | -- 5 -- | -- 3 -- | EAST
For a larger set of milestones, you would just start guessing the orientations of the two distances to the endpoints, and whenever you produce two milestones whose distance is not in the problem set, you backtrack.
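A minimal sketch of that enumeration (illustrative only; it assumes the endpoint distance pairs have already been extracted, e.g. [(3, 7), (8, 2)] for the example, and it brute-forces every orientation rather than backtracking):

from collections import Counter
from itertools import product

def check_orientations(pairs, dists):
    """Try all 2^(number of interior milestones) orientations; keep the ones
    that reproduce the given multiset of pairwise distances."""
    total = max(dists)
    target = Counter(dists)
    solutions = []
    for choice in product(*[(a, total - a) for a, _ in pairs]):
        positions = [0, total] + list(choice)   # each choice is a distance from the west end
        produced = Counter(abs(x - y) for i, x in enumerate(positions)
                           for y in positions[i + 1:])
        if produced == target:
            solutions.append(sorted(positions))
    return solutions

print(check_orientations([(3, 7), (8, 2)], [7, 10, 5, 2, 8, 3]))
# [[0, 3, 8, 10], [0, 2, 7, 10]]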

Related

How to calculate the optimal time interval in multiple time series forecasts?

First things first: I am new to the world of statistics.
Problem statement:
I have three predicted time series. These series represent three independent scores, and I want to select the timeslot over which their sum is minimized. The length of the timeslot is already given. I have read about confidence-based selection of prediction intervals for such problems, but I used an LSTM to predict the series, which may rule that approach out; I also think computing a prediction interval applies to a single time series.
e.g: Consider below arrays represent the three predicted time series.
arr1 = [23, 34, 16, 5, 45, 10, 2, 34, 56, 11]
arr2 = [123, 100, 124, 245, 125, 120, 298, 124, 175, 200]
arr3 = [1, 3, 10, 7, 2, 2, 10, 7, 8, 12]
time slot length = 3
As you can see, the optimal timeslot for arr1 is [5, 7], for arr2 it is [0, 2], and for arr3 it is [3, 5], but I need a single timeslot for all three time series.
Questions:
Which error paradigm should I employ to select the optimal timeslot?
I am also given weights (positive real numbers in [0, 1]) which represent the importance of each time series in deciding the timeslot. How do I incorporate them into the error paradigm?
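For concreteness, here is a minimal sketch of the selection step under the simplest reading of the objective: pick the window that minimizes the weighted sum of the three series (function and parameter names are made up for illustration):

import numpy as np

def best_common_timeslot(series, weights, window):
    """Return (start, end) of the window minimizing the weighted sum of all series."""
    combined = sum(w * np.asarray(s, dtype=float) for w, s in zip(weights, series))
    window_sums = np.convolve(combined, np.ones(window), mode='valid')  # sliding-window sums
    start = int(np.argmin(window_sums))
    return start, start + window - 1

arr1 = [23, 34, 16, 5, 45, 10, 2, 34, 56, 11]
arr2 = [123, 100, 124, 245, 125, 120, 298, 124, 175, 200]
arr3 = [1, 3, 10, 7, 2, 2, 10, 7, 8, 12]
print(best_common_timeslot([arr1, arr2, arr3], weights=[1.0, 1.0, 1.0], window=3))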

Algorithm to efficiently select rows from a matrix such that column totals are equal

The practical application of this problem is group assignment in a psychology study, but the theoretical formulation is this:
I have a matrix (the actual matrix is 27x72, but I'll pick a 4x8 as an example):
1 0 1 0
0 1 0 1
1 1 0 0
0 1 1 0
0 0 1 1
1 0 1 0
1 1 0 0
0 1 0 1
I want to pick half of the rows out of this matrix such that the column totals are equal (thus effectively creating two matrices with equivalent column totals). I cannot rearrange values within the rows.
I have tried some brute force solutions, but my matrix is too large for that to be effective, even having chosen some random restrictions first. It seems to me that the search space could be constrained with a better algorithm, but I haven't been able to think of one thus far. Any ideas? It is also possible that there is no solution, so an algorithm would have to be able to deal with that. I have been working in R, but I could switch to python easily.
Update
Found a solution thanks to ljeabmreosn. Karmarkar-Karp worked great as an algorithm, and converting the rows to base 73 was inspired. I had a surprisingly hard time finding code that would actually give me the subsequences rather than just the final difference (maybe most people are only interested in this problem in the abstract?). Anyway, this was the code:
First I converted my rows into base 73 as the poster suggested. To do this I used the basein package in Python, defining an alphabet with 73 characters and then using the basein.decode function to convert to decimal.
For the algorithm, I just added code to print the sub-sequence indices from this mailing list message from Tim Peters: https://mail.python.org/pipermail/tutor/2001-August/008098.html
from __future__ import nested_scopes
import sys
import bisect

class _Num:
    def __init__(self, value, index):
        self.value = value
        self.i = index
    def __lt__(self, other):
        return self.value < other.value

# This implements the Karmarkar-Karp heuristic for partitioning a set
# in two, i.e. into two disjoint subsets s.t. their sums are
# approximately equal.  It produces only one result, in O(N*log N)
# time.  A remarkable property is that it loves large sets: in
# general, the more numbers you feed it, the better it does.
class Partition:
    def __init__(self, nums):
        self.nums = nums
        sorted = [_Num(nums[i], i) for i in range(len(nums))]
        sorted.sort()
        self.sorted = sorted

    def run(self):
        sorted = self.sorted[:]
        N = len(sorted)
        connections = [[] for i in range(N)]
        while len(sorted) > 1:
            bigger = sorted.pop()
            smaller = sorted.pop()
            # Force these into different sets, by "drawing a
            # line" connecting them.
            i, j = bigger.i, smaller.i
            connections[i].append(j)
            connections[j].append(i)
            diff = bigger.value - smaller.value
            assert diff >= 0
            bisect.insort(sorted, _Num(diff, i))
        # Now sorted contains only 1 element x, and x.value is
        # the difference between the subsets' sums.

        # Theorem: The connections matrix represents a spanning tree
        # on the set of index nodes, and any tree can be 2-colored.
        # 2-color this one (with "colors" 0 and 1).
        index2color = [None] * N
        def color(i, c):
            if index2color[i] is not None:
                assert index2color[i] == c
                return
            index2color[i] = c
            for j in connections[i]:
                color(j, 1-c)
        color(0, 0)

        # Partition the indices by their colors.
        subsets = [[], []]
        for i in range(N):
            subsets[index2color[i]].append(i)
        return subsets

if not sys.argv:
    print "error no arguments provided"
elif sys.argv[1]:
    f = open(sys.argv[1], "r")
    x = [int(line.strip()) for line in f]

N = 50
import math

p = Partition(x)
s, t = p.run()
sum1 = 0L
sum2 = 0L
for i in s:
    sum1 += x[i]
for i in t:
    sum2 += x[i]
print "Set 1:"
print s
print "Set 2:"
print t
print "Set 1 sum", repr(sum1)
print "Set 2 sum", repr(sum2)
print "difference", repr(abs(sum1 - sum2))
This gives the following output:
Set 1:
[0, 3, 5, 6, 9, 10, 12, 15, 17, 19, 21, 22, 24, 26, 28, 31, 32, 34, 36, 38, 41, 43, 45, 47, 48, 51, 53, 54, 56, 59, 61, 62, 65, 66, 68, 71]
Set 2:
[1, 2, 4, 7, 8, 11, 13, 14, 16, 18, 20, 23, 25, 27, 29, 30, 33, 35, 37, 39, 40, 42, 44, 46, 49, 50, 52, 55, 57, 58, 60, 63, 64, 67, 69, 70]
Set 1 sum 30309344369339288555041174435706422018348623853211009172L
Set 2 sum 30309344369339288555041174435706422018348623853211009172L
difference 0L
Which provides the indices of the proper subsets in a few seconds. Thanks everybody!
Assuming each entry in the matrix can either be 0 or 1, this problem seems to be in the same family as the Partition Problem which only has a pseudo-polynomial time algorithm. Let r be the number of rows in the matrix and c be the number of columns in the matrix. Then, encode each row to a c-digit number of base r+1. This is to ensure when adding each encoding, there is no need to carry, thus equivalent numbers in this base will equate to two sets of rows whose column sums are equivalent. So in your example, you would convert each row into a 4-digit number of base 9. This would yield the numbers (converted into base 10):
1010 (base 9) => 738 (base 10)
0101 (base 9) => 82 (base 10)
1100 (base 9) => 810 (base 10)
0110 (base 9) => 90 (base 10)
0011 (base 9) => 10 (base 10)
1010 (base 9) => 738 (base 10)
1100 (base 9) => 810 (base 10)
0101 (base 9) => 82 (base 10)
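For illustration, a minimal sketch of this encoding step:

def encode_rows(matrix):
    """Read each 0/1 row as a c-digit number in base r+1 (r = number of rows),
    so that adding row encodings never carries between column positions."""
    base = len(matrix) + 1
    encoded = []
    for row in matrix:
        value = 0
        for bit in row:
            value = value * base + bit
        encoded.append(value)
    return encoded

example = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0], [0, 1, 1, 0],
           [0, 0, 1, 1], [1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 0, 1]]
print(encode_rows(example))  # [738, 82, 810, 90, 10, 738, 810, 82]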
Although you probably couldn't use the pseudo-polynomial time algorithm with this method, you could use a simple heuristic with some decision trees to try to speed up the brute force. Using the numbers above, you could try the Karmarkar-Karp heuristic. Implemented below is the first step of the algorithm in Python 3:
# Sorted (descending) => 810, 810, 738, 738, 90, 82, 82, 10
from queue import PriorityQueue

def karmarkar_karp_partition(arr):
    pqueue = PriorityQueue()
    for e in arr:
        pqueue.put_nowait((-e, e))
    for _ in range(len(arr) - 1):
        _, first = pqueue.get_nowait()
        _, second = pqueue.get_nowait()
        diff = first - second
        pqueue.put_nowait((-diff, diff))
    return pqueue.get_nowait()[1]
Here is the algorithm fully implemented. Note that this method is simply a heuristic and may fail to find the best partition.

Generating a subset uniformly at random?

Here is an implementation of a combinatorial algorithm to choose a subset of an n-set uniformly at random. Since there are 2^n subsets of an n-set, each subset should have probability 2^(-n) of being selected.
I believe I have implemented the algorithm correctly (please let me know if there is a bug somewhere). However, when I run the program with Java 7 on my Linux box, I get results that I cannot quite explain. The mystery seems to be the random number generator. I understand that one needs to run the program a 'large number' of times to see the distribution approach uniformity; the question, however, is how large is large. A few runs I did suggest that unless the experiment is repeated >= 1 billion times, the distribution of chosen subsets is quite nonuniform.
The algorithm is based on Prof. Herbert Wilf's combinatorial algorithms book where the implementation (slightly different) is done in Fortran and the distribution is more-or-less uniform even when the program is run only 1280 times.
Here are a few sample runs (there's some variation among the run when n is constant) to get a random subset of a 4-set:
Number of times experiment is done n = 1280
Number of times experiment is done n = 12,800
Number of times experiment is done n = 128,000 (still 8 subsets only!)
Number of times experiment is done n = 1,280,000
Number of times experiment is done n = 12,800,000 (now it starts making sense)
Number of times experiment is done n = 1,280,000,000 (this is okay!)
Would you expect such performance? How could Prof. Wilf achieve similar results with only 1280 iterations of an equivalent program?
Every time you call ranInt(), you reset the RNG. Therefore in the long run, these numbers are no longer random.
Move Random r = new Random(System.currentTimeMillis()); to the top and add static to it:
class RandomSubsetSimulation {
static Random r = new Random(System.currentTimeMillis());
public static void main(String[] args) { ...
I am able to get the following results with an 8-set:
Total: 1000, number of subsets with a frequency > 0: 256
Total # of subsets possible: 256
Full results with 4-set
Frequencies of chosen subsets ....
[3] : 76, 4, 5.94
[4] : 72, 8, 5.63
[] : 83, -3, 6.48
[1] : 90, -10, 7.03
[2] : 80, 0, 6.25
[3, 4] : 86, -6, 6.72
[2, 3] : 88, -8, 6.88
[2, 4] : 55, 25, 4.30
[1, 2, 3] : 99, -19, 7.73
[1, 2, 4] : 75, 5, 5.86
[2, 3, 4] : 76, 4, 5.94
[1, 3] : 85, -5, 6.64
[1, 2] : 94, -14, 7.34
[1, 4] : 72, 8, 5.63
[1, 2, 3, 4] : 71, 9, 5.55
[1, 3, 4] : 78, 2, 6.09
Total: 1280, number of subsets with a frequency > 0: 16
Total # of subsets possible: 16
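The same point as a short sketch in Python: create the RNG once and flip one fair coin per element, so each of the 2^n subsets is equally likely.

import random
from collections import Counter

rng = random.Random()  # created once, not on every call

def random_subset(n):
    """Include each element of {1, ..., n} independently with probability 1/2."""
    return tuple(i for i in range(1, n + 1) if rng.random() < 0.5)

freq = Counter(random_subset(4) for _ in range(1280))
print(len(freq), "distinct subsets seen out of", 2 ** 4)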

Spread objects evenly over multiple collections

The scenario is that there are n objects, of different sizes, unevenly spread over m buckets. The size of a bucket is the sum of all of the object sizes that it contains. It now happens that the sizes of the buckets are varying wildly.
What would be a good algorithm if I want to spread those objects evenly over the buckets so that the total size of each bucket is about the same? It would be nice if the algorithm leaned towards moving less total size rather than achieving a perfectly even spread.
I have this naïve, ineffective, and buggy solution in Ruby.
buckets = [ [10, 4, 3, 3, 2, 1], [5, 5, 3, 2, 1], [3, 1, 1], [2] ]
avg_size = buckets.flatten.reduce(:+) / buckets.count + 1
large_buckets = buckets.take_while {|arr| arr.reduce(:+) >= avg_size}.to_a
large_buckets.each do |large|
  smallest = buckets.last
  until ((small_sum = smallest.reduce(:+)) >= avg_size)
    break if small_sum + large.last >= avg_size
    smallest << large.pop
  end
  buckets.insert(0, buckets.pop)
end
=> [[3, 1, 1, 1, 2, 3], [2, 1, 2, 3, 3], [10, 4], [5, 5]]
I believe this is a variant of the bin packing problem, and as such it is NP-hard. Your answer is essentially a variant of the first fit decreasing heuristic, which is a pretty good heuristic. That said, I believe that the following will give better results.
Sort each individual bucket in descending size order, using a balanced binary tree.
Calculate average size.
Sort the buckets with size less than average (the "too-small buckets") in descending size order, using a balanced binary tree.
Sort the buckets with size greater than average (the "too-large buckets") in order of the size of their greatest elements, using a balanced binary tree (so the bucket with {9, 1} would come first and the bucket with {8, 5} would come second).
Pass1: Remove the largest element from the bucket with the largest element; if this reduces its size below the average, then replace the removed element and remove the bucket from the balanced binary tree of "too-large buckets"; else place the element in the smallest bucket, and re-index the two modified buckets to reflect the new smallest bucket and the new "too-large bucket" with the largest element. Continue iterating until you've removed all of the "too-large buckets."
Pass2: Iterate through the "too-small buckets" from smallest to largest, and select the best-fitting elements from the largest "too-large bucket" without causing it to become a "too-small bucket;" iterate through the remaining "too-large buckets" from largest to smallest, removing the best-fitting elements from them without causing them to become "too-small buckets." Do the same for the remaining "too-small buckets." The results of this variant won't be as good as they are for the more complex variant because it won't shift buckets from the "too-large" to the "too-small" category or vice versa (hence the search space will be smaller), but this also means that it has much simpler halting conditions (simply iterate through all of the "too-small" buckets and then halt), whereas the complex variant might cause an infinite loop if you're not careful.
The idea is that by moving the largest elements in Pass1 you make it easier to more precisely match up the buckets' sizes in Pass2. You use balanced binary trees so that you can quickly re-index the buckets or the trees of buckets after removing or adding an element, but you could use linked lists instead (the balanced binary trees would have better worst-case performance but the linked lists might have better average-case performance). By performing a best-fit instead of a first-fit in Pass2 you're less likely to perform useless moves (e.g. moving a size-10 object from a bucket that's 5 greater than average into a bucket that's 5 less than average - first fit would blindly perform the move, best-fit would either query the next "too-large bucket" for a better-sized object or else would remove the "too-small bucket" from the bucket tree).
I ended up with something like this.
Sort the buckets in descending size order.
Sort each individual bucket in descending size order.
Calculate average size.
Iterate over each bucket with a size larger than average size.
Move objects in size order from those buckets to the smallest bucket until either the large bucket is smaller than average size or the target bucket reaches average size.
Ruby code example
require 'pp'

def average_size(buckets)
  (buckets.flatten.reduce(:+).to_f / buckets.count + 0.5).to_i
end

def spread_evenly(buckets)
  average = average_size(buckets)
  large_buckets = buckets.take_while {|arr| arr.reduce(:+) >= average}.to_a
  large_buckets.each do |large_bucket|
    smallest_bucket = buckets.last
    smallest_size = smallest_bucket.reduce(:+)
    large_size = large_bucket.reduce(:+)
    until (smallest_size >= average)
      break if large_size <= average
      if smallest_size + large_bucket.last > average and large_size > average
        buckets.unshift buckets.pop
        smallest_bucket = buckets.last
        smallest_size = smallest_bucket.reduce(:+)
      end
      smallest_size += smallest_object = large_bucket.pop
      large_size -= smallest_object
      smallest_bucket << smallest_object
    end
    buckets.unshift buckets.pop if smallest_size >= average
  end
  buckets
end

test_buckets = [
  [ [10, 4, 3, 3, 2, 1], [5, 5, 3, 2, 1], [3, 1, 1], [2] ],
  [ [4, 3, 3, 2, 2, 2, 2, 1, 1], [10, 5, 3, 2, 1], [3, 3, 3], [6] ],
  [ [1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1], [1, 1] ],
  [ [10, 9, 8, 7], [6, 5, 4], [3, 2], [1] ],
]

test_buckets.each do |buckets|
  puts "Before spread with average of #{average_size(buckets)}:"
  pp buckets
  result = spread_evenly(buckets)
  puts "Result and sum of each bucket:"
  pp result
  sizes = result.map {|bucket| bucket.reduce :+}
  pp sizes
  puts
end
Output:
Before spread with average of 12:
[[10, 4, 3, 3, 2, 1], [5, 5, 3, 2, 1], [3, 1, 1], [2]]
Result and sum of each bucket:
[[3, 1, 1, 4, 1, 2], [2, 1, 2, 3, 3], [10], [5, 5, 3]]
[12, 11, 10, 13]
Before spread with average of 14:
[[4, 3, 3, 2, 2, 2, 2, 1, 1], [10, 5, 3, 2, 1], [3, 3, 3], [6]]
Result and sum of each bucket:
[[3, 3, 3, 2, 3], [6, 1, 1, 2, 2, 1], [4, 3, 3, 2, 2], [10, 5]]
[14, 13, 14, 15]
Before spread with average of 4:
[[1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1], [1, 1]]
Result and sum of each bucket:
[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
[4, 4, 4, 4, 4]
Before spread with average of 14:
[[10, 9, 8, 7], [6, 5, 4], [3, 2], [1]]
Result and sum of each bucket:
[[1, 7, 9], [10], [6, 5, 4], [3, 2, 8]]
[17, 10, 15, 13]
This isn't bin packing as others have suggested. There the size of bins is fixed and you are trying to minimize the number. Here you are trying to minimize the variance among a fixed number of bins.
It turns out this is equivalent to Multiprocessor Scheduling, and - according to the reference - the algorithm below (known as "Longest Job First" or "Longest Processing Time First") is certain to produce a largest sum no more than 4/3 - 1/(3m) times optimal, where m is the number of buckets. In the test cases shown, we'd have 4/3 - 1/12 = 5/4, i.e. no more than 25% above optimal.
We just start with all bins empty, and put each item, in decreasing order of size, into the currently least full bin. We can track the least full bin efficiently with a min-heap. With a heap having O(log n) insert and delete-min, the algorithm has O(n log m) time (n and m defined as @Jonas Elfström says). Ruby is very expressive here: only 9 SLOC for the algorithm itself.
Here is the code. I am not a Ruby expert, so please feel free to suggest better ways. I am using @Jonas Elfström's test cases.
require 'algorithms'
require 'pp'

test_buckets = [
  [ [10, 4, 3, 3, 2, 1], [5, 5, 3, 2, 1], [3, 1, 1], [2] ],
  [ [4, 3, 3, 2, 2, 2, 2, 1, 1], [10, 5, 3, 2, 1], [3, 3, 3], [6] ],
  [ [1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1], [1, 1] ],
  [ [10, 9, 8, 7], [6, 5, 4], [3, 2], [1] ],
]

def relevel(buckets)
  q = Containers::PriorityQueue.new { |x, y| x < y }
  # Initially all buckets to be returned are empty and so have zero sums.
  rtn = Array.new(buckets.length) { [] }
  buckets.each_index {|i| q.push(i, 0) }
  sums = Array.new(buckets.length, 0)
  # Add to emptiest bucket in descending order.
  # Bang! ops would generate less garbage.
  buckets.flatten.sort.reverse.each do |val|
    i = q.pop                  # Get index of emptiest bucket
    rtn[i] << val              # Append current value to it
    q.push(i, sums[i] += val)  # Update sums and min heap
  end
  rtn
end

test_buckets.each {|b| pp relevel(b).map {|a| a.inject(:+) }}
Results:
[12, 11, 11, 12]
[14, 14, 14, 14]
[4, 4, 4, 4, 4]
[13, 13, 15, 14]
You could use my answer to fitting n variable height images into 3 (similar length) column layout.
Mentally map:
Object size to picture height, and
bucket count to bincount
Then the rest of that solution should apply...
The following uses the first_fit algorithm mentioned by Robin Green earlier, but then improves on it by greedy swapping.
The swapping routine finds the column that is furthest away from the average column height, then systematically looks for a swap between one of its pictures and the first picture in another column that minimizes the maximum deviation from the average.
I used a random sample of 30 pictures with heights in the range five to 50 'units'. The convergence was swift in my case and improved significantly on the first_fit algorithm.
The code (Python 3.2):
def first_fit(items, bincount=3):
    items = sorted(items, reverse=1) # New - improves first fit.
    bins = [[] for c in range(bincount)]
    binsizes = [0] * bincount
    for item in items:
        minbinindex = binsizes.index(min(binsizes))
        bins[minbinindex].append(item)
        binsizes[minbinindex] += item
    average = sum(binsizes) / float(bincount)
    maxdeviation = max(abs(average - bs) for bs in binsizes)
    return bins, binsizes, average, maxdeviation

def swap1(columns, colsize, average, margin=0):
    'See if you can do a swap to smooth the heights'
    colcount = len(columns)
    maxdeviation, i_a = max((abs(average - cs), i)
                            for i, cs in enumerate(colsize))
    col_a = columns[i_a]
    for pic_a in set(col_a): # use set as if same height then only do once
        for i_b, col_b in enumerate(columns):
            if i_a != i_b: # Not same column
                for pic_b in set(col_b):
                    if (abs(pic_a - pic_b) > margin): # Not same heights
                        # new heights if swapped
                        new_a = colsize[i_a] - pic_a + pic_b
                        new_b = colsize[i_b] - pic_b + pic_a
                        if all(abs(average - new) < maxdeviation
                               for new in (new_a, new_b)):
                            # Better to swap (in-place)
                            colsize[i_a] = new_a
                            colsize[i_b] = new_b
                            columns[i_a].remove(pic_a)
                            columns[i_a].append(pic_b)
                            columns[i_b].remove(pic_b)
                            columns[i_b].append(pic_a)
                            maxdeviation = max(abs(average - cs)
                                               for cs in colsize)
                            return True, maxdeviation
    return False, maxdeviation

def printit(columns, colsize, average, maxdeviation):
    print('columns')
    pp(columns)
    print('colsize:', colsize)
    print('average, maxdeviation:', average, maxdeviation)
    print('deviations:', [abs(average - cs) for cs in colsize])
    print()

if __name__ == '__main__':
    ## Some data
    #import random
    #heights = [random.randint(5, 50) for i in range(30)]
    ## Here's some from the above, but 'fixed'.
    from pprint import pprint as pp
    heights = [45, 7, 46, 34, 12, 12, 34, 19, 17, 41,
               28, 9, 37, 32, 30, 44, 17, 16, 44, 7,
               23, 30, 36, 5, 40, 20, 28, 42, 8, 38]
    columns, colsize, average, maxdeviation = first_fit(heights)
    printit(columns, colsize, average, maxdeviation)
    while 1:
        swapped, maxdeviation = swap1(columns, colsize, average, maxdeviation)
        printit(columns, colsize, average, maxdeviation)
        if not swapped:
            break
        #input('Paused: ')
The output:
columns
[[45, 12, 17, 28, 32, 17, 44, 5, 40, 8, 38],
[7, 34, 12, 19, 41, 30, 16, 7, 23, 36, 42],
[46, 34, 9, 37, 44, 30, 20, 28]]
colsize: [286, 267, 248]
average, maxdeviation: 267.0 19.0
deviations: [19.0, 0.0, 19.0]
columns
[[45, 12, 17, 28, 17, 44, 5, 40, 8, 38, 9],
[7, 34, 12, 19, 41, 30, 16, 7, 23, 36, 42],
[46, 34, 37, 44, 30, 20, 28, 32]]
colsize: [263, 267, 271]
average, maxdeviation: 267.0 4.0
deviations: [4.0, 0.0, 4.0]
columns
[[45, 12, 17, 17, 44, 5, 40, 8, 38, 9, 34],
[7, 34, 12, 19, 41, 30, 16, 7, 23, 36, 42],
[46, 37, 44, 30, 20, 28, 32, 28]]
colsize: [269, 267, 265]
average, maxdeviation: 267.0 2.0
deviations: [2.0, 0.0, 2.0]
columns
[[45, 12, 17, 17, 44, 5, 8, 38, 9, 34, 37],
[7, 34, 12, 19, 41, 30, 16, 7, 23, 36, 42],
[46, 44, 30, 20, 28, 32, 28, 40]]
colsize: [266, 267, 268]
average, maxdeviation: 267.0 1.0
deviations: [1.0, 0.0, 1.0]
columns
[[45, 12, 17, 17, 44, 5, 8, 38, 9, 34, 37],
[7, 34, 12, 19, 41, 30, 16, 7, 23, 36, 42],
[46, 44, 30, 20, 28, 32, 28, 40]]
colsize: [266, 267, 268]
average, maxdeviation: 267.0 1.0
deviations: [1.0, 0.0, 1.0]
Nice problem.
Here's the info on the reverse-sorting mentioned in my separate comment below.
>>> h = sorted(heights, reverse=1)
>>> h
[46, 45, 44, 44, 42, 41, 40, 38, 37, 36, 34, 34, 32, 30, 30, 28, 28, 23, 20, 19, 17, 17, 16, 12, 12, 9, 8, 7, 7, 5]
>>> columns, colsize, average, maxdeviation = first_fit(h)
>>> printit(columns, colsize, average, maxdeviation)
columns
[[46, 41, 40, 34, 30, 28, 19, 12, 12, 5],
[45, 42, 38, 36, 30, 28, 17, 16, 8, 7],
[44, 44, 37, 34, 32, 23, 20, 17, 9, 7]]
colsize: [267, 267, 267]
average, maxdeviation: 267.0 0.0
deviations: [0.0, 0.0, 0.0]
If you have the reverse-sorting, this extra code appended to the bottom of the above code (inside the if __name__ == '__main__' block, with the import random line uncommented) will do extra trials on random data:
for trial in range(2, 11):
    print('\n## Trial %i' % trial)
    heights = [random.randint(5, 50) for i in range(random.randint(5, 50))]
    print('Pictures:', len(heights))
    columns, colsize, average, maxdeviation = first_fit(heights)
    print('average %7.3f' % average, '\nmaxdeviation:')
    print('%5.2f%% = %6.3f' % ((maxdeviation * 100. / average), maxdeviation))
    swapcount = 0
    while maxdeviation:
        swapped, maxdeviation = swap1(columns, colsize, average, maxdeviation)
        if not swapped:
            break
        print('%5.2f%% = %6.3f' % ((maxdeviation * 100. / average), maxdeviation))
        swapcount += 1
    print('swaps:', swapcount)
The extra output shows the effect of the swaps:
## Trial 2
Pictures: 11
average 72.000
maxdeviation:
9.72% = 7.000
swaps: 0
## Trial 3
Pictures: 14
average 118.667
maxdeviation:
6.46% = 7.667
4.78% = 5.667
3.09% = 3.667
0.56% = 0.667
swaps: 3
## Trial 4
Pictures: 46
average 470.333
maxdeviation:
0.57% = 2.667
0.35% = 1.667
0.14% = 0.667
swaps: 2
## Trial 5
Pictures: 40
average 388.667
maxdeviation:
0.43% = 1.667
0.17% = 0.667
swaps: 1
## Trial 6
Pictures: 5
average 44.000
maxdeviation:
4.55% = 2.000
swaps: 0
## Trial 7
Pictures: 30
average 295.000
maxdeviation:
0.34% = 1.000
swaps: 0
## Trial 8
Pictures: 43
average 413.000
maxdeviation:
0.97% = 4.000
0.73% = 3.000
0.48% = 2.000
swaps: 2
## Trial 9
Pictures: 33
average 342.000
maxdeviation:
0.29% = 1.000
swaps: 0
## Trial 10
Pictures: 26
average 233.333
maxdeviation:
2.29% = 5.333
1.86% = 4.333
1.43% = 3.333
1.00% = 2.333
0.57% = 1.333
swaps: 4
Adapt the Knapsack Problem solving algorithms by, for example, specifying the "weight" of every bucket to be roughly equal to the mean of the n objects' sizes (try a Gaussian distribution around the mean value).
http://en.wikipedia.org/wiki/Knapsack_problem#Solving
Sort buckets in size order.
Move an object from the largest bucket into the smallest bucket, re-sorting the array (which is almost-sorted, so we can use "limited insertion sort" in both directions; you can also speed things up by noting where you placed the last two buckets to be sorted. If you have 6-6-6-6-6-6-5... and get one object from the first bucket, you will move it to the sixth position. Then on the next iteration you can start comparing from the fifth. The same goes, right-to-left, for the smallest buckets).
When the difference of the two buckets is one, you can stop.
This moves the minimum number of buckets, but is of order n^2 log n for comparisons (the simplest version is n^3 log n). If object moving is expensive while bucket size checking is not, for reasonable n it might still do:
12 7 5 2
11 7 5 3
10 7 5 4
9 7 5 5
8 7 6 5
7 7 6 6
12 7 3 1
11 7 3 2
10 7 3 3
9 7 4 3
8 7 4 4
7 7 5 4
7 6 5 5
6 6 6 5
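For illustration, a small sketch of the move loop traced above, treating each bucket as a plain total and moving one size unit per step (an assumption made just to keep the sketch short):

def balance_unit_moves(sizes):
    """Repeatedly move one unit from the fullest to the emptiest bucket
    until the difference between them is at most one."""
    sizes = sorted(sizes, reverse=True)
    moves = 0
    while sizes[0] - sizes[-1] > 1:
        sizes[0] -= 1
        sizes[-1] += 1
        sizes.sort(reverse=True)
        moves += 1
    return sizes, moves

print(balance_unit_moves([12, 7, 5, 2]))  # ([7, 7, 6, 6], 5)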
Another possibility would be to calculate the expected average size for every bucket, and "move along" a bag (or a further bucket) with the excess from the larger buckets to the smaller ones.
Otherwise, strange things may happen:
12 7 3 1, the average is a bit less than 6, so we take 5 as the average.
5 7 3 1 bag = 7 from 1st bucket
5 5 3 1 bag = 9
5 5 5 1 bag = 7
5 5 5 8 which is a bit unbalanced.
By taking 6 (i.e. rounding) it goes better, but again sometimes it won't work:
12 5 3 1
6 5 3 1 bag = 6 from 1st bucket
6 6 3 1 bag = 5
6 6 6 1 bag = 2
6 6 6 3 which again is unbalanced.
You can run two passes, the first with the rounded mean left-to-right, the other with the truncated mean right-to-left:
12 5 3 1 we want to get no more than 6 in each bucket
6 11 3 1
6 6 8 1
6 6 6 3
6 6 6 3 and now we want to get at least 5 in each bucket
6 6 4 5 (we have taken 2 from bucket #3 into bucket #5)
6 5 5 5 (when the difference is 1 we stop).
This will require "n log n" size checks, and no more than 2n object moves.
Another interesting possibility is to reason thus: you have m objects in n buckets, so you need an integer mapping of m onto n, and that is Bresenham's linearization algorithm. Run an (n, m) Bresenham on the sorted array, and at step i (i.e. against the i-th bucket) the algorithm will tell you whether to use round(m/n) or floor(m/n) as that bucket's size. Then move objects from or to the "moving bag" according to the i-th bucket's size.
This requires n log n comparisons.
You can further reduce the number of object moves by initially removing all buckets that are either round(m/n) or floor(m/n) in size to two pools of buckets sized R or F. When, running the algorithm, you need the i-th bucket to hold R objects, if the pool of R objects is not empty, swap the i-th bucket with one of the R-sized ones. This way, only buckets that are hopelessly under- or over-sized get balanced; (most of) the others are simply ignored, except for their references being shuffled.
If object access time is huge in proportion to computation time (e.g. some kind of automatic loader magazine), this will yield a magazine that is as balanced as possible, with the absolute minimum of overall object moves.
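A tiny sketch of that Bresenham-style mapping of m objects onto n buckets (illustrative helper name, not from the answer above):

def target_counts(m, n):
    """Bucket i gets floor(m*(i+1)/n) - floor(m*i/n) objects, so every bucket
    holds either floor(m/n) or ceil(m/n) and the counts sum to exactly m."""
    return [(m * (i + 1)) // n - (m * i) // n for i in range(n)]

print(target_counts(23, 4))  # [5, 6, 6, 6]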
You could use an Integer Programming Package if it's fast enough.
It may be tricky getting your constraints right. Something like the following may do the trick:
let variable Oij denote Object i being in Bucket j. Let Wi represent the weight or size of Oi
Constraints:
sum(Oij for all j) == 1 #each object is in only one bucket
Oij = 1 or 0. #object is either in bucket j or not in bucket j
sum(Oij * Wi for all i) <= X + R #restrict weight on buckets.
Objective:
minimize X
Note R is the relaxation constant that you can play with depending on how much movement is required and how much performance is needed.
Now the maximum bucket size is X + R
The next step is to figure out the minimum amount movement possible whilst keeping the bucket size less than X + R
Define a Stay variable Si that controls if Oi stays in bucket j
If Si is 0 it indicates that Oi stays where it was.
Constraints:
Si = 1 or 0.
Oij = 1 or 0.
Oij <= Si where j != original bucket of Object i
Oij != Si where j == original bucket of Object i
Sum(Oij for all j) == 1
Sum(Oij for all i) <= X + R
Objective:
minimize Sum(Si for all i)
Here Sum(Si for all i) represents the number of objects that have moved.
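A minimal sketch of the first stage (minimizing X) using the PuLP modelling package; the data, bucket count and relaxation constant below are made up for illustration:

import pulp

weights = [10, 4, 3, 3, 2, 1, 5, 5, 3, 2, 1, 3, 1, 1, 2]  # object sizes Wi
m, R = 4, 0                                                # bucket count and relaxation constant

prob = pulp.LpProblem("balance_buckets", pulp.LpMinimize)
O = pulp.LpVariable.dicts("O", (range(len(weights)), range(m)), cat="Binary")
X = pulp.LpVariable("X", lowBound=0)

prob += X                                                   # objective: minimize the largest bucket size
for i in range(len(weights)):
    prob += pulp.lpSum(O[i][j] for j in range(m)) == 1      # each object sits in exactly one bucket
for j in range(m):
    prob += pulp.lpSum(weights[i] * O[i][j] for i in range(len(weights))) <= X + R

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("max bucket size:", pulp.value(X))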

Riffling Cards in Mathematica

My friend posed this question to me; felt like sharing it here.
Given a deck of cards, we split it into 2 groups, and "interleave them"; let us call this operation a 'split-join'. And repeat the same operation on the resulting deck.
E.g., { 1, 2, 3, 4 } becomes { 1, 2 } & { 3, 4 } (split) and we get { 1, 3, 2, 4 } (join)
Also, if we have an odd number of cards i.e., { 1, 2, 3 } we can split it like { 1, 2 } & { 3 } (bigger-half first) leading to { 1, 3, 2 }
(i.e., n is split up as Ceil[n/2] & n-Ceil[n/2])
The question my friend asked me was:
HOW many such split-joins are needed to get the original deck back?
And that got me wondering:
If the deck has n cards, what is the number of split-joins needed if:
n is even ?
n is odd ?
n is a power of '2' ? [I found that we then need log (n) (base 2) number of split-joins...]
(Feel free to explore different scenarios like that.)
Is there a simple pattern/formula/concept correlating n and the number of split-joins required?
I believe this is a good thing to explore in Mathematica, especially since it provides the Riffle[] function.
To quote MathWorld:
The numbers of out-shuffles needed to return a deck of n=2, 4, ... to its original order are 1, 2, 4, 3, 6, 10, 12, 4, 8, 18, 6, 11, ... (Sloane's A002326), which is simply the multiplicative order of 2 (mod n-1). For example, a deck of 52 cards therefore is returned to its original state after eight out-shuffles, since 2^8 ≡ 1 (mod 51) (Golomb 1961). The smallest numbers of cards 2n that require 1, 2, 3, ... out-shuffles to return to the deck's original state are 1, 2, 4, 3, 16, 5, 64, 9, 37, 6, ... (Sloane's A114894).
The case when n is odd isn't addressed.
Note that the article also includes a Mathematica notebook with functions to explore out-shuffles.
If we have an odd number of cards n==2m-1, and if we split the cards such that during each shuffle the first group contains m cards, the second group m-1 cards, and the groups are joined such that no two cards of the same group end up next to each other, then the number of shuffles needed is equal to MultiplicativeOrder[2, n].
To show this, we note that after one shuffle the card which was at position k has moved to position 2k for 0<=k<m and to 2k-2m+1 for m<=k<2m-1, where k is such that 0<=k<2m-1. Written modulo n==2m-1 this means that the new position is Mod[2k, n] for all 0<=k<n. Therefore, for each card to return to its original position we need N shuffles where N is such that Mod[2^N k, n]==Mod[k, n] for all 0<=k<n, from which it follows that N is any multiple of MultiplicativeOrder[2, n].
Note that due to symmetry the result would have been exactly the same if we had split the deck the other way around, i.e. the first group always contains m-1 cards and the second group m cards. I don't know what would happen if you alternate, i.e. for odd shuffles the first group contains m cards, and for even shuffles m-1 cards.
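To check this numerically, here is a small Python sketch (mirroring the reasoning above, not code from the thread): simulate the split-join and compare the count with the multiplicative order of 2 modulo n for odd n, or modulo n-1 for even n.

def split_join(deck):
    """One split-join: bigger half first, then interleave."""
    m = (len(deck) + 1) // 2
    first, second = deck[:m], deck[m:]
    out = []
    for i in range(m):
        out.append(first[i])
        if i < len(second):
            out.append(second[i])
    return out

def shuffles_to_restore(n):
    deck = list(range(n))
    current, count = split_join(deck), 1
    while current != deck:
        current, count = split_join(current), count + 1
    return count

def multiplicative_order(a, modulus):
    k, x = 1, a % modulus
    while x != 1:
        x, k = (x * a) % modulus, k + 1
    return k

for n in range(3, 30):
    modulus = n if n % 2 else n - 1
    assert shuffles_to_restore(n) == multiplicative_order(2, modulus)
print("verified for n = 3 .. 29")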
There's old work by magician/mathematician Persi Diaconis about restoring the order with perfect riffle shuffles. Ian Stewart wrote about that work in one of his 1998 Scientific American Mathematical Recreations columns -- see, e.g.: http://www.whydomath.org/Reading_Room_Material/ian_stewart/shuffle/shuffle.html
Old question, I know, but it is strange that no one put up an actual Mathematica solution:
countrifflecards[deck_] := Module[{n = Length@deck, ct, rifdeck},
  ct = 0;
  rifdeck =
   Riffle @@ Partition[#, Ceiling[n/2], Ceiling[n/2], {1, 1}, {}] &;
  NestWhile[(++ct; rifdeck[#]) &, deck, #2 != deck &, 2]; ct]
This handles even and odd cases:
countrifflecards[RandomSample[ Range[#], #]] & /@ Range[2, 52, 2]
{1, 2, 4, 3, 6, 10, 12, 4, 8, 18, 6, 11, 20, 18, 28, 5, 10, 12, 36,
12, 20, 14, 12, 23, 21, 8}
countrifflecards[RandomSample[ Range[#], #]] & /@ Range[3, 53, 2]
{2, 4, 3, 6, 10, 12, 4, 8, 18, 6, 11, 20, 18, 28, 5, 10, 12, 36, 12,
20, 14, 12, 23, 21, 8, 52}
You can readily show that if you add a card in the odd case, the extra card stays on the bottom and does not change the sequence; hence the odd-case result is just the (n+1) even-case result.
ListPlot[{#, countrifflecards[RandomSample[ Range[#], #]]} & /@
  Range[2, 1000]]
