fitting n variable height images into 3 (similar length) column layout - algorithm

I'm looking to make a 3-column layout similar to that of piccsy.com. Given a number of images of the same width but varying height, what is an algorithm to order them so that the difference in column lengths is minimal? Ideally in Python or JavaScript...
Thanks a lot for your help in advance!
Martin

How many images?
If you limit the maximum page size, and have a value for the minimum picture height, you can calculate the maximum number of images per page. You would need this when evaluating any solution.
I think there were 27 pictures on the link you gave.
The following uses the first_fit algorithm mentioned by Robin Green earlier, but then improves on it by greedy swapping.
The swapping routine finds the column that is furthest from the average column height, then systematically looks for a swap between one of its pictures and the first picture in another column that minimizes the maximum deviation from the average.
I used a random sample of 30 pictures with heights in the range 5 to 50 'units'. Convergence was swift in my case and improved significantly on the first_fit algorithm.
The code (Python 3.2):
def first_fit(items, bincount=3):
    items = sorted(items, reverse=1)  # New - improves first fit.
    bins = [[] for c in range(bincount)]
    binsizes = [0] * bincount
    for item in items:
        minbinindex = binsizes.index(min(binsizes))
        bins[minbinindex].append(item)
        binsizes[minbinindex] += item
    average = sum(binsizes) / float(bincount)
    maxdeviation = max(abs(average - bs) for bs in binsizes)
    return bins, binsizes, average, maxdeviation

def swap1(columns, colsize, average, margin=0):
    'See if you can do a swap to smooth the heights'
    colcount = len(columns)
    maxdeviation, i_a = max((abs(average - cs), i)
                            for i, cs in enumerate(colsize))
    col_a = columns[i_a]
    for pic_a in set(col_a):  # use set as if same height then only do once
        for i_b, col_b in enumerate(columns):
            if i_a != i_b:  # Not same column
                for pic_b in set(col_b):
                    if abs(pic_a - pic_b) > margin:  # Not same heights
                        # new heights if swapped
                        new_a = colsize[i_a] - pic_a + pic_b
                        new_b = colsize[i_b] - pic_b + pic_a
                        if all(abs(average - new) < maxdeviation
                               for new in (new_a, new_b)):
                            # Better to swap (in-place)
                            colsize[i_a] = new_a
                            colsize[i_b] = new_b
                            columns[i_a].remove(pic_a)
                            columns[i_a].append(pic_b)
                            columns[i_b].remove(pic_b)
                            columns[i_b].append(pic_a)
                            maxdeviation = max(abs(average - cs)
                                               for cs in colsize)
                            return True, maxdeviation
    return False, maxdeviation

def printit(columns, colsize, average, maxdeviation):
    print('columns')
    pp(columns)
    print('colsize:', colsize)
    print('average, maxdeviation:', average, maxdeviation)
    print('deviations:', [abs(average - cs) for cs in colsize])
    print()

if __name__ == '__main__':
    ## Some data
    #import random
    #heights = [random.randint(5, 50) for i in range(30)]
    ## Here's some from the above, but 'fixed'.
    from pprint import pprint as pp
    heights = [45, 7, 46, 34, 12, 12, 34, 19, 17, 41,
               28, 9, 37, 32, 30, 44, 17, 16, 44, 7,
               23, 30, 36, 5, 40, 20, 28, 42, 8, 38]
    columns, colsize, average, maxdeviation = first_fit(heights)
    printit(columns, colsize, average, maxdeviation)
    while 1:
        swapped, maxdeviation = swap1(columns, colsize, average, maxdeviation)
        printit(columns, colsize, average, maxdeviation)
        if not swapped:
            break
        #input('Paused: ')
The output:
columns
[[45, 12, 17, 28, 32, 17, 44, 5, 40, 8, 38],
[7, 34, 12, 19, 41, 30, 16, 7, 23, 36, 42],
[46, 34, 9, 37, 44, 30, 20, 28]]
colsize: [286, 267, 248]
average, maxdeviation: 267.0 19.0
deviations: [19.0, 0.0, 19.0]
columns
[[45, 12, 17, 28, 17, 44, 5, 40, 8, 38, 9],
[7, 34, 12, 19, 41, 30, 16, 7, 23, 36, 42],
[46, 34, 37, 44, 30, 20, 28, 32]]
colsize: [263, 267, 271]
average, maxdeviation: 267.0 4.0
deviations: [4.0, 0.0, 4.0]
columns
[[45, 12, 17, 17, 44, 5, 40, 8, 38, 9, 34],
[7, 34, 12, 19, 41, 30, 16, 7, 23, 36, 42],
[46, 37, 44, 30, 20, 28, 32, 28]]
colsize: [269, 267, 265]
average, maxdeviation: 267.0 2.0
deviations: [2.0, 0.0, 2.0]
columns
[[45, 12, 17, 17, 44, 5, 8, 38, 9, 34, 37],
[7, 34, 12, 19, 41, 30, 16, 7, 23, 36, 42],
[46, 44, 30, 20, 28, 32, 28, 40]]
colsize: [266, 267, 268]
average, maxdeviation: 267.0 1.0
deviations: [1.0, 0.0, 1.0]
columns
[[45, 12, 17, 17, 44, 5, 8, 38, 9, 34, 37],
[7, 34, 12, 19, 41, 30, 16, 7, 23, 36, 42],
[46, 44, 30, 20, 28, 32, 28, 40]]
colsize: [266, 267, 268]
average, maxdeviation: 267.0 1.0
deviations: [1.0, 0.0, 1.0]
Nice problem.
Here's the info on reverse-sorting mentioned in my separate comment below.
>>> h = sorted(heights, reverse=1)
>>> h
[46, 45, 44, 44, 42, 41, 40, 38, 37, 36, 34, 34, 32, 30, 30, 28, 28, 23, 20, 19, 17, 17, 16, 12, 12, 9, 8, 7, 7, 5]
>>> columns, colsize, average, maxdeviation = first_fit(h)
>>> printit(columns, colsize, average, maxdeviation)
columns
[[46, 41, 40, 34, 30, 28, 19, 12, 12, 5],
[45, 42, 38, 36, 30, 28, 17, 16, 8, 7],
[44, 44, 37, 34, 32, 23, 20, 17, 9, 7]]
colsize: [267, 267, 267]
average, maxdeviation: 267.0 0.0
deviations: [0.0, 0.0, 0.0]
With the reverse-sorting in place, this extra code appended to the bottom of the above code (inside the 'if __name__ == ...' block) will run extra trials on random data (uncomment the 'import random' line first):
for trial in range(2, 11):
    print('\n## Trial %i' % trial)
    heights = [random.randint(5, 50) for i in range(random.randint(5, 50))]
    print('Pictures:', len(heights))
    columns, colsize, average, maxdeviation = first_fit(heights)
    print('average %7.3f' % average, '\nmaxdeviation:')
    print('%5.2f%% = %6.3f' % ((maxdeviation * 100. / average), maxdeviation))
    swapcount = 0
    while maxdeviation:
        swapped, maxdeviation = swap1(columns, colsize, average, maxdeviation)
        if not swapped:
            break
        print('%5.2f%% = %6.3f' % ((maxdeviation * 100. / average), maxdeviation))
        swapcount += 1
    print('swaps:', swapcount)
The extra output shows the effect of the swaps:
## Trial 2
Pictures: 11
average 72.000
maxdeviation:
9.72% = 7.000
swaps: 0
## Trial 3
Pictures: 14
average 118.667
maxdeviation:
6.46% = 7.667
4.78% = 5.667
3.09% = 3.667
0.56% = 0.667
swaps: 3
## Trial 4
Pictures: 46
average 470.333
maxdeviation:
0.57% = 2.667
0.35% = 1.667
0.14% = 0.667
swaps: 2
## Trial 5
Pictures: 40
average 388.667
maxdeviation:
0.43% = 1.667
0.17% = 0.667
swaps: 1
## Trial 6
Pictures: 5
average 44.000
maxdeviation:
4.55% = 2.000
swaps: 0
## Trial 7
Pictures: 30
average 295.000
maxdeviation:
0.34% = 1.000
swaps: 0
## Trial 8
Pictures: 43
average 413.000
maxdeviation:
0.97% = 4.000
0.73% = 3.000
0.48% = 2.000
swaps: 2
## Trial 9
Pictures: 33
average 342.000
maxdeviation:
0.29% = 1.000
swaps: 0
## Trial 10
Pictures: 26
average 233.333
maxdeviation:
2.29% = 5.333
1.86% = 4.333
1.43% = 3.333
1.00% = 2.333
0.57% = 1.333
swaps: 4

This is the offline makespan minimisation problem, which I think is equivalent to the multiprocessor scheduling problem. Instead of jobs you have images, and instead of job durations you have image heights, but it's exactly the same problem. (The fact that it involves space instead of time doesn't matter.) So any algorithm that (approximately) solves either of them will do.

Here's an algorithm (called First Fit Decreasing) that will get you a very compact arrangement, in a reasonable amount of time. There may be a better algorithm but this is ridiculously simple.
1. Sort the images in order from tallest to shortest.
2. Take the next image and place it in the shortest column. (If multiple columns are tied for shortest, pick any one.)
3. Repeat step 2 until no images remain.
When you're done, you can re-arrange the elements in each column however you choose if you don't like the tallest-to-shortest look.
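For illustration, here is a minimal Python sketch of those three steps (the function name is mine, not from the answer); a min-heap keeps the "find the shortest column" step fast:
import heapq

def first_fit_decreasing(heights, ncols=3):
    # The heap holds (column height, column index); its top is the shortest column.
    heap = [(0, c) for c in range(ncols)]
    columns = [[] for _ in range(ncols)]
    for h in sorted(heights, reverse=True):   # tallest to shortest
        total, c = heapq.heappop(heap)        # currently shortest column
        columns[c].append(h)
        heapq.heappush(heap, (total + h, c))
    return columns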

Here's one:
// Create initial solution
<run First Fit Decreasing algorithm first>

// Calculate "error", i.e. maximum height difference after running FFD
err = (maximum_height - minimum_height)
minerr = err

// Run simple greedy optimization and random search
repeat for a number of steps:  // e.g. 1000 steps
    <find any two random images a and b from two different columns such that
     swapping a and b decreases the error>
    if <found>:
        swap a and b
        err = (maximum_height - minimum_height)
        if (err < minerr):
            <store as best solution so far>  // X
    else:
        swap two random images from two columns
        err = (maximum_height - minimum_height)

<output the best solution stored on line marked with X>
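A rough Python rendering of that loop (a sketch with names of my own choosing; it starts from any initial layout such as the FFD result, and as a simplification it accepts every random swap while remembering the best layout seen, which covers both the greedy and the random-search branches of the pseudocode):
import random

def improve_by_swaps(cols, steps=1000):
    # cols: initial column layout (lists of image heights), e.g. from FFD.
    def err(cs):
        totals = [sum(col) for col in cs]
        return max(totals) - min(totals)      # maximum height difference

    best, minerr = [c[:] for c in cols], err(cols)
    for _ in range(steps):
        a, b = random.sample(range(len(cols)), 2)   # two different columns
        if not cols[a] or not cols[b]:
            continue
        i = random.randrange(len(cols[a]))
        j = random.randrange(len(cols[b]))
        cols[a][i], cols[b][j] = cols[b][j], cols[a][i]   # swap two images
        if err(cols) < minerr:                # the line marked with X
            best, minerr = [c[:] for c in cols], err(cols)
    return best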

Related

How many comparisons needed in binary search of this array?

We have the following array:
[4, 13, 25, 33, 38, 41, 55, 71, 73, 84, 86, 92, 97]
To me it seems like only 3 comparisons are needed to find 25, because:
First we pick the middle element, 55. Now we perform two comparisons: 55 = 25? No. 55 > 25? Yes, so we go to the left half of the array and get the subarray [4, 13, 25, 33, 38, 41].
We divide this again and test the middle element: 25 = 25? Yes. So it took 3 comparisons to get our match. My book says four comparisons are needed to find 25. Why is this?
As the size of the left subarray is even, an implementation could select either of the two middle numbers. Hence the search could proceed as follows, with 4 comparisons:
[4, 13, 25, 33, 38, 41, 55, 71, 73, 84, 86, 92, 97]
25 < 55 => [4, 13, 25, 33, 38, 41]
25 < 33 => [4, 13, 25]
25 > 13 => [25]
25 == 25 => Found.
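To see where the four comes from concretely, here is a small Python sketch that counts one three-way comparison per probe and, like the trace above, picks the upper middle element of an even-length range (an assumption about the book's convention):
def binary_search_probes(arr, target):
    lo, hi, probes = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi + 1) // 2     # upper middle on even-length ranges
        probes += 1                  # one three-way comparison per probe
        if arr[mid] == target:
            return mid, probes
        if arr[mid] > target:
            hi = mid - 1
        else:
            lo = mid + 1
    return -1, probes

binary_search_probes([4, 13, 25, 33, 38, 41, 55, 71, 73, 84, 86, 92, 97], 25)
# => (2, 4)   probes 55, 33, 13, 25: exactly the four comparisons above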

Looking for an algorithm for a perfect "snake" from the center of a field?

I'm looking for a piece of code that goes from the middle, in a "circle" way, slowly out to the edges of a rectangle, and when it reaches the boundary on one side, just skips those pixels.
I already tried some crazy for-loop adventures, but that was too much code.
Does anyone have an idea for a simple/ingenious way?
It's like starting the game Snake from the center until the full field is used. I'll use this to scan a picture from the middle, to find the first pixel next to the center in another color.
From this link. Requires numpy and Python of course.
import numpy as np

a = np.arange(7*7).reshape(7,7)

def spiral_ccw(A):
    A = np.array(A)
    out = []
    while A.size:
        out.append(A[0][::-1])    # first row reversed
        A = A[1:][::-1].T         # cut off first row and rotate clockwise
    return np.concatenate(out)

def base_spiral(nrow, ncol):
    return spiral_ccw(np.arange(nrow*ncol).reshape(nrow, ncol))[::-1]

def to_spiral(A):
    A = np.array(A)
    B = np.empty_like(A)
    B.flat[base_spiral(*A.shape)] = A.flat
    return B

to_spiral(a)
array([[42, 43, 44, 45, 46, 47, 48],
       [41, 20, 21, 22, 23, 24, 25],
       [40, 19,  6,  7,  8,  9, 26],
       [39, 18,  5,  0,  1, 10, 27],
       [38, 17,  4,  3,  2, 11, 28],
       [37, 16, 15, 14, 13, 12, 29],
       [36, 35, 34, 33, 32, 31, 30]])
What do you think about running from the edge to the center? It's really easy to code: just start from (0, 0) and, if you hit an edge or a pixel you've already visited, turn right 90 degrees.
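That suggestion is easy to sketch in plain Python (my own function, not the commenter's code); reversing the visit order then gives the centre-outwards scan the question asks for:
def spiral_from_corner(nrow, ncol):
    visited = [[False] * ncol for _ in range(nrow)]
    r, c = 0, 0
    dr, dc = 0, 1                    # start at (0, 0) heading right
    order = []
    for _ in range(nrow * ncol):
        order.append((r, c))
        visited[r][c] = True
        nr, nc = r + dr, c + dc
        if not (0 <= nr < nrow and 0 <= nc < ncol) or visited[nr][nc]:
            dr, dc = dc, -dr         # turn right 90 degrees
            nr, nc = r + dr, c + dc
        r, c = nr, nc
    return order                     # reversed(order) walks out from the centre

spiral_from_corner(3, 3)
# => [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0), (1, 1)]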

Algorithm - longest wiggle subsequence

Algorithm:
A sequence of numbers is called a wiggle sequence if the differences
between successive numbers strictly alternate between positive and
negative. The first difference (if one exists) may be either positive
or negative. A sequence with fewer than two elements is trivially a
wiggle sequence.
For example, [1,7,4,9,2,5] is a wiggle sequence because the
differences (6,-3,5,-7,3) are alternately positive and negative. In
contrast, [1,4,7,2,5] and [1,7,4,5,5] are not wiggle sequences, the
first because its first two differences are positive and the second
because its last difference is zero.
Given a sequence of integers, return the length of the longest
subsequence that is a wiggle sequence. A subsequence is obtained by
deleting some number of elements (possibly zero) from the original
sequence, leaving the remaining elements in their original order.
Examples:
Input: [1,7,4,9,2,5]
Output: 6
The entire sequence is a wiggle sequence.
Input: [1,17,5,10,13,15,10,5,16,8]
Output: 7
There are several subsequences that achieve this length. One is [1,17,10,13,10,16,8].
Input: [1,2,3,4,5,6,7,8,9]
Output: 2
My solution:
def wiggle_max_length(nums)
  [ build_seq(nums, 0, 0, true, -1.0/0.0),
    build_seq(nums, 0, 0, false, 1.0/0.0)
  ].max
end

def build_seq(nums, index, len, wiggle_up, prev)
  return len if index >= nums.length
  if wiggle_up && nums[index] - prev > 0 || !wiggle_up && nums[index] - prev < 0
    build_seq(nums, index + 1, len + 1, !wiggle_up, nums[index])
  else
    build_seq(nums, index + 1, len, wiggle_up, prev)
  end
end
This works for smaller inputs (e.g. [1,1,1,3,2,4,1,6,3,10,8]) and for all the sample inputs, but it fails for very large inputs (which are harder to debug), like:
[33,53,12,64,50,41,45,21,97,35,47,92,39,0,93,55,40,46,69,42,6,95,51,68,72,9,32,84,34,64,6,2,26,98,3,43,30,60,3,68,82,9,97,19,27,98,99,4,30,96,37,9,78,43,64,4,65,30,84,90,87,64,18,50,60,1,40,32,48,50,76,100,57,29,63,53,46,57,93,98,42,80,82,9,41,55,69,84,82,79,30,79,18,97,67,23,52,38,74,15]
which should output 67, but my solution outputs 57. Does anyone know what is wrong here?
The approach tried is greedy (it always uses the current element if it satisfies the wiggle condition), but a greedy choice does not always work here.
I will try illustrating this with a simpler counter-example: 1 100 99 6 7 4 5 2 3.
One best subsequence is 1 100 6 7 4 5 2 3, but the two build_seq calls from the algorithm will produce these sequences:
1 100 99
1
Edit: A slightly modified greedy approach does work -- see this link, thanks Peter de Rivaz.
Dynamic Programming can be used to obtain an optimal solution.
Note: I wrote this before seeing the article mentioned by @PeterdeRivaz. While dynamic programming (O(n^2)) works, the article presents a superior O(n) "greedy" algorithm ("Approach #5"), which is also far easier to code than a dynamic programming solution. I have added a second answer that implements that method.
Code
def longest_wiggle(arr)
  best = [{ pos_diff: { length: 0, prev_ndx: nil },
            neg_diff: { length: 0, prev_ndx: nil } }]
  (1..arr.size-1).each do |i|
    calc_best(arr, i, :pos_diff, best)
    calc_best(arr, i, :neg_diff, best)
  end
  unpack_best(best)
end

def calc_best(arr, i, diff, best)
  curr = arr[i]
  prev_indices = (0..i-1).select { |j|
    (diff==:pos_diff) ? (arr[j] < curr) : (arr[j] > curr) }
  best[i] = {} if best.size == i
  best[i][diff] =
    if prev_indices.empty?
      { length: 0, prev_ndx: nil }
    else
      prev_diff = previous_diff(diff)
      j = prev_indices.max_by { |j| best[j][prev_diff][:length] }
      { length: (1 + best[j][prev_diff][:length]), prev_ndx: j }
    end
end

def previous_diff(diff)
  diff==:pos_diff ? :neg_diff : :pos_diff
end

def unpack_best(best)
  last_idx, last_diff =
    best.size.times.to_a.product([:pos_diff, :neg_diff]).
         max_by { |i,diff| best[i][diff][:length] }
  return [0, []] if best[last_idx][last_diff][:length].zero?
  best_path = []
  loop do
    best_path.unshift(last_idx)
    prev_index = best[last_idx][last_diff][:prev_ndx]
    break if prev_index.nil?
    last_idx = prev_index
    last_diff = previous_diff(last_diff)
  end
  best_path
end
Examples
longest_wiggle([1, 4, 2, 6, 8, 3, 2, 5])
#=> [0, 1, 2, 3, 5, 7]
The longest wiggle has length 6 and consists of the elements at indices 0, 1, 2, 3, 5 and 7, that is, [1, 4, 2, 6, 3, 5].
A second example uses the larger array given in the question.
arr = [33, 53, 12, 64, 50, 41, 45, 21, 97, 35, 47, 92, 39, 0, 93, 55, 40, 46,
       69, 42, 6, 95, 51, 68, 72, 9, 32, 84, 34, 64, 6, 2, 26, 98, 3, 43, 30,
       60, 3, 68, 82, 9, 97, 19, 27, 98, 99, 4, 30, 96, 37, 9, 78, 43, 64, 4,
       65, 30, 84, 90, 87, 64, 18, 50, 60, 1, 40, 32, 48, 50, 76, 100, 57, 29,
       63, 53, 46, 57, 93, 98, 42, 80, 82, 9, 41, 55, 69, 84, 82, 79, 30, 79,
       18, 97, 67, 23, 52, 38, 74, 15]
arr.size
#=> 100
longest_wiggle(arr).size
#=> 67
longest_wiggle(arr)
#=> [0, 1, 2, 3, 5, 6, 7, 8, 9, 10, 12, 14, 16, 17, 19, 21, 22, 23, 25,
# 27, 28, 29, 30, 32, 34, 35, 36, 37, 38, 39, 41, 42, 43, 44, 47, 49, 50,
# 52, 53, 54, 55, 56, 57, 58, 62, 63, 65, 66, 67, 70, 72, 74, 75, 77, 80,
# 81, 83, 84, 90, 91, 92, 93, 95, 96, 97, 98, 99]
As indicated, the longest wiggle comprises 67 elements of arr. The solution time was essentially instantaneous.
The values of arr at those indices are as follows.
[33, 53, 12, 64, 41, 45, 21, 97, 35, 47, 39, 93, 40, 46, 42, 95, 51, 68, 9,
 84, 34, 64, 6, 26, 3, 43, 30, 60, 3, 68, 9, 97, 19, 27, 4, 96, 37, 78, 43,
 64, 4, 65, 30, 84, 18, 50, 1, 40, 32, 76, 57, 63, 53, 57, 42, 80, 9, 41, 30,
 79, 18, 97, 23, 52, 38, 74, 15]
Explanation
I had intended to provide an explanation of the algorithm and its implementation, but having since learned there is a superior approach (see my note at the beginning of my answer), I have decided against doing that, but would of course be happy to answer any questions. The link in my note explains, among other things, how dynamic programming can be used here.
Let Wp[i] be the longest wiggle sequence starting at element i, and where the first difference is positive. Let Wn[i] be the same, but where the first difference is negative.
Then:
Wp[k] = max(1 + Wn[k'] for k < k' < n where A[k'] > A[k]), or 1 if no such k' exists
Wn[k] = max(1 + Wp[k'] for k < k' < n where A[k'] < A[k]), or 1 if no such k' exists
This gives an O(n^2) dynamic programming solution, here in pseudocode:
Wp = [1, 1, ..., 1]  -- length n
Wn = [1, 1, ..., 1]  -- length n
for k = n-1, n-2, ..., 0:
    for k' = k+1, k+2, ..., n-1:
        if A[k'] > A[k]:
            Wp[k] = max(Wp[k], Wn[k'] + 1)
        else if A[k'] < A[k]:
            Wn[k] = max(Wn[k], Wp[k'] + 1)
result = max(max(Wp[i], Wn[i]) for i = 0, 1, ..., n-1)
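For concreteness, that pseudocode translates almost line for line into Python (a sketch, separate from the Ruby implementation above):
def longest_wiggle_dp(A):
    # Wp[k]/Wn[k]: longest wiggle starting at k whose first step is up/down.
    n = len(A)
    Wp, Wn = [1] * n, [1] * n
    for k in range(n - 2, -1, -1):
        for k2 in range(k + 1, n):
            if A[k2] > A[k]:
                Wp[k] = max(Wp[k], Wn[k2] + 1)
            elif A[k2] < A[k]:
                Wn[k] = max(Wn[k], Wp[k2] + 1)
    return max(Wp + Wn) if n else 0

longest_wiggle_dp([1, 17, 5, 10, 13, 15, 10, 5, 16, 8])
# => 7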
In a comment on @quertyman's answer, @PeterdeRivaz provided a link to an article that considers various approaches to solving the "longest wiggle subsequence" problem. I have implemented "Approach #5", which has a time complexity of O(n).
The algorithm is simple as well as fast. The first step is to remove one element from each pair of consecutive equal elements, and to continue doing so until no two consecutive elements are equal. For example, [1,2,2,2,3,4,4] would be converted to [1,2,3,4]. The longest wiggle subsequence includes the first and last elements of the resulting array a, and every element a[i], 0 < i < a.size-1, for which a[i-1] < a[i] > a[i+1] or a[i-1] > a[i] < a[i+1]. In other words, it includes the first and last elements and all peaks and valley bottoms (in the article's graph, those elements are A, D, E, G, H and I).
Code
def longest_wiggle(arr)
  arr.each_cons(2).
      reject { |a,b| a==b }.
      map(&:first).
      push(arr.last).
      each_cons(3).
      select { |triple| [triple.min, triple.max].include? triple[1] }.
      map { |_,n,_| n }.
      unshift(arr.first).
      push(arr.last)
end
Example
arr = [33, 53, 12, 64, 50, 41, 45, 21, 97, 35, 47, 92, 39, 0, 93, 55, 40,
       46, 69, 42, 6, 95, 51, 68, 72, 9, 32, 84, 34, 64, 6, 2, 26, 98, 3,
       43, 30, 60, 3, 68, 82, 9, 97, 19, 27, 98, 99, 4, 30, 96, 37, 9, 78,
       43, 64, 4, 65, 30, 84, 90, 87, 64, 18, 50, 60, 1, 40, 32, 48, 50, 76,
       100, 57, 29, 63, 53, 46, 57, 93, 98, 42, 80, 82, 9, 41, 55, 69, 84,
       82, 79, 30, 79, 18, 97, 67, 23, 52, 38, 74, 15]
a = longest_wiggle(arr)
#=> [33, 53, 12, 64, 41, 45, 21, 97, 35, 92, 0, 93, 40, 69, 6, 95, 51, 72,
#    9, 84, 34, 64, 2, 98, 3, 43, 30, 60, 3, 82, 9, 97, 19, 99, 4, 96, 9,
#    78, 43, 64, 4, 65, 30, 90, 18, 60, 1, 40, 32, 100, 29, 63, 46, 98, 42,
#    82, 9, 84, 30, 79, 18, 97, 23, 52, 38, 74, 15]
a.size
#=> 67
Explanation
The steps are as follows.
arr = [3, 4, 4, 5, 2, 3, 7, 4]
enum1 = arr.each_cons(2)
#=> #<Enumerator: [3, 4, 4, 5, 2, 3, 7, 4]:each_cons(2)>
We can see the elements that will be generated by this enumerator by converting it to an array.
enum1.to_a
#=> [[3, 4], [4, 4], [4, 5], [5, 2], [2, 3], [3, 7], [7, 4]]
Continuing, remove all but one of each group of successive equal elements.
d = enum1.reject { |a,b| a==b }
#=> [[3, 4], [4, 5], [5, 2], [2, 3], [3, 7], [7, 4]]
e = d.map(&:first)
#=> [3, 4, 5, 2, 3, 7]
Add the last element.
f = e.push(arr.last)
#=> [3, 4, 5, 2, 3, 7, 4]
Next, find the peaks and valley bottoms.
enum2 = f.each_cons(3)
#=> #<Enumerator: [3, 4, 5, 2, 3, 7, 4]:each_cons(3)>
enum2.to_a
#=> [[3, 4, 5], [4, 5, 2], [5, 2, 3], [2, 3, 7], [3, 7, 4]]
g = enum2.select { |triple| [triple.min, triple.max].include? triple[1] }
#=> [[4, 5, 2], [5, 2, 3], [3, 7, 4]]
h = g.map { |_,n,_| n }
#=> [5, 2, 7]
Lastly, add the first and last values of arr.
i = h.unshift(arr.first)
#=> [3, 5, 2, 7]
i.push(arr.last)
#=> [3, 5, 2, 7, 4]
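The same peaks-and-valleys idea is also tiny in Python, for anyone following along outside Ruby (a sketch that returns only the length, not the path):
def wiggle_max_length(nums):
    if not nums:
        return 0
    length, prev_sign = 1, 0
    for a, b in zip(nums, nums[1:]):
        sign = (b > a) - (b < a)     # +1 up, -1 down, 0 for an equal pair
        if sign != 0 and sign != prev_sign:
            length += 1              # a new peak or valley begins here
            prev_sign = sign
    return length

wiggle_max_length([1, 17, 5, 10, 13, 15, 10, 5, 16, 8])
# => 7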

Make a square multiplication table in Ruby

I got this question in an interview and got almost all the way to the answer but got stuck on the last part. If I want to get the multiplication table for 5, for instance, I want to get the output to be formatted like so:
1, 2, 3, 4, 5
2, 4, 6, 8, 10
3, 6, 9, 12, 15
4, 8, 12, 16, 20
5, 10, 15, 20, 25
My answer to this is:
def make_table(n)
  s = ""
  1.upto(n).each do |i|
    1.upto(n).each do |j|
      s += (i*j).to_s
    end
    s += "\n"
  end
  p s
end
But the output for make_table(5) is:
"12345\n246810\n3691215\n48121620\n510152025\n"
I've tried variations with array but I'm getting similar output.
What am I missing or how should I think about the last part of the problem?
You can use map and join to get a String in one line:
n = 5
puts (1..n).map { |x| (1..n).map { |y| x * y }.join(', ') }.join("\n")
It iterates over rows (x=1, x=2, ...). For each row, it iterates over cells (y=1, y=2, ...) and calculates x*y. It joins the cells of each row with ', ', and joins the rows of the table with a newline:
1, 2, 3, 4, 5
2, 4, 6, 8, 10
3, 6, 9, 12, 15
4, 8, 12, 16, 20
5, 10, 15, 20, 25
If you want to keep the commas aligned, you can use rjust:
puts (1..n).map { |x| (1..n).map { |y| (x * y).to_s.rjust(3) }.join(',') }.join("\n")
It outputs:
  1,  2,  3,  4,  5
  2,  4,  6,  8, 10
  3,  6,  9, 12, 15
  4,  8, 12, 16, 20
  5, 10, 15, 20, 25
You could even go fancy and calculate the width of n**2 before aligning commas:
n = 11
width = Math.log10(n**2).ceil + 1
puts (1..n).map { |x| (1..n).map { |y| (x * y).to_s.rjust(width) }.join(',') }.join("\n")
It outputs:
   1,   2,   3,   4,   5,   6,   7,   8,   9,  10,  11
   2,   4,   6,   8,  10,  12,  14,  16,  18,  20,  22
   3,   6,   9,  12,  15,  18,  21,  24,  27,  30,  33
   4,   8,  12,  16,  20,  24,  28,  32,  36,  40,  44
   5,  10,  15,  20,  25,  30,  35,  40,  45,  50,  55
   6,  12,  18,  24,  30,  36,  42,  48,  54,  60,  66
   7,  14,  21,  28,  35,  42,  49,  56,  63,  70,  77
   8,  16,  24,  32,  40,  48,  56,  64,  72,  80,  88
   9,  18,  27,  36,  45,  54,  63,  72,  81,  90,  99
  10,  20,  30,  40,  50,  60,  70,  80,  90, 100, 110
  11,  22,  33,  44,  55,  66,  77,  88,  99, 110, 121
Without spaces between the figures, the result is indeed unreadable. Have a look at the % operator, which formats strings and numbers. Instead of
s += (i*j).to_s
you could write
s += '%3d' % (i*j)
If you really want the output formatted the way you showed in your posting (which I don't find that readable), you could do a
s += "#{i*j}, "
This leaves you with two extra characters at the end of the line, which you have to remove. An alternative would be to use an array. Instead of the inner loop, you would then have something like
s += 1.upto(n).to_a.map {|j| i*j}.join(', ') + "\n"
You don't need to construct a string if you're only interested in printing the table and not returning it (as a string).
(1..n).each do |a|
  (1..n-1).each { |b| print "#{a * b}, " }
  puts a * n
end
This is how I'd do it.
require 'matrix'
n = 5
puts Matrix.build(n) { |i,j| (i+1)*(j+1) }.to_a.map { |row| row.join(', ') }
1, 2, 3, 4, 5
2, 4, 6, 8, 10
3, 6, 9, 12, 15
4, 8, 12, 16, 20
5, 10, 15, 20, 25
See Matrix::build.
You can make it much shorter but here's my version.
range = Array(1..12)
range.each do |element|
  range.map { |item| print "#{element * item} " } && puts
end

Spread objects evenly over multiple collections

The scenario is that there are n objects, of different sizes, unevenly spread over m buckets. The size of a bucket is the sum of all of the object sizes that it contains. It now happens that the sizes of the buckets are varying wildly.
What would be a good algorithm if I want to spread those objects evenly over those buckets so that the total size of each bucket would be about the same? It would be nice if the algorithm leaned towards less move size over a perfectly even spread.
I have this naïve, ineffective, and buggy solution in Ruby.
buckets = [ [10, 4, 3, 3, 2, 1], [5, 5, 3, 2, 1], [3, 1, 1], [2] ]
avg_size = buckets.flatten.reduce(:+) / buckets.count + 1
large_buckets = buckets.take_while {|arr| arr.reduce(:+) >= avg_size}.to_a
large_buckets.each do |large|
  smallest = buckets.last
  until ((small_sum = smallest.reduce(:+)) >= avg_size)
    break if small_sum + large.last >= avg_size
    smallest << large.pop
  end
  buckets.insert(0, buckets.pop)
end
=> [[3, 1, 1, 1, 2, 3], [2, 1, 2, 3, 3], [10, 4], [5, 5]]
I believe this is a variant of the bin packing problem, and as such it is NP-hard. Your answer is essentially a variant of the first fit decreasing heuristic, which is a pretty good heuristic. That said, I believe that the following will give better results.
Sort each individual bucket in descending size order, using a balanced binary tree.
Calculate average size.
Sort the buckets with size less than average (the "too-small buckets") in descending size order, using a balanced binary tree.
Sort the buckets with size greater than average (the "too-large buckets") in order of the size of their greatest elements, using a balanced binary tree (so the bucket with {9, 1} would come first and the bucket with {8, 5} would come second).
Pass1: Remove the largest element from the bucket with the largest element; if this reduces its size below the average, then replace the removed element and remove the bucket from the balanced binary tree of "too-large buckets"; else place the element in the smallest bucket, and re-index the two modified buckets to reflect the new smallest bucket and the new "too-large bucket" with the largest element. Continue iterating until you've removed all of the "too-large buckets."
Pass2: Iterate through the "too-small buckets" from smallest to largest, and select the best-fitting elements from the largest "too-large bucket" without causing it to become a "too-small bucket;" iterate through the remaining "too-large buckets" from largest to smallest, removing the best-fitting elements from them without causing them to become "too-small buckets." Do the same for the remaining "too-small buckets." The results of this variant won't be as good as they are for the more complex variant because it won't shift buckets from the "too-large" to the "too-small" category or vice versa (hence the search space will be smaller), but this also means that it has much simpler halting conditions (simply iterate through all of the "too-small" buckets and then halt), whereas the complex variant might cause an infinite loop if you're not careful.
The idea is that by moving the largest elements in Pass1 you make it easier to more precisely match up the buckets' sizes in Pass2. You use balanced binary trees so that you can quickly re-index the buckets or the trees of buckets after removing or adding an element, but you could use linked lists instead (the balanced binary trees would have better worst-case performance but the linked lists might have better average-case performance). By performing a best-fit instead of a first-fit in Pass2 you're less likely to perform useless moves (e.g. moving a size-10 object from a bucket that's 5 greater than average into a bucket that's 5 less than average; first-fit would blindly perform the move, best-fit would either query the next "too-large bucket" for a better-sized object or else would remove the "too-small bucket" from the bucket tree).
I ended up with something like this.
Sort the buckets in descending size order.
Sort each individual bucket in descending size order.
Calculate average size.
Iterate over each bucket with a size larger than average size.
Move objects in size order from those buckets to the smallest bucket until either the large bucket is smaller than average size or the target bucket reaches average size.
Ruby code example
require 'pp'

def average_size(buckets)
  (buckets.flatten.reduce(:+).to_f / buckets.count + 0.5).to_i
end

def spread_evenly(buckets)
  average = average_size(buckets)
  large_buckets = buckets.take_while {|arr| arr.reduce(:+) >= average}.to_a
  large_buckets.each do |large_bucket|
    smallest_bucket = buckets.last
    smallest_size = smallest_bucket.reduce(:+)
    large_size = large_bucket.reduce(:+)
    until (smallest_size >= average)
      break if large_size <= average
      if smallest_size + large_bucket.last > average and large_size > average
        buckets.unshift buckets.pop
        smallest_bucket = buckets.last
        smallest_size = smallest_bucket.reduce(:+)
      end
      smallest_size += smallest_object = large_bucket.pop
      large_size -= smallest_object
      smallest_bucket << smallest_object
    end
    buckets.unshift buckets.pop if smallest_size >= average
  end
  buckets
end

test_buckets = [
  [ [10, 4, 3, 3, 2, 1], [5, 5, 3, 2, 1], [3, 1, 1], [2] ],
  [ [4, 3, 3, 2, 2, 2, 2, 1, 1], [10, 5, 3, 2, 1], [3, 3, 3], [6] ],
  [ [1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1], [1, 1] ],
  [ [10, 9, 8, 7], [6, 5, 4], [3, 2], [1] ],
]

test_buckets.each do |buckets|
  puts "Before spread with average of #{average_size(buckets)}:"
  pp buckets
  result = spread_evenly(buckets)
  puts "Result and sum of each bucket:"
  pp result
  sizes = result.map {|bucket| bucket.reduce :+}
  pp sizes
  puts
end
Output:
Before spread with average of 12:
[[10, 4, 3, 3, 2, 1], [5, 5, 3, 2, 1], [3, 1, 1], [2]]
Result and sum of each bucket:
[[3, 1, 1, 4, 1, 2], [2, 1, 2, 3, 3], [10], [5, 5, 3]]
[12, 11, 10, 13]
Before spread with average of 14:
[[4, 3, 3, 2, 2, 2, 2, 1, 1], [10, 5, 3, 2, 1], [3, 3, 3], [6]]
Result and sum of each bucket:
[[3, 3, 3, 2, 3], [6, 1, 1, 2, 2, 1], [4, 3, 3, 2, 2], [10, 5]]
[14, 13, 14, 15]
Before spread with average of 4:
[[1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1], [1, 1]]
Result and sum of each bucket:
[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
[4, 4, 4, 4, 4]
Before spread with average of 14:
[[10, 9, 8, 7], [6, 5, 4], [3, 2], [1]]
Result and sum of each bucket:
[[1, 7, 9], [10], [6, 5, 4], [3, 2, 8]]
[17, 10, 15, 13]
This isn't bin packing as others have suggested. There the size of bins is fixed and you are trying to minimize the number. Here you are trying to minimize the variance among a fixed number of bins.
It turns out this is equivalent to Multiprocessor Scheduling, and, according to the reference, the algorithm below (known as "Longest Job First" or "Longest Processing Time First") is certain to produce a largest sum no more than 4/3 - 1/(3m) times optimal, where m is the number of buckets. In the test cases shown, we'd have 4/3 - 1/12 = 5/4, or no more than 25% above optimal.
We just start with all bins empty and put each item, in decreasing order of size, into the currently least full bin. We can track the least full bin efficiently with a min-heap. With a heap having O(log n) insert and delete-min, the algorithm runs in O(n log m) time (n and m defined as @Jonas Elfström says). Ruby is very expressive here: only 9 SLOC for the algorithm itself.
Here is the code. I am not a Ruby expert, so please feel free to suggest better ways. I am using @Jonas Elfström's test cases.
require 'algorithms'
require 'pp'

test_buckets = [
  [ [10, 4, 3, 3, 2, 1], [5, 5, 3, 2, 1], [3, 1, 1], [2] ],
  [ [4, 3, 3, 2, 2, 2, 2, 1, 1], [10, 5, 3, 2, 1], [3, 3, 3], [6] ],
  [ [1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1], [1, 1] ],
  [ [10, 9, 8, 7], [6, 5, 4], [3, 2], [1] ],
]

def relevel(buckets)
  q = Containers::PriorityQueue.new { |x, y| x < y }
  # Initially all buckets to be returned are empty and so have zero sums.
  rtn = Array.new(buckets.length) { [] }
  buckets.each_index {|i| q.push(i, 0) }
  sums = Array.new(buckets.length, 0)
  # Add to emptiest bucket in descending order.
  # Bang! ops would generate less garbage.
  buckets.flatten.sort.reverse.each do |val|
    i = q.pop                 # Get index of emptiest bucket
    rtn[i] << val             # Append current value to it
    q.push(i, sums[i] += val) # Update sums and min heap
  end
  rtn
end

test_buckets.each {|b| pp relevel(b).map {|a| a.inject(:+) }}
Results:
[12, 11, 11, 12]
[14, 14, 14, 14]
[4, 4, 4, 4, 4]
[13, 13, 15, 14]
You could use my answer to fitting n variable height images into 3 (similar length) column layout.
Mentally map:
Object size to picture height, and
bucket count to bincount
Then the rest of that solution should apply...
Adapt the Knapsack Problem solving algorithms by, for example, specifying the "weight" of every bucket to be roughly equal to the mean of the n objects' sizes (try a Gaussian distribution around the mean value).
http://en.wikipedia.org/wiki/Knapsack_problem#Solving
Sort buckets in size order.
Move an object from the largest bucket into the smallest bucket, re-sorting the array (which is almost sorted, so we can use "limited insertion sort" in both directions; you can also speed things up by noting where you placed the last two buckets to be sorted: if you have 6-6-6-6-6-6-5... and take one object from the first bucket, you will move it to the sixth position, so on the next iteration you can start comparing from the fifth. The same goes, right to left, for the smallest buckets).
When the difference between the largest and smallest bucket is one, you can stop.
This moves the minimum number of objects, but is of order n^2 log n in comparisons (the simplest version is n^3 log n). If moving objects is expensive while checking bucket sizes is not, for reasonable n it might still do:
12 7 5 2
11 7 5 3
10 7 5 4
9 7 5 5
8 7 6 5
7 7 6 6

12 7 3 1
11 7 3 2
10 7 3 3
9 7 4 3
8 7 4 4
7 7 5 4
7 6 5 5
6 6 6 5
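A toy Python sketch of that loop, treating every object as size 1 as in the traces above (so "move an object" just shifts one unit from the largest to the smallest bucket):
def equalize(sizes):
    sizes = sorted(sizes, reverse=True)
    moves = 0
    while sizes[0] - sizes[-1] > 1:  # stop when the difference is one
        sizes[0] -= 1                # take a unit object from the largest...
        sizes[-1] += 1               # ...and drop it into the smallest
        sizes.sort(reverse=True)     # a full sort here; the limited
        moves += 1                   # insertion sort above would suffice
    return sizes, moves

equalize([12, 7, 5, 2])
# => ([7, 7, 6, 6], 5)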
Another possibility would be to calculate the expected average size for every bucket, and "move along" a bag (or a further bucket) with the excess from the larger buckets to the smaller ones.
Otherwise, strange things may happen:
12 7 3 1, the average is a bit less than 6, so we take 5 as the average.
5 7 3 1 bag = 7 from 1st bucket
5 5 3 1 bag = 9
5 5 5 1 bag = 7
5 5 5 8 which is a bit unbalanced.
By taking 6 (i.e. rounding) it goes better, but again sometimes it won't work:
12 5 3 1
6 5 3 1 bag = 6 from 1st bucket
6 6 3 1 bag = 5
6 6 6 1 bag = 2
6 6 6 3 which again is unbalanced.
You can run two passes, the first with the rounded mean left-to-right, the other with the truncated mean right-to-left:
12 5 3 1 we want to get no more than 6 in each bucket
6 11 3 1
6 6 8 1
6 6 6 3
6 6 6 3 and now we want to get at least 5 in each bucket
6 6 4 5 (we have taken 2 from bucket #3 into bucket #5)
6 5 5 5 (when the difference is 1 we stop).
This will require "n log n" size checks, and no more than 2n object moves.
Another interesting possibility is to reason thus: you have m objects in n buckets, so you need an integer mapping of m onto n, and that is Bresenham's linearization algorithm. Run an (n, m) Bresenham on the sorted array, and at step i (i.e. against the i-th bucket) the algorithm will tell you whether to use round(m/n) or floor(m/n) as the size. Then move objects from or to the "moving bag" according to the i-th bucket's size.
This requires n log n comparisons.
You can further reduce the number of object moves by initially setting aside all buckets that are already either round(m/n) or floor(m/n) in size into two pools of buckets sized R and F. When, running the algorithm, you need the i-th bucket to hold R objects, if the pool of R-sized buckets is not empty, swap the i-th bucket with one of them. This way, only buckets that are hopelessly under- or over-sized get balanced; (most of) the others are simply ignored, except for their references being shuffled.
If object access time is huge in proportion to computation time (e.g. some kind of automatic loader magazine), this will yield a magazine that is as balanced as possible, with the absolute minimum of overall object moves.
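A sketch of the integer-mapping step as I read it (function name mine): computing the Bresenham-style targets gives each bucket floor(m/n) or ceil(m/n) objects, with the remainder spread evenly, and the "moving bag" then tops buckets up or drains them down to their targets:
def bresenham_targets(m, n):
    targets, acc = [], 0
    for i in range(1, n + 1):
        nxt = i * m // n             # cumulative total after bucket i
        targets.append(nxt - acc)    # this bucket's share of the m objects
        acc = nxt
    return targets

bresenham_targets(23, 4)
# => [5, 6, 6, 6]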
You could use an Integer Programming Package if it's fast enough.
It may be tricky getting your constraints right. Something like the following may do the trick:
Let variable Oij denote object i being in bucket j, and let Wi represent the weight or size of object i.
Constraints:
sum(Oij for all j) == 1 #each object is in only one bucket
Oij = 1 or 0. #object is either in bucket j or not in bucket j
sum(Oij * Wi for all i) <= X + R #restrict weight on buckets.
Objective:
minimize X
Note R is the relaxation constant that you can play with depending on how much movement is required and how much performance is needed.
Now the maximum bucket size is X + R
The next step is to figure out the minimum amount of movement possible while keeping the bucket sizes below X + R.
Define an indicator variable Si that records whether object i has moved.
If Si is 0, it indicates that Oi stays where it was.
Constraints:
Si = 1 or 0.
Oij = 1 or 0.
Oij <= Si where j != original bucket of Object i
Oij != Si where j == original bucket of Object i
Sum(Oij for all j) == 1
Sum(Oij for all i) <= X + R
Objective:
minimize Sum(Si for all i)
Here Sum(Si for all i) represents the number of objects that have moved.
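As a sketch, the first model (minimize X) is only a few lines in PuLP, assuming that package as an arbitrary choice of MILP front end; the stay-variable second stage extends it with the same ingredients:
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

def balance_ilp(weights, nbuckets):
    prob = LpProblem("balance", LpMinimize)
    # O[i][j] == 1 iff object i ends up in bucket j.
    O = [[LpVariable("o_%d_%d" % (i, j), cat=LpBinary)
          for j in range(nbuckets)] for i in range(len(weights))]
    X = LpVariable("X", lowBound=0)       # the maximum bucket weight
    for row in O:
        prob += lpSum(row) == 1           # each object in exactly one bucket
    for j in range(nbuckets):
        prob += lpSum(w * O[i][j] for i, w in enumerate(weights)) <= X
    prob += X                             # objective: minimize X
    prob.solve()                          # default CBC solver
    return [[w for i, w in enumerate(weights) if O[i][j].value() > 0.5]
            for j in range(nbuckets)]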
