Slideshow Algorithm

I need to design an algorithm for a photo slideshow that is constantly receiving new images, so that the oldest pictures appear less often in the presentation until a balance is reached between the old photos and the ones that have recently arrived.
I have thought that every image could have a counter of the number of times it has been shown, and that the pictures with the lowest value in that counter could be prioritized.
Any other ideas or solutions would be well received.

You can achieve a near-uniform overall distribution (each image appears about the same number of times in the long run), but I wouldn't recommend doing it. Images that became available early would appear very rarely later on. A better user experience would be to simply choose a random image from all the available images at each step.
If you still want a near-uniform distribution in the long run, you should set the probability of selecting an image based on the number of times it has appeared so far. For example:
p(i) = 1 - count(i) / (max_count() + epsilon)
Here is a simple R script that simulates such a process: 37 random images are selected before a new image becomes available, and this is repeated 3000 times.
h <- 3000              # total images
eps <- 0.001
t <- integer(length=h) # t[i]: no. of times image i appears in r
r <- c()               # processed vector of indexes of images
m <- 0                 # highest number of appearances for any image
for (i in 1:h)
  for (j in 1:37)      # select 37 random images in range 1..i
  {
    v <- sample(1:i, 1, prob=1-t[1:i]/(m+eps)) # select image with weight 1-t[i]/(m+eps)
    r <- c(r, v)       # add to output vector
    t[v] <- t[v]+1     # update appearance count
    m <- max(m, t[v])  # update highest number of appearances
  }
plot(table(r))
The output plots (omitted here) show the number of times each image appeared, one run with epsilon = 0.001 and one with epsilon = 0.0001.
If we look, for example, at the positions in the output vector at which image #3 was selected:
> which(r==3)
[1] 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94
[21] 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 1189 34767 39377
[41] 70259
Note that if epsilon is very small, the sequence will seem less random (newer images are strongly preferred). In the long run, however, any epsilon will do.

Instead of a view counter, you could also try basing your algorithm on the timestamps at which the images were uploaded.
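For example, here is a minimal Python sketch of that idea; it assumes each image record carries a hypothetical 'uploaded_at' timestamp and a 'shown' counter, and the particular weighting function is my own choice rather than part of the suggestion above.
import random
import time

def pick_next(images, now=None, half_life=3600.0):
    # images: list of dicts with 'uploaded_at' (unix seconds) and 'shown' (display count).
    # Newer uploads get exponentially more weight; frequently shown images get less.
    now = time.time() if now is None else now
    weights = []
    for img in images:
        age = max(now - img['uploaded_at'], 0.0)
        recency = 0.5 ** (age / half_life)        # weight halves every half_life seconds
        weights.append((recency + 0.1) / (img['shown'] + 1))
    choice = random.choices(images, weights=weights, k=1)[0]
    choice['shown'] += 1
    return choice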

Need help for finding optimal path which visits multiple sequences of nodes

Summary
Recently I came across a path-finding puzzle with some complex constraints (I currently don't have any solution for it).
A 2D matrix represents the graph. The length of a path is the number of traversed cells.
One or more number sequences are to be found inside the matrix. Each sequence is scored with a value.
A maximum path length is given; the number of picked cells must not exceed this value.
At any given moment, you can only choose cells in a specific column or row.
On each turn, you need to switch between column and row and stay on the same line as the last cell you picked; you have to move at right angles (the direction changes like in the game Snake).
Always start by picking the first cell from the top row, then go vertically down to pick the second cell, and then continue switching between column and row as usual.
You can't choose the same cell twice; the resulting path must not contain duplicated cells.
The task is to find the shortest possible path in the graph that contains one or more sequences with the highest total score, while not exceeding the provided maximum path length.
The picture below demonstrates the solved puzzle with the resulting path marked in red:
Here, we have the path 3A-10-9B. This path contains the given sequence 3A-10-9B, which earns 10 pts. More complex graphs typically have longer paths containing several sequences at once.
More complex examples
Multiple Sequences
You can complete sequences in any order. The order in which the sequences are listed doesn't matter.
Wasted Moves
Sometimes we are forced to waste moves and choose different cells that don't belong to any sequence. Here are the rules:
You can waste 1 or 2 moves before the first sequence.
You can waste 1 or 2 moves between any neighboring sequences.
However, you cannot break sequences and waste moves in the middle of them.
Here, we must waste one move before the sequence 3A-9B and two moves between the sequences 3A-9B and 72-D4. Also, notice how the red lines between 3A and 9B, as well as between 72 and D4, "cross" the previously selected cells D4 and 9B, respectively. You can pick different cells from the same row or column multiple times.
Optimal Sequences
Sometimes it is not possible to have a path that contains all of the provided sequences. In this case, choose the combination that achieves the highest score.
In the above example, we can complete either 9B-3A-72-D4 or 72-D4-3A but not both due to the maximum path length of 5 cells. We have chosen the sequence 9B-3A-72-D4 since it grants more score points than 72-D4-3A.
Unsolvable example
The first sequence 3A-D4 can't be completed since the code matrix doesn't contain code D4 at all. The second sequence, 72-10, can't be completed for another reason: codes 72 and 10 aren't located in the same row or column anywhere in the matrix and, therefore, can't form a sequence.
Performance advice
One brute-force way is to generate all possible paths in the code matrix, loop through them and choose the best one. This is the easiest but also the slowest approach. Solving larger matrices with a larger maximum path length might take dozens of minutes, if not hours.
Try to implement a faster algorithm that doesn’t iterate through all possible paths and can solve puzzles with the following parameters in less than 10 seconds:
Matrix size: 10x10
Number of sequences: 5
Average length of sequences: 4
Maximum path length: 12
At least one solution exists
For example:
Matrix:
41,0f,32,18,29,4b,55,3f,10,3a,
19,4f,57,43,3a,25,19,1e,5e,42,
13,5a,54,3c,1b,32,29,1c,15,30,
49,45,22,2e,25,51,2f,21,4c,37,
1a,5e,49,12,55,1e,49,19,43,2d,
34,26,53,48,49,60,32,3c,50,10,
0f,1e,30,3d,64,37,5b,5e,22,61,
4e,4f,15,5a,13,56,44,22,40,26,
43,2c,17,2b,1f,25,43,60,50,1f,
3c,2b,54,46,42,4d,32,46,30,24,
Sequences:
30, 26, 44, 32, 3c - 25pts
5a, 3c, 12, 1e, 4d - 10pts
1e, 5a, 12 - 10pts
4d, 1e - 5pts
32, 51, 2f, 49, 55, 42 - 30pts
Optimal solution
3f, 1c, 30, 26, 44, 32, 3c, 22, 5a, 12, 1e, 4d
Which contains
30, 26, 44, 32, 3c
5a, 12, 1e
1e, 4d
Conclusion
I am looking for any advice for this puzzle, since I have no idea what keywords to look for. Pseudo-code or hints would be helpful, and I would appreciate them. What has come to my mind so far is just Dijkstra:
For each sequence, since the order doesn't matter, I would have to get all possible paths for every permutation, then find the highest-scoring path that contains the other input sequences.
After that, choose the best of the best.
In this case, I doubt the performance will be the issue.
The first step is to find whether a required sequence exists.
- SET found FALSE
- LOOP C1 over cells in first row
- CLEAR foundSequence
- ADD C1 to foundSequence
- LOOP C2 over cells in column containing C1
- IF C2 value == first value in sequence
- ADD C2 to foundSequence
- SET found TRUE
- break from LOOP C2
- IF found
- SET direction VERT
- LOOP V over remaining values in sequence
- TOGGLE direction
- SET found FALSE
- LOOP C2 over cells in same column or row ( depending on direction ) containing last cell in foundSequence
- IF C2 value == V
- ADD C2 to foundSequence
- SET found TRUE
- break from LOOP C2
- IF ! found
break out of LOOP V
- IF foundSequence == required sequence
- RETURN foundSequence
RETURN failed
Note: this doesn't find sequences that are feasible with "wasted moves". I would implement this first and get it working. Then, using the same ideas, it can be extended to allow wasted moves.
You have not specified an input format! I suggest a space-delimited text file with lines beginning with 'm' containing matrix values and lines beginning with 's' containing sequences, like this:
m 3A 3A 10 9B
m 9B 72 3A 10
m 10 3A 3A 3A
m 3A 10 3A 9B
s 3A 10 9B
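For what it's worth, here is a minimal Python sketch of a reader for that proposed format (the format itself is only a suggestion, and the file name in the usage comment is made up):
def read_puzzle(path):
    # Lines starting with 'm' hold one matrix row each;
    # lines starting with 's' hold one required sequence each.
    matrix, sequences = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == 'm':
                matrix.append(parts[1:])
            elif parts[0] == 's':
                sequences.append(parts[1:])
    return matrix, sequences

# Usage (assuming the sample above was saved as 'puzzle.txt'):
# matrix, sequences = read_puzzle('puzzle.txt')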
I have implemented the sequence finder in C++
std::vector<int> findSequence()
{
    int w, h;
    pA->size(w, h);

    std::vector<int> foundSequence;
    bool found = false;
    bool vert = false;

    // loop over cells in first row
    for (int c = 0; c < w; c++)
    {
        foundSequence.clear();
        found = false;
        if (pA->cell(c, 0)->value == vSequence[0][0])
        {
            foundSequence.push_back(pA->cell(c, 0)->ID());
            found = true;
        }
        while (found)
        {
            // found possible starting cell
            // toggle search direction
            vert = (!vert);

            // start from last cell found
            auto pmCell = pA->cell(foundSequence.back());
            int c, r;
            pA->coords(c, r, pmCell);

            // look for next value in required sequence
            std::string nextValue = vSequence[0][foundSequence.size()];
            found = false;
            if (vert)
            {
                // loop over cells in column
                for (int r2 = 1; r2 < h; r2++)
                {
                    if (pA->cell(c, r2)->value == nextValue)
                    {
                        foundSequence.push_back(pA->cell(c, r2)->ID());
                        found = true;
                        break;
                    }
                }
            }
            else
            {
                // loop over cells in row
                for (int c2 = 0; c2 < w; c2++)
                {
                    if (pA->cell(c2, r)->value == nextValue)
                    {
                        foundSequence.push_back(pA->cell(c2, r)->ID());
                        found = true;
                        break;
                    }
                }
            }
            if (!found)
            {
                // dead end - try starting from next cell in first row
                break;
            }
            if (foundSequence.size() == vSequence[0].size())
            {
                // success!!!
                return foundSequence;
            }
        }
    }
    std::cout << "Cannot find sequence\n";
    exit(1);
}
This outputs:
3A 3A 10 9B
9B 72 3A 10
10 3A 3A 3A
3A 10 3A 9B
row 0 col 1 3A
row 3 col 1 10
row 3 col 3 9B
You can check out the code for the complete application at https://github.com/JamesBremner/stackoverflow75410318
I have added the ability to find sequences that start elsewhere than the first row (i.e. with "wasted moves"). You can see the code in the github repo.
Here are the results of a timing profile run on a 10 by 10 matrix: the algorithm searches for 5 sequences, taking about 0.6 milliseconds per sequence.
Searching
41 0f 32 18 29 4b 55 3f 10 3a
19 4f 57 43 3a 25 19 1e 5e 42
13 5a 54 3c 1b 32 29 1c 15 30
49 45 22 2e 25 51 2f 21 4c 37
1a 5e 49 12 55 1e 49 19 43 2d
34 26 53 48 49 60 32 3c 50 10
0f 1e 30 3d 64 37 5b 5e 22 61
4e 4f 15 5a 13 56 44 22 40 26
43 2c 17 2b 1f 25 43 60 50 1f
3c 2b 54 46 42 4d 32 46 30 24
for sequence 4d 1e
Cannot find sequence starting in 1st row, using wasted moves
row 9 col 5 4d
row 4 col 5 1e
for sequence 30 26 44 32 3c
Cannot find sequence starting in 1st row, using wasted moves
Cannot find sequence
for sequence 5a 3c 12 1e 4d
Cannot find sequence starting in 1st row, using wasted moves
row 2 col 1 5a
row 2 col 3 3c
row 4 col 3 12
row 4 col 5 1e
row 9 col 5 4d
for sequence 1e 5a 12
Cannot find sequence starting in 1st row, using wasted moves
row 6 col 1 1e
row 4 col 5 1e
row 4 col 3 12
for sequence 32 51 2f 49 55 42
Cannot find sequence starting in 1st row, using wasted moves
row 2 col 5 32
row 3 col 5 51
row 3 col 6 2f
row 4 col 6 49
row 4 col 4 55
row 9 col 4 42
raven::set::cRunWatch code timing profile
Calls Mean (secs) Total Scope
5 0.00059034 0.0029517 findSequence

Why average loss goes up when training using Vowpal Wabbit

I tried to use VW to train a regression model on a small set of examples (about 3112). I think I'm doing it correctly, yet it shows me weird results. I dug around but didn't find anything helpful.
$ cat sh600000.feat | vw --l1 1e-8 --l2 1e-8 --readable_model model -b 24 --passes 10 --cache_file cache
using l1 regularization = 1e-08
using l2 regularization = 1e-08
Num weight bits = 24
learning rate = 0.5
initial_t = 0
power_t = 0.5
decay_learning_rate = 1
using cache_file = cache
ignoring text input in favor of cache input
num sources = 1
average since example example current current current
loss last counter weight label predict features
0.040000 0.040000 1 1.0 -0.2000 0.0000 79
0.051155 0.062310 2 2.0 0.2000 -0.0496 79
0.046606 0.042056 4 4.0 0.4100 0.1482 79
0.052160 0.057715 8 8.0 0.0200 0.0021 78
0.064936 0.077711 16 16.0 -0.1800 0.0547 77
0.060507 0.056079 32 32.0 0.0000 0.3164 79
0.136933 0.213358 64 64.0 -0.5900 -0.0850 79
0.151692 0.166452 128 128.0 0.0700 0.0060 79
0.133965 0.116238 256 256.0 0.0900 -0.0446 78
0.179995 0.226024 512 512.0 0.3700 -0.0217 79
0.109296 0.038597 1024 1024.0 0.1200 -0.0728 79
0.579360 1.049425 2048 2048.0 -0.3700 -0.0084 79
0.485389 0.485389 4096 4096.0 1.9600 0.3934 79 h
0.517748 0.550036 8192 8192.0 0.0700 0.0334 79 h
finished run
number of examples per pass = 2847
passes used = 5
weighted example sum = 14236
weighted label sum = -155.98
average loss = 0.490685 h
best constant = -0.0109567
total feature number = 1121506
$ wc model
41 48 657 model
Questions:
Why is the number of features in the output (readable) model less than the number of actual features? I counted that the training data contains 78 features (plus the bias, which makes 79, as shown during training). The number of feature bits is 24, which should be far more than enough to avoid collisions.
Why does the average loss actually go up in the training as you can see in the above example?
(Minor) I tried to increase the number of feature bits to 32, and it output an empty model. Why?
EDIT:
I tried to shuffle the input file, as well as using --holdout_off, as suggested. But the result is still almost the same: the average loss goes up.
$ cat sh600000.feat.shuf | vw --l1 1e-8 --l2 1e-8 --readable_model model -b 24 --passes 10 --cache_file cache --holdout_off
using l1 regularization = 1e-08
using l2 regularization = 1e-08
Num weight bits = 24
learning rate = 0.5
initial_t = 0
power_t = 0.5
decay_learning_rate = 1
using cache_file = cache
ignoring text input in favor of cache input
num sources = 1
average since example example current current current
loss last counter weight label predict features
0.040000 0.040000 1 1.0 -0.2000 0.0000 79
0.051155 0.062310 2 2.0 0.2000 -0.0496 79
0.046606 0.042056 4 4.0 0.4100 0.1482 79
0.052160 0.057715 8 8.0 0.0200 0.0021 78
0.071332 0.090504 16 16.0 0.0300 0.1203 79
0.043720 0.016108 32 32.0 -0.2200 -0.1971 78
0.142895 0.242071 64 64.0 0.0100 -0.1531 79
0.158564 0.174232 128 128.0 0.0500 -0.0439 79
0.150691 0.142818 256 256.0 0.3200 0.1466 79
0.197050 0.243408 512 512.0 0.2300 -0.0459 79
0.117398 0.037747 1024 1024.0 0.0400 0.0284 79
0.636949 1.156501 2048 2048.0 1.2500 -0.0152 79
0.363364 0.089779 4096 4096.0 0.1800 0.0071 79
0.477569 0.591774 8192 8192.0 -0.4800 0.0065 79
0.411068 0.344567 16384 16384.0 0.0700 0.0450 77
finished run
number of examples per pass = 3112
passes used = 10
weighted example sum = 31120
weighted label sum = -105.5
average loss = 0.423404
best constant = -0.0033901
total feature number = 2451800
The training examples are all distinct from each other, so I doubt there is an over-fitting problem (which, as I understand it, usually happens when the number of inputs is too small compared to the number of features).
EDIT2:
I tried to print the average loss for every pass over the examples, and I see that it mostly remains constant.
$ cat dist/sh600000.feat | vw --l1 1e-8 --l2 1e-8 -f dist/model -P 3112 --passes 10 -b 24 --cache_file dist/cache
using l1 regularization = 1e-08
using l2 regularization = 1e-08
Num weight bits = 24
learning rate = 0.5
initial_t = 0
power_t = 0.5
decay_learning_rate = 1
final_regressor = dist/model
using cache_file = dist/cache
ignoring text input in favor of cache input
num sources = 1
average since example example current current current
loss last counter weight label predict features
0.498822 0.498822 3112 3112.0 0.0800 0.0015 79 h
0.476677 0.454595 6224 6224.0 -0.2200 -0.0085 79 h
0.466413 0.445856 9336 9336.0 0.0200 -0.0022 79 h
0.490221 0.561506 12448 12448.0 0.0700 -0.1113 79 h
finished run
number of examples per pass = 2847
passes used = 5
weighted example sum = 14236
weighted label sum = -155.98
average loss = 0.490685 h
best constant = -0.0109567
total feature number = 1121506
Here is another try without the --l1, --l2 and -b parameters:
$ cat dist/sh600000.feat | vw -f dist/model -P 3112 --passes 10 --cache_file dist/cache
Num weight bits = 18
learning rate = 0.5
initial_t = 0
power_t = 0.5
decay_learning_rate = 1
final_regressor = dist/model
using cache_file = dist/cache
ignoring text input in favor of cache input
num sources = 1
average since example example current current current
loss last counter weight label predict features
0.520286 0.520286 3112 3112.0 0.0800 -0.0021 79 h
0.488581 0.456967 6224 6224.0 -0.2200 -0.0137 79 h
0.474247 0.445538 9336 9336.0 0.0200 -0.0299 79 h
0.496580 0.563450 12448 12448.0 0.0700 -0.1727 79 h
0.533413 0.680958 15560 15560.0 -0.1700 0.0322 79 h
0.524531 0.480201 18672 18672.0 -0.9800 -0.0573 79 h
finished run
number of examples per pass = 2801
passes used = 7
weighted example sum = 19608
weighted label sum = -212.58
average loss = 0.491739 h
best constant = -0.0108415
total feature number = 1544713
Does that mean it's normal for the average loss to go up during one pass, but that as long as multiple passes give the same loss, it's fine?
The model file stores only non-zero weights, so most likely the other weights got zeroed out, especially since you are using --l1.
It may be caused by many things. Perhaps your dataset isn't shuffled well enough. If you sort your dataset so that examples labeled -1 are in the first half and examples labeled 1 are in the second, your model will show very good convergence on the first half, but you'll see the average loss bump up as it reaches the second half. So it may be an imbalance in the dataset. As for the last two losses: these are holdout losses (marked with 'h' at the end of the line) and may indicate that the model is overfitted. Please refer to my other answer.
In the master branch, usage of -b 32 is currently blocked; you can use up to -b 31. In practice, -b 24 to -b 28 is usually enough, even for tens of thousands of features.
I would recommend getting an up-to-date VW version from GitHub.

Compute the local mean and variance of identified pixel in image

I want to compute the mean and standard deviation of a sub-region created by a window (dashed line in the figure, omitted here) centered at an identified pixel (marked in red), i.e. the local mean and standard deviation.
We can do it by convolving the image with a mask. However, it takes a long time because I only care about the mean and standard deviation at several points, while the convolution computes them for every point in the image. Is there a faster way that only computes the mean and standard deviation at the identified pixels? I am doing it in MATLAB. This is my code using the convolution approach:
I=[18 36 70 33 64 40 62 76 71 37 5
82 49 86 45 96 29 74 7 60 56 45
25 32 55 48 25 30 12 82 95 77 8
24 18 78 74 19 57 67 59 16 46 78
28 9 59 2 29 11 7 31 75 15 25
83 26 96 8 82 26 85 12 11 28 19
81 64 78 70 26 33 17 72 81 16 54
75 39 78 34 59 31 77 31 61 81 89
89 84 29 99 79 25 26 35 65 56 76
93 90 45 7 61 13 34 24 11 34 92
88 82 91 81 100 4 88 70 85 8 19];
identified_position=[30 36 84 90] %indices of pixel 78, 48,72 60
mask=1/9.*ones(3,3);
mean_all=imfilter(I,mask,'same');
%Mean of identified pixels
mean_all(identified_position)
% Compute the variance
std_all=stdfilt(I,ones(3));
%std of identified pixels
std_all(identified_position)
This is the comparison code
function compare_mean(dimx,dimy)
I=randi(100,[dimx,dimy]);
rad=3;
identified_position=randi(max(I(:)),[1,5]);% Get 5 random position
function way1()
mask=ones(rad,rad);
mask=mask./sum(mask(:));
mean_all=conv2(I,mask,'same');
mean_out =mean_all(identified_position);
end
function way2()
box_size = rad; %// Edit your window size here (an odd number is preferred)
bxr = floor(box_size/2); %// box radius
%// Get neighboring indices and those elements for all identified positions
off1 = bsxfun(@plus,[-bxr:bxr]',[-bxr:bxr]*size(I,1)); %//' neighborhood offsets
idx = bsxfun(@plus,off1(:),identified_position); %// all absolute offsets
I_selected_neigh = I(idx); %// all offsetted elements
mean_out = mean(I_selected_neigh,1); %// mean output
end
way2()
time_way1=@()way1();timeit(time_way1)
time_way2=@()way2();timeit(time_way2)
end
Sometimes way2 gives this error:
Subscript indices must either be real positive integers or logicals.
Error in compare_mean/way2 (line 18)
I_selected_neigh = I(idx); %// all offsetted elements
Error in compare_mean (line 22)
way2()
Discussion & Solution Codes
Given I as the input image, identified_position as the linear indices of the selected points and bxsz as the window/box size, the approach listed next should be pretty efficient:
%// Get XY coordinates
[X,Y] = ind2sub(size(I),identified_position);
pts = [X(:) Y(:)];
%// Parameters
bxr = (bxsz-1)/2;
Isz = size(I);
%// XY coordinates of neighboring elements
[offx,offy] = ndgrid(-bxr:bxr,-bxr:bxr);
x_idx = bsxfun(@plus,offx(:),pts(:,1)'); %//'
y_idx = bsxfun(@plus,offy(:),pts(:,2)'); %//'
%// Outside image boundary elements
invalids = x_idx>Isz(1) | x_idx<1 | y_idx>Isz(2) | y_idx<1;
%// All neighboring indices
all_idx = (y_idx-1)*size(I,1) + x_idx;
all_idx(invalids) = 1;
%// All neighboring elements
all_vals = I(all_idx);
all_vals(invalids) = 0;
mean_out = mean(all_vals,1); %// final mean output
stdfilts = stdfilt(all_vals,ones(bxsz^2,1))
std_out = stdfilts(ceil(size(stdfilts,1)/2),:) %// final stdfilt output
Basically, it gets all the neighbouring indices for all identified positions in one go with bsxfun and thus gets all those neighbouring elements. Those selected elements are then used to get the mean and stdfilt outputs. The whole idea is to keep the memory requirement to a minimum while doing everything in a vectorized fashion on those selected elements. Hopefully, this will be faster!
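For readers who want to try the same idea outside MATLAB, here is a rough Python/NumPy sketch of the approach (gather every window element for all selected points in one shot, then reduce along the window axis). It is only a translation of the idea, not the code that was benchmarked below, and it handles out-of-boundary neighbours by dropping them rather than zeroing them:
import numpy as np

def local_mean_std(I, identified_position, bxsz):
    # identified_position: 0-based column-major linear indices
    # (the MATLAB example uses 1-based indices, so subtract 1 when porting).
    I = np.asarray(I, dtype=float)
    h, w = I.shape
    rows, cols = np.unravel_index(identified_position, (h, w), order='F')
    r = (bxsz - 1) // 2
    offr, offc = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1), indexing='ij')
    ri = offr.reshape(-1, 1) + np.asarray(rows).reshape(1, -1)   # neighbour row indices
    ci = offc.reshape(-1, 1) + np.asarray(cols).reshape(1, -1)   # neighbour column indices
    valid = (ri >= 0) & (ri < h) & (ci >= 0) & (ci < w)
    vals = np.where(valid, I[np.clip(ri, 0, h - 1), np.clip(ci, 0, w - 1)], np.nan)
    return np.nanmean(vals, axis=0), np.nanstd(vals, axis=0, ddof=1)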
Benchmarking
Benchmarking Code
dx = 10000; %// x-dimension of input image
dy = 10000; %// y-dimension of input image
npts = 1000; %// number of points
I=randi(100,[dx,dy]); %// create input image of random intensities
identified_position=randi(max(I(:)),[1,npts]);
rad=5; %// blocksize (rad x rad)
%// Run the approaches fed with the inputs
func1 = @() way1(I,identified_position,rad); %// original approach
time1 = timeit(func1);
clear func1
func2 = @() way2(I,identified_position,rad); %// proposed approach
time2 = timeit(func2);
clear func2
disp(['Input size: ' num2str(dx) 'x' num2str(dy) ' & Points: ' num2str(npts)])
disp(['With Original Approach: Elapsed Time = ' num2str(time1) '(s)'])
disp(['With Proposed Approach: Elapsed Time = ' num2str(time2) '(s)'])
disp(['**Speedup w/ Proposed Approach : ' num2str(time1/time2) 'x!**'])
Associated function codes
%// OP's stated approach
function mean_out = way1(I,identified_position,rad)
mask=ones(rad,rad);
mask=mask./sum(mask(:));
mean_all=conv2(I,mask,'same');
mean_out =mean_all(identified_position);
return;
function mean_out = way2(I,identified_position,rad)
%//.... code from proposed approach stated earlier until mean_out %//
Runtime results
Input size: 10000x10000 & Points: 1000
With Original Approach: Elapsed Time = 0.46394(s)
With Proposed Approach: Elapsed Time = 0.00049403(s)
**Speedup w/ Proposed Approach : 939.0778x!**

Need to find lowest differences between first line of an array and the rest ones

Well, I've been given a number of pairs of elements (s, h), where each pair puts an element h on the s-th row of a 2D array. It is not necessary that each line has the same number of elements; it is only known that there cannot be more than N elements on a line.
What I want to do is find the lowest biggest difference (!) between a certain element of the first line and the remaining lines.
Thus, if I have 3 lines with (101,92) (100,25,95,52,101) (93,108,0,65,200), what I want to find is 3, because I have to choose 92, and then I have 95-92=3 from the first line to the second and 93-92=1 from the first to the third.
I have reached the point where it is certain that, if I have s lines with n(i) elements each, i=0..s, I can arrange them so that n0<=n1<=...<=ns, which gives a good average-case scenario when picking the best fit from the 1st line towards the others.
However, I cannot think of a way better than O(n^2), or maybe even O(n^3) in some cases. Does anyone have a suggestion for a fairly improved way to do this?
Combine all lines into a single list, also keeping track of which element comes from where.
Sort this list.
Have a last-value variable for each line.
For each item in the sorted list, update the last-value variable of the applicable line. If not all lines have a last-value set yet, do nothing. If it's an element from the first list:
Recalculate the biggest difference over all of the last-value variables. Store this difference.
If it's an element from any other list:
If this is the first time all last-values are set, calculate the biggest difference from scratch. Otherwise, if the difference between the first list's last-value and this element is bigger than the current biggest difference, update the biggest difference with this difference. Store this difference.
The smallest difference is the desired value.
Example:
Lists: (101,92) (100,25,95,52,101) (93,108,0,65,200)
Sorted 0 25 52 65 92 93 95 100 101 101 108 200
Source 2 1 1 2 0 2 1 1 0 1 2 2
Last[0] - - - - 92 92 92 92 101 101 101 101
Last[1] - 25 52 52 52 52 95 100 100 101 101 101
Last[2] 0 0 0 65 65 93 93 93 93 93 108 200
Diff - - - - 40 41 3 8 8 8 7 9
Best - - - - 40 40 3 3 3 3 3 3
Best = 3 as required. Storing the actual items or finding them afterwards should be easy enough.
Complexity:
Let n be the total number of items and k be the number of lists.
O(n log n) for the combine + sort.
O(nk) (worst case) for the scan through, since we're checking n items and, at each item, we do maximum O(k) work.
So O(n log n + nk).
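As a sanity check against the sweep above, here is a small Python sketch that computes the same quantity directly: for every candidate in the first line it binary-searches the closest element in each other line and keeps the smallest maximum difference. This is O(n1 * k * log n) rather than the sweep's O(n log n + nk), but it is easy to verify:
import bisect

def lowest_biggest_difference(lines):
    # min over x in lines[0] of (max over other lines of the distance to x)
    sorted_lines = [sorted(line) for line in lines[1:]]
    best = None
    for x in lines[0]:
        worst = 0
        for line in sorted_lines:
            i = bisect.bisect_left(line, x)
            cand = []                      # closest element is line[i] or line[i-1]
            if i < len(line):
                cand.append(line[i] - x)
            if i > 0:
                cand.append(x - line[i - 1])
            worst = max(worst, min(cand))
        best = worst if best is None else min(best, worst)
    return best

print(lowest_biggest_difference([[101, 92], [100, 25, 95, 52, 101], [93, 108, 0, 65, 200]]))  # 3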

How to optimize the layout of rectangles

I have a dynamic number of equally proportioned and sized rectangular objects that I want to optimally display on the screen. I can resize the objects but need to maintain proportion.
I know what the screen dimensions are.
How can I calculate the optimal number of rows and columns that I will need to divide the screen in to and what size I will need to scale the objects to?
Thanks,
Jamie.
Assuming that all rectangles have the same dimensions and orientation, and that these should not be changed.
Let's play!
// Proportion of the screen
// w,h width and height of your rectangles
// W,H width and height of the screen
// N number of your rectangles that you would like to fit in
// ratio
r = (w*H) / (h*W)
// This ratio is important since we can define the following relationship
// nbRows and nbColumns are what you are looking for
// nbColumns = nbRows * r (there will be problems of integers)
// we are looking for the minimum values of nbRows and nbColumns such that
// N <= nbRows * nbColumns = (nbRows ^ 2) * r
nbRows = ceil ( sqrt ( N / r ) ) // r is positive...
nbColumns = ceil ( N / nbRows )
I hope I got my maths right, but that cannot be far from what you are looking for ;)
EDIT:
there is not much difference between having the ratio and having the width and height...
// If ratio = w/h
r = ratio * (H/W)
// If ratio = h/w
r = H / (W * ratio)
And then you're back to using 'r' to find out how many rows and columns to use.
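A minimal Python sketch of this calculation (variable names follow the comments above; the numbers in the example call are the 640x480 screen and 3x5 rectangles used in the next answer, and give 4 rows by 4 columns):
from math import ceil, sqrt

def grid_for(N, w, h, W, H):
    # w,h: rectangle size; W,H: screen size; N: number of rectangles
    r = (w * H) / (h * W)          # the ratio defined above
    nb_rows = ceil(sqrt(N / r))    # smallest nbRows with N <= nbRows^2 * r
    nb_cols = ceil(N / nb_rows)    # enough columns to hold all N rectangles
    return nb_rows, nb_cols

print(grid_for(14, 5, 3, 640, 480))   # -> (4, 4)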
Jamie, I interpreted "optimal number of rows and columns" to mean "how many rows and columns will provide the largest rectangles, consistent with the required proportions and screen size". Here's a simple approach for that interpretation.
Each possible choice (number of rows and columns of rectangles) results in a maximum possible size of rectangle for the specified proportions. Looping over the possible choices and computing the resulting size implements a simple linear search over the space of possible solutions. Here's a bit of code that does that, using an example screen of 480 x 640 and rectangles in a 3 x 5 proportion.
def min (a, b)
a < b ? a : b
end
screenh, screenw = 480, 640
recth, rectw = 3.0, 5.0
ratio = recth / rectw
puts ratio
nrect = 14
(1..nrect).each do |nhigh|
nwide = ((nrect + nhigh - 1) / nhigh).truncate
maxh, maxw = (screenh / nhigh).truncate, (screenw / nwide).truncate
relh, relw = (maxw * ratio).truncate, (maxh / ratio).truncate
acth, actw = min(maxh, relh), min(maxw, relw)
area = acth * actw
puts ([nhigh, nwide, maxh, maxw, relh, relw, acth, actw, area].join("\t"))
end
Running that code provides the following trace:
1 14 480 45 27 800 27 45 1215
2 7 240 91 54 400 54 91 4914
3 5 160 128 76 266 76 128 9728
4 4 120 160 96 200 96 160 15360
5 3 96 213 127 160 96 160 15360
6 3 80 213 127 133 80 133 10640
7 2 68 320 192 113 68 113 7684
8 2 60 320 192 100 60 100 6000
9 2 53 320 192 88 53 88 4664
10 2 48 320 192 80 48 80 3840
11 2 43 320 192 71 43 71 3053
12 2 40 320 192 66 40 66 2640
13 2 36 320 192 60 36 60 2160
14 1 34 640 384 56 34 56 1904
From this, it's clear that either a 4x4 or 5x3 layout will produce the largest rectangles. It's also clear that the rectangle size (as a function of row count) is worst (smallest) at the extremes and best (largest) at an intermediate point. Assuming that the number of rectangles is modest, you could simply code the calculation above in your language of choice, but bail out as soon as the resulting area starts to decrease after rising to a maximum.
That's a quick and dirty (but, I hope, fairly obvious) solution. If the number of rectangles became large enough to bother, you could tweak for performance in a variety of ways:
use a more sophisticated search algorithm (partition the space and recursively search the best segment),
if the number of rectangles is growing during the program, keep the previous result and only search nearby solutions,
apply a bit of calculus to get a faster, precise, but less obvious formula.
This is almost exactly like kenneth's question here on SO. He also wrote it up on his blog.
If you scale the proportions in one dimension so that you are packing squares, it becomes the same problem.
One way I like to do that is to use the square root of the area:
Let
r = number of rectangles
w = width of display
h = height of display
Then,
A = (w * h) / r is the area per rectangle
and
L = sqrt(A) is the base length of each rectangle.
If they are not square, then just multiply accordingly to keep the same ratio.
Another way to do a similar thing is to just take the square root of the number of rectangles. That'll give you one dimension of your grid (i.e. the number of columns):
C = sqrt(n) is the number of columns in your grid
and
R = n / C is the number of rows.
Note that one of these will have to be a ceiling and the other a floor, otherwise you will truncate numbers and might miss a row.
Your mention of rows and columns suggests that you envisaged arranging the rectangles in a grid, possibly with a few spaces (e.g. some of the bottom row) unfilled. Assuming this is the case:
Suppose you scale the objects such that (an as-yet unknown number) n of them fit across the screen. Then
objectScale=screenWidth/(n*objectWidth)
Now suppose there are N objects, so there will be
nRows = ceil(N/n)
rows of objects (where ceil is the Ceiling function), which will take up
nRows*objectScale*objectHeight
of vertical height. We need to find n, and want to choose the smallest n such that this distance is smaller than screenHeight.
A simple mathematical expression for n is made trickier by the presence of the ceiling function. If the number of columns is going to be fairly small, probably the easiest way to find n is just to loop through increasing n until the inequality is satisfied.
Edit: We can start the loop with the upper bound of
floor(sqrt(N*objectHeight*screenWidth/(screenHeight*objectWidth)))
for n, and work down: the solution is then found in O(sqrt(N)). An O(1) solution is to assume that
nRows = N/n + 1
or to take
n=ceil(sqrt(N*objectHeight*screenWidth/(screenHeight*objectWidth)))
(the solution of Matthieu M.) but these have the disadvantage that the value of n may not be optimal.
Border cases occur when N=0, and when N=1 and the aspect ratio of the objects is such that objectHeight/objectWidth > screenHeight/screenWidth - both of these are easy to deal with.
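Here is a small Python sketch of that loop (the simple increasing search described above; the function and parameter names are mine, and the N=0 border case is just short-circuited):
from math import ceil

def smallest_columns(N, obj_w, obj_h, screen_w, screen_h):
    # Smallest n such that N objects, scaled to fit n across the screen width,
    # also fit vertically. Returns (n, objectScale).
    if N == 0:
        return 0, 1.0
    for n in range(1, N + 1):
        scale = screen_w / (n * obj_w)
        n_rows = ceil(N / n)
        if n_rows * scale * obj_h <= screen_h:
            return n, scale
    # Border case: even a single row is too tall; cap the scale by the height as well.
    return N, min(screen_w / (N * obj_w), screen_h / obj_h)

print(smallest_columns(14, 5, 3, 640, 480))   # -> (4, 32.0)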
