I've been implementing an adaptation of Viola-Jones' face detection algorithm. The technique relies upon placing a subframe of 24x24 pixels within an image, and subsequently placing rectangular features inside it in every position with every size possible.
These features can consist of two, three or four rectangles; the paper's Figure 1 shows an example of each type.
They claim the exhaustive set is more than 180k (section 2):
Given that the base resolution of the detector is 24x24, the exhaustive set of rectangle features is quite large, over 180,000. Note that unlike the Haar basis, the set of rectangle features is overcomplete.
The following statements are not explicitly stated in the paper, so they are assumptions on my part:
There are only 2 two-rectangle features, 2 three-rectangle features and 1 four-rectangle feature. The logic behind this is that we are observing the difference between the highlighted rectangles, not explicitly the color or luminance or anything of that sort.
We cannot define feature type A as a 1x1 pixel block; it must be at least 1x2 pixels. Likewise, type D must be at least 2x2 pixels, and this rule holds accordingly for the other features.
We cannot define feature type A as a 1x3 pixel block, as the middle pixel cannot be partitioned, and subtracting it from itself would be identical to a 1x2 pixel block; this feature type is therefore only defined for even widths. Likewise, the width of feature type C must be divisible by 3, and this rule holds accordingly for the other features.
We cannot define a feature with a width and/or height of 0. Therefore, we iterate x and y to 24 minus the size of the feature.
Based upon these assumptions, I've counted the exhaustive set:
const int frameSize = 24;
const int features = 5;
// All five feature types:
const int feature[features][2] = {{2,1}, {1,2}, {3,1}, {1,3}, {2,2}};

int count = 0;
// Each feature:
for (int i = 0; i < features; i++) {
    int sizeX = feature[i][0];
    int sizeY = feature[i][1];
    // Each position:
    for (int x = 0; x <= frameSize-sizeX; x++) {
        for (int y = 0; y <= frameSize-sizeY; y++) {
            // Each size fitting within the frameSize:
            for (int width = sizeX; width <= frameSize-x; width+=sizeX) {
                for (int height = sizeY; height <= frameSize-y; height+=sizeY) {
                    count++;
                }
            }
        }
    }
}
The result is 162,336.
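Incidentally, the same total falls out of a closed form: for a base shape of sx x sy, summing the valid positions over all scaled sizes factors into one arithmetic sum per axis, X * (24 + 1 - sx*(X+1)/2) horizontally with X = floor(24/sx), and likewise vertically. Here is a minimal sketch of that check in Python (my own arithmetic, not anything from the paper):

frameSize = 24
total = 0
for sx, sy in [(2,1), (1,2), (3,1), (1,3), (2,2)]:
    X = frameSize // sx  # number of horizontal scales of the base shape
    Y = frameSize // sy  # number of vertical scales
    total += int(X * Y
                 * (frameSize + 1 - sx * (X + 1) / 2)
                 * (frameSize + 1 - sy * (Y + 1) / 2))
print(total)  # 162336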
The only way I found to approximate the "over 180,000" Viola & Jones speak of is by dropping assumption #4 and introducing bugs in the code. This involves changing the two innermost loops to:
for (int width = 0; width < frameSize-x; width+=sizeX)
for (int height = 0; height < frameSize-y; height+=sizeY)
The result is then 180,625. (Note that this will effectively prevent the features from ever touching the right and/or bottom of the subframe.)
Now of course the question: have they made a mistake in their implementation? Does it make any sense to consider features with a surface of zero? Or am I seeing it the wrong way?
Upon closer look, your code looks correct to me, which makes one wonder whether the original authors had an off-by-one bug. I suppose someone ought to look at how OpenCV implements it!
Nonetheless, one suggestion to make it easier to understand is to flip the order of the for loops by going over all sizes first, then looping over the possible locations given the size:
#include <stdio.h>

int main()
{
    int i, x, y, sizeX, sizeY, width, height, count, c;

    /* All five shape types */
    const int features = 5;
    const int feature[][2] = {{2,1}, {1,2}, {3,1}, {1,3}, {2,2}};
    const int frameSize = 24;

    count = 0;
    /* Each shape */
    for (i = 0; i < features; i++) {
        sizeX = feature[i][0];
        sizeY = feature[i][1];
        printf("%dx%d shapes:\n", sizeX, sizeY);

        /* each size (multiples of basic shapes) */
        for (width = sizeX; width <= frameSize; width+=sizeX) {
            for (height = sizeY; height <= frameSize; height+=sizeY) {
                printf("\tsize: %dx%d => ", width, height);
                c = count;

                /* each possible position given size */
                for (x = 0; x <= frameSize-width; x++) {
                    for (y = 0; y <= frameSize-height; y++) {
                        count++;
                    }
                }

                printf("count: %d\n", count-c);
            }
        }
    }
    printf("%d\n", count);
    return 0;
}
which gives the same result as before: 162,336.
To verify it, I tested the case of a 4x4 window and manually checked all cases (easy to count, since the 1x2/2x1 and 1x3/3x1 shapes are the same, only rotated by 90 degrees):
2x1 shapes:
    size: 2x1 => count: 12
    size: 2x2 => count: 9
    size: 2x3 => count: 6
    size: 2x4 => count: 3
    size: 4x1 => count: 4
    size: 4x2 => count: 3
    size: 4x3 => count: 2
    size: 4x4 => count: 1
1x2 shapes:
    size: 1x2 => count: 12
    size: 1x4 => count: 4
    size: 2x2 => count: 9
    size: 2x4 => count: 3
    size: 3x2 => count: 6
    size: 3x4 => count: 2
    size: 4x2 => count: 3
    size: 4x4 => count: 1
3x1 shapes:
    size: 3x1 => count: 8
    size: 3x2 => count: 6
    size: 3x3 => count: 4
    size: 3x4 => count: 2
1x3 shapes:
    size: 1x3 => count: 8
    size: 2x3 => count: 6
    size: 3x3 => count: 4
    size: 4x3 => count: 2
2x2 shapes:
    size: 2x2 => count: 9
    size: 2x4 => count: 3
    size: 4x2 => count: 3
    size: 4x4 => count: 1

Total Count = 136

The 4x4 window used for the check:

+-----------------------+
|     |     |     |     |
|     |     |     |     |
+-----+-----+-----+-----+
|     |     |     |     |
|     |     |     |     |
+-----+-----+-----+-----+
|     |     |     |     |
|     |     |     |     |
+-----+-----+-----+-----+
|     |     |     |     |
|     |     |     |     |
+-----------------------+
First of all, there is still some confusion in Viola and Jones' papers.
In their CVPR'01 paper it is clearly stated that
"More specifically, we use three
kinds of features. The value of a
two-rectangle feature is the difference between the sum of the
pixels within two rectangular regions.
The regions have the same size and
shape and are horizontally or
vertically adjacent (see Figure 1).
A three-rectangle feature computes the sum within two outside
rectangles subtracted from the sum in
a center rectangle. Finally a
four-rectangle feature".
In the IJCV'04 paper, exactly the same thing is said. So altogether, 4 features. But strangely enough, they stated this time that the exhaustive feature set is 45,396! That does not seem to be the final version. Here I guess that some additional constraints were introduced there, such as min_width, min_height, width/height ratio, and even position.
Note that both papers are downloadable on his webpage.
I have not read the whole paper, but the wording of your quote sticks out at me:
Given that the base resolution of the detector is 24x24, the exhaustive set of rectangle features is quite large, over 180,000. Note that unlike the Haar basis, the set of rectangle features is overcomplete.
"The set of rectangle features is overcomplete"
"Exhaustive set"
It sounds to me like a setup, where I expect the paper's authors to follow up with an explanation of how they cull the search space down to a more effective set, for example by getting rid of trivial cases such as rectangles with zero surface area.
Edit: or by using some kind of machine learning algorithm, as the abstract hints at. An exhaustive set implies all possibilities, not just "reasonable" ones.
There is no guarantee that any author of any paper is correct in all their assumptions and findings. If you think that assumption #4 is valid, then keep that assumption, and try out your theory. You may be more successful than the original authors.
Quite a good observation, but they might implicitly zero-pad the 24x24 frame, or "overflow" and wrap around to the first pixels when a feature goes out of bounds (as in rotational shifts), or, as Breton said, they might consider some features "trivial" and then discard them with AdaBoost.
In addition, I wrote Python and Matlab versions of your code so I could test it myself (easier for me to debug and follow), and I post them here in case anyone finds them useful sometime.
Python:
frameSize = 24
features = 5
# All five feature types:
feature = [[2,1], [1,2], [3,1], [1,3], [2,2]]

count = 0
# Each feature:
for i in range(features):
    sizeX = feature[i][0]
    sizeY = feature[i][1]
    # Each position:
    for x in range(frameSize-sizeX+1):
        for y in range(frameSize-sizeY+1):
            # Each size fitting within the frameSize:
            for width in range(sizeX, frameSize-x+1, sizeX):
                for height in range(sizeY, frameSize-y+1, sizeY):
                    count = count + 1

print(count)
Matlab:
frameSize = 24;
features = 5;
% All five feature types:
feature = [[2,1]; [1,2]; [3,1]; [1,3]; [2,2]];

count = 0;
% Each feature:
for ii = 1:features
    sizeX = feature(ii,1);
    sizeY = feature(ii,2);
    % Each position:
    for x = 0:frameSize-sizeX
        for y = 0:frameSize-sizeY
            % Each size fitting within the frameSize:
            for width = sizeX:sizeX:frameSize-x
                for height = sizeY:sizeY:frameSize-y
                    count = count + 1;
                end
            end
        end
    end
end
display(count)
In their original 2001 paper they only state that they used three kinds of features:
we use three kinds of features
with two, three and four rectangles respectively.
Since each kind has two orientations (that differ by 90 degrees), perhaps for the computation of the total number of features they used 2*3 types of features: 2 two-rectangle features, 2 three-rectangle features and 2 four-rectangle features. With this assumption there are indeed over 180,000 features:
feature_types = [(1,2), (2,1), (1,3), (3,1), (2,2), (2,2)]
window_size = (24,24)

total_features = 0
for f_type in feature_types:
    for f_height in range(f_type[0], window_size[0] + 1, f_type[0]):
        for f_width in range(f_type[1], window_size[1] + 1, f_type[1]):
            total_features += (window_size[0] - f_height + 1) * (window_size[1] - f_width + 1)

print(total_features)
# 183072
The second four-rectangle feature differs from the first only by a sign, so there is no need to keep it and if we drop it then the total number of features reduces to 162,336.
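As a quick check of that subtraction (my own arithmetic, not from the papers): a 2x2 feature admits 23 + 21 + ... + 1 = 144 (position, width) combinations along one axis, and the same number vertically, so the duplicate accounts for 144^2 = 20,736 features:

# A width-w feature fits in 24 - w + 1 horizontal positions;
# valid widths for the 2x2 base are 2, 4, ..., 24.
per_axis = sum(24 - w + 1 for w in range(2, 25, 2))
print(per_axis)                # 144
print(183072 - per_axis ** 2)  # 162336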
This is from a sample program for OpenCL programming.
I am confused about how global and local work size are computed.
They are computed based on the image size.
Image size is 1920 x 1080 (w x h).
What I assumed is that global_work_size[0] and global_work_size[1] form a grid over the image. But global_work_size turns out to be {128, 1088}.
Then local_work_size[0] and local_work_size[1] form a grid over global_work_size; local_work_size is {128, 32}.
But the total number of groups, num_groups, is 34; it is not 128 x 1088.
Max workgroup_size available at device is 4096.
How is the image distributed into such global and local work group sizes?
They are calculated in the following function.
clGetKernelWorkGroupInfo(histogram_rgba_unorm8, device, CL_KERNEL_WORK_GROUP_SIZE, sizeof(size_t), &workgroup_size, NULL);
{
    size_t gsize[2];
    int w;

    if (workgroup_size <= 256)
    {
        gsize[0] = 16;                    // workgroup_size is split into rows & cols
        gsize[1] = workgroup_size / 16;
    }
    else if (workgroup_size <= 1024)
    {
        gsize[0] = workgroup_size / 16;
        gsize[1] = 16;
    }
    else
    {
        gsize[0] = workgroup_size / 32;
        gsize[1] = 32;
    }

    local_work_size[0] = gsize[0];
    local_work_size[1] = gsize[1];

    // Round up so that all pixels are covered; each work-item handles
    // num_pixels_per_work_item pixels.
    w = (image_width + num_pixels_per_work_item - 1) / num_pixels_per_work_item;

    global_work_size[0] = ((w + gsize[0] - 1) / gsize[0]);            // columns of groups
    global_work_size[1] = ((image_height + gsize[1] - 1) / gsize[1]); // rows of groups
    num_groups = global_work_size[0] * global_work_size[1];
    global_work_size[0] *= gsize[0];
    global_work_size[1] *= gsize[1];
}

err = clEnqueueNDRangeKernel(queue, histogram_rgba_unorm8, 2, NULL, global_work_size, local_work_size, 0, NULL, NULL);
if (err)
{
    printf("clEnqueueNDRangeKernel() failed for histogram_rgba_unorm8 kernel. (%d)\n", err);
    return EXIT_FAILURE;
}
I don't see any great mystery here. If you follow the calculation, the values do indeed end up as you say. (Not that the group size is particularly efficient in my opinion.)
If workgroup_size is indeed 4096, gsize will end up as { 128, 32 }, since execution follows the else branch (workgroup_size > 1024).
w is the number of columns that are num_pixels_per_work_item = 32 pixels wide, i.e. the minimum number of work-items needed to cover the entire width; for an image width of 1920 this is 60. In other words, we require an absolute minimum of 60 x 1080 work-items to cover the entire image.
Next, the number of group columns and rows is calculated and temporarily stored in global_work_size. As group width has been set to 128, a w of 60 means we end up with 1 column of groups. (This seems a waste of resources, more than half of the 128 work-items in each group will not be doing anything.) The number of group rows is simply image_height divided by gsize[1] (32) and rounding up. (33.75 -> 34)
Total number of groups can now be determined by multiplying out the grid: num_groups = global_work_size[0] * global_work_size[1]
To get the true total number of work-items in each dimension, each dimension of global_work_size is now multiplied by the group size in that dimension: {1, 34} multiplied by {128, 32} yields {128, 1088}.
This actually covers an area of 4096 x 1088 pixels so about 53% of that is wastage. This is mainly because the algorithm for group dimensions favours wide groups, and each work-item works on a 32x1 pixel slice of the image. It would be better to favour tall work groups to reduce the amount of rounding.
For example, if we reverse gsize[0] and gsize[1], in this case we'd get a group size of { 32, 128 }, giving us a global work size of { 64, 1152 } and only 12% wastage. It would also be worth checking if always picking the largest possible group size is even a good idea; it quite possibly isn't, but I've not looked into the kernel's computation in detail, let alone run any measurements, to say if that's the case or not.
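Here is a quick sketch of that arithmetic in plain Python (my own illustration, using the question's values of a 1920x1080 image and 32 pixels per work-item; work_sizes is a made-up helper, not part of any OpenCL API):

import math

def work_sizes(gsize, image_w=1920, image_h=1080, px_per_item=32):
    # Reproduce the rounding from the question's code for a given group size.
    w = math.ceil(image_w / px_per_item)   # minimum work-items per row
    groups_x = math.ceil(w / gsize[0])
    groups_y = math.ceil(image_h / gsize[1])
    global_size = (groups_x * gsize[0], groups_y * gsize[1])
    covered = global_size[0] * px_per_item * global_size[1]  # pixels touched
    waste = 1 - (image_w * image_h) / covered
    return global_size, groups_x * groups_y, waste

print(work_sizes((128, 32)))  # ((128, 1088), 34, ~0.53)
print(work_sizes((32, 128)))  # ((64, 1152), 18, ~0.12)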
I have very minimal programming experience.
I would like to write a program that will generate, and save as a GIF image, every possible image that can be created using only black and white pixels in 640 by 360 px dimensions.
In other words, each pixel can be either black or white. 640 x 360 = 230,400 pixels. So I believe a total of 460,800 images can be generated (230,400 x 2 for black/white).
I would like a program to do this automatically.
Please help!
First, to answer your questions: yes, there will be writing on "some" pictures. In fact, every text ever written by a human that fits in 640x360 pixels will show up, as will every other text (text not yet written, or text that never will be written). You will also see pictures of every human who is, was or will be alive. See the Infinite Monkey Theorem for further information.
The code to create the GIF you want is fairly easy. I used Java for this. Note that you need an extra class: AnimatedGifEncoder. The code is not memory-bound because the AnimatedGifEncoder will write each image to disk as soon as it is computed. But make sure that you have enough disk space available.
import java.awt.Color;
import java.awt.image.BufferedImage;

public class BigPicture {
    private final int width;
    private final int height;

    private final int WHITE = Color.WHITE.getRGB();
    private final int BLACK = Color.BLACK.getRGB();

    public BigPicture(int width, int height) {
        this.width = width;
        this.height = height;
    }

    public void process(String outFile) {
        AnimatedGifEncoder gif = new AnimatedGifEncoder();
        gif.setSize(width, height);
        gif.setTransparent(null); // no transparency
        gif.setRepeat(-1);        // play only once
        gif.setDelay(0);          // 0 ms delay between images,
                                  // 'cause ain't nobody got time for that!
        gif.start(outFile);

        BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_BINARY);

        // set the image to all white
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                bufferedImage.setRGB(x, y, WHITE);
            }
        }

        // add white image
        gif.addFrame(bufferedImage);

        // add all other combinations
        while (increase(bufferedImage)) {
            gif.addFrame(bufferedImage);
        }

        gif.finish();
    }

    /**
     * @param bufferedImage
     *            the image to increase
     * @return false if the last pixel is set to black => image is completely black
     */
    private boolean increase(BufferedImage bufferedImage) {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                if (bufferedImage.getRGB(x, y) == WHITE) {
                    bufferedImage.setRGB(x, y, BLACK);
                    return true;
                }
                bufferedImage.setRGB(x, y, WHITE);
            }
        }
        return false;
    }

    public static void main(String[] args) {
        new BigPicture(640, 360).process("C:\\temp\\bigpicture.gif");
        System.out.println("finished.");
    }
}
Please be aware that this will take some time. So don't bother waiting and enjoy your life instead! ;)
EDIT: Since my solution is a bit unclear, I will explain the algorithm.
I have defined a method called increase. This method takes the BufferedImage and changes the bit pattern of the image so that the next bit pattern appears. The method is just a binary addition. It returns false once the image reaches the last bit pattern (all pixels set to black).
As long as it is possible to increase the bit pattern (i.e. increase() returns true) we will save the image as new frame and increase the image again.
How the increase() method works: the method runs over the image, first in the x-direction, then in the y-direction. I assume that white pixels are 0 and black pixels are 1. So we take the bit pattern of the image and add 1. We inspect the first pixel: if it is white (0), we can add 1 without an overflow, so we turn the pixel black (0 + 1 = 1 => black pixel). After that we return from the method, because we want to increase only one position. It returns true because an increase was possible. If we encounter a black pixel, we have an overflow (1 + 1 = 2, or in binary, 10), so we have to set the current pixel to white and carry the 1 to the next pixel. This continues until we find the first white pixel.
example:
First we create a print method. This method prints the image as a binary number. Note that the number is reversed: the most significant bit is the bit on the right side.
public void print(BufferedImage bufferedImage) {
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (bufferedImage.getRGB(x, y) == WHITE) {
                System.out.print(0); // white pixel
            } else {
                System.out.print(1); // black pixel
            }
        }
    }
    System.out.println();
}
Now we modify our main while-loop:
print(bufferedImage); // this one prints the empty image
while (increase(bufferedImage)) {
    print(bufferedImage);
}
And now set up a short example to test:
new BigPicture(1, 5).process("C:\\temp\\bigpicture.gif");
and finally the output:
00000 // 0 this is the first print before the loop -> "white image"
10000 // 1 the first white pixel is set to black
01000 // 2 the first overflow, so the second pixel is set to black "2"
11000 // 3
00100 // 4
10100 // 5
01100
11100
00010 // 8
10010
01010
11010
00110
10110
01110
11110
00001 // 16
10001
01001
11001
00101
10101
01101
11101
00011
10011
01011
11011
00111
10111
01111
11111 // 31 == 2^5 - 1
finished.
In other words, each pixel can be either black or white. 640 x 360 = 230,400 pixels. So I believe a total of 460,800 images can be generated (230,400 x 2 for black/white).
There is a little flaw in your belief. You are right about the number of pixels: 230,400. Unfortunately, this means there are not 2 * 230,400, but 2 ^ 230,400 possible pictures, which is a number with nearly 70,000 digits (longer than the allowed answer size, I am afraid). For comparison, the diameter of the observable universe in centimeters (a centimeter being roughly the width of a pinkie) is a number with only about 29 digits.
In order to understand why your computation of the number of pictures is wrong consider this example: if your pictures contained only three pixels, you could have 8 different pictures (2 ^ 3), rather than 6 (2 * 3). Here are all of them: BBB, BBW, BWB, BWW, WBB, WBW, WWB, WWW. Adding another pixel doubles the size of possible pictures because you can have it white for all the 3-pixel cases, or black for all the 3-pixel cases. Doubling 1 (which is the amount of pictures you can have with 0 pixels) 230,400 times gives you 2 ^ 230,400.
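To get a feel for the size of that number: the digit count of 2^n is floor(n * log10(2)) + 1, which a couple of lines of Python confirm (my own check):

import math

pixels = 640 * 360                               # 230,400
digits = math.floor(pixels * math.log10(2)) + 1
print(digits)                                    # 69358 digits in 2**230400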
It's great that there is a bounty on the question, but it is rather distracting and counter-productive if it was just an April Fools' joke.
I'm going to go ahead and pinch some code from a related question, just for fun.
from itertools import product

for matrix in product([0, 1], repeat=640 * 360):
    pass  # render and save your .gif -- this loop runs 2**230400 times
As all the comments have already stated, good luck!
On a more serious note, if you didn't want to be absolutely sure that you had all permutations, you could generate a random 640x360 matrix and store it as an image.
Perform this action say 100k times, and you'll have at least an interesting set of pictures to look at, but it's unfeasible to get every possible permutation.
You could then delete all identical files to reduce the set to just the unique images.
All,
Are there any nice algorithms out there to generate a unique colour based on an index in an array?
This is of course going to be used in a UI, to set the background colour of a number of dynamic buttons.
Now .NET (and, off the top of my head, Java) supports the following methods:
Color.FromArgb
Color.FromName
FromArgb can take a 32-bit integer containing the ARGB color.
However, an algorithmic approach might produce colours that are too similar to their neighbours in the sequence, depending upon how many items are in the array, or cases where the foreground colour ends up too similar to the background.
The only way I can think of is to create some kind of Color array with a set of predefined colours in it. Of course, this is a manual coding effort, but this way you can get a set of colours over a small range that are visually distinct from each other before the sequence repeats towards the end.
The other way could be to use the following to generate the array of colours:
Enum.GetValues(typeof(KnownColor))
Any suggestions?
Cheers
Hash the index, and take the lower 32 bits of the hash for your color. This will appear random but should produce a uniform distribution of colors. Will not guarantee that the chosen colors will be visually different from each other or the background, but may serve.
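For illustration, a minimal sketch of that idea in Python (crc32 stands in for whatever hash function is handy, and I force the top byte to 255 so the colour stays opaque, rather than keeping all 32 hash bits):

import zlib

def index_color(i):
    # Hash the index; use the low 24 bits as RGB with full alpha.
    h = zlib.crc32(str(i).encode())
    return 0xFF000000 | (h & 0x00FFFFFF)

for i in range(5):
    print(f"#{index_color(i):08X}")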
You could also take the whole color spectrum, cut it into n evenly spaced colors, and assign them to the elements of the array, assuming that you know the size of the array.
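For example, slicing the hue wheel with Python's colorsys (a sketch; the fixed saturation of 0.8 and value of 0.9 are arbitrary choices):

import colorsys

def distinct_colors(n):
    # n evenly spaced hues, converted to 0-255 RGB tuples.
    return [tuple(int(c * 255) for c in colorsys.hsv_to_rgb(i / n, 0.8, 0.9))
            for i in range(n)]

print(distinct_colors(5))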
https://stackoverflow.com/a/43235/684934 might also give good ideas.
RGB-colors form a 3D-cube of color-space. Begin by selecting the corners of this cube (0 or 255 values). Then subdivide the cube into a grid of 8 cubes, and take the newly formed vertices. Subdivide again, into 64 cubes, and take the newly formed vertices. This will give you progressively closer and closer colors for higher indices.
IEnumerable<Color> GeneratePalette()
{
    for (int scale = 1; scale < 256; scale *= 2)
    {
        for (int r = 0; r <= scale; r++)
            for (int g = 0; g <= scale; g++)
                for (int b = 0; b <= scale; b++)
                {
                    // Keep only the vertices introduced at this subdivision level
                    // (at scale 1, all eight cube corners are new).
                    if (scale == 1 || (r & 1) == 1 || (g & 1) == 1 || (b & 1) == 1)
                    {
                        yield return new Color
                        {
                            A = 255,
                            R = (byte) (255 * r / scale),
                            G = (byte) (255 * g / scale),
                            B = (byte) (255 * b / scale),
                        };
                    }
                }
    }
}
The first few colors:
#FF000000
#FF0000FF
#FF00FF00
#FF00FFFF
#FFFF0000
#FFFF00FF
#FFFFFF00
#FFFFFFFF
#FF00007F
#FF007F00
#FF007F7F
#FF007FFF
...
#FFFF7FFF
#FFFFFF7F
#FF00003F
Possible Duplicate:
android color between two colors, based on percentage?
How to find all the colors between two colors?
At the beginning, we have two colors in RGB, and a number of intermediate colors to produce between them. The method must return an array with the required colors. I strongly need help with an algorithm.
Suppose we have 2 colors, (R1,G1,B1) and (R2,G2,B2), and N intermediate colors:

for i from 1 to N:
    Ri = R1 + (R2-R1) * i / N
    Gi = G1 + (G2-G1) * i / N
    Bi = B1 + (B2-B1) * i / N
    AddToArray(Ri,Gi,Bi)
Is that what you are looking for?
PS: I would recommend using the HSL color space instead of RGB if you want a more natural color gradient.
Let your current cR, cG and cB values be 0%, and let the target R, G and B values be 100%; then you just iterate i = 1 to 100, with each iteration producing cRGB + i * (RGB - cRGB) / 100. You don't have to use 100 intermediate colors; you can use N of them instead.
function makeGradient(currentColor, desiredColor, N) {
    var colors = [],
        cR = currentColor.R,
        cG = currentColor.G,
        cB = currentColor.B,
        dR = desiredColor.R - cR,
        dG = desiredColor.G - cG,
        dB = desiredColor.B - cB;

    for (var i = 1; i <= N; i++) {
        colors.push(new Color(cR + i * dR / N, cG + i * dG / N, cB + i * dB / N));
    }

    return colors;
}
However, that won't give you very good intermediate colors. The first thing you should do is convert your colors into HSV or a similar color space where intensity is separate from hue and saturation. That will give you much better intermediate colors: http://en.wikipedia.org/wiki/HSL_and_HSV
To do that, first convert your colors to HSV and run the same algorithm as above, but with H, S and V instead of RGB. Keep in mind that S and V have a min of 0 and a max of 1, while H is represented in degrees between 0 and 360. You might have to do something with H if you want it to go from the current color to the destination color as quickly as possible: e.g. if cH = 10 and dH = 50, then going from 10 -> 50 is shortest, but if cH = 10 and dH = 350, then going from 10 -> -10 (the same as 350 degrees) is shorter.
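A small sketch of that shortest-way-around-the-circle logic (hues in degrees; hue_step is a made-up helper name):

def hue_step(c_h, d_h):
    # Signed hue difference, wrapped into (-180, 180].
    diff = (d_h - c_h) % 360
    return diff - 360 if diff > 180 else diff

print(hue_step(10, 50))   # 40  -> go forward
print(hue_step(10, 350))  # -20 -> go backward through -10 (i.e. 350)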
I would like to know how I can randomly fill a space with a set number of items and a target size. For example, given a number of columns = 15 and a target width = 320, how can I randomly distribute the column widths to fill the space, as shown in the image below? If possible, any sort of pseudo-code or algorithm will do.
One way to partition your 320 pixels into 15 random "columns" is to do it uniformly, i.e., every column width follows the same distribution.
For this, you actually need a uniform distribution on the simplex. The first way to achieve it is the one described by yi_H, and is probably the way to go:
Generate 14 uniform integers between 0 and 320.
Keep regenerating any number that has already been chosen, so that you end up with 14 distinct numbers
Sort them
Your column bounds are then given by consecutive pairs of random numbers.
If you have a minimum width requirement (e.g., 1 for non-empty columns), remove it 15 times from your 320 pixels, generate the numbers in the new range and make the necessary adjustments.
The second way to achieve a uniform point on a simplex is a bit more involved, and not very well suited to discrete settings such as pixels, but here it is in brief anyway:
Generate 15 exponential random variables with the same shape parameter (e.g. 1)
Divide each number by the total, so that each is in [0,1]
Rescale those numbers by multiplying them by 320, and round them. These are your column widths
This is not as nice as the first way, since with the rounding you may end up with a total bigger or smaller than 320, and you may get columns with 0 width... The only advantage is that you don't need to perform any sort (but you have to compute logarithms... so all in all, the first way is the way to go). A rough sketch of this second method is below.
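In Python it would look something like this (my own sketch; random.expovariate(1.0) draws the exponential variates, and the rounding caveat above applies):

import random

def random_widths(total=320, n=15):
    # n exponential draws, normalized to sum to 1, then scaled and rounded.
    draws = [random.expovariate(1.0) for _ in range(n)]
    s = sum(draws)
    return [round(total * d / s) for d in draws]

widths = random_widths()
print(widths, sum(widths))  # the sum may drift slightly from 320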
I should add that if you do not necessarily want uniform random filling, then you have a lot more algorithms at your disposal.
Edit: Here is a quick implementation of the first algorithm in Mathematica. Note that, in order to avoid generating points until they are all different, you can just consider that an empty column has a width of 1; a minimum width of 2 will then give you columns with non-empty interiors:
min = 2;
total = 320;
height = 50;
n = 15;
x = Sort[RandomInteger[total - n*min - 1, n - 1]] + Range[n - 1]*min
Graphics[{Rectangle[{-2, 0}, {0, height}], (*left margin*)
  Rectangle[{#, 0}, {# + 1, height}] & /@ x, (*column borders*)
  Rectangle[{total, 0}, {total + 2, height}]}, (*right margin*)
 PlotRange -> {{-2, total + 2}, {0, height}},
 ImageSize -> {total + 4, height}]
which gives the following example output:
Edit: Here is the modified JavaScript algorithm (beware, I have never written JavaScript before, so there might be some errors/poor style):
function sortNumber(a, b)
{
    return a - b;
}

function draw() {
    var canvas = document.getElementById( "myCanvas" );

    var numberOfStrips = 15;
    var initPosX = 10;
    var initPosY = 10;
    var width = 320;
    var height = 240;
    var minColWidth = 2;
    var reducedWidth = width - numberOfStrips * minColWidth;

    var separators = new Array();
    for ( var n = 0; n < numberOfStrips - 1; n++ ) {
        separators[n] = Math.floor(Math.random() * reducedWidth);
    }
    separators.sort(sortNumber);
    for ( var n = 0; n < numberOfStrips - 1; n++ ) {
        separators[n] += (n+1) * minColWidth;
    }

    if ( canvas.getContext ) {
        var ctx = canvas.getContext( "2d" );

        // Draw lines
        ctx.lineWidth = 1;
        ctx.strokeStyle = "rgb( 120, 120, 120 )";
        for ( var n = 0; n < numberOfStrips - 1; n++ ) {
            var newPosX = separators[n];
            ctx.moveTo( initPosX + newPosX, initPosY );
            ctx.lineTo( initPosX + newPosX, initPosY + height );
        }
        ctx.stroke();

        // Draw enclosing rectangle
        ctx.lineWidth = 4;
        ctx.strokeStyle = "rgb( 0, 0, 0 )";
        ctx.strokeRect( initPosX, initPosY, width, height );
    }
}
Additionally, note that minColWidth should not be bigger than a certain value (reducedWidth should not be negative...), but this is not tested in the algorithm. As stated before, use a value of 0 if you don't mind two lines on top of one another, a value of 1 if you don't mind two lines next to each other, and a value of 2 or more if you want non-empty columns only.
Create 14 unique numbers in the range (0, 320). Those will be the x positions of the bars.
Create a random number, compare it with the previous ones, and store it.
If consecutive lines aren't allowed, also check that it doesn't equal any previous number ±1.
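In Python, random.sample gives you that uniqueness for free, replacing the compare-with-previous loop (a sketch; bar_positions is a made-up name):

import random

def bar_positions(width=320, bars=14):
    # 14 distinct x positions strictly inside (0, width), sorted so that
    # consecutive values bound the columns.
    return sorted(random.sample(range(1, width), bars))

print(bar_positions())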