What should be the correct length of other (custom) attributes if the length of my position attribute is m and the length of my index attribute is n?
For example, if I have a rectangular surface to draw:
4 points => 4 * 3 = 12, so the length of the position attribute is 12;
2 triangles => 2 * 3 = 6, so the length of the index attribute is 6.
If I need a color attribute (RGBA), what should be the length of the array:
4 * 4 = 16 or 6 * 4 = 24?
An attribute has an itemSize property that determines how many numbers make up a single vector;
the position attribute has itemSize == 3.
It does not matter whether your geometry is indexed: every vertex needs every attribute defined for itself.
You can get the number of vertices with positionAttribute.count, which is effectively positionAttribute.array.length / positionAttribute.itemSize.
So your RGBA color attribute needs itemSize = 4, and thus an array of length 4 * positionAttribute.count:
var rgbaAttribute = new THREE.BufferAttribute(new Float32Array(4 * positionAttribute.count), 4);
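For the rectangle in the question, positionAttribute.count is 12 / 3 = 4 vertices, so the RGBA array must have length 4 * 4 = 16 (not 24).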
How can I go about trying to order the points of an irregular array from top left to bottom right, such as in the image below?
Methods I've considered are:
calculate the distance of each point from the top left of the image (Pythagoras's theorem) but apply some kind of weighting to the Y coordinate in an attempt to prioritise points on the same 'row' e.g. distance = SQRT((x * x) + (weighting * (y * y)))
sort the points into logical rows, then sort each row.
Part of the difficulty is that I do not know how many rows and columns will be present in the image coupled with the irregularity of the array of points. Any advice would be greatly appreciated.
Even though the question is a bit old, I recently had a similar problem when calibrating a camera.
The algorithm is quite simple and based on this paper:
1. Find the top left point: min(x+y)
2. Find the top right point: max(x-y)
3. Create a straight line from these two points.
4. Calculate the distance of all points to the line.
5. If the distance is smaller than the radius of the circle (or a threshold), the point is in the top line; otherwise the point is in the rest of the block.
6. Sort the points of the top line by x value and save them.
7. Repeat until there are no points left.
My python implementation looks like this:
import cv2
import numpy as np

# detect the keypoints (params is a cv2.SimpleBlobDetector_Params() configured elsewhere)
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)
img_with_keypoints = cv2.drawKeypoints(img, keypoints, np.array([]), (0, 0, 255),
                                       cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

points = []
keypoints_to_search = keypoints[:]
while len(keypoints_to_search) > 0:
    a = sorted(keypoints_to_search, key=lambda p: (p.pt[0]) + (p.pt[1]))[0]   # find upper left point
    b = sorted(keypoints_to_search, key=lambda p: (p.pt[0]) - (p.pt[1]))[-1]  # find upper right point
    cv2.line(img_with_keypoints, (int(a.pt[0]), int(a.pt[1])), (int(b.pt[0]), int(b.pt[1])), (255, 0, 0), 1)

    # convert opencv keypoint to numpy 3d point
    a = np.array([a.pt[0], a.pt[1], 0])
    b = np.array([b.pt[0], b.pt[1], 0])

    row_points = []
    remaining_points = []
    for k in keypoints_to_search:
        p = np.array([k.pt[0], k.pt[1], 0])
        d = k.size  # diameter of the keypoint (might be a threshold)
        # distance between keypoint and the line a->b
        dist = np.linalg.norm(np.cross(np.subtract(p, a), np.subtract(b, a))) / np.linalg.norm(np.subtract(b, a))
        if d / 2 > dist:
            row_points.append(k)
        else:
            remaining_points.append(k)

    points.extend(sorted(row_points, key=lambda h: h.pt[0]))
    keypoints_to_search = remaining_points
Jumping on this old thread because I just dealt with the same thing: sorting a sloppily aligned grid of placed objects by left-to-right, top-to-bottom location. The drawing at the top of the original post sums it up perfectly, except that this solution supports rows with varying numbers of nodes.
S. Vogt's script above was super helpful (and the script below is entirely based on it), but my conditions are narrower. Vogt's solution accommodates a grid that may be tilted from the horizontal axis. I assume no tilting, so I don't need to compare distances from a potentially tilted top line, but rather from a single point's y value.
TypeScript below:
interface Node {x: number; y: number; width: number; height: number;}

const sortedNodes = (nodeArray: Node[]) => {
    let sortedNodes: Node[] = []; // this is the return value
    let availableNodes = [...nodeArray]; // make copy of input array
    while (availableNodes.length > 0) {
        // find y value of topmost node in availableNodes. (Change this to a reduce if you want.)
        let minY = Number.MAX_SAFE_INTEGER;
        for (const node of availableNodes) {
            minY = Math.min(minY, node.y);
        }
        // find nodes in top row: assume a node is in the top row when its distance from minY
        // is less than its height
        const topRow: Node[] = [];
        const otherRows: Node[] = [];
        for (const node of availableNodes) {
            if (Math.abs(minY - node.y) <= node.height) {
                topRow.push(node);
            } else {
                otherRows.push(node);
            }
        }
        topRow.sort((a, b) => a.x - b.x); // we have the top row: sort it by x
        sortedNodes = [...sortedNodes, ...topRow]; // append nodes in row to sorted nodes
        availableNodes = [...otherRows]; // update available nodes to exclude handled rows
    }
    return sortedNodes;
};
The above assumes that all node heights are the same. If some nodes are much taller than others, get the minimum node height across all nodes and use it instead of the iterated "node.height" value. I.e., you would change this line of the script above to use the minimum height of all nodes rather than the iterated one:
if (Math.abs(minY - node.y) <= node.height)
I propose the following idea:
1. Count the points (p).
2. For each point, round its x and y coordinates down to some grid pitch, i.e.
x = int(x/n)*n, y = int(y/m)*m for some n, m.
3. If m, n are too big, distinct points collapse onto the same grid point and the count drops. Determine m and n iteratively so that the number of distinct grid points just stays equal to p.
Starting values could be in alignment with max(x) - min(x). For the search, employ a binary search. The X and Y scaling are independent of each other.
In plain words, this pins the individual points to grid points by stretching or shrinking the grid distances, until points may share at most one common coordinate (X or Y) but no two points overlap. You could call that classifying as well.
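A minimal sketch of the snapping step in Python (illustrative only; the iterative search for n and m is left out):

def snap_to_grid(points, n, m):
    # round every coordinate down to a multiple of the grid pitch (n, m)
    # and return the set of occupied grid cells
    return {(int(x // n) * n, int(y // m) * m) for x, y in points}

# n and m would then be adjusted (e.g. by binary search, independently for X and Y)
# until len(snap_to_grid(points, n, m)) == len(points), i.e. no two points collapse.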
Think of a 2D grid, e.g. of 1000x1000 cells, which is used as the map of a level in a game. This map is dynamically filled with game objects during runtime. Now we need to calculate the probability of placing a new object at a given x/y position in this grid.
What I have already is an int array the holds the number of game objects in close distance to the cell at x/y. The index of this array represents the cell distance to the given cell, and each value in the array tells the number of game objects in the grid at that distance. So for example the array could look like this:
0, 0, 1, 2, 0, 3, 1, 0, 4, 0, 1
This would mean that 0 objects are in the grid cell at x/y itself, 0 objects are in the direct neighbour cells, 1 object is in a cell at a distance of two cells, 2 objects are in cells at a distance of three cells, and so on. The following figure illustrates this example:
The task now is to calculate how likely it is to place a new object at x/y, based on the values in this array. The algorithm should be something like this:
if at least one object is already closer than min, then the probability must be 0.0
else if no object is within a distance of max, then the probability must be 1.0
else the probability depends on how many objects are close to x/y, and how close they are.
So in other words: if there is at least one game object already very close, we don't want a new one. On the other hand if there is no object within a max radius, we want a new object in any case. Or else we want to place a new object with a probability depending on how many other objects there are close to x/y -- the more objects are close, and the closer they are, the less likely we want to place a new object.
I hope my description was understandable.
Can you think of an elegent algorithm or formula to calculate this probability?
PS: Sorry for the title of this question, I don't know how to summarize my question better.
One approach I'd consider is to compute a "population density" for that square. The lower the population density, the higher the probability that you would place an item there.
As you say, if there is an item at (x,y), then you can't place an item there. So consider that a population density of 1.0.
At the next level out there are 8 possible neighbors. The population density for that level would be n/8, where n is the number of items at that level. So if there are 3 objects that are adjacent to (x,y), then the density of that level is 3/8. Divide that by (distance+1).
Do the same for all levels. That is, compute the density of each level, divide by (distance+1), and sum the results. The divisor at each level is (distance*8). So your divisors are 8, 16, 24, etc.
Once you compute the results, you'll probably want to play with the numbers a bit to adjust the probabilities. That is, if you come up with a sum of 0.5, that space is likely pretty crowded. You wouldn't want to use (1-density) as your probability for generating an item. But the method I outline above should give you a single number to play with, which should simplify the problem.
So the algorithm looks something like:
total_density = 0;
for i = 0; i < max; ++i
    if (i == 0)
        local_density = counts[i]
    else
        local_density = counts[i]/(i*8); // density at that level
    total_density = total_density + (local_density/(i+1))
If dividing the local density by (i+1) over-exaggerates the effect of distance, consider using something like log(i+1) or sqrt(i+1). I've found that to be useful in other situations where the distance is a factor, but not linearly.
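A rough Python sketch of this density sum, with the distance falloff left pluggable so you can try (i+1), log(i+1), or sqrt(i+1) as suggested; counts is the per-distance array from the question:

import math

def population_density(counts, falloff=lambda i: i + 1):
    # counts[i] = number of objects at cell distance i from (x, y)
    total_density = 0.0
    for i, n in enumerate(counts):
        local_density = n if i == 0 else n / (i * 8)  # density of that ring
        total_density += local_density / falloff(i)
    return total_density

# example with the array from the question
counts = [0, 0, 1, 2, 0, 3, 1, 0, 4, 0, 1]
print(population_density(counts))                              # linear falloff
print(population_density(counts, lambda i: math.sqrt(i + 1)))  # gentler falloff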
Let's assume your array's name is distances.
double getProbability()
{
    for (int i = 0; i < min; i++)
    {
        if (distances[i] != 0) return 0;
    }
    bool b = true;
    for (int i = min; i < max; i++)
    {
        b = b && (distances[i] == 0);
    }
    if (b) return 1;
    double s = 0;
    for (int i = 0; i < distances.Count(); i++)
    {
        s += distances[i] / (double)(i + 1);
    }
    return s / totalObjectNum; // totalObjectNum: total number of objects (assumed defined elsewhere)
}
This approach calculates a weighted sum of the objects at a distance > min and <= max.
In parallel, an upper limit (called normWeight) is calculated, which depends only on max.
If at least one object is at a distance > min and <= max, then the probability closest to 1 would be 1-(1/normWeight), for 1 object on the outer ring,
and the minimal probability would be 1-((normWeight-1)/normWeight), e.g. for max-1 objects on the outer ring.
The calculation of the weighted sum can be modified by calculating different values for the variable delta.
float calculateProbability()
{
    vector<int> numObjects; // [i] := number of objects at distance i
    // fill numObjects ...

    // given:
    int min = ...;
    int max = ...; // must be >= min

    bool anyObjectCloserThanMin = false;
    bool anyObjectCloserThanMax = false;

    // calculate a weighted sum
    float sumOfWeights = 0.0;
    float normWeight = 0.0;

    for (int distance = 0; distance <= max; distance++)
    {
        // calculate a delta-value for increasing sumOfWeights depending on distance
        // the closer the object the higher the delta
        // e.g.:
        float delta = (float)(max + 1 - distance);
        normWeight += delta;

        if (numObjects[distance] > 0 && distance < min)
        {
            anyObjectCloserThanMin = true;
            break;
        }

        if (numObjects[distance] > 0)
        {
            anyObjectCloserThanMax = true;
            sumOfWeights += (float)numObjects[distance] * delta;
        }
    }

    float probability = 0.0;
    if (anyObjectCloserThanMin)
    {
        // if at least one object is already closer than min, then the probability must be 0.0
        probability = 0.0;
    }
    else if (!anyObjectCloserThanMax)
    {
        // if no object is within a distance of max, then the probability must be 1.0
        probability = 1.0;
    }
    else
    {
        // else the probability depends on how many objects are close to x/y
        // in this scenario normWeight defines an upper limit beyond which
        // the probability becomes 0
        if (sumOfWeights >= normWeight)
        {
            probability = 0.0;
        }
        else
        {
            probability = 1. - (sumOfWeights / normWeight);
            // The probability closest to 1 would be 1-(1/normWeight) for 1 object on the outer ring.
            // The minimal probability would be 1-((normWeight-1)/normWeight), e.g. for
            // max-1 objects on the outer ring.
        }
    }
    return probability;
}
A simple approach could be:
1 / (1 + weighted sum of the neighbour counts over distances in [min, max]).
By weighted I mean that neighbours whose distance to x/y is smaller are multiplied by a bigger factor than those that are farther away. As the weight you could, for example, take (max+1) - distance.
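A minimal Python sketch of this formula (the array is the example from the question; min = 2 and max = 5 are illustrative values, not given in the question):

def simple_probability(distances, min_d, max_d):
    # weight each neighbour count by (max+1) - distance: closer neighbours count more
    weighted_sum = sum(distances[d] * (max_d + 1 - d) for d in range(min_d, max_d + 1))
    return 1.0 / (weighted_sum + 1)

print(simple_probability([0, 0, 1, 2, 0, 3, 1, 0, 4, 0, 1], 2, 5))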
Note that once you compute the object density (see "population density" or "weighted sum of those objects in a distance" in previous answers), you still need to transform this value into a probability of inserting new objects (which is not treated so comprehensively in the other answers).
The probability function (PDF) needs to be defined for all possible values of object density, i.e. on the closed interval [0, 1], but otherwise it can be shaped towards any goal you desire (see illustrations), e.g.:
to move the current object density towards a desired maximum object density
to keep overall probability of insertion constant, while taking the local object density into account
If you want to experiment with various goals (PDF function shapes - linear, quadratic, hyperbola, circle section, ...), you might wish to have a look at the factory method pattern so you can switch between implementations while calling the same method name, but I wanted to keep things simpler in my example, so I implemented only the 1st goal (in Python):
def object_density(objects, min, max):
    # choose your favourite algorithm, e.g.:
    # compute the density for each level of distance
    # and then average the levels; with this weighting a distance-2 object
    # is half as significant as a distance-1 object.
    # Returns float between 0 and 1 (inclusive) for valid inputs.
    levels = [objects[d] / (d * 8) for d in range(min, max + 1)]
    return sum(levels) / len(levels)

def probability_from_density(desired_max_density, density):
    # play with PDF functions, e.g.
    # a simple linear function
    #   f(x) = a*x + b
    # where we know 2 points [0, 1] and [desired_max_density, 0], so:
    #   1 = 0 + b
    #   0 = a*desired_max_density + b
    # Returns float between 0 and 1 (inclusive) for valid inputs.
    if density >= desired_max_density:
        return 0.0
    a = -1 / desired_max_density
    b = 1
    return a * density + b

def main():
    # distance  0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
    objects = [0, 0, 1, 2, 0, 3, 1, 0, 4, 0, 1]
    min = 2
    max = 5
    desired_max_density = 0.1

    if sum(objects[:min]):  # when an object is below min distance
        return 0.0

    density = object_density(objects, min, max)  # 0.0552
    probability = probability_from_density(desired_max_density, density)  # 0.4479
    return probability

print(main())
I have a list of points in 2D space that form an (imperfect) grid:
x x x x
x x x x
x
x x x
x x x x
What's the best way to fit these to a rigid grid (i.e. create a two-dimensional array and work out where each point fits in that array)?
There are no holes in the grid, but I don't know in advance what its dimensions are.
EDIT: The grid is not necessarily regular (not even spacing between rows/cols)
A little bit of an image processing approach:
If you think of what you have as a binary image where the X is 1 and the rest is 0, you can sum up rows and columns, and use a peak finding algorithm to identify peaks which would correspond to x and y lines of the grid:
Your points as a binary image:
Sums of rows/columns:
Now apply some smoothing technique to the signal (e.g. lowess):
I'm sure you get the idea :-)
Good luck
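To make the approach above concrete, here is a rough NumPy/SciPy sketch of the row/column-sum and peak-finding step (find_peaks stands in for whatever peak finder you prefer; the smoothing and distance parameters are placeholders):

import numpy as np
from scipy.signal import find_peaks
from scipy.ndimage import gaussian_filter1d

def grid_lines(points, shape, sigma=5, min_distance=10):
    # rasterize the points into a binary image
    img = np.zeros(shape)
    for x, y in points:
        img[int(y), int(x)] = 1
    # sum rows and columns, smooth the two 1-D signals, then find peaks
    col_signal = gaussian_filter1d(img.sum(axis=0), sigma)
    row_signal = gaussian_filter1d(img.sum(axis=1), sigma)
    col_peaks, _ = find_peaks(col_signal, distance=min_distance)
    row_peaks, _ = find_peaks(row_signal, distance=min_distance)
    return col_peaks, row_peaks  # approximate x positions of columns, y positions of rows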
The best I could come up with is a brute-force solution that calculates the grid dimensions that minimize the error in the square of the Euclidean distance between the point and its nearest grid intersection.
This assumes that the number of points p is exactly equal to the number of columns times the number of rows, and that each grid intersection has exactly one point on it. It also assumes that the minimum x/y value for any point is zero. If the minimum is greater than zero, just subtract the minimum x value from each point's x coordinate and the minimum y value from each point's y coordinate.
The idea is to create all of the possible grid dimensions given the number of points. In the example above with 16 points, we would make grids with dimensions 1x16, 2x8, 4x4, 8x2 and 16x1. For each of these grids we calculate where the grid intersections would lie by dividing the maximum width of the points by the number of columns minus 1, and the maximum height of the points by the number of rows minus 1. Then we fit each point to its closest grid intersection and find the error (square of the distance) between the point and the intersection. (Note that this only works if each point is closer to its intended grid intersection than to any other intersection.)
After summing the errors for each grid configuration individually (e.g. getting one error value for the 1x16 configuration, another for the 2x8 configuration and so on), we select the configuration with the lowest error.
Initialization:
P is the set of points such that P[i][0] is the x-coordinate and
P[i][1] is the y-coordinate
Let p = |P| or the number of points in P
Let max_x = the maximum x-coordinate in P
Let max_y = the maximum y-coordinate in P
(minimum values are assumed to be zero)
Initialize min_error_dist = +infinity
Initialize min_error_cols = -1
Algorithm:
for (col_count = 1; col_count <= p; col_count++) {
    // only compute for integer # of rows and cols
    if ((p % col_count) == 0) {
        row_count = p/col_count;

        // Compute the width of the columns and height of the rows
        // If the number of columns is 1, let the column width be max_x
        // (and similarly for rows)
        if (col_count > 1) col_width = max_x/(col_count-1);
        else col_width = max_x;
        if (row_count > 1) row_height = max_y/(row_count-1);
        else row_height = max_y;

        // reset the error for the new configuration
        error_dist = 0.0;
        for (i = 0; i < p; i++) {
            // For the current point, normalize the x- and y-coordinates
            // so that it's in the range 0..(col_count-1)
            // and 0..(row_count-1)
            normalized_x = P[i][0]/col_width;
            normalized_y = P[i][1]/row_height;

            // Error is the sum of the squares of the distances between
            // the current point and the nearest grid point
            // (in both the x and y direction)
            error_dist += (normalized_x - round(normalized_x))^2 +
                          (normalized_y - round(normalized_y))^2;
        }

        if (error_dist < min_error_dist) {
            min_error_dist = error_dist;
            min_error_cols = col_count;
        }
    }
}
return min_error_cols;
Once you've got the number of columns (and thus the number of rows) you can recompute the normalized values for each point and round them to get the grid intersection they belong to.
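For example (a sketch in Python, assuming P is a list of (x, y) pairs and max_x, max_y, p are as defined in the initialization above):

col_width = max_x / (min_error_cols - 1) if min_error_cols > 1 else max_x
row_count = p // min_error_cols
row_height = max_y / (row_count - 1) if row_count > 1 else max_y

# nearest grid intersection (column, row) for each point
cells = [(round(x / col_width), round(y / row_height)) for x, y in P]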
In the end I used this algorithm, inspired by beaker's:
Calculate all the possible dimensions of the grid, given the total number of points
For each possible dimension, fit the points to that dimension and calculate the variance in alignment:
Order the points by x-value
Group the points into columns: the first r points form the first column, where r is the number of rows
Within each column, order the points by y-value to determine which row they're in
For each row/column, calculate the range of y-values/x-values
The variance in alignment is the maximum range found
Choose the dimension with the least variance in alignment
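A rough Python sketch of these steps (illustrative only; it tries every divisor pair and scores alignment as described above):

def best_grid_dimensions(points):
    n = len(points)
    best = None
    for rows in range(1, n + 1):
        if n % rows:
            continue
        cols = n // rows
        # order by x and cut into columns of `rows` points each
        by_x = sorted(points)
        columns = [sorted(by_x[c * rows:(c + 1) * rows], key=lambda p: p[1])
                   for c in range(cols)]
        # rows are formed by taking the i-th point of every column
        grid_rows = [[col[i] for col in columns] for i in range(rows)]
        # alignment variance: the largest spread of x within a column or y within a row
        spread = max(
            [max(p[0] for p in col) - min(p[0] for p in col) for col in columns] +
            [max(p[1] for p in row) - min(p[1] for p in row) for row in grid_rows])
        if best is None or spread < best[0]:
            best = (spread, rows, cols)
    return best  # (alignment variance, rows, cols)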
I wrote this algorithm that accounts for missing coordinates as well as coordinates with errors.
Python Code
# Input [x, y] coordinates of a 'sparse' grid with errors
xys = [[103,101],
[198,103],
[300, 99],
[ 97,205],
[304,202],
[102,295],
[200,303],
[104,405],
[205,394],
[298,401]]
def row_col_avgs(num_list, ratio):
    # Finds the average of each row and column. Coordinates are
    # assigned to a row and column by specifying an error ratio.
    last_num = 0
    sum_nums = 0
    count_nums = 0
    avgs = []
    num_list.sort()
    for num in num_list:
        if num > (1 + ratio) * last_num and count_nums != 0:
            avgs.append(int(round(sum_nums/count_nums, 0)))
            sum_nums = num
            count_nums = 1
        else:
            sum_nums = sum_nums + num
            count_nums = count_nums + 1
        last_num = num
    avgs.append(int(round(sum_nums/count_nums, 0)))
    return avgs
# Split coordinates into two lists of x's and y's
xs, ys = map(list, zip(*xys))
# Find averages of each row and column within a specified error.
x_avgs = row_col_avgs(xs, 0.1)
y_avgs = row_col_avgs(ys, 0.1)
# Return Completed Averaged Grid
avg_grid = []
for y_avg in y_avgs:
    avg_row = []
    for x_avg in x_avgs:
        avg_row.append([int(x_avg), int(y_avg)])
    avg_grid.append(avg_row)
print(avg_grid)
Code Output
[[[102, 101], [201, 101], [301, 101]],
[[102, 204], [201, 204], [301, 204]],
[[102, 299], [201, 299], [301, 299]],
[[102, 400], [201, 400], [301, 400]]]
I am also looking for another solution using linear algebra. See my question here.
I have N scalable square tiles (buttons) that need to be placed inside a fixed-size rectangular surface (toolbox). I would like to present all the buttons at the same size.
How can I solve for the optimal tile size that covers the largest possible area of the rectangular surface with tiles?
Let W and H be the width and height of the rectangle.
Let s be the length of the side of a square.
Then the number of squares n(s) that you can fit into the rectangle is floor(W/s)*floor(H/s). You want to find the maximum value of s for which n(s) >= N
If you plot the number of squares against s you will get a piecewise constant function. The discontinuities are at the values W/i and H/j, where i and j run through the positive integers.
You want to find the smallest i for which n(W/i) >= N, and similarly the smallest j for which n(H/j) >= N. Call these smallest values i_min and j_min. Then the largest of W/i_min and H/j_min is the s that you want.
I.e. s_max = max(W/i_min,H/j_min)
To find i_min and j_min, just do a brute force search: for each, start from 1, test, and increment.
In the event that N is very large, it may be distasteful to search the i's and j's starting from 1 (although it is hard to imagine that there will be any noticeable difference in performance). In this case, we can estimate the starting values as follows. First, a ballpark estimate of the area of a tile is W*H/N, corresponding to a side of sqrt(W*H/N). If W/i <= sqrt(W*H/N), then i >= ceil(W*sqrt(N/(W*H))), similarly j >= ceil(H*sqrt(N/(W*H)))
So, rather than start the loops at i=1 and j=1, we can start them at i = ceil(sqrt(N*W/H)) and j = ceil(sqrt(N*H/W)). And the OP suggests that round works better than ceil -- at worst an extra iteration.
Here's the algorithm spelled out in C++:
#include <math.h>
#include <algorithm>

// find optimal (largest) tile size for which
// at least N tiles fit in WxH rectangle
double optimal_size (double W, double H, int N)
{
    int i_min, j_min ; // minimum values for which you get at least N tiles
    for (int i=round(sqrt(N*W/H)) ; ; i++) {
        if (i*floor(H*i/W) >= N) {
            i_min = i ;
            break ;
        }
    }
    for (int j=round(sqrt(N*H/W)) ; ; j++) {
        if (floor(W*j/H)*j >= N) {
            j_min = j ;
            break ;
        }
    }
    return std::max (W/i_min, H/j_min) ;
}
The above is written for clarity. The code can be tightened up considerably as follows:
double optimal_size (double W, double H, int N)
{
    int i,j ;
    for (i = round(sqrt(N*W/H)) ; i*floor(H*i/W) < N ; i++){}
    for (j = round(sqrt(N*H/W)) ; floor(W*j/H)*j < N ; j++){}
    return std::max (W/i, H/j) ;
}
I believe this can be solved as a constrained minimisation problem, which requires some basic calculus.
Definitions:
a, l -> rectangle sides
k -> number of squares
s -> side of the squares
You have to minimise the function:
f[s]:= a * l/s^2 - k
subject to the constraints:
IntegerPart[a/s] IntegerPart[l/s] - k >= 0
s > 0
I programmed a little Mathematica function to do the trick:
f[a_, l_, k_] := NMinimize[{a l/s^2 - k,
                            IntegerPart[a/s] IntegerPart[l/s] - k >= 0,
                            s > 0},
                           {s}]
Easy to read since the equations are the same as above.
Using this function I made up a table for allocating 6 squares; as far as I can see, the results are correct.
As I said, you may use a standard calculus package for your environment, or you may also develop your own minimisation algorithm and programs. Ring the bell if you decide for the last option and I'll provide a few good pointers.
HTH!
Edit
Just for fun I made a plot with the results.
And for 31 tiles:
Edit 2: Characteristic Parameters
The problem has three characteristic parameters:
The Resulting Size of the tiles
The Number of Tiles
The ratio l/a of the enclosing rectangle
Perhaps the last one may result somewhat surprising, but it is easy to understand: if you have a problem with a 7x5 rectangle and 6 tiles to place, looking in the above table, the size of the squares will be 2.33. Now, if you have a 70x50 rectangle, obviously the resulting tiles will be 23.33, scaling isometrically with the problem.
So, we can take those three parameters and construct a 3D plot of their relationship, and eventually match the curve with some function easier to calculate (using least squares for example or computing iso-value regions).
Anyway, the resulting scaled plot is:
I realize this is an old thread, but I recently solved this problem in a way that I think is efficient and always gives the correct answer. It is designed to maintain a given aspect ratio. If you wish for the children (buttons in this case) to be square, just use an aspect ratio of 1. I am currently using this algorithm in a few places and it works great.
double VerticalScale; // for the vertical scalar: uses the lowbound number of columns
double HorizontalScale;// horizontal scalar: uses the highbound number of columns
double numColumns; // the exact number of columns that would maximize area
double highNumRows; // number of rows calculated using the upper bound columns
double lowNumRows; // number of rows calculated using the lower bound columns
double lowBoundColumns; // floor value of the estimated number of columns found
double highBoundColumns; // ceiling value of the the estimated number of columns found
Size rectangleSize = new Size(); // rectangle size will be used as a default value that is the exact aspect ratio desired.
//
// Aspect Ratio = h / w
// where h is the height of the child and w is the width
//
// the numerator will be the aspect ratio and the denominator will always be one
// if you want it to be square just use an aspect ratio of 1
rectangleSize.Width = desiredAspectRatio;
rectangleSize.Height = 1;
// estimate of the number of columns using the formula:
// n * W * h
// columns = SquareRoot( ------------- )
// H * w
//
// Where n is the number of items, W is the width of the parent, H is the height of the parent,
// h is the height of the child, and w is the width of the child
numColumns = Math.Sqrt( (numRectangles * rectangleSize.Height * parentSize.Width) / (parentSize.Height * rectangleSize.Width) );
lowBoundColumns = Math.Floor(numColumns);
highBoundColumns = Math.Ceiling(numColumns);
// The number of rows is determined by finding the ceiling of the number of children divided by the columns
lowNumRows = Math.Ceiling(numRectangles / lowBoundColumns);
highNumRows = Math.Ceiling(numRectangles / highBoundColumns);
// Vertical Scale is what you multiply the vertical size of the child to find the expected area if you were to find
// the size of the rectangle by maximizing by rows
//
// H
// Vertical Scale = ----------
// R * h
//
// Where H is the height of the parent, R is the number of rows, and h is the height of the child
//
VerticalScale = parentSize.Height / (lowNumRows * rectangleSize.Height);
// Horizontal Scale is what you multiply the horizontal size of the child by to find the expected area if you were to find
// the size of the rectangle by maximizing by columns
//
//                               W
//     Horizontal Scale = ----------
//                             c * w
//
//Where W is the width of the parent, c is the number of columns, and w is the width of the child
HorizontalScale = parentSize.Width / (highBoundColumns * rectangleSize.Width);
// The Max areas are what is used to determine if we should maximize over rows or columns
// The areas are found by multiplying the scale by the appropriate height or width and finding the area after the scale
//
// Horizontal Area = Sh * w * ( (Sh * w) / A )
//
// where Sh is the horizontal scale, w is the width of the child, and A is the aspect ratio of the child
//
double MaxHorizontalArea = (HorizontalScale * rectangleSize.Width) * ((HorizontalScale * rectangleSize.Width) / desiredAspectRatio);
//
//
// Vertical Area = Sv * h * (Sv * h) * A
// Where Sv is the vertical scale, h is the height of the child, and A is the aspect ratio of the child
//
double MaxVerticalArea = (VerticalScale * rectangleSize.Height) * ((VerticalScale * rectangleSize.Height) * desiredAspectRatio);
if (MaxHorizontalArea >= MaxVerticalArea) // if the horizontal area is greater, then we maximize by columns
{
    // the width is determined by dividing the parent's width by the estimated number of columns
    // this calculation will work for NEARLY all of the horizontal cases with only a few exceptions
    newSize.Width = parentSize.Width / highBoundColumns; // we use highBoundColumns because that's what is used for the Horizontal
    newSize.Height = newSize.Width / desiredAspectRatio; // A = w/h or h = w/A

    // In the cases where this doesn't work it is because the height of the new items is greater than the
    // height of the parents. this only happens when transitioning to putting all the objects into
    // only one row
    if (newSize.Height * Math.Ceiling(numRectangles / highBoundColumns) > parentSize.Height)
    {
        //in this case the best solution is usually to maximize by rows instead
        double newHeight = parentSize.Height / highNumRows;
        double newWidth = newHeight * desiredAspectRatio;

        // However this doesn't always work because in one specific case the number of rows is more than actually needed
        // and the width of the objects end up being smaller than the size of the parent because we don't have enough
        // columns
        if (newWidth * numRectangles < parentSize.Width)
        {
            //When this is the case the best idea is to maximize over columns again but increment the columns by one
            //This takes care of it for most cases for when this happens.
            newWidth = parentSize.Width / Math.Ceiling(numColumns++);
            newHeight = newWidth / desiredAspectRatio;

            // in order to make sure the rectangles don't go over bounds we
            // increment the number of columns until it is under bounds again.
            while (newWidth * numRectangles > parentSize.Width)
            {
                newWidth = parentSize.Width / Math.Ceiling(numColumns++);
                newHeight = newWidth / desiredAspectRatio;
            }

            // however after doing this it is possible to have the height too small.
            // this will only happen if there is one row of objects. so the solution is to make the objects'
            // height equal to the height of their parent
            if (newHeight > parentSize.Height)
            {
                newHeight = parentSize.Height;
                newWidth = newHeight * desiredAspectRatio;
            }
        }

        // if we have a lot of added items occasionally the previous checks will come very close to maximizing both columns and rows
        // what happens in this case is that neither end up maximized
        // because we don't know what set of rows and columns were used to get us to where we are
        // we must recalculate them with the current measurements
        double currentCols = Math.Floor(parentSize.Width / newWidth);
        double currentRows = Math.Ceiling(numRectangles / currentCols);

        // now we check and see if neither the rows nor the columns are maximized
        if ((newWidth * currentCols) < parentSize.Width && (newHeight * Math.Ceiling(numRectangles / currentCols)) < parentSize.Height)
        {
            // maximize by columns first
            newWidth = parentSize.Width / currentCols;
            newHeight = newWidth / desiredAspectRatio;

            // if that overflows the parent's height, maximize by rows instead
            if (newHeight * Math.Ceiling(numRectangles / currentCols) > parentSize.Height)
            {
                newHeight = parentSize.Height / currentRows;
                newWidth = newHeight * desiredAspectRatio;
            }
        }

        // finally we have the height of the objects as maximized using columns
        newSize.Height = newHeight;
        newSize.Width = newWidth;
    }
}
else
{
    // Here we use the vertical scale. We determine the height of the objects based upon
    // the estimated number of rows.
    // This works for all known cases
    newSize.Height = parentSize.Height / lowNumRows;
    newSize.Width = newSize.Height * desiredAspectRatio;
}
At the end of the algorithm 'newSize' holds the appropriate size. This is written in C# but it would be fairly easy to port to other languages.
The first, very rough heuristic is to take
s = floor( sqrt( (X * Y) / N ) )
where s is the button-side-length, X and Y are the width and height of the toolbox, and N is the number of buttons.
In this case, s will be the MAXIMUM possible side-length. It is not necessarily possible to map this set of buttons onto the toolbar, however.
Imagine a toolbar that is 20 units by 1 unit with 5 buttons. The heuristic will give you a side length of 2 (area of 4), with a total covering area of 20. However, half of each button will be outside of the toolbar.
I would take an iterative approach here.
I would check if it is possible to fit all buttons in a single row.
If not, check if it is possible to fit in two rows, and so on.
Say W is the smaller side of the toolbox.
H is the other side.
For each iteration, I would check the best and worst possible cases, in that order. Best case means: at the nth iteration, try buttons of size W/n x W/n. If the resulting height fits within H, we are done. If not, the worst case is to try buttons of size (W/(n+1))+1 x (W/(n+1))+1. If all buttons fit at that size, I would run a bisection between W/n and (W/(n+1))+1. If not, the iteration continues at n+1.
Let n(s) be the number of squares that can fit and s their side. Let W, H be the sides of the rectangle to fill. Then n(s) = floor(W/s) * floor(H/s). This is a monotonically decreasing, piecewise-constant function of s, so you can perform a slight modification of binary search to find the largest s such that n(s) >= N (equivalently, n(s) >= N but n(s + eps) < N). Start with an upper and a lower bound on s: u = min(W, H) and l = floor(min(W, H)/N), then compute t = (u + l)/2. If n(t) >= N,
set l = min(W/floor(W/t), H/floor(H/t)); otherwise set u = max(W/floor(W/t), H/floor(H/t)). Stop when u and l stay the same in consecutive iterations.
So it's like binary search, but you exploit the fact that the function is piecewise constant and the change points are when W or H are an exact multiple of s. Nice little problem, thanks for proposing it.
We know that any optimal solution (there may be two) will fill the rectangle either horizontally or vertically. If you found an optimal solution that did not fill the rectangle in one dimension, you could always increase the scale of the tiles to fill one dimension.
Now, any solution that maximizes the surface covered will have an aspect ratio close to the aspect ratio of the rectangle. The aspect ratio of the solution is vertical tile count/horizontal tile count (and the aspect ratio of the rectangle is Y/X).
You can simplify the problem by forcing Y>=X; in other words, if X>Y, transpose the rectangle. This allows you to only think about aspect ratios >= 1, as long as you remember to transpose the solution back.
Once you've calculated the aspect ratio, you want to find solutions to the problem of V/H ~= Y/X, where V is the vertical tile count and H is the horizontal tile count. You will find up to three solutions: the closest V/H to Y/X and V+1, V-1. At that point, just calculate the coverage based on the scale using V and H and take the maximum (there could be more than one).
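A rough sketch of how this candidate search might look (my own fleshing-out of the answer above; in particular H = ceil(N/V) and the coverage formula are assumptions, not something the answer spells out):

import math

def best_tiling(X, Y, N):
    # force Y >= X (transpose the rectangle), as described above
    transposed = X > Y
    if transposed:
        X, Y = Y, X
    # V/H ~= Y/X together with V*H ~= N gives V ~= sqrt(N*Y/X)
    v_guess = math.sqrt(N * Y / X)
    best = None
    for V in sorted({max(1, round(v_guess) + d) for d in (-1, 0, 1)}):
        H = math.ceil(N / V)   # enough columns to hold all N tiles (assumption)
        s = min(X / H, Y / V)  # largest tile side that fits both directions
        covered = N * s * s
        if best is None or covered > best[0]:
            best = (covered, s, V, H)
    covered, s, V, H = best
    if transposed:
        V, H = H, V
    return s, V, H  # tile side, vertical count, horizontal count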