How many squares of size a×a can be packed into a circle of radius R?
I don't need a solution. I just need some kind of a starting idea.
I apologise for writing such a long answer. My approach is to start with a theoretical maximum and a guaranteed minimum; when you attack the problem, you can use these two values to judge how good any algorithm you try is. If you can think of a better minimum, use that instead.
We can define an upper limit for the problem by simply using the area of the circle
Upper Limit = floor((pi * r^2) / L^2)
where L is the side length of the squares you are packing and r is the radius of the circle you are packing them into. We are sure this is an upper limit because (a) we must have a discrete number of boxes and (b) the boxes cannot take up more space than the area of the circle. (A formal proof would go along these lines: assume we had one more box than this; then the total area of the boxes would be greater than the area of the circle.)
So with an upper limit in hand, we can now take any solution that exists for all circles and call it a minimum solution.
So, let's consider a solution that exists for all circles by taking a look at the largest square we can fit inside the circle.
The largest square you can fit inside the circle has all 4 corners on the perimeter and a side length of sqrt(2) * radius (by Pythagoras' theorem: the side is the hypotenuse of a right triangle whose two shorter edges are radii).
So the first thing to note is that if sqrt(2) * radius is less than the side length of your squares, then you cannot fit any squares in the circle, because, after all, this is the largest square you can fit.
Now we can do a straightforward computation to divide this large square into a regular grid of squares of the side length L you specified, which gives us at least one solution to the problem. The number of squares you can fit into one row of this grid is
floor((sqrt(2) * radius) / L)
And so this minimum solution asserts that you can have at least
Lower Limit = floor((sqrt(2) * radius) / L)^2
number of squares inside the circle.
So in case you got lost, all I did was take the largest square I could fit inside the circle and then pack as many squares as possible into a regular grid inside that, to give me at least one solution.
If you get an answer of 0 at this stage, then you cannot fit any squares inside the circle.
Now, armed with a theoretical maximum and an absolute minimum, you can start trying any heuristic algorithm you like for packing squares. A simple algorithm would be to split the circle up into rows and fit as many squares as you can into each row. You can take the minimum as a guideline to ensure that you come up with a better solution, and, if you want to spend more processing power looking for a better solution, you can use the theoretical maximum as a guide to how close you are to the best possible.
And if you care about this, you can work out the maximum and minimum theoretical percentage of cover that the minimum algorithm I described gives you. The largest inscribed square always covers a fixed fraction of the circle: its area is 2r^2 against the circle's pi*r^2, i.e. 2/pi, or about 63.7% of the circle. So the maximum theoretical minimum is about 63.7% cover. The minimum non-trivial (i.e. non-zero) theoretical minimum is when you can only fit 1 square inside the largest square, which happens when the squares you are packing are just over half the side length of the largest inscribed square. That single packed square covers just under 25% of the inner square, which means you get an approximate cover of about 16%.
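As a concrete sketch of the two bounds (Python; the function name is mine):

    import math

    def square_packing_bounds(radius, side):
        # Area-based upper limit and inscribed-square-grid lower limit,
        # as derived above.
        upper = math.floor(math.pi * radius ** 2 / side ** 2)
        per_row = math.floor(math.sqrt(2) * radius / side)   # squares per grid row
        return per_row ** 2, upper                           # (lower, upper)

For example, square_packing_bounds(10, 3) gives a lower limit of 16 and an upper limit of 34.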
Rasterise the circle using something like the midpoint circle algorithm, with one pixel per a×a square. The number of filled pixels is then the number of squares you can fit in the circle. Of course, since you're not actually filling the pixels, just counting them, this should take time proportional to the circumference of the circle, not its area.
You'll have to pick the radius for rasterisation carefully, so that you only count pixels that are strictly inside the circle.
Edit: This may not be exactly correct, as it's possible that applying a sub-pixel offset to the grid could change the result. I'll leave the answer here as it may provide a useful starting point for an exact solution.
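To make the counting concrete, here is a sketch (Python, my naming) that counts grid-aligned squares lying inside the circle, assuming the grid is centred on the circle's centre; per the edit above, a sub-pixel offset to the grid could yield a different count:

    import math

    def count_aligned_squares(R, a):
        # Work in units of the square side: cells are [i, i+1] x [j, j+1].
        r = R / a
        total = 0
        j = 0
        while (j + 1) ** 2 < r * r:
            # In row j of one quadrant, a cell fits iff its far corner
            # (i + 1, j + 1) satisfies (i+1)^2 + (j+1)^2 <= r^2.
            total += math.floor(math.sqrt(r * r - (j + 1) ** 2))
            j += 1
        return 4 * total   # the four quadrants are symmetric

This runs in time proportional to r, matching the circumference claim above.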
You can pack as many squares as you like into a circle. If you doubt this statement, draw a large circle on a piece of paper, then draw a square with side length 10^(-18)m inside it, repeat. When you get near to the boundary of the circle, start drawing squares with side length of 10^(-21)m.
So your first step must be to refine your question and state your problem more accurately.
Just a shot in the dark after a few minutes of thought...
What if you worked with half the circle and doubled the result at the end? I would start with a grid of squares spanning the length of the diameter and the width of the radius, essentially blanketing the semicircle. Then check all 4 corners of each square and make sure their coordinates are within the radius of the circle. This would of course require plotting the circle and squares on some sort of coordinate system or grid.
I hope this makes sense... It's in my head and it seems a bit difficult to articulate :)
EDIT:
After drawing it out, I think this method would work with a little tweaking. I would line up the squares along the diameter, but slide the first one down until it fits. Set that one in place and start lining up squares next to it until they no longer fit. Move out to the edge of this line of squares and repeat the same steps until your rows of squares reach the radius.
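A brute-force sketch of this corner test on the full circle (Python, hypothetical helper; the half-circle trick would just halve the loop):

    def count_by_corner_check(radius, side):
        # Keep a grid cell only if all four of its corners lie inside the circle.
        inside = lambda x, y: x * x + y * y <= radius * radius
        n = int(radius // side) + 1
        count = 0
        for i in range(-n, n):
            for j in range(-n, n):
                x, y = i * side, j * side
                if all(inside(x + dx, y + dy)
                       for dx in (0, side) for dy in (0, side)):
                    count += 1
        return count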
Related
I've been working for some time on an XNA roguelike game, and I can't get my head around the following problem: developing an algorithm to divide a matrix of non-binary values into the fewest rectangles grouping equal values.
Example: given the following matrix
  01234567
0 ---##*##
1 ---##*##
2 --------
The algorithm should return:
3x3 rectangle of '-'s starting at (0,0)
2x2 rectangle of '#'s starting at (3, 0)
1x2 rectangle of '*'s starting at (5, 0)
2x2 rectangle of '#'s starting at (6, 0)
5x1 rectangle of '-'s starting at (3, 2)
Why am I doing this: I have a pretty big dungeon map, approximately 500x500 tiles. If I were to individually call the "Draw" method for each tile's sprite, my FPS would be far too low. It is possible to optimize this by grouping same-textured tiles and applying texture repetition to them, which dramatically decreases the number of GPU draw calls. For example, if my map were the matrix above, instead of calling draw 24 times (once per tile), I'd call it only 5 times.
I've looked at some algorithms which can give you the biggest rectangle of a type inside a given binary matrix, but that doesn't fit my problem.
Thanks in advance!
You can use breadth-first search to separate each area of different tile type.
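A minimal flood-fill sketch of that separation step (Python, names mine):

    from collections import deque

    def regions(grid):
        # Split the map into connected regions of equal tile type.
        h, w = len(grid), len(grid[0])
        seen = [[False] * w for _ in range(h)]
        out = []
        for sy in range(h):
            for sx in range(w):
                if seen[sy][sx]:
                    continue
                t, cells = grid[sy][sx], []
                q = deque([(sx, sy)])
                seen[sy][sx] = True
                while q:
                    x, y = q.popleft()
                    cells.append((x, y))
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and not seen[ny][nx] and grid[ny][nx] == t):
                            seen[ny][nx] = True
                            q.append((nx, ny))
                out.append(cells)
        return out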
Picking a partitioning within the individual shapes is an NP-hard problem (see https://en.wikipedia.org/wiki/Graph_partition), so you can't expect an efficient solution that guarantees the minimum number of rectangles. However, if you don't mind an extra rectangle or two per shape, and your shapes are relatively small, you can come up with algorithms that split a shape into a number of rectangles close to the minimum.
An off-the-top-of-my-head guess for something that could work: pick the tile with the most connected neighbours and grow a rectangle from it using a recursive algorithm that maximizes the size. Remove the resulting rectangle from the shape, then repeat until no tiles remain outside a rectangle. Again, this won't produce perfect results; there are inputs on which it returns more than the minimum number of rectangles, but it's an easy-to-implement ballpark solution. With a little more effort I'm sure you can find better heuristics and get better results too.
One possible building block is a routine that checks, given two points, whether the rectangle formed by using those points as opposite corners is all of the same type. A fast (but unreliable) test can be based on mapping each type to a large random number and working with sums of these numbers modulo a large prime. Take one of the numbers within the rectangle. If the sum of the numbers within the rectangle is the size of the rectangle times that one sampled number, assume that all of the numbers in the rectangle are the same.
In one dimension we can work out all of the cumulative sums a, a+b, a+b+c, a+b+c+d,... in time O(N) and then, for any two points, work out the sum for the interval between them by subtracting cumulative sums: b+c+d = a+b+c+d - a. In two dimensions, we can use cumulative sums to work out, for each point, the sum of all of the numbers from positions which have x and y co-ordinates no greater than the (x, y) coordinate of that position. For any rectangle we can work out the sum of the numbers within that rectangle by working out A-B-C+D where A,B,C,D are two-dimensional cumulative sums.
So with pre-processing O(N) we can work out a table which allows us to compute the sum of the numbers within a rectangle specified by its opposite corners in time O(1). Unless we are very unlucky, checking this sum against the size of the rectangle times a number extracted from within the rectangle will tell us whether the rectangle is all of the same type.
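In code, the prefix table and the probabilistic check might look like this (Python sketch; helper names are mine):

    import random

    P = (1 << 61) - 1   # large prime for the modular sums

    def build_prefix(grid):
        # grid: list of equal-length rows of tile types.
        code = {}                           # tile type -> large random number
        for row in grid:
            for t in row:
                code.setdefault(t, random.randrange(P))
        h, w = len(grid), len(grid[0])
        # pre[y][x] = sum of codes over rows < y and columns < x, mod P
        pre = [[0] * (w + 1) for _ in range(h + 1)]
        for y in range(h):
            for x in range(w):
                pre[y + 1][x + 1] = (pre[y][x + 1] + pre[y + 1][x]
                                     - pre[y][x] + code[grid[y][x]]) % P
        return pre, code

    def probably_uniform(grid, pre, code, x0, y0, x1, y1):
        # Inclusive corners; the A - B - C + D rectangle sum from above.
        s = (pre[y1 + 1][x1 + 1] - pre[y0][x1 + 1]
             - pre[y1 + 1][x0] + pre[y0][x0]) % P
        area = (x1 - x0 + 1) * (y1 - y0 + 1)
        return s == area * code[grid[y0][x0]] % P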
Based on this, repeatedly start with a random point not covered. Take a point just to its left and move that point left as long as the interval between the two points is of the same type. Then move that point up as long as the rectangle formed by the two points is of the same type. Now move the first point to the right and down as long as the rectangle formed by the two points is of the same type. Now you think you have a large rectangle covering the original point. Check it. In the unlikely event that it is not all of the same type, add that rectangle to a list of "fooled me" rectangles you can check against in future and try again. If it is all of the same type, count that as one extracted rectangle and mark all of the points in it as covered. Continue until all points are covered.
This is a greedy algorithm that makes no attempt at producing the optimal solution, but it should be reasonably fast - the most expensive part is checking that the rectangle really is all of the same type, and - assuming you pass that test - each cell checked is also a cell covered so the total cost for the whole process should be O(N) where N is the number of cells of input data.
I am interested in finding the diameter of two point sets in 128 dimensions. The first has 10,000 points and the second 1,000,000. For that reason I would like to do something better than the naive approach, which takes O(n²). The algorithm should be able to handle any number of points and dimensions, but I am currently most interested in these two particular data sets.
I am far more interested in speed than in accuracy; thus, based on this, I would find the (approximate) bounding box of the point set by computing the min and max value per coordinate, in O(n*d) time. Then, if I find the diameter of this box, the problem is solved.
In the 3D case, I could find the diagonal of one face, since I know its two edges, and then apply the Pythagorean theorem with the edge perpendicular to that face. I am not sure about this, however, and I can't see how to generalize it to d dimensions.
An interesting answer can be found here, but it seems to be specific for 3 dimensions and I want a method for d dimensions.
Interesting paper: On computing the diameter of a point set in high dimensional Euclidean space. Link. However, implementing the algorithm seems like too much for me at this stage.
The classic 2-approximation algorithm for this problem, with running time O(nd), is to choose an arbitrary point and then return the maximum distance to another point. The diameter is no smaller than this value and no larger than twice this value.
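In code (Python, assuming Euclidean distance):

    import math

    def diameter_2approx(points):
        # Max distance from one arbitrary point: the true diameter d
        # satisfies est <= d <= 2 * est by the triangle inequality.
        x = points[0]
        return max(math.dist(x, q) for q in points)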
I would like to add a comment, but I don't have enough reputation for that...
I just want to warn other readers that the "bounding box" solution is very inaccurate. Take for example the Euclidean ball of radius one. This set has diameter two, but its bounding box is [-1, 1]^d, which has diameter twice the square root of d. For d = 128, this is already a very bad approximation.
For a crude estimate, I would stay with David Eisenstat's answer.
There is a precision-based algorithm which performs very well in any dimension, based on computing the extent of an axial bounding box.
The idea is that it's possible to find lower and upper bounds on the diameter from the axial bounding box lengths, since the partial derivatives of the length function are bounded and depend on the angle between the axes.
The limit of the local maxima derivatives between two axes in 2D space can be computed as:
sin(a/2)*(1 + tan(a/2))
That means that, for example, for a 90-degree angle between axes the bound is about 1.42 (sqrt(2)),
which reduces to a/2 as a approaches 0, so the upper bound is proportional to the angle.
For a multidimensional case the formula varies slightly, but still it's easy to compute.
So, the search for such local maxima converges in logarithmic time.
The good news is that we can run the search for these local maxima in parallel.
Also, we can prune both the regions of the search, based on the best result achieved so far, and the points themselves, which are below the lower limit of the search in the worst region.
The worst case of the algorithm is where all of the points are placed on the surface of a sphere.
This can be further improved: when we detect a local search which operates on just a few points, we switch to brute force for that particular axis. This works fast, because we need only the points which are subject to that particular local search, which can be determined as the points actually bounded by two opposite spherical cones of a particular angle sharing the same axis.
It's hard to figure out the big-O complexity, because it depends on the desired precision and the distribution of the points (bad when most of the points are on a sphere's surface).
The algorithm I use is this:
1. Set the initial angle a = pi/2.
2. Take one axis for each dimension. The angle and the axes form the initial 'bucket'.
3. For each axis, compute the span on that axis by projecting all the points onto the axis and finding the min and max of the coordinates on the axis.
4. Compute the upper and lower bounds of the diameter of interest. This is based on the formula sin(a/2)*(1 + tan(a/2)), multiplied by an asymmetry coefficient computed from the lengths of the current axis projections.
5. For the next step, discard all of the points which fall below the lower bound in every dimension at the same time.
6. For each axis, if the number of points above the upper bound is less than some reasonable amount (determined experimentally), then compute by brute force (N^2) on the set of points in question, adjust the lower bound, and retire that axis for the next step.
7. For the next step, discard all of the axes which have all of their points below the lower bound.
8. If the precision is satisfactory ((upper bound - lower bound) < epsilon), then return the upper bound as the result.
9. For each of the surviving axes, there is a virtual cone on that axis (actually two opposite cones) which covers some area on a virtual sphere enclosing a face of the cube. If I'm not mistaken, its angle would be a * sqrt(2). Set the new angle to a / sqrt(2). Create a whole new bucket of axes (2 * number of dimensions), so the new cone areas cover the initial cone area. This is the hard part for me, as I don't have enough imagination for the n > 3-dimensional case.
10. Continue from step 3.
You can parallelize the procedure, synchronizing the limits computed so far, across steps 5 through 7.
I'm going to summarize the algorithm proposed by Timothy Shields.
1. Pick a random point x.
2. Pick the point y farthest from x.
3. If not done, let x = y and go to step 2.
The more times you repeat, the more accurate the result will be... ??
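A quick sketch of that loop (Python, hypothetical helper, Euclidean distance assumed):

    import math

    def iterated_farthest(points, rounds=5):
        # Repeatedly hop to the farthest point; the estimate only grows,
        # but it can stall at a local maximum (see the edit below).
        x = points[0]
        best = 0.0
        for _ in range(rounds):
            y = max(points, key=lambda q: math.dist(x, q))
            best = max(best, math.dist(x, y))
            x = y
        return best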
EDIT: actually this algorithm is not very good. Think about four points ABCD forming a slightly irregular rectangle. There are two local maxima, the diagonals AC and BD, separated by a sizable valley. This algorithm will get stuck at one or the other, 50/50. If AC is slightly longer than BD, you'll get the wrong answer 50% of the time no matter how many times you iterate. Other near-regular polygons have the same issue, and in higher dimensions it is even worse.
I am trying to figure out whether, given these constraints, the regular 2D packing problem can be simplified. You have n regular s-sided polygons, with s between 3 and 12. All of them have the same side length. We need to minimise the area of the bounding square.
I would think that having all polygons regular with the same side length makes the packing easier, since some configurations will always fit perfectly next to each other. Though I am not sure whether this property is of any use, since a local minimum might not translate to a global minimum.
From your description, the polygons are regular, with all sides having the same length m.
This means that every polygon fits perfectly into its circumscribed circle of radius r, which in turn fits into a square cell of side 2r.
So an easy solution is to place the N polygons in a grid of such cells, one per cell, giving a bounding square of side M * 2r with M * M >= N. This is not an optimal solution, but it is simple and always valid.
Here is an illustration that explains it:
First, knowing that all polygon sides have length m:
The polygon fits perfectly into a circle of radius r (its circumradius).
That circle fits perfectly into a square cell of side 2r.
So we finally merge the cells into one big square by tiling them in an M x M grid, where M * M >= N.
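As a sketch of the arithmetic (Python, my naming; for a mix of polygon sizes you would use the largest circumradius, which for a fixed side length is the s = 12 one):

    import math

    def bounding_square_side(n, s, m):
        # Circumradius of a regular s-gon with side m, then an M x M grid
        # of 2r x 2r cells with M * M >= n.
        r = m / (2 * math.sin(math.pi / s))
        M = math.ceil(math.sqrt(n))
        return M * 2 * r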
I'm not sure if there's an algorithm that can solve this.
A given number of rectangles are placed side by side horizontally from left to right to form a shape. You are given the width and height of each.
How would you determine the minimum number of rectangles needed to cover the whole shape?
i.e How would you redraw this shape using as few rectangles as possible?
I can only think of trying to squeeze in as many big rectangles as I can, but that seems inefficient.
Any ideas?
Edit:
You are given a number n, and then n sizes:
2
1 3
2 5
The above describes two rectangles of sizes 1x3 and 2x5 placed next to each other.
I'm wondering how many rectangles, at minimum, I would need to recreate that shape, given that rectangles cannot overlap.
Since your rectangles are all aligned on a common base line, the problem becomes easier. You can simply create rectangles from the bottom up. Each time you do, it creates new shapes to check. The good thing is that all your new shapes are also base-aligned, so you can just repeat as necessary.
First, you want to find the minimum height rectangle. Make a rectangle that height, with the width as total width for the shape. Cut that much off the bottom of the shape.
You'll be left with multiple shapes. For each one, do the same thing.
Finding the minimum-height rectangle should be O(n). Since you do that for each group, the worst case is all different heights, which totals out to O(n²).
For example:
In the image, the minimum for each shape is highlighted green. The resulting rectangle is blue, to the right. The total number of rectangles needed is the total number of blue ones in the image, 7.
Note that I'm explaining this as if these were physical rectangles. In code, you can completely do away with the width, since it doesn't matter in the least unless you want to output the rectangles rather than just counting how many it takes.
You can also reduce the "make a rectangle and cut it from the shape" step to simply subtracting the strip height from each rectangle that makes up that shape/subshape. Each contiguous section with positive height afterwards makes up a new subshape; see the sketch below.
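A recursive sketch of the counting version (Python, my naming; heights only, since widths are irrelevant to the count):

    def count_rects(heights):
        # One strip at the minimum height, then recurse on each contiguous
        # run of leftover positive heights.
        if not heights:
            return 0
        m = min(heights)
        total = 1                                   # the strip cut off the bottom
        run = []
        for h in [h - m for h in heights] + [0]:    # sentinel flushes the last run
            if h > 0:
                run.append(h)
            else:
                if run:
                    total += count_rects(run)
                run = []
        return total

For the 1x3 and 2x5 example above, count_rects([3, 5]) returns 2.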
If you look for an overview on algorithms for the general problem, Rectangular Decomposition of Binary Images (article by Tomas Suk, Cyril Höschl, and Jan Flusser) might be helpful. It compares different approaches: row methods, quadtree, largest inscribed block, transformation- and graph-based methods.
A juicy figure (from page 11) as an appetizer:
Figure 5: (a) The binary convolution kernel used in the experiment. (b) Its 10 blocks of GBD decomposition.
I have a set of N positive numbers, and a rectangle of dimensions X and Y that I need to partition into N smaller rectangles such that:
the surface area of each smaller rectangle is proportional to its corresponding number in the given set
all space of big rectangle is occupied and there is no leftover space between smaller rectangles
each small rectangle should be shaped as close to square as feasible
the execution time should be reasonably small
I need directions on this. Do you know of such an algorithm described on the web? Do you have any ideas (pseudo-code is fine)?
Thanks.
What you describe sounds like a treemap:
Treemaps display hierarchical (tree-structured) data as a set of nested rectangles. Each branch of the tree is given a rectangle, which is then tiled with smaller rectangles representing sub-branches. A leaf node's rectangle has an area proportional to a specified dimension on the data.
That Wikipedia page links to a page by Ben Shneiderman, which gives a nice overview and links to Java implementations:
Then while puzzling about this in the faculty lounge, I had the Aha! experience of splitting the screen into rectangles in alternating horizontal and vertical directions as you traverse down the levels. This recursive algorithm seemed attractive, but it took me a few days to convince myself that it would always work and to write a six line algorithm.
Wikipedia also links to "Squarified Treemaps" by Mark Bruls, Kees Huizing and Jarke J. van Wijk (PDF), which presents one possible algorithm:
How can we tessellate a rectangle recursively into rectangles, such that their aspect ratios (e.g. max(height/width, width/height)) approach 1 as closely as possible? The number of all possible tessellations is very large. This problem falls in the category of NP-hard problems. However, for our application we do not need the optimal solution; a good solution that can be computed in a short time is required.
You do not mention any recursion in the question, so your situation might be just one level of the treemap; but since the algorithms work on one level at a time, this should be no problem.
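For a single level, the slice-and-dice idea reduces to proportional strips. A minimal sketch (Python, my naming; a recursive version would flip the direction at each level):

    def slice_level(x, y, w, h, weights, horizontal=True):
        # One treemap level: one strip per weight, areas proportional to weights.
        total = sum(weights)
        rects, offset = [], 0.0
        for wt in weights:
            frac = wt / total
            if horizontal:                   # side-by-side vertical strips
                rects.append((x + offset, y, w * frac, h))
                offset += w * frac
            else:                            # stacked horizontal strips
                rects.append((x, y + offset, w, h * frac))
                offset += h * frac
        return rects

This satisfies the proportional-area and no-leftover-space requirements; the squarified variant above is what improves the aspect ratios.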
I have been working on something similar. I'm prioritizing simplicity over getting the aspect ratios as close to 1 as possible. This should (in theory) work; I tested it on paper for some values of N between 1 and 10.
N = total number of rects to create,
Q = max(width, height) / min(width, height),
R = N / Q
If Q > N/2, split the rect into N parts along its longest side.
If Q <= N/2, split the rect into R (rounded to an int) parts along its shortest side.
Then split the subrects into N/R (rounded down to an int) parts along their shortest sides.
Subtract the rounded-down value from the result of the next subrect's division. Repeat for all subrects, or until the required number of rects has been created; see the sketch below.
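Here is one literal reading of those steps (Python; split_into_n is my naming, and the remainder handling is my interpretation of the last step):

    def split_into_n(width, height, n):
        # Split a width x height rect into n cells with the Q/R heuristic.
        # Cell areas are equal only when n divides evenly across the strips.
        q = max(width, height) / min(width, height)
        if q > n / 2:
            strips, base = 1, n              # n parts along the longest side
        else:
            strips = max(1, round(n / q))    # R parts along the shortest side
            base = n // strips               # N/R per strip, rounded down
        extra = n - strips * base            # leftover cells, one per strip
        cells = []
        for i in range(strips):
            cols = base + (1 if i < extra else 0)
            if width >= height:              # strips stacked along the height
                sh = height / strips
                for j in range(cols):
                    cells.append((j * width / cols, i * sh, width / cols, sh))
            else:                            # strips stacked along the width
                sw = width / strips
                for j in range(cols):
                    cells.append((i * sw, j * height / cols, sw, height / cols))
        return cells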