Surface Area Heuristic (SAH) kd-tree of triangles - flat cells - algorithm

I've implemented a SAH kd-tree based on the paper "On building fast kd-Trees for Ray Tracing, and on doing that in O(N log N)" by Wald and Havran. Note that I haven't implemented their splicing-and-merging suggestion right at the end, which speeds up the building of the tree; just the SAH part.
I'm testing the algorithm with an axis-aligned cube where each face is split into two triangles, so I have N = 12 triangles in total. The bottom-left vertex (i.e. the one nearest the axes) is actually at the origin.
Face     Triangles
------   ---------
Front:   0, 1
Left:    6, 7
Right:   2, 3
Top:     4, 5
Bottom:  10, 11
Back:    8, 9
Assume a node traversal cost Ct = 1.0 and an intersection cost Ci = 10.0. We first find the cost of not splitting, which is Cns = N * Ci = 12 * 10.0 = 120.0.
Now we go through each axis in turn and do an incremental sweep through the candidate split planes to see if the cost of splitting is cheaper. The first split plane is p = <x, 0>, i.e. the plane perpendicular to the x axis at x = 0. We have Nl = 0, Np = 2 and Nr = 10 (that is, the number of triangles to the left of, in, and to the right of the plane). The two triangles in the plane are numbers 6 and 7 (the left face). All others are to the right.
The SAH(p,V,Nl,Nr,Np) function is now executed. It takes the split plane, the voxel V to be split, and the numbers of left/right/planar triangles. It computes the probability of hitting the left (flat) voxel as Pl = SA(Vl)/SA(V) = 50/150 = 1/3, and the right probability as Pr = SA(Vr)/SA(V) = 150/150 = 1. It then runs the cost function twice, first with the planar triangles on the left and then with the planar triangles on the right, to get Cl and Cr respectively.
The cost function C(Pl,Pr,Nl,Nr) returns bias * (Ct + Ci * (Pl * Nl + Pr * Nr)).
Cl: cost with planar triangles on the left (Nl = 2, Nr = 10)
bias = 1, since neither the left nor the right voxel is empty:
Cl = 1 * (1 + 10 * (1/3 * 2 + 1 * 10)) = 107.666
Cr: cost with planar triangles on the right (Nl = 0, Nr = 12)
bias = 0.8, since the empty-cell bias comes into play:
Cr = 0.8 * (1 + 10 * (1/3 * 0 + 1 * 12)) = 96.8
The algorithm determines that Cr = 96.8 is better than splitting the two triangles off into a flat cell Cl = 107.666 and also better than not splitting the voxel at all Cns = 120.0. No other candidate splits are found to be cheaper. We therefore split into an empty left child, and a right child containing all the triangles. When we recurse into the right child to continue the tree building, we perform exactly the same steps as above. It's only because of a max depth termination criterion that this doesn't cause a stack overflow. The resultant tree is very deep.
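For concreteness, here is a minimal Python sketch of that cost evaluation (the function and variable names are mine, and the surface areas are hard-coded for this example); it reproduces Cns = 120.0, Cl = 107.666... and Cr = 96.8:

def sah_cost(Pl, Pr, Nl, Nr, Ct=1.0, Ci=10.0, empty_bias=0.8):
    # empty-cell bias exactly as described in the question:
    # applied whenever one side has zero triangles
    bias = empty_bias if (Nl == 0 or Nr == 0) else 1.0
    return bias * (Ct + Ci * (Pl * Nl + Pr * Nr))

Pl, Pr = 50.0 / 150.0, 150.0 / 150.0   # flat left voxel, whole right voxel
print(12 * 10.0)                       # Cns = 120.0
print(sah_cost(Pl, Pr, 2, 10))         # Cl = 107.666...
print(sah_cost(Pl, Pr, 0, 12))         # Cr = 96.8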
The paper claims to have thought about this sort of thing:
During clipping, special care has to be taken to correctly handle
special cases like “flat” (i.e., zero-volume) cells, or cases where
numerical inaccuracies may occur (e.g., for cells that are very thin
compared to the size of the triangle). For example, we must make sure
not to “clip away” triangles lying in a flat cell. Note that such
cases are not rare exceptions, but are in fact encouraged by the SAH,
as they often produce minimal expected cost.
This local approximation can easily get stuck in a local minimum: As
the local greedy SAH overestimates CV (p), it might stop subdivision
even if the correct cost would have indicated further subdivision. In
particular, the local approximation can lead to premature termination
for voxels that require splitting off flat cells on the sides: many
scenes (in particular, architectural ones) contain geometry in the
form of axis-aligned boxes (a light fixture, a table leg or table top,
. . . ), in which case the sides have to be “shaved off” until the
empty interior is exposed. For wrongly chosen parameters, or when
using cost functions different from the ones we use (in particular,
ones in which a constant cost is added to the leaf estimate), the
recursion can terminate prematurely. Though this premature exit
could also be avoided in a hardcoded way—e.g., only performing the
automatic termination test for non-flat cells—we propose to follow our
formulas, in which case no premature exit will happen.
Am I doing anything wrong? How can I correct this?

Probably a bit late by now :-/, but just in case: I think the thing that's "wrong" here is the concept of the "empty cell" bias: what you want to encourage (by giving it a less-than-one bias) is cutting away empty space, but that implies that the cell being cut away actually has a volume. Apparently, your implementation only checks whether one side has zero triangles, and thus applies the bias even if no space is cut away at all.
Fix: apply the bias only if one side has count==0 and non-zero width.
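Under the same assumptions as the sketch above, the corrected test might look like this (the width check is the key addition):

def split_bias(Nl, Nr, left_width, right_width, empty_bias=0.8):
    # reward cutting away empty *space*: the empty side must have
    # zero triangles AND a non-zero extent along the split axis
    if (Nl == 0 and left_width > 0.0) or (Nr == 0 and right_width > 0.0):
        return empty_bias
    return 1.0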

Related

Cover a polygonal line using the fewest given rectangles while keeping its continuity

Given a list of points forming a polygonal line, and both height and width of a rectangle, how can I find the number and positions of all rectangles needed to cover all the points?
The rectangles should be rotated and may overlap, but must follow the path of the polyline (A rectangle may contain multiple segments of the line, but each rectangle must contain a segment that is contiguous with the previous one.)
Having the intersections fall on the shorter side of the rectangle, when possible, would be much appreciated.
All the solutions I found so far were not clean, here is the result I get:
You should see that it gives a good result in near-flat cases, but overlaps too much on sharp curves. One rectangle could clearly be removed if the previous ones were offset.
Currently, I place a rectangle centered at width/2 along the line, orient it using convex hull and modified rotating-calipers algorithms, and reiterate starting at the intersection point of the previous rectangle and the line.
You may observe that I took inspiration from the minimum oriented bounding rectangle algorithm for the orientation, but it includes neither the cutting aspect nor the fixed size.
Thanks for your help!
I modified k-means to solve this. It's not fast, it's not optimal, it's not guaranteed, but (IMHO) it's a good start.
There are two important modifications:
1- The distance measure
I used a Chebyshev-distance-inspired measure to see how far points are from each rectangle. To find distance from points to each rectangle, first I transformed all points to a new coordinate system, shifted to center of rectangle and rotated to its direction:
Then I used these transformed points to calculate distance:
d = max(2*abs(X)/w, 2*abs(Y)/h);
It will give equal values for all points that have same distance from each side of rectangle. The result will be less than 1.0 for points that lie inside rectangle. Now we can classify points to their closest rectangle.
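A minimal sketch of that distance measure in Python (assuming numpy, with points as an N×2 array; the function name is mine):

import numpy as np

def rect_distance(points, center, angle, w, h):
    # transform points into the rectangle's local frame:
    # shift to its center, then rotate by -angle
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, s], [-s, c]])
    local = (points - center) @ R.T
    # exactly 1.0 on the rectangle's boundary, < 1.0 inside
    return np.maximum(2 * np.abs(local[:, 0]) / w,
                      2 * np.abs(local[:, 1]) / h)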
2- Strategy for updating cluster centers
Each cluster center is a combination of C, the center of the rectangle, and a, its rotation angle. At each iteration, a new set of points is assigned to a cluster. Here we have to find C and a so that the rectangle covers the maximum possible number of points. I don't know whether there is an analytical solution for that, but I used a statistical approach: I updated C using a weighted average of the points, and used the direction of the first principal component of the points to update a. I used the results of the proposed distance, raised to the power 500, as the weight of each point in the weighted average. That moves the rectangle towards points located outside of it.
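A rough sketch of that update step (reusing rect_distance from above; normalizing by the maximum distance before applying the large power is my addition to avoid overflow, and the principal component here is computed unweighted):

def update_cluster(points, center, angle, w, h, power=500):
    d = rect_distance(points, center, angle, w, h)
    wts = (d / d.max()) ** power            # huge power: outside points dominate
    wts /= wts.sum()
    center = (wts[:, None] * points).sum(axis=0)
    X = points - points.mean(axis=0)        # direction of first principal component
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    angle = np.arctan2(Vt[0, 1], Vt[0, 0])
    return center, angle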
How to Find K
Initialize it to 1 and increase it until all distances from points to their corresponding rectangles become less than 1.0, meaning all points are inside a rectangle.
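In outline (fit_rectangles is a hypothetical wrapper around the modified k-means above):

K = 1
while True:
    centers, angles = fit_rectangles(points, K, w, h)  # hypothetical k-means wrapper
    d = np.array([rect_distance(points, c, a, w, h)
                  for c, a in zip(centers, angles)])
    if d.min(axis=0).max() < 1.0:   # every point inside its nearest rectangle
        break
    K += 1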
The results
Iterations 0, 10, 20, 30, 40, and 50 of updating cluster centers (rectangles):
Solution for test case 1:
Trying Ks: 2, 4, 6, 8, 10, and 12 for complete coverage:
Solution for test case 2:
P.S.: I used parts of Chalous Road as data. It was fun downloading it from Google Maps. Then I used the technique described here to sample a set of equally spaced points.
It’s a little late and you’ve probably figured this out. But, I was free today and worked on the constraint reflected in your last edit (continuity of segments). As I said before in the comments, I suggest using a greedy algorithm. It’s composed of two parts:
A search algorithm that looks for the furthermost point from an initial point (I used binary search), such that all points between them lie inside a rectangle of the given w and h.
A repeated loop that finds best rectangle at each step and advances the initial point.
Their pseudo code is as follows, respectively:
function getBestMBR( P, iFirst, w, h )
    nP = length(P);
    iStart = iFirst;
    iEnd = nP;
    while iStart <= iEnd
        m = floor((iStart + iEnd) / 2);
        MBR = getMBR(P[iFirst->m]);
        if (MBR.w < w) & (MBR.h < h)    {*}
            iStart = m + 1;
            iLast = m;
            bestMBR = MBR;
        else
            iEnd = m - 1;
        end
    end
    return bestMBR, iLast;
end
function getRectList( P, w, h )
    nP = length(P);
    rects = [];
    iFirst = 1;
    iLast = iFirst;
    while iLast < nP
        [bestMBR, iLast] = getBestMBR(P, iFirst, w, h);
        rects.add(bestMBR.x, bestMBR.y, bestMBR.a);
        iFirst = iLast;
    end
    return rects;
end
Solution for test case 1:
Solution for test case 2:
Just keep in mind that it's not meant to find the optimal solution; it finds a sub-optimal one in a reasonable time. It's greedy, after all.
Another point is that you can improve this a little in order to decrease the number of rectangles. As you can see on the line marked with (*), I kept the resulting rectangle in the direction of the MBR (minimum bounding rectangle), even though you can cover larger MBRs with rectangles of the same w and h if you rotate the rectangle. (1) (2)

How can you iterate linearly through a 3D grid?

Assume we have a 3D grid that spans some 3D space. This grid is made out of cubes, the cubes need not have integer length, they can have any possible floating point length.
Our goal is, given a point and a direction, to walk linearly along the ray and check each cube in our path once and exactly once.
So if this was just a regular 3D array and the direction is say in the X direction, starting at position (1,2,0) the algorithm would be:
for(i in number of cubes)
{
grid[1+i][2][0]
}
But of course the origin and the direction are arbitrary and given as floating-point numbers, so it's not as easy as iterating through one dimension of a 3D array. The fact that the side lengths of the cubes are also arbitrary floats makes it slightly harder as well.
Assume that your cube side lengths are s = (sx, sy, sz), your ray direction is d = (dx, dy, dz), and your starting point is p = (px, py, pz). Then, the ray that you want to traverse is r(t) = p + t * d, where t is an arbitrary positive number.
Let's focus on a single dimension. If you are currently at the lower boundary of a cube, then the step length dt that you need to make on your ray in order to get to the upper boundary of the cube is: dt = s / d. And we can calculate this step length for each of the three dimensions, i.e. dt is also a 3D vector.
Now, the idea is as follows: Find the cell where the ray's starting point lies in and find the parameter values t where the first intersection with the grid occurs per dimension. Then, you can incrementally find the parameter values where you switch from one cube to the next for each dimension. Sort the changes by the respective t value and just iterate.
Some more details:
cell = floor((p - gridLowerBound) / s) <-- the / is component-wise division, and floor is applied component-wise to the result
I will only cover the case where the direction is positive. There are some minor changes if you go in the negative direction but I am sure that you can do these.
Find the first intersections per dimension (nextIntersection is a 3D vector):
nextIntersection = ((cell + (1, 1, 1)) * s - p) / d
And calculate the step length:
dt = s / d
Now, just iterate:
if(nextIntersection.x < nextIntersection.y && nextIntersection.x < nextIntersection.z)
    cell.x++
    nextIntersection.x += dt.x
else if(nextIntersection.y < nextIntersection.z)
    cell.y++
    nextIntersection.y += dt.y
else
    cell.z++
    nextIntersection.z += dt.z
end if
if cell is outside of grid
    terminate
I have omitted the case where two or three cells are changed at the same time. The above code will only change one at a time. If you need this, feel free to adapt the code accordingly.
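Here is a runnable Python sketch of the traversal just described (positive direction components only, as above; grid_lower and grid_dims are my names for the grid origin and per-axis cell counts):

import math

def traverse(p, d, s, grid_lower, grid_dims):
    # cell containing the starting point
    cell = [int(math.floor((p[i] - grid_lower[i]) / s[i])) for i in range(3)]
    # ray parameter t of the first boundary crossing per axis,
    # and the per-axis step length dt = s / d
    t_next = [(grid_lower[i] + (cell[i] + 1) * s[i] - p[i]) / d[i] for i in range(3)]
    dt = [s[i] / d[i] for i in range(3)]
    while all(0 <= cell[i] < grid_dims[i] for i in range(3)):
        yield tuple(cell)
        axis = min(range(3), key=lambda i: t_next[i])  # closest cell boundary
        cell[axis] += 1
        t_next[axis] += dt[axis]

# e.g. list(traverse((0.5, 0.5, 0.5), (1.0, 0.3, 0.2),
#                    (1.0, 1.0, 1.0), (0.0, 0.0, 0.0), (8, 8, 8)))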
Well, if you are working with floats, you can write the equation for the line in the specified direction, parameterized by t. Because between any two floats there is a finite number of representable values, you can simply check, for each of those points, which cube it is in: you have a point (x, y, z) whose components should each lie in a respective interval defining a cube.
The issue gets a little harder if you consider intervals that are dense.
The key here is that even with floats this is a discrete search problem. The fact that the equation of a line between any two points describes a discrete set of points means you merely need to check them all against the cube intervals. Better still, there is a symmetry (a line) allowing you to enumerate each point easily with an arithmetic expression, one after another, for checking.
Also, perhaps consider the integer case first, as it is the same but slightly simpler in determining the discrete points (it is a line in Z_2^8)?

In a restricted n-dimensional space, how can I find the coordinates of p points so that they are as far as possible from each other?

For example, in a 2D space with x in [0; 1] and y in [0; 1], for p = 4, intuitively I would place each point at a corner of the square.
But what can be the general algorithm?
Edit: The algorithm needs modification if the dimensions are not orthogonal to each other.
To uniformly place the points as described in your example you could do something like this:
var combinedSize = 0
for each dimension d in d0..dn {
    combinedSize += d.length;
}
val listOfDistancesBetweenPointsAlongEachDimension = new List
for each dimension d in d0..dn {
    val percentageOfWholeDimensionSize = d.length/combinedSize
    val pointsToPlaceAlongThisDimension = percentageOfWholeDimensionSize * numberOfPoints
    listOfDistancesBetweenPointsAlongEachDimension[d.index] = d.length/(pointsToPlaceAlongThisDimension - 1)
}
Run on your example it gives:
combinedSize = 2
percentageOfWholeDimensionSize = 1 / 2
pointsToPlaceAlongThisDimension = 0.5 * 4
listOfDistancesBetweenPointsAlongEachDimension[0] = 1 / (2 - 1)
listOfDistancesBetweenPointsAlongEachDimension[1] = 1 / (2 - 1)
note: The minus 1 deals with the inclusive interval, allowing points at both endpoints of the dimension
2D case
In 2D (n=2) the solution is to place your p points evenly on some circle. If you want also to define the distance d between points then the circle should have radius around:
2*Pi*r ~= p*d
r ~= (p*d)/(2*Pi)
To be more precise you should use circumference of regular p-point polygon instead of circle circumference (I am too lazy to do that). Or you can compute the distance of produced points and scale up/down as needed instead.
So each point p(i) can be defined as:
p(i).x = r*cos((i*2.0*Pi)/p)
p(i).y = r*sin((i*2.0*Pi)/p)
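A quick sketch of this in Python (the function name is mine), with the scale-up/down caveat above:

import math

def circle_points(p, d):
    r = (p * d) / (2 * math.pi)   # approximate radius for point spacing d
    return [(r * math.cos(2 * math.pi * i / p),
             r * math.sin(2 * math.pi * i / p)) for i in range(p)]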
3D case
Just use sphere instead of circle.
ND case
Use ND hypersphere instead of circle.
So your question boils down to placing p "equidistant" points on an n-D hypersphere (either its surface or its volume). As you can see, the 2D case is simple, but in 3D this starts to be a problem. See:
Make a sphere with equidistant vertices
sphere subdivision triangulation
As you can see, there are quite a few approaches to this (there are many more, some even using Fibonacci-sequence-generated spirals), which are more or less hard to grasp or implement.
However, if you want to generalize this into ND space, you need to choose a general approach. I would try to do something like this:
1. Place p uniformly distributed points inside a bounding hypersphere.
Each point should have position, velocity and acceleration vectors. You can also place the points randomly (just ensure none share the same position)...
2. For each point compute its acceleration:
each point should repel every other point (the opposite of gravity).
3. Update positions:
just do a Newton/d'Alembert physics simulation in ND. Do not forget to include some damping of the speed so the simulation stops in time. Bound the position and speed to the sphere so points will not cross its border (or reflect the speed inwards when they hit it).
4. Loop #2 until the max speed of any point falls below some threshold.
This will more or less accurately place p points on the circumference of the ND hypersphere, giving you a minimal distance d between them. If there is some special dependency between n and p then there might be better configurations than this, but for arbitrary numbers I think this approach is safe enough.
Now, by modifying the rules in #2 you can achieve two different outcomes: one filling the hypersphere's surface (by placing a massive negative mass at its center) and one filling its volume. The radius will also differ between these two options: for one you need to use the surface and for the other the volume...
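A toy version of that simulation in Python (assuming numpy; the inverse-square repulsion, damping and clamping to the unit ball follow the steps above, while the constants are arbitrary):

import numpy as np

def spread_points(p, n, iters=2000, dt=0.01, damping=0.97, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-0.5, 0.5, size=(p, n))     # random start, all distinct
    vel = np.zeros((p, n))
    for _ in range(iters):
        diff = pos[:, None, :] - pos[None, :, :]  # pairwise offset vectors
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)            # no self-repulsion
        acc = (diff / dist[..., None] ** 3).sum(axis=1)  # inverse-square repulsion
        vel = damping * (vel + dt * acc)          # damping stops it in time
        pos = pos + dt * vel
        r = np.linalg.norm(pos, axis=1, keepdims=True)
        pos = np.where(r > 1.0, pos / r, pos)     # keep points inside the unit ball
    return pos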
Here example of similar simulation used to solve a geometry problem:
How to implement a constraint solver for 2-D geometry?
Here preview of 3D surface case:
The number on top is the max abs speed of the particles, used to determine when the simulation has stopped, and the whitish lines are their speed vectors. You need to carefully select the acceleration and damping coefficients so the simulation is fast...

Best way to find all points of lattice in sphere

Given a bunch of arbitrary vectors (stored in a matrix A) and a radius r, I'd like to find all integer-valued linear combinations of those vectors which land inside a sphere of radius r. The necessary coordinates I would then store in a Matrix V. So, for instance, if the linear combination
K=[0; 1; 0]
lands inside my sphere, i.e. something like
if norm(A*K) <= r then
V(:,1)=K
end
etc.
The vectors in A are sure to be the simplest possible basis for the given lattice, and the largest vector has length 1. I'm not sure if that restricts the vectors in any useful way, but I suspect it might: they won't have directions as similar to each other as those of a less ideal basis.
I tried a few approaches already but none of them seem particularly satisfying. I can't seem to find a nice pattern to traverse the lattice.
My current approach involves starting in the middle (i.e. with the linear combination of all 0s) and going through the necessary coordinates one by one. It involves storing a bunch of extra vectors to keep track of, so that I can go through all the octants (in the 3D case) of the coordinates and find them one by one. This implementation seems awfully complex and not very flexible; in particular, it doesn't seem easily generalizable to arbitrary numbers of dimensions (although that isn't strictly necessary for the current purpose, it'd be a nice-to-have).
Is there a nice* way to find all the required points?
(*Ideally both efficient and elegant**. If REALLY necessary, it wouldn't matter THAT much to have a few extra points outside the sphere, but preferably not that many more. I definitely do need all the vectors inside the sphere. If it makes a large difference: I'm most interested in the 3D case.
**I'm pretty sure my current implementation is neither.)
Similar questions I found:
Find all points in sphere of radius r around arbitrary coordinate - this is actually a much more general case than what I'm looking for. I am only dealing with periodic lattices and my sphere is always centered at 0, coinciding with one point on the lattice.
But I don't have a list of points but rather a matrix of vectors with which I can generate all the points.
How to efficiently enumerate all points of sphere in n-dimensional grid - the case of a completely regular hypercubic lattice and the Manhattan distance. I'm looking for completely arbitrary lattices and the Euclidean distance (or, for efficiency purposes, obviously its square).
Offhand, without proving any assertions, I think that:
1) if the set of vectors is not of maximal rank, then the number of solutions is infinite;
2) if the set is of maximal rank, then the image of the linear transformation generated by the vectors is a subspace (e.g., a plane) of the target space, which intersects the sphere in a lower-dimensional sphere;
3) it follows that you can reduce the problem to a 1-1 linear transformation (a k×k matrix on a k-dimensional space);
4) since the matrix is invertible, you can "pull back" the sphere to an ellipsoid in the space containing the lattice points, and as a bonus you get a nice geometric description of the ellipsoid (principal axis theorem);
5) your problem now becomes exactly one of determining the lattice points inside the ellipsoid.
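To spell out the pull-back in the question's notation (this is just the algebra implied by point 4, not something stated in the answer):

norm(A*K) <= r   <=>   (A*K)'*(A*K) <= r^2   <=>   K'*(A'*A)*K <= r^2

so the integer vectors K to enumerate are exactly the lattice points inside the ellipsoid defined by the Gram matrix A'*A.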
The latter problem is related to an old problem (counting the lattice points inside an ellipse) which was considered by Gauss, who derived a good approximation. Determining the lattice points inside an ellipse(oid) is probably not such a tidy problem, but it probably can be reduced one dimension at a time (the cross-section of an ellipsoid and a plane is another ellipsoid).
I found a method that makes me a lot happier for now. There may still be possible improvements, so if you have a better method, or find an error in this code, definitely please share. Though here is what I have for now: (all written in SciLab)
Step 1: Figure out the maximal ranges as defined by a bounding n-parallelotope aligned with the axes of the lattice vectors. Thanks for ElKamina's vague suggestion as well as this reply to another of my questions over on math.se by chappers: https://math.stackexchange.com/a/1230160/49989
function I=findMaxComponents(A,r) //given a matrix A of lattice basis vectors
//and a sphere radius r,
//find the corners of the bounding parallelotope
//built from the lattice, and store it in I.
[dims,vecs]=size(A); //figure out how many vectors there are in A (and, unnecessarily, how long they are)
U=eye(vecs,vecs); //builds matching unit matrix
iATA=pinv(A'*A); //finds the (pseudo-)inverse of A^T A
iAT=pinv(A'); //finds the (pseudo-)inverse of A^T
I=[]; //initializes I as an empty vector
for i=1:vecs //for each lattice vector,
t=r*(iATA*U(:,i))/norm(iAT*U(:,i)) //find the maximum component such that
//it fits in the bounding n-parallelotope
//of a (n-1)-sphere of radius r
I=[I,t(i)]; //and append it to I
end
I=[-I;I]; //also append the minima (by symmetry, the negative maxima)
endfunction
In my question I only asked for a general basis, i.e., for n dimensions, a set of n arbitrary but linearly independent vectors. The above code, by virtue of using the pseudo-inverse, works for matrices of arbitrary shapes; similarly, Scilab's A' returns the conjugate transpose rather than just the transpose of A, so it should equally work for complex matrices.
In the last step I append the corresponding minimal components.
For one such A as an example, this gives me the following in Scilab's console:
A =
0.9701425 - 0.2425356 0.
0.2425356 0.4850713 0.7276069
0.2425356 0.7276069 - 0.2425356
r=3;
I=findMaxComponents(A,r)
I =
- 2.9494438 - 3.4186986 - 4.0826424
2.9494438 3.4186986 4.0826424
I=int(I)
I =
- 2. - 3. - 4.
2. 3. 4.
The values found by findMaxComponents are the largest possible coefficients of each lattice vector such that a linear combination with that coefficient still lands on the sphere. Since I'm looking for the largest such combinations with integer coefficients, I can safely drop the part after the decimal point to get the maximal plausible integer ranges. So for the given matrix A, I'll have to go from -2 to 2 in the first component, from -3 to 3 in the second and from -4 to 4 in the third, and I'm sure to land on all the points inside the sphere (plus superfluous extra points, but importantly definitely every valid point inside). Next up:
Step 2: using the above information, generate all the candidate combinations.
function K=findAllCombinations(I) //takes a matrix of the form produced by
//findMaxComponents() and returns a matrix
//which lists all the integer linear combinations
//in the respective ranges.
v=I(1,:); //starting from the minimal vector
K=[];
next=1; //keeps track of what component to advance next
changed=%F; //keeps track of whether to add the vector to the output
while or(v~=I(2,:)) //as long as not all components of v match all components of the maximum vector
if v <= I(2,:) then //if each current component is smaller than each largest possible component
if ~changed then
K=[K;v]; //store the vector and
end
v(next)=v(next)+1; //advance the component by 1
next=1; //also reset next to 1
changed=%F;
else
v(1:next)=I(1,1:next); //reset all components smaller than or equal to the current one and
next=next+1; //advance the next larger component next time
changed=%T;
end
end
K=[K;I(2,:)]'; //while loop ends a single iteration early so add the maximal vector too
//also transpose K to fit better with the other functions
endfunction
So now that I have that, all that remains is to check whether a given combination actually does lie inside or outside the sphere. All I gotta do for that is:
Step 3: Filter the combinations to find the actually valid lattice points
function points=generatePoints(A,K,r)
possiblePoints=A*K; //explicitly generates all the possible points
points=[];
for i=possiblePoints
if i'*i<=r*r then //filter those that are too far from the origin
points=[points i];
end
end
endfunction
And I get all the combinations that actually do fit inside the sphere of radius r.
For the above example, the output is rather long: Of originally 315 possible points for a sphere of radius 3 I get 163 remaining points.
The first 4 are: (each column is one)
- 0.2425356 0.2425356 1.2126781 - 0.9701425
- 2.4253563 - 2.6678919 - 2.4253563 - 2.4253563
1.6977494 0. 0.2425356 0.4850713
So the remainder of the work is optimization. Presumably some of those loops could be made faster, and especially as the number of dimensions goes up I have to generate an awful lot of points that I then discard, so maybe there is a better way than taking the bounding n-parallelotope of the (n-1)-sphere as a starting point.
Let us just represent K as X. The problem can be represented as:
(a11*x1 + a12*x2 + ...)^2 + (a21*x1 + a22*x2 + ...)^2 + ... < r^2
Note that the set of feasible (x1, x2, ...) does not form a sphere; it is an ellipsoid.
This can be done with recursion on dimension: pick a lattice hyperplane direction and index all such hyperplanes that intersect the r-radius ball. The ball intersection of each such hyperplane is itself a ball, in one lower dimension. Repeat. Here's the calling-function code in Octave:
function lat_points(lat_bas_mx,rr)
% **globals for hyperplane lattice point recursive function**
clear global; % this seems necessary/important between runs of this function
global MLB;
global NN_hat;
global NN_len;
global INP; % matrix of interior points, each point(vector) a column vector
global ctr; % integer counter, for keeping track of lattice point vectors added
% in the pre-allocated INP matrix; will finish iteration with actual # of points found
ctr = 0; % counts number of ball-interior lattice points found
MLB = lat_bas_mx;
ndim = size(MLB)(1);
% **create hyperplane normal vectors for recursion step**
% given full-rank lattice basis matrix MLB (each vector in lattice basis a column),
% form set of normal vectors between successive, nested lattice hyperplanes;
% store them as columnar unit normal vectors in NN_hat matrix and their lengths in NN_len vector
NN_hat = [];
for jj=1:ndim-1
tmp_mx = MLB(:,jj+1:ndim);
tmp_mx = [NN_hat(:,1:jj-1),tmp_mx];
NN_hat(:,jj) = null(tmp_mx'); % null space of transpose = orthogonal to columns
tmp_len = norm(NN_hat(:,jj));
NN_hat(:,jj) = NN_hat(:,jj)/tmp_len;
NN_len(jj) = dot(MLB(:,jj),NN_hat(:,jj));
if (NN_len(jj)<0) % NN_hat(:,jj) and MLB(:,jj) must have positive dot product
% for cutting hyperplane indexing to work correctly
NN_hat(:,jj) = -NN_hat(:,jj);
NN_len(jj) = -NN_len(jj);
endif
endfor
NN_len(ndim) = norm(MLB(:,ndim));
NN_hat(:,ndim) = MLB(:,ndim)/NN_len(ndim); % the lowest recursion level normal
% is just the last lattice basis vector
% **estimate number of interior lattice points, and pre-allocate memory for INP**
vol_ppl = prod(NN_len); % the volume of the ndim dimensional lattice parallelepiped
% is just the product of the NN_len's (they amount to the nested altitudes
% of hyperplane "parallelepipeds")
vol_bll = exp( (ndim/2)*log(pi) + ndim*log(rr) - gammaln(ndim/2+1) ); % volume of ndim ball, radius rr
est_num_pts = ceil(vol_bll/vol_ppl); % estimated number of lattice points in the ball
err_fac = 1.1; % error factor for memory pre-allocation--assume max of err_fac*est_num_pts columns required in INP
INP = zeros(ndim,ceil(err_fac*est_num_pts));
% **call the (recursive) function**
% for output, global variable INP (matrix of interior points)
% stores each valid lattice point (as a column vector)
clp = zeros(ndim,1); % confirmed lattice point (start at origin)
bpt = zeros(ndim,1); % point at center of ball (initially, at origin)
rd = 1; % initial recursion depth must always be 1
hyp_fun(clp,bpt,rr,ndim,rd);
printf("%i lattice points found\n",ctr);
INP = INP(:,1:ctr); % trim excess zeros from pre-allocation (if any)
endfunction
Regarding the NN_len(jj)*NN_hat(:,jj) vectors--they can be viewed as successive (nested) altitudes in the ndim-dimensional "parallelepiped" formed by the vectors in the lattice basis, MLB. The volume of the lattice basis parallelepiped is just prod(NN_len)--for a quick estimate of the number of interior lattice points, divide the volume of the ndim-ball of radius rr by prod(NN_len). Here's the recursive function code:
function hyp_fun(clp,bpt,rr,ndim,rd)
%{
clp = the lattice point we're entering this lattice hyperplane with
bpt = location of center of ball in this hyperplane
rr = radius of ball
rd = recursion depth--from 1 to ndim
%}
global MLB;
global NN_hat;
global NN_len;
global INP;
global ctr;
% hyperplane intersection detection step
nml_hat = NN_hat(:,rd);
nh_comp = dot(clp-bpt,nml_hat);
ix_hi = floor((rr-nh_comp)/NN_len(rd));
ix_lo = ceil((-rr-nh_comp)/NN_len(rd));
if (ix_hi<ix_lo)
return % no hyperplane intersections detected w/ ball;
% get out of this recursion level
endif
hp_ix = [ix_lo:ix_hi]; % indices are created wrt the received reference point
hp_ln = length(hp_ix);
% loop through detected hyperplanes (updated)
if (rd<ndim)
bpt_new_mx = bpt*ones(1,hp_ln) + NN_len(rd)*nml_hat*hp_ix; % an ndim by length(hp_ix) matrix
clp_new_mx = clp*ones(1,hp_ln) + MLB(:,rd)*hp_ix; % an ndim by length(hp_ix) matrix
dd_vec = nh_comp + NN_len(rd)*hp_ix; % a length(hp_ix) row vector
rr_new_vec = sqrt(rr^2-dd_vec.^2);
for jj=1:hp_ln
hyp_fun(clp_new_mx(:,jj),bpt_new_mx(:,jj),rr_new_vec(jj),ndim,rd+1);
endfor
else % rd=ndim--so at deepest level of recursion; record the points on the given 1-dim
% "lattice line" that are inside the ball
INP(:,ctr+1:ctr+hp_ln) = clp + MLB(:,rd)*hp_ix;
ctr += hp_ln;
return
endif
endfunction
This has some Octave-y/Matlab-y things in it, but most should be easily understandable; M(:,jj) references column jj of matrix M; the tick ' means transpose; [A B] concatenates matrices A and B; A=[] declares an empty matrix.
Updated / better optimized from original answer:
"vectorized" the code in the recursive function, to avoid most "for" loops (those slowed it down a factor of ~10; the code now is a bit more difficult to understand though)
pre-allocated memory for the INP matrix-of-interior points (this speeded it up by another order of magnitude; before that, Octave was having to resize the INP matrix for every call to the innermost recursion level--for large matrices/arrays that can really slow things down)
Because this routine was part of a project, I also coded it in Python. From informal testing, the Python version is another 2-3 times faster than this (Octave) version.
For reference, here is the old, much slower code in the original posting of this answer:
% (OLD slower code, using for loops, and constantly resizing
% the INP matrix) loop through detected hyperplanes
if (rd<ndim)
for jj=1:length(hp_ix)
bpt_new = bpt + hp_ix(jj)*NN_len(rd)*nml_hat;
clp_new = clp + hp_ix(jj)*MLB(:,rd);
dd = nh_comp + hp_ix(jj)*NN_len(rd);
rr_new = sqrt(rr^2-dd^2);
hyp_fun(clp_new,bpt_new,rr_new,ndim,rd+1);
endfor
else % rd=ndim--so at deepest level of recursion; record the points on the given 1-dim
% "lattice line" that are inside the ball
for jj=1:length(hp_ix)
clp_new = clp + hp_ix(jj)*MLB(:,rd);
INP = [INP clp_new];
endfor
return
endif

Fully cover a rectangle with the minimum number of fixed-radius circles

I've had this problem for a few years. It was on an informatics contest in my town a while back. I failed to solve it, and my teacher failed to solve it. I haven't met anyone who was able to solve it. Nobody I know knows the right way to give the answer, so I decided to post it here:
Ze problem
Given a rectangle, X by Y, find the minimum number of circles N with a fixed given radius R necessary to fully cover every part of the rectangle.
I have thought of ways to solve it, but I have nothing definite. Each circle defines an inscribed rectangle, with (2R)^2 = Wi^2 + Hi^2, where Wi and Hi are the width and height of the practical area covered by circle i (the diagonal of the inscribed rectangle is the circle's diameter). At first I thought I should make Wi equal to Wj for all i and j, and the same for H. That way, I could simplify the problem by making the width/height ratios equal to that of the main rectangle (Wi/Hi = X/Y). That way, N = X/Wi. But that solution is surely wrong in case X greatly exceeds Y or vice versa.
The second idea was that Wi = Hi for any given i; that way, squares fill space most efficiently. However, if a very narrow strip remains, it's much more optimal to use rectangles to fill it, or better yet, to use rectangles for the last row before that too.
Then I realized that neither idea is optimal, since I can always find better ways of doing it. It will always be close to final, but not final.
Edit
In some cases (large rectangle) joining hexagons seem to be a better solution than joining squares.
Further Edit
Here's a comparison of two methods: clover vs hexagonal. Hexagonal is obviously better for large surfaces. I do think, however, that when the rectangle is small enough, the rectangular method may be more efficient. It's a hunch. Now, in the picture you see 14 circles on the left and 13 circles on the right, even though the covered surface differs by much more than one circle's worth (roughly double). That's because on the left the circles overlap less, and thus waste less surface.
The questions still remain:
Is the regular hexagon pattern itself optimal? Or certain adjustments should be made in parts of the main rectangle.
Are there reasons not to use regular shapes as "ultimate solution"?
Does this question even have an answer? :)
For X and Y large compared to R, a hexagonal (honeycomb) pattern is near optimal. The distance between the centers of the circles in the X direction is sqrt(3)*R. The distance between rows in the Y direction is 3*R/2, so you need roughly X*Y/R^2 * 2/(3*sqrt(3)) circles.
If you use a square pattern, the horizontal distance is larger (2*R), but the vertical distance is much smaller (R), so you'd need about X*Y/R^2 * 1/2 circles. Since 2/(3*sqrt(3)) < 1/2, the hexagonal pattern is the better deal.
Note that this is only an approximation. It is usually possible to jiggle the regular pattern a bit to make something fit where the standard pattern wouldn't. This is especially true if X and Y are small compared to R.
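A back-of-the-envelope sketch of those two estimates (Python; it ignores edge effects, so treat the counts as rough approximations):

import math

def estimate_circles(X, Y, R):
    # hexagonal: sqrt(3)*R between centers, 3*R/2 between rows
    hexagonal = math.ceil(X / (math.sqrt(3) * R)) * math.ceil(Y / (1.5 * R))
    # square pattern as described above: 2*R horizontally, R vertically
    square = math.ceil(X / (2 * R)) * math.ceil(Y / R)
    return hexagonal, square

# e.g. estimate_circles(100, 100, 1) -> (3886, 5000)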
In terms of your specific questions:
The hexagonal pattern is an optimal covering of the entire plane. With X and Y finite, I would think it is often possible to get a better result. The trivial example is when the height is less than the radius: in that case you can move the circles of the single row further apart until the distance between the intersection points of every pair of adjacent circles equals Y (made concrete below).
Having a regular pattern imposes additional restrictions on the solution, and so the optimal solution under those restrictions may not be optimal with those restrictions removed. In general, somewhat irregular patterns may be better (see the page linked to by mbeckish).
The examples on that same page are all specific solutions. The solutions with more circles resemble the hexagonal pattern somewhat. Still, there does not appear to be a closed-form solution.
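To make the thin-strip example above concrete (my algebra, not from the answer): two adjacent circles of radius R in a single row cover the full strip height Y as long as their two intersection points are Y apart, which gives a maximum center spacing of dx = sqrt(4*R^2 - Y^2); this approaches 2*R as Y shrinks, so the row needs fewer circles than the plane-optimal hexagonal spacing of sqrt(3)*R would suggest.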
This site attacks the problem from a slightly different angle: Given n unit circles, what is the largest square they can cover?
As you can see, as the number of circles changes, so does the covering pattern.
For your problem, I believe this implies: different rectangle dimensions and circle sizes will dictate different optimal covering patterns.
The hexagon is better than the diamond. Consider the percentage of the unit circle's area covered by each:
#!/usr/bin/env ruby
include Math
def diamond
# The distance from the center to a corner is the radius.
# On a unit circle, that is 1.
radius = 1
# The edge of the nested diamond is the hypotenuse of a
# right triangle whose legs are both radii.
edge = sqrt(radius ** 2 + radius ** 2)
# The area of the diamond is the square of the edge
edge ** 2
end
def hexagon
# The hexagon is composed of 6 equilateral triangles.
# Since the inner edges go from the center to a hexagon
# corner, their length is the radius (1).
radius = 1
# The base and height of an equilateral triangle whose
# edge is 'radius'.
base = radius
height = sin(PI / 3) * radius
# The area of said triangle
triangle_area = 0.5 * base * height
# The area of the hexagon is 6 such triangles
triangle_area * 6
end
def circle
radius = 1
PI * radius ** 2
end
puts "diamond == #{sprintf "%2.2f", (100 * diamond / circle)}%"
puts "hexagon == #{sprintf "%2.2f", (100 * hexagon / circle)}%"
And
$ ./geometrons.rb
diamond == 63.66%
hexagon == 82.70%
Further, regular hexagons are the highest-vertex polygons that form a regular tessellation of the plane.
According to my calculations the right answer is:
D=2*R; X >= 2*D, Y >= 2*D,
N = ceil(X/D) + ceil(Y/D) + 2*ceil(X/D)*ceil(Y/D)
In the particular case where the remainders of X/D and Y/D both equal 0, then
N = (X + Y + X*Y/R)/D
Case 1: R = 1, X = 2, Y = 2, then N = 4
Case 2: R = 1, X = 4, Y = 6, then N = 17
Case 3: R = 1, X = 5, Y = 7, then N = 31
Hope it helps.
When the circles are arranged as a four-leaf clover with a fifth circle in the middle, each circle effectively covers an area equal to R * 2 * R. In this arrangement the question becomes: how many circles that each cover an area of 2*R^2 will cover an area of W * H? That is, N * 2 * R^2 = W * H, so N = W * H / (2 * R^2).
