How to quickly pack spheres in 3D? - algorithm

I'm looking for an algorithm for random close packing of spheres in 3D. The trick is that I'd like to pack spheres around a certain number of existing spheres. So for example, given somewhere between 100 and 1000 spheres in 3D (which have fixed positions and sizes; they may overlap, and may be different sizes), I'd like to pack spheres (all same size, positions can be chosen freely) around them (with no overlaps).
The metric for a good quality of packing is the packing density or void fraction. Essentially I'd like the fixed spheres and the packed spheres to occupy a compact volume of space (e.g. roughly spherical, or packed in layers around the fixed spheres) with as few voids in it as possible.
Is there an off the shelf algorithm that does this? How would you approach it in a way that balances speed of calculation with packing quality?
UPDATE Detail on packing density: this depends on what volume is chosen for the calculation. For this, we're looking to pack a certain number of layers of spheres around the fixed ones. Form a surface of points which are at exactly a distance d from the surface of the closest fixed sphere; the packing density should be calculated within the volume enclosed by that surface. It's convenient if d is some multiple of the size of the packed spheres. (Assume we can place at least as many free spheres as needed to fill that volume; there may be excess ones, and it doesn't matter where they're placed.)
The fixed and the variable spheres are all pretty similar sizes (let's say within a 2x range from smallest to largest). In practice the degree of overlap of the fixed spheres is also limited: no fixed sphere is closer than a certain distance (around 0.2-0.3 diameters) to any other fixed sphere, so it is guaranteed that they are spread out, and/or only overlap a few neighbors rather than all overlapping each other.
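For concreteness, the shell-bounded density above can be estimated by naive Monte Carlo sampling. A minimal sketch in Python, assuming spheres are given as (x, y, z, r) rows; all names are illustrative and the pairwise distance computation is unoptimized:

import numpy as np

def packing_density(fixed, packed, d, n_samples=20000, rng=None):
    # fixed, packed: (N, 4) arrays of rows [x, y, z, r].
    # d: shell distance measured from the nearest fixed sphere's surface.
    rng = np.random.default_rng() if rng is None else rng
    spheres = np.vstack([fixed, packed])
    # bounding box of the shell region around the fixed spheres
    lo = (fixed[:, :3] - (fixed[:, 3] + d)[:, None]).min(axis=0)
    hi = (fixed[:, :3] + (fixed[:, 3] + d)[:, None]).max(axis=0)
    pts = rng.uniform(lo, hi, size=(n_samples, 3))
    # signed distance from each sample to the nearest fixed-sphere surface
    to_fixed = (np.linalg.norm(pts[:, None, :] - fixed[None, :, :3], axis=2)
                - fixed[None, :, 3]).min(axis=1)
    in_shell = to_fixed <= d
    # a sample is occupied if it lies inside any sphere, fixed or packed
    to_any = (np.linalg.norm(pts[:, None, :] - spheres[None, :, :3], axis=2)
              - spheres[None, :, 3]).min(axis=1)
    occupied = to_any <= 0.0
    return (occupied & in_shell).sum() / max(in_shell.sum(), 1)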

Use a lattice in which neighboring points are separated by exactly the diameter of the fill spheres. Any lattice shape meeting that definition will suffice.
Choose the translation and rotation of the lattice (the world transform) to minimize the offsets between lattice points and the centers of the fixed spheres.
Fixed Pass 1:
Create a list of all lattice points that lie within a fixed sphere's radius + the diameter of the fill spheres.
For each of those points, keep the positional difference vector (origin - point) in the list.
Flag those lattice points in the list for removal.
Lattice Pass 1:
Combine the distance vectors of any overlapping fixed spheres, i.e. re-base the origin to the overlap point (either a true overlap, or the point extended to the fill radius). Store the value on one side and flag it on the other, to permit multiple overlaps.
This is where a decision is needed:
Optimize space over time, Computationally slow:
Add points outward from the adjusted origin radius + fill radius, then iterate over the lattice points, moving one point at a time away from the other points until all spacing conditions are met. If the lattice points implement spring logic, an optimal solution is produced given enough iterations (on the order of N^2 + N). Stop here; done.
Pull the remaining points in lattice to fill the void:
Warp the lattice near each overlap point, or near the origin if no overlap exists (the warped region is as large as needed), pulling the points in to fill the gap.
Lattice Pass 2:
Add the missing points, i.e. points flagged as removed that now have no other point within fill radius + 1 and are not near a fixed sphere (within the other radius + fill radius). This should be a small number of points.
Lattice Pass 3:
Adjust all lattice positions to move closer to the proper grid spacing. This monotonically decreases the distances between points, limited to >= radius1 + radius2.
Repeat 3-4 (or more) times, applying a tiny random bias offset (at most ±1 pixel per dimension) to the first pass of created points, to avoid equal-spacing conflicts after the warp. If no suitable gap is created, the solution may settle into a poorly optimized state.
Each fill sphere is centered on a lattice grid point.
I can see many areas for improvement and optimization, but the point was to provide a clear, reasonably fast algorithm that is good but not guaranteed optimal.
Note the difference between options 1 and 2:
Option 1 creates spheres that collide with other spheres, and requires all fill spheres to move multiple times to resolve the collisions.
Option 2 only creates new spheres in empty spaces and moves the rest inward to adapt, resulting in much faster convergence, since there are no collisions to resolve.
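As a concrete starting point, here is a minimal sketch of the lattice seeding plus Fixed Pass 1 in Python, assuming an FCC lattice whose nearest-neighbor spacing is one fill diameter; the warp and relaxation passes are not shown, and all names are illustrative:

import numpy as np

def seed_lattice(fixed, fill_r, extent):
    # FCC lattice whose nearest-neighbor spacing is one fill diameter
    a = 2 * fill_r * np.sqrt(2)                  # cubic cell edge
    n = int(np.ceil(extent / a))
    base = np.array([[0, 0, 0], [.5, .5, 0], [.5, 0, .5], [0, .5, .5]])
    cells = np.mgrid[-n:n, -n:n, -n:n].reshape(3, -1).T
    pts = ((cells[:, None, :] + base[None, :, :]) * a).reshape(-1, 3)
    # Fixed Pass 1: drop points where a fill sphere would overlap a fixed one
    dist = np.linalg.norm(pts[:, None, :] - fixed[None, :, :3], axis=2)
    keep = (dist >= fixed[None, :, 3] + fill_r).all(axis=1)
    return pts[keep]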

Related

Calculate volume of intersection between two sets of spheres

I need to calculate the overlap between two 3D volumes each of which is described as the union of a set of (possibly overlapping) spheres. If the two sets of spheres are S_i (with radius r_i and center at c_i) and S_j (radius r_j and center at c_j); consider the two combined volumes in 3D given by union(S_i) and union(S_j), and then calculate volume(intersection(union(S_i), union(S_j))).
Is there a reasonably fast algorithm for this?
Also: is there a fast approximate algorithm with a bound on the max error?
(Part of what makes that tricky is that spheres within each set can overlap in arbitrary ways; most of the time they overlap a little - let's say for the typical sphere in one set, there are a few other spheres in the same set that overlap it by a few % of its volume; but they could also overlap by a lot more, and in principle the same point in 3D could be in the interior of a lot more than two spheres in the same set, so an exact solution would have to consider 3-way, 4-way and so on up to n-way overlapped regions. A fast approximate solution doesn't necessarily have to do that)
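For the approximate variant, naive Monte Carlo sampling is a simple baseline: the standard error shrinks as 1/sqrt(n_samples), which gives a probabilistic (not worst-case) error bound. A sketch in Python, with illustrative names and no spatial index:

import numpy as np

def intersection_volume(S1, S2, n_samples=100000, rng=None):
    # S1, S2: (N, 4) arrays of rows [x, y, z, r]. Estimates
    # volume(intersection(union(S1), union(S2))); n-way overlaps within a
    # set cost nothing extra, since only point membership is tested.
    rng = np.random.default_rng() if rng is None else rng
    both = np.vstack([S1, S2])
    lo = (both[:, :3] - both[:, 3:4]).min(axis=0)
    hi = (both[:, :3] + both[:, 3:4]).max(axis=0)
    pts = rng.uniform(lo, hi, size=(n_samples, 3))
    def in_union(S):
        d2 = ((pts[:, None, :] - S[None, :, :3]) ** 2).sum(axis=2)
        return (d2 <= S[None, :, 3] ** 2).any(axis=1)
    hits = in_union(S1) & in_union(S2)
    return hits.mean() * np.prod(hi - lo)   # sample in chunks for large N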

How to index nearby 3D points on the fly?

In physics simulations (for example n-body systems) it is sometimes necessary to keep track of which particles (points in 3D space) are close enough to interact (within some cutoff distance d) in some kind of index. However, particles can move around, so it is necessary to update the index, ideally on the fly without recomputing it entirely. Also, for efficiency in calculating interactions it is necessary to keep the list of interacting particles in the form of tiles: a tile is a fixed size array (eg 32x32) where the rows and columns are particles, and almost every row-particle is close enough to interact with almost every column particle (and the array keeps track of which ones actually do interact).
What algorithms may be used to do this?
Here is a more detailed description of the problem:
Initial construction: Given a list of points in 3D space (on the order of a few thousand to a few million, stored as an array of floats), produce a list of tiles of a fixed size (NxN), where each tile has two lists of points (N row points and N column points), and an NxN boolean array which describes whether the interaction between each row and column particle should be calculated, and for which:
a. every pair of points p1,p2 for which distance(p1,p2) < d is found in at least one tile and marked as being calculated (no missing interactions), and
b. if any pair of points is in more than one tile, it is only marked as being calculated in the boolean array in at most one tile (no duplicates),
and also the number of tiles is relatively small if possible (but this is less important than being able to update the tiles efficiently)
Update step: If the positions of the points change slightly (by much less than d), update the list of tiles in the fastest way possible so that they still meet the same conditions a and b (this step is repeated many times)
It is okay to keep any necessary data structures that help with this, for example the bounding boxes of each tile, or a spatial index like a quadtree. It is probably too slow to calculate all pairwise particle distances at every update step (and in any case we only care about particles which are close, so we can skip most possible pairs just by sorting along a single dimension, for example). It is also probably too slow to keep a full (quadtree or similar) index of all particle positions. On the other hand, it is perfectly fine to construct the tiles on a regular grid of some kind. The density of particles per unit volume in 3D space is roughly constant, so the tiles can probably be built from (essentially) fixed-size bounding boxes.
To give an example of the typical scale/properties of this kind of problem, suppose there are 1 million particles, arranged as a random packing of spheres of diameter 1 unit into a cube of size roughly 100x100x100. Suppose the cutoff distance is 5 units, so typically each particle would be interacting with (2*5)**3 or ~1000 other particles. The tile size is 32x32. There are roughly 1e+9 interacting pairs of particles, so the minimum possible number of tiles is ~1e+6. Now assume that at each position update the particles move a distance of around 0.0001 unit in a random direction, but always in a way such that they stay at least 1 unit away from any other particle and the typical density of particles per unit volume stays the same. There would typically be many millions of position update steps like that. The number of newly created interaction pairs per step due to the movement is (back of the envelope) (10**2 * 6 * 0.0001 / 10**3) * 1e+9 = 60000, so one update step can be handled in principle by marking 60000 particle pairs as non-interacting in their original tiles, and adding at most 60000 new tiles (mostly empty - one per pair of newly interacting particles). This would rapidly get to a point where most tiles are empty, so it is definitely necessary to combine/merge tiles somehow pretty often - but how to do it without a full rebuild of the tile list?
P.S. It is probably useful to describe how this differs from the typical spatial index (e.g. octrees) scenario: a. we only care about grouping close-by points together into tiles, not looking up which points are in an arbitrary bounding box or which points are closest to a query point - a bit closer to clustering than querying; b. the density of points in space is pretty constant; and c. the index has to be updated very often, but most moves are tiny.
Not sure my reasoning is sound, but here's an idea:
Divide your space into a grid of 3D cubes with side length d. Then do the following:
Assign all points to the cube in which they're contained; this is fast since you can derive a point's cube directly from its coordinates
Now check the following:
Mark all pairs of points within the top-left quarter of a cube as colliding; they're less than d apart. Further, every "quarter cube" in space is the top-left quarter of exactly one cube, so you won't check the same pair twice.
Check for collisions of type (p, q), where p is a point in the top-left quarter and q is a point not in the top-left quarter. This way, you will again check the collision between any two points at most once, because every pair of quarters is checked exactly once.
Since every pair of close points is either in the same quarter or in neighbouring quarters, they'll be checked by the first or the second step. Further, since points are distributed approximately evenly, your runtime is much less than n^2 (n = number of points); per pair of quarters it's on the order of k^2 (k = number of points per quarter, which is approximately constant).
In an update step, you only need to check:
if a point crossed a boundary of a box, which should be fast since you can look at one coordinate at a time, and box boundaries are simple multiples of d/2
check for collisions of the points as above
To create the tiles, divide the space into a second grid of (non-overlapping) cubes whose width is chosen such that the average number of interaction midpoints (of pairs of particles that interact or almost interact) falling into a given cube is less than the width of your tiles (i.e. 32). Since each particle is expected to interact with 300-500 particles, this width will be much smaller than d.
Then, while checking for interactions in steps 1 & 2, assign particle interactions to these new cubes according to the coordinates of the midpoint of each interaction. Assign one tile per cube, and mark the interacting particles assigned to that cube in the tile.
A further optimization might be to consider the distance to a point's closest neighbour within a cube, and derive from that the minimum number of update steps needed to change that point's collision status; the point can then be ignored for that many steps.
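Here is a minimal sketch of the cube hashing in Python. Instead of the quarter-cube bookkeeping above, it scans only the 13 "forward" neighbor cells of each cube, which is an equivalent and common way to guarantee each pair is checked once; names are illustrative and the per-cell work is brute force:

import numpy as np
from collections import defaultdict

def build_cells(points, d):
    # hash each point into the cube containing it; cube side = d
    cells = defaultdict(list)
    for i, p in enumerate(points):
        cells[tuple((p // d).astype(int))].append(i)
    return cells

def close_pairs(points, cells, d):
    # 13 of the 26 neighbor offsets: lexicographically "forward" cells,
    # so each pair of cells (and hence of points) is visited only once
    forward = [(x, y, z) for x in (-1, 0, 1) for y in (-1, 0, 1)
               for z in (-1, 0, 1) if (x, y, z) > (0, 0, 0)]
    pairs = []
    for cell, idx in cells.items():
        for a in range(len(idx)):                     # within the same cell
            for b in range(a + 1, len(idx)):
                if np.linalg.norm(points[idx[a]] - points[idx[b]]) < d:
                    pairs.append((idx[a], idx[b]))
        for o in forward:                             # forward neighbor cells
            nb = (cell[0] + o[0], cell[1] + o[1], cell[2] + o[2])
            for i in idx:
                for j in cells.get(nb, ()):
                    if np.linalg.norm(points[i] - points[j]) < d:
                        pairs.append((i, j))
    return pairs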
I suggest the following algorithm. E.g. we have a 1x1x1 cube and the cutoff distance is 0.001.
Let's choose three base anchor points: (0,0,0) (0,1,0) (1,0,0)
Associate an array of size 1000 (1 / 0.001) with each anchor point.
Add three numbers to each regular point; these fields store the distance between the point and each anchor point.
That distance, quantized, is also used as an index into the anchor point's array; e.g. 0.4324 means index 432.
In each array cell, store the set of points whose quantized distance equals that index.
Recalculate the distance between a regular point and each anchor point every time the point is updated.
Move the point between the sets in the arrays during the update.
These structures give you an easy way to find all nearby points: take the intersection of three sets, where the sets are chosen by the distances between the query point and the anchor points.
In short, it is the intersection of three spherical shells. You may need to apply additional filtering to the result if you want to trim off the corners of this intersection.
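A minimal sketch of this anchor-band index in Python (illustrative names; a query gathers the same and adjacent distance bands for each anchor, intersects them, and still needs exact distance filtering afterwards):

import numpy as np

def build_anchor_index(points, cutoff, anchors):
    # one bucket table per anchor; bucket = floor(distance / cutoff)
    index = [{} for _ in anchors]
    for i, p in enumerate(points):
        for k, a in enumerate(anchors):
            b = int(np.linalg.norm(p - a) / cutoff)
            index[k].setdefault(b, set()).add(i)
    return index

def candidates(p, cutoff, anchors, index):
    # a point within cutoff of p differs by at most one band per anchor
    # (triangle inequality), so intersect the three band unions
    result = None
    for k, a in enumerate(anchors):
        b = int(np.linalg.norm(p - a) / cutoff)
        band = set().union(*(index[k].get(j, set()) for j in (b - 1, b, b + 1)))
        result = band if result is None else result & band
    return result    # exact distance filtering still required afterwards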
Consider using the Barnes-Hut algorithm or something similar. A simulation in 2D would use a quadtree data structure to store particles, and a 3D simulation would use an octree.
The benefit of using a tree structure is that it stores the particles in a way that nearby particles can be found quickly by traversing the tree, while far-away particles lie on traversal paths that can be ignored.
Wikipedia has a good description of the algorithm:
The Barnes–Hut tree
In a three-dimensional n-body simulation, the Barnes–Hut algorithm recursively divides the n bodies into groups by storing them in an octree (or a quad-tree in a 2D simulation). Each node in this tree represents a region of the three-dimensional space. The topmost node represents the whole space, and its eight children represent the eight octants of the space. The space is recursively subdivided into octants until each subdivision contains 0 or 1 bodies (some regions do not have bodies in all of their octants). There are two types of nodes in the octree: internal and external nodes. An external node has no children and is either empty or represents a single body. Each internal node represents the group of bodies beneath it, and stores the center of mass and the total mass of all its children bodies.
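A minimal octree-insertion sketch in Python following the description above (external nodes hold at most one body, internal nodes accumulate total mass and center of mass; it assumes distinct body positions and is illustrative only):

import numpy as np

class Octree:
    def __init__(self, center, half):
        self.center, self.half = np.asarray(center, float), half
        self.children = None        # None means this is an external node
        self.body = None            # the single body, if any
        self.mass, self.com = 0.0, np.zeros(3)

    def insert(self, p, m=1.0):     # assumes bodies have distinct positions
        p = np.asarray(p, float)
        if self.children is None and self.body is None:  # empty external node
            self.body, self.mass, self.com = p, m, p
            return
        if self.children is None:   # occupied external node: split it
            old, old_m = self.body, self.mass
            self.body, self.children = None, [None] * 8
            self._push(old, old_m)
        self._push(p, m)
        # fold the new body into this node's center of mass and total mass
        self.com = (self.com * self.mass + p * m) / (self.mass + m)
        self.mass += m

    def _push(self, p, m):
        o = ((p[0] > self.center[0]) * 4 + (p[1] > self.center[1]) * 2
             + (p[2] > self.center[2]) * 1)
        if self.children[o] is None:
            h = self.half / 2
            off = np.array([h if o & 4 else -h,
                            h if o & 2 else -h,
                            h if o & 1 else -h])
            self.children[o] = Octree(self.center + off, h)
        self.children[o].insert(p, m)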

Data structure for piecewise circular trajectory in plane

I'm trying to design a data-structure to hold/express a piecewise circular trajectory in the Euclidian plane. The trajectory is constrained to be continuous and have finite curvature everywhere, and therefore the circular arcs meet tangentially.
Storing all the circle centers, radii, and touching points would allow for inspecting the geometry anywhere in O(1) but would require explicit enforcement of the continuity and curvature constraints due to data redundancy. In my view, this would make the code messy.
Storing only the circle touching points (which are waypoints along the curve) along with the curve's initial direction would be sufficient in principle, and avoid data redundancy, but then it would be necessary to do an O(n) calculation to inspect the geometry of arc n, since that arc depends on all the arcs preceding it in the trajectory.
I would like to avoid data redundancy, but I also don't want to make the cost of geometric inspection prohibitive.
Does anyone have any high-level idea/advice to share?
For the most efficient traversal of the trajectory, if I am right you need
the ending curvilinear abscissas of every arc (cumulative),
the radii,
the starting angles,
the coordinates of the centers,
so that for a given s you find the index of the arc, then the azimuth and the coordinates of the point. (Either incrementally for a sequence of points, or by dichotomy for a single point.) That takes five parameters per arc.
Only the cumulative abscissas are global, but you can't do without them for single-point accesses. You can drop the radii and starting angles and retrieve them for any arc from the difference of curvilinear abscissas and the limit angles (see below). This reduces to three parameters.
On the other hand, knowing just the coordinates of the centers and those of the starting and ending points is enough to recover the whole geometry, and this takes two parameters per arc.
The meeting point of two arcs is found on the line through the centers, and if you know one radius, the other follows. And the limit angle is given by the direction of the line. So for an incremental traversal, this non-redundant description can do.
For convenient computation, knowing s and the arc index, consider the vectors from the center to the centers of the adjoining arcs. Rotate them so that the first becomes horizontal. The components of the other will give you the amplitude angle. The fraction (s - S_(i-1)) / (S_i - S_(i-1)) of the amplitude gives you the azimuth of the point, to which you apply the counter-rotation.
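A sketch of single-point access with the five-parameter representation, in Python; S holds the cumulative ending abscissas, theta0 the starting angles, and turn is +1 or -1 per arc (all names assumed):

import numpy as np
from bisect import bisect_left

def point_at(s, S, centers, radii, theta0, turn):
    # S: cumulative ending abscissas (sorted); assumes 0 <= s <= S[-1].
    # Locate the arc by dichotomy, then sweep from its starting angle.
    i = bisect_left(S, s)
    s0 = S[i - 1] if i > 0 else 0.0
    theta = theta0[i] + turn[i] * (s - s0) / radii[i]  # swept angle = arc length / radius
    return centers[i] + radii[i] * np.array([np.cos(theta), np.sin(theta)])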
I'd store items with the data required to get info for any point of that element. For example, an arc needs x, y, initial direction, radius, length (or end point, or angle difference, or whatever you find easiest).
Because you need continuity (same x, y, same bearing, perhaps same curvature) between two ending points, a node with these properties is needed. Notice these properties are common to arcs and straights (a straight being a special arc with zero curvature, i.e. infinite radius). So you can treat a node the same as an item.
The trajectory should be calculated before any request. So you have all items-data in advance.
The container depends on how you request info.
If the trajectory can be somehow represented in a grid, then you better use a quad-tree.
I guess you must find the item from an x,y position or an accumulated-length input. You will have to iterate through the container to find the element closest to the input data. Sorted data may help.
My choice is a simple vector with the consecutive elements, which happens to be sorted on accumulated trajectory length.
Finding by x,y in an x-sorted container (or a tree) is not so simple, because a given x,y may have perpendiculars to several items (consecutive or not, near or not), and you need to select the nearest one.

Approximation of a solid by a union of spheres

I've got a 3D solid, represented as the union of a set of polyhedral convex hulls. (Or a single convex, if that makes things easier.) I'd like to approximate that solid as the union of a set of spheres, in a way which minimizes both the number of spheres in the set and the error in the approximation. (The latter objective is deliberately vague: any reasonable error metric will do. Likewise, the way in which the objectives are combined is up in the air; either the number of spheres or the error metric could be constrained, or some function of the two could be minimized. I don't want to specify myself into a corner.)
The approximation does not need to entirely contain or be entirely contained by the original set. Each sphere may have an arbitrary radius.
This feels like the sort of problem that's NP-complete, and in any case unlikely to be practical using exact methods, so I'm assuming the solution lies in the realm of stochastic optimization. It feels like some variant of k-means might fit (assigning uncovered locations to their closest spheres, and refining the spheres to cover some of them), but I'm not sure how to handle multiply-covered locations, or how to find the local, not-necessarily-covering-everything optimum even for a single sphere. Also, for iterative methods efficiency is important, and doing 3D boolean operations is not going to be efficient.
The problem is not simple, but has been studied previously. The central concept
is the medial axis, which can be
viewed as a representation of an object by an infinite union of balls.
A key paper addressing your question is:
"The power crust, unions of balls, and the medial axis transform."
Nina Amenta, Sunghee Choi, Ravi Krishna Kolluri. Computational Geometry.
Volume 19, Issues 2–3, July 2001, Pages 127–153.
(Images source: From Point Clouds to Power Crusts.)
A second paper is
Cazals, Frédéric, et al. "Greedy Geometric Algorithms for Collection of Balls, with Applications to Geometric Approximation and Molecular Coarse-Graining." Computer Graphics Forum. Vol. 33. No. 6. 2014.
whose 1st sentence is "Choosing balls which best approximate a 3D object is a non-trivial problem."!
Their primary application is to molecular models, which might be far
from your interests.
Hm, my best idea so far involves support vector machines. Turn your object into a whole bunch of (probably evenly spaced) points within and on the surface of the object. Train an SVDD model using a linear kernel (see libsvm for an SVDD implementation). The decision function of the model then represents an implicit surface defined by the support vectors of the model (and rho). Turning down the cost will get you more support vectors, turning it up gets you fewer.
Unfortunately, the nature of SVMs is such that the areas covered by nearby support vectors will, uh, 'blob' together.
(sorry, my intuition for SVMs is entirely geometric/visual.)
Now, you don't have nice crisp spheres, but (massive hand waving!) hopefully the algorithm chose a useful distribution of centers for spheres.
Finally, you can concoct a function that computes the error as a function of the radii of spheres centered on all those points. Then just feed that into a nonlinear optimizer and tell it to minimize. Bam.
If you're willing to throw more CPU power at it, you could run another layer of error minimization over top of that one, which reruns the entire above process for different support vector costs, attempting to minimize some combination of error and cost. (Perhaps error/cost.)
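A rough sketch of that pipeline in Python. Note the assumptions: scikit-learn's OneClassSVM (with an RBF kernel) stands in for the libsvm SVDD suggested above, and the error function is a crude piecewise-linear coverage penalty; everything here is illustrative:

import numpy as np
from sklearn.svm import OneClassSVM
from scipy.optimize import minimize

def sphere_centers(samples, nu=0.05):
    # support vectors of a one-class model become candidate sphere centers;
    # larger nu gives more support vectors (hence more spheres)
    model = OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(samples)
    return model.support_vectors_

def fit_radii(centers, samples, inside):
    # inside: boolean mask, True for sample points interior to the solid
    def error(radii):
        d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        signed = (d - np.abs(radii)[None, :]).min(axis=1)  # < 0 if covered
        # penalize uncovered interior samples and covered exterior samples
        return np.where(inside, np.maximum(signed, 0),
                        np.maximum(-signed, 0)).sum()
    r0 = np.full(len(centers), np.ptp(samples) / 10)
    return np.abs(minimize(error, r0, method="Nelder-Mead").x)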
This is what I came up with. This approach is more of an iterative 3D boolean operation so it might not be what you're looking for. The surface seems more difficult so I concentrated on that.
Overview
Basically, add spheres inside the shape at positions that maximize coverage of the surface. We convert the shape into a 3D array of signed byte values; these values are points that will be gobbled up by spheres. We add one sphere at a time inside the object, then grow/shrink it in different directions to "eat" as many points as possible. The goal is to rack up as many points as possible per sphere; points are earned by summing the values within the volume of the sphere. With each sphere added, we count that area as used and set its array values to 0.
(ASCII diagram, panels (A) through (F): a 2D cross-section of the shape drawn as a grid of values, where Z marks cells far outside, X = -1, Y = -2, digits 1-3 are interior values rising toward the surface, and # marks cells eaten by placed spheres. Panel (A) shows the outline, (B) the value grid, and (C) through (F) successive spheres growing and consuming points.)
(A) The shape that we want to fill. (2D used here but 3D would be similar)
(B) Convert the shape into a 3D grid of points. The surface gets the largest number, and as you work toward the center the numbers settle to low positive values (like 1); outside the shape gets negative numbers; the deep interior gets 1.
(C) Add a small sphere (we will grow it).
(D) Expand the sphere until it gobbles up the maximum number of points.
(E) Add the next sphere and grow it.
(F) Add another sphere. This one is small.
Process
5) First break the shape down into a 3D grid of blocks at some resolution.
10) Then give the most "points" to the blocks around the surface: high points for the blocks that actually touch the surface, and lower points as you move inward or outward. Going outward, the points should quickly become negative, as this prevents protruding spheres. Moving inward from the surface, the points should settle at 1, so that the central area is all ones. These points can be set up in different ways to produce different results.
15) Find a location on the inside edge of the shape that is outside any sphere. While being near the edge is not required, it does minimize the search area. If no location can be found, go to 80.
20) Place a sphere with zero radius inside the shape at that uncovered location.
25) Move the sphere up/down until the points are maximized (simulated annealing)
26) Move the sphere in/out until the points are maximized
27) Move the sphere left/right until the points are maximized
28) Move the Top of the sphere up/down until the points are maximized (simulated annealing/hill climbing)
29) Move the Bottom of the sphere up/down until the points are maximized
30) Move the Left side of the sphere in/out until the points are maximized
31) Move the Right side of the sphere in/out until the points are maximized
32) Move the Front of the sphere in/out until the points are maximized
50) If any changes in 25-32 then repeat (goto 25)
70) Subtract out the points of the last added sphere: set all internal values (positive numbers) to zero, and do not adjust external values (negative numbers). Go to 15.
80) (optional) Fill in any internal gaps. Basically, visit each element to make sure it is less than or equal to 0. If a positive element is found, go to 20 with that element; if none is found, go to 85. Note: all outside values should be negative, and all internal values covered by a sphere should be 0.
85) Finished
Notes
Since there would be a grid of values, a 1000x1000x1000 workspace would consume about 1 GB.
Not an exact solution
Could take lots of compute for higher resolutions.
For efficiency, smaller spheres can have their pixel ranges pre-recorded so that the sphere does not need to be calculated for each iteration.
A lower-resolution (e.g. 500x500x500) version could be solved first, and the locations and sizes of its spheres then applied to the larger 1000x1000x1000 grid. Also, for very large shapes, sub-sections could be solved independently.
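A sketch of the scoring grid (steps 5-10) and the per-sphere score in Python, using scipy's Euclidean distance transform; the exact score shaping is just one of the "different ways" mentioned in step 10:

import numpy as np
from scipy.ndimage import distance_transform_edt

def score_grid(inside, surface_score=3):
    # inside: boolean voxel array of the solid. Voxels touching the surface
    # score highest; scores settle to 1 in the deep interior and turn
    # negative outside, which penalizes protruding spheres.
    d_in = distance_transform_edt(inside)      # depth below the surface
    d_out = distance_transform_edt(~inside)    # distance outside the solid
    return np.where(inside, np.maximum(surface_score + 1 - d_in, 1), -d_out)

def sphere_score(grid, center, r):
    # total points a sphere would "eat"; maximize this while growing/shrinking
    zz, yy, xx = np.indices(grid.shape)
    mask = ((xx - center[0]) ** 2 + (yy - center[1]) ** 2
            + (zz - center[2]) ** 2) <= r * r
    return grid[mask].sum()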
A good start would be to develop an algorithm to:
1) Find the largest inscribed sphere (the largest sphere that fits inside the solid).
2) Then consider the remaining volume (the solid minus the spheres placed so far).
3) Approximate that remaining volume by a polyhedron.
4) Subdivide that polyhedron into smaller (finer) polyhedra.
5) Repeat from step 1.
There might be some variations, but it seems to answer your question. As you can see, the error function decreases as the number of spheres increases, so you can't minimize both: that is a trade-off. But you can fix one and try to optimize another, e.g. fix the error function to be an acceptable tolerance, and minimize the number of spheres to do it, or vice versa.

Randomly and efficiently filling space with shapes

What is the most efficient way to randomly fill a space with as many non-overlapping shapes? In my specific case, I'm filling a circle with circles. I'm randomly placing circles until either a certain percentage of the outer circle is filled OR a certain number of placements have failed (i.e. were placed in a position that overlapped an existing circle). This is pretty slow, and often leaves empty spaces unless I allow a huge number of failures.
So, is there some other type of filling algorithm I can use to quickly fill as much space as possible, but still look random?
Issue you are running into
You are running into the Coupon collector's problem because you are using a technique of Rejection sampling.
You are also making strong assumptions about what a "random filling" is. Your algorithm will leave large gaps between circles; is this what you mean by "random"? Nevertheless it is a perfectly valid definition, and I approve of it.
Solution
To adapt your current "random filling" to avoid the rejection sampling coupon-collector's issue, merely divide the space you are filling into a grid. For example if your circles are of radius 1, divide the larger circle into a grid of 1/sqrt(2)-width blocks. When it becomes "impossible" to fill a gridbox, ignore that gridbox when you pick new points. Problem solved!
Possible dangers
You have to be careful how you code this however! Possible dangers:
If you do something like if (random point in invalid grid){ generateAnotherPoint() }, then you forfeit the benefit / core idea of this optimization.
If you do something like pickARandomValidGridbox(), then you will slightly reduce the probability of placing circles near the edge of the larger circle (though this may be fine if you're doing this for a graphics art project rather than a scientific or mathematical one). However, if you make the grid size 1/sqrt(2) times the radius of the circle, you will not run into this problem, because it will be impossible to draw circles in gridboxes at the edge of the large circle, and thus you can ignore all gridboxes at the edge.
Implementation
Thus the generalization of your method to avoid the coupon-collector's problem is as follows:
Inputs: large circle coordinates/radius(R), small circle radius(r)
Output: set of coordinates of all the small circles
Algorithm:
divide your LargeCircle into a grid of r/sqrt(2)
ValidBoxes = {set of all gridboxes that lie entirely within LargeCircle}
SmallCircles = {empty set}
until ValidBoxes is empty:
    pick a random gridbox Box from ValidBoxes
    pick a random point inside Box to be center of small circle C
    check neighboring gridboxes for other circles which may overlap*
    if there is no overlap:
        add C to SmallCircles
        remove the box from ValidBoxes   # possible because grid is small
    else if there is an overlap:
        increase the Box.failcount
        if Box.failcount > MAX_PERGRIDBOX_FAIL_COUNT:
            remove the box from ValidBoxes
return SmallCircles
(*) This step is also an important optimization, which I can only assume you do not already have. Without it, your doesThisCircleOverlapAnother(...) function is incredibly inefficient at O(N) per query, which will make filling in circles nearly impossible for large ratios R>>r.
This is the exact generalization of your algorithm to avoid the slowness, while still retaining the elegant randomness of it.
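A runnable sketch of the above in Python. The ±3-box neighborhood follows from the box width: centers less than 2r apart can be at most ceil(2*sqrt(2)) = 3 boxes apart along an axis. Names and defaults are illustrative:

import random

def fill_circle(R, r, max_fails=30, rng=random):
    w = r / 2 ** 0.5                       # gridbox width r/sqrt(2)
    n = int(R // w) + 1
    valid = {}                             # box -> fail count
    for i in range(-n, n):
        for j in range(-n, n):
            # keep only boxes whose farthest corner leaves room for a circle
            cx = max(abs(i), abs(i + 1)) * w
            cy = max(abs(j), abs(j + 1)) * w
            if (cx * cx + cy * cy) ** 0.5 <= R - r:
                valid[(i, j)] = 0
    circles, grid = [], {}                 # grid: box -> placed center
    while valid:
        box = rng.choice(list(valid))
        x = (box[0] + rng.random()) * w
        y = (box[1] + rng.random()) * w
        ok = True
        for di in range(-3, 4):            # see lead-in for the ±3 bound
            for dj in range(-3, 4):
                c = grid.get((box[0] + di, box[1] + dj))
                if c and (c[0] - x) ** 2 + (c[1] - y) ** 2 < (2 * r) ** 2:
                    ok = False
        if ok:
            circles.append((x, y))
            grid[box] = (x, y)             # box width < 2r: one center per box
            del valid[box]
        else:
            valid[box] += 1
            if valid[box] > max_fails:
                del valid[box]
    return circles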
Generalization to larger irregular features
Since you've commented that this is for a game and you are interested in irregular shapes, you can generalize this as follows. For any small irregular shape, enclose it in a circle that represents how far you want it to be from other things. Your grid can be the size of the smallest terrain feature; larger features can occupy 1x2, 2x2, 3x2, 3x3, etc. contiguous blocks.
Note that many games with features spanning both large distances (mountains) and small distances (torches) often require grids that are recursively split (i.e. some blocks are split into further 2x2 or 2x2x2 subblocks), generating a tree structure. That structure, with extensive bookkeeping, will let you randomly place the contiguous blocks, but it requires a lot of coding. What you can do instead is use the circle-grid algorithm to place the larger features first (while there's lots of space to work with on the map, you can just check adjacent gridboxes for a collision without running into the coupon-collector's problem), then place the smaller features. If you can place your features in this order, this requires almost no extra coding besides checking neighboring gridboxes for collisions when you place a 1x2/3x3/etc. group.
One way to do this that produces interesting looking results is
create an empty NxM grid
create an empty has-open-neighbors set
for i = 1 to NumberOfRegions:
    pick a random point in the grid
    assign that grid point a (terrain) type
    add the point to the has-open-neighbors set
while has-open-neighbors is not empty:
    foreach point in has-open-neighbors:
        get neighbor-points as the immediate neighbors of point
            that don't have an assigned terrain type in the grid
        if none:
            remove point from has-open-neighbors
        else:
            pick a random neighbor-point from neighbor-points
            assign its grid location the same (terrain) type as point
            add neighbor-point to the has-open-neighbors set
When done, has-open-neighbors will be empty and the grid will have been populated with at most NumberOfRegions regions (some regions with the same terrain type may be adjacent and so will combine to form a single region).
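A direct transcription in Python (illustrative: 4-connected neighbors, terrain types drawn uniformly at random):

import random

def grow_regions(n, m, num_regions, num_types, rng=random):
    grid = [[None] * m for _ in range(n)]
    open_set = set()
    for _ in range(num_regions):
        p = (rng.randrange(n), rng.randrange(m))
        grid[p[0]][p[1]] = rng.randrange(num_types)   # random terrain type
        open_set.add(p)
    while open_set:
        for x, y in list(open_set):
            nbrs = [(a, b)
                    for a, b in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                    if 0 <= a < n and 0 <= b < m and grid[a][b] is None]
            if not nbrs:
                open_set.discard((x, y))    # fully surrounded; retire it
            else:
                a, b = rng.choice(nbrs)
                grid[a][b] = grid[x][y]     # spread this region's type
                open_set.add((a, b))
    return grid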
Sample output using this algorithm with 30 points, 14 terrain types, and a 200x200 pixel world (image omitted).
How about using a 2-step process:
Choose a bunch of n points randomly -- these will become the centres of the circles.
Determine the radii of these circles so that they do not overlap.
For step 2, for each circle centre you need to know the distance to its nearest neighbour. (This can be computed for all points in O(n^2) time using brute force, though faster algorithms exist for points in the plane, e.g. using a k-d tree or a Delaunay triangulation.) Then simply divide that distance by 2 to get a safe radius. (You can also shrink it further, either by a fixed amount or by an amount proportional to the radius, to ensure that no circles will be touching.)
To see that this works, consider any point p and its nearest neighbour q, which is some distance d from p. If p is also q's nearest neighbour, then both points will get circles with radius d/2, which will therefore be touching; OTOH, if q has a different nearest neighbour, it must be at distance d' < d, so the circle centred at q will be even smaller. So either way, the 2 circles will not overlap.
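With scipy available, step 2 can use a k-d tree to find each centre's nearest neighbour in O(n log n) overall. A minimal sketch; the shrink factor implements the optional extra margin mentioned above:

import numpy as np
from scipy.spatial import cKDTree

def radii_from_nearest(centers, shrink=1.0):
    # k=2: the first hit is the point itself, the second its nearest neighbour
    dist, _ = cKDTree(centers).query(centers, k=2)
    return shrink * dist[:, 1] / 2   # half the nearest-neighbour distance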
My idea would be to start out with a compact grid layout. Then take each circle and perturb it in some random direction. The distance in which you perturb it can also be chosen at random (just make sure that the distance doesn't make it overlap another circle).
This is just an idea and I'm sure there are a number of ways you could modify it and improve upon it.
