Algorithm for partially filling a polygonal mesh

Overview:
I have a simple plastic sandbox represented by a 3D polygonal mesh. I need to be able to determine the water level after pouring a specific amount of water into the sandbox.
The water is distributed evenly from above when poured.
No fluid simulation; the water is poured very slowly.
It needs to be fast.
Question:
What kind of techniques/algorithms could I use for this problem?
I'm not looking for a program or similar that can do this, just algorithms - I'll do the implementation.

Just an idea:
First you compute all saddle points. Tools like discrete Morse theory or topological persistence might be useful here, but I know too little about them to be sure.

Next you iterate over all saddle points, starting at the lowest, and compute the point in time after which water starts to cross that point. This is the time at which the shallower (in terms of volume versus surface area) of the two adjacent basins has reached a level that matches the height of that saddle point. From that point onward, water pouring onto that surface will flow over to the other basin and contribute to its water level instead, until the two basins have reached an equal level. After that, they will be treated as one single basin. Along the way you might have to correct the times when other saddle points will be reached, as the area associated with basins changes. You iterate in order of increasing time, not increasing height (e.g. using a heap with a decrease-key operation). Once the final pair of basins has reached equal height, you are done; afterwards there is only a single basin left.
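For the heap, note that Python's heapq (for example) has no decrease-key operation; the usual workaround, sketched below, is lazy deletion: whenever a saddle's crossing time changes, push a fresh entry and remember the latest valid time, then silently discard stale entries as they surface. The geometry that actually computes crossing times is assumed to live elsewhere.

```python
import heapq
import itertools

events = []                  # heap entries: (time, tiebreak, saddle)
latest = {}                  # saddle -> time of its latest valid entry
counter = itertools.count()  # unique tiebreak so saddles themselves never compare

def schedule(saddle, time):
    """(Re)schedule a saddle's crossing time; any older entry becomes stale."""
    latest[saddle] = time
    heapq.heappush(events, (time, next(counter), saddle))

def next_event():
    """Pop the earliest still-valid event, skipping stale entries."""
    while events:
        time, _, saddle = heapq.heappop(events)
        if latest.get(saddle) == time:
            del latest[saddle]  # consume the event so it cannot fire twice
            return time, saddle
    return None

schedule("saddle A", 3.0)
schedule("saddle B", 4.0)
schedule("saddle A", 5.0)    # basin areas changed, A's crossing moved later
print(next_event())          # (4.0, 'saddle B') -- the stale 3.0 entry is skipped
```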
On the whole, this gives you a sequence of “interesting” times, where things change fundamentally. In between these, the problem will be much more local, as you only have to consider the shape of a single basin to compute its water level. In this local problem, you know the volume of water contained in that basin, so you can e.g. use bisection to find a suitable level for that. The adjacent “interesting” times might provide useful end points for your bisection.
To compute the volume of a triangulated polytope, you can use a 3D version of the shoelace formula: for every triangle, take its three vertices and compute their determinant. Sum these up and divide by 6, and you have the volume of the enclosed space. Make sure that you orient all your triangles consistently, i.e. either all as seen from the inside or all as seen from the outside. The choice decides the overall sign; try it out to see which one is which.
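A minimal sketch of this in Python, assuming the mesh is an indexed triangle list; the sanity check at the end uses a tetrahedron of volume 1/6:

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Sum det([a, b, c]) over all triangles and divide by 6. The sign of
    the result tells you which way your winding convention points."""
    total = 0.0
    for i, j, k in triangles:
        total += np.linalg.det(np.array([vertices[i], vertices[j], vertices[k]]))
    return total / 6.0

# Sanity check: the unit tetrahedron has volume 1/6.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]  # consistent outward winding
print(mesh_volume(verts, tris))  # 0.16666...
```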
Note that your question might need refinement: when the level in a basin reaches two saddle points at exactly the same height, where does the water flow? Without fluid simulation this is not well defined, I guess. You could argue that it should be distributed equally among all adjacent basins. You could argue that such situations are unlikely to occur in real data, and therefore choose one neighbour arbitrarily, implying that this saddle point has infinitesimally less height than the other ones. Or you could come up with a number of other solutions. If this case is of interest to you, then you might need to clarify what you expect there.

A simple solution comes to mind:
binary-search your way through different heights of water, computing the volume of water contained.
i.e.
Start with an upper estimate for the water's height: the depth D of the sandbox.
Note that since sand is porous, the maximum volume will be with the box filled to the brim with water;
any more water would just pour back out into the grass in our hypothetical back-yard.
Also note that this means you don't need to worry about saddle points, or multiple water levels, in your solution;
again, we're assuming regular porous sand here, not mountains made out of rock.
Compute the volume of water contained by height D.
If it is within your approximation threshold, quit.
Otherwise, adjust your estimate with a different height, and repeat.
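A minimal sketch of that search loop, assuming a volume_at_height(h) helper like the per-triangle computation sketched further down. Since the contained volume can only grow with the height, plain bisection converges reliably:

```python
def find_water_level(volume_at_height, target_volume, depth, tol=1e-6):
    """Bisect on the water height h in [0, depth] until the contained volume
    matches target_volume. volume_at_height must be non-decreasing in h,
    which holds for any fixed sandbox geometry."""
    lo, hi = 0.0, depth
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if volume_at_height(mid) < target_volume:
            lo = mid      # not enough water at this level: raise the floor
        else:
            hi = mid      # too much (or exactly enough): lower the ceiling
    return 0.5 * (lo + hi)
```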
Note that computing the volume of water above the surface of the sand for any given triangular piece of the sand is easy.
It's the volume of a triangular prism plus the volume of the tetrahedron which is in contact with the sand.
Note that the volume of water below the sand line would be calculated similarly, but the volume would be less, since part of it would be occupied by the sand.
I suggest internet-searching for typical air-void contents of sand, or water-holding capacity.
Or whatever phrases would return a sane result.
Also note that some triangles may have zero water above the sand, if the sand is above the water line.
Once you have the volume of water both above and below the sand-line for a single triangle of your mesh, you just loop over all of the triangles to get the total volume for your entire sandbox, for the given height.
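Here is a sketch of that loop for the above-sand part. It uses the fact that the prism-plus-tetrahedron solid over a fully submerged triangle is a truncated prism, whose volume is its projected xy-area times the mean depth of its vertices below the water plane. Triangles straddling the water line would need to be clipped against z = h first; this sketch simply skips them, which slightly underestimates the volume but is harmless on a fine mesh:

```python
def water_volume_above_sand(vertices, triangles, h):
    """Volume of water between the sand surface and the plane z = h."""
    total = 0.0
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        if max(a[2], b[2], c[2]) > h:
            continue  # dry or straddling triangle: clipping omitted here
        # projected area in the xy-plane (2D shoelace formula)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (c[0] - a[0]) * (b[1] - a[1]))
        # truncated prism: projected area times mean water depth over the triangle
        total += area * (h - (a[2] + b[2] + c[2]) / 3.0)
    return total
```

Feeding this into the bisection sketched above is then just find_water_level(lambda h: water_volume_above_sand(verts, tris, h), poured_volume, depth); the below-sand pore water would add a similar term scaled by the sand's porosity.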
Note that this is a pretty dumb algorithm, but I suspect it will have decent performance compared to a fancy-schmancier algorithm that tries to do something more clever.
Remember that this is just a handful of multiplications and summations for each triangle, and that blind loops with few if statements or other flow control tend to execute quickly, since the processor can pipeline them well.
This method may be parallelized easily instead of looping serially over each triangle, if you have a highly detailed mesh of a sandbox and want to shove the calculations into multiple cores.
Or keep the loops, and shove different heights into each core.
Or something else; I leave parallelization and speeding up as an exercise for the reader.

Related

Optimally filling a 3D sphere with smaller spheres

I'm trying to optimally fill a 3D spherical volume with "particles" (represented by 3D XYZ vectors) that need to maintain a specific distance from each other, while attempting to minimize the amount of free space present in-between them.
There's one catch though: the particles themselves may fall on the boundary of the spherical volume; they just can't exist outside of it. Ideally, I'd like to maximize the number of particles that fall on this boundary (which makes this a kind of spherical packing problem, I suppose) and then fill the rest of the volume inwards.
Are there any kinds of algorithms out there that can solve this sort of thing? It doesn't need to be exact, but the key here is that the density of the final solution needs to be reasonably accurate (+/- ~5% of a "perfect" solution).
There is no single formula which fills a sphere optimally with n spheres. On this Wikipedia page you can see the optimal configurations for n <= 12. For the optimal configurations for n <= 500 you can view this site. As you can see on these sites, different numbers of spheres have different optimal symmetry groups.
Your constraints are a bit vague, so it's hard to say for sure, but I would try a field approach for this. First see:
Computational complexity and shape nesting
Path generation for non-intersecting disc movement on a plane
How to implement a constraint solver for 2-D geometry?
and sub-links where you can find some examples of this approach.
Now the algo:
place N particles randomly inside the sphere
N should be safely low, so that it is smaller than your solution's particle count.
start the field simulation
Use your solution rules to create attractive and repulsive forces, and drive your particles via Newton/d'Alembert physics. Do not forget to add friction (so movement will stop over time) and the sphere-volume boundary.
stop when your particles stop moving
That is, stop once max(|particles_velocity|) < threshold.
now check that all particles are correctly placed
i.e. not breaking any of your rules. If yes, remember this placement as a solution and try again from #1 with N+1 particles. If not, stop and use the last correct solution.
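A minimal sketch of that simulation, with repulsive-only forces, friction, and a hard clamp at the sphere boundary. Every constant here (time step, friction, force law, stopping threshold) is illustrative rather than tuned, and you would substitute your own attraction/repulsion rules; the rule check in #4 and the outer N-search loop are left out:

```python
import numpy as np

rng = np.random.default_rng(0)

def relax(n, sphere_radius, min_dist, steps=5000, dt=0.02, friction=0.9):
    # 1. random start positions, uniform inside the sphere
    pos = rng.normal(size=(n, 3))
    pos *= (sphere_radius * rng.random(n) ** (1 / 3)
            / np.linalg.norm(pos, axis=1))[:, None]
    vel = np.zeros((n, 3))
    for _ in range(steps):
        # 2. repulsive force between every pair closer than min_dist
        diff = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(diff, axis=2)
        np.fill_diagonal(dist, np.inf)
        push = np.zeros_like(dist)
        close = dist < min_dist
        push[close] = (min_dist - dist[close]) / np.maximum(dist[close], 1e-9)
        force = (diff * push[:, :, None]).sum(axis=1)
        vel = friction * (vel + dt * force)   # friction bleeds off energy
        pos += dt * vel
        r = np.linalg.norm(pos, axis=1)       # enforce the sphere boundary
        out = r > sphere_radius
        pos[out] *= (sphere_radius / r[out])[:, None]
        # 3. stop once the particles have (almost) stopped moving
        if np.abs(vel).max() < 1e-4:
            break
    return pos

points = relax(100, sphere_radius=10.0, min_dist=2.0)
```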
To speed this up you can add more particles at a time instead of just one (N+1), similarly to binary search (add 32 particles while you can, then just 16, and so on). Also, you do not need to use random locations in #1 for the subsequent runs: you can let the particles start at the positions from the last run's solution.
How to determine the accuracy of the solution is an entirely different matter. As you did not provide exact rules, we can only guess. I would try to estimate the ideal particle density and compute the ideal particle count from the sphere's volume. You can also use this for the initial guess of N, and then compare it with the final N.

Trouble finding shortest path across a 2D mesh surface

I asked this question three days ago and I got burned by contributors because I didn't include enough information. I am sorry about that.
I have a 2D matrix and each array position relates to the depth of water in a channel, I was hoping to apply Dijkstra's or a similar "least cost path" algorithm to find out the least amount of concrete needed to build a bridge across the water.
It took some time to format the data into a clean version, so I've learned some rudimentary Matlab skills doing that. I have removed most of the land so that the shoreline is now standardised to a certain value. My plan is to use a loop to move through each "pixel" on the "west" shore and run a least-cost algorithm against it to the closest "east" shore, moving through the entire mesh and ultimately finding the overall least-cost path.
This is my problem: fitting the data to any of the algorithms. Unfortunately I get overwhelmed by options and different formats, because the other examples are for other use cases.
My other consideration is that the shortest-cost path, once calculated, will be a jagged line, which would not be suitable for a bridge, so I need to constrain the bend radius of the path if at all possible, and I don't know how to go about doing that.
[A picture of the channel was attached here.]
Any advice on an approach would be great; I just need to know if someone knows a method that should work, then I will spend the time learning how to fit the data.
You can apply Dijkstra to your problem in this way:
the two "dry" regions you want to connect correspond to matrix entries with value 0; the other cells have a positive value designating the depth (or the cost of filling this place with concrete)
your edges are the connections of neighbouring cells in your matrix. (It can be a 4- or 8-neighbourhood.) The weight of the edge is the arithmetic mean of the values of the connected cells.
then you apply the Dijkstra algorithm with a starting point in one "dry" region and an end point in the other "dry" region.
The cheapest path will connect two cells of value 0, and its weight will correspond to the sum of the costs of the cells visited. (One half of each cell's weight comes from the edge entering the cell, the other half from the edge leaving it.)
This way you will get a possibly rather crooked path leading over the water, which may be a helpful hint for where to build a cheap bridge.
You can speed up the calculation by using the A* algorithm. For that, one needs, for each cell, a lower bound on the remaining cost of reaching the other side. Such a lower bound can be calculated by examining the "concentric rings" around a point, as long as the rings do not contain a 0-cell of the other side. The sum of the minimal cell values over these rings is then a lower bound on the remaining cost.
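A minimal Dijkstra sketch of this setup (without the A* bound), using heapq, a 4-neighbourhood, and edge weights equal to the mean of the two cells' depths. Seeding the heap with every west-shore cell at once also removes the need for the per-pixel outer loop described in the question:

```python
import heapq

def cheapest_crossing(depth, sources, targets):
    """depth: 2D list/array of water depths (0 on dry land). sources and
    targets are sets of (row, col) shore cells on the two sides. Returns the
    cost of the cheapest crossing, i.e. the least concrete needed."""
    rows, cols = len(depth), len(depth[0])
    dist = {s: 0.0 for s in sources}
    heap = [(0.0, s) for s in sources]
    heapq.heapify(heap)
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) in targets:
            return d                       # first target settled is cheapest
        if d > dist.get((r, c), float("inf")):
            continue                       # stale heap entry, skip
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (depth[r][c] + depth[nr][nc])
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")
```

Tracking a predecessor for each cell would recover the actual route; smoothing it into a buildable curve is then the separate bend-radius problem the question raises.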
An alternative approach, which emphasizes the constraint that you require a non-jagged shape for your bridge, would be to use Monte Carlo, simulated annealing or a genetic algorithm, where the initial "bridge" consists of a simple spline curve between two randomly chosen end points (one on each side of the chasm), plus a small number of randomly chosen intermediate points in the chasm. You would end up with a physically 'realistic' bridge and a reasonably optimized cost of concrete.

seeking approximate algorithm to find largest clear circle in an area

Related: Is there a simple algorithm for calculating the maximum inscribed circle into a convex polygon?
I'm writing a graphics program whose goals are artistic rather than mathematical. It composes a picture step by step, using geometric primitives such as line segments or arcs of small angle. As it goes, it looks for open areas to fill in with more detail; as the available open areas get smaller, the detail gets finer, so it's loosely fractal.
At a given step, in order to decide what to do next, we want to find out: where is the largest circular area that's still free of existing geometric primitives?
Some constraints of the problem
It does not need to be exact. A close-enough answer is fine.
Imprecision should err on the conservative side: an almost-maximal circle is acceptable, but a circle that's not quite empty isn't acceptable.
CPU efficiency is a priority, because it will be called often.
The program will run in a browser, so memory efficiency is a priority too.
I'll have to set a limit on level of detail, constrained presumably by memory space.
We can keep track of the primitives already drawn in any way desired, e.g. a spatial index. Exactness of these is not required; e.g. storing bounding boxes instead of arcs would be OK. However the more precision we have, the better, because it will allow the program to draw to a higher level of detail. But, given that the number of primitives can increase exponentially with the level of detail, we'd like storage of past detail not to increase linearly with the number of primitives.
To summarize the order of priorities
Memory efficiency
CPU efficiency
Precision
P.S.
I framed this question in terms of circles, but if it's easier to find the largest clear golden rectangle (or golden ellipse), that would work too.
P.P.S.
This image gives some idea of what I'm trying to achieve. Here is the start of a tendril-drawing program, in which decisions about where to sprout a tendril, and how big, are made without regard to remaining open space. But now we want to know, where is there room to draw a tendril next, and how big? And where after that?
One very efficient way would be to recursively divide your area into rectangular sub-areas, splitting them when necessary to divide occupied areas from unoccupied areas. Then you would simply need to keep track of the largest unoccupied area at each time. See https://en.wikipedia.org/wiki/Quadtree - but you needn't split into squares.
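A minimal sketch of that subdivision, assuming only an overlaps(rect) predicate, which may be as conservative as bounding-box tests. A fuller implementation would maintain the tree incrementally as primitives are drawn and keep the free cells in a heap, rather than re-searching from scratch:

```python
def largest_free_rect(rect, overlaps, min_size):
    """rect is (x, y, w, h). overlaps(rect) answers, conservatively, whether
    any drawn primitive intersects rect. Returns the largest fully free
    sub-rectangle found, or None below the resolution limit."""
    x, y, w, h = rect
    if not overlaps(rect):
        return rect                        # the whole cell is free
    if w < min_size or h < min_size:
        return None                        # give up below the resolution limit
    hw, hh = w / 2, h / 2
    best = None
    for child in ((x, y, hw, hh), (x + hw, y, hw, hh),
                  (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)):
        cand = largest_free_rect(child, overlaps, min_size)
        if cand and (best is None or cand[2] * cand[3] > best[2] * best[3]):
            best = cand
    return best

# Example: one occupied box at (40, 40)-(60, 60) inside a 100x100 canvas.
box = (40, 40, 20, 20)
def overlaps(r):
    x, y, w, h = r
    bx, by, bw, bh = box
    return x < bx + bw and bx < x + w and y < by + bh and by < y + h
print(largest_free_rect((0, 0, 100, 100), overlaps, min_size=1))
```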
Given any rectangle, you can draw a line inside it so that at least one of the rectangles on either side of the line is a golden rectangle. Therefore you can recursively erect partitions within a rectangle so that all but one of the rectangles formed by the partitions are golden rectangles, and the odd rectangle left over is vanishingly small. You could do this to create a quadtree-like structure, where almost all of the rectangles left over are golden rectangles.
This seems like the kind of situation where a randomized algorithm might be helpful. Choose points at random, reject and choose more if they're inappropriate for some reason, then find the min distance from your choices to each of the figures already included. The random point with the max of the mins would be your choice.
The number of sample points might have to increase as the complexity of the figure increases.
The random algorithm could be improved by checking points nearby good choices. Keep checking neighbors until no more improvement is possible.
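A minimal sketch of this sample-then-refine idea, where dist_to_nearest(x, y) stands in for your own clearance query against the figures drawn so far:

```python
import random

def largest_clear_circle(dist_to_nearest, bounds, samples=200, refine=30):
    """bounds is (xmin, ymin, xmax, ymax). Returns (centre, radius) of an
    approximately largest circle clear of all existing figures."""
    xmin, ymin, xmax, ymax = bounds
    # random sampling: keep the point with the max of the min distances
    best = max(((random.uniform(xmin, xmax), random.uniform(ymin, ymax))
                for _ in range(samples)),
               key=lambda p: dist_to_nearest(*p))
    # local improvement: check neighbours until no more improvement
    step = (xmax - xmin) / 20
    for _ in range(refine):
        x, y = best
        cand = max(((x + dx, y + dy) for dx in (-step, 0, step)
                                     for dy in (-step, 0, step)),
                   key=lambda p: dist_to_nearest(*p))
        if dist_to_nearest(*cand) <= dist_to_nearest(*best):
            step /= 2                  # no neighbour improved: search finer
        else:
            best = cand
    return best, dist_to_nearest(*best)
```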
Here's a simple way that uses a fixed amount of memory and time per update, regardless of how many drawing primitives you use. How much memory (and time per update) is needed can be controlled according to how high a "resolution" you need:
Divide the space up into a grid of points. We will maintain a 2D array, d[], which records the minimum distance from the grid point (x, y) to any already-drawn primitive in the entry d[x, y]. Initially, set every element in this array to infinity (or some huge number).
Whenever you draw some primitive, iterate over all grid points (x, y) calculating the minimum distance (or some conservative approximation to it) from (x, y) to the just-drawn primitive. E.g., if the primitive just drawn was a circle of radius r centered at (p, q), then this distance would be sqrt((x-p)^2 + (y-q)^2) - r. Then update d[x, y] with this new distance value if it is smaller than its current value.
The grid point at which the largest circle can be drawn without touching any already-drawn primitive is the grid point that is the farthest away from any primitive drawn so far. To find it, simply scan through d[] to find its maximum value, and note the corresponding indices (x, y). d[x, y] will be the maximum radius you could safely use for this circle.
Repeat steps 2 and 3 as necessary.
A couple of points:
For primitives that have area, you can assign 0 or a negative value to all d[x, y] corresponding to grid points inside the primitive.
For any convex primitive, you can often avoid updating most of the d[] array by scanning rows (or columns) "outward" from the just-drawn primitive's border: the distance from the current grid point to the primitive will never decrease, so once this distance becomes larger than the previous maximum value in d[], we know we can stop scanning the row, because no further distance value computed on it could possibly be less than an existing distance on it.
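A minimal sketch of the scheme, with circles as the example primitive; the grid resolution RES is the knob trading memory and update time for precision. Points inside a circle naturally come out negative, matching the first point above:

```python
import numpy as np

RES = 256                                  # grid resolution: memory/precision knob
xs, ys = np.meshgrid(np.arange(RES), np.arange(RES), indexing="ij")
d = np.full((RES, RES), np.inf)            # distance to nearest drawn primitive

def draw_circle(p, q, r):
    """Step 2 for a circle of radius r at (p, q): elementwise minimum update.
    Grid points inside the circle come out negative."""
    np.minimum(d, np.hypot(xs - p, ys - q) - r, out=d)

def largest_empty_circle():
    """Step 3: the grid point farthest from everything drawn so far, plus
    the radius of the largest circle that fits there."""
    x, y = np.unravel_index(np.argmax(d), d.shape)
    return (int(x), int(y)), float(d[x, y])

draw_circle(64, 64, 20)
draw_circle(200, 180, 35)
print(largest_empty_circle())
```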

How to equally subdivide a closed CGPath?

I've an indeterminate number of closed CGPath elements of various shapes and sizes, all containing a single concave Bézier curve, like the red and blue shapes in the diagram below.
What is the simplest and most efficient method of dividing these shapes into n regions of (roughly) equal size?
What you want is Delaunay triangulation. Here is an example which resembles what you want to do; it uses an AS3 library. Here is an iOS port that should help you:
https://github.com/czgarrett/delaunay-ios
I don't really understand the context of what you want to achieve and what the constraints are. For instance, is there a hard requirement that the subdivided regions are equal size?
Often the solution to a performance problem is not a faster algorithm but a different approach, usually one or more of the following:
Pre-compute the values, or compute as much as possible offline, say by using a server API which is able to do the subdivision offline and cache the results for multiple clients. You could serve the pre-computed result as a bitmap where each colour indexes into the table of values you want to display. Looking up a value would then be a simple matter of indexing the pixel at the touch position.
Simplify or approximate the solution. Would a grid sub-division be accurate enough? At 500 x 6 = 3000 subdivisions, you only have about 51 square points for each region; that's a region of around 7x7 points. At that size the user isn't going to notice if the region is perfectly accurate. You may need to end up aggregating adjacent regions anyway due to touch resolution.
Progressive refinement. You often don't need to compute the entire algorithm up front. Very often algorithms run in discrete (often symmetrical) units, meaning you're often re-using information from previous steps. You could compute just the first step up front, and then use a background thread to progressively fill in the rest of the detail. You could also defer the final calculation until the touch occurs. A delay of up to a second is still tolerable at that point, or in the worst case you can display an animation while the calculation is in progress.
You could use some hybrid approach: possibly compute one or two levels using Delaunay triangulation, and then use a simple, fast triangular sub-division for two more levels.
Depending on the required accuracy, and if discrete samples are not required, the final levels could be approximated using a weighted average between the points of the triangle, i.e., if the touch is halfway between two points, pick the average value between them.

find (near-)minimal covering set of discs on a 2-D plane

OK, say I've got a bunch of discs sitting on a plane in fixed known locations. Each disc is 1 unit in radius. The plane is fully covered by the set of discs, in fact, it is extensively over-covered by the set of discs, by an order of magnitude or two in some areas. I'd like to find a subset of the discs that still fully cover the plane. Optimal is nice, but not necessary.
[Before and after illustrations were attached here.]
It seems to me that there's a dual problem having to do with Delaunay triangulation, but I'm not quite sure that helps me. I also know that this is similar, but not the same as, the disc covering problem in computational geometry. Is this a standard problem whose name I don't know?
Possible approaches seem to me to include growing a covering set using a local greedy search, and iteratively using a nearest-pair query to remove discs one at a time. I'm not sure if either is guaranteed to work well, and I haven't worked through the details.
Oh, and the application is, if you haven't guessed it, finding a subsample of ZIP code centroids to cover a map when making queries, so n is about 50,000.
Game Plan
The following is basically just a more precise restatement of your problem, but it might help:
Enumerate every connected region in the plane that results when the boundaries of all disks are drawn. By assumption, each of these regions is covered by 1 or more disks.
Each region is a "thing to be covered", and each disk is a "covering thing". Find the minimum set cover on this set of regions. This is NP-hard unfortunately.
This might not be exploiting all the structure available in the problem, but it will definitely give you an optimal answer.
Enumerating Regions
Enumerating the regions and recording which disks cover each in step 1 is the tricky part. Regions are not in general convex, which makes intersection tests tricky, and every circle you add potentially doubles the number of regions. Here is how I would approach that:
Forget about the actual location of each region, and define a region only in terms of which disks it is inside and which it is outside. I.e. a region is defined by a length-n vector of 0/1 values, each indicating whether the region inside or outside that disk is to be included in the intersection -- the region in question is formed by intersecting all these n regions. So in principle you could have up to 2^n regions, but in practice some (most) vectors produce empty regions because they entail intersecting two disks that have no intersection -- this is easy to test for, thankfully. It should be straightforward to recursively generate all non-empty regions, except that...
Bad News
Unfortunately I now see that it is necessary to perform full intersection testing, because it's not always possible to tell when a region will be empty. The critical counterexample is that, given two disks A and B that have a small sliver of overlap and another disk C that overlaps each of A and B, depending on the positions of all 3 disks, the intersection of all 3 either may or may not be non-empty. (To see this, draw 3 disks in different colours with 50% opacity in a drawing program, and move them around.)
A Workable Hack
Since generating the exact list of non-empty regions looks like it will be a lot of work and take a long time due to intersection testing, and you claim you don't need optimal solutions, you could try just using a grid of sample points as the set of "things to be covered" instead of the exact list of non-empty regions. It's straightforward to determine which disks cover a given sample point. Then solve minimum set cover as before.
To get confidence that there are no gaps, rerun several times, randomly jittering the sample points' co-ordinates each time. Increase the density of sample points until there is no change in the final result.
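A minimal sketch of this hack with greedy set cover, the standard approximation for minimum set cover (within a ln n factor of optimal). With n around 50,000 discs you would want a spatial index for the disc-to-sample tests instead of the brute-force pass here, but the structure is the same:

```python
import math

def greedy_disc_cover(discs, sample_points, radius=1.0):
    """discs, sample_points: lists of (x, y). Returns indices of a subset of
    discs that covers every coverable sample point."""
    covers = [{i for i, (px, py) in enumerate(sample_points)
               if math.hypot(px - cx, py - cy) <= radius}
              for (cx, cy) in discs]
    uncovered = set(range(len(sample_points)))
    chosen = []
    while uncovered:
        # greedy step: keep the disc covering the most still-uncovered samples
        best = max(range(len(discs)), key=lambda k: len(covers[k] & uncovered))
        gained = covers[best] & uncovered
        if not gained:
            break  # the remaining samples are not covered by any disc
        chosen.append(best)
        uncovered -= gained
    return chosen
```

Re-running with jittered sample points, as suggested above, only requires rebuilding the covers sets.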
