I need to calculate the overlap between two 3D volumes, each of which is described as the union of a set of (possibly overlapping) spheres. If the two sets of spheres are S_i (with radius r_i and center c_i) and S_j (radius r_j and center c_j), consider the two combined volumes in 3D given by union(S_i) and union(S_j), and calculate volume(intersection(union(S_i), union(S_j))).
Is there a reasonably fast algorithm for this?
Also: is there a fast approximate algorithm with a bound on the max error?
(Part of what makes this tricky is that spheres within each set can overlap in arbitrary ways. Most of the time they overlap only a little: for a typical sphere in one set, a few other spheres in the same set overlap it by a few percent of its volume. But they could also overlap much more, and in principle the same 3D point could lie in the interior of far more than two spheres of the same set, so an exact solution would have to consider 3-way, 4-way, and so on up to n-way overlap regions. A fast approximate solution doesn't necessarily have to do that.)
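For a baseline, plain Monte Carlo sampling is a fast approximation whose error bound is probabilistic rather than a hard maximum (the standard error shrinks as 1/sqrt(N)), and it sidesteps the n-way overlap bookkeeping entirely: each sample only asks whether the point lies inside any sphere of each set. A minimal numpy sketch, assuming each sphere is stored as a (cx, cy, cz, r) row; the function name and sample count are illustrative:

```python
import numpy as np

def overlap_volume_mc(spheres_a, spheres_b, n_samples=200_000, seed=None):
    """Estimate volume(intersection(union(S_a), union(S_b))) by sampling."""
    rng = np.random.default_rng(seed)
    a = np.asarray(spheres_a, float)
    b = np.asarray(spheres_b, float)

    # The intersection is contained in union(S_a), so it suffices to
    # sample the bounding box of union(S_a).
    lo = (a[:, :3] - a[:, 3:4]).min(axis=0)
    hi = (a[:, :3] + a[:, 3:4]).max(axis=0)
    pts = rng.uniform(lo, hi, size=(n_samples, 3))

    def in_union(pts, s):
        # a point is in the union iff it is inside at least one sphere
        # (for many spheres/samples, process pts in chunks)
        d2 = ((pts[:, None, :] - s[None, :, :3]) ** 2).sum(axis=2)
        return (d2 <= s[None, :, 3] ** 2).any(axis=1)

    hits = in_union(pts, a) & in_union(pts, b)
    return hits.mean() * np.prod(hi - lo)
```

A spatial grid or k-d tree over the sphere centers would cut the per-sample cost from O(n) to near O(1) for large sets.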
Related
I'm looking for an algorithm for random close packing of spheres in 3D. The trick is that I'd like to pack spheres around a certain number of existing spheres. So for example, given somewhere between 100 and 1000 spheres in 3D (which have fixed positions and sizes; they may overlap, and may be different sizes), I'd like to pack spheres (all same size, positions can be chosen freely) around them (with no overlaps).
The metric for the quality of a packing is the packing density or void fraction. Essentially I'd like the fixed spheres and the packed spheres to occupy a compact volume of space (e.g. roughly spherical, or packed in layers around the fixed spheres) with as few voids in it as possible.
Is there an off-the-shelf algorithm that does this? How would you approach it in a way that balances speed of calculation against packing quality?
UPDATE: Detail on packing density: this depends on what volume is chosen for the calculation. Here, we're looking to pack a certain number of layers of spheres around the fixed ones. Form a surface of points that are exactly a distance d from the surface of the closest fixed sphere; the packing density should be calculated within the volume enclosed by that surface. It's convenient if d is some multiple of the size of the packed spheres. (Assume we can place at least as many free spheres as needed to fill that volume; there may be excess ones, and it doesn't matter where they're placed.)
The fixed spheres and the variable spheres are all of pretty similar sizes (say within a 2x range from smallest to largest). In practice the degree of overlap of the fixed spheres is also limited: no fixed sphere is closer than a certain distance (around 0.2-0.3 diameters) to any other fixed sphere, so they are guaranteed to be spread out and/or to overlap only a few neighbors rather than all overlapping each other.
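To make that metric concrete, here is a hedged Monte Carlo sketch of the density definition from the update; the (cx, cy, cz, r) sphere layout, function name, and sample count are illustrative assumptions, not part of the question:

```python
import numpy as np

def packing_density(fixed, packed, d, n_samples=200_000, seed=None):
    """Fraction of the offset region (points within distance d of the
    nearest fixed-sphere surface) that is covered by any sphere."""
    rng = np.random.default_rng(seed)
    fixed = np.asarray(fixed, float)
    packed = np.asarray(packed, float)
    spheres = np.vstack([fixed, packed])

    # bounding box of the offset region
    lo = (fixed[:, :3] - fixed[:, 3:4] - d).min(axis=0)
    hi = (fixed[:, :3] + fixed[:, 3:4] + d).max(axis=0)
    pts = rng.uniform(lo, hi, size=(n_samples, 3))

    def surf_dist(pts, s):
        # signed distance to the nearest sphere surface (< 0 = inside)
        return (np.linalg.norm(pts[:, None, :] - s[None, :, :3], axis=2)
                - s[None, :, 3]).min(axis=1)

    in_region = surf_dist(pts, fixed) <= d   # inside the offset surface
    covered = surf_dist(pts, spheres) <= 0   # inside some sphere
    return (in_region & covered).sum() / in_region.sum()
```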
Use a lattice in which neighboring points are separated by the diameter of the fill spheres. Any lattice shape meeting that spacing condition will suffice.
Choose the translation and rotation of the lattice so as to minimize the offsets between the lattice points and the fixed-sphere centers; this produces the world transform.
Fixed Pass 1:
Create a list of any lattice points within a fixed sphere's radius + the diameter of the fill spheres.
For the latter, keep the positional difference vectors (origin - point) in a list.
Flag the lattice points in the list for removal.
Lattice Pass 1:
Combine the distance vectors of any overlapping fixed spheres, i.e. re-base the origin to the overlap point (either a true overlap or one extended to the fill radius), storing the value on one side and flagging it on the other, to permit multiple overlaps.
This is where a decision is needed:
Optimize space over time (computationally slow):
Add points outward from the adjusted origin at radius + fill radius, then iterate over the lattice points, moving one point at a time away from the others until all spacing conditions are met. If the lattice points implement spring logic, an optimal solution is produced, given enough iterations (on the order of N^2 + N). Stop here; done.
Otherwise, pull the remaining lattice points in to fill the void:
Warp the lattice near each overlap point, or near the origin if no overlap exists (the warped region can be as large as needed), pulling the points in to fill the gap.
Lattice Pass 2:
Add missing points, i.e. positions with no other point within fill radius + 1 that are not near a fixed sphere (within its radius + the fill radius) and were not flagged as removed. This should be a small number of points.
Lattice Pass 3:
Adjust all lattice positions to move closer to the proper grid spacing. This monotonically decreases the distances between points, limited to >= radius1 + radius2.
Repeat the passes 3-4 (or more) times, applying a tiny random bias offset (at most 1 to -1 pixel per dimension) to the points created on the first pass to avoid equal-spacing conflicts after the warp. If no suitable gap is created, the solution may settle into a poorly optimized state.
Each fill sphere is centered on a lattice grid point.
I can see many areas of improvement and optimization, but the point was to provide a clear, reasonably fast algorithm that is good, but not guaranteed optimal.
Note the difference between options 1 and 2:
Option 1 creates spheres that collide with other spheres and requires all fill spheres to move multiple times to resolve the collisions.
Option 2 only creates new spheres in empty spaces and moves the rest inward to adapt, which converges much faster, since there are no collisions to resolve.
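For illustration, here is a minimal numpy sketch of the lattice seeding plus the Fixed Pass 1 removal described above; the warping and relaxation passes are omitted, and the simple cubic lattice, function name, and argument layout are assumptions:

```python
import numpy as np

def seed_lattice(fixed, fill_r, layers=3, spacing=None):
    """Seed a cubic lattice around the fixed spheres and drop points
    whose fill sphere would collide with a fixed sphere."""
    fixed = np.asarray(fixed, float)                # rows: (cx, cy, cz, r)
    s = 2 * fill_r if spacing is None else spacing  # pitch = fill diameter
    pad = layers * 2 * fill_r
    lo = (fixed[:, :3] - fixed[:, 3:4]).min(axis=0) - pad
    hi = (fixed[:, :3] + fixed[:, 3:4]).max(axis=0) + pad
    axes = [np.arange(a, b, s) for a, b in zip(lo, hi)]
    pts = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)

    # Fixed Pass 1: flag (here simply drop) lattice points too close to
    # a fixed sphere for a fill sphere centered there to fit.
    dist = np.linalg.norm(pts[:, None, :] - fixed[None, :, :3], axis=2)
    keep = (dist >= fixed[None, :, 3] + fill_r).all(axis=1)
    return pts[keep]
```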
For my simulation purposes, I want to generate k randomly distributed spheres (all with the same radius) in a confined 3D space (inside a rectangular box), where k is on the order of 1000. The spheres should not impinge on one another.
So I want to generate k random points in 3D space, each at least a distance d away from all the others. Considering the number of points and the frequency at which I need them during the simulation, I don't want to apply brute force; I'm looking for an efficient algorithm to achieve this.
How about just starting with some regular tessellation of the space (i.e. some primitive 3D lattice) and putting a single point somewhere in each tile? You'd then only need to check a small number of neighboring tiles for proximity.
To get a more statistically uniform, i.e. less regular, set of points, you could:
perturb points in space
generate an overly dense lattice and reject some points
"warp" the space so that the lattice was more dense in certain areas
You could perturb the points sequentially, giving you a Monte Carlo chain over their coordinates and potentially saving work elsewhere. Presumably you could tailor this so that the equilibrium distribution is the one you want.
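As a concrete starting point, here is a minimal sketch of the jittered-lattice idea; the cell size and jitter fraction are assumptions. With cell side s = d*(1 + jitter) and per-axis jitter of at most jitter*d/2, points in adjacent cells stay at least s - jitter*d = d apart along the shared axis, so the minimum distance holds by construction and no rejection step is needed:

```python
import numpy as np

def jittered_points(k, d, jitter=0.3, seed=None):
    """k points with pairwise distance >= d: one jittered point per
    cubic cell of a regular grid."""
    rng = np.random.default_rng(seed)
    n = int(np.ceil(k ** (1 / 3)))       # cells per axis
    while n ** 3 < k:                    # guard against float rounding
        n += 1
    s = d * (1 + jitter)                 # cell side
    ijk = np.stack(np.meshgrid(*[np.arange(n)] * 3, indexing="ij"),
                   axis=-1).reshape(-1, 3)
    centers = (ijk + 0.5) * s
    pts = centers + rng.uniform(-jitter * d / 2, jitter * d / 2,
                                centers.shape)
    return pts[rng.permutation(len(pts))[:k]]   # pick k of the n^3 points
```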
I have a 3D mesh consisting of triangle polygons. My mesh can be oriented either left or right:
I'm looking for a method to detect mesh direction: right vs left.
So far I tried to use mesh centroid:
Compare centroid to bounding-box (b-box) center
See if centroid is located left of b-box center
See if centroid is located right of b-box center
But the problem is that the centroid and b-box center don't have a reliable difference in most cases.
I wonder what is a quick algorithm to detect my mesh direction.
Update
An idea proposed by @collapsar is to order the convex hull points in clockwise order and investigate the longest edge:
Update
Another approach, suggested by @YvesDaoust, is to investigate two specific regions of the mesh:
Count the vertices in two predefined regions of the bounding box. This is a fairly simple O(N) procedure.
Unless your dataset is sorted in some way, you can't be faster than O(N). But if the point density allows it, you can subsample by taking, say, every tenth point while applying the procedure.
You could also keep your centroid idea, but apply it to a subregion as well.
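A minimal sketch of the counting test, assuming the opening shows up as a vertex-sparse slab of the bounding box; the slab fraction and the choice of the x axis are tunable assumptions:

```python
import numpy as np

def opens_left_by_counting(verts, frac=0.25):
    """Compare vertex counts in the left/right slabs of the bounding
    box; the sparser side is taken to be the opening."""
    x = np.asarray(verts, float)[:, 0]
    lo, hi = x.min(), x.max()
    width = hi - lo
    left = (x <= lo + frac * width).sum()
    right = (x >= hi - frac * width).sum()
    return left < right    # fewer vertices on the left -> opens left
```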
The efficiency of an algorithm to solve your problem will depend on the data structures that represent your mesh. You might need to be more specific about them in order to obtain a sufficiently performant procedure.
The algorithms are presented in an informal way. For a more rigorous analysis, math.stackexchange might be a more suitable place to ask (or another contributor may be more adept at answering).
The algorithms are heuristic by nature. Proposals 1 and 3 will work fine for meshes whose boundary is mostly locally convex (skipping a rigorous mathematical definition here). Proposal 2 should be less dependent on the mesh shape (and can easily be tuned to cater for ill-behaved shapes).
Proposal 1 (Convex Hull, 2D)
Let M be the set of mesh points, projected onto a 'suitable' plane as suggested by the graphics you supplied.
Compute the convex hull CH(M) of M.
Order the n points of CH(M) in clockwise order relative to any point inside CH(M) to obtain a point sequence seq(P) = (p_0, ..., p_(n-1)), with p_0 being an arbitrary element of CH(M). Note that this is usually a by-product of the convex hull computation.
Find the longest edge of the convex polygon implied by CH(M).
Specifically, find k such that the distance d(p_k, p_((k+1) mod n)) is maximal among all d(p_i, p_((i+1) mod n)), 0 <= i < n.
Consider the vector (p_k, p_((k+1) mod n)).
If the y coordinate of its head is greater than that of its tail (i.e. its projection onto the line through (0,0) and (0,1) is oriented upwards), then your mesh opens to the left; otherwise it opens to the right.
Step 3 exploits the condition that the mesh boundary be mostly locally convex: the sides of the convex hull polygon are then all fairly short, with the exception of the side that spans the opening of the mesh.
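A minimal sketch of Proposal 1, assuming scipy (any 2D convex hull routine would do). Note that scipy returns 2D hull vertices in counterclockwise order, so the orientation test above flips: in CCW order the left side of the polygon is traversed downward, so a long edge with decreasing y indicates an opening on the left:

```python
import numpy as np
from scipy.spatial import ConvexHull

def opens_left(points_2d):
    points_2d = np.asarray(points_2d, float)
    hull = ConvexHull(points_2d)            # 2D vertices come out CCW
    verts = points_2d[hull.vertices]
    nxt = np.roll(verts, -1, axis=0)        # successor of each vertex
    k = np.argmax(np.linalg.norm(nxt - verts, axis=1))  # longest edge
    return nxt[k, 1] < verts[k, 1]          # decreasing y -> opens left
```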
Proposal 2 (bisector sampling, 2D)
Order the mesh points by their x coordinates into a sequence seq(M).
Split seq(M) into 2 halves; let seq_left(M), seq_right(M) denote the partition elements.
Repeat the following steps for both point sets.
3.1. Select randomly 2 points p_0, p_1 from the point set.
3.2. Find the bisector p_01 of the line segment (p_0, p_1).
3.3. Test whether p_01 lies within the mesh.
3.4. Keep a count on failed tests.
Statistically, the mesh-point subset that 'contains' the opening will produce more failures for the same number of tests run on each partition. Alternative test criteria would work as well, e.g. recording the average distance d(p_0, p_1) or the average length of the portions of (p_0, p_1) outside the mesh (both are higher on the subset with the opening). Cut off the repetition of step 3 once the difference in test results between the two halves is sufficiently pronounced. For ill-behaved shapes, increase the number of repetitions.
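A minimal sketch of Proposal 2; the point_in_mesh predicate is assumed to be supplied by the caller (e.g. a point-in-polygon test against the mesh boundary), and the trial count is arbitrary:

```python
import numpy as np

def opening_side(points, point_in_mesh, trials=200, seed=None):
    """The half whose random-pair midpoints fall outside the mesh more
    often is taken to contain the opening."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, float)
    halves = np.array_split(points[np.argsort(points[:, 0])], 2)
    fails = []
    for half in halves:
        f = 0
        for _ in range(trials):
            p0, p1 = half[rng.choice(len(half), size=2, replace=False)]
            mid = (p0 + p1) / 2      # bisector of the segment (p0, p1)
            f += not point_in_mesh(mid)
        fails.append(f)
    return "left" if fails[0] > fails[1] else "right"
```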
Proposal 3 (Convex Hull, 3D)
For the sake of completeness only, as your problem description suggests that the analysis effectively takes place in 2D.
Similar to Proposal 1, the computations can be performed in 3D. The convex hull of the mesh points then implies a convex polyhedron whose faces should be ordered by area. Select the face with the maximum area and compute its outward-pointing normal which indicates the direction of the opening from the perspective of the b-box center.
The computation gets more complicated if there is much variation in the side lengths of the minimal bounding box of the mesh points, i.e. if there is a plane in which most of the variation of the mesh point coordinates occurs. In the graphics you've supplied, that would be the plane in which the mesh points are rendered, assuming that their coordinates do not vary much along the axis perpendicular to that plane.
The solution is to identify such a plane, project the mesh points onto it, and then resort to Proposal 1.
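A minimal sketch of the 3D variant's core step, assuming scipy. Since ConvexHull triangulates the hull, this picks the largest triangle rather than the largest merged face, which is a simplification; grouping coplanar triangles by their normals would be a refinement:

```python
import numpy as np
from scipy.spatial import ConvexHull

def opening_normal_3d(points):
    """Outward normal of the largest facet of the 3D convex hull."""
    points = np.asarray(points, float)
    hull = ConvexHull(points)
    tri = points[hull.simplices]            # (m, 3, 3) facet triangles
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    return hull.equations[areas.argmax(), :3]   # outward unit normal
```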
I have a static set of simple polygons (they may be nonconvex but are not self-intersecting) and a large number of query ellipses. Assume that this is all being done in 2D. I need to find the distance between each ellipse and the closest polygon to that ellipse. Distance is defined as the shortest distance between any two points on the ellipse and polygon respectively. If the ellipse intersects a polygon, we can say the distance is 0 or assign some negative value.
A brute-force approach would simply compute the distance between the ellipse and each polygon and return the lowest distance, in O(mn) time per ellipse, where m is the number of polygons and n is the average number of vertices per polygon. I would like to reduce the m term here because I think I can cull the number of polygons being considered through some spatial analysis.
I've considered a few approaches, including Voronoi diagrams, R-trees, and k-d trees. However, most of these seem to involve points, and I'm not sure how to extend them to polygons. I think the most promising approach involves computing bounding boxes for each polygon and the ellipse and using an R-tree to find some set of nearby polygons. However, I'm not quite sure about the best way to find this close set of polygons. Or perhaps there's a better way that I'm overlooking.
Using bounding boxes or disks has the benefit of reducing the work of computing an ellipse/polygon distance to O(1). It also allows you to obtain a lower and an upper bound on the true distance.
Assume you use disks and also enclose the ellipse in a disk. You will need to perform a modified nearest-neighbor search that enumerates the disks such that the lower bound of their distance to the query disk is smaller than the best upper bound found so far.
This can be accelerated by means of a k-d tree (D=2) built on the disk centers. You can enhance every node in the k-d tree with the radii of the largest and smallest disks in the subtree it roots. During the search you use this information to evaluate the bounds without knowing the exact radii of the disks; the deeper you go in the tree, the better you know them.
Perform a search to obtain the tightest upper bound on the distance, then a second search to enumerate all the disks with a lower bound smaller than the tightest upper bound. This will reduce the number of disks to be considered.
You can also use bounding boxes, and store the min/max width/height of the boxes in the tree nodes.
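A simplified sketch of the culling step, assuming scipy's cKDTree: instead of storing per-subtree radius bounds, it inflates the query by the global maximum radius, which is looser but easy to write. The exact ellipse/polygon distance then only needs to run on the returned indices:

```python
import numpy as np
from scipy.spatial import cKDTree

def candidate_polygons(centers, radii, qc, qr):
    """centers/radii: bounding disks of the polygons; (qc, qr): bounding
    disk of the query ellipse. Returns indices of polygons whose
    lower-bound distance can still beat the best upper bound."""
    centers = np.asarray(centers, float)
    radii = np.asarray(radii, float)
    tree = cKDTree(centers)
    d, i = tree.query(qc)              # nearest disk center
    upper = d + radii[i] + qr          # upper bound on the true distance
    # keep j if d_j - r_j - qr <= upper, i.e. d_j <= upper + r_j + qr;
    # using the global max radius yields a conservative superset
    return tree.query_ball_point(qc, upper + radii.max() + qr)
```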
I'm trying to design an implementation of Vector Quantization as a C++ template class that can handle different types and dimensions of vectors (e.g. 16-dimensional vectors of bytes, or 4D vectors of doubles, etc.).
I've been reading up on the algorithms, and I understand most of it:
here and here
I want to implement the Linde-Buzo-Gray (LBG) Algorithm, but I'm having difficulty figuring out the general algorithm for partitioning the clusters. I think I need to define a plane (hyperplane?) that splits the vectors in a cluster so there is an equal number on each side of the plane.
[edit to add more info]
This is an iterative process, but I think I start by finding the centroid of all the vectors, then use that centroid to define the splitting plane, get the centroid of each side of the plane, and continue until I have the number of clusters needed for the VQ algorithm (iterating to optimize for less distortion along the way). The animation in the first link above shows it nicely.
My questions are:
What is an algorithm to find the plane once I have the centroid?
How can I test a vector to see if it is on either side of that plane?
If you start with one centroid, then you'll have to split it, basically by duplicating it and slightly moving the two copies apart in an arbitrary direction. The plane is just the plane orthogonal to that direction.
But you don't need to compute that plane.
More generally, region i is defined as the set of points that are closer to the centroid c_i than to any other centroid. When you have two centroids, each region is a half-space, and the two regions are separated by a (hyper)plane.
How do you test a vector x to see which side of the plane it is on (with two centroids)?
Just compute the distances ||x - c1|| and ||x - c2||; the index of the minimum value (1 or 2) tells you which region the point x belongs to.
More generally, if you have n centroids, you compute all the distances ||x - c_i||; the centroid closest to x (i.e., the one for which the distance is minimal) gives you the region x belongs to.
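A minimal numpy sketch of the nearest-centroid assignment and the split step described above; the shapes and the perturbation scheme are illustrative assumptions (a full LBG loop would alternate assignment and centroid updates until the distortion stops improving):

```python
import numpy as np

def assign_regions(X, centroids):
    # X: (num_vectors, dim); centroids: (n, dim)
    # squared Euclidean distance of every vector to every centroid
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)            # index of the nearest centroid

def split_centroids(centroids, eps=1e-3):
    # replace each centroid by two copies nudged slightly apart
    return np.vstack([centroids * (1 + eps), centroids * (1 - eps)])
```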
I don't quite understand the algorithm, but the second question is easy:
Let's call V a vector which extends from any point on the plane to the point in question. Then the point in question lies on the same side of the (hyper)plane as the normal N iff V · N > 0.
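In code, assuming a known point on the plane and its normal, the test is a single dot product:

```python
import numpy as np

def same_side_as_normal(x, plane_point, normal):
    # V extends from a point on the plane to the point in question
    return np.dot(x - plane_point, normal) > 0
```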