Calculation of centroid & volume of a polyhedron when the vertices are given - performance

Given the locations of the vertices of a convex polyhedron (3D), I need to calculate the centroid and volume of the polyhedron. The following code is available at the MathWorks site.
function C = centroid(P)
% Centroid of a convex 3D polyhedron, given its vertices P (n-by-3).
k = convhulln(P);
if length(unique(k(:))) < size(P,1)
    error('Polyhedron is not convex.');
end
T = delaunayn(P);                % tessellate the volume into tetrahedra
n = size(T,1);
W = zeros(n,1);
C = 0;
for m = 1:n
    sp = P(T(m,:),:);            % vertices of the m-th tetrahedron
    [~, W(m)] = convhulln(sp);   % second output is the tetrahedron volume
    C = C + W(m) * mean(sp);     % volume-weighted centroid contribution
end
C = C ./ sum(W);
end
The code is elegant but terribly slow. I need to calculate the volume and centroid of thousands of polyhedra, hundreds of times, so using this code in its current state is not feasible. Does anyone know a better approach, or can this code be made faster? There are some minor changes I can think of, such as replacing mean with an explicit expression for the mean.

There is a much simpler approach to calculating the volume with minimal effort. The first flavour uses three sets of local topological information about the polyhedron: the unit tangent vectors of the edges, the in-plane unit normals on those tangents, and the unit normal of each facet (all of which are very simple to extract from the vertices). Please refer to Volume of a Polyhedron for further details.
The second flavour uses the face areas, the face normal vectors, and the face barycenters to calculate the polyhedron volume, according to this Wikipedia article.
Both algorithms are rather simple and very easy to implement, and their plain summation structure makes them easy to vectorize too. I expect both approaches to be much faster than a fully fledged tessellation of the polyhedron.
The centroid of the polyhedron can then be calculated by applying the divergence theorem, which turns the integration over the full polyhedron volume into an integration over the polyhedron surface. A detailed description can be found in Calculating the volume and centroid of a polyhedron in 3d. I did not check whether the tessellation of the polyhedron into triangles is really necessary or whether one could work directly with the more complex polygonal faces, but in any case the area tessellation of the faces is much simpler than the volume tessellation.
In total, such a combined approach should be much faster than the volume-tessellation approach.
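To make the surface route concrete, here is a minimal MATLAB sketch (polyVolumeCentroid is just a placeholder name): convhulln triangulates only the hull surface, each triangle is joined to an interior point to form a tetrahedron, and the volume and volume-weighted centroid are accumulated in vectorized form.

function [V, C] = polyVolumeCentroid(P)
% Volume and centroid of a convex polyhedron from its vertices P (n-by-3),
% using only a surface triangulation (no volume tessellation).
K = convhulln(P);                        % triangular surface facets
O = mean(P, 1);                          % any interior point of a convex body
A = P(K(:,1),:) - O;                     % facet corners relative to O
B = P(K(:,2),:) - O;
D = P(K(:,3),:) - O;
w = abs(dot(A, cross(B, D, 2), 2)) / 6;  % tetrahedron volumes (all positive)
V = sum(w);
C = O + (w.' * (A + B + D)) / (4 * V);   % tetra centroid is O + (A+B+D)/4
end

Everything after the convhulln call is a handful of vectorized operations, so the per-polyhedron cost is dominated by the hull computation itself.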

I think your only option, if quickhull isn't good enough, is cudahull, if you want exact solutions. Although even then you're only going to get about a 40x speedup at most (it seems).
I'm assuming that the convex hulls you build each have at least 10 vertices (if it's much less than that, there isn't much you can do). If you don't mind "close enough" solutions, you could create a version of quickhull that limits the number of vertices per polyhedron. The vertex limit also lets you calculate a maximum error, if needed.
The thing is that as the number of vertices on the convex hull approaches infinity, you end up with a sphere. This means that, due to the way quickhull works, each additional vertex you add to the convex hull has less of an effect* than the ones before it.
*Depending on how quickhull is coded, this may only be true in a general sense. Making it true in practice would require modifying quickhull's recursion, so that while the "next vertex" is always calculated (except after the last vertex is added, or when no points remain for that section), vertices are actually added to the convex hull in the order that maximizes the increase in the polyhedron's volume (likely in order from most distant to least distant). You'll incur some performance cost for keeping track of the insertion order, but as long as the ratio of pending convex-hull points to pending points is high enough, it should be worth it. As for error, the best option is probably to stop the algorithm either when the actual convex hull is reached, or when the maximum increase in volume gets smaller than a certain fraction of the current total volume. If performance is more important, simply limit the number of convex-hull points per polyhedron.
You could also look at the various approximate convex hull algorithms, but the method outlined above should work well for volume/centroid approximation, with the ability to bound the error.

How much you can speed up your code depends a lot on how you want to calculate your centroid. See this answer about centroid calculation for your options. It turns out that if you need the centroid of the solid polyhedron, you're basically out of luck. If, however, only the vertices of the polyhedron carry weight, then you could simply write
[k, volume] = convhulln(P);          % second output is the enclosed volume
centroid = mean(P(unique(k), :));    % unique() so each hull vertex counts once
(Indexing with the raw facet matrix k would weight each vertex by the number of facets it belongs to.)

Related

Algorithm for multiple polyline and polygon decimation

We have some polylines (list of points, has start and end point, not cyclic) and polygons (list of points, cyclic, no such thing as endpoints).
We want to map each polyline to a new polyline and each polygon to a new polygon so the total number of edges is small enough.
Let's say the number of edges originally is N, and we want our result to have M edges. N is much larger than M.
Polylines need to keep their start and end points, so each contributes at least 1 edge (a polyline with k vertices has k-1 edges). Polygons need to remain polygons, so each contributes at least 3 edges (a polygon with k vertices has k edges). M will be at least large enough to satisfy these minimums.
The outputs should be as close as possible to the inputs. This ends up being an optimization problem: minimize some metric to within some small tolerance of the true optimum. Originally I'd have used the area of the symmetric difference between original and result (the area between them), but if another metric makes this easier I'll gladly take that instead.
It's okay if the results only use vertices from the original; the fit will be a little worse, but that might be necessary to keep the time complexity down.
Since I'm asking for an algorithm, it'd be nice to also see an implementation. I'll likely have to re-implement it for where I'll be using it anyway, so details like what language or what data structures won't matter too much.
As for how good the approximation needs to be: about what you'd expect from tracing a vector image out of a bitmap. The actual use is a tool for a game; the game imposes some strange constraints, which is why the output edge count is fixed rather than the tolerance.
It's pretty hard to find any information on this kind of thing, so without even providing a full workable algorithm, just some pointers would be very much appreciated.
Ramer–Douglas–Peucker algorithm (mentioned in the comment) is definitely good, but it has some disadvantages:
It requires an open polyline as input; for a closed polygon one has to fix an arbitrary starting point, which either decreases the final quality or forces testing many starting points, which decreases performance.
The vertices of the simplified polyline are a subset of the original polyline's vertices; other positions are not considered. This permits very fast implementations, but again limits the precision of the simplified polyline.
Another alternative is to take the well-known algorithm for simplification of triangular meshes, Surface Simplification Using Quadric Error Metrics, and adapt it to polylines:
distances to planes containing triangles are replaced with distances to lines containing polyline segments,
quadratic forms lose one dimension if the polyline is two-dimensional.
But the majority of the algorithm is kept, including the queue of edge contractions (a min-heap) ordered by the estimated distortion each contraction produces in the polyline.
Here is an example of this algorithm application:
Red is the original polyline and blue the simplified one; one can see that its vertices do not all lie on the original polyline, while the general shape is preserved as much as possible with so few line segments.
it'd be nice to also see an implementation.
One can find an implementation in MeshLib, see MRPolylineDecimate.h/.cpp
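For readers who want something runnable before adapting the full quadric machinery, here is a deliberately stripped-down MATLAB sketch (decimatePolyline is a hypothetical name): it greedily deletes the interior vertex that deviates least from the line through its two neighbours, rescanning on every pass instead of maintaining a min-heap, and using plain point-to-line distance in place of the quadric error metric.

function Q = decimatePolyline(Q, targetN)
% Greedy polyline decimation sketch. Q: n-by-2 vertices of an open
% polyline; endpoints are always kept. O(n^2) for clarity, no heap.
while size(Q, 1) > max(targetN, 2)
    n = size(Q, 1);
    cost = inf(n, 1);                  % endpoints keep infinite cost
    for i = 2:n-1
        a = Q(i-1,:);  b = Q(i+1,:);  p = Q(i,:);
        ab = b - a;
        % distance from p to the line through its two neighbours
        cost(i) = abs(ab(1)*(p(2)-a(2)) - ab(2)*(p(1)-a(1))) / max(norm(ab), eps);
    end
    [~, i] = min(cost);
    Q(i,:) = [];                       % contract the cheapest vertex away
end
end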

Rough test if points are inside/outside of convex hull

I am working on an algorithm where I have to check whether points are inside or outside of the convex hull of some points. The problem is that
I have to check this for a lot of points: ~2000,
the point-cloud defining the convex hull has around 10000 points,
the dimension I am working in is quite high: 10-50.
The only potentially helpful property of my points is that for every point x, -x is also present, so the points define a point-symmetric polytope, and the convex hull is not degenerate (it has non-empty interior).
Right now I am doing this with linear programming, for example as in https://stackoverflow.com/a/11731437/8052809
To speed up my program, I want to estimate, before computing it exactly, whether a point is certainly inside or certainly outside the convex hull. In other words, I need a fast algorithm which can determine for some points that they are inside, and for some that they are outside; for the remaining points it may stay undecided.
Currently I do this estimation by first checking the bounding box of my point cloud, and second, the approach in https://stackoverflow.com/a/4903615/8052809 (see the comment by yombo).
But both methods can only certify that a point is outside (and both are rather coarse).
Since most of the points I check are inside, I mostly need a test which certifies that a point is inside.
Long question short:
I need an algorithm which can test very fast whether a point is inside or outside the convex hull.
The algorithm is allowed to report "inside", "no idea", and "outside".
In order to quickly purge points that are certified to be inside the convex hull, you can reuse the points you found during your bounding-box computation.
Namely, the 2k points (in dimension k) attaining the min and max value in every dimension.
You can construct a small linear programming problem over these 2k points and purge away any query point that lies within their convex hull.
You can do this both for the query points and for the original point cloud, which will leave you with a smaller linear programming problem to solve for the remaining points.
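A sketch of that purge test in MATLAB, assuming the Optimization Toolbox's linprog is available; inHullOfExtremes is a hypothetical name, E holds the 2k axis-extreme points, and feasibility of the convex-combination LP certifies "inside" (anything else means "no idea").

function inside = inHullOfExtremes(x, E)
% Certify the query x (1-by-d) by testing membership in the convex hull
% of the 2d axis-extreme points E (m-by-d): is there a lambda >= 0 with
% sum(lambda) == 1 and E.' * lambda == x(:)?
m = size(E, 1);
Aeq = [E.'; ones(1, m)];               % E.'*lambda = x and sum(lambda) = 1
beq = [x(:); 1];
opts = optimoptions('linprog', 'Display', 'none');
[~, ~, exitflag] = linprog(zeros(m, 1), [], [], Aeq, beq, ...
                           zeros(m, 1), [], opts);
inside = (exitflag == 1);              % feasible => certainly inside
end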

Computing a 3D reduced convex hull

I'm looking for an algorithm that provides what I call a "shrunken convex hull" (as distinct from a "reduced convex hull") in 3D. I define the shrunken hull, H', as the volume of space that lies no less than a distance D inside some original convex hull, H.
Analytically, this can be formed by moving each plane of H inwards along its normal by D, then computing the convex hull (if it exists) of the resulting planes. The tricky bit is that some planes might be trimmed or dropped, and others may move past other planes and get entirely "snipped" out due to normal reversal (if D is big enough). I'm a bit fuzzy on how to do the algorithm, but I have some badly thought out ideas below.
I am doing this to identify the subset of points in a dataset which are guaranteed to be no less than a given distance from the surface of the original point set (which is assumed to be convex, and which I already have). This is to remove surface effects that are disrupting our signal in some calculations we are doing.
I'm really looking for a name, or examples of anyone doing this, or another way to compute this. Ideally some good-old open code would be great, but I think my problem is far too niche.
I found reduced convex hulls, but this seems to be a different idea. The current closest thing I can find is "Hausdorff Cores" - however this seems like the more complicated case of non-convex polygons, and is pretty damned dense.
Do not read beyond here, unless you really really want to.
Current, incomplete/badly thought out algorithm
The slow way (i.e. the current way) of identifying the reduced point set is to compute the signed distance for all points and reject those that are less than the given distance from the surface. However, this is pretty damned slow, as the number of points can be up to 100M. I think operating on the original hull to generate the shrunken hull, computing its AABB and bounding sphere as quick pre-filters, and then retaining only the points inside the shrunken hull might be much faster (I hope; willing to accept comments saying this is stupid).
I think it should be possible, as I don't strictly need the full distance information for each point, just whether D_point > D. So once I know that, I should be able to stop.
I can see how the shrunken hull might be done in 2D: look at each vertex, then use an analytical solution to a constant-velocity Eikonal equation, and move the vertex along the vector derived from each corner.
However, the situation is more complex for the 3D version, afaics, as there are multiple facets (>2) at each vertex. My current plan is to look at each edge pair individually, then work from there (somehow: create half-spaces and union them?) to build this hull.
What you're thinking of is downscaling the 3D convex hull. It works just like downscaling a 2D polygon, except for how the angle at each vertex is handled.
An outline of the algorithm (in 2D) looks something like this:
1. Compute the convex hull.
2. For each point, P, in the convex hull:
3. Find the hull points before and after P.
4. Bisect the angle formed to obtain the required angle, A.
5. Create a new point, P', along the angle A at a distance, D, from P.
6. Add P' to the scaled-down (shrunken) convex hull.
The only difference in 3D occurs in steps 3 and 4. In 3D, step 3 obtains 3 points, and in step 4 a 3D angle is used. Thus you'll find a fair bit of benefit in using the 3D transforms of a graphics/geometry library, as the math may be tricky.
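Here is a small MATLAB sketch of steps 2-6 in 2D (shrinkHull2D is a hypothetical name), assuming the hull vertices arrive in counter-clockwise order. The step length along the bisector is chosen so that both incident edges are displaced inward by D; note this matches the plane-offset definition only as long as no vertices get "snipped".

function Q = shrinkHull2D(P, D)
% Move each vertex of the convex CCW polygon P (n-by-2) inward along its
% interior-angle bisector so that both incident edges shift inward by D.
n = size(P, 1);
Q = zeros(n, 2);
for i = 1:n
    prev = P(mod(i-2, n) + 1, :);
    next = P(mod(i, n) + 1, :);
    u = (prev - P(i,:)) / norm(prev - P(i,:));  % unit vectors along the
    v = (next - P(i,:)) / norm(next - P(i,:));  % two incident edges
    b = (u + v) / norm(u + v);                  % interior-angle bisector
    s = D / sqrt((1 - dot(u, v)) / 2);          % D / sin(half the angle)
    Q(i,:) = P(i,:) + s * b;
end
end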
If your objective is to remove surface effects, and it's not important that every face of the convex hull be displaced by the same distance, you could instead
Identify a point known to be inside the hull (e.g. the centroid of the point cloud or the hull)
Scale the hull inward towards that point
Unless you scale infinitely (collapsing everything to a point), this operation should give an inwardly-displaced hull which has the same connectivity - no points added or removed.
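In MATLAB the scaling itself is essentially a one-liner (a sketch; s in (0,1) is the assumed shrink factor):

O = mean(P, 1);       % centroid of the vertices lies inside a convex hull
Q = O + s * (P - O);  % scale every vertex toward O (implicit expansion, R2016b+)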

Simplified (or smooth) polygons that contain the original detailed polygon

I have a detailed 2D polygon (representing a geographic area) that is defined by a very large set of vertices. I'm looking for an algorithm that will simplify and smooth the polygon (reducing the number of vertices), with the constraint that the resulting polygon must contain all the vertices of the detailed polygon.
For context, here's an example of the edge of one complex polygon:
My research:
I found the Ramer–Douglas–Peucker algorithm, which will reduce the number of vertices, but the resulting polygon will not contain all of the original polygon's vertices. See this article: Ramer–Douglas–Peucker on Wikipedia.
I considered expanding the polygon (I believe this is also known as outward polygon offsetting). I found these questions: Expanding a polygon (convex only) and Inflating a polygon. But I don't think this will substantially reduce the detail of my polygon.
Thanks for any advice you can give me!
Edit
As of 2013, most links below are not functional anymore. However, I've found the cited paper, algorithm included, still available at this (very slow) server.
Here you can find a project dealing exactly with your issues. Although it works primarily with an area "filled" by points, you can set it to work with a "perimeter" type definition as yours.
It uses a k-nearest neighbors approach for calculating the region.
Samples:
Here you can request a copy of the paper.
Seemingly they planned to offer an online service for requesting calculations, but I didn't test it, and it probably isn't running.
HTH!
I think Visvalingam's algorithm can be adapted for this purpose: skip the removal of any triangle that would reduce the area, as sketched below.
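A minimal MATLAB sketch of that adaptation, assuming a counter-clockwise polygon: only reflex vertices are removal candidates (removing a reflex vertex can only enlarge the polygon, so containment is preserved), and the vertex spanning the smallest triangle goes first. Self-intersection checks, which the next answer discusses, are omitted.

function P = containSimplify(P, targetN)
% Visvalingam-style simplification that never shrinks the polygon.
% P: n-by-2 CCW polygon vertices; targetN: desired vertex count.
while size(P, 1) > targetN
    n = size(P, 1);
    prev = P([n 1:n-1], :);
    next = P([2:n 1], :);
    e1 = P - prev;                             % incoming edge at each vertex
    e2 = next - P;                             % outgoing edge at each vertex
    cr = e1(:,1).*e2(:,2) - e1(:,2).*e2(:,1);  % z of the cross product
    cand = find(cr < 0);                       % reflex vertices (CCW polygon)
    if isempty(cand)                           % already convex: no vertex can
        break;                                 % be removed without shrinking
    end
    [~, j] = min(abs(cr(cand)) / 2);           % smallest triangle area first
    P(cand(j), :) = [];
end
end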
I had a very similar problem: I needed an inflating simplification of polygons.
I used a simple algorithm: remove a concave point (this increases the polygon's area) or remove a convex edge (between two convex points) and extend the adjacent edges. Either of these operations removes one point from the polygon.
I chose to remove the point or edge that causes the smallest area variation. You can repeat this process until the simplification is acceptable for you (for example, no more than 200 points).
The two main difficulties were making the algorithm fast (by avoiding computing vertex/edge removal costs twice and keeping the candidates sorted) and avoiding the introduction of self-intersections along the way (not very easy to do or to explain, but possible with limited computational complexity).
In fact, after looking more closely, it is a similar idea to Visvalingam's, adapted for edge removal.
That's an interesting problem! I've never tried anything like this, but here's an idea off the top of my head... apologies if it makes no sense or wouldn't work :)
Calculate a convex hull, that might be way too big / imprecise
Divide the hull into N slices, for example joining each one of the hull's vertices to the center
Calculate the intersection of your object with each slice
Repeat recursively for each intersection (calculating the intersection's hull, etc)
Each level of recursion should give a better approximation... when you've reached a satisfying level, merge all the hulls from that level to get the final polygon.
Does that sound like it could do the job?
To some degree I'm not sure what you are trying to do, but it seems you have two very good answers. One is Ramer–Douglas–Peucker (DP) and the other is computing the alpha shape (also called a concave hull, non-convex hull, etc.). I found a more recent paper describing alpha shapes and linked it below.
I personally think DP with polygon expansion is the way to go. I'm not sure why you think it won't substantially reduce the number of vertices. With DP you supply a factor, and you can make it anything you want, to the point where you end up with a triangle no matter the input. Picking this factor can be hard, but in your case I think it's the best method. You should be able to determine the factor based on the size of the largest bit of detail you want to go away. You can do this with direct testing or by calculating it from your source data.
http://www.it.uu.se/edu/course/homepage/projektTDB/ht13/project10/Project-10-report.pdf
I've written a simple modification of Douglas-Peucker that might be helpful to anyone having this problem in the future: https://github.com/prakol16/rdp-expansion-only
It's identical to DP except that it pushes a line segment outwards a bit if the points it would remove are outside the polygon. This guarantees that the resulting simplified polygon contains the entire original polygon, yet it has almost the same number of line segments as the original DP algorithm and is usually reasonably good at approximating the original shape.

Is there an efficient algorithm to generate a 2D concave hull?

Given a set of (2D) points from a GIS file (a city map), I need to generate the polygon that defines the 'contour' (boundary) of that map. Its input parameters would be the point set and a 'maximum edge length'; it would then output the corresponding (probably non-convex) polygon.
The best solution I've found so far is to generate the Delaunay triangulation and then remove the external edges that are longer than the maximum edge length. Once all the external edges are shorter than that, I simply remove the internal edges and get the polygon I want. The problem is that this is very time-consuming, and I'm wondering if there's a better way.
One of the former students in our lab used some applicable techniques for his PhD thesis. I believe one of them is called "alpha shapes" and is referenced in the following paper:
http://www.cis.rit.edu/people/faculty/kerekes/pdfs/AIPR_2007_Gurram.pdf
That paper gives some further references you can follow.
This paper, Efficient generation of simple polygons for characterizing the shape of a set of points in the plane, discusses the problem and provides the algorithm. There's also a Java applet utilizing the same algorithm here.
The guys here claim to have developed a k-nearest-neighbours approach to determining the concave hull of a set of points which behaves "almost linearly in the number of points". Sadly their paper seems to be very well guarded, and you'll have to ask them for it.
Here's a good set of references that includes the above and might lead you to find a better approach.
The answer may still be interesting for somebody else: one may apply a variation of the marching squares algorithm, applied (1) within the concave hull and (2) on (e.g. 3) different scales that may depend on the average density of points. The scales need to be integer multiples of each other, so that you build a grid you can use for efficient sampling. This allows you to quickly find empty squares, squares that are completely within a cluster/cloud of points, and those in between. The latter category can then easily be used to determine the polyline that represents a part of the concave hull.
Everything is linear in this approach; no triangulation is needed, it does not use alpha shapes, and it is different from the commercial/patented offering described here (http://www.concavehull.com/).
A quick approximate solution (also useful for convex hulls) is to find the north and south bounds for each small east-west interval.
Based on how much detail you want, create a fixed-size array of upper/lower bounds.
For each point, calculate which E-W column it is in and then update the upper/lower bounds for that column. After you've processed all the points, you can interpolate the upper/lower values for the columns that were missed.
It's also worth doing a quick check beforehand for very long, thin shapes, and deciding whether to bin N-S or E-W.
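A sketch of the binning step in MATLAB, assuming column vectors x and y of point coordinates and a hypothetical resolution parameter nBins:

nBins = 100;                                       % detail level of the outline
col = min(floor((x - min(x)) / (max(x) - min(x)) * nBins) + 1, nBins);
upper = accumarray(col, y, [nBins 1], @max, NaN);  % northern bound per column
lower = accumarray(col, y, [nBins 1], @min, NaN);  % southern bound per column
upper = fillmissing(upper, 'linear');              % interpolate missed columns
lower = fillmissing(lower, 'linear');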
A simple solution is to walk around the edge of the polygon. Given a current edge on the boundary connecting points P0 and P1, the next point on the boundary, P2, will be the point with the smallest possible A, where
H01 = bearing from P0 to P1
H12 = bearing from P1 to P2
A = fmod( H12-H01+360, 360 )
|P2-P1| <= MaxEdgeLength
Then you set
P0 <- P1
P1 <- P2
and repeat until you get back where you started.
This is still O(N^2), so you'll want to sort your point list a little. You can limit the set of points you need to consider at each iteration if you sort the points on, say, their bearing from the city's centroid.
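A literal MATLAB transcription of that walk, as a sketch: it scans all candidates brute-force (no bearing sort), and the walk is seeded by pretending the previous point lies due south of the leftmost point, which is certainly on the boundary. Depending on the winding direction you want, you may need the largest rather than the smallest A, and there is no fallback if no point lies within MaxEdgeLength.

function B = walkBoundary(pts, maxEdge)
% pts: n-by-2 points; returns the indices visited by the boundary walk.
bearing = @(p, q) mod(atan2d(q(1)-p(1), q(2)-p(2)), 360);  % compass bearing
[~, start] = min(pts(:,1));          % leftmost point is on the boundary
B = start;
P0 = pts(start,:) - [0 1];           % fake previous point due south
P1 = pts(start,:);
while true
    d = hypot(pts(:,1) - P1(1), pts(:,2) - P1(2));
    cand = find(d > 0 & d <= maxEdge);
    H01 = bearing(P0, P1);
    A = mod(arrayfun(@(c) bearing(P1, pts(c,:)), cand) - H01 + 360, 360);
    [~, j] = min(A);                 % smallest turn angle, per the rule above
    nxt = cand(j);
    if nxt == start, break; end      % back where we started
    B(end+1) = nxt; %#ok<AGROW>
    P0 = P1;
    P1 = pts(nxt,:);
end
end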
Good question! I haven't tried this out at all, but my first shot would be this iterative method:
Create a set N ("not contained"), and add all points in your set to N.
Pick 3 points from N at random to form an initial polygon P. Remove them from N.
Use some point-in-polygon algorithm and look at points in N. For each point in N, if it is now contained by P, remove it from N. As soon as you find a point in N that is still not contained in P, continue to step 4. If N becomes empty, you're done.
Call the point you found A. Find the edge of P closest to A, and insert A in the middle of it.
Go back to step 3
I think it would work as long as it performs well enough; a good heuristic for your initial 3 points might help.
Good luck!
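A rough MATLAB sketch of those steps, using the built-in inpolygon for the containment test (growConcaveHull is a hypothetical name). As the answer says, it's a heuristic: nothing here prevents self-intersections.

function P = growConcaveHull(pts)
% pts: n-by-2 points. Grows a polygon P until it contains every point.
idx = randperm(size(pts, 1), 3);
P = pts(idx, :);                           % initial random triangle
N = pts(setdiff(1:size(pts, 1), idx), :);  % the "not contained" set
while ~isempty(N)
    in = inpolygon(N(:,1), N(:,2), P(:,1), P(:,2));
    N = N(~in, :);                         % drop points now inside P
    if isempty(N), break; end
    A = N(1, :);                           % next uncontained point
    n = size(P, 1);
    d = zeros(n, 1);
    for i = 1:n                            % distance from A to each edge
        p = P(i, :);  q = P(mod(i, n) + 1, :);
        t = max(0, min(1, dot(A - p, q - p) / max(dot(q - p, q - p), eps)));
        d(i) = norm(A - (p + t * (q - p)));
    end
    [~, i] = min(d);
    P = [P(1:i, :); A; P(i+1:end, :)];     % insert A into the closest edge
    N(1, :) = [];
end
end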
You can do it in QGIS with this plug in;
https://github.com/detlevn/QGIS-ConcaveHull-Plugin
Depending on how you need it to interact with your data, probably worth checking out how it was done here.
As a widely adopted reference, PostGIS starts with a convex hull and then caves it in; you can see it here.
https://github.com/postgis/postgis/blob/380583da73227ca1a52da0e0b3413b92ae69af9d/postgis/postgis.sql.in#L5819
The Bing Maps V8 interactive SDK has a concave hull option within the advanced shape operations.
https://www.bing.com/mapspreview/sdkrelease/mapcontrol/isdk/advancedshapeoperations?toWww=1&redig=D53FACBB1A00423195C53D841EA0D14E#JS
Within ArcGIS 10.5.1, the 3D Analyst extension has a Minimum Bounding Volume tool with the geometry types of concave hull, sphere, envelope, or convex hull. It can be used at any license level.
There is a concave hull algorithm here: https://github.com/mapbox/concaveman
