I've been scouring the internet for days, but have been unable to find a good answer (or at least one that made sense to me) to what seems like it should be a common question. How does one scale an arbitrary polygon? In particular, concave polygons. I need an algorithm which can handle concave (definitely) and self-intersecting (if possible) polygons. The obvious and simple algorithm I've been using to handle simple convex polygons is calculating the centroid of the polygon, translating that centroid to the origin, scaling all the vertices, and translating the polygon back to its original location.
This approach does not work for many (or maybe all) concave polygons, as the centroid often falls outside the polygon, so the scaling operation also results in a translation. I need to be able to scale the polygon "in place", without the final result being translated.
Is anybody aware of a method for scaling concave polygons? Or maybe a way of finding the "visual center" which can be used as a frame of reference for the scaling operation?
Just to clarify, I'm working in 2D space and I would like to scale my polygons using the "visual center" as the frame of reference. So maybe another way to ask the question would be, how do I find the visual center of a concave and/or self-intersecting polygon?
Thanks!
I'm not sure what your problem is.
You're working in an affine space, and you're looking for an affine transformation to scale your polygon?
If I'm right, just write the transformation matrix (a scaling matrix, i.e. a homothety) and transform your polygon with that matrix.
You can look up affine transformation matrices.
Hope it helps.
EDIT
If you want to keep the same "center", you can just apply a homothety of ratio lambda with center G = barycenter of the polygon:
every vertex M is mapped to M' such that vec(GM') = lambda * vec(GM).
G won't move, since it's the center of the homothety.
The vertices will still satisfy the barycenter relation sum_i vec(GA_i) = 0 (you just multiply the relation by lambda), so G will still be the barycenter.
In your case G is easy to determine: G(x, y) = (average of the x values of the vertices, average of the y values of the vertices),
and it should do what you need.
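For what it's worth, here is a minimal sketch of that idea in Python (plain lists of (x, y) tuples, with the vertex average used as G as described above; the names are just illustrative):

    def scale_polygon_about_barycenter(points, lam):
        """Homothety of ratio lam about the average of the polygon's vertices."""
        n = len(points)
        gx = sum(x for x, _ in points) / n
        gy = sum(y for _, y in points) / n
        # M' = G + lam * (M - G); G itself is a fixed point of the transformation.
        return [(gx + lam * (x - gx), gy + lam * (y - gy)) for x, y in points]

    # Example: shrink a concave "L" shape to 50% around its vertex average.
    l_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
    print(scale_polygon_about_barycenter(l_shape, 0.5))

The same per-vertex transformation works for concave and self-intersecting polygons; whether the vertex average is the "visual center" you want is a separate question.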
Perhaps Craig is looking for a "polygon offset" algorithm - where each edge in the polygon is offset by a given value. For example, given a clockwise oriented polygon, offsetting edges towards the left will increase the size of the polygon. If this is what Craig is looking for then this has been asked and answered before here - An algorithm for inflating/deflating (offsetting, buffering) polygons.
If you're looking for a ready-made (open-source) solution, I've also created a clipping library (Clipper), written in Delphi, C++ and C#, which includes a rather simple polygon offsetting function.
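If polygon offsetting is indeed what's wanted, here is a quick way to experiment in Python before wiring up Clipper itself. This is only a sketch using Shapely's buffer operation (Shapely is an assumption here, not Clipper's API); a negative distance deflates the polygon, a positive distance inflates it:

    from shapely.geometry import Polygon

    # A concave "L" shape; buffer offsets every edge by the given distance.
    poly = Polygon([(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)])
    shrunk = poly.buffer(-0.25, join_style=2)   # join_style=2 keeps corners mitred
    grown = poly.buffer(0.25, join_style=2)
    print(list(shrunk.exterior.coords))
    print(list(grown.exterior.coords))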
The reason you can't find a good answer is that you are being imprecise with your requirements. First, explicitly define what you mean by "in place": what is being kept constant?
Once you have figured that out, then translate the constant point to the origin, scale the polygon as usual, and translate back.
I have a quite specific task.
I need to compute the alpha shape of a set of points. (You can play with an already implemented algorithm there.)
The point is that I have predefined subsets of points (let's call them details) and I do not want their structure to be changed. For example, suppose these polygons are the details:
Then, the following hulls are ok depending on alpha-radius:
And the following is not:
In brief, I want the structure of the specified subsets of points to stay unchanged while the radius is reduced.
So, what do you think:
Can I use any already implemented algorithm, or should I work out a specific one?
Is there an implemented example of the alpha-shape algorithm with open source code anywhere? (Alpha shape, not concave hull: it must split the contour into several parts when the radius is reduced.)
Well, finally I solved this using constrained Delaunay triangulation.
The idea (which Yves Daoust shared in a comment on the question) was to build the alpha shape not from a plain Delaunay triangulation, but from a constrained Delaunay triangulation.
Algorithm: In brief, I:
Took convex hull of the promoted polygons
Computed its constrained triangulation (the constraining segments are the polygons' edges).
For this step I used the Triangle .NET library for C#. I guess every popular language has alternatives to it.
Built the alpha shape: threw away all triangles where any edge is longer than the predefined alpha (a minimal sketch of this filtering step is shown below).
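The alpha-filtering step is easy to sketch in Python. This uses SciPy's plain (unconstrained) Delaunay triangulation purely to illustrate the edge-length test; the constrained triangulation itself is not in SciPy and needs a library such as Triangle:

    import numpy as np
    from scipy.spatial import Delaunay

    def alpha_shape_triangles(points, alpha):
        """Keep only the Delaunay triangles whose edges are all shorter than alpha."""
        points = np.asarray(points, dtype=float)
        tri = Delaunay(points)
        kept = []
        for simplex in tri.simplices:          # each simplex holds 3 vertex indices
            a, b, c = points[simplex]
            edges = (np.linalg.norm(a - b), np.linalg.norm(b - c), np.linalg.norm(c - a))
            if max(edges) < alpha:             # throw away triangles with a long edge
                kept.append(simplex)
        return np.array(kept)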
Results of my struggles:
Alpha = 1000, alpha shape is just a convex hull
Alpha = 400
Alpha = 30. Only very small concavities are smoothed out
Feel free to write me for a deeper explanation, if you wish.
I'm looking for an algorithm that provides what I call a "shrunken convex hull" (as distinct from a "reduced convex hull") in 3D. I am defining the shrunken hull, H', as the volume of space that is no less than a distance D from some original convex hull, H.
Analytically, this can be formed by moving each plane of H inwards along its normal by D, then computing the convex hull (if it exists) of the resultant planes. The tricky bit is some planes might be trimmed or dropped, others may move past other planes, and get entirely "snipped" out due to normal reversal (if D is big enough). I’m a bit fuzzy on how to do the algorithm, but have some badly thought out ideas below.
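For what it's worth, the plane-offset construction described above can be sketched directly with SciPy. This is a rough sketch; it assumes the mean of the hull vertices is still strictly inside the shrunken hull, and it will simply fail when D is large enough that no such hull exists:

    import numpy as np
    from scipy.spatial import ConvexHull, HalfspaceIntersection

    def shrunken_hull_vertices(points, d):
        """Offset every face plane of the convex hull inwards by d and return
        the vertices of the resulting shrunken hull."""
        hull = ConvexHull(points)
        # hull.equations rows are [unit outward normal, offset], i.e. a point x
        # is inside the hull when normal . x + offset <= 0 for every face.
        a, b = hull.equations[:, :-1], hull.equations[:, -1]
        # Moving each plane inwards by d means requiring normal . x + offset <= -d.
        halfspaces = np.column_stack([a, b + d])
        interior = points[hull.vertices].mean(axis=0)   # assumed to stay inside
        return HalfspaceIntersection(halfspaces, interior).intersections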
I am doing this to identify the subset of points in a dataset which are guaranteed to be no less than a given distance from the surface of the original point set (which is assumed to be convex, and whose hull I already have). This is to remove surface effects that are disrupting our signal in some calculations we are doing.
I'm really looking for a name, or examples of anyone doing this, or another way to compute this. Ideally some good-old open code would be great, but I think my problem is far too niche.
I found reduced convex hulls, but this seems to be a different idea. The current closest thing I can find is "Hausdorff Cores" - however this seems like the more complicated case of non-convex polygons, and is pretty damned dense.
Do not read beyond here, unless you really really want to.
Current, incomplete/badly thought out algorithm
The slow way (i.e. the current way) of identifying the reduced point set is to compute the signed distance for all points and reject those that are less than a given distance away. However, this is pretty damned slow, as the number of points can be up to 100M. I think operating on the original hull to generate the shrunken hull, computing its AABB and spherical BB, and then retaining only those points inside the shrunken hull might be much faster (I hope; willing to accept comments saying this is stupid).
I think it should be possible, as I don't strictly need the full distance information for each point, just whether D_point > D. So once I know that, I should be able to stop.
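In fact, if the convex hull of the points is available, the D_point > D test can be done directly against the hull's face planes without building the shrunken hull at all: an interior point is at least D from the surface exactly when it is at least D inside every face plane. A vectorised sketch (SciPy/NumPy; for 100M points you would probably process in chunks):

    import numpy as np
    from scipy.spatial import ConvexHull

    def mask_points_at_least_d_inside(points, d):
        """Boolean mask of the points whose distance to the hull surface is >= d.
        Assumes the points lie inside (or on) their own convex hull."""
        points = np.asarray(points, dtype=float)
        hull = ConvexHull(points)
        a, b = hull.equations[:, :-1], hull.equations[:, -1]   # unit outward normals
        signed = points @ a.T + b          # signed distance to every face plane
        return (signed <= -d).all(axis=1)  # at least d inside every plane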
I can see how the shrunken hull might be done in 2D, where you look at each vertex, use an analytical solution to a constant-velocity Eikonal equation, and then move the vertex along the vector derived from each corner.
However, the situation is more complex for the 3D version, as far as I can see, since there are multiple facets (>2) at each vertex. My current plan is to look at each edge pair individually, then work from there (somehow: create half spaces and union them?) to build this hull.
What you're thinking of is downscaling the 3D convex hull; it works just like downscaling in 2D, except for how the angle at each vertex is handled.
An outline of the algorithm (in 2D) looks something like this:
1. Compute the convex hull.
2. For each point, P, in the convex hull:
3. Find the hull points before and after P.
4. Bisect the angle they form to obtain the required angle, A.
5. Create a new point, P', along the angle A at a distance, D, from P.
6. Add P' to the scaled-down (shrunken) convex hull.
The only difference in 3D occurs in steps 3 and 4. In 3D, step 3 obtains 3 points. In step 4, a 3D angle is used. Thus you'll find a fair bit of benefit in using the 3D transforms in a graphics/geometry library, as the math may be tricky.
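A minimal 2D sketch of that outline (plain Python, assuming the hull vertices are given in counter-clockwise order with no collinear triples). Note that moving a vertex a fixed distance D along the bisector shifts the adjacent edges by D*sin(theta/2), where theta is the interior angle, so you may want D/sin(theta/2) instead if the faces themselves must move by exactly D:

    import math

    def shrink_hull_2d(hull, d):
        """Move each hull vertex inwards by distance d along its interior angle bisector."""
        n = len(hull)
        out = []
        for i in range(n):
            px, py = hull[i - 1]             # previous hull vertex
            x, y = hull[i]                   # current vertex P
            qx, qy = hull[(i + 1) % n]       # next hull vertex
            # Unit vectors from P towards its two neighbours.
            ul = math.hypot(px - x, py - y)
            vl = math.hypot(qx - x, qy - y)
            ux, uy = (px - x) / ul, (py - y) / ul
            vx, vy = (qx - x) / vl, (qy - y) / vl
            # Their sum points along the interior angle bisector (into the hull).
            bx, by = ux + vx, uy + vy
            bl = math.hypot(bx, by)
            out.append((x + d * bx / bl, y + d * by / bl))
        return out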
If your objective is to remove surface effects, and it's not important that every surface of the convex hull be displaced by the same distance, you could instead
Identify a point known to be inside the hull (e.g. the centroid of the point cloud or the hull)
Scale the hull inward towards that point
Unless you scale infinitely (collapsing everything to a point), this operation should give an inwardly-displaced hull which has the same connectivity - no points added or removed.
I've been working with Boost.Geometry, mostly for manipulating polygons. I was using the built-in centroid method (http://www.boost.org/doc/libs/1_55_0/libs/geometry/doc/html/geometry/reference/algorithms/centroid/centroid_2.html) to calculate the geometric (bary)center of my polygons, but recently, after outputting the coordinates of the points composing a specific polygon (and analyzing them on the side with some Python scripts), I realized that the centroid coordinates the method was giving me do not correspond to the geometric mean of the points of the polygon.
I'm in two dimensions, and putting it into equations, I should have
x_{\text{centroid}} = \frac{1}{N} \sum_{i=1}^{N} x_i, where N is the number of points composing the polygon,
and the same for the y coordinates. I'm now suspecting that this could have to do with the fact that Boost.Geometry is not just looking at the points on the edge of the polygon (its outer ring) but treating it as a filled object.
Do any of you have experience manipulating these functions?
Btw, I'm using:
point my_center(0,0);
bg::centroid(my_polygon,my_center);
to compute the centroid.
Thank you.
In Boost.Geometry the algorithm proposed by Bashein and Detmer [1] is used by default for the calculation of a centroid of Areal Geometries.
The reason is that the simple average method fails for a case where many closely spaced vertices are placed at one side of a Polygon.
[1] Gerard Bashein and Paul R. Detmer. “Centroid of a Polygon”. Graphics Gems IV, Academic Press, 1994, pp. 3–6
That's what the centroid is -- the mean of the infinite number of points making up the filled polygon. It sounds like what you want is not the centroid, but just the average of the vertices.
Incidentally, "geometric mean" has a different definition than you think, and is not in any way applicable to this situation.
The centroid of a polygon is taken to be the mass center of the plane figure (for example, a sheet of paper), not the center of its vertices only.
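To make the distinction concrete, here is a small Python sketch computing both the vertex average and the area-weighted (shoelace) centroid of the same ring; with many closely spaced vertices along one side, only the vertex average gets pulled towards that side:

    def vertex_average(ring):
        n = len(ring)
        return (sum(x for x, _ in ring) / n, sum(y for _, y in ring) / n)

    def area_centroid(ring):
        """Centroid of the filled polygon (shoelace formula); do not repeat the first point."""
        a = cx = cy = 0.0
        n = len(ring)
        for i in range(n):
            x0, y0 = ring[i]
            x1, y1 = ring[(i + 1) % n]
            cross = x0 * y1 - x1 * y0
            a += cross
            cx += (x0 + x1) * cross
            cy += (y0 + y1) * cross
        a *= 0.5
        return (cx / (6.0 * a), cy / (6.0 * a))

    # A unit square with extra vertices crowded along its bottom edge:
    square = [(i / 10.0, 0.0) for i in range(11)] + [(1.0, 1.0), (0.0, 1.0)]
    print(vertex_average(square))   # pulled towards the crowded bottom edge
    print(area_centroid(square))    # stays at (0.5, 0.5)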
Starting with a 3D mesh, how would you give a rounded appearance to the edges and corners between the polygons of that mesh?
Without wishing to discourage other approaches, here's how I'm currently approaching the problem:
Given the mesh for a regular polyhedron, I can give the mesh's edges a rounded appearance by scaling each polygon along its plane and connecting the edges using cylinder segments such that each cylinder is tangent to each polygon where it meets that polygon.
Here's an example involving a cube:
Here's the cube after scaling its polygons:
Here's the cube after connecting the polygons' edges using cylinders:
What I'm having trouble with is figuring out how to deal with the corners between polygons, especially in cases where more than three edges meet at each corner. I'd also like an algorithm that works for all closed polyhedra instead of just those that are regular.
I post this as an answer because I can't put images into comments.
Saddle point
Here's an image of two brothers camping:
They placed their simple tents right beside each other in the middle of a steep valley (that's one bad place for tents, but that's not the point), so one end of each tent points upwards. At the point where the four squares meet you have a saddle point. The two edges on top of each tent can be rounded normally, as can the two downward edges. But at the saddle point you have different curvature in the two directions, and therefore it's not possible to use a sphere. This rules out Svante's solution.
Self-intersection
The following image shows some 3D polygons viewed from the side. It's some sharp thing with a hole drilled into it from the other side. The left image shows it before rounding, the right after.
The mass that gets removed from the sharp edge contains the end of the drill hole.
There is something else to see here. The drill hole's sides might be very large polygons (let's say it's not a hole but a slit). Still, you only get small radii at the top. You can't just scale your polygons; you have to take the neighboring polygons into account.
Convexity
You say you're only removing mass; this is only true if your geometry is convex. Look at the image you posted, but now assume that the viewer is inside the volume. The radii turn away from you and therefore add mass.
NURBS
I'm not a NURBS specialist myself, but the constraints would look something like this:
The corners of the NURBS patch must be at the same positions as the corners of the scaled-down polygons. The normal vectors of the NURBS surface at the corners must be equal to the normal of the polygon. This should be sufficient to guarantee that the NURBS edge will be a straight line following the polygon edge. The normals also ensure that no visible edges will result at the border between the polygon and the NURBS patch.
I'd just do the math myself. NURBS are just polynomials; you'll have some unknown coefficients and your constraints. This gives you a system of equations (often linear) that you can solve.
Is there any upper bound on the number of faces that meet at that corner?
You might employ concepts from CAGD; in particular, Non-Uniform Rational B-Splines (NURBS) might be of interest for you.
Your current approach of gluing together some fixed geometrical primitives might be too inflexible to solve the problem. NURBS require some mathematical work to get used to, but might be more suitable for your needs.
Extrapolating your cylinder-edge approach, the corners should be spheres (or sphere segments) that have the same radius as the cylinders meeting there, with their centre at the intersection of the cylinders' axes.
Here is a single C++ header for generating triangulated rounded 3D boxes. The code is in C++ but is easy to transplant to other languages. It's also easy to modify for other primitives like quads.
https://github.com/nepluno/RoundCornerBox
As @Raymond suggests, I also think that the nepluno repo provides a very good implementation to solve this issue; efficient and simple.
To complete his answer, I just wrote a solution to this issue in JS, based on the BabylonJS 3D engine. The solution can be found here, and the engine can quite easily be swapped for another 3D engine:
https://playground.babylonjs.com/#AY7B23
I have a point (lat/lon) and a heading in degrees (true north) along which this point is traveling. I have numerous stationary polygons (points defined in lat/lon) which may or may not be convex.
My question is, how do I calculate the closest intersection point, if any, with a polygon? I have seen several confusing posts about ray tracing, but they all seem to relate to 3D, where the ray and the polygon are not in the same plane, and they also require the polygons to be convex.
Sounds like you should be able to do a simple 2D line intersection...
However, I have worked with lat/long before and know that it isn't exactly true to any 2D coordinate system.
I would start with a general "IsPointInPolygon" function (you can find a million of them by googling) and test it on your polygons to see how well it works; a sketch of one is below. If it is accurate enough, just use that. But it is possible that, due to the non-square nature of lat/long coordinates, you may have to make some modifications using spherical geometry.
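For reference, a standard even-odd ray-casting "IsPointInPolygon" is only a few lines; this sketch treats lat/long as plain x/y, which, as noted, may need spherical corrections near the poles or the antimeridian:

    def point_in_polygon(pt, poly):
        """Even-odd test: cast a horizontal ray from pt and count edge crossings.
        poly is a list of (x, y) vertices; works for concave polygons too."""
        x, y = pt
        inside = False
        n = len(poly)
        for i in range(n):
            x0, y0 = poly[i]
            x1, y1 = poly[(i + 1) % n]
            if (y0 > y) != (y1 > y):                       # edge straddles the ray
                x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
                if x_cross > x:                            # crossing to the right of pt
                    inside = not inside
        return inside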
In 2D, the calculations are fairly simple...
You could always start by checking to make sure the ray's endpoint is not inside the polygon (since that's the intersection point in that case).
If the endpoint is not inside, you could do a ray/line-segment intersection with each of the boundary segments of the polygon, and use the closest intersection found. That handles convex/concave features, etc.
Compute whether the ray intersects each line segment in the polygon using this technique.
The resulting scaling factor in (my accepted) answer there (which I called h) is "how far along the ray the intersection is". You're looking for a value between 0 and 1.
If there are multiple intersection points, that's fine! If you want the "first," use the one with the smallest value of h.
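Putting those pieces together, here is a minimal planar sketch that returns the closest intersection of a ray (origin plus direction vector, e.g. derived from the heading) with a polygon's edges, using exactly the "smallest parameter along the ray" idea; lat/long is again treated as flat x/y:

    def closest_ray_polygon_intersection(origin, direction, poly):
        """Closest intersection point of the ray with the polygon boundary, or None."""
        ox, oy = origin
        dx, dy = direction
        best_t, best_pt = None, None
        n = len(poly)
        for i in range(n):
            ax, ay = poly[i]
            bx, by = poly[(i + 1) % n]
            ex, ey = bx - ax, by - ay              # edge vector
            denom = dx * ey - dy * ex              # zero when ray and edge are parallel
            if abs(denom) < 1e-12:
                continue
            # Solve origin + t * direction == a + u * edge for t (ray) and u (segment).
            t = ((ax - ox) * ey - (ay - oy) * ex) / denom
            u = ((ax - ox) * dy - (ay - oy) * dx) / denom
            if t >= 0 and 0 <= u <= 1 and (best_t is None or t < best_t):
                best_t, best_pt = t, (ox + t * dx, oy + t * dy)
        return best_pt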
The answer on this page seems to be the most accurate.