I have quite a specific task.
I need to compute the alpha shape of a set of points. (There are already implemented algorithms one can experiment with.)
The point is that I have predefined subsets of points (let's call them details), and I do not want their structure to change. For example, suppose these polygons are the details:
Then the following hulls are OK, depending on the alpha radius:
And the following is not:
In brief, I want the structure of the specified subsets of points to stay unchanged while the radius is reduced.
So, what do you think:
Can I use an already implemented algorithm, or do I need to come up with a specific one?
Is there an open-source implementation of the alpha-shape algorithm anywhere? (Alpha shape, not concave hull: it must split the contour into several parts as the radius is reduced.)
Well, I finally solved this using constrained Delaunay triangulation.
The idea (which Yves Daoust shared in a comment on the question) was to build the alpha shape not from a plain Delaunay triangulation but from a constrained Delaunay triangulation.
Algorithm: In brief, I:
Took the convex hull of the given polygons (the details).
Computed its constrained Delaunay triangulation (the constraining segments are the polygons' edges).
For this step I used the Triangle .NET library for C#; I assume every popular language has an equivalent.
Built the alpha shape: threw away every triangle that has an edge longer than the predefined alpha (see the sketch below).
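The filtering step on its own is tiny. Below is a minimal C++ sketch, independent of Triangle .NET, that keeps only the triangles whose edges are all shorter than alpha; the Triangle struct is my own simplification of what a triangulation library would return:

#include <cmath>
#include <vector>

struct Pt { double x, y; };
struct Triangle { Pt a, b, c; };

static double edgeLength(const Pt& p, const Pt& q) {
    return std::hypot(q.x - p.x, q.y - p.y);
}

// Keep only triangles of the constrained triangulation whose edges are all
// shorter than alpha; the boundary of the kept set is the alpha shape.
std::vector<Triangle> filterByAlpha(const std::vector<Triangle>& tris, double alpha) {
    std::vector<Triangle> kept;
    for (const Triangle& t : tris) {
        if (edgeLength(t.a, t.b) <= alpha &&
            edgeLength(t.b, t.c) <= alpha &&
            edgeLength(t.c, t.a) <= alpha)
            kept.push_back(t);
    }
    return kept;
}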
Results of my struggles:
Alpha = 1000, alpha shape is just a convex hull
Alpha = 400
Alpha = 30. Only very small concavities are smoothed out
Feel free to write me for a deeper explanation, if you wish.
I've been working with Boost.Geometry, mostly for manipulating polygons. I was using the built-in centroid method (http://www.boost.org/doc/libs/1_55_0/libs/geometry/doc/html/geometry/reference/algorithms/centroid/centroid_2.html) to calculate the geometric (bary)center of my polygons. But recently, after outputting the coordinates of the points composing a specific polygon (and analyzing them on the side with some Python scripts), I realized that the centroid coordinates returned by that method do not correspond to the mean of the polygon's vertices.
I'm in two dimensions, and putting it into an equation, I would expect:
x_{\text{centroid}} = \frac{1}{N} \sum_{i=1}^{N} x_i
where N is the number of points composing the polygon, and the same for the y coordinates. I now suspect that this has to do with the fact that Boost.Geometry is not just looking at the points on the edge of the polygon (its outer ring) but treating it as a filled object.
Does any of you have some experience in manipulating these functions?
By the way, I'm using:
point my_center(0, 0);
bg::centroid(my_polygon, my_center);  // result is written into my_center
to compute the centroid.
Thank you.
In Boost.Geometry the algorithm proposed by Bashein and Detmer [1] is used by default for the calculation of a centroid of Areal Geometries.
The reason is that the simple average method fails for a case where many closely spaced vertices are placed at one side of a Polygon.
[1] Gerard Bashein and Paul R. Detmer. “Centroid of a Polygon”. Graphics Gems IV, Academic Press, 1994, pp. 3–6
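For reference, the area-weighted centroid of a simple polygon with vertices $(x_i, y_i)$, $i = 0, \dots, N-1$ (indices taken modulo $N$) is

A = \frac{1}{2} \sum_{i=0}^{N-1} \left( x_i y_{i+1} - x_{i+1} y_i \right)

C_x = \frac{1}{6A} \sum_{i=0}^{N-1} (x_i + x_{i+1}) \left( x_i y_{i+1} - x_{i+1} y_i \right), \qquad C_y = \frac{1}{6A} \sum_{i=0}^{N-1} (y_i + y_{i+1}) \left( x_i y_{i+1} - x_{i+1} y_i \right)

Each term is weighted by a signed area rather than counted once per vertex, so packing many vertices along one side does not shift the result, which is exactly the failure mode of the simple average described above.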
That's what the centroid is -- the mean of the infinite number of points making up the filled polygon. It sounds like what you want is not the centroid, but just the average of the vertices.
Incidentally, "geometric mean" has a different definition than you think, and is not in any way applicable to this situation.
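If the plain vertex average is what's actually wanted, it can be computed directly from the outer ring. A minimal sketch with Boost.Geometry (the polygon in the WKT string is just an illustrative assumption):

#include <iostream>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

namespace bg = boost::geometry;
using point = bg::model::d2::point_xy<double>;
using polygon = bg::model::polygon<point>;

int main() {
    polygon poly;
    // Many vertices crowded along the left edge, few elsewhere:
    bg::read_wkt("POLYGON((0 0,0 1,0 2,0 3,0 4,0 5,4 5,4 0,0 0))", poly);

    point area_centroid(0, 0);
    bg::centroid(poly, area_centroid);   // area-weighted centroid

    // Plain average of the outer-ring vertices (the closing point repeats
    // the first vertex, so it is skipped):
    const auto& ring = bg::exterior_ring(poly);
    double sx = 0, sy = 0;
    const std::size_t n = ring.size() - 1;
    for (std::size_t i = 0; i < n; ++i) {
        sx += bg::get<0>(ring[i]);
        sy += bg::get<1>(ring[i]);
    }

    std::cout << "centroid:       " << bg::get<0>(area_centroid) << ", "
              << bg::get<1>(area_centroid) << "\n";
    std::cout << "vertex average: " << sx / n << ", " << sy / n << "\n";
}

With the crowded left edge in this example the vertex average is pulled to x = 1, while bg::centroid stays at the geometric center (2, 2.5).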
The centroid of a polygon is the mass center of a plane figure (a sheet of paper, for example), not the center of its vertices only.
It is simple to fill a rectangle: just make a grid. But if the polygon is arbitrary, the task is not so trivial.
"Regularly" can probably be formulated as: the distance between neighbouring points should be R ± alpha. But I'm not sure about this.
Maybe there is some known algorithm to achieve this.
Added:
I need to generate a net with no large holes and no dense clusters of points.
Have you thought about using a force-directed layout of the points?
Scatter a number of points randomly over the bounding box of your polygon, then repeatedly apply two simple rules to adjust their location:
If a point is outside of the polygon, move it the minimum possible distance so that it lies within, i.e.: to the closest point on the polygon edge.
Points repel each other with a force inversely proportional to the distance between them, i.e.: for every point, consider every other point and compute a repulsion vector that will move the two points directly apart. The vector should be large for proximate points and small for distant points. Sum the vectors and add to the point's position.
After a number of iterations the points should settle into a steady state with an even distribution over the polygon area. How quickly this state is achieved depends on the geometry of the polygon and how you've scaled the repulsive forces between the points.
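The loop below is a rough C++ sketch of that scheme, assuming two geometric helpers (pointInPolygon, closestPointOnBoundary) that are only declared here; any standard point-in-polygon test and closest-point-on-segment routine will do:

#include <vector>

struct Vec2 { double x, y; };

// Hypothetical helpers, to be provided by your geometry code:
bool pointInPolygon(const Vec2& p, const std::vector<Vec2>& poly);
Vec2 closestPointOnBoundary(const Vec2& p, const std::vector<Vec2>& poly);

void relax(std::vector<Vec2>& pts, const std::vector<Vec2>& poly,
           int iterations, double strength)
{
    for (int it = 0; it < iterations; ++it) {
        std::vector<Vec2> force(pts.size(), Vec2{0.0, 0.0});
        // Rule 2: pairwise repulsion, inversely proportional to distance.
        for (std::size_t i = 0; i < pts.size(); ++i) {
            for (std::size_t j = 0; j < pts.size(); ++j) {
                if (i == j) continue;
                double dx = pts[i].x - pts[j].x;
                double dy = pts[i].y - pts[j].y;
                double d2 = dx * dx + dy * dy + 1e-9;   // avoid division by zero
                force[i].x += strength * dx / d2;
                force[i].y += strength * dy / d2;
            }
        }
        for (std::size_t i = 0; i < pts.size(); ++i) {
            pts[i].x += force[i].x;
            pts[i].y += force[i].y;
            // Rule 1: push escaped points back to the nearest boundary point.
            if (!pointInPolygon(pts[i], poly))
                pts[i] = closestPointOnBoundary(pts[i], poly);
        }
    }
}

Damping the forces (for example, shrinking strength a little each iteration) helps the layout settle faster.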
You can compute a Constrained Delaunay triangulation of the polygon and use a Delaunay refinement algorithm (search with this keyword).
I have recently implemented refinement in the Fade2D library, http://www.geom.at/fade2d/html/. It takes an arbitrary polygon without self-intersections as well as an upper bound on the radius of the circumcircle of each resulting triangle. This feature is not contained in the current release 1.02 yet, but I can compile the current development version for Linux or Win64 if you want to try it.
As part of a project I'm working on, I need to generate a 2D triangular mesh.
At the moment I've implemented a Delaunay triangulation algorithm: I input a set of vertices and it triangulates them, and that works out great.
However, I'd like to improve on this and instead input a set of vertices that represent the edge of an arbitrary 2D shape (with no holes), and generate a (as uniformly as possible) mesh inside that shape, with varying degrees of precision (target number of triangles).
My Google skills seem to be lacking today, and I haven't found quite what I'm looking for.
Does anyone know of an algorithm / library / concept that will set me on my way?
The triangles of the (possibly non-convex) 2D shape must not cross the border edges; a constrained Delaunay triangulation can achieve that.
One solution: Triangulate with Fade [1] and insert the edges of the polygon. A uniform mesh inside the area can then be created using Delaunay Refinement.
[1] http://www.geom.at/fade2d/html/
Hope this helps.
What is the best method to detect whether the red rectangle overlaps the black polygon? Please refer to this image:
There are four cases.
Rect is outside of Poly
Rect intersects Poly
Rect is inside of Poly
Poly is inside of Rect
First: check an arbitrary point in your Rect against the Poly (see Point in Polygon). If it's inside you are done, because it's either case 3 or 2.
If it's outside case 3 is ruled out.
Second: check an arbitrary point of your Poly against the Rect to validate/rule out case 4.
Third: check the lines of your Rect against the Poly for intersection to validate/rule out case 2.
This should also work for Polygon vs. Polygon (convex and concave) but this way it's more readable.
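Here is a compact C++ sketch of the three checks above; the type names and helpers are mine, and the segment test only detects proper crossings (touching or collinear edges would need extra handling):

#include <array>
#include <vector>

struct Pt { double x, y; };

static double cross(const Pt& o, const Pt& a, const Pt& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Ray-casting point-in-polygon test (works for concave polygons too).
bool pointInPolygon(const Pt& p, const std::vector<Pt>& poly) {
    bool inside = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        if ((poly[i].y > p.y) != (poly[j].y > p.y) &&
            p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                      (poly[j].y - poly[i].y) + poly[i].x)
            inside = !inside;
    }
    return inside;
}

// Proper segment/segment crossing test via orientation signs.
bool segmentsIntersect(const Pt& a, const Pt& b, const Pt& c, const Pt& d) {
    double d1 = cross(c, d, a), d2 = cross(c, d, b);
    double d3 = cross(a, b, c), d4 = cross(a, b, d);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}

// Overlap test following the three steps above; rect given as its 4 corners.
bool rectOverlapsPolygon(const std::array<Pt, 4>& rect, const std::vector<Pt>& poly) {
    // Step 1: a rectangle corner inside the polygon covers cases 2 and 3.
    if (pointInPolygon(rect[0], poly)) return true;
    // Step 2: a polygon vertex inside the rectangle covers case 4.
    std::vector<Pt> rectPoly(rect.begin(), rect.end());
    if (pointInPolygon(poly[0], rectPoly)) return true;
    // Step 3: edge/edge intersection covers the remaining variants of case 2.
    for (std::size_t i = 0; i < 4; ++i) {
        const Pt& r1 = rect[i];
        const Pt& r2 = rect[(i + 1) % 4];
        for (std::size_t j = 0, k = poly.size() - 1; j < poly.size(); k = j++)
            if (segmentsIntersect(r1, r2, poly[k], poly[j])) return true;
    }
    return false;
}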
If your polygon is not convex, you can use tessellation to subdivide it into convex subparts. Since you are looking for methods to detect a possible collision, I think you could have a look at the GJK algorithm too. Even if you do not need something that powerful (it provides information on the minimum distance between two convex shapes and the associated witness points), it could prove to be useful if you decide to handle more different convex shapes.
Christer Ericson made a nice Powerpoint presentation if you want to know more about this algorithm. You could also take a look at his book, Real-Time Collision Detection, which is both complete and accessible for anyone discovering collision detection algorithms.
If you know for a fact that the red rectangle is always axis-aligned and that the black region consists of several axis-aligned rectangles (I'm not sure if this is just a coincidence or if it's inherent to the problem), then you can use the rectangle-on-rectangle intersection algorithm to very efficiently compute whether the two shapes overlap and, if so, where they overlap.
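For that special case the test is just a pair of interval-overlap checks, roughly like this (struct and field names are my own):

struct AABB { double minX, minY, maxX, maxY; };

bool aabbOverlap(const AABB& a, const AABB& b) {
    return a.minX <= b.maxX && b.minX <= a.maxX &&   // x intervals overlap
           a.minY <= b.maxY && b.minY <= a.maxY;     // y intervals overlap
}

// For a black region made of several axis-aligned rectangles, test the red
// rectangle against each piece and OR the results.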
If you use axis-aligned rectangles and polygons consist of rectangles only, templatetypedef's answer is what you need.
If you use arbitrary polygons, it's a much more complex problem.
First, you need to subdivide the polygons into convex parts, then perform collision detection using, for example, SAT (the separating axis theorem); a sketch follows below.
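A minimal C++ sketch of SAT for two convex polygons, with names of my own choosing:

#include <limits>
#include <vector>

struct V2 { double x, y; };

static void projectOntoAxis(const std::vector<V2>& poly, const V2& axis,
                            double& lo, double& hi) {
    lo = std::numeric_limits<double>::max();
    hi = std::numeric_limits<double>::lowest();
    for (const V2& p : poly) {
        double d = p.x * axis.x + p.y * axis.y;   // dot product
        if (d < lo) lo = d;
        if (d > hi) hi = d;
    }
}

static bool separatedByAnEdgeOf(const std::vector<V2>& a, const std::vector<V2>& b) {
    for (std::size_t i = 0, j = a.size() - 1; i < a.size(); j = i++) {
        // Normal of edge (a[j], a[i]); no need to normalize for an overlap test.
        V2 axis{a[j].y - a[i].y, a[i].x - a[j].x};
        double aLo, aHi, bLo, bHi;
        projectOntoAxis(a, axis, aLo, aHi);
        projectOntoAxis(b, axis, bLo, bHi);
        if (aHi < bLo || bHi < aLo) return true;  // found a separating axis
    }
    return false;
}

bool convexPolygonsIntersect(const std::vector<V2>& a, const std::vector<V2>& b) {
    return !separatedByAnEdgeOf(a, b) && !separatedByAnEdgeOf(b, a);
}

Each convex piece of the decomposed polygon would then be tested against the rectangle this way.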
Simply to find whether there is an intersection, I think you may be able to combine two algorithms.
1) The ray casting algorithm. Using the vertices of each polygon, determine if one of the vertices is in the other. Assuming you aren't worried about the actual intersection region, but just the existence of it. http://en.wikipedia.org/wiki/Point_in_polygon
2) Line intersection. If step 1 produces nothing, check line intersection.
I'm not certain this is 100% correct or optimal.
If you actually need to determine the region of the intersection, that is more complex, see previous SO answer:
A simple algorithm for polygon intersection
I've been scouring the internet for days, but have been unable to find a good answer (or at least one that made sense to me) to what seems like it should be a common question. How does one scale an arbitrary polygon? In particular, concave polygons. I need an algorithm which can handle concave (definitely) and self-intersecting (if possible) polygons. The obvious and simple algorithm I've been using to handle simple convex polygons is calculating the centroid of the polygon, translating that centroid to the origin, scaling all the vertices, and translating the polygon back to its original location.
This approach does not work for many (or maybe all) concave polygons as the centroid often falls outside the polygon, so the scaling operation also results in a translation and I need to be able to scale the polygon "in place" without the final result being translated.
Is anybody aware of a method for scaling concave polygons? Or maybe a way of finding the "visual center" which can be used as a frame of reference for the scaling operation?
Just to clarify, I'm working in 2D space and I would like to scale my polygons using the "visual center" as the frame of reference. So maybe another way to ask the question would be, how do I find the visual center of a concave and/or self-intersecting polygon?
Thanks!
I'm not sure what your problem is.
You're working in an affine space, and you're looking for an affine transformation to scale your polygon?
If I'm right, just write the transformation matrix:
scaling matrix
homothety
and transform your polygon with that matrix.
You can look up affine transformation matrices.
Hope it helps.
EDIT
If you want to keep the same "center", you can just apply a homothety of ratio lambda with center G = the barycenter of the polygon:
it verifies h(M) = G + \lambda \, \vec{GM}.
G won't move, since it's the center of the homothety.
It will still verify the barycenter relation \sum_i \vec{G A_i} = \vec{0} (you just multiply the relation by lambda), so it will still be the barycenter.
In your case G is easy to determine: G(x, y) = (average of the x values of the vertices, average of the y values),
and it should do what you need.
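A minimal C++ sketch of that homothety, with my own type names: compute G as the vertex average, then map each vertex P to G + lambda * (P - G).

#include <vector>

struct P2 { double x, y; };

std::vector<P2> scaleAboutBarycenter(const std::vector<P2>& poly, double lambda) {
    // G = average of the vertices.
    double gx = 0, gy = 0;
    for (const P2& p : poly) { gx += p.x; gy += p.y; }
    gx /= poly.size();
    gy /= poly.size();

    // P' = G + lambda * (P - G)
    std::vector<P2> out;
    out.reserve(poly.size());
    for (const P2& p : poly)
        out.push_back({gx + lambda * (p.x - gx), gy + lambda * (p.y - gy)});
    return out;
}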
Perhaps Craig is looking for a "polygon offset" algorithm - where each edge in the polygon is offset by a given value. For example, given a clockwise oriented polygon, offsetting edges towards the left will increase the size of the polygon. If this is what Craig is looking for then this has been asked and answered before here - An algorithm for inflating/deflating (offsetting, buffering) polygons.
If you're looking for a ready made (opensource freeware) solution, I've also created a clipping library (Clipper) written in Delphi, C++ and C# which includes a rather simple polygon offsetting function.
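As a rough illustration, offsetting a square outward with the C++ version of Clipper might look like the following. This is based on the Clipper 6.x API as I recall it, so treat the exact names as assumptions and check the library documentation; note that Clipper works on integer coordinates, so real-world values are typically scaled up first.

#include "clipper.hpp"

int main() {
    ClipperLib::Path subject;
    subject.push_back(ClipperLib::IntPoint(0, 0));
    subject.push_back(ClipperLib::IntPoint(1000, 0));
    subject.push_back(ClipperLib::IntPoint(1000, 1000));
    subject.push_back(ClipperLib::IntPoint(0, 1000));

    ClipperLib::ClipperOffset offsetter;
    offsetter.AddPath(subject, ClipperLib::jtRound, ClipperLib::etClosedPolygon);

    ClipperLib::Paths solution;
    offsetter.Execute(solution, 50.0);   // positive delta inflates, negative deflates
    // 'solution' now holds the offset polygon(s).
}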
The reason you can't find a good answer is that you are being imprecise with your requirements. First, explicitly define what you mean by "in place": what is being kept constant?
Once you have figured that out, then translate the constant point to the origin, scale the polygon as usual, and translate back.