Convex hull solving using a rubber band? [closed] - algorithm

The convex hull can be found by stretching a rubber band so that it contains all the points and then releasing it.
So my question is: let's assume that we have a (theoretical) robot to solve this problem.
We give it the coordinates of our points (we have n points).
It uses some pins to mark the points on a board (O(n)).
Now we choose a point (it's not important which one we choose), compute its distance to every other point (sqrt(dx^2 + dy^2)), and find the maximum distance.
Then the robot takes a rubber band and stretches it into a circle whose radius is the maximum distance we just found, centered on the point we chose. Then it releases the band.
Then the robot follows the rubber band to find the vertices of the convex hull in O(m), where m is the number of vertices of the convex hull (m <= n).
So the total running time of the algorithm (this way) would be O(n).
I know I did not take into account the time the rubber band needs to stretch or to contract,
but assuming we have lots of points, the contraction/stretching takes much less than O(n).
Is there any way to simulate the effect of the rubber band in a computer?
I know the lowest possible complexity for convex hull is said to be O(n log n) due to the sorting lower bound.

I guess you could emulate that "rubber band algorithm" using some sort of optimization algorithm, but it would probably be horribly slow. Keep in mind that in a sense, the physical world is a gigantic, immensely complex computer, all the time figuring out complex stuff such as gravity, magnetic force, and, last but not least, collision detection.
First, let's do the setup:
the rubber band is represented as a doubly-linked list of nodes holding the position of each "atom" in the rubber band (thinking of the rubber band as a 1-d chain of atoms)
the pins are represented by some sort of spatial map, or very fine-grained n-dimensional array holding the information of whether some small region contains a pin or not
Now, the actual algorithm:
whenever an "atom" in the rubber band touches/is very near to a pin (according to the spatial map, or n-d array) that atom is fixed and can no longer move
for all other atoms, slightly alter their positions in order to minimize the distances to their respective adjacent neighbours; you could do this with, e.g., stochastic optimization or a swarm algorithm
repeat until all the atoms have "settled down"
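As a toy illustration (not an efficient implementation), the relaxation loop might look like this in Python; a brute-force pin lookup stands in for the spatial map, and all names and constants are arbitrary:

```python
import math

EPS = 1e-3    # "touching" threshold for pins
STEP = 0.01   # relaxation step size

def near_pin(p, pins):
    return any(math.dist(p, q) < EPS for q in pins)

def relax(atoms, pins, max_iters=100000):
    """atoms: list of (x, y) positions forming the closed band;
    pins: list of (x, y) pin positions."""
    fixed = [False] * len(atoms)
    n = len(atoms)
    for _ in range(max_iters):
        moved = 0.0
        for i in range(n):
            if fixed[i]:
                continue
            if near_pin(atoms[i], pins):
                fixed[i] = True   # the atom sticks to the pin
                continue
            # pull each free atom toward the midpoint of its two neighbours,
            # which locally shortens the band (the "contraction")
            (px, py), (nx, ny) = atoms[(i - 1) % n], atoms[(i + 1) % n]
            mx, my = (px + nx) / 2, (py + ny) / 2
            x, y = atoms[i]
            atoms[i] = (x + STEP * (mx - x), y + STEP * (my - y))
            moved += abs(mx - x) + abs(my - y)
        if moved < 1e-9:          # all atoms have settled down
            break
    return atoms
```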
Of course, the complexity of this algorithm is terrible, and far worse than O(n) or even O(n log n), because all the expensive computation of the "rubber band" is usually performed by that great physics engine called the universe. (You could probably achieve a similar result by entering the "rubber band and board of pins" problem into any modern physics simulation.)

"is there anyway to simulate the effect of the rubber band in computer": no, not as regards computational complexity. A computer operation handles a constant number of operands at a time. For instance, typical convex hull algorithms take the points three by three and check whether they form a clockwise or counterclockwise triangle. This is said to be done in constant time.
Releasing the band involves all N points and cannot be implemented as a primitive operation.
If you try to somehow emulate it with a computer, you can be sure that it will take at least O(N log N) operations. Anyway, in a discrete universe (integer coordinates), O(N) could be possible using radix sort.

but assuming we have lots of points, the contraction/stretching takes much less than O(n).
No it doesn't, because of this step:
Now we choose a point (it's not important which one we choose), compute its distance to every other point (sqrt(dx^2 + dy^2)), and find the maximum distance.
You cannot find this max distance in less than O(n).
Also:
Then the robot takes a rubber band and stretches it into a circle whose radius is the maximum distance we just found, centered on the point we chose. Then it releases the band.
Then the robot follows the rubber band to find the vertices of the convex hull in O(m), where m is the number of vertices of the convex hull (m <= n).
This takes O(m*n) time; see the Jarvis march algorithm. You need to check that each point is actually part of the convex hull; you can't just extend the elastic band once and be done with it.
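For reference, a compact sketch of the Jarvis march (ignoring collinear and other degenerate cases):

```python
def convex_hull_jarvis(pts):
    """Gift wrapping / Jarvis march: O(n*m), m = number of hull vertices."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    n = len(pts)
    start = min(range(n), key=lambda i: pts[i])  # leftmost point is on the hull
    hull, p = [], start
    while True:
        hull.append(pts[p])
        q = (p + 1) % n          # any candidate other than p
        for r in range(n):
            # if r is clockwise of the current candidate q, r wraps tighter
            if cross(pts[p], pts[q], pts[r]) < 0:
                q = r
        p = q
        if p == start:
            break
    return hull
```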

Related

known algorithm to reduce the number of points in a closed contour

I have a library that produces a contour, as a list of coordinates like:
[[464.5, 551. ],
[464.5, 550. ],
[464. , 549.5],
[463.5, 549. ],
[463. , 548.5],
[462. , 548.5],
[461. , 548.5],
[460.5, 549. ],
[460. , 549.5],
[459. , 549.5],
[458. , 549.5],
[457. , 549.5],
...
Coordinates are connected by straight lines, defining a closed, irregular, non-self-intersecting polygon.
From the example above, we can see that some points could be removed without losing any surface area, but I don't care if the algorithm incurs some loss, as long as it is configurable (like intersection of area over union of area > x, or something else?).
Is there a known algorithm that will reduce the number of points of a closed contour?
PS: the naive algorithm is to test all subsets of points and take the smallest subset that stays within the acceptable loss. The issue is that I might have hundreds of coordinates, and the number of subsets is exponential (2^(coord_count)). Even computing the loss is expensive: I need to compute the intersection and union of two polygons and then compute their areas.
EDIT:
Removing consecutive points that are aligned is easy and will certainly be the first step to decrease the time complexity of the following steps.
What I want is a new polygon whose surface coverage is nearly the same but that has far fewer coordinates; I don't even care if the new polygon doesn't use any of the coordinates of the original one (but this seems even more complex than removing some points of the original polygon).
I suggest the following procedure:
For each set of three consecutive points, check that the line joining the two outer points does not intersect the polygon.
Calculate the "area contribution" if the middle point is removed; this will be negative if the corner is convex and positive if it is concave.
If you want the optimal result with the fewest points, always remove the point which minimizes the net change in area at any stage. Be careful with signs.
Repeat this until the next optimal net change exceeds the specified tolerance.
The naive version of this algorithm is O(N^2) in the worst case. You can optimize it somewhat by using a BST / heap to keep track of area deltas corresponding to each point, although updates might be fiddly. A quadtree for intersection testing might also be useful, although it incurs a setup penalty of O(N log N) which can only be negated if a large number of points is removed.
Douglas-Peucker doesn't always produce the optimum result (as many points removed as possible without exceeding the area difference threshold); nor does the original algorithm take self-intersection into account.
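A minimal, unoptimized sketch of the greedy procedure above (the self-intersection test from the first step is omitted for brevity; the polygon is assumed to be given counterclockwise as (x, y) pairs):

```python
def signed_area_delta(poly, i):
    """Signed change in polygon area if poly[i] is removed: minus the signed
    area of the triangle (prev, poly[i], next); negative at convex corners."""
    a, b, c = poly[i - 1], poly[i], poly[(i + 1) % len(poly)]
    return -0.5 * ((b[0] - a[0]) * (c[1] - a[1])
                   - (b[1] - a[1]) * (c[0] - a[0]))

def simplify(poly, tolerance):
    """Greedily remove the vertex that keeps the cumulative signed area
    change smallest, until the next removal would exceed the tolerance."""
    poly = list(poly)
    total = 0.0
    while len(poly) > 3:
        i = min(range(len(poly)),
                key=lambda k: abs(total + signed_area_delta(poly, k)))
        delta = signed_area_delta(poly, i)
        if abs(total + delta) > tolerance:
            break
        total += delta
        del poly[i]
    return poly
```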
Alex Kemper's comment on the question answered my question:
The Ramer-Douglas-Peucker algorithm is quite good at reducing the points of a route. The desired accuracy (= max error) can be specified.
I used this algorithm, implemented in the scikit-image library: skimage.measure.approximate_polygon
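Usage is roughly as follows (the tolerance value here is only an example):

```python
import numpy as np
from skimage.measure import approximate_polygon

contour = np.array([[464.5, 551.0],
                    [464.5, 550.0],
                    [464.0, 549.5],
                    [463.5, 549.0]])  # ... rest of the contour from above

# tolerance = maximum distance allowed between the original contour and
# its approximation (the "max error" mentioned in the comment)
simplified = approximate_polygon(contour, tolerance=1.0)
```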

Hill climbing in an n-dimensional space: finding the neighbor

In hill climbing in one dimension, I try two neighbors: a small delta to the left and one to the right of my current point, and then keep the one that gives a higher value of the objective function. How do I extend this to an n-dimensional space? How does one define a neighbor in an n-dimensional space? Do I have to try 2^n neighbors (a delta applied to each dimension)?
You don't need to compare each pair of neighbors; you need to compute a set of neighbors, e.g. on a circle (a sphere/hypersphere in higher dimensions) with a radius of delta, and then take the one with the highest value to "climb up". In any case you will discretize the neighborhood of your current solution and compute the score function for each neighbor. If your function is differentiable, gradient ascent/descent based algorithms may solve your problem:
1) Compute the gradient (direction of steepest ascent)
2) Go a small step into the direction of the gradient
3) Stop when the solution no longer changes
A common problem with these algorithms is that you often only find local maxima/minima. You can find a great overview of gradient descent/ascent algorithms here: http://sebastianruder.com/optimizing-gradient-descent/
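A sketch of the hypersphere-neighbor idea from the beginning of this answer (radius, sample count, and iteration cap are arbitrary illustrative choices):

```python
import numpy as np

def hill_climb(f, x0, delta=0.1, n_neighbors=20, max_iters=1000):
    """Maximize f by sampling neighbors on a hypersphere of radius delta
    around the current point."""
    rng = np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iters):
        # uniform random directions: normalized Gaussian samples
        dirs = rng.normal(size=(n_neighbors, x.size))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        candidates = x + delta * dirs
        values = np.array([f(c) for c in candidates])
        best = values.argmax()
        if values[best] <= fx:   # no neighbor improves: (local) optimum reached
            break
        x, fx = candidates[best], values[best]
    return x, fx

# e.g. hill_climb(lambda v: -np.sum(v**2), [3.0, 4.0]) walks toward the origin
```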
If you are using IEEE-754 floating-point numbers, then the obvious answer is something like (2^52*(log_2(delta)+1023))^(n-1)+1 if delta >= 2^(-1022) (more or less, depending on your search space...), as that is the only way you can be certain that there are no more neighboring solutions within a distance of delta.
Even assuming you instead take a random fixed-size sample of all points within a given distance delta, let's say delta = 0.1, you would still have the problem that if the distance from the local optimum were 0.0001, the probability of finding an improvement in just one dimension would be less than 0.0001/0.1/2 = 0.05%, so you would need to take more and more random samples as you get closer to the local optimum (whose value you don't know...).
Obviously hill climbing is not intended for the real number space or theoretical graph spaces with infinite degree. You should instead be using a global search algorithm.
One example of a multidimensional search algorithm which needs only O(n) neighbours instead of O(2^n) neighbours is the Torczon simplex method described in Multidirectional search: A direct search algorithm for parallel machines (1989). I chose this over the more widely known Nelder-Mead method because the Torczon simplex method has a convergence proof (convergence to a local optimum given some reasonable conditions).

Largest triangle from a set of points [duplicate]

Possible Duplicate:
How to find largest triangle in convex hull aside from brute force search
I have a set of random points from which I want to find the largest triangle by area whose vertices each lie on one of those points.
So far I have figured out that the largest triangle's vertices will only lie on the outer points of the cloud (i.e., the convex hull), so I have programmed a function to compute just that (using a Graham scan in O(n log n) time).
However, that's where I'm stuck. The only way I can figure out how to find the largest triangle from these points is brute force in O(n^3) time, which is still acceptable in the average case, as the convex hull algorithm usually kicks out the vast majority of points. However, in a worst-case scenario where the points are on a circle, this method would fail miserably.
Does anyone know an algorithm to do this more efficiently?
Note: I know that CGAL has this algorithm, but they do not go into any detail on how it's done. I don't want to use libraries; I want to learn this and program it myself (and also be able to tweak it to exactly the way I want it to operate, just like the Graham scan, where other implementations pick up collinear points that I don't want).
I don't know if this helps, but if you choose two points from the convex hull and rotate all points of the hull so that the line connecting those two points is parallel to the x-axis, then either the point with the maximum or the one with the minimum y-coordinate forms the largest-area triangle together with the two points chosen first.
Of course once you have tested one point for all possible base lines, you can remove it from the list.
Here's a thought on how to get it down to O(n^2 log n). I don't really know anything about computational geometry, so I'll mark it community wiki; please feel free to improve on this.
Preprocess the convex hull by finding for each point the range of slopes of lines through that point such that the set lies completely on one side of the line. Then invert this relationship: construct an interval tree for slopes with points in leaf nodes, such that when querying with a slope you find the points such that there is a tangent through those points.
If there are no sets of three or more collinear points on the convex hull, there are at most four points for each slope (two on each side), but in case of collinear points we can just ignore the intermediate points.
Now, iterate through all pairs of points (P,Q) on the convex hull. We want to find the point R such that triangle PQR has maximum area. Taking PQ as the base of the triangle, we want to maximize the height by finding R as far away from the line PQ as possible. The line through R parallel to PQ must be such that all points lie on one side of the line, so we can find a bounded number of candidates in time O(log n) using the preconstructed interval tree.
To improve this further in practice, do branch-and-bound in the set of pairs of points: find an upper bound for the height of any triangle (e.g. the maximum distance between two points), and discard any pair of points whose distance multiplied by this upper bound is less than the largest triangle found so far.
I think the rotating calipers method may apply here.
Off the top of my head, perhaps you could do something involving gridding/splitting the collection of points into groups? Maybe separate the points into three groups (I'm not sure what the best way to do that would be in this case), discard the points in each group that are closer to the other two groups than to other points in the same group, and then use the remaining points to find the largest triangle that has one vertex in each group. This would actually make the case of all points being on a circle a lot simpler, because you'd just focus on the points near the center of the arcs contained within each group, as those would be the ones in each group furthest from the other two groups.
I'm not sure if this would give you the proper result for certain triangles/distributions of points, though. There may be situations where the resulting triangle isn't of optimal area, because the grouping and/or the vertex choice isn't optimal. Something like that.
Anyway, those are my thoughts on the problem. I hope I've at least been able to give you ideas for how to work on it.
How about dropping one point at a time from the convex hull? Starting with the convex hull, calculate the area of the triangle formed by each triple of adjacent points (p1p2p3, p2p3p4, etc.). Find the triangle with minimum area, then drop the middle of the three points that formed it. (In other words, if the smallest-area triangle is p3p4p5, drop p4.) Now you have a convex polygon with N-1 points. Repeat the same procedure until you are left with three points. This should take O(N^2) time.
I would not be at all surprised if there is some pathological case where this doesn't work, but I expect that it would work for the majority of cases. (In other words, I haven't proven this, and I have no source to cite.)
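A sketch of this procedure (with the same caveat that it is unproven):

```python
def largest_triangle_heuristic(hull):
    """Repeatedly drop the middle vertex of the minimum-area triple of
    adjacent hull points; the 3 survivors are the candidate triangle."""
    def area2(a, b, c):  # twice the unsigned triangle area
        return abs((b[0] - a[0]) * (c[1] - a[1])
                   - (b[1] - a[1]) * (c[0] - a[0]))
    pts = list(hull)
    while len(pts) > 3:
        n = len(pts)
        i = min(range(n),
                key=lambda k: area2(pts[k - 1], pts[k], pts[(k + 1) % n]))
        del pts[i]  # drop the middle point of the smallest triangle
    return pts
```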

Fast algorithm for calculating union of 'local convex hulls'

I have a set of 2D points from which I want to generate a polygon (or collection of polygons) outlining the 'shape' of those points, using the following concept:
For each point in the set, calculate the convex hull of all points within radius R of that point. After doing this for each point, take the union of these convex hulls to produce the final shape.
A brute force approach of actually constructing all these convex hulls is something like O(N^2 + R^2 log R). Is there a known, more efficient algorithm to produce the same result? Or perhaps a different way of expressing the problem?
Note: I am aware of alpha shapes, they are different; I am looking for an algorithm to perform what is described above.
The following solution does not work - disproved experimentally in MATLAB.
Update: I have a proposed solution.
Proposition: take the Delaunay Triangulation of the set of points, remove all triangles having circumradius greater than R. Then take the union of the remaining triangles.
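For what it's worth, the proposition is easy to test with SciPy; a sketch (bearing in mind the note above that it was experimentally disproved):

```python
import numpy as np
from scipy.spatial import Delaunay

def small_circumradius_triangles(points, R):
    """Delaunay-triangulate the points and keep only the triangles whose
    circumradius is <= R; their union is the shape proposed above."""
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)
    keep = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        # side lengths; circumradius = (l1 * l2 * l3) / (4 * area)
        l1 = np.linalg.norm(a - b)
        l2 = np.linalg.norm(b - c)
        l3 = np.linalg.norm(c - a)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        if area > 0 and l1 * l2 * l3 / (4.0 * area) <= R:
            keep.append(simplex)
    return np.array(keep)
```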
A sweep line algorithm can speed up the search for the R-neighbors. Alternatively, you can consider only pairs of points that are in neighboring squares of a square grid of width R. Both of these ideas can get rid of the N^2 - of course, only if the points are relatively sparse.
I believe that a clever combination of sweeping and convex hull finding can get rid of the N^2 even if the points are not sparse (as in Olexiy's example), but I cannot come up with a concrete algorithm.
Yes, using rotating calipers. My prof wrote some stuff on this, it starts on page 19.
Please let me know if I misunderstood the problem.
I don't see how you get N^2 time for brute-forcing all the convex hulls in the worst case (1). What if almost any two points are closer than R? In that case you need at least N^2 log N just to construct the convex hulls, let alone compute their union.
Also, where does the R^2 log R in your estimate come from?
(1) The worst case (as I see it) for huge N: take a circle of radius R/2 and randomly place points on its border and just outside it.

Is there an efficient algorithm to generate a 2D concave hull?

Having a set of (2D) points from a GIS file (a city map), I need to generate the polygon that defines the 'contour' for that map (its boundary). Its input parameters would be the points set and a 'maximum edge length'. It would then output the corresponding (probably non-convex) polygon.
The best solution I found so far was to generate the Delaunay triangles and then remove the external edges that are longer than the maximum edge length. After all the external edges are shorter than that, I simply remove the internal edges and get the polygon I want. The problem is, this is very time-consuming and I'm wondering if there's a better way.
One of the former students in our lab used some applicable techniques for his PhD thesis. I believe one of them is called "alpha shapes" and is referenced in the following paper:
http://www.cis.rit.edu/people/faculty/kerekes/pdfs/AIPR_2007_Gurram.pdf
That paper gives some further references you can follow.
This paper, "Efficient generation of simple polygons for characterizing the shape of a set of points in the plane", discusses the problem and provides the algorithm. There's also a Java applet utilizing the same algorithm here.
The guys here claim to have developed a k nearest neighbors approach to determining the concave hull of a set of points which behaves "almost linearly on the number of points". Sadly their paper seems to be very well guarded and you'll have to ask them for it.
Here's a good set of references that includes the above and might lead you to find a better approach.
The answer may still be interesting for somebody else: one may apply a variation of the marching squares algorithm, applied (1) within the concave hull, and (2) then on (e.g., 3) different scales that may depend on the average density of points. The scales need to be integer multiples of each other, so that you build a grid you can use for efficient sampling. This makes it quick to find empty samples (squares), samples that are completely within a "cluster/cloud" of points, and those that are in between. The latter category can then easily be used to determine the polyline that represents a part of the concave hull.
Everything is linear in this approach, no triangulation is needed, it does not use alpha shapes, and it is different from the commercial/patented offering described here (http://www.concavehull.com/).
A quick approximate solution (also useful for convex hulls) is to find the north and south bounds for each small east-west element.
Based on how much detail you want, create a fixed-size array of upper/lower bounds.
For each point, calculate which E-W column it is in and then update the upper/lower bounds for that column. After you have processed all the points, you can interpolate the upper/lower points for the columns that were missed.
It's also worth doing a quick check beforehand for very long, thin shapes and deciding whether to bin N-S or E-W.
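A sketch of the binning pass, assuming the points come as (x, y) pairs:

```python
def column_bounds(points, n_cols):
    """Bucket points into n_cols east-west columns and track the north/south
    (max/min y) bound seen in each column."""
    xs = [p[0] for p in points]
    x_min, x_max = min(xs), max(xs)
    width = (x_max - x_min) / n_cols or 1.0
    upper = [None] * n_cols   # northern bound per column
    lower = [None] * n_cols   # southern bound per column
    for x, y in points:
        col = min(int((x - x_min) / width), n_cols - 1)
        if upper[col] is None or y > upper[col]:
            upper[col] = y
        if lower[col] is None or y < lower[col]:
            lower[col] = y
    # columns that received no points can be filled by interpolating neighbours
    return upper, lower
```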
A simple solution is to walk around the edge of the polygon. Given a current edge on the boundary connecting points P0 and P1, the next point on the boundary, P2, will be the point with the smallest possible A, where
H01 = bearing from P0 to P1
H12 = bearing from P1 to P2
A = fmod( H12-H01+360, 360 )
|P2-P1| <= MaxEdgeLength
Then you set
P0 <- P1
P1 <- P2
and repeat until you get back where you started.
This is still O(N^2), so you'll want to sort your point list a little. You can limit the set of points you need to consider at each iteration by sorting the points on, say, their bearing from the city's centroid.
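A sketch of a single step of this walk, using atan2 for the bearings; the outer loop advancing P0 <- P1, P1 <- P2 is exactly as described above:

```python
import math

def bearing(p, q):
    """Bearing of q from p, in degrees in [0, 360)."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])) % 360

def next_boundary_point(p0, p1, points, max_edge):
    """Pick the P2 minimizing the turn angle A defined above, subject to
    the maximum-edge-length constraint (no sorting optimization)."""
    h01 = bearing(p0, p1)
    best, best_a = None, 361.0
    for p2 in points:
        if p2 == p1 or p2 == p0:
            continue
        if math.dist(p1, p2) > max_edge:
            continue
        a = math.fmod(bearing(p1, p2) - h01 + 360.0, 360.0)
        if a < best_a:
            best, best_a = p2, a
    return best  # None means no point satisfies the edge-length constraint
```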
Good question! I haven't tried this out at all, but my first shot would be this iterative method:
1. Create a set N ("not contained"), and add all points in your set to N.
2. Pick 3 points from N at random to form an initial polygon P. Remove them from N.
3. Use some point-in-polygon algorithm and look at points in N. For each point in N, if it is now contained by P, remove it from N. As soon as you find a point in N that is still not contained in P, continue to step 4. If N becomes empty, you're done.
4. Call the point you found A. Find the edge of P closest to A, and insert A in the middle of it.
5. Go back to step 3.
I think it would work as long as it performs well enough — a good heuristic for your initial 3 points might help.
Good luck!
You can do it in QGIS with this plugin:
https://github.com/detlevn/QGIS-ConcaveHull-Plugin
Depending on how you need it to interact with your data, probably worth checking out how it was done here.
As a widely adopted reference, PostGIS starts with a convex hull and then caves it in; you can see it here.
https://github.com/postgis/postgis/blob/380583da73227ca1a52da0e0b3413b92ae69af9d/postgis/postgis.sql.in#L5819
The Bing Maps V8 interactive SDK has a concave hull option within the advanced shape operations.
https://www.bing.com/mapspreview/sdkrelease/mapcontrol/isdk/advancedshapeoperations?toWww=1&redig=D53FACBB1A00423195C53D841EA0D14E#JS
Within ArcGIS 10.5.1, the 3D Analyst extension has a Minimum Bounding Volume tool with the geometry types of concave hull, sphere, envelope, or convex hull. It can be used at any license level.
There is a concave hull algorithm here: https://github.com/mapbox/concaveman
