Let X be a collection of n points in some moderate-dimensional space - for now, say R^5. Let S be the convex hull of X, let p be a point in S, and let v be any direction. Finally, let L = {p + lambda v : lambda a real number} be the line passing through p in direction v.
I am interested in finding a reasonably efficient algorithm for computing the intersection of S with L. I'd also be interested in hearing if it is known that no such algorithm exists! Note that this intersection can be represented by the (extreme) two points of intersection of L with the boundary of S. I'm particularly interested in finding an algorithm that behaves well when n is large.
I should say that it is easy to do this very efficiently in two dimensions. In that case, one can order the points of X in 'clockwise order' as seen from p, and then do binary search. So, the initial ordering takes O(n log(n)) steps and then further lookups take O(log(n)) steps. I don't see what the analogous algorithm should be in higher dimensions. Part of the problem is that a convex body in two dimensions has n vertices and n faces, while a convex body in 3 or higher dimensions can have n vertices but many, many more than n faces.
You can write a simple linear program for this. You want to minimise/maximise lambda subject to the constraint that p + lambda v lies in the convex hull of your input points. "Lies in the convex hull of the input points" can be written as coordinatewise equality between p + lambda v and a convex combination of the input points, i.e. a weighted average with nonnegative weights that sum to 1.
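For what it's worth, here is a minimal sketch of that LP using scipy.optimize.linprog; the variable layout and the function name line_hull_intersection are my own choices. The two extreme intersection points are then p + lambda_min v and p + lambda_max v.

# Minimal sketch: variables are [lambda, w_1, ..., w_n], lambda free, weights >= 0.
import numpy as np
from scipy.optimize import linprog

def line_hull_intersection(X, p, v):
    """Return (lambda_min, lambda_max) such that p + lambda*v stays in conv(X)."""
    X = np.asarray(X, dtype=float)              # shape (n, d)
    n, d = X.shape
    # Equality constraints: sum_i w_i * X[i] - lambda * v = p  and  sum_i w_i = 1.
    A_eq = np.zeros((d + 1, n + 1))
    A_eq[:d, 0] = -np.asarray(v, dtype=float)
    A_eq[:d, 1:] = X.T
    A_eq[d, 1:] = 1.0
    b_eq = np.concatenate([np.asarray(p, dtype=float), [1.0]])
    bounds = [(None, None)] + [(0, None)] * n   # lambda free, weights nonnegative
    lam = []
    for sign in (+1.0, -1.0):                   # minimise lambda, then minimise -lambda
        c = np.zeros(n + 1)
        c[0] = sign
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
        lam.append(res.x[0])
    return lam[0], lam[1]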
As a practical matter, it may be useful to start with a handful of randomly chosen points, get a convex combination or a certificate of infeasibility, then interpret the certificate of infeasibility as a linear inequality and find the input point that most violates it. If you're using a practical solver, this means you want to formulate the dual, switch a bunch of presolve features off, and run the above essentially as a cutting-plane method using certificates of unboundedness instead. It is likely that, unless you have pathological data, you will only need to tell the LP solver about a small handful of your input points.
Given a set of 2D points X, I would like to find a convex hull consisting of at most n points. Of course, this is not always possible. Therefore, I am looking for an approximate convex hull consisting of at most n points that maximally covers the set of points X.
Stated more formally, if F(H,X) returns the number of points of X that the convex hull H covers, where |H| is the number of points out of which the hull is constructed, then I seek the following quantity:
H_hat = argmax_H F(H,X), s.t. |H| <= n
Another way to regard the problem is as the task of finding the polygon with at most n corners, chosen from a set X, that maximally covers said set X.
The way I've come up with is the following:
X = getSet()                          // Get the set of 2D points
H = convexHull(X)                     // Compute the convex hull
while |H| > n do
    n_max = 0
    for h in H:
        H_ = remove(h, H)             // Remove one point of the convex hull
        n_c = computeCoverage(X, H_)  // Compute the coverage of the modified convex hull
        // Save which point can be removed such that the coverage is reduced minimally
        if n_c > n_max:
            n_max = n_c
            h_toremove = h
    // Remove the point and recompute the convex hull
    X = remove(h_toremove, X)
    H = convexHull(X)
However, this is a very slow way of doing this. I have a large set of points and n (the constraint) is small. This means that the original convex hull contains a lot of points, hence the while-loop iterates for a very long time. Is there a more efficient way of doing this? Or are there any other suggestions for approximate solutions?
A couple of ideas.
The points that are excluded when we delete a vertex lie in the triangle (ear) formed by that vertex and its two neighbours on the convex hull. Deleting any other vertex does not affect this set of potentially excluded points. Therefore, after each deletion we only have to recompute coverage for two candidate vertices, namely the two neighbours of the deleted one.
Speaking of recomputing coverage, we don't necessarily have to look at every point. This idea doesn't improve the worst case, but I think it should be a big improvement in practice. Maintain an index of points as follows. Pick a random vertex to be the "root" and group the points by which triangle of the fan triangulation from the root contains them (O(m log m) time with a good algorithm). Whenever we delete a non-root vertex, we unite and filter the two point sets for the triangles involving the deleted vertex. Whenever we recompute coverage, we can scan only the points in the two relevant triangles. If we ever delete the root, choose a new one and rebuild the index. The total cost of this maintenance will be O(m log^2 m) in expectation, where m is the number of points. It's harder to estimate the cost of computing coverage, though.
If the points are reasonably uniformly distributed within the hull, maybe use area as a proxy for coverage. Store the vertices in a priority queue ordered by the area of the triangle (ear) formed with their two neighbours, and repeatedly delete the vertex with the smallest ear. Whenever we delete a vertex, update the ear areas of its two neighbours. This is an O(m log m) algorithm.
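A rough sketch of this ear-area greedy (essentially Visvalingam-style simplification applied to the hull), assuming the hull vertices are given in order; the helper names and the lazy-invalidation heap are my own choices:

import heapq

def tri_area(a, b, c):
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def simplify_hull(hull, n):
    """Greedily drop hull vertices with the smallest ear area until at most n remain."""
    m = len(hull)
    prev = [(i - 1) % m for i in range(m)]
    nxt = [(i + 1) % m for i in range(m)]
    alive = [True] * m

    def ear(i):
        return tri_area(hull[prev[i]], hull[i], hull[nxt[i]])

    heap = [(ear(i), i) for i in range(m)]
    heapq.heapify(heap)
    count = m
    while count > n and heap:
        area, i = heapq.heappop(heap)
        if not alive[i] or area != ear(i):   # stale entry, skip it
            continue
        alive[i] = False                     # delete vertex i from the polygon
        count -= 1
        p, q = prev[i], nxt[i]
        nxt[p], prev[q] = q, p
        for j in (p, q):                     # the neighbours' ears changed
            heapq.heappush(heap, (ear(j), j))
    return [hull[i] for i in range(m) if alive[i]]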
Maybe the following approach could work for you: initially, compute the convex hull H. Then select a subset of n points at random from H, which gives a new polygon that might not cover all the points, so let's call it a quasi-convex hull Q. Count how many points are contained in Q (inliers). Repeat this a certain number of times and keep the Q proposal with the most inliers.
This seems a bit related to RANSAC, but for this task we don't really have a notion of what an outlier is, so we can't really estimate the outlier ratio. Hence, I don't know how good the approximation will be or how many iterations you need to get a reasonable result. Maybe you can add some heuristics instead of choosing the n points purely at random, or you can have a threshold of how many points should at least be contained in Q and stop when you reach that threshold.
Edit
Actually after thinking about it, you could use a RANSAC approach:
max_inliers = 0
best_Q = None
while true:
    points = sample_n_points(X)
    Q = construct_convex_hull(points)
    n_inliers = count_inliers(Q, X)
    if n_inliers > max_inliers:
        max_inliers = n_inliers
        best_Q = Q
    if some_condition:
        break
The advantage of this is that the construction of the convex hull is faster than in your approach, as it only ever uses at most n points. Also, counting the inliers should be fast, as it boils down to a bunch of sign comparisons against each edge of the convex hull.
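As an illustration, here is one possible way to write the count_inliers step from the pseudocode above, assuming the hull vertices are given in counter-clockwise order; the vectorised formulation is my own:

import numpy as np

def count_inliers(Q, X):
    Q = np.asarray(Q, dtype=float)          # (k, 2) hull vertices, CCW order
    X = np.asarray(X, dtype=float)          # (m, 2) points to test
    inside = np.ones(len(X), dtype=bool)
    for a, b in zip(Q, np.roll(Q, -1, axis=0)):
        edge = b - a
        rel = X - a
        cross = edge[0] * rel[:, 1] - edge[1] * rel[:, 0]
        inside &= cross >= 0                # sign check against this edge
    return int(inside.sum())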
The following does not solve the question in the way I have formulated it, but it solved the problem which spawned the question. Therefore I wanted to add it in case anybody else encounters something similar.
I investigated two approaches:
1) Regard the convex hull as a polygon and apply a polygon simplification algorithm to it. The specific algorithm I investigated was the Ramer-Douglas-Peucker algorithm.
2) Apply the algorithm described in the question, without recomputing the convex hull.
Neither approach will give you (as far as I can tell) the desired solution to the optimization problem stated, but for my tasks they worked sufficiently well.
I would like to test a conjecture that says that the performance of a certain algorithm depends on the degree to which c violates the triangle inequality, where c is the distance matrix of the instance being solved by the algorithm.
My question is, what is a good way to modify the distance matrix c so that I can control the degree to which it violates the triangle inequality?
So far the best I have come up with is something like this: For parameters p and q, randomly choose p% of the elements of c, and for each chosen element, multiply it by 1+u, where u is chosen uniformly from [0,q].
I suspect we can do better. Any ideas?
I would run a number of tests. For each test I would generate two matrices: one completely random but symmetric, which will almost certainly violate the triangle inequality, and the other the distance matrix obtained by computing distances between random points in n-dimensional space, which therefore obeys the triangle inequality. Now use a weighted average of these two, with a variety of different weights. Repeat with different base matrices.
Just to add something not in the comment: if you consider any triple of points, you can work out the weight, if any, at which the triangle inequality becomes violated for that triple, because you are just checking three inequalities whose values are linear (plus a constant) in the weight. Considering all triples isn't great, since this is O(n^3) in the number of points, but the matrices have O(n^2) elements, so you can always claim it's only O(n^1.5) in the size of the input data.
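A small sketch of the blending idea, together with a brute-force count of violated triples at a given weight; the function names (and the choice of uniform random points) are my own:

import numpy as np

def blended_matrix(n, dim, weight, rng=None):
    rng = np.random.default_rng(rng)
    # Metric part: Euclidean distances between random points in R^dim.
    pts = rng.random((n, dim))
    metric = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    # Non-metric part: a random symmetric matrix with zero diagonal.
    r = rng.random((n, n))
    random_sym = (r + r.T) / 2.0
    np.fill_diagonal(random_sym, 0.0)
    return weight * random_sym + (1.0 - weight) * metric

def count_violations(c):
    """Brute-force O(n^3) count of ordered triples violating the triangle inequality."""
    n = len(c)
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if i != j and j != k and i != k and c[i][k] > c[i][j] + c[j][k]:
                    count += 1
    return count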
I have a small set of N points in the plane, N < 50.
I want to enumerate all triples of points from the set that form a triangle containing no other point.
Even though the obvious brute-force solution could be viable for my tiny N, it has complexity O(N^4).
Do you know a way to decrease the time complexity, say to O(N^3) or O(N^2), that would keep the code simple? No libraries allowed.
Much to my surprise, the number of such triangles is pretty large. Take any point as a center and sort the other ones by increasing angle around it. This forms a star-shaped polygon, which gives N-1 empty triangles, hence a total of Ω(N²). It has been shown that this bound is tight [Planar Point Sets with a Small Number of Empty Convex Polygons, I. Bárány and P. Valtr].
In the case of points forming a convex polygon, all triangles are empty, hence there are Θ(N³) of them. Chances of a fast algorithm are getting low :(
The paper "Searching for empty Convex polygons" by Dobkin, David P. / Edelsbrunner, Herbert / Overmars, Mark H. contains an algorithm linear in the number of possible output triangles for solving this problem.
A key problem in computational geometry is the identification of subsets of a point set having particular properties. We study this problem for the properties of convexity and emptiness. We show that finding empty triangles is related to the problem of determining pairs of vertices that see each other in a star-shaped polygon. A linear-time algorithm for this problem, which is of independent interest, yields an optimal algorithm for finding all empty triangles. This result is then extended to an algorithm for finding empty convex r-gons (r > 3) and for determining a largest empty convex subset. Finally, extensions to higher dimensions are mentioned.
The sketch of the algorithm by Dobkin, Edelsbrunner and Overmars goes as follows for triangles:
for every point in turn, build the star-shaped polygon formed by sorting around it the points on its left. This takes N sorting operations (which can be lowered to total complexity O(N²) via an arrangement, anyway).
compute the visibility graph inside this star-shaped polygon and report all the triangles that are formed with the given point. This takes N visibility graph constructions, for a total of M operations, where M is the number of empty triangles.
In short, from every point you construct every empty triangle lying to the left of it, by triangulating the corresponding star-shaped polygon in all possible ways.
The construction of the visibility graph is a special version for the star-shaped polygon, which works in a traversal around the polygon, where every vertex has a visibility queue which gets updated.
The figure shows a star-shaped polygon in blue and the edges of its visibility graph in orange. The outline generates 6 triangles, and the inner visibility edges another 8.
for each pair of points (A, B):
    for each of the two half-planes defined by (A, B):
        initialize a priority queue Q to empty
        for each point M in the half-plane, in order of increasing angle(AB, AM):
            if angle(BA, BM) is smaller than all angles in Q:
                print A, B, M
            put M in Q with priority angle(BA, BM)
Inserting and querying the minimum in a priority queue can both be done in O(log N) time, so the complexity is O(N^3 log N) this way.
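A direct, unoptimised Python transcription of this sweep, assuming no three points are collinear; a running minimum is enough to stand in for the priority queue here, since we only ever query the overall minimum, and the helper names are mine. Note that each empty triangle is reported once per edge, so deduplicate if a unique listing is needed.

import math
from itertools import combinations

def angle_between(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    cross = u[0] * v[1] - u[1] * v[0]
    return abs(math.atan2(cross, dot))              # unsigned angle in [0, pi]

def empty_triangles(points):
    """Yield triples (A, B, M) forming triangles that contain no other input point."""
    for A, B in combinations(points, 2):
        abx, aby = B[0] - A[0], B[1] - A[1]
        for side in (+1, -1):                       # the two half-planes of line AB
            half = []
            for M in points:
                if M is A or M is B:
                    continue
                cross = abx * (M[1] - A[1]) - aby * (M[0] - A[0])
                if cross != 0 and (cross > 0) == (side > 0):
                    ang_A = angle_between((abx, aby), (M[0] - A[0], M[1] - A[1]))
                    ang_B = angle_between((-abx, -aby), (M[0] - B[0], M[1] - B[1]))
                    half.append((ang_A, ang_B, M))
            half.sort(key=lambda t: t[0])           # increasing angle(AB, AM)
            best = math.inf                         # smallest angle(BA, BM) seen so far
            for ang_A, ang_B, M in half:
                if ang_B < best:                    # no earlier point lies inside ABM
                    yield (A, B, M)
                best = min(best, ang_B)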
If I understand your question, what you're looking for is https://en.wikipedia.org/wiki/Delaunay_triangulation
To quote from said Wikipedia article: "The most straightforward way of efficiently computing the Delaunay triangulation is to repeatedly add one vertex at a time, retriangulating the affected parts of the graph. When a vertex v is added, we split in three the triangle that contains v, then we apply the flip algorithm. Done naively, this will take O(n) time: we search through all the triangles to find the one that contains v, then we potentially flip away every triangle. Then the overall runtime is O(n^2)."
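If libraries were allowed, SciPy exposes this directly. Each Delaunay triangle has an empty circumcircle and is therefore itself an empty triangle, though the triangulation yields only a small subset of all empty triangles:

import numpy as np
from scipy.spatial import Delaunay

points = np.random.rand(30, 2)         # example input points
tri = Delaunay(points)
for simplex in tri.simplices:           # each row holds the indices of one triangle
    print(points[simplex])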
My problem is: given N points in a plane and a number R, list/enumerate all subsets of points such that the points in each subset are enclosed by a circle with radius R. Two subsets should be different, and neither should be contained in the other.
Efficiency may not be important, but the algorithm should not be too slow.
As a special case, can we find the K subsets with the most points? An approximation algorithm is acceptable.
Thanks,
Edit: It seems that the statement was not clear. My bad!
So I restate my question as follows: given N points and a circle with fixed radius R, scan the whole space with the circle. At any given position, the circle covers some subset of the points. The goal is to list all possible subsets of points that can be covered by such an R-radius circle. One subset should not be a superset of another.
I am not sure I get what you mean by 'not covered'. If you drop this, what you are looking for is exactly a Čech complex, whose complexity is high; you won't have an efficient algorithm if you don't have conditions on the sampling (the sampling should be sparse enough and R not too big, otherwise you could have 2^n subsets, with n your number of points). You have to enumerate all subsets and check whether their minimal enclosing ball has radius at most R. You can reduce the search to subsets whose diameter is at most 2R (i.e. all pairwise distances at most 2R), which may be sufficient in your case.
If 'not covered' for two subsets means that one is not included in the other, you can have many different decompositions. One of interest is the alpha complex, as it can be computed efficiently in O(n log n) in dimensions 2-3 (I would suggest using CGAL to compute it; you can also see what it means with pictures). If your points are high-dimensional, then you will probably end up computing a Čech complex.
Without loss of generality, we can assume that the enclosing circles considered pass through at least two points (ignoring the trivial cases of no points or one point, and assuming that your motivation is maximizing density, so that you don't care if non-maximal subsets are omitted). Build a proximity structure (kd-tree, cover tree, etc.) on the input points. For each input point p, use the structure to find all points q such that d(p, q) ≤ 2R. For each such q, there are one or two circles of radius R that have p and q on their boundary. Find their centers by solving some quadratic equations, and then check which of the nearby points lie inside each circle to determine the subset.
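A sketch of this idea in 2D, using SciPy's cKDTree as the proximity structure; the function name, the boundary tolerance, and the final maximality filter are my own additions:

import numpy as np
from scipy.spatial import cKDTree

def candidate_subsets(points, R):
    pts = np.asarray(points, dtype=float)               # (N, 2) array of points
    tree = cKDTree(pts)
    subsets = set()
    for i, p in enumerate(pts):
        for j in tree.query_ball_point(p, 2 * R):
            if j <= i:
                continue                                 # handle each pair once
            q = pts[j]
            d = np.linalg.norm(q - p)
            if d == 0:
                continue                                 # skip duplicate points
            mid = (p + q) / 2.0
            h = np.sqrt(max(R * R - (d / 2.0) ** 2, 0.0))    # center offset from midpoint
            perp = np.array([-(q - p)[1], (q - p)[0]]) / d   # unit vector perpendicular to pq
            for center in (mid + h * perp, mid - h * perp):  # the one or two circles
                members = frozenset(tree.query_ball_point(center, R + 1e-12))
                subsets.add(members)
    # Keep only maximal subsets (no subset contained in another).
    return [s for s in subsets if not any(s < t for t in subsets)]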
In this problem r is a fixed positive integer. You are given N rectangles, all the same size, in the plane. The sides are either vertical or horizontal. We assume the intersection of all N rectangles has non-zero area. The problem is how to find N-r of these rectangles so as to maximize the area of their intersection. This problem arises in practical microscopy, when one repeatedly images a given biological specimen and the alignment changes slightly during the process for physical reasons (e.g. differential expansion of parts of the microscope and camera). I have expressed the problem for dimension d=2. There is a similar problem for each d>0. For d=1, an O(N log(N)) solution is obtained by sorting the left-hand endpoints of the intervals. But let's stick with d=2. If r=1, one can again solve the problem in time O(N log(N)) by sorting coordinates of the corners.
So, is the original problem solved by solving first the case (N,1) obtaining N-1 rectangles, then solving the case (N-1,1), getting N-2 rectangles, and so on, until we reduce to N-r rectangles? I would be interested to see an explicit counter-example to this optimistic attempted procedure. It would be even more interesting if the procedure works (proof please!), but that seems over-optimistic.
If r is fixed at some value r>1, and N is large, is this problem in one of the NP classes?
Thanks for any thoughts about this.
David
Since the intersection of axis-aligned rectangles is an axis-aligned rectangle, there are O(N^4) possible intersections (O(N) lefts, O(N) rights, O(N) tops, O(N) bottoms). The obvious O(N^5) algorithm is to try all of these, checking for each whether it's contained in at least N - r rectangles.
An improvement to O(N^3) is to try all O(N^2) intervals in the X dimension and run the 1D algorithm in the Y dimension on those rectangles that contain the given X-interval. (The rectangles need to be sorted only once.)
How large is N? I expect that fancy data structures might lead to an O(N^2 log N) algorithm, but it wouldn't be worth your time if a cubic algorithm suffices.
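For concreteness, a naive transcription of the O(N^5) brute force described above; rectangles are assumed to be (x1, y1, x2, y2) tuples with x1 < x2 and y1 < y2, and the function name is mine:

from itertools import product

def best_intersection(rects, r):
    n = len(rects)
    lefts   = {a for a, _, _, _ in rects}
    bottoms = {b for _, b, _, _ in rects}
    rights  = {c for _, _, c, _ in rects}
    tops    = {d for _, _, _, d in rects}
    best_area, best_rect = 0.0, None
    for x1, y1, x2, y2 in product(lefts, bottoms, rights, tops):
        if x2 <= x1 or y2 <= y1:
            continue
        # Count how many rectangles contain this candidate intersection.
        contained = sum(a <= x1 and b <= y1 and c >= x2 and d >= y2
                        for a, b, c, d in rects)
        if contained >= n - r and (x2 - x1) * (y2 - y1) > best_area:
            best_area, best_rect = (x2 - x1) * (y2 - y1), (x1, y1, x2, y2)
    return best_rect, best_area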
I think I have a counter-example. Let's say you have r := N-2, i.e. you want to find two rectangles with maximum overlap. Let's say you have two rectangles covering the same area (= maximum overlap). Those two will be the optimal result in the end.
Now we need to construct some more rectangles, such that at least one of those two get removed in a reduction step.
Let's say we have three rectangles which overlap a lot, but they are not optimal. They have a very small overlapping area with the other two rectangles.
Now if you want to optimize the area for four rectangles, you will remove one of the two optimal rectangles, right? Or maybe you don't HAVE to, but you're not sure which decision is optimal.
So, I think your reduction algorithm is not quite correct. At the moment I'm not sure if there is a good algorithm for this or which complexity class this belongs to, though. If I have time I'll think about it :)
Postscript. This is pretty defective, but may spark some ideas. It's especially defective where there are outliers in a quadrant that are near the X and Y axes - they will tend to reinforce each other, as if they were both at 45 degrees, pushing the solution away from that quadrant in a way that may not make sense.
If r is a lot smaller than N, and N is fairly large, consider this:
Find the average center.
Sort the rectangles into 2 sequences by (X - center.x) + (Y - center.y) and (X - center.x) - (Y - center.y), where X and Y are the coordinates of the center of each rectangle.
For any solution, all of the rejected rectangles will be members of up to 4 subsequences, each of which is a head or tail of one of the 2 sequences. Assuming N is a lot bigger than r, most of the time will be spent sorting the sequences - O(N log N).
To find the solution, first find the intersection given by removing the r rectangles at the head and tail of each sequence. Use this base intersection to eliminate consideration of the "core" set of rectangles that you know will be in the solution. This will reduce the intersection computations to just working with up to 4*r + 1 rectangles.
Each of the 4 sequence heads and tails should be associated with an array of r rectangles, each entry representing the intersection given by intersecting the "core" with the i innermost rectangles from the head or tail. This precomputation reduces the complexity of finding the solution from O(r^4) to O(r^3).
This is not perfect, but it should be close.
Defects with a small r will come from should-be-rejects that are at off angles, with alternatives that are slightly better but on one of the 2 axes. The maximum error is probably computable. If this is a concern, use a real area-of-non-intersection computation instead of the simple "X+Y" difference formula I used.
Here is an explicit counter-example (with N=4 and r=2) to the greedy algorithm proposed by the asker.
The maximum intersection between three of these rectangles is between the black, blue, and green rectangles. But, it's clear that the maximum intersection between any two of these three is smaller than intersection between the black and the red rectangles.
I now have an algorithm, pretty similar to Ed Staub's above, with the same time estimates. It's a bit different from Ed's, since it is valid for all r.
The counter-example by mhum to the greedy algorithm is neat. Take a look.
I'm still trying to get used to this site. Somehow an earlier answer by me was truncated to two sentences. Thanks to everyone for their contributions, particularly to mhum whose counter-example to the greedy algorithm is satisfying. I now have an answer to my own question. I believe it is as good as possible, but lower bounds on complexity are too difficult for me. My solution is similar to Ed Staub's above and gives the same complexity estimates, but works for any value of r>0.
One of my rectangles is determined by its lower-left corner. Let S be the set of lower-left corners. In time O(N log(N)) we sort S into Sx according to the sizes of the x-coordinates. We don't care about the order within Sx between two lower-left corners with the same x-coordinate. Similarly, the sorted sequence Sy is defined using the sizes of the y-coordinates. Now let u1, u2, u3 and u4 be non-negative integers with u1+u2+u3+u4=r. We compute what happens to the area when we remove various rectangles that we now name explicitly. We first remove the u1-sized head of Sx and the u2-sized tail of Sx. Let Syx be the result of removing these u1+u2 entries from Sy. We then remove the u3-sized head of Syx and the u4-sized tail of Syx. One can now prove that one of these possible choices of (u1,u2,u3,u4) gives the desired maximal area of intersection. (Email me if you want a pdf of the proof details.) The number of such choices is equal to the number of integer points in the regular tetrahedron in 4-d Euclidean space with vertices at the 4 points whose coordinate sum is r and for which 3 of the 4 coordinates are equal to 0. This is bounded by the volume of the tetrahedron, giving a complexity estimate of O(r^3).
So my algorithm has time complexity O(N log(N)) + O(r^3).
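A naive transcription of this enumeration, assuming N > r (not the optimised O(N log(N)) + O(r^3) version, since it re-sorts and rescans inside the loops); corners are the lower-left corners of the N identical W x H rectangles, and the function name is mine:

def max_intersection(corners, W, H, r):
    """corners: list of (x, y) lower-left corners of N identical W x H rectangles."""
    n = len(corners)
    idx_x = sorted(range(n), key=lambda i: corners[i][0])          # Sx
    best = 0.0
    for u1 in range(r + 1):
        for u2 in range(r - u1 + 1):
            # Drop the u1-sized head and u2-sized tail of Sx.
            survivors = idx_x[u1:n - u2]
            syx = sorted(survivors, key=lambda i: corners[i][1])   # Syx
            for u3 in range(r - u1 - u2 + 1):
                u4 = r - u1 - u2 - u3
                kept = syx[u3:len(syx) - u4]                       # exactly N - r rectangles
                xs = [corners[i][0] for i in kept]
                ys = [corners[i][1] for i in kept]
                area = (max(0.0, W - (max(xs) - min(xs)))
                        * max(0.0, H - (max(ys) - min(ys))))
                best = max(best, area)
    return best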
I believe this produces a perfect solution.
David's solution is easier to implement, and should be faster in most cases.
This relies on the assumption that for any solution, at least one of the rejects must be a member of the convex hull. Applying this recursively leads to:
Compute a convex hull.
Gather the set of all candidate solutions produced by:
{Remove a hull member, repair the hull} r times
(The hull doesn't really need to be repaired the last time.)
If h is the number of initial hull members, then the complexity is less than h^r, plus the cost of computing the initial hull. I am assuming that a hull algorithm is chosen such that the sorted data can be kept and reused in the hull repairs.
This is just a thought, but if N is very large, I would probably try a Monte-Carlo algorithm.
The idea would be to generate random points (say, uniformly in the convex hull of all rectangles), and score how each random point performs. If the random point is in N-r or more rectangles, then update the number of hits of each subset of N-r rectangles.
In the end, the N-r rectangle subset with the most random points in it is your answer.
This algorithm has many downsides, the most obvious one being that the result is random and thus not guaranteed to be correct. But like most Monte-Carlo algorithms, it scales well, and you should be able to use it in higher dimensions as well.