I've been studying Delaunay triangulation (not homework) and I thought about the following problem: given a set S of n points in the plane and a set T of triangles (its cardinality should be 2n - h - 2, where h is the number of hull points; that is n - 2 when every point lies on the convex hull), how can we determine whether the triangle set T forms a Delaunay triangulation DT(S)?
The first problem is that a Delaunay triangulation is not unique (four or more cocircular points admit several), so rebuilding it for the point set and comparing it to the given triangle set won't give us the answer. In addition, optimal Delaunay triangulation algorithms are rather hard to implement (although using a library like CGAL would be acceptable).
Suppose we know how to check whether the triangle set is a triangulation at all (not necessarily Delaunay). Then we can use the definition of a Delaunay triangulation: for every triangle t in the triangulation, no point of S lies strictly inside the circumcircle of t. This leads us to the following methods:
The trivial approach. Just iterate over T, compute each triangle's circumcircle, and iterate over S, checking whether any point lies inside it. However, this takes O(n^2) time, which is far from optimal.
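For concreteness, here is a minimal sketch of the trivial check in Python (all function names are mine, not from any library). It uses the standard in-circle determinant predicate: for a counter-clockwise triangle (a, b, c), a point d lies strictly inside the circumcircle exactly when the 3x3 determinant below is positive.

```python
def ccw(a, b, c):
    """Twice the signed area of triangle abc; positive iff counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def in_circumcircle(a, b, c, d):
    """True iff d lies strictly inside the circumcircle of triangle abc."""
    if ccw(a, b, c) < 0:        # orient the triangle counter-clockwise first
        b, c = c, b
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return ((ax * ax + ay * ay) * (bx * cy - by * cx)
            - (bx * bx + by * by) * (ax * cy - ay * cx)
            + (cx * cx + cy * cy) * (ax * by - ay * bx)) > 0

def is_delaunay_naive(points, triangles):
    """Brute-force empty-circumcircle check: O(n) triangles times O(n) points."""
    for i, j, k in triangles:
        a, b, c = points[i], points[j], points[k]
        for m, p in enumerate(points):
            if m not in (i, j, k) and in_circumcircle(a, b, c, p):
                return False
    return True
```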
The enhanced approach. Again, iterate over T and compute each circumcircle. A point s lies inside the circumcircle exactly when its distance to the circumcircle's center is less than the radius, so a nearest-neighbour search structure built over S speeds up the check. For instance, a simple k-d tree gives an O(n log n) algorithm on average and O(n sqrt(n)) in the worst case.
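A sketch of this version, assuming SciPy is available: for each triangle, compute the circumcenter and circumradius, then ask the k-d tree for all points within that radius. The triangle passes exactly when the query returns nothing but its own three vertices (the radius is shrunk by a small epsilon so that points lying on the circle are tolerated).

```python
import numpy as np
from scipy.spatial import cKDTree

def circumcircle(a, b, c):
    """Circumcenter and circumradius of triangle abc (assumed non-degenerate)."""
    d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1]) + (b[0]**2 + b[1]**2) * (c[1] - a[1])
          + (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
    uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0]) + (b[0]**2 + b[1]**2) * (a[0] - c[0])
          + (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
    return (ux, uy), np.hypot(a[0] - ux, a[1] - uy)

def is_delaunay_kdtree(points, triangles, eps=1e-9):
    pts = np.asarray(points, dtype=float)
    tree = cKDTree(pts)
    for i, j, k in triangles:
        center, r = circumcircle(pts[i], pts[j], pts[k])
        # any point strictly inside the circumcircle violates the condition
        for m in tree.query_ball_point(center, r * (1.0 - eps)):
            if m not in (i, j, k):
                return False
    return True
```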
Does anyone have an idea of something simpler?
Now let's return to the problem of checking whether T is a triangulation at all. Trivial prerequisites, such as the vertex set of the triangles being equal to S, can be checked in O(n log n) time (a sketch follows below). What remains is to check that every two triangles in T either intersect in a common vertex or edge, or not at all.
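Here is a quick sketch of those prerequisites (my own code; note these are necessary conditions only): the triangles must use exactly the points of S, and no edge may be shared by more than two triangles.

```python
from collections import Counter

def basic_checks(points, triangles):
    """Necessary (not sufficient) conditions for T to be a triangulation of S."""
    used = {v for t in triangles for v in t}
    if used != set(range(len(points))):       # vertex sets must coincide
        return False
    edge_count = Counter()
    for i, j, k in triangles:
        for e in ((i, j), (j, k), (k, i)):
            edge_count[tuple(sorted(e))] += 1
    # in a planar triangulation no edge belongs to three or more triangles
    return all(c <= 2 for c in edge_count.values())
```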
We could check this pairwise condition by iterating over T twice and testing every pair of triangles for intersection, but that is an O(n^2) algorithm.
Let's think about what «triangles t1 and t2 intersect» means. They intersect if their edges intersect, or if one triangle lies completely inside the other. The problem of intersecting all the edges can be solved in O(n log n) time using the Bentley-Ottmann algorithm (the worst case is O((n + k) log n), where k is the number of intersections, but we can stop the algorithm as soon as the first intersection is found). This still misses the case of one triangle completely containing another, but I believe we can modify the Bentley-Ottmann algorithm to maintain the triangles crossing the sweep line instead of the segments, which, as I said, yields an O(n log n) algorithm. However, it is really complex to implement.
I've also thought about an incremental algorithm: maintain a structure of non-intersecting (or only edge-adjacent) triangles, something very similar to a k-d tree. Then try to add the next triangle t: first check whether any of t's vertices already lies inside one of the stored triangles; if so, we have found an intersection. Otherwise, add t to the structure. However, if we want O(log n) or O(sqrt(n)) time for the search and insert queries, we have to keep the structure balanced, which is too hard even for k-d trees.
So, does anyone know any simple solution to this problem?
There is the Delaunay lemma: "If every edge in a triangulation K of S is locally Delaunay, then K is the Delaunay triangulation of S." Maybe this could help in the situation from paragraph 1 of your question, where you are certain that K is some triangulation of S. I don't know the computational complexity of this approach, though.
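On the complexity: given edge-to-triangle adjacency, each local test is a single in-circle predicate, so checking the lemma over all O(n) edges takes linear time. A sketch of the per-edge test, reusing in_circumcircle from the question above (the adjacency format is my assumption):

```python
def edge_is_locally_delaunay(points, tri1, tri2, edge):
    """tri1 and tri2 are triangles (vertex-index triples) sharing `edge`.
    The edge is locally Delaunay iff the vertex of tri2 opposite the edge
    is not strictly inside the circumcircle of tri1."""
    (opposite,) = set(tri2) - set(edge)
    a, b, c = points[tri1[0]], points[tri1[1]], points[tri1[2]]
    return not in_circumcircle(a, b, c, points[opposite])
```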
I am trying to implement the kernel-finding algorithm proposed by D.T. Lee and F.P. Preparata, and I'm having trouble understanding why this algorithm runs in O(n) rather than O(n^2), where n is the number of vertices of the polygon.
In very few words, the algorithm involves removing from the current kernel region K the part that is not "visible" from a vertex v_i, by intersecting K with one of the half-planes bounded by the line through v_i and v_{i+1}.
In order to compute the intersection, the edges of K must be traversed. The number of edges in K is less than n, but it is not constant, so a single intersection can take O(n). This is confirmed by the authors:
"the total number of vertices visited by the algorithm in handling case (1.1), is bounded above by 3n, Le. it is O(n)"
In the case where the kernel of the polygon exists, such an intersection has to be computed for each vertex of the polygon, so I would expect the total time complexity to be O(n^2).
The paper describes an additional test, but as far as I understand it only serves to terminate the algorithm early when the polygon has no kernel.
I'm missing something for sure but I'm not able to determine what.
From a rough reading of the article, I understand that the kernel is built incrementally, one edge at a time. When an edge causes part of the kernel to be removed, the intersections of the supporting half-plane with the kernel are found by a linear search along the kernel, starting from an F or L vertex. So I imagine that:
the total cost of the linear searches is bounded by O(n), because when a search visits several vertices, those vertices are cut off from the kernel and are never visited again, making the future steps correspondingly cheaper;
after a removal, the F and L vertices can be quickly updated.
In other words, the update of the kernel implied by a new edge is done in amortized constant time.
Let P be a 3D convex polyhedron with n vertices.
1. Give an algorithm that takes an arbitrary query point q as input and decides in O(n) time whether q is inside or outside the convex polyhedron.
2. Can I do some preprocessing to bring the query time down to O(log n)?
For (1): if you have a convex polyhedron with n vertices, then it also has O(n) faces. One could triangulate each face and still have O(n) triangles in total. Now take the query point q and check which side of each triangle's supporting plane q lies on. This check takes O(1) per triangle, thus O(n) for all triangles; q is inside exactly when it lies on the inner side of every one of them.
EDIT: more directly, the O(n) faces define O(n) supporting planes. Just check that q lies on the inner side of all of these planes.
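A minimal sketch of that test (my code, under the assumption that each face is given as a vertex-index triangle oriented counter-clockwise when viewed from outside): q is inside exactly when it lies on the inner side of every supporting plane.

```python
import numpy as np

def point_in_convex_polyhedron(vertices, faces, q, eps=1e-12):
    """vertices: (n, 3) array; faces: outward-oriented index triples (i, j, k)."""
    v = np.asarray(vertices, dtype=float)
    q = np.asarray(q, dtype=float)
    for i, j, k in faces:
        normal = np.cross(v[j] - v[i], v[k] - v[i])   # outward face normal
        if np.dot(normal, q - v[i]) > eps:            # q is strictly outside this plane
            return False
    return True
```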
For (2) (I did not find a source for this, but it seems reasonable): project the polyhedron P onto a plane as P'. P' can be seen as two separate planar graphs, a graph U' for the upper part of the polyhedron and a graph L' for the lower part. In total there are O(n) faces in L' and U'. Now preprocess L' and U' with Kirkpatrick's optimal planar subdivision algorithm. This enables O(log n) point-location queries in a PSLG (planar straight-line graph).
Now project the query point q onto the same plane with the same projection, and look up, in O(log n) time, the face of L' and of U' that it lies in. In 3D, each such face lies in exactly one plane, so we check which side of these planes q lies on in 3D and know whether q is inside the polyhedron.
Another approach for (2) is a spatial subdivision of the polyhedron into slabs: pyramids with their apex at the polyhedron's centroid and the polyhedron's faces as their bases. The number of such slabs is O(n). You can build a binary space partitioning (BSP) tree out of these slabs (possibly divided further into sub-slabs); if it is balanced, point location will work in O(log n) time.
Of course, this only makes sense if you need to call the point-location function many times, because the preprocessing step here takes O(n) time (or more).
Suppose we have two sets of points in the plane, say A and B, both of size O(n). Can we find the farthest pair of points, one from A and one from B, in O(n) time?
No, you cannot compute the farthest point for each point in O(n). The best you can get is O(n log n) with a 2-d tree, using a technique similar to the one for finding a closest point.
Read a more detailed answer here where I show a couple of other approaches to solve a similar problem.
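Not the 2-d-tree method from this answer, but a simple alternative sketch: the farthest A-B pair must consist of a hull vertex of A and a hull vertex of B (distance to a fixed point is a convex function, so it is maximised at an extreme point), which usually shrinks the brute-force search dramatically.

```python
import numpy as np
from scipy.spatial import ConvexHull

def bichromatic_farthest_pair(A, B):
    """Farthest pair (a, b) with a in A and b in B, brute-forced over hull vertices."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    ha = A[ConvexHull(A).vertices]            # hull vertices of A
    hb = B[ConvexHull(B).vertices]            # hull vertices of B
    d2 = ((ha[:, None, :] - hb[None, :, :]) ** 2).sum(-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    return ha[i], hb[j], float(np.sqrt(d2[i, j]))
```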
Given a set of points in D-dimensional space, what is the optimal algorithm for finding the maximal D-simplex all of whose vertices belong to the set? Algebraically it means we have to find a subset of D + 1 points such that the determinant of the D × D matrix, whose rows are the coordinate differences between each of the first D points and the (D + 1)-st point, has the greatest possible absolute value.
I am sure that all D + 1 required points are vertices of the convex hull of the given point set, but I need an algorithm that does not use any convex hull computation, because this simplex is itself required as the starting polytope for such convex hull algorithms.
If it is not possible to obtain the simplex in less than exponential time, then what algorithm offers an adjustable trade-off between running time and approximation quality?
I can't think of an exact solution, but you could probably get a reasonable approximation with an iterative approach. Note that I'm assuming N is larger than D + 1 here; if not, then I have misunderstood the problem.
First, use a greedy algorithm to construct an initial simplex (sketched below): choose the first two vertices to be the two most distant points, the next one to maximise your size measure in two dimensions, the next to maximise it in three, and so on. This has polynomial complexity in N and D.
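A sketch of this greedy construction (my own code, taking volume as the size measure): each new vertex is the point farthest from the affine span of the vertices chosen so far, which maximises the added height and hence the volume of the growing simplex.

```python
import numpy as np

def greedy_simplex(points):
    """points: (N, D) array with N >= D + 1; returns up to D + 1 vertex indices."""
    P = np.asarray(points, dtype=float)
    n, dim = P.shape
    # the two most distant points, by brute force: O(N^2 * D)
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    chosen = [int(i), int(j)]
    basis = [(P[j] - P[i]) / np.linalg.norm(P[j] - P[i])]
    while len(chosen) < dim + 1:
        # residual of every point after projecting out the current span
        R = P - P[chosen[0]]
        for b in basis:
            R -= np.outer(R @ b, b)
        k = int(np.argmax((R ** 2).sum(-1)))  # farthest from the affine span
        if np.linalg.norm(R[k]) == 0.0:
            break                             # degenerate input: points span < D dimensions
        chosen.append(k)
        basis.append(R[k] / np.linalg.norm(R[k]))
    return chosen
```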
Once you have the initial simplex, you can switch to iterative improvement. For example, for a given vertex in the simplex, you can iterate through the points not in it, measuring the change in the size measure that would result if you swapped them. At the end, you swap the vertex with the point, if any, that gave the greatest increase. Doing this once for each vertex in the simplex is again polynomial in N and D.
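One improvement pass might look like this (again my code, with volume as the size measure); repeating it until no swap helps, or for a fixed number of rounds, gives the trade-off described next.

```python
import numpy as np

def simplex_volume(P, idx):
    """Unsigned volume of the simplex on P[idx], up to the constant factor 1/D!."""
    M = P[idx[:-1]] - P[idx[-1]]
    return abs(np.linalg.det(M))

def improve_pass(P, chosen):
    """For each vertex, swap in the outside point that increases the volume most."""
    chosen = list(chosen)
    best = simplex_volume(P, chosen)
    for pos in range(len(chosen)):
        keep = chosen[pos]
        for cand in range(len(P)):
            if cand in chosen:
                continue
            chosen[pos] = cand
            vol = simplex_volume(P, chosen)
            if vol > best:
                best, keep = vol, cand
        chosen[pos] = keep                    # commit the best swap, if any
    return chosen, best
```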
To trade off between running time and how large the resulting simplex is, simply choose how many improvement passes you're willing to do.
Now, this is a relatively crude local optimisation algorithm, so it cannot guarantee finding the maximal simplex. However, such approaches have been found to give reasonably good approximations to the solutions of problems like the travelling salesman problem, in the sense that, whilst not optimal, the result is usually not much worse than the actual optimum.
Quickhull does not require a maximal simplex; that would be overkill (the problem is too hard, and a maximal simplex will not guarantee that the subsequent steps are any quicker).
I suggest selecting D + 1 independent directions and taking the farthest point along each direction. This gives you a good starting simplex in O(N·D²) time. (The D² factor is because there are D + 1 directions and evaluating the extent along one direction takes D operations.)
Beware, though, that it can be degenerate (several of the chosen vertices may coincide); a sketch follows.
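A sketch of this pick (my code), using the D coordinate axes plus the normalised all-ones vector as the D + 1 directions:

```python
import numpy as np

def starting_simplex(points):
    """Farthest point along each of D + 1 fixed directions; O(N * D^2) overall."""
    P = np.asarray(points, dtype=float)
    n, dim = P.shape
    dirs = np.vstack([np.eye(dim), np.ones((1, dim)) / np.sqrt(dim)])  # D + 1 directions
    idx = np.argmax(P @ dirs.T, axis=0)       # index of the extreme point per direction
    return list(dict.fromkeys(int(i) for i in idx))  # duplicates = the degenerate case above
```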
My own approximation of the solution is as follows: take one point, find the point farthest from it, and reject the first point (so one point is selected). Then select D - 1 further points, each time choosing the point that maximises the unoriented (k - 1)-dimensional hypervolume (the formula for S) of the k points selected so far. Finally, find the (D + 1)-st point so that the oriented D-dimensional hypervolume (the formula for V) of the resulting simplex is maximal in absolute value. The total complexity, to my mind, is about O(D · N · D^3) (D + 1 simplex vertices, up to N remaining candidate points, and D^3 as an upper estimate of the D × M, M ∈ {1, 2, ..., D}, matrix multiplication cost). The approach also lets us detect when the points span a lower-dimensional subspace, in which case it yields the dimension of that subspace and a non-normalized, non-orthogonal basis of it. For a large number of points and high dimensionality, the complexity of the proposed algorithm does not dominate the complexity of, say, the quickhull algorithm.
The implementation's repository.
I need to find the two points with the largest distance between them.
The easiest method is to compute the distance between every pair of points, but that solution has quadratic complexity.
So I'm looking for a faster solution.
How about:
1. Determine the convex hull of the point set.
2. Find the longest distance between points on the hull.
That should allow you to ignore all points not on the hull when checking for distance.
To elaborate on rossom's answer:
1. Find the convex hull of the points, which can be done in O(n log n) time with an algorithm like Graham's scan, or in O(n log h) time with other algorithms, which I assume are harder to implement.
2. Start at a point, say A, and loop through the other hull points to find the one farthest from it, say B.
3. Advance A to the next hull point and advance B until it is farthest from A again. If this distance is larger than the one found in part 2, store it as the largest. Repeat until you have looped through all points A on the hull.
Parts 2 and 3 take amortized O(n) time in total, so the overall algorithm takes O(n log n) or O(n log h) time, depending on how much effort you can be bothered to spend on implementing the convex hull.
This is great and all, but if you only have a few thousand points (as you said), O(n^2) should work fine (unless you're executing it many times).
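For completeness, here is a self-contained sketch of steps 1 to 3 (my code): Andrew's monotone chain for the hull, then rotating calipers for the farthest pair.

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b; positive iff the turn is left."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: CCW hull without collinear vertices, O(n log n)."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def diameter(points):
    """Largest distance between any two points, via rotating calipers."""
    h = convex_hull(points)
    m = len(h)
    d2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    if m <= 2:
        return 0.0 if m < 2 else d2(h[0], h[1]) ** 0.5
    best, j = 0, 1
    for i in range(m):
        ni = (i + 1) % m
        # advance j while the next hull vertex is farther from edge (i, ni)
        while cross(h[i], h[ni], h[(j + 1) % m]) > cross(h[i], h[ni], h[j]):
            j = (j + 1) % m
        best = max(best, d2(h[i], h[j]), d2(h[ni], h[j]))
    return best ** 0.5
```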