I am trying to understand a particular approach in the DeWall algorithm for 2D/3D Delaunay triangulation/tetrahedralization (DT); I am especially interested in the 3D case. Where other divide-and-conquer algorithms create partial DTs and merge them together, the DeWall algorithm builds the DT directly along hierarchical cuts.
However, I am stuck at the very beginning: constructing the first simplex. In Cignoni 1997, "DeWall: A Fast Divide & Conquer Delaunay Triangulation Algorithm in Eᵈ", the authors write:
MakeFirstSimplex selects the point p₁ ∈ P nearest to the plane α.
It then selects a second point p₂ such that p₂ is the nearest point to p₁ on the other side of α.
Then, it searches the point p₃ such that the circum-circle around the 1-face (p₁, p₂) and the point p₃ has the minimum radius;
(p₁, p₂, p₃) is therefore a 2-face of Σ.
The process continues until the required d-simplex is built.
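For reference, here is how I read and implemented that procedure, as a minimal 2D sketch (Python). The cut plane α is taken as the vertical line x = alpha_x, and all helper names are my own:

```python
import math

def circumradius(a, b, c):
    """Circumradius of triangle (a, b, c); infinite for (nearly) collinear points."""
    area2 = abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))  # 2 * area
    if area2 < 1e-12:
        return math.inf
    return math.dist(b, c) * math.dist(a, c) * math.dist(a, b) / (2.0 * area2)  # abc / (4 * area)

def make_first_simplex(points, alpha_x):
    # p1: point nearest to the cut line alpha
    p1 = min(points, key=lambda p: abs(p[0] - alpha_x))
    # p2: point nearest to p1 on the strictly opposite side of alpha
    other_side = [p for p in points if (p[0] - alpha_x) * (p1[0] - alpha_x) < 0]
    p2 = min(other_side, key=lambda p: math.dist(p, p1))
    # p3: point giving the minimum circumradius together with the edge (p1, p2)
    rest = [p for p in points if p not in (p1, p2)]
    p3 = min(rest, key=lambda p: circumradius(p1, p2, p))
    return p1, p2, p3
```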
However, if I understand correctly, this procedure should always yield a simplex that belongs to the DT. Yet when I test it with different point sets, it sometimes picks an edge that does not belong to the DT.
I've tested this by connecting (green) each point (orange) to its left and right nearest neighbour. A DT made with the Triangle program (gray) and the local left-right splitting plane (black) are also shown.
In the zoomed-in picture above, the green edges are picked by that procedure, but one of them is not part of the gray DT. So in a similar situation, if the plane α were slightly to the right of the topmost point and α were a cut plane of the DeWall algorithm, this would lead to a wrong simplex.
It is known that the nearest-neighbour graph is always part of a DT, but that does not help here, since it is not guaranteed that a cut plane α crosses one of its edges.
Is there an explanation why this should work in the first place, some trick, or an alternative construction to obtain the very first simplex at that constrained position?
Related
I am working on a pathfinding system for my game that uses A*, and I need to position the nodes so that they keep a minimum distance from other points.
I wonder if there is an algorithm that would let me find the best-fitting point on a plane or a line (between neighboring points), as close as possible to the specified position while maintaining the minimum distance to the neighbors.
Basically, I need an algorithm that, given the input (in pseudocode) min distance = 2, original position = (1, 1) and a set of existing points, would do this:
In the example the shape is a triangle and the point can be calculated using the Pythagorean theorem, but I need it to work for any shape.
Your problem is not an easy one. If you draw the "forbidden areas", they form a complex geometry: the union of disks.
Then there are two cases:
if the new point belongs to the allowed area, you are done;
otherwise you need to find the nearest allowed point.
It is easy to see if a point is allowed, by computing all distances. But finding the nearest allowed point seems more challenging. (By the way, this point could be very far.)
If the target point lies inside a forbidden disk, the nearest allowed location is either the orthogonal projection of the target onto one of the circles or an intersection point of two circles. Compute all these candidate points, check which of them are allowed, and keep the nearest one.
In red, the allowed candidates. In black the forbidden candidates.
For N points, this is an O(N³) process. This can probably be reduced by a factor N by means of computational geometry techniques, but at the price of high complexity.
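A rough sketch of this candidate enumeration (Python; the function name and the min_dist parameter are illustrative, not from any particular library):

```python
import math

def nearest_allowed(target, existing, min_dist):
    """Nearest point to `target` that is at least min_dist from every existing point."""
    tx, ty = target

    def allowed(p, eps=1e-9):
        return all(math.dist(p, q) >= min_dist - eps for q in existing)

    if allowed(target):
        return target

    candidates = []
    # Projection of the target onto each forbidden circle.
    for cx, cy in existing:
        d = math.dist(target, (cx, cy))
        if d > 0:
            candidates.append((cx + (tx - cx) * min_dist / d,
                               cy + (ty - cy) * min_dist / d))
    # Intersection points of each pair of forbidden circles (equal radii min_dist).
    for i, (ax, ay) in enumerate(existing):
        for bx, by in existing[i + 1:]:
            d = math.dist((ax, ay), (bx, by))
            if 0 < d < 2 * min_dist:
                mx, my = (ax + bx) / 2, (ay + by) / 2        # midpoint of the centers
                h = math.sqrt(min_dist ** 2 - (d / 2) ** 2)  # distance midpoint -> crossing
                ux, uy = (by - ay) / d, -(bx - ax) / d       # unit normal to the center line
                candidates.append((mx + h * ux, my + h * uy))
                candidates.append((mx - h * ux, my - h * uy))

    candidates = [c for c in candidates if allowed(c)]
    return min(candidates, key=lambda c: math.dist(c, target)) if candidates else None
```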
I have a set of distinct 2D vectors (over real numbers), pointing in different directions. We are allowed to pick a pair of vectors and construct their linear combination, such that the coefficients are positive and their sum is 1.
In simple words we are allowed to take a "weighted average" of any two vectors.
My goal is, for an arbitrary direction, to pick the pair of vectors whose "weighted average" points in that direction and has maximum magnitude.
Speaking algebraically: given vectors a and b and a direction vector n, we are interested in maximizing this value:
(a × b) / ((a − b) × n)
i.e. picking the a and b which maximize it.
To be concrete the application of this problem is for sailing boats. For every apparent wind direction the boat will have a velocity given by a polar diagram. Here's an example of such a diagram:
(Each line in this diagram corresponds to a specific wind magnitude). Note the "impossible" front sector of about 30 degrees in each direction.
In some directions the velocity is high, in others it is low, and in some directions it's impossible to sail directly (for instance, in the direction strictly opposite to the wind).
If we need to advance in a direction in which we can't sail directly (or the velocity isn't optimal) - it's possible to advance in zigzags. This is called tacking.
Now, my goal is to recalculate a new diagram which denotes the average advance velocity in any direction, either directly or indirectly. For instance for the above diagram the corrected diagram would be this:
Note that there are no more "impossible" directions. For some directions the diagram resembles the original one, where it's best to advance directly, and no maneuver is required. For others - it shows the maximum average advance velocity in this direction assuming the most optimal maneuver is periodically performed.
What would be the most efficient algorithm to calculate this? Assume the diagram is given as a discrete set of azimuth-velocity pairs, from which we can calculate the vectors.
So far I just check all the vector pairs and select the best. There are cut-off criteria, such as picking only vectors with a positive projection on the advance direction and with opposite perpendicular projections, but the complexity is still O(N²).
I wonder if there's a more efficient algorithm.
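For reference, the brute-force check currently looks roughly like this (a sketch in Python; n is a unit vector in the desired advance direction):

```python
def cross(u, v):
    """2D cross product u x v."""
    return u[0] * v[1] - u[1] * v[0]

def best_pair(vectors, n):
    """Best pair (a, b) and the speed of their mix along unit direction n."""
    best_speed, best = 0.0, None
    for i, a in enumerate(vectors):
        for b in vectors[i + 1:]:
            denom = cross((a[0] - b[0], a[1] - b[1]), n)   # (a - b) x n
            if abs(denom) < 1e-12:
                continue                     # a - b parallel to n: no unique mix
            t = cross(n, b) / denom          # weight on a so that t*a + (1-t)*b || n
            if not (0.0 <= t <= 1.0):
                continue                     # not a valid convex combination
            speed = cross(a, b) / denom      # (a x b) / ((a - b) x n)
            if speed > best_speed:
                best_speed, best = speed, (a, b, t)
    return best_speed, best
```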
EDIT
Many thanks to @mcdowella for both the computer-science and the sailor answers!
I too thought in terms of a convex polygon and figured out that it's only worth probing vectors on that hull (i.e. if you take a superposition of two vectors on the hull and try to replace one of them with a vector that isn't on the hull, the result gets worse, since the new vector's projection on the needed direction is smaller than that of either source vector).
However I didn't realize that any "weighted average" of two vectors is actually a point on the straight line segment connecting those vectors, hence the final diagram is indeed this convex hull! And, as we can see, this also agrees with what I calculated with the "brute-force" algorithm.
First, the computer-science answer
A tacking strategy gives you the convex combination of the vectors from the legs that make up the tacks.
So consider the outline made by just one contour in your diagram. The set of all possible best speeds and directions is the convex polygon formed by taking all convex combinations of the vectors to the contour. So what you want to do is form the convex hull of your contour (https://en.wikipedia.org/wiki/Convex_hull). To find out how to go fast in any particular direction, intersect that vector with the convex hull, and use tacks with legs that correspond to the corners on either side of the edge of the convex hull that you intersect with.
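A minimal sketch of that approach (Python; polar points given as (x, y) tuples, hull built with Andrew's monotone chain, and the edge lookup reuses the (a × b) / ((a − b) × n) value from the question):

```python
def cross(u, v):
    """2D cross product u x v."""
    return u[0] * v[1] - u[1] * v[0]

def convex_hull(points):
    """Andrew's monotone chain; returns the hull counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def build(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(
                    (chain[-1][0] - chain[-2][0], chain[-1][1] - chain[-2][1]),
                    (p[0] - chain[-2][0], p[1] - chain[-2][1])) <= 0:
                chain.pop()
            chain.append(p)
        return chain

    lower, upper = build(pts), build(reversed(pts))
    return lower[:-1] + upper[:-1]

def best_speed(hull, n):
    """Best average speed along unit direction n, mixing the two hull vertices
    of the edge that the direction's ray crosses."""
    best = 0.0
    for a, b in zip(hull, hull[1:] + hull[:1]):
        denom = cross((a[0] - b[0], a[1] - b[1]), n)       # (a - b) x n
        if abs(denom) < 1e-12:
            continue
        t = cross(n, b) / denom                            # weight on leg a
        if 0.0 <= t <= 1.0:
            best = max(best, cross(a, b) / denom)          # speed along n
    return best
```

For example, best_speed(convex_hull(polar_points), (0.0, 1.0)) would give the best average speed straight up the diagram.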
Looking at your diagram, the contour is concave straight upwind and straight downwind, which is what you would expect. However, there is also another concave section, somewhere between 4 and 5 o'clock and symmetrically between 7 and 8 o'clock, which appears as a straight line in your corrected diagram; so I guess there is a third direction to tack in, using two reaches on the same side of the wind, which I don't recognise from traditional sailing.
Now, the ex-laser sailor answer
At least for going straight upwind or downwind, the obvious guess is to tack so that each leg has the same length and the same bearing to the wind. If the polar diagram is symmetric about the upwind-downwind axis, this is correct. Suppose upwind is the Y axis and the possible legs are (A, B), (-A, B), (a, b) and (-a, b). Symmetrical tacking gives (A, B)/2 + (-A, B)/2 = (0, B), and the other symmetrical tack gives (0, b). Asymmetrical tacking gives (-A, B)·a/(a+A) + (a, b)·A/(a+A) = (0, (a/(a+A))B + (A/(a+A))b), which, if b ≠ B, lies strictly between b and B and so is not as good as whichever of b or B is best.
For any direction which lies between the port and starboard tacks that you would take to work your way upwind, the obvious strategy is to keep those leg directions and change only the leg lengths, so that the average vector travelled points in the required direction. Is this the best strategy? If it were not, the better strategy would make progress upwind faster than the port and starboard tacks you would take to work your way upwind, which I think is a contradiction. So for any direction which lies between the upwind port and starboard tacks, I think the best strategy is indeed to make those tacks but alter the leg lengths to go in the required direction. The same should apply to tacking downwind, if you have a boat that makes that a good idea.
I'm working with a really slow renderer, and I need to approximate polygons so that they look almost the same when confined to a screen area containing very few pixels. That is, I need an algorithm to go through a polygon and remove/move vertices until the resulting polygon has a good combination of shape preservation and economy of vertex usage.
I don't know if there's a formal name for this kind of problem, but if anyone knows what it is, it would help me get started with my research.
My untested plan is to remove the vertices that change the polygon area the least, and protect the vertices that touch the bounding box from removal, until the difference in area from the original polygon to the proposed approximate one exceeds a tolerance I specify.
This would all be done only once, not in real time.
Any other ideas?
Thanks!
You're thinking about the problem in a slightly off way. If your goal is to reduce the number of vertices with a minimum of distortion, you should be defining your distortion in terms of those same vertices, which define the shape. There's a very simple solution here, which I believe would solve your problem:
Calculate distance between adjacent vertices
Choose a tolerance between vertices, below which the vertices are resolved into a single vertex
Replace all pairs of vertices with distances lower than your cutoff with a single vertex halfway between the two.
Repeat until no vertices are removed.
Since your area is ultimately decided by the vertex placement, this method preserves shape and minimizes shape distortion. The one drawback is that distance between vertices might be slightly less intuitive than polygon area, but the two are proportional. If you really wish, you could run through the change in area that would result from vertex removal, but that's a lot more work for questionable benefit imo.
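A minimal sketch of those steps (Python; the tolerance is whatever you pick in step 2, and the polygon is a list of (x, y) tuples):

```python
import math

def merge_close_vertices(polygon, tol):
    """Repeatedly fuse adjacent vertices closer than tol into their midpoint."""
    pts = list(polygon)
    while True:
        out, i, n = [], 0, len(pts)
        while i < n:
            a, b = pts[i], pts[(i + 1) % n]
            # Fuse a with its successor when they are closer than the tolerance.
            # (The wrap-around pair i == n - 1 is left alone to keep this short.)
            if i + 1 < n and math.dist(a, b) < tol:
                out.append(((a[0] + b[0]) / 2, (a[1] + b[1]) / 2))
                i += 2
            else:
                out.append(a)
                i += 1
        if len(out) == len(pts) or len(out) < 3:
            return pts          # nothing merged, or we would drop below a triangle
        pts = out
```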
As mentioned by Angus, if you want a direct solution for the change in area, it's not actually that difficult. I was originally going to leave this as an exercise for the reader, but it is entirely possible to solve exactly, though you need to include the vertices on either side.
1. Assume you're looking at a window of vertices [A, B, C, D], connected in that order. In this example we're determining the "cost" of combining B and C.
2. Calculate the angular offset from collinearity along A → B → C; basically you just want to see how far from collinear the points are. This is |sin(|arctan(B − A)| − |arctan(C − A)|)|, where the pipes denote absolute value, arctan(B − A) means the angle of the vector from A to B, and the outer difference is a difference of angles.
3. Calculate the total distance over which the angle change is effectively applied; this is just the Euclidean distance from A to B times the Euclidean distance from B to C.
4. Multiply the terms from steps 2 and 3 to get your first term.
5. To get your second term, repeat steps 2-4 replacing A with D, B with C, and C with B (i.e. going in the opposite direction).
6. Calculate the geometric mean of the two terms obtained.
The number that results from step 6 gives the full picture, up to a couple of constants.
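A literal transcription of those steps might look like this (Python; reading "arctan of a point difference" as the angle of that vector is my interpretation):

```python
import math

def angle_of(u, v):
    """Angle (radians) of the vector from point u to point v."""
    return math.atan2(v[1] - u[1], v[0] - u[0])

def merge_cost(A, B, C, D):
    """'Cost' of combining B and C in the window A-B-C-D, per steps 2-6 above."""
    # Steps 2-4: offset from collinearity seen from A, scaled by the two distances.
    ang1 = abs(math.sin(abs(angle_of(A, B)) - abs(angle_of(A, C))))
    term1 = ang1 * math.dist(A, B) * math.dist(B, C)
    # Step 5: the same thing from the other side (A -> D, B -> C, C -> B).
    ang2 = abs(math.sin(abs(angle_of(D, C)) - abs(angle_of(D, B))))
    term2 = ang2 * math.dist(D, C) * math.dist(C, B)
    # Step 6: geometric mean of the two terms.
    return math.sqrt(term1 * term2)
```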
I tried my own plan first: protect the vertices touching the bounding box, then remove the rest in the order that changes the resulting area the least, until you can't find a vertex to remove that keeps the new polygon's area within X% of the original. This is the result with X = 5%:
When the user zooms out really far these shapes fit the bill well enough for me. I haven't tried any of the other suggestions. The savings are quite astonishing, sometimes from 80-100 vertices down to 4 or 5.
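In rough terms, the plan boils down to something like this (a sketch in Python, not the exact code; tol_frac is the X above as a fraction, and the polygon is a list of (x, y) tuples):

```python
def polygon_area(pts):
    """Shoelace formula (absolute value)."""
    n = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
                   for i in range(n))) / 2.0

def tri_area(a, b, c):
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])) / 2.0

def simplify_by_area(polygon, tol_frac=0.05):
    pts = list(polygon)
    original = polygon_area(pts)
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    protected = {p for p in pts
                 if p[0] in (min(xs), max(xs)) or p[1] in (min(ys), max(ys))}
    while len(pts) > 3:
        # Pick the unprotected vertex whose local triangle is smallest.
        best_i, best_cost = None, None
        for i, p in enumerate(pts):
            if p in protected:
                continue
            cost = tri_area(pts[i - 1], p, pts[(i + 1) % len(pts)])
            if best_cost is None or cost < best_cost:
                best_i, best_cost = i, cost
        if best_i is None:
            break
        candidate = pts[:best_i] + pts[best_i + 1:]
        # Stop as soon as the next removal would push the area past the tolerance.
        if abs(polygon_area(candidate) - original) > tol_frac * original:
            break
        pts = candidate
    return pts
```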
I'm looking for a fairly easy algorithm (I know polygon union is NOT an easy operation, but maybe someone could point me in the right direction with a relatively easy one) for merging two intersecting polygons. The polygons may be concave but have no holes, and the output polygon should not have holes either. Polygons are represented in counter-clockwise order. What I mean is shown in the picture: as you can see, even if the union of the polygons contains a hole, I don't need it in the output. The input polygons are guaranteed to have no holes. I think that without holes it should be easier to do, but I still don't have an idea.
1. Remove all the vertices of each polygon which lie inside the other polygon (see http://paulbourke.net/geometry/insidepoly/; a point-in-polygon sketch follows this list).
2. Pick a starting point that is guaranteed to be in the union polygon (one of the extreme points would work).
3. Trace through the polygon's edges in counter-clockwise fashion. These are points of your union. Trace until you hit an intersection (note that an edge may intersect with more than one edge of the other polygon).
4. Find the first intersection (if there is more than one). This is a point of your union.
5. Go back to step 3 with the other polygon. The next point should be the point that makes the greatest angle with the previous edge.
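For step 1, a standard ray-casting point-in-polygon test along the lines of the linked Paul Bourke page might look like this (Python sketch):

```python
def point_in_polygon(p, polygon):
    """True if point p lies inside the polygon (a list of (x, y) vertices)."""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count how many edges a horizontal ray to the right of p crosses.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```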
You can proceed as below:
First, add to your set of points all the points of intersection of your polygons.
Then I would proceed like the Graham scan algorithm, but with one more constraint: instead of selecting the point that makes the largest angle with the previous line (have a look at Graham scan to see what I mean (*)), choose the one with the largest angle that was part of one of the original polygons.
You will get an envelope (not convex) that describes your shape.
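For the first step (adding every intersection point of the two outlines to the point set), a minimal segment-intersection helper could look like this (Python sketch; names are illustrative):

```python
def segment_intersection(p1, p2, q1, q2, eps=1e-12):
    """Intersection point of segments p1-p2 and q1-q2, or None."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    sx, sy = q2[0] - q1[0], q2[1] - q1[1]
    denom = rx * sy - ry * sx
    if abs(denom) < eps:
        return None                        # parallel (or collinear) segments
    qpx, qpy = q1[0] - p1[0], q1[1] - p1[1]
    t = (qpx * sy - qpy * sx) / denom      # position along p1-p2
    u = (qpx * ry - qpy * rx) / denom      # position along q1-q2
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return (p1[0] + t * rx, p1[1] + t * ry)
    return None
```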
Note:
It's similar to finding the convex hull of your points.
For example, the Graham scan algorithm finds the convex hull of a set of points in O(N log N), where N is the number of points.
Look up convex hull algorithms and you will find some ideas.
Remarks:
(*) From Wikipedia:
The first step in this algorithm is to find the point with the lowest y-coordinate. If the lowest y-coordinate exists in more than one point in the set, the point with the lowest x-coordinate out of the candidates should be chosen. Call this point P. This step takes O(n), where n is the number of points in question.
Next, the set of points must be sorted in increasing order of the angle they and the point P make with the x-axis. Any general-purpose sorting algorithm is appropriate for this, for example heapsort (which is O(n log n)). In order to speed up the calculations, it is not necessary to calculate the actual angle these points make with the x-axis; instead, it suffices to calculate the cosine of this angle: it is a monotonically decreasing function in the domain in question (which is 0 to 180 degrees, due to the first step) and may be calculated with simple arithmetic.
In the convex hull algorithm you choose the point that makes the largest angle with the previous side.
To "stick" with your original polygons, just add the constraint that the side you select must have existed in one of them, and drop the constraint that the angle be less than 180°.
I don't have a full answer, but I'm about to embark on a similar problem. I think there are two steps which are fairly important. The first is to find a point on one of the polygons which lies on the outside edge. The second is to make a list of bounding boxes for all the edges and see which of them overlap. This means that when you iterate through the edges, you don't have to run intersection tests for all of them, only for those which you know have a chance of intersecting (bounding-box tests are lightweight).
Since you now have an outside point, you can iterate through connected points until you detect an intersection. If you know which side is inside and which is outside (you may need to do some work on the first vertex to figure this out), you know which way to go at the intersection. Then it's merely a matter of switching polygons.
This gets a little more interesting if you want to maintain that hole (which I do), in which case I would probably make sure I had used up all of my intersecting bounding boxes. You also didn't specify what should happen if your polygons don't intersect at all; that's either a case of leaving them alone (which could be a problem if you're expecting a single polygon out) or returning an error.
I am programming an algorithm where I have broken up the surface of a sphere into grid points (for simplicity I have the grid lines parallel and perpendicular to the meridians). Given a point A, I would like to be able to efficiently take any grid "square" and determine the point B in the square with the least spherical coordinate distance AB. In the degenerate case the "squares" are actually "triangles".
I am actually only using it to bound which squares I am searching, so I can also accept a lower bound if it is only a tiny bit off. For this reason, I need the algorithm to be extremely quick otherwise it would be better to just take the loss of accuracy and search a few more squares.
I decided to repost this question to Math Overflow: https://mathoverflow.net/questions/854/closest-grid-square-to-a-point-in-spherical-coordinates. More progress has been made here
For points on a sphere, the points closest in the full 3D space will also be closest when measured along the surface of the sphere. The actual distances will be different, but if you're just after the closest point it's probably easiest to minimize the 3D distance rather than worry about great circle arcs, etc.
To find the actual great-circle distance between two (latitude, longitude) points on the sphere, you can use the first formula in this link.
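For illustration, here are sketches of both distances (Python; the haversine formula is one standard way to compute the great-circle distance and may not be the exact formula behind the link):

```python
import math

def chord_distance(lat1, lon1, lat2, lon2, r=1.0):
    """Straight-line 3D (chord) distance between two (lat, lon) points in radians."""
    def to_xyz(lat, lon):
        return (math.cos(lat) * math.cos(lon),
                math.cos(lat) * math.sin(lon),
                math.sin(lat))
    return r * math.dist(to_xyz(lat1, lon1), to_xyz(lat2, lon2))

def great_circle_distance(lat1, lon1, lat2, lon2, r=1.0):
    """Haversine formula; inputs in radians, result in the same unit as r."""
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))
```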
A few points, for clarity.
Unless you specifically wish these squares to be square (and hence to not fit exactly in this parallel and perpendicular layout with regards to the meridians), these are not exactly squares. This is particularly visible if the dimensions of the square are big.
The question speaks of a [perfect] sphere. Matters would be somewhat different if we were considering the Earth (or other planets) with its flattened poles.
Following is a "algorithm" that would fit the bill, I doubt it is optimal, but could offer a good basis. EDIT: see Tom10's suggestion to work with the plain 3D distance between the points rather than the corresponding great cirle distance (i.e. that of the cord rather than the arc), as this will greatly reduce the complexity of the formulas.
Problem layout (A, B and Sq as defined in the OP's question):
A  : a given point on the surface of the sphere
Sq : a given "square" of the grid
B  : the solution to the problem, i.e. the point within Sq with the shortest distance to A
C  : the point at the center of Sq
Tentative algorithm:
Using the formulas associated with great circles, we can:
- find the equation of the great circle that passes through A and C;
- find the distance between A and C (see the formula in Tom10's reply);
- find the intersection of the great-circle arc between these points with the parallel or meridian arcs bounding Sq. There should be only one such point, unless it lands on a "corner" of Sq, or, in a rarer case, the two points are diametrically opposite (see "antipodes" below).
Then comes the more algorithmic part of the procedure (so far it has been formulas):
- find, by dichotomy, the point on that arc/segment of Sq's boundary which is closest to point A. We're at B! QED.
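As an illustration of the dichotomy step, here is a sketch (Python) of my reading of it, for the case where the relevant piece of Sq's boundary is a parallel arc (fixed latitude, a longitude interval); a meridian arc works the same way with the roles of latitude and longitude swapped:

```python
import math

def central_angle(lat1, lon1, lat2, lon2):
    """Great-circle angular distance (radians) between two (lat, lon) points."""
    c = (math.sin(lat1) * math.sin(lat2)
         + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
    return math.acos(max(-1.0, min(1.0, c)))

def closest_on_parallel(a_lat, a_lon, lat0, lon_lo, lon_hi, iters=60):
    """Ternary search for the point of the parallel arc lat = lat0,
    lon in [lon_lo, lon_hi], that is closest to A.  The distance is unimodal
    on such an arc as long as the arc does not wrap past A's antipodal meridian."""
    lo, hi = lon_lo, lon_hi
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if central_angle(a_lat, a_lon, lat0, m1) < central_angle(a_lat, a_lon, lat0, m2):
            hi = m2
        else:
            lo = m1
    return lat0, (lo + hi) / 2
```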
Optimization:
It is probably possible to make a good "guess" as to the location of B, based on the relative positions of A and C, hence cutting down the number of iterations of the binary search.
Also, if the distance between A and C is beyond a certain threshold, the intersection of the arcs is probably a good enough estimate of B. Only when A and C are relatively close will B be found a bit further along the meridian or parallel arc; in these cases the projection errors between A and C (or B) are smaller, and it may be OK to work with orthogonal coordinates and their simpler formulas.
Another approach is to calculate the distance between A and each of the four corners of the square and to run the dichotomic search from two of these points (not quite sure which; they could be on the meridian or on the parallel...).
(*) Antipodes case: when points A and C happen to be diametrically opposite to one another, all great-circle paths between A and C have the same length, namely half the circumference of the sphere, which is the maximum distance between any two points on its surface. In this case, point B will be the corner of the "square" that is furthest from C.
I hope this helps...
The lazy lower bound method is to find the distance to the center of the square, then subtract the half diagonal distance and bound using the triangle inequality. Given these aren't real squares, there will actually be two diagonal distances - we will use the greater. I suppose that it will be reasonably accurate as well.
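A sketch of that bound (Python; distance stands for whichever great-circle or chord distance function you use):

```python
def square_lower_bound(a, center, corners, distance):
    """Lower bound on the distance from point a to any point of the square:
    distance to the center minus the larger center-to-corner distance
    (valid by the triangle inequality)."""
    half_diagonal = max(distance(center, c) for c in corners)
    return max(0.0, distance(a, center) - half_diagonal)
```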
See Math Overflow: https://mathoverflow.net/questions/854/closest-grid-square-to-a-point-in-spherical-coordinates for an exact solution