I am working on a pathfinding system for my game that uses A*, and I need to position the nodes so that they keep a minimum distance from other points.
I wonder if there is an algorithm that would let me find the best-fitting point on a plane or a line (between neighboring points), as close as possible to the specified position, while maintaining a minimum distance from the neighbors.
Basically I need an algorithm that, given input (in pseudocode) min distance = 2, original position = (1, 1), and a set of existing points, would do this:
In the example the shape is a triangle and the point can be calculated using the Pythagorean theorem, but I need it to work for any shape.
Your problem does not seem easy. If you draw the "forbidden areas", they form a complex geometry made of the union of disks.
Then there are two cases:
if the new point belongs to the allowed area, you are done;
otherwise you need to find the nearest allowed point.
It is easy to see if a point is allowed, by computing all distances. But finding the nearest allowed point seems more challenging. (By the way, this point could be very far.)
If the target point lies inside a circle, the nearest candidate location might be the orthogonal projection on a circle, or the intersection between two circles. Compute all these points and check if they are allowed. Then keep the nearest candidate.
In red, the allowed candidates. In black the forbidden candidates.
For N points, this is an O(N³) process. This can probably be reduced by a factor N by means of computational geometry techniques, but at the price of high complexity.
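For illustration, here is a brute-force sketch of that candidate search in Python (the function name and the equal-radius circle-intersection helper are my own, and it assumes the same minimum distance applies to every existing point):

```python
import math

def nearest_allowed_point(target, points, min_dist, eps=1e-9):
    """Return the allowed point closest to `target`, keeping `min_dist`
    to every point in `points`. Brute-force O(N^3) candidate search."""
    def allowed(p):
        return all(math.dist(p, q) >= min_dist - eps for q in points)

    if allowed(target):
        return target

    candidates = []
    # Projections of the target onto each forbidden circle.
    for c in points:
        d = math.dist(target, c)
        if d > eps:
            t = min_dist / d
            candidates.append((c[0] + t * (target[0] - c[0]),
                               c[1] + t * (target[1] - c[1])))
    # Intersections of every pair of forbidden circles (both of radius min_dist).
    for i, a in enumerate(points):
        for b in points[i + 1:]:
            d = math.dist(a, b)
            if eps < d < 2 * min_dist:
                h = math.sqrt(min_dist**2 - (d / 2)**2)
                mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
                ux, uy = (b[0] - a[0]) / d, (b[1] - a[1]) / d
                candidates.append((mx - h * uy, my + h * ux))
                candidates.append((mx + h * uy, my - h * ux))

    allowed_candidates = [p for p in candidates if allowed(p)]
    return min(allowed_candidates, key=lambda p: math.dist(p, target), default=None)
```

With the example from the question, nearest_allowed_point((1, 1), existing_points, 2) returns either the target itself, the nearest allowed candidate, or None if every candidate is blocked.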
Related
I have a 3D mesh consisting of triangle polygons. My mesh can be either oriented left or right:
I'm looking for a method to detect mesh direction: right vs left.
So far I tried to use mesh centroid:
Compare centroid to bounding-box (b-box) center
See if centroid is located left of b-box center
See if centroid is located right of b-box center
But the problem is that the centroid and b-box center don't have a reliable difference in most cases.
I wonder what is a quick algorithm to detect my mesh direction.
Update
An idea proposed by @collapsar is to order the convex hull points in clockwise order and investigate the longest edge:
Update
Another approach, suggested by @YvesDaoust, is to investigate two specific regions of the mesh:
Count the vertices in two predefined regions of the bounding box. This is a fairly simple O(N) procedure.
Unless your dataset is sorted in some way, you can't be faster than O(N). But if the point density allows it, you can subsample by taking, say, every tenth point while applying the procedure.
You can also keep your idea of the centroid, but apply it to a subpart as well.
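For example, a minimal sketch of that counting idea (the width of the two test strips is an arbitrary choice on my part):

```python
def opens_left_by_vertex_count(points, strip=0.25):
    """Guess the mesh direction by comparing vertex counts in the left and
    right strips of the bounding box (2D points given as (x, y) pairs)."""
    xs = [x for x, _ in points]
    xmin, xmax = min(xs), max(xs)
    width = xmax - xmin
    left = sum(1 for x in xs if x <= xmin + strip * width)
    right = sum(1 for x in xs if x >= xmax - strip * width)
    # Fewer vertices on a side suggests the opening is on that side.
    return left < right
```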
The efficiency of an algorithm to solve your problem will depend on the data structures that represent your mesh. You might need to be more specific about them in order to obtain a sufficiently performant procedure.
The algorithms are presented in an informal way. For a more rigorous analysis, math.stackexchange might be a more suitable place to ask (or another contributor is more adept to answer ...).
The algorithms are heuristic by nature. Proposals 1 and 3 will work fine for meshes whose boundary is mostly locally convex (skipping a rigorous mathematical definition here). Proposal 2 should be less dependent on the mesh shape (and can be easily tuned to cater for ill-behaved shapes).
Proposal 1 (Convex Hull, 2D)
Let M be the set of mesh points, projected onto a 'suitable' plane as suggested by the graphics you supplied.
Compute the convex hull CH(M) of M.
Order the n points of CH(M) in clockwise order relative to any point inside CH(M) to obtain a point sequence seq(P) = (p_0, ..., p_(n-1)), with p_0 being an arbitrary element of CH(M). Note that this is usually a by-product of the convex hull computation.
Find the longest edge of the convex polygon implied by CH(M).
Specifically, find k, such that the distance d(p_k, p_((k+1) mod n)) is maximal among all d(p_i, p_((i+1) mod n)); 0 <= i < n;
Consider the vector (p_k, p_((k+1) mod n)).
If the y coordinate of its head is greater than that of its tail (ie. its projection onto the line ((0,0), (0,1)) is oriented upwards) then your mesh opens to the left, otherwise to the right.
Step 3 exploits the condition that the mesh boundary be mostly locally convex. Thus the convex hull polygon sides are basically short, with the exception of the side that spans the opening of the mesh.
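A possible sketch of Proposal 1, assuming the projection onto a plane has already been done and using scipy for the convex hull (the function name and library choice are mine):

```python
import numpy as np
from scipy.spatial import ConvexHull

def opens_left_longest_edge(points_2d):
    """Proposal 1: find the longest convex-hull edge and test its orientation.
    `points_2d` is an (N, 2) array of projected mesh points."""
    pts = np.asarray(points_2d, dtype=float)
    hull = ConvexHull(pts)
    # hull.vertices come in counter-clockwise order; reverse them for clockwise.
    seq = pts[hull.vertices[::-1]]
    n = len(seq)
    # Find the longest polygon side (p_k, p_(k+1 mod n)).
    k = max(range(n), key=lambda i: np.linalg.norm(seq[(i + 1) % n] - seq[i]))
    tail, head = seq[k], seq[(k + 1) % n]
    # Head above tail (edge pointing upward) => mesh opens to the left.
    return head[1] > tail[1]
```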
Proposal 2 (bisector sampling, 2D)
Order the mesh points by their x coordinates into a sequence seq(M).
Split seq(M) into 2 halves; let seq_left(M), seq_right(M) denote the partition elements.
Repeat the following steps for both point sets.
3.1. Select randomly 2 points p_0, p_1 from the point set.
3.2. Find the bisection point p_01 (the midpoint) of the line segment (p_0, p_1).
3.3. Test whether p_01 lies within the mesh.
3.4. Keep a count on failed tests.
Statistically, the mesh point subset that 'contains' the opening will produce more failures for the same number of tests run on each partition. Alternative test criteria will work as well, e.g. recording the average distance d(p_0, p_1) or the average length of the portions of (p_0, p_1) outside the mesh (both are higher on the mesh point subset with the opening). Cut off the repetition of step 3 once the difference in test results between the two halves is 'sufficiently pronounced'. For ill-behaved shapes, increase the number of repetitions.
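A possible sketch of Proposal 2, assuming a caller-supplied point-in-mesh predicate `contains` (how to implement that test depends on your mesh representation):

```python
import random

def opening_side(points, contains, trials=200, seed=0):
    """Proposal 2: split points into left/right halves by x, sample segment
    midpoints in each half, and count how often a midpoint falls outside the
    mesh. `contains(p)` is a caller-supplied point-in-mesh test."""
    rng = random.Random(seed)
    ordered = sorted(points, key=lambda p: p[0])
    halves = {"left": ordered[:len(ordered) // 2],
              "right": ordered[len(ordered) // 2:]}
    failures = {}
    for side, pts in halves.items():
        fails = 0
        for _ in range(trials):
            p0, p1 = rng.sample(pts, 2)
            mid = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)
            if not contains(mid):
                fails += 1
        failures[side] = fails
    # The half containing the opening accumulates more failed tests.
    return max(failures, key=failures.get)
```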
Proposal 3 (Convex Hull, 3D)
For the sake of completeness only, as your problem description suggests that the analysis effectively takes place in 2D.
Similar to Proposal 1, the computations can be performed in 3D. The convex hull of the mesh points then implies a convex polyhedron whose faces should be ordered by area. Select the face with the maximum area and compute its outward-pointing normal which indicates the direction of the opening from the perspective of the b-box center.
The computation gets more complicated if there is much variation in the side lengths of the minimal bounding box of the mesh points, i.e. if there is a plane in which most of the variation of the mesh point coordinates occurs. In the graphics you've supplied that would be the plane in which the mesh points are rendered, assuming that their coordinates do not vary much along the axis perpendicular to that plane.
The solution is to identify such a plane and project the mesh points onto it, then resort to proposal 1.
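A simplified sketch of Proposal 3 using scipy (it picks the largest triangular facet of the hull instead of merging coplanar facets into true polyhedron faces, which is an approximation):

```python
import numpy as np
from scipy.spatial import ConvexHull

def opening_direction_3d(points_3d):
    """Proposal 3 (simplified): return the outward normal of the largest
    triangular facet of the 3D convex hull as the opening direction."""
    pts = np.asarray(points_3d, dtype=float)
    hull = ConvexHull(pts)
    best_area, best_normal = -1.0, None
    for simplex, eq in zip(hull.simplices, hull.equations):
        a, b, c = pts[simplex]
        area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
        if area > best_area:
            best_area = area
            best_normal = eq[:3]   # scipy stores outward-pointing facet normals
    return best_normal
```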
I'm struggling with a 3D problem for which I'm trying to find an efficient algorithm.
I have a bounding box with given width, height, and depth.
I also have a list of spheres. That is, a center coordinate (xi,yi,zi) and radius ri for each sphere.
The spheres are guaranteed to fit within the bounding box and to not overlap each other.
So my situation is like this:
Now I have a new sphere with radius r, which I have to fit inside the bounding box, not overlapping any of the previous spheres.
I also have a target point T = (x,y,z) and my goal is to fit this new sphere (given the conditions above) as close as possible to this target point.
I'm trying to construct an efficient algorithm to find an optimal position for the new sphere. Optimal as in: as close to the target point as possible. Or a "false" result if there is no space to fit this new sphere between or around the existing ones anywhere within the bounding box.
I have thought of all sorts of complex approaches, such as building some sort of parametric description of the remaining volume, starting with the bounding box and subtracting the existing spheres one by one. But it doesn't seem to lead me towards a workable solution.
Note that there are a lot of known 'sphere packing' algorithms, but they tend to just fill volumes with random spheres. Also, they often use a trial-and-error approach, making a certain number of random attempts and then terminating.
Whereas I have a given specific new sphere size, and I need to fit that in (or find out that it's not possible).
A possible approach is to compute the "distance map" of the spheres, i.e. the function that returns, for every point (x, y, z), the distance to the closest sphere surface: the minimum over all spheres of the distance to the center minus the corresponding radius. The map is made of the intersection of (hyper)conical surfaces.
Then you can explore the distance map around the target point and find the closest point with a value that exceeds the target radius.
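In code, the distance map itself is tiny (a sketch, with the spheres given as (center, radius) pairs):

```python
import math

def distance_map(p, spheres):
    """Distance from point p to the closest sphere surface:
    the minimum over all spheres of (distance to center - radius)."""
    return min(math.dist(p, c) - r for c, r in spheres)
```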
If I am right, the distance map is directly related to the additively weighted Voronoi diagram of the sphere centers (https://en.wikipedia.org/wiki/Weighted_Voronoi_diagram), and the vertices of the diagram correspond to local maxima. Hence the closest Voronoi vertex with a value that exceeds the target radius will give a solution.
Unfortunately, the construction of this diagram won't be a barrel of laughs. Check the article "Euclidean Voronoi diagram of 3D balls and its computation via tracing edges" and its bibliography.
A possibly workable solution to estimate the distance map is by discretizing space in a regular grid of cubes, and for every cube obtain a lower and an upper bound of the distance function.
For a single given sphere and a given cube, it is possible to find the minimum and maximum value analytically. Then considering all spheres, you can find the smallest maximum and smallest minimum, which are an upper and lower bound of the true distance (the largest minimum won't do). Then you keep all the spheres such that the minimum remains below that upper bound and you get a (hopefully short) list of candidates.
Here you can check the distances to the spheres in the list, and if the upper bound is smaller than the target radius, you can drop the cube. If you find a lower bound above the target radius, you have found a solution.
Otherwise, if the uncertainty range on the distance function is too large, subdivide the cube in smaller ones for a more accurate estimate of the upper and lower bounds.
To obtain a solution close to the target point, you will visit the cubes by increasing distance from the target (using nested digital spheres), until you find a match.
A key point in this process is to quickly find the spheres closest to a given cube, for the initial estimates. A data structure such as a kD-tree or similar might be helpful.
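Here is a rough sketch of that search. Instead of the per-sphere analytic bounds over a cube, it uses the simpler fact that the distance map is 1-Lipschitz, so over a cube it stays within its value at the cube center plus or minus the half-diagonal; all names and the grid/heap organisation are my own, and it returns a near-optimal center rather than the exact optimum:

```python
import heapq
import math

def fit_sphere(target, r, spheres, bbox_min, bbox_max, cell=1.0, min_cell=0.01):
    """Find a center for a new sphere of radius r, close to `target`, that
    clears every (center, radius) pair in `spheres` and stays in the box.
    Cubes are visited in order of distance to the target and subdivided
    while their clearance bounds are inconclusive. Returns None on failure."""
    def clearance(p):
        d = min(math.dist(p, c) - rad for c, rad in spheres) if spheres else math.inf
        # Also require the center to be at least r from every box face.
        return min(d, *(p[i] - bbox_min[i] for i in range(3)),
                      *(bbox_max[i] - p[i] for i in range(3)))

    # Seed the priority queue with a coarse grid of cube centers.
    heap = []
    x = bbox_min[0] + cell / 2
    while x < bbox_max[0]:
        y = bbox_min[1] + cell / 2
        while y < bbox_max[1]:
            z = bbox_min[2] + cell / 2
            while z < bbox_max[2]:
                heapq.heappush(heap, (math.dist((x, y, z), target), cell, (x, y, z)))
                z += cell
            y += cell
        x += cell

    while heap:
        _, size, p = heapq.heappop(heap)
        half_diag = math.sqrt(3) * size / 2
        c = clearance(p)
        if c >= r:                      # the cube center itself works
            return p
        if c + half_diag < r:           # the whole cube is hopeless, drop it
            continue
        if size <= min_cell:            # stop refining below the resolution limit
            continue
        for dx in (-size / 4, size / 4):   # subdivide into 8 child cubes
            for dy in (-size / 4, size / 4):
                for dz in (-size / 4, size / 4):
                    q = (p[0] + dx, p[1] + dy, p[2] + dz)
                    heapq.heappush(heap, (math.dist(q, target), size / 2, q))
    return None
```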
As input, two polygons are given (the coordinates of the vertices of each polygon are listed in traversal order; however, the traversal order of the two polygons may be chosen differently). Can one polygon be transformed into the other using only parallel translation and proportional scaling?
I have the following idea:
Find some common vertex of the two polygons and translate one polygon so that these vertices coincide, then scale so that a neighboring vertex matches the corresponding vertex of the other polygon. But I think this is wrong; at least I can't write it in code.
Is there some special formula or theorem for this problem?
I would solve it like this.
Find the necessary parallel transport.
Find the necessary scaling.
See if they are the same polygon now.
So to start, take the vertex that is farthest to the left, and if there is a tie, the one that is farthest down. Find that for both polygons. Use parallel transport to put that vertex at the origin for both.
Now take the vertex that is farthest to the right, and if there is a tie, the one that is farthest up. Find that for both polygons. If it is not at the same slope, then they are different. If it is, then scale one so that the points match.
Now see if all of the points match. If not, they are different. Otherwise the answer is yes.
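A sketch of that procedure, with one small variation: instead of the rightmost vertex and a slope test, it fixes the scale from the vertex farthest from the anchor and then compares all vertices. It assumes both vertex lists are given in the same rotational order; the helper names are mine.

```python
import math

def similar_by_translation_and_scale(poly_a, poly_b, eps=1e-9):
    """Check whether poly_b equals poly_a after a translation plus a uniform
    positive scaling. Both polygons are vertex lists in the same rotational
    order; correspondence is fixed by rotating each list to its anchor."""
    if len(poly_a) != len(poly_b):
        return False

    def anchor_index(poly):              # leftmost vertex, ties broken by lowest
        return min(range(len(poly)), key=lambda i: (poly[i][0], poly[i][1]))

    def normalize(poly):
        i = anchor_index(poly)
        ax, ay = poly[i]
        # Rotate so the anchor comes first, then translate it to the origin.
        rotated = poly[i:] + poly[:i]
        return [(x - ax, y - ay) for x, y in rotated]

    a, b = normalize(poly_a), normalize(poly_b)
    # Use the vertex farthest from the origin to determine the scale factor.
    k = max(range(len(a)), key=lambda i: a[i][0]**2 + a[i][1]**2)
    na, nb = math.hypot(*a[k]), math.hypot(*b[k])
    if nb < eps:
        return na < eps                  # degenerate case
    s = na / nb
    return all(abs(ax - s * bx) < eps and abs(ay - s * by) < eps
               for (ax, ay), (bx, by) in zip(a, b))
```

If the two polygons may be listed in opposite rotational orders, run the check a second time with one of the vertex lists reversed.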
Compute the axis-aligned bounding boxes of the two polygons.
If the aspect ratios do not match, the answer is negative. Otherwise the ratio of corresponding sides is your scaling factor. The translation is obtained by linking the top left corners and the transformation equations are
X = s.(x - xtl) + Xtl
Y = s.(y - ytl) + Ytl
where s is the scaling factor and (xtl, ytl), (Xtl, Ytl) are the corners.
Now choose a vertex of the first polygon, predict the coordinates in the other and find the matching vertex. If you can't, the answer is negative. Otherwise, you can compare the remaining vertices*.
*I assume that the polygons do not have overlapping vertices. If they can have arbitrary self-overlaps, I guess that you have to try matching all vertices, with all cyclic permutations.
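A sketch of the bounding-box test (the minimum corner is used instead of the top-left one, since any pair of corresponding corners gives the same translation; degenerate boxes are simply rejected):

```python
import math

def match_by_bounding_box(poly_a, poly_b, eps=1e-6):
    """Equal aspect ratios give the scale factor, corresponding bbox corners
    give the translation, and every transformed vertex of the first polygon
    must match some vertex of the second."""
    if len(poly_a) != len(poly_b):
        return False

    def bbox(poly):
        xs = [x for x, _ in poly]
        ys = [y for _, y in poly]
        return min(xs), min(ys), max(xs), max(ys)

    ax0, ay0, ax1, ay1 = bbox(poly_a)
    bx0, by0, bx1, by1 = bbox(poly_b)
    wa, ha, wb, hb = ax1 - ax0, ay1 - ay0, bx1 - bx0, by1 - by0
    if min(wa, ha, wb, hb) <= eps:
        return False                       # degenerate bbox not handled here
    if abs(wa * hb - wb * ha) > eps * max(wa * hb, wb * ha):
        return False                       # aspect ratios differ
    s = wb / wa                            # scaling factor
    # X = s*(x - x0) + X0, Y = s*(y - y0) + Y0, linking the minimum corners.
    predicted = [(s * (x - ax0) + bx0, s * (y - ay0) + by0) for x, y in poly_a]
    return all(any(math.dist(p, q) < eps for q in poly_b) for p in predicted)
```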
I'm working with a really slow renderer, and I need to approximate polygons so that they look almost the same when confined to a screen area containing very few pixels. That is, I'd need an algorithm to go through a polygon and subtract/move a bunch of vertices until the resulting polygon has a good combination of shape preservation and economy of vertex usage.
I don't know if there's a formal name for this kind of problem, but if anyone knows what it is, it would help me get started with my research.
My untested plan is to remove the vertices that change the polygon area the least, and protect the vertices that touch the bounding box from removal, until the difference in area from the original polygon to the proposed approximate one exceeds a tolerance I specify.
This would all be done only once, not in real time.
Any other ideas?
Thanks!
You're thinking about the problem in a slightly off way. If your goal is to reduce the number of vertices with a minimum of distortion, you should be defining your distortion in terms of those same vertices, which define the shape. There's a very simple solution here, which I believe would solve your problem:
Calculate distance between adjacent vertices
Choose a tolerance between vertices, below which the vertices are resolved into a single vertex
Replace all pairs of vertices with distances lower than your cutoff with a single vertex halfway between the two.
Repeat until no vertices are removed.
Since your area is ultimately decided by the vertex placement, this method preserves shape and minimizes shape distortion. The one drawback is that distance between vertices might be slightly less intuitive than polygon area, but the two are proportional. If you really wish, you could run through the change in area that would result from vertex removal, but that's a lot more work for questionable benefit imo.
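A minimal sketch of that merging loop (names are mine):

```python
import math

def merge_close_vertices(poly, tol):
    """Repeatedly replace adjacent vertex pairs closer than `tol` with their
    midpoint, until no adjacent pair is closer than the tolerance."""
    poly = list(poly)
    changed = True
    while changed and len(poly) > 3:
        changed = False
        for i in range(len(poly)):
            j = (i + 1) % len(poly)
            if math.dist(poly[i], poly[j]) < tol:
                mid = ((poly[i][0] + poly[j][0]) / 2,
                       (poly[i][1] + poly[j][1]) / 2)
                # Replace the pair with a single midpoint vertex.
                poly[i] = mid
                del poly[j]
                changed = True
                break
    return poly
```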
As mentioned by Angus, if you want a direct solution for the change in area, it's not actually super difficult. I was originally going to leave this as an exercise for the reader, but it's totally possible to solve this exactly, though you need to include the vertices on either side.
1. Assume you're looking at a window of vertices [A, B, C, D] that are connected in that order. In this example we're determining the "cost" of combining B and C.
2. Calculate the angle offset from collinearity from A toward C. Basically you just want to see how far from collinear the two points are. This is |sin(|arctan(B - A)| - |arctan(C - A)|)|, where the pipes denote absolute value and the differences are the usual vector differences.
3. Calculate the total distance over which the angle change will effectively be applied; this is just the Euclidean distance from A to B times the Euclidean distance from B to C.
4. Multiply the terms from 2 and 3 to get your first term.
5. To get your second term, repeat steps 2 - 4, replacing A with D, B with C, and C with B (just going in the opposite direction).
6. Calculate the geometric mean of the two terms obtained.
The number that results from step 6 represents the full picture minus a couple of constants.
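A literal transcription of that cost, reading arctan(B - A) as the atan2 angle of the vector B - A (that reading is my own interpretation of the notation):

```python
import math

def merge_cost(a, b, c, d):
    """Cost of combining vertices B and C, given the window [A, B, C, D],
    following the steps above (arctan of a vector is read as atan2)."""
    def angle(p, q):                     # angle of the vector q - p
        return math.atan2(q[1] - p[1], q[0] - p[0])

    term1 = abs(math.sin(abs(angle(a, b)) - abs(angle(a, c)))) \
        * math.dist(a, b) * math.dist(b, c)
    term2 = abs(math.sin(abs(angle(d, c)) - abs(angle(d, b)))) \
        * math.dist(d, c) * math.dist(c, b)
    return math.sqrt(term1 * term2)      # geometric mean of the two terms
```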
I tried my own plan first: protect the vertices touching the bounding box, then remove the rest in the order that changes the resulting area the least, until you can't find a vertex to remove that keeps the new polygon area within X% of the original one. This is the result with X = 5%:
When the user zooms out really far these shapes fit the bill well enough for me. I haven't tried any of the other suggestions. The savings are quite astonishing, sometimes from 80-100 vertices down to 4 or 5.
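For reference, a sketch of that plan as I read it: the removal cost is the change in total (shoelace) polygon area, bounding-box vertices are protected, and the loop stops when no removal keeps the area within the tolerance. All names are mine.

```python
def shoelace_area(poly):
    """Signed polygon area via the shoelace formula."""
    return 0.5 * sum(x0 * y1 - x1 * y0
                     for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]))

def simplify(poly, tol=0.05):
    """Remove vertices, cheapest first, while the simplified polygon's area
    stays within `tol` (e.g. 0.05 for 5%) of the original area. Vertices
    lying on the bounding box are protected from removal."""
    poly = [tuple(p) for p in poly]
    original = abs(shoelace_area(poly))
    xs = [x for x, _ in poly]
    ys = [y for _, y in poly]
    xmin, xmax, ymin, ymax = min(xs), max(xs), min(ys), max(ys)
    protected = {p for p in poly
                 if p[0] in (xmin, xmax) or p[1] in (ymin, ymax)}

    while len(poly) > 3:
        best = None
        for i, p in enumerate(poly):
            if p in protected:
                continue
            candidate = poly[:i] + poly[i + 1:]
            deviation = abs(abs(shoelace_area(candidate)) - original)
            if deviation <= tol * original and (best is None or deviation < best[0]):
                best = (deviation, i)
        if best is None:
            break
        del poly[best[1]]
    return poly
```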
I am trying to create an algorithm for 'fleeing' and would like to first find points which are 'safe'. That is to say, points that are relatively distant from other points.
This is 2D (not that it matters much) and occurs within a fixed sized circle.
I'm guessing the sum of the squared distances would produce a good starting equation, whereby the highest score is the furthest away.
As for picking the points, I do not think it would be possible to solve for X,Y but approximation is sufficient.
I did some reading and determined that in order to cover the area of a circle, you need 7 half-sized circles (with six centers forming a hexagon and a seventh at the center).
I could iterate through these, all of which are within the circle to begin with. As I choose the best-scoring circle, I could continue to divide it into 7 circles, of course excluding any points which fall outside the original circle.
I could then iterate to a desired precision or a desired level.
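A sketch of that hierarchical refinement (the six outer sub-circle centers are placed at √3/2 of the parent radius so that the seven half-radius circles cover the parent; the scoring is the sum of squared distances mentioned above, and the names are mine):

```python
import math

def find_safe_point(center, radius, threats, levels=6):
    """Hierarchically search a circle for the point farthest from `threats`.
    Each level scores the centers of 7 half-radius sub-circles (one in the
    middle, six on a hexagon) and recurses into the best-scoring one."""
    def score(p):                    # sum of squared distances: higher = safer
        return sum(math.dist(p, t) ** 2 for t in threats)

    cx, cy = center
    bound_c, bound_r = center, radius        # original circle constraint
    for _ in range(levels):
        d = math.sqrt(3) / 2 * radius        # hexagon ring distance
        candidates = [(cx, cy)] + [
            (cx + d * math.cos(a), cy + d * math.sin(a))
            for a in (k * math.pi / 3 for k in range(6))]
        # Discard candidates that fall outside the original circle.
        candidates = [p for p in candidates if math.dist(p, bound_c) <= bound_r]
        cx, cy = max(candidates, key=score)
        radius /= 2
    return (cx, cy)
```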
To expand on the approach, the assumption is that it takes time to arrive at a location, and while the location may be safe, the trip in between may not be. How should I incorporate the distance into the equation so that I arrive at a good solution?
I suppose I could square the distance to the new point and multiply it by the score, and iterate from there. It would strongly favor a local spot, but I imagine that is a good behavior. It would try to resolve a safe spot close by and then upon re-calculating it could find 'outs' and continue to sneak to safety.
Any thoughts on this, or has this problem been done before? I wasn't able to find this problem specifically when I looked.
EDIT:
I've brought in the C# implementation of Fortune's Algorithm, and also added a few points around my points to create a pseudo circular constraint, as I don't understand the algorithm well enough to adjust it manually.
I realize now that the blue lines create a path between nodes. I can use the length of these and the distance between the surrounding points to compute a path (time to traverse and danger) and weigh that against the safety (the empty circle it is trying to get to) to determine the best course of action. By playing with how these interact, I can eliminate most of the work I would have had to do, simply by using the Voronoi diagram. Also, my spawning algorithm will use this now, to determine the LEC (largest empty circle) and spawn at that spot.
You can take the convex hull of your set of locations - the vertices of the convex hull will give you the set of "most distant" points. Next, take the centroid of the points you're fleeing from, then determine which vertex of the convex hull is the most distant from the centroid. You may be able to speed this up by, for example, dividing the playing field into quadrants - you only need to test the vertices that are in the furthermost quadrant (e.g., if the centroid is in the positive-x positive-y quadrant, then you only need to check the vertices in the negative-x negative-y quadrant); if the playing field is an irregular shape then this may not be an option.
As an alternative to fleeing to the most distant point, if you have a starting point that you're fleeing from (e.g. the points you're fleeing from are enemies, and the player character is currently at point X which denotes its starting point), then rather than have the player flee to the most distant point you can instead have the player follow the trajectory that most quickly takes them from the centroid of the enemies - draw a ray from the enemies' centroid through the player's location, and that ray gives you the direction that the player should flee.
If the player character is surrounded then both of these algorithms will give nonsense results, but in that case the player character doesn't really have any viable options anyway.
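A small sketch of both suggestions (the convex hull of the candidate locations is assumed to have been computed already, e.g. with scipy.spatial.ConvexHull; names are mine):

```python
import math

def flee_target(hull_vertices, threats):
    """First suggestion: among the convex-hull vertices of the candidate
    locations, pick the one farthest from the threats' centroid."""
    cx = sum(x for x, _ in threats) / len(threats)
    cy = sum(y for _, y in threats) / len(threats)
    return max(hull_vertices, key=lambda p: math.dist(p, (cx, cy)))

def flee_direction(player, threats):
    """Second suggestion: flee along the ray from the threats' centroid
    through the player's position (returned as a unit vector)."""
    cx = sum(x for x, _ in threats) / len(threats)
    cy = sum(y for _, y in threats) / len(threats)
    dx, dy = player[0] - cx, player[1] - cy
    norm = math.hypot(dx, dy)
    return (dx / norm, dy / norm) if norm > 0 else (1.0, 0.0)
```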