My task is to implement the construction of a visibility graph from a given set of simple polygons. The polygons have positive integer coordinates and can be non-convex. I wanted to implement the rotational plane sweep algorithm, which was mentioned in the lecture but not explained very well.
The only other source I could find about this algorithm was this page, which did not make it fully clear either: https://tanergungor.blogspot.com/2015/04/robot-navigation-rotational-sweep.html
I would appreciate it if someone could explain the rotational plane sweep algorithm to an extent that I can understand.
Here is a screenshot of an example obstacle arrangement with my attempt at a solution, which is not yet working and is based more on trial and error than on understanding and implementing the actual algorithm. In this example, the algorithm uses just a single vertex that is not located on any polygon.
Related
I'm working on my bachelor thesis (in Computer Science), and right now I'm facing the problem of finding the shortest path between two points on a manifold 3D triangular mesh. I have already read about MMP, but it computes the distance function $d(x)$ between a given source point and every vertex $x$ on the mesh.
I learned that the problem I'm solving is called the geodesic shortest-path problem, but what I really couldn't find is a good algorithm that uses A* to find the shortest path between two given points at two given vertices.
I also 'invented' an algorithm that uses A* with a Euclidean-distance heuristic and a correction step after finding a new point on any edge.
I also have the edges stored in a half-edge data structure.
So my main idea is this:
We find the closest edge with the A* algorithm and, on that edge, find the point minimizing $f(x) + g(x)$, where $f$ is the current distance travelled and $g$ is the heuristic (Euclidean distance).
Every time we find a new edge, we unfold the current mesh across it and find the shortest path back to our starting point (a small unfolding sketch follows below).
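To illustrate the unfolding step, here is a minimal, hedged sketch of placing the apex of the next triangle in the 2D plane of the already-unfolded one while preserving the 3D edge lengths. The function and variable names are illustrative and not taken from any particular library.

```python
# A minimal 2D "unfolding" sketch: the shared edge (a2, b2) is already laid
# out in the plane, c2 is the previously unfolded apex, and len_ad / len_bd
# are the 3D edge lengths from the new triangle's apex to a and b.
import math

def unfold_apex(a2, b2, c2, len_ad, len_bd):
    """Place the next triangle's apex in 2D, preserving edge lengths."""
    ax, ay = a2
    bx, by = b2
    ab = math.dist(a2, b2)
    # Circle-circle intersection: distance along the edge and height above it.
    x = (len_ad**2 - len_bd**2 + ab**2) / (2 * ab)
    h = math.sqrt(max(len_ad**2 - x**2, 0.0))   # clamp tiny negative rounding
    ux, uy = (bx - ax) / ab, (by - ay) / ab     # unit vector along the edge
    nx, ny = -uy, ux                            # unit normal to the edge
    # Two mirror candidates; pick the one on the opposite side from c2.
    side = (c2[0] - ax) * nx + (c2[1] - ay) * ny
    sign = -1.0 if side > 0 else 1.0
    return (ax + x * ux + sign * h * nx, ay + x * uy + sign * h * ny)
```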
So now my questions:
Do you know of a research paper that discusses this problem?
Why has nobody written about an algorithm that uses A*?
What is your opinion of the algorithm I proposed?
Here are some papers and tools related to finding geodesics (or approximations) on a surface mesh:
A Survey of Algorithms for Geodesic Paths and Distances
You Can Find Geodesic Paths in Triangle Meshes by Just Flipping Edges (code)
The Vector Heat Method (code)
You can find more papers in the survey paper.
I implemented the algorithm you mentioned (MMP) a long time ago; it is quite difficult to get right and quite time-consuming, since the number of splits along an edge grows quickly.
I am no expert in the matter, so read with prejudice. Also, sorry, this is more of a comment than an answer...
First, you should clarify some things:
is the mesh convex or concave?
are the paths always on the surface, or can they fly between faces on the outside (if concave), but never inside?
are the start/end points always on edges of faces, or can they be inside a face?
Assuming concave, points on edges, and only surface paths...
I think a pure graph A* approach is unusable, as there are infinitely many possible paths between a point and any edge of the same face, so how would you test all of them?
If you really want A*, then you can do something similar to raster A*:
resample all your edges to more points,
either n points per edge, or some density like 10 points per average edge length or some detail size;
then use graph A* on the resampled points (do not handle them as edges anymore).
However, this will produce only something close to the shortest path, so to improve the accuracy you should recursively resample the edges near the used points with higher and higher density, until the distance between resampled points becomes smaller than your accuracy (a rough sketch of the basic version follows below).
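The following is a rough, non-authoritative sketch of the basic (non-recursive) version of this idea: edges are resampled with a fixed number of points, points on different edges of the same face are connected, and a standard A* with a Euclidean heuristic runs over that graph. The data layout (vertex/face lists), the sampling density, and all names are assumptions.

```python
import heapq
import math
from collections import defaultdict
from itertools import combinations

def euclid(p, q):
    return math.dist(p, q)

def build_graph(vertices, faces, samples_per_edge=10):
    """Resample every mesh edge and connect samples that share a face."""
    points, vertex_id, edge_samples = [], {}, {}

    def new_point(p):
        points.append(p)
        return len(points) - 1

    def vid(i):                              # one shared sample per mesh vertex
        if i not in vertex_id:
            vertex_id[i] = new_point(tuple(vertices[i]))
        return vertex_id[i]

    def sample_edge(i, j):
        key = (min(i, j), max(i, j))
        if key not in edge_samples:
            a, b = vertices[key[0]], vertices[key[1]]
            ids = [vid(key[0])]
            for k in range(1, samples_per_edge):
                t = k / samples_per_edge
                ids.append(new_point(tuple(a[d] + t * (b[d] - a[d]) for d in range(3))))
            ids.append(vid(key[1]))
            edge_samples[key] = ids
        return edge_samples[key]

    adjacency = defaultdict(set)
    for face in faces:                       # each face is a triple of vertex indices
        edge_ids = [sample_edge(face[0], face[1]),
                    sample_edge(face[1], face[2]),
                    sample_edge(face[2], face[0])]
        # every sample on one edge of a face "sees" every sample on its other edges
        for ea, eb in combinations(edge_ids, 2):
            for p in ea:
                for q in eb:
                    if p != q:
                        adjacency[p].add(q)
                        adjacency[q].add(p)
    return points, adjacency, vertex_id

def a_star(points, adjacency, start, goal):
    """Standard A* over the sample-point graph; returns a list of point indices."""
    open_heap = [(euclid(points[start], points[goal]), start)]
    g_score, came_from, closed = {start: 0.0}, {}, set()
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]
            while path[-1] in came_from:
                path.append(came_from[path[-1]])
            return list(reversed(path))
        if node in closed:
            continue
        closed.add(node)
        for nxt in adjacency[node]:
            tentative = g_score[node] + euclid(points[node], points[nxt])
            if tentative < g_score.get(nxt, float("inf")):
                g_score[nxt] = tentative
                came_from[nxt] = node
                heapq.heappush(open_heap,
                               (tentative + euclid(points[nxt], points[goal]), nxt))
    return None

# Usage: for start/goal at mesh vertices s and t, run
#   points, adjacency, vertex_id = build_graph(vertices, faces)
#   path = a_star(points, adjacency, vertex_id[s], vertex_id[t])
```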
Another option would be to use something similar to CCD (cyclic coordinate descent):
create a plane that goes through your 2 points and the center of your mesh;
create a path that goes through all intersections of the plane with the faces between the 2 points (use the shorter of the 2 options);
iteratively move the intersection points back and forth along their edges and use a greedy approach to get the result (see the refinement sketch below).
However, this might get stuck in local minima... You could use search/fitting approaches instead, but those will get very slow with an increasing number of faces.
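As an illustration of that greedy refinement step, here is a small hedged sketch: each interior path point is constrained to a mesh edge (a 3D segment), and we repeatedly slide each point along its segment to minimize the length of its two adjacent path links. Function names and the fixed iteration counts are assumptions.

```python
import math

def slide_on_segment(prev_p, seg_a, seg_b, next_p, iters=40):
    """Ternary-search the t in [0, 1] minimizing |prev-p(t)| + |p(t)-next|."""
    def point_at(t):
        return tuple(seg_a[d] + t * (seg_b[d] - seg_a[d]) for d in range(3))
    def cost(t):
        p = point_at(t)
        return math.dist(prev_p, p) + math.dist(p, next_p)
    lo, hi = 0.0, 1.0
    for _ in range(iters):                   # the cost is convex in t
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return point_at((lo + hi) / 2)

def refine_path(points, segments, sweeps=20):
    """points[0] and points[-1] stay fixed; points[i] must lie on segments[i]."""
    pts = list(points)
    for _ in range(sweeps):                  # simple forward sweeps, greedy
        for i in range(1, len(pts) - 1):
            a, b = segments[i]
            pts[i] = slide_on_segment(pts[i - 1], a, b, pts[i + 1])
    return pts
```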
I have a feeling you might also be able to do this using RANSAC...
From my point of view, the first A* approach is the most promising. You just need a linked list of points per edge and one cost counter per point; from that you can easily encode even the recursive improvement of accuracy. It can be done in place, so no reallocation is needed in the recursion... Also, the algorithm is not complicated, so you should have no problem implementing it, and the result is guaranteed, which is not the case with the other approaches I mention... Another advantage is that it can be used even if the start/end point does not lie on an edge...
I have a 2D polygon map of an environment, and I want to simplify this map for some planning tests.
For example, I want to close off areas which cannot be reached by the robot model because the passage is too small.
The second problem is that when I have two segments which are nearly parallel, I want to make them exactly parallel.
Can anyone tell me some algorithm names for that, so that I know where I have to search?
Thank you for your help.
Hunk
The first task is usually solved by applying the Minkowski difference (i.e., the Minkowski sum of the obstacles with the reflected robot shape) to your map and robot. This implies you know the profile of your robot.
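For the common special case of a (roughly) disc-shaped robot, this degenerates to a simple buffer operation, which is easy to try out. Here is a minimal sketch using shapely; the library choice, the radius, and the example coordinates are assumptions, not part of the question.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

robot_radius = 0.5   # illustrative value, half the robot's width

obstacles = [
    Polygon([(0, 0), (4, 0), (4, 1), (0, 1)]),
    Polygon([(0, 1.8), (4, 1.8), (4, 3), (0, 3)]),  # leaves a 0.8-wide passage
]

# Grow every obstacle by the robot radius and merge the results; the narrow
# passage disappears because 2 * robot_radius exceeds the gap width.
grown = unary_union([obs.buffer(robot_radius) for obs in obstacles])
print(grown.geom_type)   # "Polygon": the two grown obstacles have merged
```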
For the second task, a common approach is to use 2D snap rounding. You can find a lot of papers on that topic at scholar.google.com. It may also be helpful to take a look at the Ramer–Douglas–Peucker algorithm for reducing the number of points in a curve; it cannot solve your problem by itself, but it is useful to know that it exists.
Most likely your work is connected to motion planning, so I highly recommend reading Computational Geometry by de Berg, van Kreveld, Overmars, and Schwarzkopf. It is a classic of computational geometry, and there you can find a lot about visibility graphs and motion planning.
Similarly to Mikhail's answer, for your first problem you can convert the map polygon into a binary image and apply a morphological dilation that takes the size of the robot into account. The areas separated by narrow passages then become disconnected components in the binary image.
Another approach is to divide the space into a grid and mark cells as empty or full depending on whether a map line traverses them. Then thicken the boundaries according to the robot size and look for the connected component containing the robot's cell to find the feasible region (a rough sketch follows below).
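A minimal sketch of that grid idea, assuming the map has already been rasterized into a boolean occupancy grid; the use of scipy's morphology routines and the robot size in cells are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation, label

def reachable_cells(occupied: np.ndarray, robot_cells: int, start: tuple) -> np.ndarray:
    """Thicken obstacles by the robot size, then keep only the start's free component."""
    footprint = np.ones((2 * robot_cells + 1, 2 * robot_cells + 1), dtype=bool)
    inflated = binary_dilation(occupied, structure=footprint)
    free = ~inflated
    if not free[start]:
        return np.zeros_like(free)           # the robot's own cell is blocked
    components, _ = label(free)              # connected components of free space
    return components == components[start]
```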
You can use a Delaunay triangulation of the map and traverse the edges of the mesh to find the shortest route.
I have spent time looking for information on the best algorithm for nesting irregular 2D polygons with manual and automatic positioning. I need to use such an algorithm in CAD/CAM software. Here are the real possibilities I've found so far:
Separating Axis Theorem: a fairly quick and simple algorithm to implement, but the drawback I find is that it only works with convex polygons (see the overlap-test sketch after this list). To work with concave polygons, a convex decomposition would have to be done first, which increases the run time and requires implementing an additional algorithm that decomposes a concave polygon into convex ones.
Nesting by a potential (energy) function: by calculating the partial derivatives along the X and Y axes, you can get the escape direction a polygon should move in so that the two polygons no longer collide. I tested this energy function, and the three major problems I encountered were: first, getting stuck in local minima; second, nesting when the collision occurs over a piece; and finally, the execution time is very high.
Using the no-fit polygon: using the no-fit polygon for nesting could be quite interesting. I have read several papers on the subject, although there is very little online documentation about it. I'm not sure whether it can really be a useful choice, and I still have several doubts about the details of this approach.
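For reference, here is a compact, hedged sketch of the separating-axis overlap test referred to in the first option above; it only handles convex polygons, which is exactly the limitation described. Vertex ordering and all names are illustrative assumptions.

```python
def project(polygon, axis):
    """Project all vertices onto an axis and return the (min, max) interval."""
    dots = [x * axis[0] + y * axis[1] for x, y in polygon]
    return min(dots), max(dots)

def convex_polygons_overlap(poly_a, poly_b):
    """True if two convex polygons (lists of (x, y) vertices) intersect."""
    for polygon in (poly_a, poly_b):
        for i in range(len(polygon)):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % len(polygon)]
            axis = (y1 - y2, x2 - x1)           # normal of the current edge
            a_min, a_max = project(poly_a, axis)
            b_min, b_max = project(poly_b, axis)
            if a_max < b_min or b_max < a_min:  # a separating axis exists
                return False
    return True

# Two unit squares, the second shifted so they just overlap: prints True.
print(convex_polygons_overlap([(0, 0), (1, 0), (1, 1), (0, 1)],
                              [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]))
```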
Any idea which of these algorithms to choose? Or do you know of any other options that could be used? I'm a little confused :-) .
Thank you very much.
There is a lot of documentation about how to detect whether a marker is within a polygon in Google Maps. My question, however, is how I can programmatically place a marker somewhere inside a polygon (ideally as far as possible from the edges).
I tried calculating the average latitude and longitude of the polygon's points, but this obviously fails for some non-convex polygons.
I also thought about calculating the area's center of mass, but obviously the same thing can happen.
Any ideas? I would like to avoid trial-and-error approaches, even ones that work 99% of the time.
There are a few different ways you could approach this, depending on what exactly your overall goal is.
One approach would be to construct a triangulation of the polygon and place the marker inside one of the triangles. If you're not too worried about optimality, you could employ a simple heuristic, like choosing the centroid of the largest triangle, although this obviously won't necessarily give you the point furthest from the polygon edges. There are a number of algorithms for polygon triangulation: ear clipping or constrained Delaunay triangulation are probably the way to go, and a number of good libraries exist, e.g. CGAL and Triangle.
If you are interested in finding an optimal placement, it might be possible to use a skeleton-based approach, using either the medial axis or the straight skeleton of the polygon. The medial axis is the set of curves equidistant from the polygon edges, while the straight skeleton is a related structure. Specifically, these types of structures can be used to find points which are furthest away from the edges; check this out for a label placement application in GIS using an approach based on the straight skeleton. (A rough grid-based approximation of the same idea is sketched below.)
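As a hedged illustration of the "furthest from the edges" idea, here is a rough grid-based approximation: rasterize the polygon, compute a distance transform to the exterior, and take the cell with the largest clearance. The use of shapely and scipy, and the grid resolution, are assumptions; the result is only approximate and assumes roughly isotropic cell spacing.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from shapely.geometry import Point, Polygon

def approx_deepest_point(poly: Polygon, resolution: int = 200):
    """Return the grid cell center with the largest clearance to the exterior."""
    minx, miny, maxx, maxy = poly.bounds
    xs = np.linspace(minx, maxx, resolution)
    ys = np.linspace(miny, maxy, resolution)
    # Boolean mask of grid cells whose centers lie inside the polygon.
    inside = np.array([[poly.contains(Point(x, y)) for x in xs] for y in ys])
    # Distance (in cells) from each interior cell to the nearest exterior cell.
    clearance = distance_transform_edt(inside)
    iy, ix = np.unravel_index(np.argmax(clearance), clearance.shape)
    return xs[ix], ys[iy]
```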
Hope this helps.
I have a detailed 2D polygon (representing a geographic area) that is defined by a very large set of vertices. I'm looking for an algorithm that will simplify and smooth the polygon (reducing the number of vertices), with the constraint that the area of the resulting polygon must contain all the vertices of the detailed polygon.
For context, here's an example of the edge of one complex polygon:
My research:
I found the Ramer–Douglas–Peucker algorithm, which will reduce the number of vertices, but the resulting polygon will not necessarily contain all of the original polygon's vertices. See the Ramer-Douglas-Peucker article on Wikipedia.
I considered expanding the polygon (I believe this is also known as outward polygon offsetting). I found these questions: Expanding a polygon (convex only) and Inflating a polygon. But I don't think this will substantially reduce the detail of my polygon.
Thanks for any advice you can give me!
Edit
As of 2013, most of the links below are no longer functional. However, the cited paper, algorithm included, is still available from this (very slow) server.
Here you can find a project dealing with exactly your issue. Although it works primarily with an area "filled" by points, you can set it to work with a "perimeter"-type definition like yours.
It uses a k-nearest-neighbors approach for calculating the region.
Samples:
Here you can request a copy of the paper.
Apparently they planned to offer an online service for requesting calculations, but I didn't test it, and it probably isn't running.
HTH!
I think Visvalingam's algorithm can be adapted for this purpose by skipping the removal of any vertex whose elimination would reduce the area (a rough sketch follows below).
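A small, hedged sketch of that adaptation: a Visvalingam-style pass that only removes vertices whose removal grows (never shrinks) the polygon, so the result still contains the original. It assumes a simple polygon in counter-clockwise order and ignores possible self-intersections; names are illustrative.

```python
def signed_triangle_area(a, b, c):
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def simplify_outward(points, target_count):
    """Remove 'notch' vertices (negative triangle area for CCW input), smallest first."""
    pts = list(points)
    while len(pts) > target_count:
        best_i, best_area = None, None
        for i in range(len(pts)):
            a, b, c = pts[i - 1], pts[i], pts[(i + 1) % len(pts)]
            area = signed_triangle_area(a, b, c)
            # Removing b only grows the polygon when b is a reflex vertex,
            # i.e. the signed area of (a, b, c) is negative for CCW input.
            if area < 0 and (best_area is None or -area < best_area):
                best_i, best_area = i, -area
        if best_i is None:          # no vertex can be removed without shrinking
            break
        del pts[best_i]
    return pts
```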
I had a very similar problem: I needed an inflating simplification of polygons.
I wrote a simple algorithm that either removes a concave point (this increases the polygon's area) or removes a convex edge (between 2 convex points) and prolongs the adjacent edges. In either case, each of these 2 operations removes one point from the polygon.
I chose to remove the point or the edge that leads to the smallest area variation. You can repeat this process until the simplification is good enough for you (for example, no more than 200 points).
The 2 main difficulties were obtaining a fast algorithm (by avoiding computing the vertex/edge removal variation twice and by keeping the candidates sorted) and avoiding the introduction of self-intersections in the process (not very easy to do or to explain, but possible with limited computational complexity).
In fact, after looking more closely, it is a similar idea to Visvalingam's, with an adaptation for edge removal.
That's an interesting problem! I have never tried anything like this, but here's an idea off the top of my head... apologies if it makes no sense or wouldn't work :)
Calculate a convex hull; that might be way too big / imprecise.
Divide the hull into N slices, for example by joining each of the hull's vertices to the center.
Calculate the intersection of your object with each slice.
Repeat recursively for each intersection (calculating the intersection's hull, etc.).
Each level of recursion should give a better approximation... when you reach a satisfying level, merge all the hulls from that level to get the final polygon.
Does that sound like it could do the job?
To some degree I'm not sure what you are trying to do, but it seems you have two very good answers. One is Ramer–Douglas–Peucker (DP) and the other is computing the alpha shape (also called a concave hull, non-convex hull, etc.). I found a more recent paper describing alpha shapes and linked it below.
I personally think DP with polygon expansion is the way to go. I'm not sure why you think it won't substantially reduce the number of vertices: with DP you supply a tolerance, and you can make it anything you want, to the point where you end up with a triangle no matter what your input is. Picking this tolerance can be hard, but in your case I think it's the best method. You should be able to determine the tolerance based on the size of the largest piece of detail you want to disappear. You can do this by direct testing or by calculating it from your source data.
http://www.it.uu.se/edu/course/homepage/projektTDB/ht13/project10/Project-10-report.pdf
I've written a simple modification of Douglas-Peucker that might be helpful to anyone having this problem in the future: https://github.com/prakol16/rdp-expansion-only
It's identical to DP except that it pushes a line segment outwards a bit if any of the points it would remove lie outside the simplified polygon. This guarantees that the resulting simplified polygon contains the entire original polygon, while keeping almost the same number of line segments as the original DP algorithm and usually approximating the original shape reasonably well. (A plain DP pass is sketched below for reference.)
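For readers unfamiliar with the base algorithm, here is a plain Ramer-Douglas-Peucker pass as a hedged reference sketch; the outward-push modification described above lives in the linked repository and is not reproduced here. The epsilon value and names are assumptions.

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    if a == b:
        return math.dist(p, a)
    num = abs((b[0] - a[0]) * (a[1] - p[1]) - (a[0] - p[0]) * (b[1] - a[1]))
    return num / math.dist(a, b)

def rdp(points, epsilon):
    """Recursively drop points closer than epsilon to the current chord."""
    if len(points) < 3:
        return list(points)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    left = rdp(points[:index + 1], epsilon)
    right = rdp(points[index:], epsilon)
    return left[:-1] + right
```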