What is an efficient way to determine whether a curve is closed or not?
Maybe one way is a flood fill algorithm: if the flood fill leaves a pre-determined bounding box, you are outside the shape; otherwise, if the flood fill terminates, you are within the shape.
But is that an efficient way?
Thanks.
Look at the curve as a graph: vertices are pixels and edges connect neighbouring pixels. Testing is then done as follows:
It is a simple closed curve if every vertex has exactly two neighbours and the graph is connected.
It is several non-intersecting simple closed curves if every vertex has exactly two neighbours and the graph is not connected; the number of curves is found by graph partitioning.
It is a figure-eight curve if every vertex except one has two neighbours, that one vertex has four neighbours, and the graph is connected.
...
Testing graph/subgraph connectivity and partitioning is done by graph traversal.
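A minimal sketch of the simple-closed-curve test, assuming the curve is given as a set of integer pixel coordinates and that 8-connectivity defines the neighbour relation (both assumptions):

```python
from collections import deque

def is_simple_closed_curve(pixels):
    """pixels: set of (x, y) integer coordinates belonging to the curve.
    Simple closed curve: every pixel has exactly two neighbours and the
    pixel graph is connected (8-connectivity assumed)."""
    def neighbours(p):
        x, y = p
        return [(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0) and (x + dx, y + dy) in pixels]

    if not pixels or any(len(neighbours(p)) != 2 for p in pixels):
        return False

    # Connectivity check by breadth-first traversal from an arbitrary pixel.
    start = next(iter(pixels))
    seen = {start}
    queue = deque([start])
    while queue:
        for q in neighbours(queue.popleft()):
            if q not in seen:
                seen.add(q)
                queue.append(q)
    return len(seen) == len(pixels)
```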
Could you walk along the curve with two separate pointers?
If so, do that and set one pointer to traverse twice as fast.
If the loop is closed, the pointers will overlap at one point.
This should be O(n).
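A sketch of this tortoise-and-hare idea, assuming the curve is exposed through a hypothetical next_point function that returns the successor along the curve, or None at an open end:

```python
def is_closed(start, next_point):
    """Floyd's cycle detection: the fast pointer advances two steps per
    iteration; if it ever meets the slow pointer, the walk loops back on
    itself, i.e. the curve is closed. Hitting None means an open end."""
    slow = fast = start
    while True:
        slow = next_point(slow)
        fast = next_point(fast)
        if fast is None:
            return False
        fast = next_point(fast)
        if fast is None:
            return False
        if slow == fast:
            return True
```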
Let's say the degree of each pixel is the number of curve pixels in its neighborhood.
Walk through your pixel array; if any pixel has odd degree, then the curve is not closed.
Explanation:
For a pixel of even degree, every path that enters it has a matching path that leaves it.
This is not true for a pixel of odd degree.
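A short sketch of that scan, again assuming the curve is a set of pixel coordinates and that the neighborhood is the 8-neighborhood (assumptions):

```python
def has_odd_degree_pixel(pixels):
    """Degree of a pixel = number of curve pixels in its 8-neighborhood.
    Any pixel of odd degree means the curve cannot be closed; note the
    converse (all degrees even implies closed) is not checked here."""
    def degree(p):
        x, y = p
        return sum((x + dx, y + dy) in pixels
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0))
    return any(degree(p) % 2 == 1 for p in pixels)
```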
Related
This question is an extension of some computational details of this question.
Suppose one has a set of (potentially overlapping) circles, and one wishes to compute the area this set of circles covers. (For simplicity, one can assume some precomputation steps have been made, such as getting rid of circles included entirely in other circles, as well as that the circles induce one connected component.)
One way to do this is mentioned in Ants Aasma's and Timothy Shields' answers: the area covered by overlapping circles is just a collection of circle slices and polygons, both of which have areas that are easy to compute.
The trouble I'm encountering however is the computation of these polygons. The nodes of the polygons (consisting of circle centers and "outer" intersection points) are easy enough to compute:
And at first I thought a simple algorithm of picking a random node and visiting neighbors in clockwise order would be sufficient, but this can result in the following "outer" polygon being constructed, which is not one of the correct polygons.
So I thought of different approaches. A breadth-first search to compute minimal cycles seemed promising, but I think the previous counterexample can easily be modified so that this approach results in the "inner" polygon containing the hole (which is thus not a correct polygon either).
I was thinking of maybe running a Las Vegas style algorithm, taking random points and if said point is in an intersection of circles, try to compute the corresponding polygon. If such a polygon exists, remove circle centers and intersection points composing said polygon. Repeat until no circle centers or intersection points remain.
This would avoid computing the "outer" polygon or the "inner" polygon, but would introduce new problems (beyond the potentially high running time): e.g. when more than 2 circles intersect in a single intersection point, computing one polygon could remove that intersection point even though it is still needed for the next polygon.
Ultimately, my question is: How to compute such polygons?
PS: As a bonus question for after the polygons have been computed: how does one know which angle to consider when computing the area of a circle slice, theta or 2PI - theta?
Once we have the points of the polygons in the right order, computing the area is not too difficult.
The way to achieve that is by exploiting planar duality. See the Wikipedia article on the doubly connected edge list representation for diagrams, but the gist is, given an oriented edge whose right face is inside a polygon, the next oriented edge in that polygon is the reverse direction of the previous oriented edge with the same head in clockwise order.
Hence we've reduced the problem to finding the oriented edges of the polygonal union and determining the correct order with respect to each head. We actually solve the latter problem first. Each intersection of disks gives rise to a quadrilateral. Let's call the centers C and D and the intersections A and B. Assume without loss of generality that the disk centered at C is not smaller than the disk centered at D. The interior angle formed by A→C←B is less than 180 degrees, so the signed area of that triangle is negative if and only if A→C precedes B→C in clockwise order around C, in turn if and only if B→D precedes A→D in clockwise order around D.
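The ordering test above needs nothing more than a 2D cross product (twice the signed area of the triangle). A minimal sketch; which sign corresponds to "A precedes B" is a convention you should verify against your own quadrilateral setup:

```python
def cross(o, p, q):
    """2D cross product of (p - o) and (q - o); equals twice the signed area
    of triangle o-p-q: positive for a counterclockwise turn, negative for a
    clockwise turn, zero if the three points are collinear."""
    return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])

def a_before_b_clockwise(center, a, b):
    """True if the ray center->a reaches center->b by a clockwise turn of
    less than 180 degrees (the interior angle at the center is < 180)."""
    return cross(center, a, b) < 0
```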
Now we determine which edges are actually polygon boundaries. For a particular disk, we have a bunch of angle intervals around its center from before (each sweeping out the clockwise sector from the first endpoint to the second). What we need amounts to a more complicated version of the common interview question of computing the union of segments. The usual sweep line algorithm that increases the cover count whenever it scans an opening endpoint and decreases the cover count whenever it scans a closing endpoint can be made to work here, with the adjustment that we need to initialize the count not to 0 but to the proper cover count of the starting angle.
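For reference, a sketch of the plain (non-circular) version of that sweep, i.e. the interview-question form; the angular variant would additionally need wrap-around handling and the proper cover count at the starting angle, which is not shown here:

```python
def union_length(intervals):
    """Total length covered by a union of half-open intervals [lo, hi)."""
    events = []
    for lo, hi in intervals:
        events.append((lo, +1))   # opening endpoint: cover count goes up
        events.append((hi, -1))   # closing endpoint: cover count goes down
    events.sort()
    covered = 0.0
    count = 0
    prev = None
    for x, delta in events:
        if count > 0 and prev is not None:
            covered += x - prev   # the stretch since the last event was covered
        count += delta
        prev = x
    return covered
```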
There's a way to do all of this with no trigonometry, just subtraction and determinants and comparisons.
I am working with several convex polygons that overlap each other and I need to combine them back together to form one single polygon that may be convex or concave.
The problem is always as follows:
1) The polygons that I need to merge together are always convex.
2) The vertices of each polygon are defined in clockwise order.
3) The polygons are never in any specific order.
4) The final polygon can only be a simple convex or concave polygon, i.e. no self-intersections, no duplicate vertices, and no holes in the shape.
Here is an example of the kind of polygons that I am working with.
(Image removed: overlapping convex polygons.)
My current approach is to start from the first polygon and, vertex by vertex, loop through all vertices of all the polygons to find overlaps. If there is no overlap, I store the vertex for the final outline and continue.
Upon finding overlapping vertices, I determine which polygon to continue to by measuring the angles of the possible paths and by choosing the one that leads towards the outside of the shape.
This method works until I encounter polygons that do not have vertices overlapping each other, but instead one polygon's vertex is overlapping another polygon's side, as is the case with the rectangle in the image.
I am currently planning on solving these situations by running line intersect checks for all shapes that I have not yet processed, but I am convinced that this cannot be the easiest or the best method in terms of performance.
Does someone know how I should approach this problem in a more efficient manner and/or universal manner?
I solved this issue and I'm posting the answer here in case someone else runs into this issue as well.
My first step was to implement a pre-processing loop based on trincot's suggestions.
I calculated the minimum and maximum x and y bounds for each individual shape.
I used these values to determine all overlapping shapes and I stored a simple array for each shape that I could later use to only look at shapes that can overlap each other.
Then, for the actual loop that determines the outline of the final polygon:
I start from the first shape and simply compare its vertices to those of the nearby shapes. If there is at least one vertex that isn't shared with another shape, it must be on the outer edge and the loop starts from there. If there are only overlapping vertices, then I add the first shape to a table of all checked shapes and repeat this process with another shape until I find a vertex that is on the outer edge.
Once the starting vertex is found, the main loop checks the vertices of the starting shape one by one and measures how far the given vertex is from every nearby shape's edges. If the distance is zero, then the vertex either overlaps with another shape's vertex or lies on the side of another shape.
Upon finding the aforementioned type of vertex, I add the previous shape's number to the table of checked shapes so that it isn't checked again. Then, I check if there are other shapes that share this particular vertex. If there are, then I determine the outermost shape and continue from there, starting back from step 2.
Once all shapes have been checked, I check that all non-overlapping vertices from the starting shape were indeed added to the outline. If they weren't, I add them at the end.
There may be computationally faster methods, but I found this one simple to write, it meets all of my requirements, and it is fast enough for my needs.
Given a vertex, you could speed up the search of an "overlapping" vertex or edge as follows:
Finding vertices
Assuming that the coordinates are exact, in the sense that if two vertices overlap, they have exactly the same x and y coordinates, without any "error" of imprecision, then it would be good to first create a hash by x-coordinate, and then for each x-entry you would have a hash by y-coordinate. The value of that inner hash would be a list of polygons that have that vertex.
That structure can be built in O(n) time, and will allow you to find a matching vertex in constant time.
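A sketch of that lookup structure; here a single dict keyed by the exact (x, y) pair plays the role of the nested x-then-y hashes and gives the same expected constant-time lookup:

```python
from collections import defaultdict

def build_vertex_index(polygons):
    """polygons: list of vertex lists [(x, y), ...], coordinates assumed exact.
    Returns a dict mapping each coordinate pair to the indices of the
    polygons that have a vertex there."""
    index = defaultdict(list)
    for poly_id, poly in enumerate(polygons):
        for vertex in poly:
            index[vertex].append(poly_id)
    return index

# Usage: index.get((x, y), []) lists every polygon sharing that vertex.
```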
Only if that gives no match would you go to the next algorithm:
Finding edges
In a pre-processing step (only once), create a segment tree for these polygons where a "segment" corresponds to a min/max x-coordinate range for a particular polygon.
Given a vertex, use the segment tree to find the polygons that are in the right x-coordinate range, i.e. where the x-coordinate of the vertex is within the min/max range of x-coordinates of the polygon.
Iterate over those polygons, and eliminate those whose y-coordinate range does not contain the y-coordinate of the vertex.
If no polygons remain, the vertex does not participate in any edge of another polygon.
You cannot get more than one polygon here, since that would mean another polygon shares the vertex, which is a case already covered by the hash-based algorithm.
If you get just one polygon, then continue your search by going through the edges of that polygon to find a match -- which is what you already planned on doing (line intersect check), but now you would only need to do it for one polygon.
You could speed that line intersect check up a little bit by first filtering the edges to those that have the right x-range. For convex polygons you would end up with at most two edges. At most one of those two will have the right y-range. If you get such an edge, check whether the vertex is really on that edge.
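A sketch of that last step for the single remaining candidate polygon: filter its edges by x-range first, then run the exact on-edge test. The eps tolerance is an assumption (set it to 0 if coordinates are exact):

```python
def point_on_edge(p, a, b, eps=0.0):
    """True if point p lies on segment a-b: collinear (cross product within
    eps of zero) and inside the segment's coordinate range."""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    if abs(cross) > eps:
        return False
    return (min(a[0], b[0]) - eps <= p[0] <= max(a[0], b[0]) + eps and
            min(a[1], b[1]) - eps <= p[1] <= max(a[1], b[1]) + eps)

def find_containing_edge(p, polygon):
    """polygon: list of vertices in order. Returns the edge that p lies on,
    or None. The cheap x-range filter leaves at most two edges of a convex
    polygon to test exactly."""
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        if min(a[0], b[0]) <= p[0] <= max(a[0], b[0]):
            if point_on_edge(p, a, b):
                return a, b
    return None
```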
As input, two polygons are given (the coordinates of the vertices of each polygon are listed in traversal order; however, the traversal order may be chosen differently for the two polygons). Can one polygon be transformed into the other using only parallel translation and proportional scaling?
I have the following idea:
Find some common vertex of the two polygons and translate one polygon so that these vertices lie on the same point, then scale so that a neighbouring vertex matches the corresponding vertex of the other polygon. But I think it's wrong; at least I can't write it in code.
Is there some special formula or theorem for this problem?
I would solve it like this.
Find the necessary parallel transport.
Find the necessary scaling.
See if they are the same polygon now.
So to start, take the vertex that is farthest to the left, and if there is a tie, the one that is farthest down. Find that for both polygons. Use parallel transport to put that vertex at the origin for both.
Now take the vertex that is farthest to the right, and if there is a tie, the one that is farthest up. Find that for both polygons. If it is not at the same slope, then they are different. If it is, then scale one so that the points match.
Now see if all of the points match. If not, they are different. Otherwise the answer is yes.
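A sketch of those three steps, assuming exact coordinates so equality comparisons are safe; the rotation/reversal loop at the end handles the fact that the two vertex lists may start at different vertices or run in opposite directions (an O(n^2) check, fine for small polygons):

```python
import math

def normalize(poly):
    """Translate the leftmost (then lowest) vertex to the origin, then scale
    so the rightmost (then highest) vertex lies at unit distance from it.
    poly is a list of (x, y) tuples."""
    ref = min(poly, key=lambda p: (p[0], p[1]))
    shifted = [(x - ref[0], y - ref[1]) for x, y in poly]
    far = max(shifted, key=lambda p: (p[0], p[1]))
    scale = math.hypot(*far) or 1.0   # degenerate one-point polygons: no scaling
    return [(x / scale, y / scale) for x, y in shifted]

def same_up_to_translation_and_scale(poly_a, poly_b):
    """True if poly_b is a translated, uniformly scaled copy of poly_a."""
    a, b = normalize(poly_a), normalize(poly_b)
    if len(a) != len(b):
        return False
    for cand in (b, list(reversed(b))):          # either traversal direction
        for shift in range(len(cand)):           # any starting vertex
            if cand[shift:] + cand[:shift] == a:
                return True
    return False
```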
Compute the axis-aligned bounding boxes of the two polygons.
If the aspect ratios do not match, the answer is negative. Otherwise the ratio of corresponding sides is your scaling factor. The translation is obtained by linking the top left corners and the transformation equations are
X = s * (x - xtl) + Xtl
Y = s * (y - ytl) + Ytl
where s is the scaling factor and (xtl, ytl), (Xtl, Ytl) are the corners.
Now choose a vertex of the first polygon, predict the coordinates in the other and find the matching vertex. If you can't, the answer is negative. Otherwise, you can compare the remaining vertices*.
*I assume that the polygons do not have overlapping vertices. If they can have arbitrary self-overlaps, I guess that you have to try matching all vertices, with all cyclic permutations.
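A sketch of this bounding-box test, assuming non-degenerate simple polygons given as lists of (x, y) tuples; the eps tolerance and the "top means larger y" convention are assumptions of the sketch:

```python
def bbox(poly):
    xs = [x for x, _ in poly]
    ys = [y for _, y in poly]
    return min(xs), min(ys), max(xs), max(ys)

def match_by_bounding_box(poly_a, poly_b, eps=1e-9):
    """Compare aspect ratios, derive the scaling factor s and the top-left
    translation, then map poly_a's vertices and look for each image among
    poly_b's vertices."""
    if len(poly_a) != len(poly_b):
        return False
    ax0, ay0, ax1, ay1 = bbox(poly_a)
    bx0, by0, bx1, by1 = bbox(poly_b)
    aw, ah = ax1 - ax0, ay1 - ay0
    bw, bh = bx1 - bx0, by1 - by0
    if aw == 0 or ah == 0:
        return False                      # degenerate boxes not handled here
    if abs(aw * bh - ah * bw) > eps:      # aspect ratios must agree
        return False
    s = bw / aw                           # scaling factor (ratio of widths)
    xtl, ytl = ax0, ay1                   # top-left corner of A
    Xtl, Ytl = bx0, by1                   # top-left corner of B
    # X = s*(x - xtl) + Xtl, Y = s*(y - ytl) + Ytl
    mapped = [(s * (x - xtl) + Xtl, s * (y - ytl) + Ytl) for x, y in poly_a]
    return all(any(abs(mx - tx) <= eps and abs(my - ty) <= eps
                   for tx, ty in poly_b)
               for mx, my in mapped)
```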
I want to find the nearest edge in a graph. Consider the following example:
Figure 1: yellow: vertices, black: edges, blue: query-point
General Information:
The graph contains about 10million vertices and about 15million edges. Every vertex has coordinates. Edges are defined by the two adjacent vertices.
Simplest solution:
I could simply calculate the distance from the query-point to every other edge in the graph, but that would be horribly slow.
Idea and difficulties:
My idea was to use some spatial index to accelerate the query. I already implemented a kd-tree to find the nearest vertex. But as Figure 1 shows the edges incident to the nearest vertex are not necessarily the nearest to the query-point. I would get the edge 3-4 instead of the nearer edge 7-8.
Question:
Is there an algorithm to find the nearest edge in a graph?
A very simple solution (but maybe not the one with lowest complexity) would be to use a quad tree for all your edges based on their bounding box. Then you simply extract the set of edges closest to your query point and iterate over them to find the closest edge.
The extracted set of edges returned by the quad tree should be many factors smaller than your original 15 million edges and therefore a lot less expensive to iterate through.
A quad tree is a simpler data structure than the R-tree. It is fairly common and should be readily available in many environments. For example, in Java the JTS Topology Suite has a structure QuadTree that can easily be wrapped to perform this task.
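A sketch of the "iterate over the candidates" step; the spatial index itself is left abstract here, and a cheap bounding-box lower bound skips edges that cannot beat the best distance found so far:

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment a-b (all (x, y) tuples)."""
    px, py = p
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the supporting line and clamp to the segment.
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def nearest_edge(p, candidate_edges):
    """candidate_edges: list of (a, b) segments, ideally the small set
    returned by the quad tree / R-tree query."""
    best, best_dist = None, float("inf")
    for a, b in candidate_edges:
        # Distance from p to the segment's bounding box is a lower bound.
        ddx = max(min(a[0], b[0]) - p[0], 0.0, p[0] - max(a[0], b[0]))
        ddy = max(min(a[1], b[1]) - p[1], 0.0, p[1] - max(a[1], b[1]))
        if math.hypot(ddx, ddy) >= best_dist:
            continue
        d = point_segment_distance(p, a, b)
        if d < best_dist:
            best, best_dist = (a, b), d
    return best, best_dist
```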
There are spatial query structures which are appropriate for other types of data than points. The most general is the "R-tree" structure (and its many, many variants), which will allow you to store the bounding rectangles of your line segments. You can then search outward from your query points, examining the segments in the bounding rectangles and stopping when the nearest remaining rectangle is further than the closest line encountered so far. This could have poor performance when there are many long line segments overlapping, but for a PSLG such as you seem to have here, that shouldn't happen.
Another option is to use the segments to define a BSP tree, and scan outwards from your point to find all the "visible" lines. This in turn will be problematic if your point can see many edges.
Without proof:
You start with a constrained Delaunay triangulation, that is, a triangulation that takes the existing edges into account. E.g. CGAL or Triangle can do this. For each query point you determine which triangle it belongs to. Then you only have to check the edges touching a corner of that triangle.
I think this should work in most cases, but there are certainly corner cases where it fails, e.g. when there are many vertices without any edge at all, so at least you have to remove those empty vertices.
You can compute the Voronoi diagram and run a query on each Voronoi cell. You can subdivide the Voronoi diagram to get a better result. You can combine the metric and the Voronoi diagram: http://www.cc.gatech.edu/~phlosoft/voronoi/
You could insert extra vertices into long edges to get an approximation based on the closest vertices.
I want to generate random points in a 2D space; these points will be the nodes of a planar graph (built using the Gabriel graph algorithm or RNG).
I wrote Java code to do this, but I have two hard problems to solve.
1) I need all edges of the graph to be no longer than a given threshold.
2) Afterwards I want to know the faces of the graph; a face is a collection of nodes connected by edges that does not contain other nodes within it. In the image below the faces are marked by labels (F1, F2, ...).
How can I do these two things? Are there algorithms or some already-known way?
Below is an example of the graph that I must create:
http://imageshack.us/photo/my-images/688/immagineps.png/
If you can tolerate some variance in the number of points, then you could modify your Gabriel graph algorithm to be incremental (most of the effort would be making your Delaunay algorithm incremental) and then whenever an edge is too long, insert a random point in the circle having that edge as a diameter.
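If it helps, a small sketch of the "insert a random point in the circle having that edge as a diameter" step (uniform sampling inside a disk):

```python
import math
import random

def random_point_in_diameter_circle(a, b):
    """Uniform random point inside the circle whose diameter is segment a-b.
    a and b are (x, y) tuples."""
    cx, cy = (a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0   # circle center
    radius = math.dist(a, b) / 2.0
    r = radius * math.sqrt(random.random())   # sqrt keeps the density uniform
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (cx + r * math.cos(theta), cy + r * math.sin(theta))
```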
The most convenient data structures for plane graphs are edge-centric: for example, the doubly-connected edge list and the quad-edge representations. If you're not already using a data structure of this type for the Delaunay step (and I can't imagine why you wouldn't be), you can sort each vertex's outgoing connections by angle. From there, it's easy to implement a function that takes a half-edge and returns the next half-edge on the same face in counterclockwise order. Now iterate through all of the half-edges, and for each half-edge not already visited, iterate around the face until you return to where you started. Label all of the half-edges in the inner iteration as one face.
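A sketch of that face-labelling pass, assuming the plane graph is given as a vertex-coordinate dict and an undirected edge list; whether the traced faces come out clockwise or counterclockwise depends on the step chosen in next_half_edge, but either consistent choice partitions the half-edges into faces (one of them being the outer face):

```python
import math
from collections import defaultdict

def label_faces(points, edges):
    """points: dict vertex_id -> (x, y); edges: list of undirected (u, v).
    Returns a dict mapping each half-edge (u, v) to a face id."""
    # Outgoing neighbours of every vertex, sorted counterclockwise by angle.
    nbrs = defaultdict(list)
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    for u in nbrs:
        ux, uy = points[u]
        nbrs[u].sort(key=lambda w: math.atan2(points[w][1] - uy,
                                              points[w][0] - ux))

    def next_half_edge(u, v):
        # Around v, step from the reversed edge (v, u) to the neighbour that
        # precedes u in counterclockwise order (= follows it clockwise).
        ring = nbrs[v]
        return (v, ring[ring.index(u) - 1])

    face_of = {}
    face_id = 0
    for u, v in edges:
        for half_edge in ((u, v), (v, u)):
            if half_edge in face_of:
                continue
            cur = half_edge
            while cur not in face_of:        # walk around the face
                face_of[cur] = face_id
                cur = next_half_edge(*cur)
            face_id += 1
    return face_of
```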