I am making an engine for the game of Hive (https://www.gen42.com/games/hive) in C++ and I need it to be highly efficient as I will have an AI searching through many thousands of positions. Note that it is not essential to be familiar with Hive to answer this, as this question is more related to graph theory. There is an example at the end.
Representation
In the game of Hive, pieces can be placed and moved around on an infinite hexagonal grid. There is a crucial rule, the One Hive Rule, that states: The pieces in play must be linked at all times (i.e., the Hive may never be broken)
In other words, the hive can be represented as a connected undirected planar graph, where:
The vertices are the pieces
The edges are the connections between adjacent pieces
The articulation points of this graph represent pieces restricted by the One Hive Rule. Also, no vertex can have more than six edges. (pieces on top of the hive are not included in the graph)
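For concreteness, here is a minimal sketch of how I currently picture this graph, assuming axial hex coordinates (the names and layout are illustrative only, not a finished design):

```cpp
#include <array>
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <vector>

// Axial hex coordinates; two pieces are adjacent iff their coordinates
// differ by one of the six offsets below, so every vertex has degree <= 6.
struct Hex {
    int q, r;
    bool operator==(const Hex& o) const { return q == o.q && r == o.r; }
};

struct HexHash {
    std::size_t operator()(const Hex& h) const {
        unsigned long long key =
            (static_cast<unsigned long long>(static_cast<unsigned>(h.q)) << 32) ^
            static_cast<unsigned>(h.r);
        return std::hash<unsigned long long>()(key);
    }
};

constexpr std::array<Hex, 6> kDirections{{{1, 0}, {1, -1}, {0, -1},
                                          {-1, 0}, {-1, 1}, {0, 1}}};

// Occupied cells; the edges of the graph are implicit in the coordinates.
using Board = std::unordered_map<Hex, int /*piece id*/, HexHash>;

std::vector<Hex> neighbors(const Board& board, Hex h) {
    std::vector<Hex> out;
    for (const Hex& d : kDirections)
        if (board.count({h.q + d.q, h.r + d.r}))
            out.push_back({h.q + d.q, h.r + d.r});
    return out;
}
```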
Problem
The problem I have is recalculating these articulation points efficiently after the graph is changed and I am wondering if there is some efficient data structure that could handle this.
Specifically, the data structure would need to accommodate the following updates:
Add a vertex to the graph along with its connecting edges (i.e., placing a piece/moving a piece to its new location)
Remove a vertex from the graph along with its connected edges (i.e., removing a piece when it is being moved to a new location)
When queried, the data structure would return which vertices are articulation points.
Also note that the graph begins empty, and the number of vertices can never decrease. (pieces cannot be removed from the hive)
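To make the required operations concrete, a hypothetical interface (the names are mine, not an existing library) would look like:

```cpp
#include <vector>

struct Hex { int q, r; };   // as in the representation sketch above

// Hypothetical shape of the structure I am asking about.
class OneHiveTracker {
public:
    void addPiece(Hex where);                     // placing a piece, or a move arriving
    void removePiece(Hex where);                  // a move departing
    std::vector<Hex> articulationPoints() const;  // pieces currently pinned by the rule
};
```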
Ideas
I am aware of algorithms like Tarjan’s which calculate the articulation points of a graph in a single DFS traversal. However, most of the time in Hive when a piece is moved, only a few pieces become restricted/unrestricted by this rule (usually no more than 2), and therefore only a few vertices in the graph should have to be updated. (rather than re-traversing the entire graph every time)
Can anyone provide me with an efficient data structure and/or algorithm for this?
Example
(You don’t need to know how the pieces move)
In the current position, the white ant (blue) is about to move to the location south-east of the black bee (yellow). In the graph, I have shown in dark blue the updates that would be required.
The vertices in the graph circled in red are articulation points (immobile pieces). Also note that after the ant has moved to its new location, the vertex corresponding to the black bee will also become an articulation point.
Example Image
A block-cut tree may be useful in solving the problem, but more to the point of this answer, it may help you understand that there is no easy solution to the problem.
Consider the graph shown below (source: Wikipedia, with modifications in color):
The graph with 18 vertices (black) is shown on the left. The corresponding block-cut tree is shown to the right. Notice that the cut points (aka articulation points) are considered to be part of the blocks that they connect. So for example, cut point 1 (C1) in the tree, which is vertex 2 (V2) in the graph, is a member of blocks B1, B2, and B3.
I propose to add vertex 19 (magenta), and then consider the consequences. I've circled the cut points in the graph. Those circled in red (V2, V8, and V10) remain as cut points when V19 is added. But V7 (aka C2) ceases to be a cut point when V19 is added. That's because C2 in the block-cut tree is part of a cycle that is created by adding V19. And unlike C1, C3, and C4, it doesn't connect to any blocks that aren't part of the cycle. It only connects B3 and B4, which are both part of the cycle.
So after adding V19, there are only 4 blocks and 3 cut points. B2, B5, and B7 continue to exist as separate blocks, connected by C1, C3, and C4 respectively. B1, C1, B3, C2, B4, C3, C4, and B6 are now all part of a single large block. The resulting block-cut tree is shown below (source: ibid., with modifications in color):
Finally getting to the point, notice that V7 is about as far away from V19 as it can possibly be. So the effects of adding a vertex aren't localized to the neighbors of the added vertex. The effects can propagate throughout the graph.
And then it gets worse.
We've seen the effects of adding a vertex. Now consider the reverse. After adding V19, the player decides to move the piece that V19 represents, thereby removing V19 from the graph. Suddenly, block B19 (the large merged block) explodes into four blocks (B1, B3, B4, and B6), and C2 reappears as a cut point, essentially restructuring the entire block-cut tree. By the time the code has found the newly formed cut point and rearranged the block-cut tree, it may have been just as fast (or faster) to simply run Tarjan's again.
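For reference, the full recompute the question mentions is short and runs in O(V + E); a minimal sketch with an adjacency list over vertex ids 0..n-1:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Tarjan/Hopcroft articulation points in a single DFS traversal.
std::vector<bool> articulationPoints(const std::vector<std::vector<int>>& adj) {
    int n = static_cast<int>(adj.size());
    std::vector<int> disc(n, -1), low(n, 0);
    std::vector<bool> isCut(n, false);
    int timer = 0;

    std::function<void(int, int)> dfs = [&](int u, int parent) {
        disc[u] = low[u] = timer++;
        int children = 0;
        for (int v : adj[u]) {
            if (v == parent) continue;
            if (disc[v] != -1) {
                low[u] = std::min(low[u], disc[v]);          // back edge
            } else {
                dfs(v, u);
                low[u] = std::min(low[u], low[v]);
                if (parent != -1 && low[v] >= disc[u]) isCut[u] = true;
                ++children;
            }
        }
        if (parent == -1 && children > 1) isCut[u] = true;   // special rule for the root
    };

    for (int s = 0; s < n; ++s)
        if (disc[s] == -1) dfs(s, -1);
    return isCut;
}
```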
Related
I'm solving an extended version of the knight's tour problem, in which the program has to return the maximum number of cells through which the knight can come back to its initial position without its path crossing itself.
I'm using a backtracking approach but got stuck on detecting the overlaps.
A graph is defined as a set of vertices plus a set of edges, where an edge is a pair of distinct vertices.
In particular, there is no notion of two edges "intersecting" in the way that you mean, because that's a consequence of how you've chosen to draw the graph — where you've drawn the vertices on the plane — rather than a property of the graph itself. (There is a concept of a "planar graph", meaning a graph that can be embedded in the plane with no edges intersecting; but your graph is a planar graph in that sense, so it's not really what you want.)
So to determine if two line segments intersect, we're outside the area of graph theory. Fortunately, there are some pretty straightforward ways to do this; I see that How can I check if two segments intersect? lists several. The approach that came first to my mind (and is used by a few of the highest-voted answers there) is to observe that line segments AB and CD intersect if and only if ∠CAB and ∠BAD have the same sense (clockwise vs. counterclockwise; this means that C and D are on opposite sides of AB) and ∠ACD and ∠DCB have the same sense (this means that A and B are on opposite sides of CD). You can determine this by taking the cross-products of the various segments CA, AB, etc., and comparing signs (positive vs. negative). If your coordinates are all integers, then this just requires a bit of integer arithmetic.
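For example, a minimal sketch of that sign test with integer coordinates (touching endpoints and collinear overlaps are deliberately not counted as crossings here):

```cpp
#include <cstdint>

// Sign of the cross product (B - A) x (C - A): positive if A,B,C turn
// counter-clockwise, negative if clockwise, zero if collinear.
int orientation(std::int64_t ax, std::int64_t ay, std::int64_t bx, std::int64_t by,
                std::int64_t cx, std::int64_t cy) {
    std::int64_t cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    return (cross > 0) - (cross < 0);
}

// True if segments AB and CD properly cross.
bool segmentsCross(std::int64_t ax, std::int64_t ay, std::int64_t bx, std::int64_t by,
                   std::int64_t cx, std::int64_t cy, std::int64_t dx, std::int64_t dy) {
    return orientation(ax, ay, bx, by, cx, cy) * orientation(ax, ay, bx, by, dx, dy) < 0 &&
           orientation(cx, cy, dx, dy, ax, ay) * orientation(cx, cy, dx, dy, bx, by) < 0;
}
```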
If this problem is restricted to knight moves, then we can consider that the number of ways that knight moves can intersect is limited (at most 9, not considering direction). For instance, if we have (on a standard chessboard) the move d3-e5, then the only knight moves that intersect are: e2-e4, c3-e4, e3-c4, e3-d5, f3-d4, d4-f5, e4-c5, e4-d6, and f4-d5 -- again, without considering direction. Near the edge of the board there would of course be fewer of those.
This means that you can spend constant time per move to mark those potentially crossing edges as no longer available, and continue the search only along available edges. To allow backtracking, you save on the (recursion) stack which edges you made unavailable at which move.
I am currently working on a node-based house-builder for Unity. The system is pretty simple in its workflow: users can create nodes, which are simply cubes, and connect them with each other to create walls. The mesh processing is already done and it works nice and smooth.
What I am trying to do now is to detect how many closed rooms have been created and what vertices are involved in each one of them. The possible inputs can be seen in the following images:
In the first picture, the loops would be
(1,5,3,4), (1,2,6,8,7,5), (6,9,12,11,10,8), (8,10,14,13) and (10,11,17,16,15,14).
In the second one they'd be
(1,2,5,6,8,7), (2,3,4,14,13,6,5), (6,13,12,11,10) and (8,6,10,9).
Each node can be connected to up to four other nodes, one per cardinal side, and every link is stored on both sides. I do not need the nodes to come in any particular order.
I thought I could use a generic loop-detection algorithm and recursively search for sub-loops until the loop I find has no internal connections, but this would be extremely resource-consuming. There must be some property I can use to detect loops with no internal connections without iterating over the graph so many times, but I haven't been able to find it.
Do you have any suggestion?
For the following algorithm to work, you need the following:
A unique direction for each edge (which you probably already have)
Two flags for every edge that specify if the edge has been used in the forward and the backward direction
A list of vertices with unused edges
Then the idea is the following. Take any node with unused edges and go along any of the unused edges to the neighbor (keep the direction in mind). Doing so, immediately mark the edge in the corresponding direction as used. At this neighbor, you know the direction from which you came. Look in counter-clockwise order until you find the first unused edge (again watch out for the edge direction). You can also search in clockwise order; this just determines the winding order of your output faces. E.g. if you came from the left edge, then check the bottom, right, and top edges, in that order. Go across this edge (mark it as used) and repeat until you arrive at the start vertex. All visited vertices form your room.
As you go, update the list of vertices with unused edges accordingly.
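A minimal sketch of this traversal, assuming the four sides are indexed 0..3 = east, north, west, south and using the clockwise variant (all names are illustrative):

```cpp
#include <array>
#include <vector>

struct Node {
    std::array<int, 4> nbr{{-1, -1, -1, -1}};                 // neighbor id per side, -1 = none
    std::array<bool, 4> used{{false, false, false, false}};   // directed edge already walked?
};

// Trace one room, starting by walking from `start` out through side `firstSide`.
std::vector<int> traceFace(std::vector<Node>& nodes, int start, int firstSide) {
    std::vector<int> face;
    int cur = start, side = firstSide;
    do {
        face.push_back(cur);
        nodes[cur].used[side] = true;          // mark the directed edge as used
        int next = nodes[cur].nbr[side];
        int back = (side + 2) % 4;             // direction we arrived from, seen from `next`
        side = (back + 3) % 4;                 // first candidate clockwise of `back`
        while (nodes[next].nbr[side] == -1)    // skip sides with no connection
            side = (side + 3) % 4;
        cur = next;
    } while (cur != start || side != firstSide);
    return face;
}
```

Repeat traceFace from any vertex/side whose directed edge is still unused until every directed edge has been consumed; each call yields one room, plus, exactly once, the outer boundary, which the orientation test below filters out.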
Eventually, you will also create a face for the border. You can detect this e.g. by calculating its orientation:
v1 x v2 + v2 x v3 + v3 x v4 + ... + vn x v1
where the v's are the positions of the vertices and x denotes the z-component of the cross product (which gives the face orientation):
(x1, y1) x (x2, y2) = (x1 * y2) - (x2 * y1)
The boundary face will have a different sign for this orientation than all other faces. The actual sign depends on whether you used counter-clockwise or clockwise order during the edge traversal.
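A small sketch of that orientation sum (twice the signed area of the face polygon):

```cpp
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Twice the signed area of a face: v1 x v2 + v2 x v3 + ... + vn x v1.
// The boundary face gets the opposite sign from the interior rooms.
double faceOrientation(const std::vector<Point>& face) {
    double sum = 0.0;
    for (std::size_t i = 0; i < face.size(); ++i) {
        const Point& a = face[i];
        const Point& b = face[(i + 1) % face.size()];
        sum += a.x * b.y - b.x * a.y;   // z-component of the 2D cross product
    }
    return sum;
}
```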
This is an answer only to the first question, but it might help you with the second one. The number of closed rooms actually has a closed formula:
1 - V + E, where V is the number of vertices and E is the number of edges (this assumes the graph is connected). In your second example, there are 14 vertices and 17 edges, giving 1 - 14 + 17 = 4 rooms.
The mathematics are a bit complicated, but the key word is Euler characteristic.
As in the title of the question: how can I triangulate a simple polygon that grows dynamically, i.e., whenever a new vertex is added by the user or by the computer, the polygon should be triangulated again? Rather than running a full triangulation algorithm after each new vertex is added, is there a clever, efficient (and ideally easy to implement) way to update the triangulation, taking, say, <= O(n) time per new vertex?
The newly added vertex will be adjacent with the first and last vertices of the current polygon.
When you insert a new vertex and replace an edge with two, the triangle they form may overlap a number of triangles of the triangulation. The overlapped triangles form a subpolygon. Build the outline of this polygon and insert the new vertex. Then retriangulate the updated subpolygon.
I guess that the overlapping triangles can be efficiently found by exploring the neighbors of the starting triangle, recursively, and checking them for overlap. The outline of the subpolygon is formed of the edges not shared by two triangles.
I'm assuming that the polygon is augmented, at each step, by adding a vertex C, removing the segment AB, and adding segments AC and CB. I'm also assuming no degeneracies.
If ABC winds positively (that is, the polygon is expanded "outwards"), simply add ABC to the triangulation.
Otherwise, consider the triangle ABD in the previous triangulation. If C is in that triangle, remove the triangle ABD and add triangles BDC and DAC. If it is not, then it is in the subpolygon on the AD side, or the one on the BD side. Remove ABD and recurse into the appropriate subpolygon, adding C to (say) the segment BD. Once the recursion completes, add triangles BDC and DAC as before.
This solution relies on both the old and the new polygons being simple (non-self-intersecting). Otherwise, adding the triangles following the recursive step might not be valid.
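Both geometric tests used above (does ABC wind positively, does C lie inside triangle ABD) reduce to cross-product signs; a minimal sketch, assuming no degeneracies as stated:

```cpp
struct Pt { double x, y; };

// Positive if a -> b -> c turns counter-clockwise (winds positively),
// negative if clockwise, zero if collinear.
double cross(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// C lies inside triangle ABD iff it is strictly on the same side of all three edges.
bool insideTriangle(const Pt& a, const Pt& b, const Pt& d, const Pt& c) {
    double s1 = cross(a, b, c), s2 = cross(b, d, c), s3 = cross(d, a, c);
    return (s1 > 0 && s2 > 0 && s3 > 0) || (s1 < 0 && s2 < 0 && s3 < 0);
}
```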
I am building a graph editor in C# where the user can place nodes and then connect them with either a directed or undirected edge. When finished, an A* pathfinding algorithm determines the best path between two nodes.
What I have: A Node class with an x, y, list of connected nodes and F, G and H scores.
An Edge class with a Start, Finish and whether or not it is directed.
A Graph class which contains a list of Nodes and Edges as well as the A* algorithm
Right now when a user wants to select a node or an edge, the mouse position gets recorded and I iterate through every node and edge to determine whether it should be selected. This is obviously slow. I was thinking I can implement a QuadTree for my nodes to speed it up however what can I do to speed up edge selection?
Since users are "drawing" these graphs I would assume they include a number of nodes and edges that humans would likely be able to generate (say 1-5k max?). Just store both in the same QuadTree (assuming you already have one written).
You can easily extend a classic QuadTree into a PMR QuadTree which adds splitting criteria based on the number of line segments crossing through them. I've written a hybrid PR/PMR QuadTree which supported bucketing both points and lines, and in reality it worked with a high enough performance for 10-50k moving objects (rebalancing buckets!).
So your problem is that the person has already drawn a set of nodes and edges, and you'd like to make the test to figure out which edge was clicked on much faster.
Well an edge is a line segment. For the purpose of filtering down to a small number of possible candidate edges, there is no harm in extending edges into lines. Even if you have a large number of edges, only a small number will pass close to a given point so iterating through those won't be bad.
Now divide edges into two groups: vertical, and not vertical. You can store the vertical edges in a sorted data structure and easily test which vertical lines are close to any given point.
The not vertical ones are more tricky. For them you can draw vertical boundaries to the left and right of the region where your nodes can be placed, and then store each line as the pair of heights at which the line intersects those lines. And you can store those pairs in a QuadTree. You can add to this QuadTree logic to be able to take a point, and search through the QuadTree for all lines passing within a certain distance of that point. (The idea is that at any point in the QuadTree you can construct a pair of bounding lines for all of the lines below that point. If your point is not between those lines, or close to them, you can skip that section of the tree.)
I think you have all the ingredients already.
Here's a suggestion:
Index all your edges in a spatial data structure (could be QuadTree, R-Tree etc.). Every edge should be indexed using its bounding box.
Record the mouse position.
Search for the most specific rectangle containing your mouse position.
This rectangle should contain one or more edges/nodes; iterate through them according to the selection mode you need.
(The tricky part): If the user has not indicated any edge from the most specific rectangle, you should go up one level and iterate over the edges included in this level. Maybe you can do without this.
This should be faster.
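After the spatial query has narrowed things down to a few candidate edges, the final hit test is just point-to-segment distance compared against a small pick radius. A sketch in C++ (your editor is C#, but it translates directly):

```cpp
#include <algorithm>
#include <cmath>

struct P2 { double x, y; };

// Distance from point p to segment ab; compare against a pick radius to
// decide whether the recorded mouse position selects this edge.
double pointSegmentDistance(P2 p, P2 a, P2 b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len2 = dx * dx + dy * dy;
    double t = (len2 == 0.0) ? 0.0
             : std::clamp(((p.x - a.x) * dx + (p.y - a.y) * dy) / len2, 0.0, 1.0);
    double cx = a.x + t * dx, cy = a.y + t * dy;   // closest point on the segment
    return std::hypot(p.x - cx, p.y - cy);
}
```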
I want to generate random points in a 2D space; these points will be the nodes of a planar graph (built using the Gabriel graph algorithm or RNG).
I wrote Java code to do this, but I have two hard problems to solve.
1) I need all edges of the graph to be no longer than a given threshold
2) Then I want to know the faces of the graph; a face is a collection of nodes connected by edges that contains no other nodes within it. In the image below the faces are labelled (F1, F2, ...)
How can I do these two things? Are there algorithms for this, or some already known approach?
Below there is an example of the graph that I must create
http://imageshack.us/photo/my-images/688/immagineps.png/
If you can tolerate some variance in the number of points, then you could modify your Gabriel graph algorithm to be incremental (most of the effort would be making your Delaunay algorithm incremental) and then whenever an edge is too long, insert a random point in the circle having that edge as a diameter.
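Sampling that insertion point is simple; a minimal sketch, drawing uniformly from the disc whose diameter is the over-long edge:

```cpp
#include <cmath>
#include <random>

struct Pt { double x, y; };

// Uniform random point inside the circle having segment ab as a diameter.
Pt randomPointOnDiameterCircle(Pt a, Pt b, std::mt19937& rng) {
    const double pi = std::acos(-1.0);
    Pt center{(a.x + b.x) / 2.0, (a.y + b.y) / 2.0};
    double radius = std::hypot(b.x - a.x, b.y - a.y) / 2.0;
    std::uniform_real_distribution<double> angle(0.0, 2.0 * pi);
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    double r = radius * std::sqrt(unit(rng));   // sqrt gives uniform density over the area
    double t = angle(rng);
    return {center.x + r * std::cos(t), center.y + r * std::sin(t)};
}
```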
The most convenient data structures for plane graphs are edge-centric: for example, the doubly-connected edge list and the quad-edge representations. If you're not already using a data structure of this type for the Delaunay step (and I can't imagine why you wouldn't be), you can sort each vertex's outgoing connections by angle. From there, it's easy to implement a function that takes a half-edge and returns the next half-edge on the same face in counterclockwise order. Now iterate through all of the half-edges, and for each half-edge not already visited, iterate around the face until you return to where you started. Label all of the half-edges in the inner iteration as one face.
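A minimal sketch of that "next half-edge on the same face" step, assuming each vertex's outgoing connections have already been sorted counter-clockwise by angle:

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

using HalfEdge = std::pair<int, int>;   // directed edge (from, to)

// With interior faces traced counter-clockwise: the half-edge following (u, v)
// on the same face is (v, w), where w is the neighbor of v that precedes u in
// v's counter-clockwise-sorted adjacency list (i.e., one step clockwise from u).
HalfEdge nextOnFace(const std::vector<std::vector<int>>& ccwAdj, HalfEdge e) {
    int u = e.first, v = e.second;
    const std::vector<int>& nbrs = ccwAdj[v];
    std::size_t idx = std::find(nbrs.begin(), nbrs.end(), u) - nbrs.begin();
    int w = nbrs[(idx + nbrs.size() - 1) % nbrs.size()];
    return {v, w};
}
```

Each directed edge belongs to exactly one face, so the outer loop described above visits every half-edge exactly once and labels each cycle it closes as one face.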