Algorithm for traversing a maze

I need some help in writing a set of if-then rules for traversing a maze. This is the problem:
"Assume that the maze is constructed on a grid of square cells by placing walls across some of the edges of cells in such a way that there is a path from any cell within the maze to an outer edge of the maze that has no wall.
One way is the left-hand rule, but this strategy can take you around in cycles.
Write if-then rules in English for traversing the maze and detecting a cycle. Assume you know the size of the grid and the maximum distance you may have to travel to escape the maze."
This is what I have so far:
Start
If only one path (left, right, or straight) is found, follow it.
Else if multiple paths are found:
    If a left path is found, take a left turn.
    Else if a straight path is found, go straight.
    Else if a right path is found, take a right turn.
Else if a dead end is found, make a U-turn.
Go back to the first rule.
End
But this does not solve the cycle problem. Can anyone help, please?

There are two generic algorithms for exploring graphs: Breadth-First Search (BFS) and Depth-First Search (DFS). The trick to both is that they start with every node unexplored and, as they visit nodes, move them to the explored set. Because you only ever expand nodes that are still unexplored, you can never double back on yourself, which is exactly what prevents cycles.
Here are examples of DFS (with checks to prevent cycles) and BFS:
function DFS(G,v):
    label v as explored
    for all edges e in G.adjacentEdges(v) do
        if edge e is unexplored then
            w ← G.adjacentVertex(v,e)
            if vertex w is unexplored then
                label e as a discovery edge
                recursively call DFS(G,w)
            else
                label e as a back edge
Now BFS:
procedure BFS(G,v):
    create a queue Q
    enqueue v onto Q
    mark v
    while Q is not empty:
        t ← Q.dequeue()
        if t is what we are looking for:
            return t
        for all edges e in G.adjacentEdges(t) do
            u ← G.adjacentVertex(t,e)
            if u is not marked:
                mark u
                enqueue u onto Q
    return none
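Applied to the maze in the original question, the same trick solves the cycle problem: treat each grid cell as a node and each missing wall as an edge, and mark cells when you enqueue them so you can never walk in a circle. Below is a minimal C++ sketch of that idea; the grid encoding, the open[][] wall representation, and the function name escapeMaze are my own assumptions, not part of the question.

#include <array>
#include <queue>
#include <vector>
using namespace std;

// Hypothetical wall representation: open[r][c][d] is true if you can leave
// cell (r, c) in direction d (0 = up, 1 = right, 2 = down, 3 = left).
// Returns the number of steps needed to reach the open border, or -1 if trapped.
int escapeMaze(const vector<vector<array<bool, 4>>>& open, int startR, int startC) {
    const int rows = open.size(), cols = open[0].size();
    const int dr[4] = {-1, 0, 1, 0}, dc[4] = {0, 1, 0, -1};

    vector<vector<bool>> visited(rows, vector<bool>(cols, false));
    queue<array<int, 3>> q;                         // {row, col, distance}
    visited[startR][startC] = true;                 // mark on enqueue -> no cycles
    q.push({startR, startC, 0});

    while (!q.empty()) {
        auto [r, c, d] = q.front();
        q.pop();
        for (int dir = 0; dir < 4; ++dir) {
            if (!open[r][c][dir]) continue;         // a wall blocks this direction
            int nr = r + dr[dir], nc = c + dc[dir];
            if (nr < 0 || nr >= rows || nc < 0 || nc >= cols)
                return d + 1;                       // stepped off the grid: escaped
            if (!visited[nr][nc]) {
                visited[nr][nc] = true;
                q.push({nr, nc, d + 1});
            }
        }
    }
    return -1;                                      // every reachable cell explored, no exit
}

Because cells are marked as soon as they are discovered, the known upper bound on the escape distance is no longer needed to detect cycles; the queue simply runs dry if no exit is reachable.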

Related

Find maximal subgraph containing only nodes of degree 2 and 3

I'm trying to implement a (Unweighted) Feedback Vertex Set approximation algorithm from the following paper: FVS-Approximation-Paper. One of the steps of the algorithm (described on page 4) is to compute a maximal 2-3 subgraph of the input graph.
To be precise, a 2-3 graph is one that has only vertices of degree either 2 or 3.
By maximal we mean that no set of edges or vertices of the original graph can be added to the maximal subgraph without violating the 2-3 condition.
The authors of the paper claim that the computation can be carried out by a "simple Depth First Search (DFS)" on the graph. However, this algorithm seems to elude me. How can the maximal subgraph be computed?
I think I managed to figure out something like what the authors intended. I wouldn't call it simple, though.
Let G be the graph and H be an initially empty 2-3 subgraph of G. The algorithm bears a family resemblance to a depth-first traversal, yet I wouldn't call it that. Starting from an arbitrary node, we walk around in the graph, pushing the steps onto a stack. Whenever we detect that the stack contains a path/cycle/sigma shape that would constitute a 2-3 super-graph of H, we move it from the stack to H and continue. When it's no longer possible to find such a shape, H is maximal, and we're done.
In more detail, the stack usually consists of a path having no nodes of degree 3 in H. The cursor is positioned at one end of the path. With each step, we examine the next edge incident to the end. If the only incident edge is the one by which we arrived, we delete it from both G and the stack and move the end back one. Otherwise we can possibly extend the path by some edge e. If e's other endpoint has degree 3 in H, we delete e from G and consider the next edge incident to the end. If e's other endpoint has degree 2 in H but is not currently on the stack, then we've anchored this end. If the other end is anchored too, then add the stack path to H and keep going. Otherwise, move the cursor to the other end of the stack, reversing the stack. The final case is if the stack loops back on itself. Then we can extract a path/cycle/sigma and keep going.
Typing this out on mobile, so sorry for the terse description. Maybe I'll find time to implement it.

BFS and correctness on the term "VISITED"

mark x as visited
list L = x
tree T = x
while L nonempty
    choose some vertex v from front of list
    process v
    for each unmarked neighbor w
        mark w as visited
        add it to end of list
        add edge vw to T
Most code chooses to mark the adjacent nodes as visited before actually visiting them. Wouldn't it technically also be correct to add all the neighbors first and visit (and mark) them later, like this?
list L = x
tree T = x
while L nonempty
    choose some vertex v from front of list
    if (v NOT YET VISITED)
        MARK v AS VISITED HERE
        for each unmarked neighbor w
            add it to end of list
            add edge vw to T
Why is it that every BFS seems to mark nodes as visited when you have not even visited them yet? I am trying to find theoretically correct code for BFS. Which one is correct?
Both algorithms work, but the second version might add the same node to the list L twice. This doesn't affect correctness because of the additional check of whether a node was already visited, but it increases memory consumption and requires an extra check. That's why you'll typically see the first algorithm in textbooks.
Both are correct, but they use different definitions of the word visited. It is common for algorithms to have many variations and have many different implementations that are all correct, and BFS is one example.
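To make the difference concrete, here is a minimal C++ sketch of the textbook variant (mark on enqueue). It is only an illustration; the adjacency-list representation and the function name bfsOrder are assumptions of mine, not from the question.

#include <queue>
#include <vector>
using namespace std;

// Textbook variant: a node is marked the moment it is enqueued, so it can
// never be enqueued twice.
vector<int> bfsOrder(const vector<vector<int>>& adj, int start) {
    vector<bool> visited(adj.size(), false);
    vector<int> order;                    // nodes in the order they are processed
    queue<int> q;

    visited[start] = true;                // "visited" here really means "discovered"
    q.push(start);
    while (!q.empty()) {
        int v = q.front();
        q.pop();
        order.push_back(v);               // this is where v is actually processed
        for (int w : adj[v]) {
            if (!visited[w]) {
                visited[w] = true;        // mark on enqueue
                q.push(w);
            }
        }
    }
    return order;
}

In the second variant you would push neighbors without marking them and instead check visited[v] right after popping, which can leave duplicate entries sitting in the queue.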

find shortest path in a graph that compulsorily visits certain Edges while others are not compulsory to visit

I have an undirected graph with about 1000 nodes and 2000 edges, a start node, and an end node. I have to traverse from the start node to the end node passing through all of the compulsory edges (there are about 10 of them), while it is not necessary to visit all of the vertices. Is there an easy solution to this, like some minor changes to the existing graph-traversal algorithms? How do I do it?
Thanks for the help. :)
This question is different from "Find the shortest path in a graph which visits certain nodes", as my question is regarding compulsory edges, not vertices.
EDIT: The compulsory edges can be traversed in any order.
To start with a related problem, say you have a graph G = (V, E), 10 specific edges you must traverse in a given order, E' = ⟨e1, ..., e10⟩ with each ei ∈ E, and start and end nodes s, t ∈ V. You need to find the shortest distance from s to t that uses the edges of E' in the given order.
You could do this by making 11 copies of the graph. Start with a single copy (i.e., isomorphic to G = (V, E)), except that e1 moves from the first copy to the second copy. In the second copy (again isomorphic to G = (V, E)), remove e1, and have e2 move from the second copy to the third copy. Etc. In the resulting graph, run any shortest-path algorithm to get from s in the first copy to t in the 11th (last) copy.
Explanation: imagine intuitively that your graph G is drawn on a 2D sheet of paper. Photocopy it so that you have 11 copies, and stack them up in a pile of 11 sheets (imagine them with a bit of space between each two, though). Now change the graphs a bit so that the only way to go up to the second sheet from the first sheet is through an edge e1 leading from the bottom sheet to the second sheet. The only way to go up to the third sheet from the second sheet is through an edge e2 leading from the second sheet to the third sheet, and so on. Your problem is to find the shortest path starting at the node corresponding to s on the bottom sheet and ending at the node corresponding to t on the top sheet.
To solve the original problem, just repeat this with all possible permutations of E'. Note that there are 10! ≈ 3.6e6 possibilities, which isn't all that much.
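For one fixed order of the compulsory edges, the layered construction above can be run directly with Dijkstra. The sketch below is only an illustration of that idea: the Edge struct, the layer * n + v node encoding, and the function name are my assumptions, and unlike the description above it simply keeps already-crossed compulsory edges as ordinary edges instead of deleting them in higher layers.

#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>
using namespace std;

struct Edge { int to; long long w; };

// Layered-graph sketch for ONE fixed order of the required edges.
// Node (layer, v) is encoded as layer * n + v; crossing required edge number
// `layer` (in either direction) moves you from layer `layer` to layer + 1.
long long shortestWithOrderedEdges(int n, const vector<vector<Edge>>& adj,
                                   const vector<pair<int,int>>& required,
                                   int s, int t) {
    const int layers = (int)required.size() + 1;
    const long long INF = numeric_limits<long long>::max();
    auto id = [&](int layer, int v) { return layer * n + v; };

    vector<long long> dist((size_t)layers * n, INF);
    priority_queue<pair<long long,int>,
                   vector<pair<long long,int>>,
                   greater<pair<long long,int>>> pq;
    dist[id(0, s)] = 0;
    pq.push({0, id(0, s)});

    while (!pq.empty()) {
        auto [d, node] = pq.top();
        pq.pop();
        if (d > dist[node]) continue;               // stale queue entry
        int layer = node / n, v = node % n;
        for (const Edge& e : adj[v]) {
            // Does this edge complete the next required edge in the order?
            bool completes = layer + 1 < layers &&
                ((v == required[layer].first  && e.to == required[layer].second) ||
                 (v == required[layer].second && e.to == required[layer].first));
            int next = id(completes ? layer + 1 : layer, e.to);
            if (d + e.w < dist[next]) {
                dist[next] = d + e.w;
                pq.push({dist[next], next});
            }
        }
    }
    return dist[id(layers - 1, t)];                 // INF means this order is infeasible
}

You would call this once per permutation of the compulsory edges and keep the minimum.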

Explanation of Algorithm for finding articulation points or cut vertices of a graph

I have searched the net and could not find any explanation of a DFS algorithm for finding all articulation vertices of a graph. There is not even a wiki page.
From reading around, I got to know the basic facts from this PDF.
There is a variable at each node which, by looking at back edges, finds the uppermost node towards the root that can be reached. It is only determined after all edges have been processed.
But I do not understand how to find this down & up variable at each node during the execution of DFS. What is this variable doing exactly?
Please explain the algorithm.
Thanks.
Finding articulation vertices is an application of DFS.
In a nutshell,
Apply DFS on a graph. Get the DFS tree.
A node which is visited earlier is a "parent" of those nodes which are reached by it and visited later.
If any child of a node has no path to any of the ancestors of that node except through the node itself, it means that removing this node would disconnect that child's subtree from the rest of the graph.
There is an exception: the root of the tree. If it has more than one child, then it is an articulation point, otherwise not.
Point 3 essentially means that this node is an articulation point.
Now for a child, this path to the ancestors of the node would be through a back-edge from it or from any of its children.
All this is explained beautifully in this PDF.
I'll try to develop an intuitive understanding of how this algorithm works and also give commented pseudocode that outputs bi-components as well as bridges.
It's actually easy to develop a brute force algorithm for articulation points. Just take out a vertex, and run BFS or DFS on a graph. If it remains connected, then the vertex is not an articulation point, otherwise it is. This will run in O(V(E+V)) = O(EV) time. The challenge is how to do this in linear time (i.e. O(E+V)).
Articulation points connect two (or more) subgraphs. This means there are no edges from one subgraph to another. So imagine you are within one of these subgraphs and visiting its node. As you visit the node, you flag it and then move on to the next unflagged node using some available edge. While you are doing this, how do you know you are within still same subgraph? The insight here is that if you are within the same subgraph, you will eventually see a flagged node through an edge while visiting an unflagged node. This is called a back edge and indicates that you have a cycle. As soon as you find a back edge, you can be confident that all the nodes through that flagged node to the one you are visiting right now are all part of the same subgraph and there are no articulation points in between. If you didn't see any back edges then all the nodes you visited so far are all articulation points.
So we need an algorithm that visits vertices and marks all the nodes between the target of a back edge and the currently-being-visited node as belonging to the same subgraph. There may obviously be subgraphs within subgraphs, so we need to keep the largest subgraph found so far. These subgraphs are called bi-components. We can implement this algorithm by assigning each bi-component an ID which is initialized as just a count of the number of vertices we have visited so far. Later, as we find back edges, we can reset the bi-component ID to the lowest we have found so far.
We obviously need two passes. In the first pass, we want to figure out which vertex we can see from each vertex through back edges, if any. In the second pass we want to visit vertices in the opposite direction and collect the minimum bi-component ID (i.e. earliest ancestor accessible from any descendants). DFS naturally fits here. In DFS we go down first and then come back up so both of the above passes can be done in a single DFS traversal.
Now without further ado, here's the pseudocode:
time = 0
visited[i] = false for all i
parent[i] = null for all i

GetArticulationPoints(u)
    visited[u] = true
    u.st = time++
    u.low = u.st                        //lowest start time reachable from u's subtree through back edges
    dfsChild = 0                        //count DFS-tree children; needed for the root's special case below
    isArticulation = false
    for each ni in adj[u]
        if not visited[ni]
            parent[ni] = u
            GetArticulationPoints(ni)
            ++dfsChild
            u.low = Min(u.low, ni.low)  //while coming back up, take the lowest reachable ancestor found by the descendants
            if ni.low >= u.st           //no back edge from ni's subtree climbs above u
                isArticulation = true
            if ni.low > u.st
                Output edge (u, ni) as a bridge
        else if ni <> parent[u]         //while going down, note down the back edges
            u.low = Min(u.low, ni.st)
    //The DFS root is a special case: disconnecting it may not decompose the graph,
    //so the root is an articulation point only if it has more than one DFS child.
    if (parent[u] <> null and isArticulation) or (parent[u] = null and dfsChild > 1)
        Output u as an articulation point
    Output u.low as the bi-component ID
One fact that seems to be left out of all the explanations:
Fact #1: In a depth first search spanning tree (DFSST), every backedge connects a vertex to one of its ancestors.
This is essential for the algorithm to work, it is why an arbitrary spanning tree won't work for the algorithm. It is also the reason why the root is an articulation point iff it has more than 1 child: there cannot be a backedge between the subtrees rooted at the children of the spanning tree's root.
A proof of the statement: let (u, v) be a backedge where u is not an ancestor of v, and (WLOG) u is visited before v in the DFS. Let p be the deepest common ancestor of u and v. Then the DFS would have to visit p, then u, then somehow backtrack to p again before visiting v. But it isn't possible to backtrack out of u before visiting v, because there is an edge between u and v that the DFS would explore first, making v a descendant of u.
Call V(c) the set of vertices in the subtree rooted at c in the DFSST
Call N(c) the set of vertices that have a neighbor in V(c) (via a tree edge or a backedge)
Fact #2:
For a non root node u,
If u has a child c such that N(c) ⊆ V(c) ∪ {u} then u is an articulation point.
Reason: for every vertex w in V(c), every path from the root to w must contain u. If not, such a path would have to contain a back edge that connects an ancestor of u to a descendant of u (due to Fact #1), making N(c) larger than V(c) ∪ {u}.
Fact #3:
The converse of fact #2 is also true.
Reason: every descendant of u then has a path to the root that doesn't pass through u.
A descendant in V(c) can bypass u through a backedge that connects V(c) to a vertex of N(c) \ (V(c) ∪ {u}), which by Fact #1 must be a proper ancestor of u.
So for the algorithm, you only need to know 2 things about each non-root vertex u:
The depth of the vertex, say D(u)
The minimum depth of N(u), also called the lowpoint, let's say L(u)
So if a vertex u has a child c, and L(c) is less than D(u), then that means the subtree rooted at c has a backedge that reaches up to an ancestor of u, so c does not force u to be an articulation point (Fact #3). Conversely, if some child c has L(c) ≥ D(u), then u is an articulation point (Fact #2).
If the low value of a child v of u is greater than or equal to the dfsnum of u, then u is an articulation point (the DFS root is a special case: it is an articulation point only if it has more than one child).
#include <algorithm>
#include <iostream>
using namespace std;

int adjMatrix[256][256];
int low[256], dfsnum[256], num = 0;   // dfsnum[] must be initialized to -1 before the first call

void cutvertex(int u, int parent){
    low[u] = dfsnum[u] = num++;
    int children = 0;
    bool articulation = false;
    for (int v = 0; v < 256; ++v)
    {
        if (!adjMatrix[u][v]) continue;
        if (dfsnum[v] == -1)               // tree edge: v has not been visited yet
        {
            ++children;
            cutvertex(v, u);
            low[u] = min(low[u], low[v]);
            if (low[v] >= dfsnum[u])       // v's subtree cannot reach above u
                articulation = true;
        }
        else if (v != parent)              // back edge (skip the edge back to the parent)
        {
            low[u] = min(low[u], dfsnum[v]);
        }
    }
    if ((parent != -1 && articulation) || (parent == -1 && children > 1))
        cout << "Cut Vertex: " << u << "\n";
}
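For completeness, here is a hedged sketch of how the snippet above might be driven; the initialization and the -1 sentinel for the root's parent are assumptions of mine, not part of the original answer.

#include <cstring>

// Hypothetical driver for the snippet above (not part of the original answer):
// fill adjMatrix for an undirected graph on n <= 256 vertices, reset the state,
// then start a DFS from every still-unvisited vertex with parent = -1.
void findAllCutVertices(int n) {
    memset(dfsnum, -1, sizeof(dfsnum));
    num = 0;
    for (int u = 0; u < n; ++u)
        if (dfsnum[u] == -1)
            cutvertex(u, -1);
}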

small cycle finding in a planar graph

I have a geometric undirected planar graph, that is, a graph where each node has a location and no two edges cross, and I want to find all cycles that have no edges crossing them.
Are there any good solutions known to this problem?
What I'm planning on doing is a sort of A* like solution:
insert every edge in a min-heap as a path
extend the shortest path with every option
cull paths that loop back to anything other than their start (might not be needed)
cull paths that would be the third to use any given edge
Does anyone see an issue with this? Will it even work?
My first instinct is to use a method similar to a wall following maze solver. Essentially, follow edges, and always take the rightmost edge out of a vertex. Any cycles you encounter with this method will be boundaries of a face. You'll have to keep track of which edges you've traversed in which direction. Once you've traversed an edge in both directions, you've identified the faces it separates. Once all edges have been traversed in both directions, you'll have identified all faces by their boundaries.
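As an illustration of that wall-following idea, here is a hedged C++ sketch of one way it could be implemented: sort each vertex's neighbors by angle and, on arriving at a vertex, always leave by the edge adjacent in the circular order to the one you arrived on. Turning consistently in one direction like this traces each face boundary exactly once. The coordinate and adjacency-list representation and the function name enumerateFaces are my own assumptions.

#include <algorithm>
#include <cmath>
#include <vector>
using namespace std;

struct Point { double x, y; };

// Enumerate the boundary walks of all faces of a planar embedding.
// adj[u] lists u's neighbors; pos[u] is u's location in the plane.
vector<vector<int>> enumerateFaces(const vector<Point>& pos,
                                   vector<vector<int>> adj) {
    const int n = (int)pos.size();
    // Sort each adjacency list counter-clockwise by angle around its vertex.
    for (int u = 0; u < n; ++u)
        sort(adj[u].begin(), adj[u].end(), [&](int a, int b) {
            return atan2(pos[a].y - pos[u].y, pos[a].x - pos[u].x) <
                   atan2(pos[b].y - pos[u].y, pos[b].x - pos[u].x);
        });

    auto indexOf = [&](int u, int v) {              // position of v in adj[u]
        return (int)(find(adj[u].begin(), adj[u].end(), v) - adj[u].begin());
    };

    vector<vector<char>> used(n);                   // used[u][i]: directed edge u -> adj[u][i]
    for (int u = 0; u < n; ++u) used[u].assign(adj[u].size(), 0);

    vector<vector<int>> faces;
    for (int u = 0; u < n; ++u)
        for (int i = 0; i < (int)adj[u].size(); ++i) {
            if (used[u][i]) continue;
            vector<int> face;
            int cu = u, ci = i;
            while (!used[cu][ci]) {                 // walk until the starting edge repeats
                used[cu][ci] = 1;
                face.push_back(cu);
                int cv = adj[cu][ci];
                // We arrived at cv along (cu -> cv); leave along the edge that
                // follows (cv -> cu) in the circular order around cv.
                int back = indexOf(cv, cu);
                ci = (back + 1) % (int)adj[cv].size();
                cu = cv;
            }
            faces.push_back(face);
        }
    return faces;
}

Each directed edge is used exactly once, so the walks together take a number of steps linear in the number of edges (the linear find inside could be replaced by a precomputed index map for large graphs); one of the returned walks is the outer face boundary.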
A "crossing edge", as you call it, is generally known as a chord. Thus, your problem is to find all chordless cycles.
This paper looks like it may help.
A straightforward way to do this is to go out and enumerate each face directly. The principle is simple:
We maintain 'α-visited' and 'β-visited' flags for each edge, and a pair of doubly-linked lists of unvisited edges for each such flag. The 'α' and 'β' division should correspond to a partition of the plane on the line corresponding to the edge in question; which side is α and which side is β is arbitrary.
For each vertex V:
    For each adjacent pair of edges E = (V_1, V), E' = (V_2, V) (i.e., sort by angle, take adjacent pairs, as well as first+last):
        Determine which side S of E (S = α or β) V_2 lies on.
        Walk the tile, starting at V, using side S of E, as described below:
Walking the tile is performed by:
    Let V_init = V (remember the starting vertex)
    Loop:
        V_next = the vertex of E that is not V
        E' = the adjacent edge to E from V_next that is on the current side of E
        S' = the side of E' that contains V
        If V_next = V_init, we have found a loop; record all the edges we passed on the way, and mark those edge-pairs as visited.
        If E' = E (there is only one edge), then we have hit a dead end; abort this walk.
        Otherwise, let V = V_next, E = E', S = S' and continue.
I hope this made sense; it perhaps needs some diagrams to explain...
