While I was in the shower today, I had a thought: how difficult would it be to write an algorithm that traverses a weighted digraph and finds the shortest path while being allowed to skip a fixed number of edges s? I started thinking about even a single skip, and the brute-force method seems to multiply the work by the number of edges in the graph: you have to find the shortest path for each case where one edge's cost is set to 0 and then compare across all of those graphs. I don't know if there are any algorithms that do this, but a cursory search of Google didn't turn up any.
My first question is about skipping the costliest edge(s), but it's also an interesting problem to examine finding a path when you are forced to skip the cheapest edge(s).
This is just to satisfy my curiosity, so no rush.
Thanks!
What follows is the logic of how to solve this problem. The way to solve this type of problem is to consider a graph composed of two copies of the original graph you want to traverse, which I'll describe how to create. For your own sake, draw a small graph, and then draw it topologically sorted (this helps with the visualization, but is not necessary in the program). Next, draw a copy of that graph a few inches above the original. You're in the bottom copy of this graph when you have not yet used your skip, and you're in the top copy when you have used it. Let's call the nodes in the bottom graph A1, A2, A3, ... and the nodes in the top graph B1, B2, B3, ... If, in your original graph, node 1 is connected to node 2, then your new graph has edges A1->A2, B1->B2, and a free connection A1->B2 (with edge cost 0).
Consider the following original graph, where you start at the black node, and desire to end up at the blue node.
Your new graph will look like the following, where you again start at black and wish to go to the blue node.
At each location in the bottom half of the graph, you have not used your skip, and thus can either skip (moving to the top part of the graph) or can move normally, going to another node in the bottom graph.
You can then run any of the standard shortest-path algorithms (Dijkstra's, for example) on this new graph.
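To make this concrete, here is a minimal Python sketch of the same construction, generalized to s skips. Rather than building the copies explicitly, it runs Dijkstra over states (node, skips used); the function name, input format, and the assumption of non-negative edge weights are mine, not part of the original question.

```python
import heapq

def shortest_path_with_skips(n, edges, source, target, skips=1):
    """Sketch: Dijkstra over the layered graph, with layers created implicitly.
    edges: list of (u, v, w) directed edges; vertices are 0 .. n-1; w >= 0."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))

    INF = float("inf")
    # dist[v][k] = shortest distance to v having used k skips so far.
    dist = [[INF] * (skips + 1) for _ in range(n)]
    dist[source][0] = 0
    pq = [(0, source, 0)]

    while pq:
        d, u, used = heapq.heappop(pq)
        if d > dist[u][used]:
            continue
        if u == target:
            return d            # first pop of the target is optimal over all layers
        for v, w in adj[u]:
            # Normal move: stay in the same layer, pay the edge cost.
            if d + w < dist[v][used]:
                dist[v][used] = d + w
                heapq.heappush(pq, (d + w, v, used))
            # Skip this edge: move up one layer, pay nothing.
            if used < skips and d < dist[v][used + 1]:
                dist[v][used + 1] = d
                heapq.heappush(pq, (d, v, used + 1))

    return None  # target unreachable
```

With skips=1 this is exactly the two-copy graph above: layer 0 is the A copy, layer 1 is the B copy, and the zero-cost push into the next layer is the A1->B2 style edge.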
I'm trying to implement a (Unweighted) Feedback Vertex Set approximation algorithm from the following paper: FVS-Approximation-Paper. One of the steps of the algorithm (described on page 4) is to compute a maximal 2-3 subgraph of the input graph.
To be precise, a 2-3 graph is one that has only vertices of degree either 2 or 3.
By maximal we mean that no set of edges or vertices of the original graph can be added to the maximal subgraph without violating the 2-3 condition.
The authors of the paper claim that the computation can be carried out by a "simple Depth First Search (DFS)" on the graph. However, this algorithm seems to elude me. How can the maximal subgraph be computed?
I think I managed to figure out something like what the authors intended. I wouldn't call it simple, though.
Let G be the graph and H be an initially empty 2-3 subgraph of G. The algorithm bears a family resemblance to a depth-first traversal, yet I wouldn't call it that. Starting from an arbitrary node, we walk around in the graph, pushing the steps onto a stack. Whenever we detect that the stack contains a path/cycle/sigma shape that would constitute a 2-3 super-graph of H, we move it from the stack to H and continue. When it's no longer possible to find such a shape, H is maximal, and we're done.
In more detail, the stack usually consists of a path having no nodes of degree 3 in H. The cursor is positioned at one end of the path. With each step, we examine the next edge incident to the end. If the only incident edge is the one by which we arrived, we delete it from both G and the stack and move the end back one. Otherwise we can possibly extend the path by some edge e. If e's other endpoint has degree 3 in H, we delete e from G and consider the next edge incident to the end. If e's other endpoint has degree 2 in H but is not currently on the stack, then we've anchored this end. If the other end is anchored too, then add the stack path to H and keep going. Otherwise, move the cursor to the other end of the stack, reversing the stack. The final case is if the stack loops back on itself. Then we can extract a path/cycle/sigma and keep going.
Typing this out on mobile, so sorry for the terse description. Maybe I'll find time to implement it.
I came across this problem at a coding site and I have no idea how to solve it. The editorial is not available, nor was I able to find any related article online. So I am asking it here.
Problem:
You have a graph G that contains N vertices and M edges. The vertices are numbered from 1 through N. Each node is colored either black or white. You want to calculate the shortest path from 1 to N such that the difference between the number of black and white nodes on the path is at most 1.
As obvious as it is, applying straightforward Dijkstra's algorithm will not work. Any help is appreciated. Thank you!
We can consider a modified graph and run Dijkstra on this one:
For each node in the original graph, the modified graph will have multiple meta vertices (theoretically, infinitely many) that each correspond to a different black-white difference. You only need to create the nodes as you explore the graph with Dijkstra. Thus, you won't need infinitely many nodes.
The edges are then pretty simple (you can also create them while exploring). If you are currently at a node with black-white difference d and the original graph has an edge to a white node, then you create an edge to the respective node with black-white difference d-1. If the original graph has an edge to a black node, you create an edge in the modified graph to the respective node with black-white difference d+1. You don't necessarily need to treat them as different nodes. You can also store the Dijkstra variables in the node grouped by black-white difference.
Running Dijkstra in this way will give you the shortest paths to any node with any black-white difference. As soon as you reach the target node with an acceptable black-white difference, you are done.
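Here is a minimal sketch of this in Python, assuming non-negative edge weights and a color array with +1 for black and -1 for white; these conventions, the function name, and treating each step onto a node as counting toward the balance are illustrative assumptions, not from the original problem.

```python
import heapq

def shortest_balanced_path(n, adj, color, source, target):
    """Dijkstra over (node, black-white difference) states.
    adj[u] = list of (v, w) with w >= 0; color[u] = +1 for black, -1 for white.
    Vertices are 0 .. n-1. Returns None if no acceptable path exists."""
    start_diff = color[source]                 # the source node counts too
    dist = {(source, start_diff): 0}
    pq = [(0, source, start_diff)]
    while pq:
        d, u, diff = heapq.heappop(pq)
        if d > dist[(u, diff)]:
            continue
        if u == target and abs(diff) <= 1:
            return d                           # first acceptable pop is optimal
        for v, w in adj[u]:
            state = (v, diff + color[v])       # stepping onto v updates the balance
            nd = d + w
            if nd < dist.get(state, float("inf")):
                dist[state] = nd
                heapq.heappush(pq, (nd, v, state[1]))
    return None
```

The dictionary keyed by (node, difference) is exactly the "create meta vertices only as you explore" idea from the answer.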
The problem: you need to find a minimum spanning tree of a graph (i.e. a set S of edges in said graph such that the edges in S, together with all of the graph's vertices, form a tree; additionally, among all such sets, the sum of the costs of the edges in S has to be minimal). But there's a catch. You are given an initial set of fixed edges K such that K must be included in S.
In other words, find some MST of a graph with a starting set of fixed edges included.
My approach: standard Kruskal's algorithm, but before anything else join all vertices connected by the fixed edges. That is, if K = {(1,2), (4,5)}, I apply Kruskal's algorithm, but instead of starting with each node in its own individual set, nodes 1 and 2 start in the same set and nodes 4 and 5 start in the same set.
The question: does this work? Is there a proof that this always yields the correct result? If not, could anyone provide a counter-example?
P.S. The problem only asks for finding ONE MST; I'm not interested in all of them.
Yes, it will work as long as your initial set of edges doesn't form a cycle.
Keep in mind that the resulting tree might not be minimal in weight since the edges you fixed might not be part of any MST in the graph. But you will get the lightest spanning tree which satisfies the constraint that those fixed edges are part of the tree.
How to implement it:
To implement this, you can simply change the edge weights of the edges you need to fix. Just pick the lowest edge weight appearing in your graph, say min_w, subtract 1 from it, and assign this new weight, i.e. (min_w - 1), to the edges you need to fix. Then run Kruskal on this graph.
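A small Kruskal sketch of this weight-shift trick in Python; the input format, names, and the assumption that the fixed edges form no cycle are mine, not from the answer.

```python
def mst_with_fixed_edges(n, edges, fixed):
    """Kruskal on the reweighted graph described above.
    edges: list of (w, u, v); fixed: set of (u, v) pairs that must appear in
    the tree and are assumed not to form a cycle. Vertices are 0 .. n-1."""
    min_w = min(w for w, _, _ in edges)

    def effective_weight(e):
        w, u, v = e
        # Fixed edges become strictly lighter than everything else.
        return min_w - 1 if (u, v) in fixed or (v, u) in fixed else w

    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, key=effective_weight):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w))           # report the original weight
    return tree
```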
Why it works:
Clearly Kruskal will pick all the edges you need (since these are the lightest now) before picking any other edge in the graph. When Kruskal finishes, the resulting set of edges is an MST of G' (the graph where you changed some weights). Note that since you only changed the weights of your fixed set of edges, the algorithm would never have made a different choice on the other edges (the ones which aren't part of your fixed set). If you think of the edges Kruskal considers as a sorted list, then changing the weights of the edges you need to fix moves those edges to the front of the list, but it doesn't change the order of the other edges with respect to each other.
Note: As you may notice, giving the lightest weight to your edges is basically the same thing as you suggest. But I think it is a bit easier to reason about why it works. Go with whatever you prefer.
I wouldn't recommend Prim, since this algorithm expands the spanning tree gradually from the current connected component (in the beginning one usually starts with a single node). The case where you join larger components (because your fixed edges might not all be in a single component) would need to be handled separately; it might not be hard, but you would have to take care of it. OTOH with Kruskal you don't have to adapt anything, but simply manipulate your graph a bit before running the regular algorithm.
If I understood the question properly, Prim's algorithm would be more suitable for this, as it is possible to initialize the connected components to be exactly the edges which are required to occur in the resulting spanning tree (plus the remaining isolated nodes). The desired edges are not permitted to contain a cycle, otherwise there is no spanning tree including them.
That being said, apparently Kruskal's algorithm can also be used, as it is explicitly stated that it can be used to find an edge that connects two trees of a forest in a cost-minimal way.
Roughly speaking, since the forests of a given graph form a matroid, the greedy approach yields the desired result (namely a weight-minimal tree) regardless of the independent set you start with.
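For completeness, here is a sketch of that "start from an independent set" view, which is essentially the approach proposed in the question: seed the union-find structure with the fixed edges and then run ordinary Kruskal. The names and input format are illustrative, and the fixed edges are assumed not to form a cycle.

```python
def mst_seeded_kruskal(n, edges, fixed):
    """Seed the forest with the fixed edges, then finish with ordinary Kruskal.
    edges: list of (w, u, v); fixed: list of (u, v) assumed to form no cycle.
    Vertices are 0 .. n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for u, v in fixed:                       # the independent set we start from
        parent[find(u)] = find(v)
        tree.append((u, v))
    for w, u, v in sorted(edges):            # greedy completion, cheapest first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree
```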
Although we can check whether a graph is bipartite using BFS or DFS (2-coloring) on any given undirected graph, the same implementation may not work for a directed graph.
So to test the same thing on a directed graph, I am building a new undirected graph G2 from my source graph G1, such that for every edge u -> v in G1 I add an undirected edge [u, v] to G2.
By applying a 2-coloring BFS I can then find whether G2 is bipartite or not.
The same answer applies to G1, since the two graphs are structurally the same. But this method is costly, as I am using extra space for the new graph. Though this suffices for my purpose right now, I'd like to know if there are any better implementations for this.
Thanks in advance.
You can execute the algorithm that finds the 2-partition of an undirected graph on a directed graph as well; you just need a little twist. (BTW, in the algorithm below I assume that you will eventually find a 2-coloring. If not, then at some point you will run into a node that is already colored and find that you need to color it with the other color. Then you just exit, saying the graph is not bipartite.)
Start from any node and do the 2-coloring by traversing the edges. If you have traversed every edge and visited every node in the graph, then you have your partition. If not, then you have a component that is 2-colored with no edges leaving it. Pick any node not in that component and repeat. You may end up with several components that are each 2-colored, with no edges leaving any of them. If you then encounter an edge that originates in the component you are currently building and goes into a node of one of the previous components, you just merge the current component with the older one, possibly flipping the color of every node in one of the components (flip the smaller one). After merging, just continue. The merge is safe because, at the time of the merge, you have scanned only one edge between the two components, so flipping the coloring of one of them leaves you in a valid state.
The time complexity is still O(max(|N|,|E|)), and all you need is an extra field for every node indicating which component that node is in.
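Here is a rough Python sketch of the procedure, keeping an explicit member list per component so the smaller side can be flipped on a merge. The names and the adjacency-list input format are assumptions, not from the original post.

```python
from collections import deque

def underlying_bipartite(n, adj):
    """2-color the underlying undirected graph of a digraph without building it.
    adj[u] = list of v with a directed edge u -> v. Vertices are 0 .. n-1.
    Returns True iff the underlying undirected graph is bipartite."""
    color = [None] * n            # 0/1, None = not yet visited
    comp = [None] * n             # component id of each colored node
    members = {}                  # component id -> list of its nodes

    def merge(keep, drop, flip):
        # Merge component `drop` into `keep`, flipping its colors if needed.
        for x in members[drop]:
            if flip:
                color[x] ^= 1
            comp[x] = keep
        members[keep].extend(members[drop])
        del members[drop]

    next_id = 0
    for s in range(n):
        if color[s] is not None:
            continue
        cid, next_id = next_id, next_id + 1
        color[s], comp[s], members[cid] = 0, cid, [s]
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:              # fresh node: color and enqueue
                    color[v] = color[u] ^ 1
                    comp[v] = comp[u]
                    members[comp[u]].append(v)
                    queue.append(v)
                elif comp[v] == comp[u]:          # same component: colors must differ
                    if color[v] == color[u]:
                        return False
                else:                             # edge into an older component: merge
                    a, b = comp[u], comp[v]
                    if len(members[a]) < len(members[b]):
                        a, b = b, a               # keep the larger component
                    merge(a, b, flip=(color[u] == color[v]))
    return True
```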
Here is the full title I would have posted, but it happens to be too long:
Given a source node, dest node, and intermediate nodes, how does one detect if the shortest Manhattan Distance is blocked by the intermediate nodes?
I've drawn a diagram to make it more clear. On the left side, "u" is the source node and "v" is the destination node. The nodes labeled 1 through 6 are the intermediate nodes. The shortest Manhattan Distance from u -> v would be 12, but the intermediate nodes form a wall blocking it. The diagram on the right, with u' being the source, and v' being the destination, shows that the intermediate nodes 1 through 5 do not block the shortest Manhattan distance from u' to v'.
I'm trying to find an algorithm that won't require me to actually do a graph search (e.g. BFS), because the distance between u and v could potentially be very large.
If all you want to do is detect whether a shortest path (one consisting of moves that monotonically take you toward the destination) is blocked, then you are trying to check whether the blocking nodes cut the rectangle whose corners are the source and destination nodes into two disconnected regions. If no shortest path from the source to the destination is possible, then every such monotone path must contain some blocked point.
Let's suppose for simplicity that your start point is below and to the left of the destination point. In that case, find, in O(n) time, all of the obstacle points contained in the bounding box defined by the start and end points. You now want to see if there is some subset of those nodes that cuts the rectangle into two pieces, one containing the bottom-left corner and one containing the top-right corner. This is possible iff there is a chain of blocking nodes from the left side to the right side, from the left side to the bottom side, from the top side to the right side, or from the top side to the bottom side. Thus we just need to check whether any of these exists.
Fortunately, this can be done efficiently by modeling the problem as a graph search in a graph that has size O(n), where n is the number of blocking points, and has nothing to do with the size of the bounding box. That is, no matter how far apart the test points are, the size of the graph to search depends solely on the number of blocking points.
The graph you want to construct has two parts. First, build a graph where each blocking point is connected to each other blocking point in the 3x3 square surrounding it. These edges link together blocking points that could be part of the same barrier, in that no path from the source to the target could pass between two blocking points joined by an edge. Now, add in four new nodes representing the top wall, left wall, right wall, and bottom wall and connect them to each node that is adjacent to the appropriate wall. That way, for example, a path from the left wall node to the right wall node would represent a series of blocking nodes that make it impossible to get from the bottom-left node to the top-right node.
This graph has size O(n), where n is the number of blocking nodes, since there are O(n) nodes and each node can have at most 12 edges - one for each of the 8 neighbors and potentially one for each of the four walls. You could construct it in at worst quadratic time by scanning over each node and, for each other node, seeing if they are adjacent. There is probably a better way to do this, but nothing comes to me at the moment.
Now that you have the graph, for each pair of walls that, if connected by blocking nodes, would separate the two corners, run a graph search in this graph between the two corresponding wall nodes. If a path exists, report that the shortest path is blocked. If not, report that some shortest path is unblocked. These searches can be done with a simple DFS, or, since you're running multiple searches and just want to know whether the endpoints are connected, by computing connected components once and checking whether any relevant pair of wall nodes lies in the same component. Either approach takes time O(n).
Thus the time to solve this problem is at most O(n^2), with the bottleneck being the time required to construct the graph.
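Here is a rough Python sketch of this reduction. One simple improvement over the quadratic construction is to keep the blocking points in a hash set and probe the eight neighbours of each point directly; the coordinate convention (source below and to the left of the destination) and the function name are assumptions of the sketch.

```python
from collections import defaultdict

def monotone_path_blocked(src, dst, blockers):
    """True iff every monotone (right/up) path from src to dst is blocked.
    Assumes src is below and to the left of dst; reflect coordinates otherwise."""
    (x0, y0), (x1, y1) = src, dst
    if src in blockers or dst in blockers:
        return True
    pts = {p for p in blockers if x0 <= p[0] <= x1 and y0 <= p[1] <= y1}
    LEFT, RIGHT, TOP, BOTTOM = "L", "R", "T", "B"
    adj = defaultdict(set)
    for (x, y) in pts:
        # Blocking points within each other's 3x3 square form an impassable barrier.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                q = (x + dx, y + dy)
                if q != (x, y) and q in pts:
                    adj[(x, y)].add(q)
        # Wall nodes: a blocking point on a boundary row/column touches that wall.
        if x == x0: adj[LEFT].add((x, y)); adj[(x, y)].add(LEFT)
        if x == x1: adj[RIGHT].add((x, y)); adj[(x, y)].add(RIGHT)
        if y == y1: adj[TOP].add((x, y)); adj[(x, y)].add(TOP)
        if y == y0: adj[BOTTOM].add((x, y)); adj[(x, y)].add(BOTTOM)

    def connected(a, b):
        # Simple DFS between two wall nodes.
        seen, stack = {a}, [a]
        while stack:
            u = stack.pop()
            if u == b:
                return True
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return False

    # Any of these wall-to-wall barriers separates the two corners.
    return any(connected(a, b) for a, b in
               [(LEFT, RIGHT), (LEFT, BOTTOM), (TOP, RIGHT), (TOP, BOTTOM)])
```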
Hope this helps!
Here's my idea:
I'll describe the case where the destination is above and to the right of the source; for the other cases, rotate. (For the degenerate cases where the nodes share an x or y coordinate, just check whether there's a blocking node directly between them.)
Take the matrix with the source and destination in opposite corners. Now, one column at a time, from left to right and, within each column, bottom up, mark blocked nodes. A node B is blocked iff any of the following is true:
B is an intermediate node
the nodes to the left of B and below B are each either blocked (both were already processed, given the order of processing) or outside the bounds of the matrix
In the end, if the destination is blocked, there's no free shortest path. (Note that the source node itself must be exempt from the second rule, since both of its neighbours are out of bounds.)
The time required is O(m*n), where m and n are the lengths of the sides of the matrix. So when you only have a few intermediate nodes, templatetypedef's solution may be more appropriate.
EDIT: Got it a little wrong previously, now I hope I didn't miss anything
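A short Python sketch of this column-by-column marking, using the same coordinate convention as the answer (destination above and to the right of the source); the function name and input format are illustrative.

```python
def monotone_path_blocked_dp(src, dst, blockers):
    """True iff every monotone (right/up) path from src to dst is blocked.
    Assumes dst is above and to the right of src; reflect coordinates otherwise."""
    (x0, y0), (x1, y1) = src, dst
    obstacles = {p for p in blockers if x0 <= p[0] <= x1 and y0 <= p[1] <= y1}
    blocked = {}
    for x in range(x0, x1 + 1):              # left to right
        for y in range(y0, y1 + 1):          # bottom to top within each column
            if (x, y) == src:
                blocked[(x, y)] = (x, y) in obstacles   # source exempt from rule 2
            elif (x, y) in obstacles:
                blocked[(x, y)] = True
            else:
                left = blocked[(x - 1, y)] if x > x0 else True   # out of bounds counts as blocked
                below = blocked[(x, y - 1)] if y > y0 else True
                blocked[(x, y)] = left and below
    return blocked[dst]
```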