The problem is to find shortest paths from a start vertex to all other vertices in a directed graph.
The graph has m positive-weight edges and k negative-weight edges, and it is guaranteed that the negative-weight edges do not lie on a cycle. In other words, there is no negative-weight cycle in this graph.
I tried to solve this directly with Bellman-Ford and SPFA, but is there a faster way to do it?
If k is large enough, you should probably just run Bellman–Ford and call it a day.
If k is small, you can borrow the re-weighting trick from Johnson's algorithm, but with a faster initialization than just running Bellman–Ford. Recall that Johnson computes a potential π(v) for each vertex and adjusts the cost of each arc vw from c(vw) to c′(vw) = c(vw) + π(v) − π(w), choosing π so that c′ is nowhere negative. The shortest path between s and t is the same with respect to c as with respect to c′, because the adjustment along any s–t path telescopes to π(s) − π(t), a constant that does not depend on the path.
Suppose that the input graph G has exactly one negative arc, let’s say c(xy) < 0. Use Dijkstra’s algorithm to compute distances in G − xy from y to all other vertices. Then define π(v) = distance(y, v). The correctness proof follows the proof of Johnson’s algorithm; the arc xy can’t be part of a shortest path from y because there are no negative cycles, so Dijkstra on G - xy in fact computes a shortest path tree for G.
For general k, we can do this recursively. If k = 0, run Dijkstra. Otherwise, remove a negative arc xy and compute shortest paths from y recursively, instead of with a single Dijkstra, to obtain the potentials. Once we have good values for π, re-weight and run Dijkstra one more time, from the given start vertex.
The overall running time is O((k + 1) (m + n log n)) with Fibonacci heaps.
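To make the single-negative-arc case above concrete, here is a minimal Python sketch. The function names, the edge-list representation, and the assumption that every vertex is reachable from y in G − xy (so all potentials are finite) are mine, not part of the original answer.

```python
import heapq

def dijkstra(n, adj, src):
    """Textbook Dijkstra; adj[u] is a list of (v, weight) pairs with weight >= 0."""
    dist = [float('inf')] * n
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def sssp_one_negative_arc(n, edges, s, neg):
    """Shortest paths from s when neg = (x, y, c) is the only negative arc and
    there is no negative cycle.  Assumes every vertex is reachable from y in
    G - xy, so that all potentials are finite."""
    x, y, c_xy = neg

    # Step 1: Dijkstra from y on G - xy gives the potentials pi(v) = dist(y, v).
    adj = [[] for _ in range(n)]
    for u, v, c in edges:
        if (u, v, c) != neg:
            adj[u].append((v, c))
    pi = dijkstra(n, adj, y)

    # Step 2: re-weight every arc of G: c'(vw) = c(vw) + pi(v) - pi(w) >= 0.
    adj_rw = [[] for _ in range(n)]
    for u, v, c in edges:
        adj_rw[u].append((v, c + pi[u] - pi[v]))

    # Step 3: one more Dijkstra, from the given start vertex, then undo the shift.
    d_rw = dijkstra(n, adj_rw, s)
    return [d - pi[s] + pi[v] if d < float('inf') else d
            for v, d in enumerate(d_rw)]
```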
Given an undirected weighted graph with n vertices and m edges, how do I construct an algorithm that takes at most O((n+m) log(n+m)) time to find the vertex whose minimum shortest-path distance to a set of vertices S ⊆ V is maximized?
I know I can loop through all vertices and use Dijkstra's algorithm to find the shortest path to each of the vertices in S, but that will surely take much more than O((n+m) log(n+m)).
I am assuming from your ideas (possible use of Dijkstra's shortest path algorithm) that the weight of every edge is greater than or equal to zero.
A clean solution to your problem is to create a new vertex v and add zero-weight edges between v and every vertex in S. Run Dijkstra's algorithm from v, and the answer you are looking for is the farthest vertex from v.
You can verify that the solution to your problem equals the one computed on the new graph, as the argument is straightforward. Also, it runs Dijkstra's algorithm only once, so it fits within your time complexity constraints.
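A minimal Python sketch of this super-source trick (the graph representation and function name are illustrative). Instead of materializing the extra vertex, you can seed the priority queue with every vertex of S at distance 0, which is equivalent:

```python
import heapq

def farthest_from_set(n, adj, S):
    """Vertex whose minimum shortest-path distance to the set S is maximized.
    adj[u] = list of (v, w) pairs with w >= 0, each undirected edge stored in
    both directions.  Seeding every s in S at distance 0 is the same as adding
    a new vertex joined to S by zero-weight edges and running Dijkstra once."""
    dist = [float('inf')] * n
    pq = []
    for s in S:
        dist[s] = 0
        heapq.heappush(pq, (0, s))
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return max(range(n), key=lambda v: dist[v])   # the farthest vertex
```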
Say I had to give an algorithm in O(|V|^3) that takes as input a directed graph with positive edge lengths and returns the length of the shortest cycle in the graph (if the graph is acyclic, it should say so). I know that it would be:
Let G be a graph, and define a matrix D in which D(i, j) stores the shortest-path distance from vertex i to vertex j. For any pair of vertices u and v there are two directed shortest paths to consider, u to v and v to u, and together they close a cycle of length D(u, v) + D(v, u). It is then enough to compute the minimum of D(u, v) + D(v, u) over all pairs of vertices u and v.
Could I write this in a way that makes it at most O(nm log n) (where n is the number of vertices and m is the number of edges) instead of O(|V|^3)?
Yes, in fact this problem can be solved in O(nm) according to a conference paper by Orlin and Sedeño-Noda (2017), titled An O(nm) time algorithm for finding the min length directed cycle in a graph:
In this paper, we introduce an O(nm) time algorithm to determine the minimum length directed cycle (also called the "minimum weight directed cycle") in a directed network with n nodes and m arcs and with no negative length directed cycles.
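If you only need the O(nm log n) bound asked about in the question, here is a sketch (names and representation are illustrative; this is not the O(nm) algorithm of the paper): run one Dijkstra per vertex and close each candidate cycle with an arc back into its source.

```python
import heapq

def shortest_cycle_length(n, edges):
    """Length of the shortest directed cycle, or None if the graph is acyclic.
    edges is a list of (u, v, w) with w > 0.  One Dijkstra per vertex gives
    O(n m log n) with a binary heap."""
    adj = [[] for _ in range(n)]
    in_arcs = [[] for _ in range(n)]          # arcs (u, w) entering each vertex
    for u, v, w in edges:
        adj[u].append((v, w))
        in_arcs[v].append((u, w))

    def dijkstra(src):
        dist = [float('inf')] * n
        dist[src] = 0
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        return dist

    best = float('inf')
    for v in range(n):
        dist_v = dijkstra(v)
        # The shortest cycle through v closes with some arc u -> v back into v.
        for u, w_uv in in_arcs[v]:
            best = min(best, dist_v[u] + w_uv)
    return None if best == float('inf') else best
```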
I have a question which I was asked in some past exams at my school and I can't find an answer to it.
Is it possible, knowing the final matrix after running the Johnson algorithm on a graph, to know whether it previously had negative cycles or not? Why?
Johnson Algorithm
Johnson's algorithm is a technique for computing shortest paths on graphs. It can handle negative edge weights, as long as there is no cycle of negative total weight.
The algorithm consists of (from Wikipedia):
First, a new node q is added to the graph, connected by zero-weight edges to each of the other nodes.
Second, the Bellman–Ford algorithm is used, starting from the new vertex q, to find for each vertex v the minimum weight h(v) of a path from q to v. If this step detects a negative cycle, the algorithm is terminated.
Next the edges of the original graph are reweighted using the values computed by the Bellman–Ford algorithm: an edge from u to v, having length w(u, v), is given the new length w(u,v) + h(u) − h(v).
Finally, q is removed, and Dijkstra's algorithm is used to find the shortest paths from each node s to every other vertex in the reweighted graph.
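For reference, a compact Python sketch of these four steps (the edge-list representation and helper names are my own; this is an illustration rather than a reference implementation):

```python
import heapq

def johnson(n, edges):
    """All-pairs shortest paths via Johnson's algorithm.  edges is a list of
    (u, v, w) triples; returns an n x n distance matrix, or None if a
    negative cycle is detected."""
    # 1. Add a new node q (index n) with zero-weight edges to every vertex.
    aug = edges + [(n, v, 0) for v in range(n)]

    # 2. Bellman-Ford from q gives h(v); a further relaxation => negative cycle.
    h = [float('inf')] * (n + 1)
    h[n] = 0
    for _ in range(n):
        for u, v, w in aug:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    if any(h[u] + w < h[v] for u, v, w in aug):
        return None                      # negative cycle detected

    # 3. Re-weight: w'(u, v) = w(u, v) + h(u) - h(v) >= 0.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))

    # 4. Dijkstra from every vertex on the re-weighted graph, then undo the shift.
    def dijkstra(src):
        dist = [float('inf')] * n
        dist[src] = 0
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        return dist

    return [[d if d == float('inf') else d - h[s] + h[t]
             for t, d in enumerate(dijkstra(s))] for s in range(n)]
```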
If I understood your question correctly, it should have been phrased as follows:
Is it possible knowing the final pair-wise distances matrix after running the Johnson Algorithm on a graph, to know if it originally had any negative-weight edges or not? Why?
As others commented here, we must first assume the graph has no negative-weight cycles, since otherwise the Johnson algorithm halts and returns False (due to the internal Bellman-Ford detection of negative-weight cycles).
The answer then is that if any negative-weight edge e = (u, v) exists in the graph, then the shortest weighted distance from u to v is at most w(u, v) < 0 (since in the worst case you can simply travel the negative edge e between those vertices).
Therefore, at least one edge had negative weight in the original graph iff some value in the final pair-wise distance matrix is < 0.
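As a tiny illustration of that check (the helper name is mine), given the final pair-wise distance matrix D produced by Johnson's algorithm on a graph with no negative cycles:

```python
def had_negative_edge(D):
    """True iff the original graph contained a negative-weight edge,
    assuming D is the final all-pairs distance matrix (no negative cycles)."""
    return any(d < 0 for row in D for d in row)
```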
If the question is supposed to be interpreted as:
Is it possible, knowing the updated non-negative edge weights after running the Johnson Algorithm on a graph, to know if it originally had any negative-weight edges or not? Why?
Then no, you can't tell.
Running the Johnson algorithm on a graph that has only non-negative edge weights will leave the weights unchanged. This is because all of the shortest distances q -> v will be 0. Therefore, given the edge weights after running Johnson, the initial weights could have been exactly the same.
Okay, first of all, I know Dijkstra does not work for negative weights, and we can use Bellman-Ford instead. But in a problem I was given, it states that all the edges have weights from 0 to 1 (0 and 1 are not included), and the cost of a path is actually the product of its edge weights.
So what I was thinking is to just take the log. Now all the edges are negative. I know Dijkstra won't work for negative weights, but in this case all the edges are negative, so can't we do something so that Dijkstra would work?
I thought of multiplying all the weights by -1, but then the shortest path becomes the longest path.
So is there any way I can avoid the Bellman-Ford algorithm in this case?
The exact question is: "Suppose for some application, the cost of a path is equal to the product of all the weights of the edges in the path. How would you use Dijkstra's algorithm in this case? All the weights of the edges are from 0 to 1 (0 and 1 are not inclusive)."
If all the weights on the graph are in the range (0, 1), then every cycle has total weight less than 1, and thus you would be stuck going around such a cycle forever (every pass around the cycle reduces the total weight of the "shortest" path). You have probably misunderstood the problem: either you want to find the longest path, or you are not allowed to visit the same vertex twice. Anyway, in the first case Dijkstra's algorithm is definitely applicable, even without the log modification. And I am pretty sure the second case cannot be solved with polynomial complexity.
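To illustrate the first (longest-path, i.e. maximum-product or "most reliable path") case, here is a minimal sketch of Dijkstra adapted to products directly, without the log trick; the function name and adjacency-list representation are illustrative. The greedy step stays valid because extending a path multiplies its cost by a factor smaller than 1, so an extracted maximum can never be improved later.

```python
import heapq

def most_reliable_path(n, adj, src):
    """Maximum-product path costs from src when every edge weight is in (0, 1).
    Plain Dijkstra adapted to products: always extract the vertex with the
    largest current product (a max-heap via negated keys)."""
    best = [0.0] * n
    best[src] = 1.0
    pq = [(-1.0, src)]                   # negated products give a max-heap
    while pq:
        negp, u = heapq.heappop(pq)
        p = -negp
        if p < best[u]:
            continue                     # stale entry
        for v, w in adj[u]:              # adj[u] = list of (v, weight) pairs
            if p * w > best[v]:
                best[v] = p * w
                heapq.heappush(pq, (-(p * w), v))
    return best
```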
So you want to use a function, let's say F, that you will apply to the weights of the original graph, and then with Dijkstra's algorithm you'll find the shortest-product path. Let's also consider the following graph, where we start from node A and 0 < x < y < 1: there is an edge A -> B of weight x and an edge A -> C of weight y.
In the above graph F(x) must be smaller than F(y) for Dijkstra's algorithm to output correctly the shortest paths from A.
Now, let's take a slightly different graph, again starting from node A: the same two edges as before, plus an edge C -> B of weight x.
Then how will Dijkstra's algorithm work?
Since F(x) < F(y) then we will select node B at the next step. Then we'll visit the remaining node C. Dijkstra's algorithm will output that the shortest path from A to B is A -> B and the shortest path from A to C is A -> C.
But the shortest path from A to B is A -> C -> B with cost x * y < x.
This means we can't find a weight transformation function and expect Dijkstra's algorithm to work in every case.
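Plugging in concrete numbers (my own choice, x = 0.5 and y = 0.9) makes the failure visible. Any transformation F that keeps the weights non-negative, as Dijkstra requires, runs into it; F = -log is shown as one popular example:

```python
import math

x, y = 0.5, 0.9
print(x * y < x)             # True: A -> C -> B has the smaller product
# For any F with non-negative values, the transformed length of A -> C -> B
# is F(y) + F(x) >= F(x), so Dijkstra on the transformed weights prefers the
# direct edge A -> B instead.
F = lambda w: -math.log(w)
print(F(y) + F(x) >= F(x))   # True
```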
You wrote:
I thought of multiplying all the weights by -1 but then the shortest path becomes the longest path.
To switch between the shortest and the longest path, invert the weights, i.e. take reciprocals. So 1/3 becomes 3, 5 becomes 1/5, and so on. For example, a path with weights 0.5 and 0.5 (product 0.25) is shorter than a single edge of weight 0.9, while under reciprocals its cost becomes 4 versus about 1.11, i.e. longer.
If your graph has cycles, no shortest path algorithm will find an answer, because those cycles will always be "negative cycles", as Rontogiannis Aristofanis pointed out.
If your graph doesn't have cycles, you don't have to use Dijkstra at all.
If it is directed, it is a DAG and there are linear-time shortest path algorithms.
If it is undirected, it is a tree (or a forest), and it's trivial to find shortest paths in trees. And if your graph is directed, even without cycles, Dijkstra still won't work, for the same reason it doesn't work on graphs with negative edges.
In all cases, Dijkstra is a terrible choice of algorithm for your problem.
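For the DAG case mentioned above, here is a minimal sketch (the edge-list input and function name are illustrative): take logs, which are all negative, and relax vertices in topological order; this tolerates negative weights and runs in linear time.

```python
import math
from collections import deque

def min_product_path_dag(n, edges, src):
    """Minimum-product path costs from src in a DAG whose edge weights lie in
    (0, 1).  Works in log-space (sums of negative logs) and relaxes vertices
    in topological order -- no Dijkstra needed."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, math.log(w)))
        indeg[v] += 1

    # Kahn's algorithm for a topological order.
    order, q = [], deque(v for v in range(n) if indeg[v] == 0)
    while q:
        u = q.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)

    logdist = [float('inf')] * n
    logdist[src] = 0.0
    for u in order:                      # relax edges in topological order
        if logdist[u] == float('inf'):
            continue
        for v, lw in adj[u]:
            logdist[v] = min(logdist[v], logdist[u] + lw)

    # Convert back: product cost = exp(log distance).
    return [math.exp(d) if d != float('inf') else float('inf') for d in logdist]
```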
I've been studying all-pair shortest path algorithms recently such as Floyd-Warshall and Johnson's algorithm, and I've noticed that these algorithms produce correct solutions even when a graph contains negative weight edges (but not negative weight cycles). For comparison, Dijkstra's algorithm (which is single-source shortest path) does not work for negative weight edges. What makes the all-pair shortest path algorithms work with negative weights?
Floyd-Warshall's all-pairs shortest paths algorithm works for graphs with negative edge weights because the correctness of the algorithm does not depend on the edge weights being non-negative, while the correctness of Dijkstra's algorithm is based on this fact.
Correctness of Dijkstra's algorithm:
We have 2 sets of vertices at any step of the algorithm. Set A consists of the vertices to which we have computed the shortest paths. Set B consists of the remaining vertices.
Inductive Hypothesis: At each step we will assume that all previous iterations are correct.
Inductive Step: When we add a vertex V to the set A and set the distance to be dist[V], we must prove that this distance is optimal. If this is not optimal then there must be some other path to the vertex V that is of shorter length.
Suppose this other path goes through some vertex X in set B.
Now, since dist[V] <= dist[X], any path to V that leaves set A through X has length at least dist[X] >= dist[V], unless the graph has negative edge lengths.
Correctness of Floyd Warshall's algorithm:
A shortest path from vertex S to vertex T either is a direct edge or passes through some intermediate vertex U of the graph. Thus the shortest path from S to T can be computed as
min( shortest_path(S to U) + shortest_path(U to T) ) over all vertices U in the graph.
As you can see, there is no requirement that the graph's edges be non-negative, as long as the subproblems compute the paths correctly. And the subproblems compute the paths correctly as long as the base cases have been properly initialized.
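A short Floyd-Warshall sketch in Python makes this concrete: nothing in the update rule assumes non-negative weights, only that the base cases (direct edges and zero-length self paths) are initialized correctly. The edge-list representation is illustrative.

```python
def floyd_warshall(n, edges):
    """All-pairs shortest distances; negative edge weights are fine as long
    as there is no negative cycle.  edges is a list of (u, v, w)."""
    INF = float('inf')
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)           # base case: direct edges

    for k in range(n):                            # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]

    # A negative value on the diagonal would reveal a negative cycle.
    return dist
```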
Dijkstra's algorithm doesn't work with negative-weight edges because it is based on the greedy assumption that once a vertex v has been added to the set S, d[v] already holds the minimum possible distance.
But if a vertex added to S later has outgoing negative-weight edges, the effect of those negative edges on distances that were already finalized is never taken into account.
The all-pairs shortest path algorithms, however, do capture those updates.
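A tiny concrete illustration of that failure mode (the three-vertex example and the code are mine): a textbook Dijkstra that finalizes each vertex exactly once reports dist(s, a) = 2, although the true shortest distance via the negative arc is 1.

```python
import heapq

def textbook_dijkstra(n, adj, src):
    """Dijkstra that finalizes every vertex exactly once and never relaxes an
    edge into an already-finalized vertex -- the greedy assumption above."""
    dist = [float('inf')] * n
    dist[src] = 0
    done = [False] * n
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if done[u]:
            continue
        done[u] = True
        for v, w in adj[u]:
            if not done[v] and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# s = 0, a = 1, b = 2; arcs s->a (2), s->b (3), b->a (-2).
adj = [[(1, 2), (2, 3)], [], [(1, -2)]]
print(textbook_dijkstra(3, adj, 0))   # [0, 2, 3] -- but dist(s, a) is really 1
```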