Dijkstra's algorithm on directed acyclic graph with negative edges

Will Dijkstra's algorithm work on a graph with negative edges if it is acyclic (a DAG)? I think it would, because with no cycles there cannot be a negative loop. Is there any other reason why this algorithm would fail?
Thanks [midterm tomorrow]

Consider the graph (directed 1 -> 2, 2-> 4, 4 -> 3, 1 -> 3, 3 -> 5):
1---(2)---3--(2)--5
|         |
(3)      (2)
|         |
2--(-10)--4
The minimum path is 1 - 2 - 4 - 3 - 5, with cost -3. However, Dijkstra will set d[3] = 2 and d[2] = 3 in the first step, then extract node 3 from its priority queue and set d[5] = 4. Since node 3 has been extracted from the priority queue, and Dijkstra does not push a given node onto its priority queue more than once, it will never end up in it again, so the shorter distance to node 3 found later through node 4 is never propagated and the algorithm reports d[5] = 4 instead of -3. The algorithm won't work.
Dijkstra's algorithm does not work with negative edges, period. The absence of a cycle changes nothing. Bellman-Ford is the one that can detect negative cost cycles and works with negative edges. Dijkstra will not work if you can have negative edges.
If you change Dijkstra's algorithm such that it can push a node to the priority queue more than once, then the algorithm will work with negative cost edges. But it is debatable if the new algorithm is still Dijkstra's: I would say you get Bellman-Ford that way, which is often implemented exactly like that (well, usually a FIFO queue is used and not a priority queue).
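To make this concrete, here is a minimal Python sketch of both behaviours on the example graph above (the function and flag names are only illustrative): without re-insertion it reports d[5] = 4, with re-insertion it finds the correct -3.

import heapq

def shortest_paths(adj, source, allow_reinsertion):
    # adj: {node: [(neighbor, weight), ...]}
    dist = {v: float('inf') for v in adj}
    dist[source] = 0
    settled = set()                        # nodes extracted at least once
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                    # stale queue entry, skip it
            continue
        settled.add(u)
        for v, w in adj[u]:
            if not allow_reinsertion and v in settled:
                continue                   # classic Dijkstra: never touch a settled node again
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# The graph above: 1->2 (3), 1->3 (2), 2->4 (-10), 4->3 (2), 3->5 (2)
graph = {1: [(2, 3), (3, 2)], 2: [(4, -10)], 3: [(5, 2)], 4: [(3, 2)], 5: []}
print(shortest_paths(graph, 1, allow_reinsertion=False)[5])   # 4  (wrong)
print(shortest_paths(graph, 1, allow_reinsertion=True)[5])    # -3 (correct)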

I think Dijkstra's algorithm will work on a DAG only if there are no negative weights, because Dijkstra's algorithm cannot guarantee the right answer on a graph with negative edge weights. Sometimes it does give the correct result, but that depends on the particular graph.

A pure implementation of Dijkstra's will fail whenever there is a negative edge weight. The following variant will still work for the given problem scenario.
Every time an edge u -> v is relaxed, push the pair (v, new shorter distance to v from the source) into the queue. This means the queue can hold more than one copy of the same vertex, with different distances from the source.
Continue to update the distances until the queue is empty.
The above variant works even if negative edges are present, but not if there is a negative weight cycle. A DAG is acyclic, so we don't have to worry about negative cycles.
There is a more efficient way to calculate shortest path distances for DAGs, in O(V+E) time, using a topological ordering. More details can be found here

Related

How to make Bellman-Ford run in its worst case?

I am trying to make an optimized version of the Bellman-Ford algorithm run in its worst case. By optimized version I mean one that terminates as soon as a full round of edge relaxations produces no further update to any shortest distance.
For instance, a simple connected weighted directed graph with 7 vertices such that running the optimized Bellman-Ford algorithm from source vertex 0 uses at least 5 rounds to get the correct shortest paths.
The graph cannot contain a negative weight cycle.
The edges are relaxed grouped by source vertex, i.e. all outgoing edges from vertex 0 are processed, then the edges from vertex 1, and so on.
I know it has to do with cycles, but I am not very sure of the strategy for drawing a graph that meets the requirement.
Your version of the Bellman-Ford algorithm will need as many rounds as the largest number of edges on any shortest path in the graph.
Consider a directed graph with n vertices. Add a chain of edges 1 -> 2 -> 3 -> ... -> n, each having a small positive weight. Then you can add as many arbitrarily heavy edges as you want. It is clear that the shortest path from 1 to n uses n-1 edges, so your algorithm will need n-1 rounds of updates. One caveat for the relaxation order fixed in the question (all edges out of vertex 0 first, then vertex 1, and so on): if the chain is numbered in the same direction as that order, a single round already propagates every distance, so orient the chain against the processing order, e.g. 0 -> 6 -> 5 -> ... -> 1 for the 7-vertex case, as in the sketch below.
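Here is a small Python sketch of that construction for the 7-vertex case in the question (graph and weights are my own illustration), counting the rounds of the early-terminating Bellman-Ford:

INF = float('inf')

n = 7
# Chain oriented against the relaxation order (edges are relaxed grouped by
# their source vertex, vertex 0 first), so each round only carries the
# distance one edge further along the chain.
chain = [(0, 6, 1), (6, 5, 1), (5, 4, 1), (4, 3, 1), (3, 2, 1), (2, 1, 1)]
edges = sorted(chain)          # relax edges grouped by source vertex, ascending

dist = [INF] * n
dist[0] = 0
rounds = 0
while True:
    changed = False
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            dist[v] = dist[u] + w
            changed = True
    rounds += 1
    if not changed:            # the "optimized" early-termination criterion
        break

print(dist)    # [0, 6, 5, 4, 3, 2, 1]
print(rounds)  # 6: five rounds that update something, plus the final check round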
On a further note, there is an improved version of the Bellman-Ford algorithm, called the Shortest Path Faster Algorithm (SPFA). Although its worst-case running time is O(|V|*|E|), the same as Bellman-Ford, very few graphs make it reach that bound. In practice, you can expect an average runtime of O(|E|) (unproven).

Dijkstra and Negative Edges

I'm having trouble understanding why Dijkstra's algorithm does not work on acyclic directed graphs with negative edges. As I understand it, Dijkstra does a breadth-first traversal of the graph, relaxing when appropriate. For instance, consider the graph:
S->A (3)
S->B (4)
B->A (-2)
Where S is the source node. This is how I imagine it working:
1) Mark A with a distance of 3. Mark B with a distance of 4.
2) Recurse on A. Since A points to no nodes, do nothing.
3) Recurse on B. Since B points to A, check to see if B's distance + B->A is less than the current distance of A. 2 < 3, so mark A with a distance of 2.
Yet apparently this is not how it works, as the book I use gives this very graph to show why negatives DON'T work. I cannot follow the book's explanation. How would Dijkstra work on this graph and why would they not use the method I am imagining?
The problem is that once you process a node, you cannot update its distance afterwards, since that would require recursive updates and would throw off the whole thing (read: it would go against the algorithm's assumption that nodes are processed in order of monotonically increasing distance from the source; see the proof of correctness to see where that is required). So once A has been processed, you can't later change its distance, which means you can't have negative edges, since they might give you shorter distances to previously processed nodes. The assumption of monotonically increasing distances is why you mark nodes black once they have been processed and disregard black nodes afterwards. So even though in that graph A would have a distance of 2 from S, Dijkstra's algorithm gives you a distance of 3, since it disregards any edges leading to A after A has been processed.
EDIT: Here's what Dijkstra's algorithm would do:
1) Mark A with a distance of 3, put it into the queue of nodes awaiting processing; Mark B with a distance of 4, put it into the queue.
2) Take A out of the queue since it's at the front. Since A points to no nodes, don't update any distances, don't add anything to the queue. Mark A as processed.
3) Take B out of the queue. B points to A, but A is marked as already processed; ignore the edge B->A. Since there are no more outgoing edges from B, we're done.
EDIT 2:
Regarding DAGs, you don't need Dijkstra's algorithm at all. DAGs always have a topological ordering, which can be computed in O(|V| + |E|). Processing the vertices in topological order and updating distances with the rule d(w) = min{d(w), d(v) + c(v, w)}, where d(v) is the distance of vertex v from the source and c(v, w) is the length of edge (v, w), gives you the correct distances, again in O(|V| + |E|). Altogether you have two steps, each requiring O(|V| + |E|), so that is the total complexity of computing single-source shortest paths in a DAG with arbitrary edge lengths.
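A minimal Python sketch of this approach (the function name and graph representation are my own choices), run on the DAG from the first question at the top; it finds the expected distance of -3 to node 5:

from collections import deque

def dag_shortest_paths(adj, source):
    # adj: {vertex: [(neighbor, edge_length), ...]}; the graph must be a DAG.
    # Step 1: topological order via Kahn's algorithm, O(|V| + |E|).
    indeg = {v: 0 for v in adj}
    for u in adj:
        for v, _ in adj[u]:
            indeg[v] += 1
    queue = deque(v for v in adj if indeg[v] == 0)
    topo = []
    while queue:
        u = queue.popleft()
        topo.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)

    # Step 2: relax vertices in topological order, d(w) = min(d(w), d(v) + c(v, w)).
    dist = {v: float('inf') for v in adj}
    dist[source] = 0
    for v in topo:
        if dist[v] == float('inf'):
            continue
        for w, c in adj[v]:
            if dist[v] + c < dist[w]:
                dist[w] = dist[v] + c
    return dist

# 1->2 (3), 1->3 (2), 2->4 (-10), 4->3 (2), 3->5 (2)
print(dag_shortest_paths({1: [(2, 3), (3, 2)], 2: [(4, -10)],
                          4: [(3, 2)], 3: [(5, 2)], 5: []}, 1))
# {1: 0, 2: 3, 4: -7, 3: -5, 5: -3}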

Dijkstra with negative edges that leave the source node

Dijkstra's algorithm fails when a graph has edges with negative weights. However, there is an exception to this rule: if, in a directed acyclic graph, only the edges that leave the source node are negative (all the other edges are positive), then we can successfully use Dijkstra's algorithm.
Now my question is: what if, in the above exception, the graph has a cycle? I believe Dijkstra won't work, but I cannot come up with an example of a directed graph with cycles, whose only negative edges are those leaving the source node, on which Dijkstra fails. Can anyone suggest an example?
In the scenario you describe, Dijkstra's algorithm will work just fine.
The reason it fails in the general case with negative weights is that it greedily chooses which node to "close" at each step, and a closed node is never reopened.
Now, assume the source s has k outgoing edges, to k different nodes.
Let their order by tentative distance be v_1, v_2, ..., v_k (v_1 being the smallest). Note that for any v_i, v_j with i < j, there is no path from s to v_i through v_j with a "better" cost than the direct edge to v_i; thus the order in which these first nodes are investigated never changes (and since it doesn't change, no later node is closed before its shortest path has indeed been found).
Thus, overall, no harm is done: once a node is closed, you will never find a "shorter" path to it, since the negative edges only leave the source.
Here I assume that "source" in your question means a vertex with d_in(source) = 0, the same as a "source" in a DAG.
If you merely mean the vertex you start from, it could be a problem: look at a two-vertex graph with w(s,t) = -2 and w(t,s) = 1 - there is a negative cycle in the graph. So, for the above explanation to work, you must assume d_in(s) = 0.

Negative weights using Dijkstra's Algorithm

I am trying to understand why Dijkstra's algorithm will not work with negative weights. Reading an example on Shortest Paths, I am trying to figure out the following scenario:
      2
  A-------B
   \     /
  3 \   / -2
     \ /
      C
From the website:
Assuming the edges are all directed from left to right, if we start with A, Dijkstra's algorithm will choose the edge (A,x) minimizing d(A,A)+length(edge), namely (A,B). It then sets d(A,B)=2 and chooses another edge (y,C) minimizing d(A,y)+d(y,C); the only choice is (A,C) and it sets d(A,C)=3. But it never finds the shortest path from A to B, via C, with total length 1.
I cannot understand why, using the following implementation of Dijkstra, d[B] will not be updated to 1 (when the algorithm reaches vertex C, it will run a relax on B, see that d[B] equals 2, and therefore update its value to 1).
Dijkstra(G, w, s) {
    Initialize-Single-Source(G, s)
    S ← Ø
    Q ← V[G]    // priority queue keyed by d[v]
    while Q ≠ Ø do
        u ← Extract-Min(Q)
        S ← S ∪ {u}
        for each vertex v in Adj[u] do
            Relax(u, v)
}

Initialize-Single-Source(G, s) {
    for each vertex v ∈ V(G)
        d[v] ← ∞
        π[v] ← NIL
    d[s] ← 0
}

Relax(u, v) {
    // update only if we found a strictly shorter path
    if d[v] > d[u] + w(u,v)
        d[v] ← d[u] + w(u,v)
        π[v] ← u
        Update(Q, v)
}
Thanks,
Meir
The algorithm you have suggested will indeed find the shortest path in this graph, but not in all graphs in general. For example, consider a graph along these lines (weights chosen to match the trace below): A -> B (1), A -> C (0), A -> D (99), B -> C (1), D -> B (-300).
Let's trace through the execution of your algorithm.
First, you set d(A) to 0 and the other distances to ∞.
You then expand out node A, setting d(B) to 1, d(C) to 0, and d(D) to 99.
Next, you expand out C, with no net changes.
You then expand out B, which has no effect.
Finally, you expand D, which changes d(B) to -201.
Notice, though, that at the end of this d(C) is still 0, even though the shortest path to C has length -200. This means that your algorithm doesn't compute the correct distances to all the nodes. Moreover, even if you were to store back pointers saying how to get from each node to the start node A, you'd end up taking the wrong path back from C to A.
The reason for this is that Dijkstra's algorithm (and your algorithm) are greedy algorithms that assume that once they've computed the distance to some node, the distance found must be the optimal distance. In other words, the algorithm doesn't allow itself to take the distance of a node it has expanded and change what that distance is. In the case of negative edges, your algorithm, and Dijkstra's algorithm, can be "surprised" by seeing a negative-cost edge that would indeed decrease the cost of the best path from the starting node to some other node.
Note that Dijkstra works even with negative weights, if the graph has no negative cycles, i.e. cycles whose summed-up weight is less than zero.
Of course one might ask why, in the example made by templatetypedef, Dijkstra fails even though there are no negative cycles, in fact not even cycles. That is because he is using another stop criterion, which halts the algorithm as soon as the target node is reached (or all nodes have been settled once; he did not specify that exactly). In a graph without negative weights this works fine.
If one uses the alternative stop criterion, which stops the algorithm when the priority queue (heap) runs empty (this stop criterion was also used in the question), then Dijkstra will find the correct distances even for graphs with negative weights but without negative cycles.
However, in this case the usual asymptotic time bound of Dijkstra is lost, because a previously settled node can be reinserted into the heap when a better distance is found due to negative weights. This property is called label correcting.
TL;DR: The answer depends on your implementation. For the pseudo code you posted, it works with negative weights.
Variants of Dijkstra's Algorithm
The key is that there are three kinds of implementations of Dijkstra's algorithm, but all the answers under this question ignore the differences among these variants.
Using a nested for-loop to relax vertices. This is the easiest way to implement Dijkstra's algorithm. The time complexity is O(V^2).
Priority-queue/heap based implementation + NO re-entrance allowed, where re-entrance means a relaxed vertex can be pushed into the priority-queue again to be relaxed again later.
Priority-queue/heap based implementation + re-entrance allowed.
Version 1 & 2 will fail on graphs with negative weights (if you get the correct answer in such cases, it is just a coincidence), but version 3 still works.
The pseudo code posted under the original problem is the version 3 above, so it works with negative weights.
Here is a good reference from Algorithms (4th Edition), which says the following (and contains the Java implementations of versions 2 & 3 mentioned above):
Q. Does Dijkstra's algorithm work with negative weights?
A. Yes and no. There are two shortest paths algorithms known as Dijkstra's algorithm, depending on whether a vertex can be enqueued on the priority queue more than once. When the weights are nonnegative, the two versions coincide (as no vertex will be enqueued more than once). The version implemented in DijkstraSP.java (which allows a vertex to be enqueued more than once) is correct in the presence of negative edge weights (but no negative cycles) but its running time is exponential in the worst case. (We note that DijkstraSP.java throws an exception if the edge-weighted digraph has an edge with a negative weight, so that a programmer is not surprised by this exponential behavior.) If we modify DijkstraSP.java so that a vertex cannot be enqueued more than once (e.g., using a marked[] array to mark those vertices that have been relaxed), then the algorithm is guaranteed to run in E log V time but it may yield incorrect results when there are edges with negative weights.
For more implementation details and the connection between version 3 and the Bellman-Ford algorithm, please see this answer on Zhihu (it is also my answer, but in Chinese).
You did not use S anywhere in your algorithm (besides modifying it). The idea of Dijkstra is that once a vertex is in S, it will not be modified ever again. In this case, once B is inside S, you will not reach it again via C.
This fact ensures the complexity of O(E + V log V) [otherwise, you would repeat edges more than once, and vertices more than once].
In other words, the algorithm you posted might not run in O(E + V log V), as promised by Dijkstra's algorithm.
Since Dijkstra is a greedy approach, once a vertex is marked as visited it is never re-evaluated, even if another path with lower cost to reach it is found later. Such an issue can only happen when negative edges exist in the graph.
A greedy algorithm, as the name suggests, always makes the choice that seems to be the best at that moment. Assume that you have an objective function that needs to be optimized (either maximized or minimized) at a given point. A Greedy algorithm makes greedy choices at each step to ensure that the objective function is optimized. The Greedy algorithm has only one shot to compute the optimal solution so that it never goes back and reverses the decision.
Consider what happens if you go back and forth between B and C...voila
(relevant only if the graph is not directed)
Edited:
I believe the problem has to do with the fact that a path starting with A-C can only be better than A-B if negative weight edges exist; so it doesn't matter where you go after A-C: under the assumption of non-negative edge weights, it is impossible to find a path to B better than the edge A-B once you have chosen to reach B by first going through A-C.
"2) Can we use Dijksra’s algorithm for shortest paths for graphs with negative weights – one idea can be, calculate the minimum weight value, add a positive value (equal to absolute value of minimum weight value) to all weights and run the Dijksra’s algorithm for the modified graph. Will this algorithm work?"
This absolutely doesn't work unless all shortest paths have same length. For example given a shortest path of length two edges, and after adding absolute value to each edge, then the total path cost is increased by 2 * |max negative weight|. On the other hand another path of length three edges, so the path cost is increased by 3 * |max negative weight|. Hence, all distinct paths are increased by different amounts.
You can use Dijkstra's algorithm on a graph with negative edges (as long as there is no negative cycle), but you must allow a vertex to be visited multiple times, and that version loses its fast time complexity.
In that case I've found it practically better to use the SPFA algorithm, which uses a normal queue and can handle negative edges.
I will be just combining all of the comments to give a better understanding of this problem.
There can be two ways of using Dijkstra's algorithms :
Marking the nodes that have already found the minimum distance from the source (faster, since we won't revisit nodes whose shortest path has already been found)
Not marking the nodes that have already found the minimum distance from the source (a bit slower than the above)
Now the question arises: what if we don't mark the nodes, so that we can find shortest paths in graphs that contain negative weights?
The answer is simple. Consider a case where you only have negative weights in the graph, say a cycle 0 -> 1 (-1), 1 -> 2 (-4), 2 -> 0 (-3) (weights chosen to match the trace below).
Now, if you start from node 0 (the source), the steps will be (here I'm not marking the nodes):
0->0 as 0, 0->1 as inf, 0->2 as inf in the beginning
0->1 as -1
0->2 as -5
0->0 as -8 (since we are not marking nodes, the source itself gets relaxed again)
0->1 as -9 ... and so on
This loop will go on forever; therefore Dijkstra's algorithm fails to find the minimum distances when negative weights are involved (considering all cases).
That's why the Bellman-Ford algorithm is used to find shortest paths in the presence of negative weights: it runs a bounded number of rounds and can report a negative cycle instead of looping forever.
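For illustration, here is a minimal Bellman-Ford sketch in Python (names are my own), run on a cycle with the weights implied by the trace above; it performs at most |V| - 1 rounds and then reports the negative cycle instead of looping forever:

def bellman_ford(n, edges, source):
    # edges: list of (u, v, w). Returns (distances, has_negative_cycle).
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                     # at most |V| - 1 rounds
        changed = False
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    # One extra pass: any further improvement means a negative cycle exists.
    has_negative_cycle = any(dist[u] != INF and dist[u] + w < dist[v]
                             for u, v, w in edges)
    return dist, has_negative_cycle

# The all-negative cycle from the trace: 0->1 (-1), 1->2 (-4), 2->0 (-3)
print(bellman_ford(3, [(0, 1, -1), (1, 2, -4), (2, 0, -3)], 0))
# ([-16, -9, -13], True) -- the distances are meaningless once a negative cycle is reported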

I am trying to build a list of limitations of all graph algorithms

Single Source shortest Path
Dijkstra's - directed and undirected - works only for positive edge weights - cycles ??
Bellman Ford - directed - no cycles should exist
All source shortest path
Floyd Warshall - no info
Minimum Spanning Tree
( no info about edge weights or nature of graph or cycles)
Kruskal's
Prim's - undirected
Baruvka's
I'm not sure what the question is but here goes...
The classic implementation of Dijkstra's algorithm can only handle non-negative edge weights, but there is a way to make it work with negative edge costs: whenever you update a node's distance, put the updated node back into the queue. However, it's arguable whether this is really Dijkstra's or rather Bellman-Ford with a priority queue.
For example consider this graph:
1 - 2 (100)
2 - 3 (-200)
1 - 3 (50)
3 - 4 (100)
Classical Dijkstra would set D[1] = 0, D[2] = 100, D[3] = 50, D[4] = 150, D[3] = -100 and stop. However, when setting D[3] = -100, add 3 back into the queue and continue the algorithm. That will give D[4] = 0, which is correct. I'm not sure if this is considered "Dijkstra's algorithm" however.
As for Bellman-Ford, the graph doesn't necessarily have to be directed, and cycles can exist (only negative cost cycles are a problem; other cycles make no difference), just make sure that you detect them. A negative cycle is detected when you extract a node from the queue n times, where n is the number of nodes. You can do the same check to detect a cycle in the "modified Dijkstra's algorithm" I outlined above.
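Here is a sketch of that detection rule in Python (names are my own) for the queue-based variant: count how many times each node is taken off the queue, and report a negative cycle on the n-th extraction. It is shown on the four-node example above, first as given and then with an extra edge that creates a negative cycle:

from collections import deque

def spfa(n, adj, source):
    # adj: {u: [(v, w), ...]}, vertices 1..n. Returns (dist, None) or (None, "negative cycle").
    INF = float('inf')
    dist = {v: INF for v in range(1, n + 1)}
    dist[source] = 0
    in_queue = {v: False for v in range(1, n + 1)}
    pops = {v: 0 for v in range(1, n + 1)}      # how many times each node was extracted
    q = deque([source])
    in_queue[source] = True
    while q:
        u = q.popleft()
        in_queue[u] = False
        pops[u] += 1
        if pops[u] >= n:                        # extracted n times -> negative cycle
            return None, "negative cycle"
        for v, w in adj.get(u, []):
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                if not in_queue[v]:
                    q.append(v)
                    in_queue[v] = True
    return dist, None

# The example above, taken as directed edges: 1->2 (100), 2->3 (-200), 1->3 (50), 3->4 (100)
g = {1: [(2, 100), (3, 50)], 2: [(3, -200)], 3: [(4, 100)]}
print(spfa(4, g, 1))       # ({1: 0, 2: 100, 3: -100, 4: 0}, None)
g[4] = [(2, -100)]         # add 4 -> 2 (-100): creates the negative cycle 2 -> 3 -> 4 -> 2
print(spfa(4, g, 1))       # (None, 'negative cycle')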
Floyd-Warshall - the cost is cubic in the number of nodes, making it inefficient for anything but very small graphs. It assumes there are no negative cost cycles, but you can use it to detect such cycles; see Wikipedia.
MST - use Kruskal's when the number of edges is closer to O(n) than to O(n^2); use Prim's otherwise. Both work on any undirected graph, even one containing negative edge weights and cycles.
Another shortest paths algorithm I personally like a lot is Dial's algorithm. I like to think of it as counting sort on graphs. Also read this rather exhaustive paper.
A* (A star) might be one of the optimal choices among graph algorithms. However, as explained in the Wikipedia article:
The time complexity of A* depends on the heuristic. In the worst case, the number of nodes expanded is exponential in the length of the solution (the shortest path), but it is polynomial when the search space is a tree
Meaning that the calculation time won't always be the same; it depends on the graph.
