Negative weights using Dijkstra's Algorithm

I am trying to understand why Dijkstra's algorithm will not work with negative weights. Reading an example on Shortest Paths, I am trying to figure out the following scenario:
    2
A-------B
 \     /
3 \   / -2
   \ /
    C
From the website:
Assuming the edges are all directed from left to right, if we start with A, Dijkstra's algorithm will choose the edge (A,x) minimizing d(A,A)+length(edge), namely (A,B). It then sets d(A,B)=2 and chooses another edge (y,C) minimizing d(A,y)+d(y,C); the only choice is (A,C) and it sets d(A,C)=3. But it never finds the shortest path from A to B, via C, with total length 1.
I cannot understand why, with the following implementation of Dijkstra, d[B] will not be updated to 1 (when the algorithm reaches vertex C, it will run a relax on B, see that d[B] equals 2, and therefore update its value to 1).
Dijkstra(G, w, s) {
   Initialize-Single-Source(G, s)
   S ← Ø
   Q ← V[G]   // priority queue keyed by d[v]
   while Q ≠ Ø do
      u ← Extract-Min(Q)
      S ← S ∪ {u}
      for each vertex v in Adj[u] do
         Relax(u, v)
}

Initialize-Single-Source(G, s) {
   for each vertex v ∈ V(G)
      d[v] ← ∞
      π[v] ← NIL
   d[s] ← 0
}

Relax(u, v) {
   // update only if we found a strictly shorter path
   if d[v] > d[u] + w(u,v)
      d[v] ← d[u] + w(u,v)
      π[v] ← u
      Update(Q, v)
}
Thanks,
Meir

The algorithm you have suggested will indeed find the shortest path in this graph, but not in all graphs in general. For example, consider the following graph (the edge weights can be read off from the trace below): the directed edges are A→B with weight 1, A→C with weight 0, A→D with weight 99, D→B with weight -300, and B→C with weight 1.
Let's trace through the execution of your algorithm.
First, you set d(A) to 0 and the other distances to ∞.
You then expand out node A, setting d(B) to 1, d(C) to 0, and d(D) to 99.
Next, you expand out C, with no net changes.
You then expand out B, which has no effect.
Finally, you expand D, which changes d(B) to -201.
Notice, though, that at the end of this d(C) is still 0, even though the shortest path to C has length -200. This means that your algorithm doesn't compute the correct distances to all the nodes. Moreover, even if you were to store back pointers saying how to get from each node to the start node A, you'd end up taking the wrong path back from C to A.
The reason for this is that Dijkstra's algorithm (and yours) is a greedy algorithm: it assumes that once it has computed the distance to some node, that distance must be optimal. In other words, the algorithm never allows itself to revisit a node it has already expanded and lower its distance. With negative edges, your algorithm, and Dijkstra's algorithm, can be "surprised" by a negative-cost edge that would indeed decrease the cost of the best path from the starting node to some other node.
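For concreteness, here is a short Python sketch (my own illustration, not the answerer's code) of this settled-set variant run on the graph above; it ends with d(C) = 0 even though the true shortest path A→D→B→C has length -200:

import heapq

def dijkstra_settled(graph, source):
    """Textbook Dijkstra: a node is never re-expanded once it is settled."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    settled = set()
    pq = [(0, source)]                      # (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if u in settled:
            continue                        # u was already expanded; ignore stale entries
        settled.add(u)
        for v, w in graph[u]:
            if d + w < dist[v]:             # relax, but a settled node is never re-expanded
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# The example graph: edge weights as implied by the trace above.
graph = {
    'A': [('B', 1), ('C', 0), ('D', 99)],
    'B': [('C', 1)],
    'C': [],
    'D': [('B', -300)],
}
print(dijkstra_settled(graph, 'A'))   # {'A': 0, 'B': -201, 'C': 0, 'D': 99} -- C is wrong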

Note that Dijkstra's algorithm works even with negative weights, if the graph has no negative cycles, i.e. cycles whose summed-up weight is less than zero.
Of course one might ask why, in the example made by templatetypedef, Dijkstra fails even though there are no negative cycles, in fact not even cycles. That is because he is using another stop criterion, which halts the algorithm as soon as the target node is reached (or all nodes have been settled once; he did not specify that exactly). In a graph without negative weights this works fine.
If one uses the alternative stop criterion, which stops the algorithm when the priority queue (heap) runs empty (this stop criterion was also used in the question), then Dijkstra will find the correct distances even for graphs with negative weights but without negative cycles.
However, in this case the asymptotic time bound of Dijkstra for graphs without negative cycles is lost. This is because a previously settled node can be reinserted into the heap when a better distance is found due to negative weights. This property is called label correcting.
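A minimal Python sketch of this label-correcting variant (re-insertion allowed, stop only when the heap runs empty), again my own illustration, run on the graph from the previous answer:

import heapq

def dijkstra_label_correcting(graph, source):
    """Dijkstra with re-insertion: correct for negative edges (no negative cycles),
    but a node may be popped and re-expanded several times."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:                               # stop only when the heap runs empty
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                        # stale entry; a better distance is already known
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w             # an already-settled node may be "re-opened" here
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {
    'A': [('B', 1), ('C', 0), ('D', 99)],
    'B': [('C', 1)],
    'C': [],
    'D': [('B', -300)],
}
print(dijkstra_label_correcting(graph, 'A'))  # {'A': 0, 'B': -201, 'C': -200, 'D': 99}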

TL;DR: The answer depends on your implementation. For the pseudo code you posted, it works with negative weights.
Variants of Dijkstra's Algorithm
The key is that there are 3 kinds of implementations of Dijkstra's algorithm, but all the answers under this question ignore the differences among these variants.
1. Using a nested for-loop to relax vertices. This is the easiest way to implement Dijkstra's algorithm; the time complexity is O(V^2).
2. Priority-queue/heap based implementation + NO re-entrance allowed, where re-entrance means a relaxed vertex can be pushed into the priority queue again to be relaxed again later.
3. Priority-queue/heap based implementation + re-entrance allowed.
Versions 1 and 2 will fail on graphs with negative weights (if you get the correct answer in such cases, it is just a coincidence), but version 3 still works.
The pseudo code posted under the original question is version 3 above, so it works with negative weights.
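For illustration, here is a Python transcription of the question's pseudo code under the version-3 reading used in this answer, i.e. treating Update(Q, v) as (re)inserting v into the queue when its distance improves; on the 2/3/-2 graph from the question it reports d[B] = 1:

import heapq, math

def dijkstra_v3(G, w, s):
    d = {v: math.inf for v in G}
    pi = {v: None for v in G}
    d[s] = 0
    Q = [(0, s)]                               # lazy priority queue keyed by d[v]
    while Q:
        du, u = heapq.heappop(Q)               # Extract-Min
        if du > d[u]:
            continue                           # outdated entry
        for v in G[u]:                         # Relax(u, v)
            if d[v] > d[u] + w[(u, v)]:
                d[v] = d[u] + w[(u, v)]
                pi[v] = u
                heapq.heappush(Q, (d[v], v))   # Update(Q, v), re-entrance allowed
    return d, pi

G = {'A': ['B', 'C'], 'B': [], 'C': ['B']}
w = {('A', 'B'): 2, ('A', 'C'): 3, ('C', 'B'): -2}
d, pi = dijkstra_v3(G, 'A', w)
print(d)    # {'A': 0, 'B': 1, 'C': 3}
print(pi)   # {'A': None, 'B': 'C', 'C': 'A'}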
Here is a good reference from Algorithms (4th edition), which says the following (and contains the Java implementations of versions 2 & 3 mentioned above):
Q. Does Dijkstra's algorithm work with negative weights?
A. Yes and no. There are two shortest paths algorithms known as Dijkstra's algorithm, depending on whether a vertex can be enqueued on the priority queue more than once. When the weights are nonnegative, the two versions coincide (as no vertex will be enqueued more than once). The version implemented in DijkstraSP.java (which allows a vertex to be enqueued more than once) is correct in the presence of negative edge weights (but no negative cycles) but its running time is exponential in the worst case. (We note that DijkstraSP.java throws an exception if the edge-weighted digraph has an edge with a negative weight, so that a programmer is not surprised by this exponential behavior.) If we modify DijkstraSP.java so that a vertex cannot be enqueued more than once (e.g., using a marked[] array to mark those vertices that have been relaxed), then the algorithm is guaranteed to run in E log V time but it may yield incorrect results when there are edges with negative weights.
For more implementation details and the connection between version 3 and the Bellman-Ford algorithm, please see this answer on Zhihu. It is also my answer (but in Chinese). Currently I don't have time to translate it into English; I would really appreciate it if someone could do this and edit this answer on Stack Overflow.

You did not use S anywhere in your algorithm (besides modifying it). The idea of Dijkstra is that once a vertex is in S, it will never be modified again. In this case, once B is inside S, you will not reach it again via C.
This fact is what ensures the complexity of O(E + V log V) [otherwise, you would process edges more than once, and vertices more than once].
In other words, the algorithm you posted might not be in O(E + V log V), as promised by Dijkstra's algorithm.

Since Dijkstra is a greedy approach, once a vertex is marked as visited it will never be re-evaluated, even if a cheaper path to it is found later on. Such an issue can only occur when negative edges exist in the graph.
A greedy algorithm, as the name suggests, always makes the choice that seems best at that moment. Assume that you have an objective function that needs to be optimized (either maximized or minimized) at a given point. A greedy algorithm makes greedy choices at each step to ensure that the objective function is optimized. A greedy algorithm has only one shot to compute the optimal solution, so it never goes back and reverses a decision.

Consider what happens if you go back and forth between B and C...voila
(relevant only if the graph is not directed)
Edited:
I believe the problem has to do with the fact that a path of the form AC* can only be better than AB if negative-weight edges exist; it doesn't matter where you go after AC, because under the assumption of non-negative edge weights it is impossible to find a path better than AB once you have chosen to reach B after going through AC.

"2) Can we use Dijksra’s algorithm for shortest paths for graphs with negative weights – one idea can be, calculate the minimum weight value, add a positive value (equal to absolute value of minimum weight value) to all weights and run the Dijksra’s algorithm for the modified graph. Will this algorithm work?"
This absolutely doesn't work unless all shortest paths have the same number of edges. For example, given a shortest path of two edges, after adding the absolute value of the most negative weight to each edge, the total path cost is increased by 2 * |most negative weight|. Another path with three edges, on the other hand, has its cost increased by 3 * |most negative weight|. Hence, distinct paths are increased by different amounts.
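A tiny made-up Python example (the graph and numbers are my own) of why the shift changes the answer: a two-edge path gains 2·|shift| while a one-edge path gains only 1·|shift|:

# Hypothetical 3-node graph: shifting by |most negative weight| changes which path is shortest.
edges = {('A', 'B'): -2, ('B', 'C'): -2, ('A', 'C'): -3}
shift = abs(min(edges.values()))                     # 3

def cost(path, extra=0):
    return sum(edges[(u, v)] + extra for u, v in zip(path, path[1:]))

print(cost('ABC'), cost('AC'))                       # -4 -3  -> A->B->C is shorter
print(cost('ABC', shift), cost('AC', shift))         #  2  0  -> now A->C looks "shorter"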

You can use Dijkstra's algorithm with negative edges, as long as there is no negative cycle, but you must allow a vertex to be visited multiple times, and that version will lose its fast time complexity.
In that case, I've seen that in practice it's better to use the SPFA algorithm, which uses a normal queue and can handle negative edges.
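For reference, a short Python sketch of SPFA (the queue-based Bellman-Ford variant mentioned above); this illustrative version assumes no negative cycle is reachable from the source:

from collections import deque

def spfa(graph, source):
    """SPFA: Bellman-Ford with a FIFO queue; re-enqueues a vertex whenever its distance improves."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    in_queue = {v: False for v in graph}
    q = deque([source])
    in_queue[source] = True
    while q:
        u = q.popleft()
        in_queue[u] = False
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                if not in_queue[v]:
                    q.append(v)
                    in_queue[v] = True
    return dist

graph = {'A': [('B', 2), ('C', 3)], 'B': [], 'C': [('B', -2)]}
print(spfa(graph, 'A'))   # {'A': 0, 'B': 1, 'C': 3}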

I will be just combining all of the comments to give a better understanding of this problem.
There can be two ways of using Dijkstra's algorithm:
Marking the nodes that have already been assigned their minimum distance from the source (faster, since we won't be revisiting nodes whose shortest path has already been found)
Not marking the nodes that have already been assigned their minimum distance from the source (a bit slower than the above)
Now the question arises: what if we don't mark the nodes, so that we can find shortest paths including those containing negative weights?
The answer is simple. Consider a case where you only have negative weights in the graph, for example the directed cycle 0 → 1 → 2 → 0 with edge weights -1, -4 and -3 respectively (as can be read off from the trace below):
Now, if you start from node 0 (the source), the steps are (here I'm not marking the nodes):
0->0 as 0, 0->1 as inf , 0->2 as inf in the beginning
0->1 as -1
0->2 as -5
0->0 as -8 (since we are not marking nodes, the source gets relaxed again)
0->1 as -9 .. and so on
This loop will go on forever; therefore Dijkstra's algorithm fails to find the minimum distances in the presence of negative weights (in the general case).
That's why the Bellman-Ford algorithm is used to find shortest paths when there are negative weights: it stops the loop and reports when there is a negative cycle.
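For comparison, a compact Python sketch of Bellman-Ford with negative-cycle detection, run on the 0 → 1 → 2 → 0 cycle above:

def bellman_ford(n, edges, source):
    """edges: list of (u, v, w). Returns (dist, has_negative_cycle)."""
    dist = [float('inf')] * n
    dist[source] = 0
    for _ in range(n - 1):                      # |V| - 1 rounds of relaxation
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra round: any further improvement means a reachable negative cycle,
    # in which case the reported distances should not be trusted.
    has_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_cycle

edges = [(0, 1, -1), (1, 2, -4), (2, 0, -3)]
print(bellman_ford(3, edges, 0))                # second value is True: negative cycle detected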

Related

Why priority-queue based Dijkstra shortest-path algorithm cannot work for negative-weights graph? [duplicate]

Can somebody tell me why Dijkstra's algorithm for single-source shortest paths assumes that the edges must be non-negative?
I am talking only about edges, not about negative-weight cycles.
Recall that in Dijkstra's algorithm, once a vertex is marked as "closed" (and out of the open set), the algorithm has found the shortest path to it and will never have to develop this node again - it assumes the path developed to this vertex is the shortest.
But with negative weights - it might not be true. For example:
      A
     / \
    /   \
   /     \
  5       2
 /         \
B--(-10)-->C
V={A,B,C} ; E = {(A,C,2), (A,B,5), (B,C,-10)}
Dijkstra run from A will first develop C, and will later fail to find A->B->C.
EDIT - a bit deeper explanation:
Note that this is important, because in each relaxation step, the algorithm assumes the "cost" to the "closed" nodes is indeed minimal, and thus the node that will next be selected is also minimal.
The idea is: if we have a vertex in open whose cost is minimal, then by adding any positive number to any vertex, the minimality will never change.
Without the constraint on positive numbers - the above assumption is not true.
Since we do "know" each vertex which was "closed" is minimal - we can safely do the relaxation step - without "looking back". If we do need to "look back" - Bellman-Ford offers a recursive-like (DP) solution of doing so.
Consider the graph shown below with the source as Vertex A. First try running Dijkstra’s algorithm yourself on it.
When I refer to Dijkstra's algorithm in my explanation I will be talking about Dijkstra's algorithm as implemented below.
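The implementation being referred to is not reproduced in this copy, but the line references below match the standard priority-queue pseudocode on Wikipedia reasonably well (line 13/14: extract and remove the minimum-distance vertex from Q; line 16: loop over the neighbours of u that are still in Q; line 18: the alt < dist[v] comparison). A Python sketch of that implementation, with comments marking those lines, could look like this:

import math

def dijkstra(graph, source):
    dist = {v: math.inf for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    Q = set(graph)
    while Q:
        u = min(Q, key=lambda v: dist[v])    # "line 13": pick the vertex in Q with minimum dist
        Q.remove(u)                          # "line 14": remove it from Q (it is now "visited")
        for v, w in graph[u]:
            if v not in Q:
                continue                     # "line 16": only neighbours still in Q are considered
            alt = dist[u] + w
            if alt < dist[v]:                # "line 18": is the path via u shorter?
                dist[v] = alt
                prev[v] = u
    return dist, prev

# The graph from the figure above: A->B (5), A->C (2), B->C (-10).
graph = {'A': [('B', 5), ('C', 2)], 'B': [('C', -10)], 'C': []}
print(dijkstra(graph, 'A')[0])   # {'A': 0, 'B': 5, 'C': 2}  (true shortest A->C is -5)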
So starting out, the values (the distance from the source to the vertex) initially assigned to each vertex are dist[A] = 0, dist[B] = ∞ and dist[C] = ∞.
We first extract the vertex in Q = [A, B, C] which has the smallest value, i.e. A, after which Q = [B, C]. Note that A has a directed edge to B and to C; since both of them are in Q, we update both of those values: dist[B] = 5 and dist[C] = 2.
Now we extract C (as 2 < 5), after which Q = [B]. Note that C is connected to nothing, so the line-16 loop doesn't run.
Finally we extract B, after which Q = []. Note that B has a directed edge to C, but C isn't present in Q, therefore we again don't enter the for loop in line 16.
So we end up with the distances dist[A] = 0, dist[B] = 5, dist[C] = 2.
Note how this is wrong, as the shortest distance from A to C is 5 + (-10) = -5, when you go A -> B -> C.
So for this graph Dijkstra's Algorithm wrongly computes the distance from A to C.
This happens because Dijkstra's Algorithm does not try to find a shorter path to vertices which are already extracted from Q.
What the line-16 loop is doing is taking the vertex u and saying "hey, it looks like we can go to v from the source via u; is that (alt, or alternative) distance any better than the current dist[v] we've got? If so, let's update dist[v]".
Note that in line 16 they check all neighbours v of u (i.e. a directed edge exists from u to v) which are still in Q. In line 14 they remove visited nodes from Q. So if x is a visited neighbour of u, the path source -> u -> x is not even considered as a possible shorter way from the source to x.
In our example above, C was a visited neighbour of B, thus the path A -> B -> C was not considered, leaving the current shortest path to C unchanged.
This is actually useful if the edge weights are all positive numbers, because then we wouldn't waste our time considering paths that can't be shorter.
So I claim that when running this algorithm, if x is extracted from Q before y, then it is not possible to later find a shorter path to x through y. Let me explain this with an example:
as y has just been extracted and x had been extracted before it, then dist[y] >= dist[x], because otherwise y would have been extracted before x (line 13: minimum distance first).
And as we already assumed that the edge weights are positive, i.e. length(y,x) > 0, the alternative distance (alt) via y is always sure to be greater, i.e. dist[y] + length(y,x) > dist[x]. So the value of dist[x] would not have been updated even if y were considered as a way to reach x; thus we conclude that it makes sense to only consider the neighbours of y which are still in Q (note the comment in line 16).
But this hinges on our assumption of positive edge lengths; if length(y,x) < 0 then, depending on how negative that edge is, we might replace dist[x] after the comparison in line 18.
So any dist[x] calculation we make may be incorrect if x is removed from Q before all vertices v - such that x is a neighbour of v with a negative edge connecting them - are removed.
Because each of those v vertices is the second-to-last vertex on a potential "better" path from the source to x, which is discarded by Dijkstra's algorithm.
So in the example I gave above, the mistake was that C was removed (extracted) before B was removed, while C was a neighbour of B with a negative edge!
Just to clarify, B and C are A's neighbours. B has a single neighbour C and C has no neighbours. length(a,b) is the edge length between the vertices a and b.
Dijkstra's algorithm assumes paths can only become 'heavier', so that if you have a path from A to B with a weight of 3, and a path from A to C with a weight of 3, there's no way you can add an edge and get from A to B through C with a weight of less than 3.
This assumption makes the algorithm faster than algorithms that have to take negative weights into account.
Correctness of Dijkstra's algorithm:
We have 2 sets of vertices at any step of the algorithm. Set A consists of the vertices to which we have computed the shortest paths. Set B consists of the remaining vertices.
Inductive Hypothesis: At each step we will assume that all previous iterations are correct.
Inductive Step: When we add a vertex V to the set A and set the distance to be dist[V], we must prove that this distance is optimal. If this is not optimal then there must be some other path to the vertex V that is of shorter length.
Suppose this other path goes through some vertex X.
Now, since dist[V] <= dist[X], any other path to V will be at least dist[V] in length, unless the graph has negative edge lengths.
Thus, for Dijkstra's algorithm to work, the edge weights must be non-negative.
Dijkstra's algorithm assumes that all edges are positively weighted, and this assumption helps it run faster (O(E·log V)) than algorithms that have to take the possibility of negative edges into account (e.g. the Bellman-Ford algorithm, with complexity O(V·E), up to O(V^3) on dense graphs).
This algorithm won't give the correct result in the following case (with a negative edge), where A is the source vertex:
Here, the shortest distance to vertex D from source A should have been 6, but according to Dijkstra's method the computed shortest distance is 7, which is incorrect.
Also, Dijkstra's algorithm may sometimes give the correct solution even if there are negative edges. The following is an example of such a case:
However, it will never detect a negative cycle, and it always produces a result; that result will necessarily be incorrect if a negative-weight cycle is reachable from the source, since in such a case no shortest path exists in the graph from the source vertex.
Try Dijkstra's algorithm on the following graph, assuming A is the source node and D is the destination, to see what is happening:
Note that you have to follow strictly the algorithm definition and you should not follow your intuition (which tells you the upper path is shorter).
The main insight here is that the algorithm only looks at the directly connected edges and takes the smallest of these edges. The algorithm does not look ahead. You can modify this behaviour, but then it is not Dijkstra's algorithm anymore.
Recall that in Dijkstra's algorithm, once a vertex is marked as "closed" (and out of the open set), the algorithm assumes that any path developed from it will only lead to a greater distance; it therefore considers the shortest path to it found and will never develop this node again. This doesn't hold true in the presence of negative weights.
The other answers so far demonstrate pretty well why Dijkstra's algorithm cannot handle negative weights on paths.
But the question itself is perhaps based on a wrong understanding of the weight of paths. If negative weights were allowed in pathfinding algorithms in general, then you would get permanent loops that would not stop.
Consider this:
A <- 5 -> B <- (-1) -> C <- 5 -> D
What is the optimal path between A and D?
Any pathfinding algorithm would have to continuously loop between B and C, because doing so reduces the weight of the total path. So allowing negative weights for a connection would render any pathfinding algorithm moot, except perhaps if you limit each connection to being used only once.
So, to explain this in more detail, consider the following paths and weights:
Path | Total weight
ABCD | 9
ABCBCD | 7
ABCBCBCD | 5
ABCBCBCBCD | 3
ABCBCBCBCBCD | 1
ABCBCBCBCBCBCD | -1
...
So, what's the perfect path? Any time the algorithm adds a BC step, it reduces the total weight by 2.
So the optimal path is A (BC) D with the BC part being looped forever.
Since Dijkstra's goal is to find the optimal path (not just any path), it, by definition, cannot work with negative weights, since it cannot find the optimal path.
Dijkstra will actually not loop, since it keeps a list of nodes that it has visited. But it will not find the "perfect" path; it will just return some path.
Adding a few points to the explanation, on top of the previous answers, for the following simple example:
Dijkstra's algorithm, being greedy, first finds the minimum-distance vertex C from the source vertex A greedily and assigns the distance d[C] (from vertex A) to the weight of the edge AC.
The underlying assumption is that since C was picked first, there is no other vertex V in the graph s.t. w(AV) < w(AC), otherwise V would have been picked instead of C, by the algorithm.
By the above logic, w(AC) <= w(AV) for every vertex V different from A and C. Now, clearly any other path P that starts at A and ends in C going through V, i.e. P = A -> V -> ... -> C, will be longer (at least 2 edges), and the total cost of P is the sum of the edges on it, i.e. cost(P) >= w(AV) >= w(AC), assuming all edges on P have non-negative weights, so that
C can be safely removed from the queue Q, since d[C] can never get smaller / be relaxed further under this assumption.
Obviously, the above assumption does not hold when some edge on P is negative, in which case d[C] may decrease further; but the algorithm cannot take care of this scenario, since by that time it has already removed C from the queue Q.
In an unweighted graph
Dijkstra can work even without a set or a priority queue: even if you just use a STACK, the algorithm will work, but with a stack its execution time will increase.
Dijkstra doesn't repeat a node once it has been processed, because it always takes the minimum route, which means that if you come to that node via any other path, that path will certainly have a greater distance.
For example -
      (0)
     /   \
    6     5
   /       \
 (2)       (1)
   \       /
    4     7
     \   /
      (9)
Here, once you get to node 1 via 0 (as 5 is the minimum of 5 and 6), there is no way you can get a smaller value for reaching 1, because all other paths will add to the 5 and not decrease it.
Moreover, with negative weights it will fall into an infinite loop.
In an undirected graph
Dijkstra will fall into a loop if the graph has a negative weight.
In a directed graph
Dijkstra will give the RIGHT ANSWER, except in the case of a negative cycle.
Those who say Dijkstra never visits a node more than once are 500% wrong; also, those who say Dijkstra can't work with negative weights at all are wrong.

Dijkstra Algorithm for Negative Weights

Now I know Dijkstra won't work if the graph contains negative-weight edges. But there is an exception to this: only the edges that leave the source can have negative weights, while all the other edges must be positive.
I want to be able to prove this. I don't know how to get started. I've made some diagrams, and in all of them Dijkstra ended up working perfectly, but I don't understand how I can prove this.
So what I actually want is for someone to prove that Dijkstra will work in this case (only the outgoing edges from the source are negative), or that it won't.
Plus the graph can't contain any cycles which involve the source.
First note that if there is no cycle involving the source, we can trim from the graph any edges incoming to the source without affecting the result: the vertices that lead into the source are never reached by Dijkstra's algorithm anyway (otherwise there would be a cycle involving the source).
From now on, we'll assume there is no incoming edge to the source.
Note that the claim needed for the relaxation step (the triangle inequality), d(u,v) <= d(u,x) + w(x,v), must hold. For each edge with w(x,v) < 0, the tail x is the source, and the only u for which there exists a path from u to x is u = x = source itself, via the empty path.
This leads us to d(u,x) + w(x,v) = 0 + w(x,v) = w(u,v) >= d(u,v) (where u = x is the source), so the inequality still holds.
You can simply convert the general initial instance D into a feasible instance D' (i.e., one containing only positive edge values) and then show that Dijkstra's algorithm behaves the same on both instances. From this you can conclude that Dijkstra's algorithm is correct on instances of the form of D.
The following are the details:
You convert D to D' by inserting a "super source" s' which has only one edge: an outgoing edge to s with length d(s',s) = -min{d(s,x) | x is a node}. This instance is equivalent to one where d(s',s) is added to all the outgoing edges of s, which is what we call D'; it has only non-negative edge weights.
The behaviour of Dijkstra's algorithm on D and D' is the same, since the ordering of the distances in the queue stays the same, and therefore the same nodes are settled with the same (shifted) distances, etc. Note that in this argument we implicitly use the fact that there is no (negative) cycle containing s.
Because we know that Dijkstra's algorithm on D' finds the correct distances for the transformed graph and the structure of the shortest paths of D and D' must be the same we can conclude that Dijkstra's algorithm on instances of the form of D is correct.
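Here is a small Python sketch of that construction (my own illustration): shift the source's outgoing edges by the most negative outgoing weight, run ordinary Dijkstra, then undo the shift on the reported distances. It assumes negative weights occur only on edges leaving the source and that no cycle contains the source:

import heapq

def dijkstra(graph, source):
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def dijkstra_negative_source_edges(graph, s):
    # Shift all outgoing edges of s so they become non-negative ...
    shift = max(0, -min((w for _, w in graph[s]), default=0))
    shifted = dict(graph)
    shifted[s] = [(v, w + shift) for v, w in graph[s]]
    dist = dijkstra(shifted, s)
    # ... and undo the shift: every path from s uses exactly one outgoing edge of s.
    return {v: (d - shift if v != s else d) for v, d in dist.items()}

# Hypothetical example: only the edges leaving S are negative.
graph = {'S': [('A', -4), ('B', -1)], 'A': [('B', 2)], 'B': [('A', 1)]}
print(dijkstra_negative_source_edges(graph, 'S'))   # {'S': 0, 'A': -4, 'B': -2}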
Note: you can always use a so-called Johnson shift to make Dijkstra's algorithm perform correctly on graphs without negative cycles. You could also use a Johnson shift for the proof, but then it wouldn't be self-contained.

Dijkstra's Algorithm for Negative Weights

Okay, first of all I know Dijkstra does not work for negative weights and we can use Bellman-Ford instead of it. But in a problem I was given, it states that all the edges have weights from 0 to 1 (0 and 1 not included), and the cost of a path is actually the product of the edge weights.
So what I was thinking is to just take the log. Now all the edge weights are negative. I know Dijkstra won't work for negative weights, but in this case all the edges are negative, so can't we do something so that Dijkstra would work?
I thought of multiplying all the weights by -1, but then the shortest path becomes the longest path.
So is there any way I can avoid the Bellman-Ford algorithm in this case?
The exact question is: "Suppose for some application, the cost of a path is equal to the product all the weights of the edges in the path. How would you use Dijkstra's algorithm in this case? All the weights of the edges are from 0 to 1 (0 and 1 are not inclusive)."
If all the weights of the graph are in the range (0, 1), then any cycle in the graph has weight less than 1, and thus (if there are cycles) you will be stuck going around one for ever, since every pass around the cycle reduces the total weight of the path. Probably you have misunderstood the problem, and you either want to find the longest path, or you are not allowed to visit the same vertex twice. Anyway, in the first case Dijkstra's algorithm is definitely applicable, even without the log modification. And I am pretty sure the second case cannot be solved with polynomial complexity.
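If the intended problem is actually the maximum-product path (a most-reliable-path reading, which is my assumption here), the classic trick is to minimize the sum of -log(w); since every w is in (0, 1), the transformed weights are strictly positive and plain Dijkstra applies:

import heapq, math

def most_reliable_path(graph, source):
    """Maximize the product of edge weights in (0,1) by minimizing the sum of -log(w)."""
    cost = {v: math.inf for v in graph}
    cost[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        c, u = heapq.heappop(pq)
        if c > cost[u]:
            continue
        for v, w in graph[u]:
            nc = c - math.log(w)          # -log(w) > 0 because 0 < w < 1
            if nc < cost[v]:
                cost[v] = nc
                heapq.heappush(pq, (nc, v))
    return {v: math.exp(-c) for v, c in cost.items()}   # convert back to products

graph = {'A': [('B', 0.5), ('C', 0.9)], 'B': [], 'C': [('B', 0.8)]}
print(most_reliable_path(graph, 'A'))   # approx. {'A': 1.0, 'B': 0.72, 'C': 0.9}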
So you want to use a function, let's say F, that you will apply to the weights of the original graph and then with Dijkstra's algorithm you'll find the shortest product path. Let's also consider the following graph that we start from node A and where 0 < x < y < 1:
In the above graph F(x) must be smaller than F(y) for Dijkstra's algorithm to output correctly the shortest paths from A.
Now, let's take a slightly different graph that we start again from node A:
Then how will Dijkstra's algorithm work?
Since F(x) < F(y) then we will select node B at the next step. Then we'll visit the remaining node C. Dijkstra's algorithm will output that the shortest path from A to B is A -> B and the shortest path from A to C is A -> C.
But the shortest path from A to B is A -> C -> B with cost x * y < x.
This means we can't find a weight transformation function and expect Dijkstra's algorithm to work in every case.
You wrote:
I though of multiplying all the weights by -1 but then the shortest
path becomes the longest path.
To switch between the shortest and the longest path, invert the weights: 1/3 becomes 3, 5 becomes 1/5, and so on.
If your graph has cycles, no shortest path algorithm will find an answer, because those cycles will always be "negative cycles", as Rontogiannis Aristofanis pointed out.
If your graph doesn't have cycles, you don't have to use Dijkstra at all.
If it is directed, it is a DAG and there are linear-time shortest path algorithms.
If it is undirected, it is a tree, and it's trivial to find shortest paths in trees. And if your graph is directed, even without cycles, Dijkstra still won't work, for the same reason it doesn't work on graphs with negative edges.
In all cases, Dijkstra is a terrible choice of algorithm for your problem.
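For completeness, here is a sketch of the linear-time DAG shortest-path approach mentioned above (process the vertices in topological order and relax their outgoing edges); it tolerates negative weights, e.g. log-transformed ones, as long as the graph is acyclic:

from collections import deque

def dag_shortest_paths(graph, source):
    """Single-source shortest paths in a DAG in O(V + E); negative weights allowed."""
    # Kahn's algorithm for a topological order.
    indeg = {v: 0 for v in graph}
    for u in graph:
        for v, _ in graph[u]:
            indeg[v] += 1
    order, q = [], deque(v for v in graph if indeg[v] == 0)
    while q:
        u = q.popleft()
        order.append(u)
        for v, _ in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    # Relax edges in topological order.
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    for u in order:
        if dist[u] == float('inf'):
            continue
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Hypothetical DAG with negative (log-like) weights.
graph = {'A': [('B', -1.2), ('C', -0.1)], 'B': [], 'C': [('B', -0.2)]}
print(dag_shortest_paths(graph, 'A'))   # {'A': 0, 'B': -1.2, 'C': -0.1}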


What modifications could you make to a graph to allow Dijkstra's algorithm to work on it?

So I've been thinking: without resorting to another algorithm, what modifications could you make to a graph to allow Dijkstra's algorithm to work on it, and still get the correct answer at the end of the day? Is it even possible at all?
I first thought of adding a constant equal to the most negative weight to all weights, but I found that this messes everything up and changes the original single-source shortest paths.
Then I thought of traversing the graph, putting all weights that are less than zero into an array or something of the sort, and then multiplying them by -1. I think this would work (disregarding running-time constraints), but maybe I'm looking at it the wrong way.
EDIT:
Another idea: how about permanently setting all negative weights to infinity, thereby ensuring that they are ignored?
So I just want to hear some opinions on this; what do you guys think?
It seems you are looking for something similar to Johnson's algorithm:
First, a new node q is added to the graph, connected by zero-weight edges to each of the other nodes.
Second, the Bellman–Ford algorithm is used, starting from the new vertex q, to find for each vertex v the minimum weight h(v) of a path from q to v. If this step detects a negative cycle, the algorithm is terminated.
Next, the edges of the original graph are reweighted using the values computed by the Bellman–Ford algorithm: an edge from u to v, having length w(u,v), is given the new length w(u,v) + h(u) − h(v).
Finally, q is removed, and Dijkstra's algorithm is used to find the shortest paths from each node s to every other vertex in the reweighted graph.
Whatever algorithm you use, you should first check for negative cycles; if there is no negative cycle, find the shortest path.
In your case you only need to run Dijkstra's algorithm once. Also note that in Johnson's algorithm the Bellman–Ford step runs only from the newly added node (not from all vertices).
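A compact Python sketch of the Johnson-style reweighting quoted above (an illustrative single-source variant, not a reference implementation): run Bellman-Ford from a virtual node q connected to every vertex with weight 0 to get h(v), reweight every edge to w(u,v) + h(u) - h(v), run Dijkstra on the reweighted graph, and convert the distances back:

import heapq

def johnson_single_source(graph, source):
    # 1. Bellman-Ford from a virtual node q connected to every vertex with weight 0
    #    (starting h at 0 for every vertex is equivalent to relaxing q's edges once).
    h = {v: 0 for v in graph}
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u]:
                if h[u] + w < h[v]:
                    h[v] = h[u] + w
    if any(h[u] + w < h[v] for u in graph for v, w in graph[u]):
        raise ValueError("negative cycle detected")
    # 2. Reweight: w'(u,v) = w(u,v) + h(u) - h(v) >= 0.
    reweighted = {u: [(v, w + h[u] - h[v]) for v, w in graph[u]] for u in graph}
    # 3. Ordinary Dijkstra on the reweighted graph.
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in reweighted[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    # 4. Undo the reweighting to recover the true distances.
    return {v: (dist[v] - h[source] + h[v]) if dist[v] < float('inf') else dist[v]
            for v in graph}

graph = {'A': [('B', 2), ('C', 3)], 'B': [], 'C': [('B', -2)]}
print(johnson_single_source(graph, 'A'))   # {'A': 0, 'B': 1, 'C': 3}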
