Finding the shortest path in a graph without any negative prefixes - algorithm

Find the shortest path from source to destination in a directed graph with positive and negative edges, such that at no point along the path is the running sum of edge weights negative. If no such path exists, report that too.
I tried to use a modified Bellman-Ford, but could not find the correct solution.
I would like to clarify a few points:
yes there can be negative weight cycles.
n is the number of edges.
Assume that an O(n) length path exists if the problem has a solution.
+1/-1 edge weights.

Admittedly this isn't a constructive answer, however it's too long to post in a comment...
It seems to me that this problem contains both the binary and the discrete knapsack problems, so its worst-case running time is at best pseudo-polynomial. Consider a graph that is connected and weighted as follows:
Then the equivalent binary knapsack problem is trying to choose weights from the set {a_0, ..., a_n} that maximize Σ a_i subject to Σ a_i < X.
As a side note, if we introduce weighted loops it's easy to construct the unbounded knapsack problem instead.
Therefore, any practical algorithm you might choose has a running time that depends on what you consider the "average" case. Is there a restriction to the problem that I've either not considered or not had at my disposal? You seem rather sure it's an O(n^3) problem. (Although what's n in this case?)

Peter de Rivaz pointed out in a comment that this problem includes HAMILTONIAN PATH as a special case. His explanation was a bit terse, and it took me a while to figure out the details, so I've drawn some diagrams for the benefit of others who might be struggling. I've made this post community wiki.
I'll use the following graph with six vertices as an example. One of its Hamiltonian paths is shown in bold.
Given an undirected graph with n vertices for which we want to find a Hamiltonian path, we construct a new weighted directed graph with n^2 vertices, plus START and END vertices. Label the original vertices v_i and the new vertices w_ik for 0 ≤ i, k < n. If there is an edge between v_i and v_j in the original graph, then for 0 ≤ k < n−1 there are edges in the new graph from w_ik to w_j(k+1) with weight −2^j and from w_jk to w_i(k+1) with weight −2^i. There are edges from START to w_i0 with weight 2^n − 2^i − 1 and from w_i(n−1) to END with weight 0.
It's easiest to think of this construction as being equivalent to starting with a score of 2^n − 1 and then subtracting 2^i each time you visit w_ij. (That's how I've drawn the graph below.)
Each path from START to END must visit exactly n + 2 vertices (one from each row, plus START and END), so the only way for the sum along the path to be zero is for it to visit each column exactly once.
So here's the original graph with six vertices converted to a new graph with 38 vertices. The original Hamiltonian path corresponds to the path drawn in bold. You can verify that the sum along the path is zero.
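For concreteness, here is a small Python sketch of that construction (the function name, the tuple-based vertex labels, and the data layout are my own choices; the weights follow the description above):
def build_reduction(n, undirected_edges):
    # New vertices are ('w', i, k) for 0 <= i, k < n, plus 'START' and 'END'.
    # The result maps (from_vertex, to_vertex) -> weight.
    edges = {}
    for i in range(n):
        edges['START', ('w', i, 0)] = 2**n - 2**i - 1
        edges[('w', i, n - 1), 'END'] = 0
    for i, j in undirected_edges:
        for k in range(n - 1):
            edges[('w', i, k), ('w', j, k + 1)] = -2**j
            edges[('w', j, k), ('w', i, k + 1)] = -2**i
    return edges
A START-to-END path in the resulting graph sums to zero exactly when the original graph has a Hamiltonian path, as argued above.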

UPDATE: The OP has now had several rounds of clarifications, and it is a different problem now. I'll leave this here to document my ideas for the first version of the problem (or rather my understanding of it). I'll try a new answer for the current version of the problem.
End of UPDATE
It's a pity that the OP hasn't clarified some of the open questions. I'll assume the following:
The weights are +/- 1.
n is the number of vertices.
The first assumption is no loss of generality, obviously, but it has great impact on the value of n (via the second assumption). Without the first assumption, even a tiny (fixed) graph can have arbitrary long solutions by varying the weights without limits.
The algorithm I propose is quite simple, and similar to well-known graph algorithms. I'm no graph expert though, so I may use the wrong words in some places. Feel free to correct me.
1. For the source vertex, remember cost 0. Add (source, 0) to the todo list.
2. Pop an item from the todo list. Follow all outgoing edges of the vertex, computing the new cost c to reach the new vertex v. If the new cost is valid (c >= 0 and c <= n^2, see below) and not yet remembered for v, add it to the remembered cost values of v, and add (v, c) to your todo list.
3. If the todo list is not empty, continue with step 2. (Or break early if the destination can be reached with cost 0.)
It's clear that each "step" that's not an immediate dead end creates a new (vertex, cost) combination. At most n * n^2 = n^3 of these combinations will be stored, and thus, in a certain sense, this algorithm is O(n^3).
Now, why does this find the optimal path? I don't have a real proof, but I think the following ideas justify why this suffices, and it may be possible that they can be turned into a real proof.
I think it is clear that the only thing we have to show is that the condition c <= n ^ 2 is sufficient.
First, let's note that any (reachable) vertex can be reached with cost less than n.
Let (v, c) be part of an optimal path and c > n ^ 2.
As c > n, there must be some cycle on the path before reaching (v, c), where the cost of the cycle is 0 < m1 < n, and there must be some cycle on the path after reaching (v, c), where the cost of the cycle is 0 > m2 > -n.
Furthermore, let v be reachable from the source with cost 0 <= c1 < n, by a path that touches the first cycle mentioned above, and let the destination be reachable from v with cost 0 <= c2 < n, by a path that touches the other cycle mentioned above.
Then we can construct paths from source to v with costs c1, c1 + m1, c1 + 2 * m1, ..., and paths from v to destination with costs c2, c2 + m2, c2 + 2 * m2, ... . Choose 0 <= a <= -m2 and 0 <= b <= m1 such that c1 + c2 + a * m1 + b * m2 is minimal and thus the cost of an optimal path. On this optimal path, v would have the cost c1 + a * m1 < n^2.
If the gcd of m1 and m2 is 1, then the cost will be 0. If the gcd is > 1, then it might be possible to choose other cycles such that the gcd becomes 1. If that is not possible, it's also not possible for the optimal solution, and there will be a positive cost for the optimal solution.
(Yes, I can see several problems with this attempt of a proof. It might be necessary to take the gcd of several positive or negative cycle costs etc. I would be very interested in a counterexample, though.)
Here's some (Python) code:
def f(vertices, edges, source, dest):
    # vertices: unique hashable objects
    # edges: mapping (u, v) -> cost; u, v in vertices, cost in {-1, 1}

    # vertex_costs stores the possible costs for each vertex
    vertex_costs = dict((v, set()) for v in vertices)
    vertex_costs[source].add(0)  # source can be reached with cost 0
    # vertex_costs_from stores, for each (vertex, cost) pair, the previous vertex
    vertex_costs_from = dict()
    # vertex_gotos is a convenience structure mapping a vertex to all ends of outgoing edges and their cost
    vertex_gotos = dict((v, []) for v in vertices)
    for (u, v), c in edges.items():
        vertex_gotos[u].append((v, c))
    max_c = len(vertices) ** 2  # the crucial number: maximal cost that's possible for an optimal path
    todo = [(source, 0)]  # which (vertex, cost) pairs to look at
    while todo:
        u, c0 = todo.pop(0)
        for v, c1 in vertex_gotos[u]:
            c = c0 + c1
            if 0 <= c <= max_c and c not in vertex_costs[v]:
                vertex_costs[v].add(c)
                vertex_costs_from[v, c] = u
                todo.append((v, c))
    if not vertex_costs[dest]:  # destination not reachable
        return None  # or raise some Exception
    cost = min(vertex_costs[dest])
    path = [(dest, cost)]  # build in reverse order
    v, c = dest, cost
    while (v, c) != (source, 0):
        u = vertex_costs_from[v, c]
        c -= edges[u, v]
        v = u
        path.append((v, c))
    return path[::-1]  # return the reversed path
And the output for some graphs (edges and their weight / path / cost at each point of the path; sorry, no nice images):
AB+ BC+ CD+ DA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MH-
A B C D A X Y H I J K L M H
0 1 2 3 4 5 6 7 6 5 4 3 2 1
AB+ BC+ CD+ DE+ EF+ FG+ GA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MH-
A B C D E F G A B C D E F G A B C D E F G A X Y H I J K L M H I J K L M H I J K L M H I J K L M H
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
AB+ BC+ CD+ DE+ EF+ FG+ GA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MN- NH-
A X Y H
0 1 2 3
AB+ BC+ CD+ DE+ EF+ FG+ GA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MN- NO- OP- PH-
A B C D E F G A B C D E F G A B C D E F G A B C D E F G A B C D E F G A B C D E F G A X Y H I J K L M N O P H I J K L M N O P H I J K L M N O P H I J K L M N O P H I J K L M N O P H
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 44 43 42 41 40 39 38 37 36 35 34 33 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
Here's the code to produce that output:
def find_path(edges, source, dest):
    from itertools import chain
    print(edges)
    # uv+ means the edge from u to v exists and has cost 1, uv- = cost -1
    edges = dict(((u, v), 1 if c == "+" else -1) for u, v, c in edges.split())
    vertices = set(chain(*edges))
    path = f(vertices, edges, source, dest)
    path_v, path_c = zip(*path)
    print(" ".join("%2s" % v for v in path_v))
    print(" ".join("%2d" % c for c in path_c))

source, dest = "AH"
edges = "AB+ BC+ CD+ DA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MH-"
find_path(edges, source, dest)
edges = "AB+ BC+ CD+ DE+ EF+ FG+ GA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MH-"
find_path(edges, source, dest)
edges = "AB+ BC+ CD+ DE+ EF+ FG+ GA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MN- NH-"
find_path(edges, source, dest)
edges = "AB+ BC+ CD+ DE+ EF+ FG+ GA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MN- NO- OP- PH-"
find_path(edges, source, dest)

As Kaganar notes, we basically have to make some assumption in order to get a polytime algorithm. Let's assume that the edge lengths are in {-1, 1}. Given the graph, construct a weighted context-free grammar that recognizes valid paths from source to destination, with weight equal to the number of excess +1 edges (it generalizes the grammar for balanced parentheses). Compute, for each nonterminal, the cost of the cheapest production: initialize it to infinity, or to 1 if there is a production whose RHS has no nonterminal, and then relax n - 1 times, where n is the number of nonterminals.
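To make the relaxation step concrete, here is a generic sketch (the (head, base cost, body) encoding of productions is my assumption, not from the answer; it relaxes n - 1 times as described):
INF = float("inf")

def cheapest_derivation(nonterminals, productions):
    # productions: list of (head, base_cost, body) where body lists the
    # nonterminals on the right-hand side of the production
    best = {a: INF for a in nonterminals}
    for head, base_cost, body in productions:
        if not body:  # RHS has no nonterminal: a candidate base case
            best[head] = min(best[head], base_cost)
    for _ in range(len(nonterminals) - 1):  # relax n - 1 times
        for head, base_cost, body in productions:
            candidate = base_cost + sum(best[b] for b in body)
            if candidate < best[head]:
                best[head] = candidate
    return best

# toy grammar for weight-balanced strings: S -> "+" S "-" S (cost 1) | "" (cost 0)
print(cheapest_derivation({"S"}, [("S", 1, ["S", "S"]), ("S", 0, [])]))
# {'S': 0}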

I would use recursive brute-forcing here:
something like (pseudocode, to keep it language-agnostic)
you will need:
a 2D array of bools showing where you CAN and where you CAN'T go; this should NOT include "forbidden" positions, like those just past a negative edge. You can add a vertical and horizontal 'translation' to make sure it starts at [0][0]
an integer (static) containing the shortest path
a 1D array of 2 slots holding the goal: [0] = x, [1] = y
you will do:
function(int xPosition, int yPosition, int steps)
{
    if(You are at target AND steps < Shortest Path)
        Shortest Path = steps
    if(This Position is NOT legal)
        /*exit function*/
    else
    {
        /*try to move in every legal DIRECTION, not caring whether the result
          is legal or not, but always adding 1 to steps, like using:*/
        function(xPosition+1, yPosition, steps+1);
        function(xPosition-1, yPosition, steps+1);
        function(xPosition, yPosition+1, steps+1);
        function(xPosition, yPosition-1, steps+1);
    }
}
then just run it with
function(StartingX, StartingY, 0);
the shortest path will be contained in the static external int
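Here is a runnable Python rendering of that pseudocode (the grid, goal, and names are illustrative; I also mark cells as visited during the recursion, which the pseudocode omits, so that the search terminates):
import sys

GRID = [  # True = walkable, False = forbidden
    [True, True, True],
    [False, True, True],
    [True, True, True],
]
GOAL = (2, 2)  # the target cell: [0] = x, [1] = y
shortest = sys.maxsize  # the "static external int" from the pseudocode

def walk(x, y, steps):
    global shortest
    if steps >= shortest:  # can't beat the best path found so far
        return
    if (x, y) == GOAL:
        shortest = steps
        return
    if not (0 <= x < len(GRID) and 0 <= y < len(GRID[0])) or not GRID[x][y]:
        return  # this position is NOT legal
    GRID[x][y] = False  # mark as visited so the recursion terminates
    walk(x + 1, y, steps + 1)
    walk(x - 1, y, steps + 1)
    walk(x, y + 1, steps + 1)
    walk(x, y - 1, steps + 1)
    GRID[x][y] = True  # unmark when backtracking

walk(0, 0, 0)
print(shortest)  # 4 for this grid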

I would like to clarify a few points:
yes there can be negative weight cycles.
n is the number of edges.
weights are arbitrary, not just +1/-1.
Assume that an O(n) length path exists if the problem has a solution. (n is the number of edges)

Although people have shown that no fast solution exists (unless P=NP),
I think for most graphs (95%+) you should be able to find a solution fairly quickly.
I take advantage of the fact that if there are cycles then there are usually many solutions, and we only need to find one of them. There are probably some glaring holes in my ideas, so please let me know.
Ideas:
1. find the negative cycle that is closest to the destination; denote the shortest distance between the cycle and the destination as d(end,negC).
(I think this is possible; one way might be to use Floyd-Warshall to detect pairs (i,j) connected through a negative cycle, and then breadth-first search from the destination until you hit something that is connected to a negative cycle.)
2. find the closest positive cycle to the start node, denote the distance from the start as d(start,posC)
(I argue in 95% of graphs you can find these cycles easily)
Now we have cases:
a) both a positive and a negative cycle were found:
The answer is d(end,negC).
b) no cycles were found:
simply use a shortest path algorithm?
c) Only one of the cycles was found. We note that both of these cases are the same by symmetry (e.g. if we swap the weights and start/end we get the same problem). I'll just consider the case where a positive cycle was found.
Find the shortest path from start to end without going around the positive cycle (perhaps using a modified breadth-first search or something). If no such path exists (without the running sum going negative), then it gets a bit tricky: we have to do laps of the positive cycle (and perhaps some fraction of a lap).
If you just want an approximate answer, work out the shortest path from the positive cycle to the end node, which should usually be some negative number. Calculate the number of laps required to overcome this negative answer, plus the distance from the entry point of the cycle to its exit point. To do better, perhaps there was another node in the cycle you should have exited the cycle from. For that you would need to calculate the smallest negative distance from every node in the cycle to the end node, and then it sort of turns into a group theory / random number generator type problem: do as many laps of the cycle as you want until you get just above one of these numbers.
Good luck, and hopefully my solutions will work for most cases.

The current assumptions are:
yes there can be negative weight cycles.
n is the number of edges.
Assume that an O(n) length path exists if the problem has a solution.
+1/-1 edge weights.
We may assume without loss of generality that the number of vertices is at most n.
Recursively walk the graph and remember the cost values for each vertex. Stop if the cost was already remembered for the vertex, or if the cost would be negative.
If the walk terminates without reaching the destination, there is no solution.
Otherwise, for each of the O(n) vertices we have remembered at most O(n) different cost values, and for each of these O(n^2) combinations there might have been up to n unsuccessful attempts to walk to other vertices. All in all, it's O(n^3). q.e.d.
Update: Of course, there is something fishy again. What does assumption 3 mean: an O(n) length path exists if the problem has a solution? Any algorithm has to detect that, because it also has to report when there is no solution. But it's impossible to detect, because that's not a property of the individual graph the algorithm works on (it is asymptotic behaviour).
(It is also clear that not all graphs for which the destination can be reached have a solution path of length O(n): Take a chain of m edges of weight -1, and before that a simple cycle of m edges and total weight +1).
[I now realize that most of the Python code from my other answer (attempt for the first version of the problem) can be reused.]

Step 1: Note that your answer will be at most 2*n (if it exists).
Step 2: Create a new graph whose vertices are pairs [vertex][cost]. (2*n^2 vertices)
Step 3: Note that the new graph will have all edges equal to one, and at most 2*n edges for each [vertex][cost] pair.
Step 4: Do a DFS over this graph, starting from [start][0].
Step 5: Find the minimum k such that [finish][k] is accessible.
Total complexity is at most O(n^2)*O(n) = O(n^3).
EDIT: Clarification on Step 1.
If there is a positive cycle accessible from the start, you can go all the way up to n. Now you can walk to any accessible vertex over no more than n edges, each either +1 or -1, leaving you with the [0; 2n] range.
Otherwise you'll walk either through negative cycles, or over no more than n edges of weight +1 that aren't in a negative cycle, leaving you with the [0; n] range.
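Here is a sketch of steps 2-5 in Python (the function name and the breadth-first search are my own choices; since all edges of the new graph have length one, any exhaustive search over the (vertex, cost) pairs works):
from collections import deque

def min_final_cost(vertices, edges, start, finish):
    # edges: mapping (u, v) -> weight in {-1, +1}
    n = len(edges)
    max_cost = 2 * n  # Step 1: the answer, if it exists, is at most 2*n
    goto = {v: [] for v in vertices}
    for (u, v), w in edges.items():
        goto[u].append((v, w))
    seen = {(start, 0)}
    todo = deque([(start, 0)])  # Step 4: search from [start][0]
    best = None
    while todo:
        u, c = todo.popleft()
        if u == finish and (best is None or c < best):
            best = c  # Step 5: track the minimum k with [finish][k] accessible
        for v, w in goto[u]:
            c2 = c + w
            # Steps 2-3: states are [vertex][cost] pairs with cost in [0, 2n]
            if 0 <= c2 <= max_cost and (v, c2) not in seen:
                seen.add((v, c2))
                todo.append((v, c2))
    return best  # None means no valid path exists

edges = {("s", "a"): 1, ("a", "t"): -1}
print(min_final_cost("sat", edges, "s", "t"))  # 0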

Related

Is there an algorithm for finding the path of length k with the highest cost in an undirected graph

I have been thinking about this problem for a few weeks but can't wrap my head around an efficient solution.
So basically imagine that you have an undirected graph where every node has a value assigned to it (only positive values). I want to find a path of length k (start and end node don't matter) that has the highest "cost", where the cost is the sum of the values of the visited nodes. You can visit nodes only once.
Let's take this graph for example:
    d
    |
a - b - c
|   |
e - f
With the following values for the nodes:
a: 5
b: 10
c: 2
d: 3
e: 6
f: 7
The path with the highest cost with the length k=3 would be e-f-b, because the sum is 23.
I found a solution that solves this in O(n^k) time by basically finding every possible path for every node and then taking the one with the highest cost, but I think there must be a more optimal solution.
Step 1: Having the costs assigned to the nodes is awkward. Dijkstra's algorithm expects the costs to be on the links, so it would be good to transform the nodes to links and the links to nodes:
(a, cost ac) -> (b, cost bc)
becomes
() - cost ac > () - cost bc > ()
Step 2: Dijkstra calculates the lowest-cost path, so you have to recalculate every cost as
c = max cost - c
Step 3: Now you can use Dijkstra to find the most expensive path:
LOOP n1 over nodes
    Dijkstra
    LOOP n2 over nodes starting at n1 + 1
        IF n1 to n2 most expensive so far
            Save n1 to n2
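For step 1, a minimal sketch of the node-to-link transformation (the v_in/v_out naming is mine): each node v with cost c(v) becomes an edge v_in -> v_out of weight c(v), and each original link becomes a zero-weight edge between the corresponding endpoints:
def nodes_to_edges(node_cost, links):
    # node_cost: mapping node -> cost; links: iterable of undirected pairs
    edges = {}
    for v, c in node_cost.items():
        edges[(v, "in"), (v, "out")] = c  # the node's cost moves onto this edge
    for u, v in links:
        edges[(u, "out"), (v, "in")] = 0
        edges[(v, "out"), (u, "in")] = 0  # both directions, since G is undirected
    return edges
Summing edge weights along a path in the transformed graph then equals summing node values along the original path.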

Running Dijkstra Algorithm

Given a graph like this one:
    A
   ^ ^
  /   \
 3     4
/       \
B -- 5 -> C
E = {(B,A), (C,A), (B,C)}
What happens if we run Dijkstra on node A?
A is initialized to 0, and B and C to infinity, but A doesn't point anywhere.
So do we then choose randomly between B and C? Or does the algorithm not work in that case?
Thanks!
Dijkstra will still run and give you the right answer for this graph. If you so choose you can initialize the queue with just the start node and add or update neighbors to/in the queue as you explore them. In this case the algorithm will just terminate after one iteration of extracting (A) from the queue and exploring its zero neighbors, appropriately leaving the distances to B and C as infinity (with no prev nodes) and leaving A's path zero. If you think about it, this is the desired answer, as there are no paths from A to B or C.
Or, if you implement it as in Wikipedia, adding every node to the queue at the start, it will still produce the same results.
function Dijkstra(Graph, source):
    dist[source] ← 0                      // Initialization

    create vertex priority queue Q

    for each vertex v in Graph:
        if v ≠ source
            dist[v] ← INFINITY            // Unknown distance from source to v
            prev[v] ← UNDEFINED           // Predecessor of v

        Q.add_with_priority(v, dist[v])

    while Q is not empty:                 // The main loop
        u ← Q.extract_min()               // Remove and return best vertex
        for each neighbor v of u:         // only v that are still in Q
            alt ← dist[u] + length(u, v)
            if alt < dist[v]
                dist[v] ← alt
                prev[v] ← u
                Q.decrease_priority(v, alt)

    return dist, prev
After extracting A and exploring its nonexistent neighbors, nothing is updated. It will then arbitrarily choose between B and C to extract next, as they have the same distance (not 'randomly' of course, just depending on how you initialize/extract from your queue).
When it checks B, it will see it can get to C in Infinity + 5, not any better than the current distance to C of Infinity so nothing updates, and to A in Infinity + 3, not better than A's current distance of 0.
When it checks C, it will see it can get to A in Infinity + 4, not better than the current distance to A of 0, so nothing updates.
Then the queue is empty and the same result of dist[A] = 0, dist[B] = dist[C] = Infinity is returned.
So a correct implementation of Dijkstra will be able to handle such a graph (as it should any directed graph with non-negative weights).
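As a quick runnable check of this behaviour, here is a standard heap-based Dijkstra (my own sketch, not the answerer's code) applied to the graph from the question:
import heapq

def dijkstra(graph, source):
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# E = {(B,A), (C,A), (B,C)} with weights 3, 4, 5 -- A has no outgoing edges
graph = {"A": [], "B": [("A", 3), ("C", 5)], "C": [("A", 4)]}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': inf, 'C': inf}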

Traveling Salesman Variation Algorithm

I'm having trouble finding a counterexample to the following variation of the TSP problem.
Input: G=(V,E), an undirected complete graph satisfying the triangle inequality, w:E->R+ a weight function, and a source vertex s.
Output: A simple Hamiltonian cycle that starts and ends at s, with minimum weight.
Algorithm:
1. S = Empty-Set
2. B = Sort E by weights.
3. Initialize array M of size |V|, where each cell in the array holds a
   counter (initialized to 0) and a list of pointers to all the edges of
   that vertex (in B).
4. While |S| != |V|-1
   a. e(u,v) = removeHead(B).
   b. If e does not close a cycle in S then
      i.   S = S union {e}
      ii.  Increase the degree counter for u,v.
      iii. If M[u].deg = 2 then remove all e' from B s.t. e' = (u,x).
      iv.  If M[v].deg = 2 then remove all e' from B s.t. e' = (v,x).
5. S = S union removeHead(B).
This will be done similarly to Kruskal's algorithm (using a union-find DS).
Steps 4.b.iii and 4.b.iv will be done using the lists of pointers.
I highly doubt that this algorithm is true so I instantly turned into finding why it is wrong. Any help would be appreciated.
Let's say we have a graph with 4 vertices (a, b, c, d) with edge weights as follows:
w_ab = 5
w_bc = 6
w_bd = 7
w_ac = 8
w_da = 11
w_dc = 12
                7
        |---------------|
  5     |   6      12   |
a ----- b ----- c ----- d
|_______________|       |
|       8               |
|_______________________|
           11
The triangle inequality holds for each triangle in this graph.
Your algorithm will choose the cycle a-b-c-d-a (cost 34), when a better cycle is a-b-d-c-a (cost 32).
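A quick brute-force check of this counterexample (the enumeration code is mine):
from itertools import permutations

w = {frozenset("ab"): 5, frozenset("bc"): 6, frozenset("bd"): 7,
     frozenset("ac"): 8, frozenset("ad"): 11, frozenset("cd"): 12}

def cycle_cost(tour):
    # sum the weights of consecutive edges, wrapping back to the start
    return sum(w[frozenset((tour[i], tour[(i + 1) % len(tour)]))]
               for i in range(len(tour)))

for p in permutations("bcd"):
    tour = ("a",) + p
    print("-".join(tour) + "-a", cycle_cost(tour))
# a-b-c-d-a costs 34; the optimum a-b-d-c-a costs 32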
Your procedure may not terminate. Consider a graph with nodes { 1, 2, 3, 4 } and edges { (1,2), (1,3), (2,3), (2,4), (3,4) }. The only Hamiltonian cycle in this graph is { (1,2), (1,3), (2,4), (3,4) }. Suppose the lowest weighted edge is (2,3). Then your procedure will pick (2,3), pick one of { (1,2), (1,3) } and eliminate the other, pick one of { (2,4), (3,4) } and eliminate the other, then loop forever.
Nuances like this are what makes the Travelling Salesman problem so difficult.
Consider the complete graph on 4 vertices, where {a,b,c,d} are the nodes, imagined as the clockwise arranged corners of a square. Let the edge weights be as follows.
w({a,b}) := 2, // "edges"
w({b,c}) := 2,
w({c,d}) := 2,
w({d,a}) := 2,
w({a,c}) := 1, // "diagonals"
w({b,d}) := M
where M is an integer larger than 3. On one hand, the Hamiltonian cycle consisting of the "edges" has weight 8. On the other hand, any Hamiltonian cycle containing {a,c}, which is the lightest edge, must also contain {b,d} and has total weight
1 + M + 2 + 2 = 5 + M > 8
which is larger than the minimum possible weight. In total, this means that in general a Hamiltonian cycle of minimum weight does not necessarily contain the lightest edge, which is the edge chosen first by the algorithm in the original question. Furthermore, as M tends to infinity, the algorithm performs arbitrarily badly in terms of the approximation ratio, as
(5 + M) / 8
grows arbitrarily large.

Am I right about the differences between Floyd-Warshall, Dijkstra's and Bellman-Ford algorithms?

I've been studying the three and I'm stating my inferences from them below. Could someone tell me if I have understood them accurately enough or not? Thank you.
1. Dijkstra's algorithm is used only when you have a single source and you want to know the smallest path from one node to another, but it fails in cases with negative edge weights.
2. Floyd-Warshall's algorithm is used when any of the nodes can be a source, so you want the shortest distance to reach any destination node from any source node. This only fails when there are negative cycles.
(this is the most important one. I mean, this is the one I'm least sure about:)
3. Bellman-Ford is used like Dijkstra's, when there is only one source. This can handle negative weights, and its working is the same as Floyd-Warshall's except for one source, right?
If you need to have a look, the corresponding algorithms are (courtesy Wikipedia):
Bellman-Ford:
procedure BellmanFord(list vertices, list edges, vertex source)
    // This implementation takes in a graph, represented as lists of vertices
    // and edges, and modifies the vertices so that their distance and
    // predecessor attributes store the shortest paths.

    // Step 1: initialize graph
    for each vertex v in vertices:
        if v is source then v.distance := 0
        else v.distance := infinity
        v.predecessor := null

    // Step 2: relax edges repeatedly
    for i from 1 to size(vertices)-1:
        for each edge uv in edges:            // uv is the edge from u to v
            u := uv.source
            v := uv.destination
            if u.distance + uv.weight < v.distance:
                v.distance := u.distance + uv.weight
                v.predecessor := u

    // Step 3: check for negative-weight cycles
    for each edge uv in edges:
        u := uv.source
        v := uv.destination
        if u.distance + uv.weight < v.distance:
            error "Graph contains a negative-weight cycle"
Dijkstra:
function Dijkstra(Graph, source):
    for each vertex v in Graph:              // Initializations
        dist[v] := infinity ;                // Unknown distance function from source to v
        previous[v] := undefined ;           // Previous node in optimal path from source

    dist[source] := 0 ;                      // Distance from source to source
    Q := the set of all nodes in Graph ;     // All nodes in the graph are unoptimized - thus are in Q

    while Q is not empty:                    // The main loop
        u := vertex in Q with smallest distance in dist[] ;  // Start node in first case
        if dist[u] = infinity:
            break ;                          // all remaining vertices are inaccessible from source

        remove u from Q ;
        for each neighbor v of u:            // where v has not yet been removed from Q
            alt := dist[u] + dist_between(u, v) ;
            if alt < dist[v]:                // Relax (u,v,a)
                dist[v] := alt ;
                previous[v] := u ;
                decrease-key v in Q ;        // Reorder v in the Queue
    return dist ;
Floyd-Warshall:
/* Assume a function edgeCost(i,j) which returns the cost of the edge from i to j
   (infinity if there is none).
   Also assume that n is the number of vertices and edgeCost(i,i) = 0. */

int path[][];
/* A 2-dimensional matrix. At each step in the algorithm, path[i][j] is the shortest path
   from i to j using intermediate vertices (1..k−1). Each path[i][j] is initialized to
   edgeCost(i,j). */

procedure FloydWarshall()
    for k := 1 to n
        for i := 1 to n
            for j := 1 to n
                path[i][j] = min( path[i][j], path[i][k] + path[k][j] );
You are correct about the first two questions, and about the goal of Floyd-Warshall (finding the shortest paths between all pairs), but not about the relationship between Bellman-Ford and Floyd-Warshall: Both algorithms use dynamic programming to find the shortest path, but FW isn't the same as running BF from each starting node to every other node.
In BF, the question is: What is the shortest path from the source to the target using at most k steps, and the running time is O(EV). If we were to run it to each other node, the running time would be O(EV^2).
In FW, the question is: what is the shortest path from i to j through k, for all nodes i,j,k. This leads to O(V^3) running time - better than BF for each starting node (by a factor of up to |V| for dense graphs).
One more note about negative cycles / weights: Dijkstra may simply fail to give the correct results. BF and FW won't fail: they will correctly state that there is no minimum-weight path, since the weight of a path through a negative cycle is unbounded below.
Single source shortest paths:
Dijkstra Algorithm - No negative weight allowed - O(E+Vlg(V))
Bellman ford Algorithm - Negative weight is allowed. But if a negative cycle is present Bellman ford will detect the -ve cycle - O(VE)
Directed Acyclic Graph - as name suggests it works only for DAG - O(V+E)
All pairs shortest paths:
Dijkstra Algorithm - No negative weight allowed - O(VE + V^2lg(V))
Bellman ford Algorithm - O(V^2E)
Matrix multiplication (min-plus) method - complexity same as the Bellman-Ford algorithm
Floyd Warshall algorithm -uses dynamic programming method - Complexity is O(V^3)

Why does Dijkstra's algorithm work?

I understand what Dijkstra's algorithm is, but I don't understand why it works.
When selecting the next vertex to examine, why does Dijkstra's algorithm select the one with the smallest weight? Why not just select a vertex arbitrarily, since the algorithm visits all vertices anyway?
You can think of Dijkstra's algorithm as a water-filling algorithm (i.e. a pruned breadth-first search). At each stage, the goal is to cover more of the whole graph with the lowest-cost path possible. Suppose you have vertices at the edge of the area you've filled in, and you list them in order of distance:
v0 <= v1 <= v2 <= v3 ...
Could there possibly be a cheaper way to get to vertex v1? If so, the path must go through v0, since no untested vertex could be closer. So you examine vertex v0 to see where you can get to, checking if any path through v0 is cheaper (to any other vertex one step away).
If you peel away the problem this way, you're guaranteed that your distances are all minimums, because you always check exactly that vertex that could lead to a shortest path. Either you find that shortest path, or you rule it out, and move on to the next vertex. Thus, you're guaranteed to consume one vertex per step.
And you stop without doing any more work than you need to, because you stop when your destination vertex occupies the "I am smallest" v0 slot.
Let's look at a brief example. Suppose we're trying to get from 1 to 12 by multiplication, and the cost between nodes is the number you have to multiply by. (We'll restrict the vertices to the numbers from 1 to 12.)
We start with 1, and we can get to any other node by multiplying by that value. So node 2 has cost 2, 3 has cost 3, ... 12 has cost 12 if you go in one step.
Now, a path through 2 could (without knowing about the structure) get to 12 fastest if there was a free link from 2 to 12. There isn't, but if there was, it would be fastest. So we check 2. And we find that we can get to 4 for cost 2, to 6 for 3, and so on. We thus have a table of costs like so:
 3  4  5  6  7  8  9 10 11 12   // Vertex
 3  4  5  5  7  6  9  7 11  8   // Best cost to get there so far.
Okay, now maybe we can get to 12 from 3 for free! Better check. And we find that 3*2==6 so the cost to 6 is the cost to 3 plus 2, and to 9 is plus 3, and 12 is plus 4.
 4  5  6  7  8  9 10 11 12
 4  5  5  7  6  6  7 11  7
Fair enough. Now we test 4, and we see we can get to 8 for an extra 2, and to 12 for an extra 3. Again, the cost to get to 12 is thus no more than 4+3 = 7:
 5  6  7  8  9 10 11 12
 5  5  7  6  6  7 11  7
Now we try 5 and 6--no improvements so far. This leaves us with
 7  8  9 10 11 12
 7  6  6  7 11  7
Now, for the first time, we see that the costs of getting to 8 and to 9 are less than the cost of getting to 7, so we had better check that there isn't some free way to get to 12 from 8 or from 9. There isn't--there's no way to get there at all with integers--so we throw them both away.
 7 10 11 12
 7  7 11  7
And now we see that 12 is as cheap as any path left, so the cost to reach 12 must be 7. If we'd kept track of the cheapest path so far (only replacing the path when it's strictly better), we'd find that 3*4 is the first cheapest way to hit 12.
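For reference, here is a runnable version of this example (the graph construction, an edge i -> i*m of cost m for every integer m >= 2, is implied by the text; the code itself is my own sketch):
import heapq

def cheapest_route(target=12):
    # vertices are 1..target; there is an edge i -> i*m of cost m for m >= 2
    dist = {v: float("inf") for v in range(1, target + 1)}
    dist[1] = 0
    pq = [(0, 1)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d  # the target is extracted with its final, minimal cost
        if d > dist[u]:
            continue  # stale entry
        m = 2
        while u * m <= target:
            if d + m < dist[u * m]:
                dist[u * m] = d + m
                heapq.heappush(pq, (dist[u * m], u * m))
            m += 1
    return dist[target]

print(cheapest_route(12))  # 7, e.g. 1 -> 3 -> 12 (multiply by 3, then by 4)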
Dijkstra's algorithm picks the vertex with the least path cost thus far, because a path through any other vertex is at least as costly as a path through the vertex with the least path cost.
Visiting any other vertex, therefore, if it is more costly (which is quite possible), would necessitate visiting not only that other vertex but also the one with the least path cost thus far, so you would have to visit more vertices before finding the shortest path. In fact, you would end up with the Bellman-Ford algorithm if you did that.
I should also add that the vertex doesn't have a weight, it is the edge that has a weight. The key for a given vertex is the cost of the shortest path found thus far to that vertex from the source vertex.
The reason why Dijkstra's algorithm works the way it does is in part because it exploits the fact that the shortest path between node u and w that includes point v also contains the shortest path from u to v and from v to w. If there existed something shorter from u to v, then it wouldn't be the shortest path.
To really understand why Dijkstra's algorithm works, look into the basics of dynamic programming, sounds hard but it's really pretty easy to understand the principles.
It uses a greedy strategy. (My English is not good; this was translated by Google. If you don't understand, I am very sorry.)
Dijkstra's algorithm computes, for a source S in G, the lengths of the shortest paths from S to all vertices.
Assume that each vertex V of G has been given a label L(V), which is either a number or ∞. Suppose P is a set of vertices of G containing S that satisfies:
A) If V is in P, then L(V) is the length of the shortest path from S to V, and there exists such a shortest path whose vertices all lie in P.
B) If V is not in P, then L(V) is the length of the shortest path from S to V subject to the restriction that V is the only vertex of the path not belonging to P.
We can use induction to prove that the set P maintained by Dijkstra's algorithm satisfies the above definition:
1) When the number of elements in P is 1, corresponding to the first step of the algorithm, P = {S}, and the conditions are clearly satisfied.
2) Suppose P has k elements and satisfies the above definition; now consider the third step of the algorithm.
3) Among the vertices not in P, find the vertex U with the minimal label L(U). We claim that L(U) is the length of the shortest path from S to U, and that some shortest path from S to U contains no vertex outside P other than U itself.
Suppose instead that a shortest path to U contained other vertices outside P, say S, P1, P2 ... Pn, Q1, Q2 ... Qn, U (with P1, P2 ... Pn in P and Q1, Q2 ... Qn not in P). By property B) and the minimality of L(U), its length is at least L(Q1) + PATH(Q1, U) ≥ L(Q1) ≥ L(U),
so it is no shorter than the path S, P1, P2 ... Pn, U of length L(U). Therefore the shortest path from S to U needs no vertex outside P other than U, and its length is given by L(U).
Add U to P to form P'. Clearly P' satisfies property A).
Now take V not in P' (so V is not in P either). A shortest path from S to V all of whose vertices except V lie in P' has two possibilities: i) it contains U, or ii) it does not contain U.
For i): S, P1, P2 ... Pn, U, V has length L(U) + W(U, V).
For ii): S, P1, P2 ... Pn, V has length L(V).
Obviously the smaller of the two gives the new label of V: the length of the shortest path from S to V among paths whose other vertices all lie in P'.
So the third step of the algorithm yields a set P' with k+1 elements satisfying A) and B).
By induction, the proposition is proved.
It checks the path with the lowest weight first because this is most likely (without additional information) to reduce the number of paths checked. For example:
a->b->c cost is 20
a->b->d cost is 10
a->b->d->e cost is 12
If the goal is to get from a to e, we don't need to even check the cost of:
a->b->c->e
because we know it's at least 20, so we know it's not optimal, since there is already another path with a cost of 12. You can maximize this effect by checking the lowest weights first. This is similar (the same?) to how minimax works in chess and other games to reduce the branching factor of the game tree.
Dijkstra's algorithm is a greedy algorithm which follows the problem-solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum.
To understand the basic concept of this algorithm, I wrote this code for you. There is an explanation of how it works.
graph = {}
graph["start"] = {}
graph["start"]["a"] = 6
graph["start"]["b"] = 2
graph["a"] = {}
graph["a"]["finish"] = 1
graph["b"] = {}
graph["b"]["a"] = 3
graph["b"]["finish"] = 5
graph["finish"] = {}

infinity = float("inf")
costs = {}
costs["a"] = 6
costs["b"] = 2
costs["finish"] = infinity
print("The weight of each node is:", costs)

parents = {}
parents["a"] = "start"
parents["b"] = "start"
parents["finish"] = None

processed = []

def find_lowest_cost_node(costs):
    lowest_cost = float("inf")
    lowest_cost_node = None
    for node in costs:
        cost = costs[node]
        if cost < lowest_cost and node not in processed:
            lowest_cost = cost
            lowest_cost_node = node
    return lowest_cost_node

node = find_lowest_cost_node(costs)
print("Start: the lowest cost node is", node, "with weight",
      graph["start"]["{}".format(node)])

while node is not None:
    cost = costs[node]
    print("Continue execution ...")
    print("The weight of node {} is".format(node), cost)
    neighbors = graph[node]
    if neighbors != {}:
        print("The node {} has neighbors:".format(node), neighbors)
    else:
        print("It is finish, we have the answer: {}".format(cost))
    for neighbor in neighbors.keys():
        new_cost = cost + neighbors[neighbor]
        if costs[neighbor] > new_cost:
            costs[neighbor] = new_cost
            parents[neighbor] = node
    processed.append(node)
    print("The nodes we have researched:", processed)
    node = find_lowest_cost_node(costs)
    if node is not None:
        print("Look at the neighbor:", node)

# to draw the graph
import networkx
G = networkx.Graph()
G.add_nodes_from(graph)
G.add_edge("start", "a", weight=6)
G.add_edge("b", "a", weight=3)
G.add_edge("start", "b", weight=2)
G.add_edge("a", "finish", weight=1)
G.add_edge("b", "finish", weight=5)

import matplotlib.pyplot as plt
networkx.draw(G, with_labels=True)
plt.show()
