If there exists a weighted graph G, and all weights are 0, does Dijkstra's algorithm still find the shortest path? If so, why?
As per my understanding of the algorithm, Dijkstra's algorithm will run like a normal BFS if all edge weights are zero, but I would appreciate some clarification.
Explanation
Dijkstra itself has no problem with weights of 0; by definition of the algorithm, it only gets problematic with negative weights.
The reason is that Dijkstra settles one node in every round. If you later find a negative-weight edge, it could yield a shorter path to an already settled node. The node would then need to be unsettled, which Dijkstra's algorithm does not allow (and which would break the complexity of the algorithm). This becomes clear if you take a look at the actual algorithm and some illustration.
The behavior of Dijkstra on such an all-zero graph is the same as if all edges had some other common value, like 1 (except for the resulting shortest path lengths). Dijkstra will simply visit all nodes, in no particular order; basically, like an ordinary breadth-first search.
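To see this concretely, here is a minimal Python sketch of Dijkstra (the graph and names are made up for illustration). On an all-zero graph, every reachable node is settled with final distance 0 the first time it is seen, so no settled node ever needs to be revisited, just as in a BFS:

import heapq

def dijkstra(adj, source):
    # adj maps each node to a list of (neighbor, weight) pairs
    dist = {v: float("inf") for v in adj}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)          # settle the node with minimum distance
        if d > dist[u]:
            continue                      # stale entry; u was settled earlier
        for v, w in adj[u]:
            if d + w < dist[v]:           # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# All weights zero: every reachable node ends up with distance 0,
# and nothing is ever revisited.
adj = {"A": [("B", 0), ("C", 0)], "B": [("D", 0)], "C": [("D", 0)], "D": []}
print(dijkstra(adj, "A"))  # {'A': 0, 'B': 0, 'C': 0, 'D': 0}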
Details
Take a look at the algorithm description from Wikipedia:
 1  function Dijkstra(Graph, source):
 2
 3      create vertex set Q
 4
 5      for each vertex v in Graph:             // Initialization
 6          dist[v] ← INFINITY                  // Unknown distance from source to v
 7          prev[v] ← UNDEFINED                 // Previous node in optimal path from source
 8          add v to Q                          // All nodes initially in Q (unvisited nodes)
 9
10      dist[source] ← 0                        // Distance from source to source
11
12      while Q is not empty:
13          u ← vertex in Q with min dist[u]    // Node with the least distance
14                                              // will be selected first
15          remove u from Q
16
17          for each neighbor v of u:           // where v is still in Q.
18              alt ← dist[u] + length(u, v)
19              if alt < dist[v]:               // A shorter path to v has been found
20                  dist[v] ← alt
21                  prev[v] ← u
22
23      return dist[], prev[]
The problem with negative values lies in lines 15 and 17. When you remove node u, you settle it. That is, you declare that the shortest path to this node is now known. But that means you won't consider u again in line 17 as a neighbor of some other node (since it's no longer contained in Q).
With negative weights it could happen that you later find a shorter path (due to negative weights) to that node. You would then need to consider u again in the algorithm and redo all the computation that depended on the previous shortest path to u. So you would need to add u, and every other node removed from Q whose shortest path ran through u, back to Q.
In particular, you would need to consider all edges that could lead to your destination, since you never know where some nasty -1_000_000-weighted edge hides.
The following example illustrates the problem:
Dijkstra will declare the red path to be the shortest path from A to C, with length 0. However, there is a shorter path: it is marked blue and has a length of 99 - 300 + 1 = -200.
With negative weights you could even create a more dangerous scenario: negative cycles, that is, cycles in the graph with a negative total weight. You then need a way to stop moving along the cycle over and over, endlessly decreasing your current weight.
Notes
In an undirected graph, edges with weight 0 can be eliminated and their endpoints merged; a shortest path between the merged nodes always has length 0. If the whole graph only has 0 weights, the graph can be merged into a single node, and the answer to every shortest path query is simply 0.
The same holds for directed graphs if you have such an edge in both directions. If not, you can't do that optimization, as you would change the reachability of nodes.
There is a directed graph (which might contain cycles), and each node has a value on it. How could we get, for each node, the sum of the values of all nodes reachable from it? For example, in the following graph:
the reachable sum for node 1 is: 2 + 3 + 4 + 5 + 6 + 7 = 27
the reachable sum for node 2 is: 4 + 5 + 6 + 7 = 22
.....
My solution: To get the sum for all nodes, I think the time complexity is O(n + m), where n is the number of nodes and m is the number of edges. DFS should be used: for each node, recursively visit its successors, and save each node's sum once it is computed so that we never compute it again. A set has to be created for each node to avoid endless recursion caused by cycles.
Does it work? I don't think it is elegant enough, especially since many sets have to be created. Is there any better solution? Thanks.
This can be done by first finding the Strongly Connected Components (SCCs), which can be done in O(|V|+|E|). Then, build a new graph, G', of the SCCs (each SCC is a node in the graph), where each node has a value equal to the sum of the values of the nodes in its SCC.
Formally,
G' = (V',E')
where V' = {U_1, U_2, ..., U_k | U_i is an SCC of the graph G}
E' = {(U_i, U_j) | there are nodes u_i in U_i and u_j in U_j such that (u_i, u_j) is in E}
Then this graph (G') is a DAG, and the question becomes simpler, and seems to be a variant of the question linked in the comments.
EDIT: the previous answer (struck out) is a mistake from this point on; editing with a new answer. Sorry about that.
Now, a DFS can be used from each node to find the sum of values:
DFS(v):
    if v.visited:
        return 0
    v.visited = true    # mark before recursing so each node is counted exactly once
    return v.value + sum([DFS(u) for u in v.children])
This is O(V^2 + VE) worst case, but since the condensed graph has fewer nodes, V and E are now significantly lower.
Some local optimizations can be made, for example, if a node has a single child, you can reuse the pre-calculated value and not apply DFS on the child again, since there is no fear of counting twice in this case.
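As a rough illustration of the DFS idea, here is an iterative Python sketch (my own rendering; following the question's example, it excludes the start node's own value):

def reachable_sum(adj, value, start):
    # adj: dict node -> list of successors; value: dict node -> value
    seen = {start}                 # guards against cycles
    stack = list(adj[start])
    total = 0
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        total += value[v]          # each reachable node is counted exactly once
        stack.extend(adj[v])
    return total

adj = {1: [2, 3], 2: [4], 3: [4], 4: []}
value = {1: 1, 2: 2, 3: 3, 4: 4}
print(reachable_sum(adj, value, 1))  # 2 + 3 + 4 = 9; node 4 is counted once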
A DP solution for this problem (DAG) can be:
D[i] = value(i) + sum {D[j] | (i,j) is an edge in G' }
This can be calculated in linear time (after topological sort of the DAG).
Pseudo code:
Find SCCs
Build G'
Topologically sort G'
Find D[i] for each node in G'
Apply the value D[i] to every node u_i in U_i, for each U_i.
Total time is O(|V|+|E|).
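For steps 1 and 2, here is a compact sketch of the condensation using Kosaraju's algorithm (my own code, not from the answer above; it assumes every node appears as a key of adj, and the recursive first pass is only suitable for small graphs):

def condensation(adj, value):
    # Kosaraju's algorithm: one DFS pass on G for finish order,
    # one sweep on the reversed graph to collect the SCCs.
    order, seen = [], set()
    def dfs(u):
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                dfs(v)
        order.append(u)                      # record finish time
    for u in adj:
        if u not in seen:
            dfs(u)
    radj = {u: [] for u in adj}              # reversed graph
    for u in adj:
        for v in adj[u]:
            radj[v].append(u)
    comp = {}                                # node -> representative of its SCC
    for u in reversed(order):                # decreasing finish time
        if u not in comp:
            comp[u], stack = u, [u]
            while stack:
                x = stack.pop()
                for y in radj[x]:
                    if y not in comp:
                        comp[y] = u
                        stack.append(y)
    comp_value, comp_adj = {}, {}            # node values and edges of G'
    for u in adj:
        c = comp[u]
        comp_value[c] = comp_value.get(c, 0) + value[u]
        comp_adj.setdefault(c, set())
        for v in adj[u]:
            if comp[v] != c:
                comp_adj[c].add(comp[v])
    return comp, comp_value, comp_adj

adj = {1: [2], 2: [1, 3], 3: []}             # 1 and 2 form one SCC
value = {1: 1, 2: 2, 3: 3}
comp, cval, cadj = condensation(adj, value)
print(cval[comp[1]], cval[comp[3]])          # 3 3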
You can use a DFS or BFS algorithm to solve your problem.
Both have complexity O(V + E).
You don't have to count all values for all nodes, and you don't need recursion.
Just do something like this.
Typically DFS looks like this.
unmark all vertices
choose some starting vertex x
mark x
list L = x
while L nonempty
    choose some vertex v from front of list
    visit v
    for each unmarked neighbor w
        mark w
        add it to end of list
In your case you have to add some lines:
unmark all vertices
choose some starting vertex x
mark x
list L = x
float sum = 0
while L nonempty
    choose some vertex v from front of list
    visit v
    sum += v->value
    for each unmarked neighbor w
        mark w
        add it to end of list
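For reference, the same modified search in Python (a sketch; using a deque with front-removal, i.e. the BFS variant, and counting the start vertex's own value as the pseudocode does):

from collections import deque

def reachable_value_sum(adj, value, x):
    marked = {x}
    L = deque([x])
    total = 0
    while L:
        v = L.popleft()            # vertex from the front of the list
        total += value[v]          # "visit v"
        for w in adj[v]:
            if w not in marked:
                marked.add(w)
                L.append(w)
    return total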
Let us assume that we have a known minimum spanning tree.
Our task is to find, for each pair of vertices, the maximum edge weight on the path between them.
To give an example,
We have the following minimum spanning tree:
1---10---2
 \
  5\
    \
4---4---3
Between vertices 1 and 2, we have an edge with cost 10.
Between vertices 1 and 3, we have an edge with cost 5.
Between vertices 3 and 4, we have an edge with cost 4.
Maximum edge for each path:
path 1-2: It contains only the edge with cost 10. So the answer is 10.
path 1-3: It contains only the edge with cost 5. So the answer is 5.
path 1-4: To get from vertex 1 to vertex 4, the path is 1-3-4. It contains the edge with cost 5 and the edge with cost 4. So the answer is 5.
path 2-3: We need to follow the path 2-1-3. The maximum edge is 10.
path 2-4: We need to follow the path 2-1-3-4. The maximum edge is 10.
path 3-4: The maximum edge is 4.
So the final answer would be:
X 10 5 5
X X 10 10
X X X 4
X X X X
Which one is the most suitable algorithm for this task?
So far, I have considered the possibility of using DFS for each pair of vertices. However, since we have O(V^2) pairs of vertices, the total complexity would be O(V^3), which does not look good.
For each vertex, you can do DFS to find the matrix entries for the row/column corresponding to that vertex. Something like
fill-entries-DFS(root, maxEdgeRootToV, v):
    set the entry for (root, v) to maxEdgeRootToV
    for each child w of v:
        fill-entries-DFS(root, max(maxEdgeRootToV, edgeWeight(v, w)), w)

for each vertex v:
    fill-entries-DFS(v, -infinity, v)
The running time is O(V^2), the asymptotic optimum.
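Here is a quick Python sketch of that idea (my own rendering; since the tree is stored as an undirected adjacency list, it tracks the parent explicitly instead of relying on "children"):

def max_edge_matrix(tree_adj, vertices):
    # tree_adj: dict v -> list of (neighbor, weight) pairs describing the MST
    answer = {}
    def fill(root, v, parent, max_edge):
        answer[root, v] = max_edge
        for w, weight in tree_adj[v]:
            if w != parent:                  # don't walk back up the tree
                fill(root, w, v, max(max_edge, weight))
    for v in vertices:
        fill(v, v, None, float("-inf"))
    return answer

# the tree from the question: 1-2 (10), 1-3 (5), 3-4 (4)
tree = {1: [(2, 10), (3, 5)], 2: [(1, 10)], 3: [(1, 5), (4, 4)], 4: [(3, 4)]}
m = max_edge_matrix(tree, [1, 2, 3, 4])
print(m[1, 4], m[2, 4], m[3, 4])  # 5 10 4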
I'm having trouble finding a counterexample to the following variation of the TSP problem.
Input: G=(V,E), an undirected complete graph that satisfies the triangle inequality, a weight function w:E->R+, and a source vertex s.
Output: A simple Hamiltonian cycle that starts and ends at s, with minimum weight.
Algorithm:
1. S = empty set
2. B = E, sorted by weight.
3. Initialize an array M of size |V|,
   where each cell holds a degree counter (initialized to 0)
   and a list of pointers to all of that vertex's edges (in B).
4. While |S| != |V|-1
   a. e(u,v) = removeHead(B).
   b. If e does not close a cycle in S then
      i.   S = S union {e}
      ii.  Increase the degree counters of u and v.
      iii. If M[u].deg = 2 then remove every e' from B s.t. e' = (u,x).
      iv.  If M[v].deg = 2 then remove every e' from B s.t. e' = (v,x).
5. S = S union removeHead(B).
This will be done similarly to Kruskal's algorithm (using a union-find data structure).
Steps 4.b.iii and 4.b.iv will be done using the lists of pointers.
I highly doubt that this algorithm is correct, so I immediately set out to find why it is wrong. Any help would be appreciated.
Let's say we have a graph with 4 vertices (a, b, c, d) with edge weights as follows:
w_ab = 5
w_bc = 6
w_bd = 7
w_ac = 8
w_da = 11
w_dc = 12
              7
       |--------------|
   5   |  6      12   |
a ---- b ---- c ----- d
|_____________|       |
|      8              |
|_____________________|
          11
The triangle inequality holds for each triangle in this graph.
Your algorithm will choose the cycle a-b-c-d-a (cost 34), when a better cycle is a-b-d-c-a (cost 32).
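To see the failure concretely, here is a Python sketch of the greedy procedure from the question (my own rendering, using union-find for cycle detection; it assumes the loop and the final removeHead succeed, which is not guaranteed in general):

def greedy_tour(n_vertices, weighted_edges):
    # weighted_edges: list of (weight, u, v) tuples
    parent = {}
    def find(x):                           # union-find with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    deg = {}
    B = sorted(weighted_edges)             # step 2: sort E by weight
    S = []
    while len(S) != n_vertices - 1:        # step 4
        w, u, v = B.pop(0)                 # 4.a: e(u,v) = removeHead(B)
        if find(u) != find(v):             # 4.b: e does not close a cycle in S
            parent[find(u)] = find(v)
            S.append((w, u, v))
            deg[u] = deg.get(u, 0) + 1     # 4.b.ii
            deg[v] = deg.get(v, 0) + 1
            B = [e for e in B              # 4.b.iii/iv: drop saturated vertices' edges
                 if deg.get(e[1], 0) < 2 and deg.get(e[2], 0) < 2]
    S.append(B.pop(0))                     # step 5: close the tour
    return S

edges = [(5, "a", "b"), (6, "b", "c"), (7, "b", "d"),
         (8, "a", "c"), (11, "d", "a"), (12, "d", "c")]
tour = greedy_tour(4, edges)
print(tour, sum(w for w, _, _ in tour))
# picks a-b, b-c, d-a, then closes with d-c: total 34, not the optimal 32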
Your procedure may not terminate. Consider a graph with nodes { 1, 2, 3, 4 } and edges { (1,2), (1,3), (2,3), (2,4), (3,4) }. The only Hamiltonian cycle in this graph is { (1,2), (1,3), (2,4), (3,4) }. Suppose the lowest weighted edge is (2,3). Then your procedure will pick (2,3), pick one of { (1,2), (1,3) } and eliminate the other, pick one of { (2,4), (3,4) } and eliminate the other, then loop forever.
Nuances like this are what makes the Travelling Salesman problem so difficult.
Consider the complete graph on 4 vertices, where {a,b,c,d} are the nodes, imagined as the clockwise arranged corners of a square. Let the edge weights be as follows.
w({a,b}) := 2, // "edges"
w({b,c}) := 2,
w({c,d}) := 2,
w({d,a}) := 2,
w({a,c}) := 1, // "diagonals"
w({b,d}) := M
where M is an integer larger than 3. On one hand, the Hamiltonian cycle consisting of the "edges" has weight 8. On the other hand, the Hamiltonian cycle containing {a,c}, which is the lightest edge, must also contain {b,d} and has total weight
1 + M + 2 + 2 = 5 + M > 8
which is larger than the minimum possible weight. In total, this means that in general a Hamiltonian cycle of minimum weight does not necessarily contain the lightest edge, which is the one chosen by the algorithm in the original question. Furthermore, as M tends to infinity, the algorithm performs arbitrarily badly in terms of the approximation ratio, as
(5 + M) / 8
grows arbitrarily large.
Find the shortest path from source to destination in a directed graph with positive and negative edge weights, such that at no point on the path is the running sum of edge weights negative. If no such path exists, report that too.
I tried to use a modified Bellman-Ford, but could not find a correct solution.
I would like to clarify a few points:
yes there can be negative weight cycles.
n is the number of edges.
Assume that an O(n)-length path exists if the problem has a solution.
+1/-1 edge weights.
Admittedly this isn't a constructive answer, however it's too long to post in a comment...
It seems to me that this problem contains the binary as well as the discrete knapsack problem, so its worst-case running time is at best pseudo-polynomial. Consider a graph that is connected and weighted as follows:
Then the equivalent binary knapsack problem is trying to choose weights from the set {a_0, ..., a_n} so as to maximize Σ a_i subject to Σ a_i < X.
As a side note, if we introduce weighted loops it's easy to construct the unbounded knapsack problem instead.
Therefore, any practical algorithm you might choose has a running time that depends on what you consider the "average" case. Is there a restriction to the problem that I've either not considered or not had at my disposal? You seem rather sure it's an O(n^3) problem. (Although what's n in this case?)
Peter de Rivaz pointed out in a comment that this problem includes HAMILTONIAN PATH as a special case. His explanation was a bit terse, and it took me a while to figure out the details, so I've drawn some diagrams for the benefit of others who might be struggling. I've made this post community wiki.
I'll use the following graph with six vertices as an example. One of its Hamiltonian paths is shown in bold.
Given an undirected graph with n vertices for which we want to find a Hamiltonian path, we construct a new weighted directed graph with n^2 vertices, plus START and END vertices. Label the original vertices v_i and the new vertices w_ik for 0 ≤ i, k < n. If there is an edge between v_i and v_j in the original graph, then for 0 ≤ k < n−1 there are edges in the new graph from w_ik to w_j(k+1) with weight −2^j and from w_jk to w_i(k+1) with weight −2^i. There are edges from START to w_i0 with weight 2^n − 2^i − 1 and from w_i(n−1) to END with weight 0.
It's easiest to think of this construction as being equivalent to starting with a score of 2^n − 1 and then subtracting 2^i each time you visit w_ij. (That's how I've drawn the graph below.)
Each path from START to END must visit exactly n + 2 vertices (one from each row, plus START and END), so the only way for the sum along the path to be zero is for it to visit each column exactly once.
So here's the original graph with six vertices converted to a new graph with 38 vertices. The original Hamiltonian path corresponds to the path drawn in bold. You can verify that the sum along the path is zero.
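To make the construction concrete, here is a small Python sketch of it (my own encoding, representing w_ik as the tuple ("w", i, k)):

def build_reduction(n, undirected_edges):
    # Returns a dict mapping directed edges (u, v) -> weight, following the
    # construction above: visiting column i costs 2^i out of a starting
    # budget of 2^n - 1, so only paths hitting every column once sum to zero.
    edges = {}
    for i, j in undirected_edges:
        for k in range(n - 1):
            edges[("w", i, k), ("w", j, k + 1)] = -(2 ** j)
            edges[("w", j, k), ("w", i, k + 1)] = -(2 ** i)
    for i in range(n):
        edges["START", ("w", i, 0)] = 2 ** n - 2 ** i - 1
        edges[("w", i, n - 1), "END"] = 0
    return edges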
UPDATE: The OP now has had several rounds of clarifications, and it is a different problem now. I'll leave this here for documenting my ideas for the first version of the problem (or rather my understanding of it). I'll try a new answer for the current version of the problem.
End of UPDATE
It's a pity that the OP hasn't clarified some of the open questions. I'll assume the following:
The weights are +/- 1.
n is the number of vertices
The first assumption is no loss of generality, obviously, but it has a great impact on the value of n (via the second assumption). Without the first assumption, even a tiny (fixed) graph can have arbitrarily long solutions by varying the weights without limit.
The algorithm I propose is quite simple, and similar to well-known graph algorithms. I'm no graph expert though, so I may use the wrong words in some places. Feel free to correct me.
1. For the source vertex, remember cost 0. Add (source, 0) to the todo list.
2. Pop an item from the todo list. Follow all outgoing edges of the vertex, computing the new cost c to reach the new vertex v. If the new cost is valid (c >= 0 and c <= n ^ 2, see below) and not yet remembered for v, add it to the remembered cost values of v, and add (v, c) to the todo list.
3. If the todo list is not empty, continue with step 2. (Or break early if the destination can be reached with cost 0.)
It's clear that each "step" that's not an immediate dead end creates a new (vertex, cost) combination. At most n * n ^ 2 = n ^ 3 of these combinations will be stored, and thus, in a certain sense, this algorithm is O(n ^ 3).
Now, why does this find the optimal path? I don't have a real proof, but I think the following ideas justify why I believe this suffices, and it may be possible to turn them into a real proof.
I think it is clear that the only thing we have to show is that the condition c <= n ^ 2 is sufficient.
First, let's note that any (reachable) vertex can be reached with cost less than n.
Let (v, c) be part of an optimal path and c > n ^ 2.
As c > n, there must be some cycle on the path before reaching (v, c), where the cost of the cycle is 0 < m1 < n, and there must be some cycle on the path after reaching (v, c), where the cost of the cycle is 0 > m2 > -n.
Furthermore, let v be reachable from the source with cost 0 <= c1 < n, by a path that touches the first cycle mentioned above, and let the destination be reachable from v with cost 0 <= c2 < n, by a path that touches the other cycle mentioned above.
Then we can construct paths from source to v with costs c1, c1 + m1, c1 + 2 * m1, ..., and paths from v to destination with costs c2, c2 + m2, c2 + 2 * m2, ... . Choose 0 <= a <= |m2| and 0 <= b <= m1 such that c1 + c2 + a * m1 + b * m2 is minimal and thus the cost of an optimal path. On this optimal path, v would have the cost c1 + a * m1 < n ^ 2.
If the gcd of m1 and m2 is 1, then the cost will be 0. If the gcd is > 1, then it might be possible to choose other cycles such that the gcd becomes 1. If that is not possible, it's also not possible for the optimal solution, and there will be a positive cost for the optimal solution.
(Yes, I can see several problems with this attempt of a proof. It might be necessary to take the gcd of several positive or negative cycle costs etc. I would be very interested in a counterexample, though.)
Here's some (Python) code:
def f(vertices, edges, source, dest):
    # vertices: unique hashable objects
    # edges: mapping (u, v) -> cost; u, v in vertices, cost in {-1, 1}

    # vertex_costs stores the possible costs for each vertex
    vertex_costs = dict((v, set()) for v in vertices)
    vertex_costs[source].add(0)  # source can be reached with cost 0
    # vertex_costs_from stores, for each (vertex, cost) pair, the previous vertex
    vertex_costs_from = dict()
    # vertex_gotos is a convenience structure mapping a vertex to all ends of outgoing edges and their cost
    vertex_gotos = dict((v, []) for v in vertices)
    for (u, v), c in edges.items():
        vertex_gotos[u].append((v, c))
    max_c = len(vertices) ** 2  # the crucial number: maximal cost that's possible for an optimal path

    todo = [(source, 0)]  # which vertices to look at
    while todo:
        u, c0 = todo.pop(0)
        for v, c1 in vertex_gotos[u]:
            c = c0 + c1
            if 0 <= c <= max_c and c not in vertex_costs[v]:
                vertex_costs[v].add(c)
                vertex_costs_from[v, c] = u
                todo.append((v, c))

    if not vertex_costs[dest]:  # destination not reachable
        return None  # or raise some Exception
    cost = min(vertex_costs[dest])

    path = [(dest, cost)]  # build in reverse order
    v, c = dest, cost
    while (v, c) != (source, 0):
        u = vertex_costs_from[v, c]
        c -= edges[u, v]
        v = u
        path.append((v, c))
    return path[::-1]  # return the reversed path
And the output for some graphs (edges and their weight / path / cost at each point of the path; sorry, no nice images):
AB+ BC+ CD+ DA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MH-
A B C D A X Y H I J K L M H
0 1 2 3 4 5 6 7 6 5 4 3 2 1
AB+ BC+ CD+ DE+ EF+ FG+ GA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MH-
A B C D E F G A B C D E F G A B C D E F G A X Y H I J K L M H I J K L M H I J K L M H I J K L M H
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
AB+ BC+ CD+ DE+ EF+ FG+ GA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MN- NH-
A X Y H
0 1 2 3
AB+ BC+ CD+ DE+ EF+ FG+ GA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MN- NO- OP- PH-
A B C D E F G A B C D E F G A B C D E F G A B C D E F G A B C D E F G A B C D E F G A X Y H I J K L M N O P H I J K L M N O P H I J K L M N O P H I J K L M N O P H I J K L M N O P H
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 44 43 42 41 40 39 38 37 36 35 34 33 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
Here's the code to produce that output:
def find_path(edges, source, dest):
    from itertools import chain
    print(edges)
    edges = dict(((u, v), 1 if c == "+" else -1) for u, v, c in edges.split())
    vertices = set(chain(*edges))
    path = f(vertices, edges, source, dest)
    path_v, path_c = zip(*path)
    print(" ".join("%2s" % v for v in path_v))
    print(" ".join("%2d" % c for c in path_c))

source, dest = "AH"

edges = "AB+ BC+ CD+ DA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MH-"
# uv+ means edge from u to v exists and has cost 1, uv- = cost -1
find_path(edges, source, dest)

edges = "AB+ BC+ CD+ DE+ EF+ FG+ GA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MH-"
find_path(edges, source, dest)

edges = "AB+ BC+ CD+ DE+ EF+ FG+ GA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MN- NH-"
find_path(edges, source, dest)

edges = "AB+ BC+ CD+ DE+ EF+ FG+ GA+ AX+ XY+ YH+ HI- IJ- JK- KL- LM- MN- NO- OP- PH-"
find_path(edges, source, dest)
As Kaganar notes, we basically have to make some assumption in order to get a polytime algorithm. Let's assume that the edge lengths are in {-1, 1}. Given the graph, construct a weighted context-free grammar that recognizes valid paths from source to destination with weight equal to the number of excess 1 edges (it generalizes the grammar for balanced parentheses). Compute, for each nonterminal, the cost of the cheapest production by initializing everything to infinity or 1, depending on whether there is a production whose RHS has no nonterminal, and then relaxing n - 1 times, where n is the number of nonterminals.
I would use recursive brute force here:
something like the following (pseudocode, to keep it language agnostic).
You will need:
a 2D array of bools showing where you CAN and where you CAN'T go; this should NOT include "forbidden" positions, like those beyond a negative edge; you can add a vertical and horizontal 'translation' to make sure it starts at [0][0]
a (static) integer containing the shortest path length found so far
a 1D array with 2 slots holding the goal: [0] = x, [1] = y
you will do:
function(int xPosition, int yPosition, int steps)
{
    if(You are at target AND steps < Shortest Path)
        Shortest Path = steps
    if(This Position is NOT legal)
        /*exit function*/
    else
        /*try to move in every legal DIRECTION, not caring whether the result is legal or not,
          but always adding 1 to steps, like using:*/
        function(xPosition+1, yPosition, steps+1);
        function(xPosition-1, yPosition, steps+1);
        function(xPosition, yPosition+1, steps+1);
        function(xPosition, yPosition-1, steps+1);
}
Then just run it with:
function(StartingX, StartingY, 0);
The shortest path length will be contained in the static external int.
I would like to clarify a few points:
yes there can be negative weight cycles.
n is the number of edges.
weights are arbitrary, not just +1/-1.
Assume that an O(n)-length path exists if the problem has a solution. (n is the number of edges)
Although people have shown that no fast solution exists (unless P=NP),
I think for most graphs (95%+) you should be able to find a solution fairly quickly.
I take advantage of the fact that if there are cycles then there are usually many solutions, and we only need to find one of them. There are probably some glaring holes in my ideas, so please let me know.
Ideas:
1. Find the negative cycle that is closest to the destination; denote the shortest distance between the cycle and the destination as d(end,negC).
(I think this is possible; one way might be to use Floyd-Warshall to detect a pair (i,j) lying on a negative cycle, and then breadth-first search from the destination until you hit something that is connected to a negative cycle.)
2. Find the closest positive cycle to the start node; denote its distance from the start as d(start,posC).
(I argue that in 95% of graphs you can find these cycles easily.)
Now we have cases:
a) Both the positive and the negative cycle were found:
The answer is d(end,negC).
b) No cycles were found:
Simply use a shortest path algorithm?
c) Only one of the cycles was found. We note that these two cases are the same by symmetry (e.g. if we negate the weights and swap start/end we get the same problem). I'll just consider the case where a positive cycle was found.
Find the shortest path from start to end without going around the positive cycle (perhaps using a modified breadth-first search or something). If no such path exists (one whose running sum never goes negative), then it gets a bit tricky: we have to do laps of the positive cycle (and perhaps some fraction of a lap).
If you just want an approximate answer, work out the shortest path from the positive cycle to the end node, which will usually be some negative number. Calculate the number of laps required to overcome this negative value, plus the distance from the cycle's entry point to its exit point. To do better, perhaps there was another node in the cycle from which you should have exited. To handle that you would need to calculate the smallest negative distance from every node in the cycle to the end node, and then it turns into a group-theory / number-theory type problem: do as many laps of the cycle as you want until you get just above one of these numbers.
Good luck, and hopefully my solution works for most cases.
The current assumptions are:
yes there can be negative weight cycles.
n is the number of edges.
Assume that an O(n)-length path exists if the problem has a solution.
+1/-1 edge weights.
We may assume without loss of generality that the number of vertices is at most n.
Recursively walk the graph and remember the cost values for each vertex. Stop if the cost was already remembered for the vertex, or if the cost would be negative.
If, after all these steps, the destination has not been reached, there is no solution.
Otherwise, for each of the O(n) vertices we have remembered at most O(n) different cost values, and for each of these O(n ^ 2) combinations there might have been up to n unsuccessful attempts to walk to other vertices. All in all, it's O(n ^ 3). q.e.d.
Update: Of course, there is something fishy again. What does assumption 3 mean: an O(n)-length path exists if the problem has a solution? Any algorithm has to detect whether that holds, because it also has to report if there is no solution. But it's impossible to detect, because it's not a property of the individual graph the algorithm works on (it is asymptotic behaviour).
(It is also clear that not all graphs for which the destination can be reached have a solution path of length O(n): take a chain of m edges of weight -1, preceded by a simple cycle of m edges and total weight +1.)
[I now realize that most of the Python code from my other answer (attempt for the first version of the problem) can be reused.]
Step 1: Note that your answer will be at most 2*n (if it exists).
Step 2: Create a new graph whose vertices are pairs [vertex][cost]. (2*n^2 vertices)
Step 3: Note that the new graph will have all edge weights equal to one, and at most 2*n edges per [vertex][cost] pair.
Step 4: Do a DFS over this graph, starting from [start][0].
Step 5: Find the minimum k such that [finish][k] is accessible.
The total complexity is at most O(n^2) * O(n) = O(n^3).
EDIT: Clarification on Step 1.
If there is a positive cycle accessible from the start, you can pump the cost all the way up to n. From there you can walk to any accessible vertex over no more than n edges, each of weight +1 or -1, leaving you within the range [0; 2n].
Otherwise you'll walk either through negative cycles, or over no more than n edges of weight +1 that don't lie on a negative cycle, leaving you within the range [0; n].
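A minimal Python sketch of steps 2-5 (my own rendering; it explores the [vertex][cost] pairs with a BFS instead of a DFS, which finds the minimum k just as well):

from collections import deque

def min_final_cost(adj, start, finish, n):
    # adj: dict u -> list of (v, w) pairs with w in {+1, -1}; n = number of edges
    # Search the product graph of [vertex][cost] pairs; by the argument in
    # the EDIT above, a cost beyond 2*n is never needed.
    seen = {(start, 0)}
    queue = deque([(start, 0)])
    while queue:
        u, c = queue.popleft()
        for v, w in adj[u]:
            nc = c + w
            if 0 <= nc <= 2 * n and (v, nc) not in seen:  # prefix sums stay >= 0
                seen.add((v, nc))
                queue.append((v, nc))
    costs = [k for (v, k) in seen if v == finish]
    return min(costs) if costs else None  # None means no valid path exists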