Does a polynomial time shortest path algorithm exist given that both the source and target vertices are reachable from a negative cycle?

I'm not asking for an algorithm to check the existence of a negative cycle in a graph (Bellman-Ford or Floyd-Warshall can do that). Rather, I'm asking whether there exists a polynomial time algorithm to find the shortest path between two vertices when the graph contains at least one negative cycle that is reachable from the source vertex and from which the target vertex can be reached.

If you are looking for a path (no repeated vertices), then this problem is NP-hard.
You can reduce the longest path problem to it by simply multiplying all weights by -1.
The main difference between the classical shortest path problem (only positive edge weights, which is in P) and the longest path problem (which is NP-hard) is the following:
all the polynomial shortest path algorithms really compute a shortest walk, but with positive edge weights a shortest walk is also a shortest path, since revisiting vertices can never make a walk shorter.
In the longest path problem you additionally have to enforce that the vertices on the path are distinct.
When there is a negative cycle, Bellman-Ford still computes, after its usual N-1 relaxation rounds, the length of the shortest walk that uses at most N-1 edges; it just never converges to a shortest path.
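A minimal sketch of that behaviour (the function name and the edge-list representation are my own, not from the answer):

    def bellman_ford(n, edges, source):
        """edges: list of (u, v, w) for a directed graph on vertices 0..n-1.
        After k relaxation rounds, dist[v] is never larger than the length of
        the shortest source-to-v walk that uses at most k edges."""
        INF = float("inf")
        dist = [INF] * n
        dist[source] = 0
        for _ in range(n - 1):            # shortest walks with at most n-1 edges
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
        # one extra round: any further improvement proves a reachable negative cycle
        has_negative_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
        return dist, has_negative_cycle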

Related

Which of the following problems can be reduced to the Hamiltonian path problem?

I'm taking the Algorithms: Design and Analysis II class, and one of the questions asks:
Assume that P ≠ NP. Consider undirected graphs with nonnegative edge lengths. Which of the following problems can be solved in polynomial time?
Hint: The Hamiltonian path problem is: given an undirected graph with n vertices, decide whether or not there is a (cycle-free) path with n - 1 edges that visits every vertex exactly once. You can use the fact that the Hamiltonian path problem is NP-complete. There are relatively simple reductions from the Hamiltonian path problem to 3 of the 4 problems below.
1. For a given source s and destination t, compute the length of a shortest s-t path that has exactly n - 1 edges (or +∞, if no such path exists). The path is allowed to contain cycles.
2. Amongst all spanning trees of the graph, compute one with the smallest-possible number of leaves.
3. Amongst all spanning trees of the graph, compute one with the minimum-possible maximum degree. (Recall the degree of a vertex is the number of incident edges.)
4. For a given source s and destination t, compute the length of a shortest s-t path that has exactly n - 1 edges (or +∞, if no such path exists). The path is not allowed to contain cycles.
Notice that a Hamiltonian path is a spanning tree of a graph and only has two leaf nodes, and that any spanning tree of a graph with exactly two leaf nodes must be a Hamiltonian path. That means that the NP-Complete problem of determining whether a Hamiltonian path exists in a graph can be solved by finding the minimum-leaf spanning tree of the graph: the path exists if and only if the minimum-leaf spanning tree has exactly two leaves. Thus, problem 2 is NP-Complete.
Problem 3 is NP-Hard; here is a paper that proves that.
That means that, between problems 1 and 4, one is NP-Complete and the other is in P. It seems like problem 4 reduces trivially to the Hamiltonian path problem, but I'm not able to understand how allowing cycles makes problem 1 solvable. Or is it the other way around?
For the first problem you can use Dijkstra to compute the shortest even-length and odd-length walks. To this end, for every vertex store not a single minimum, but two: the minimum weight of a walk with an odd number of edges, and the minimum weight of a walk with an even number of edges. Once you have these two lengths, you can easily lengthen a walk by an even number of edges, since cycles are allowed. So the first problem is in P. A step-by-step algorithm (a sketch of step 1 follows the list) would be:
Find the shortest even-length and odd-length walks from s to t.
Take the one whose parity matches n - 1 and pad it to exactly n - 1 edges by traversing an edge on it back and forth the required number of times.
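A minimal sketch of step 1, Dijkstra over (vertex, parity) states (the function name and the adjacency-list representation are mine, not from the answer):

    import heapq

    def shortest_even_odd(adj, s):
        """adj[u] = list of (v, w) with w >= 0; an undirected edge appears in
        both endpoints' lists. Returns dist, where dist[v][p] is the minimum
        weight of an s-to-v walk whose number of edges has parity p
        (0 = even, 1 = odd)."""
        INF = float("inf")
        dist = {v: [INF, INF] for v in adj}
        dist[s][0] = 0                       # the empty walk is even
        heap = [(0, s, 0)]                   # (distance, vertex, parity)
        while heap:
            d, u, p = heapq.heappop(heap)
            if d > dist[u][p]:
                continue                     # stale heap entry
            for v, w in adj[u]:
                q = 1 - p                    # one more edge flips the parity
                if d + w < dist[v][q]:
                    dist[v][q] = d + w
                    heapq.heappush(heap, (d + w, v, q))
        return dist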

Modified Dijkstra's Algorithm

We are given a directed graph with edge weights lying between 0 and 1. The cost of a path from the source node to the target node is the product of the weights of the edges on that path. I would like an algorithm that finds the minimum-cost path in polynomial time, or failing that a reasonable heuristic.
I thought along the lines of taking logarithms of the edge weights (and using their absolute values) and then applying Dijkstra to this graph, but I suspect there will be floating-point precision problems.
Is there any better way, or can I improve upon the log approach?
In Dijkstra's algorithm, when you visit a node you know that there is no shorter route to that node. This no longer holds when you multiply edge weights between 0 and 1: visiting more vertices multiplies in more factors and makes the cost smaller.
Basically this is equivalent to finding the longest path in a graph. This can also be seen by using your idea of taking logarithms, since the logarithm of a number between 0 and 1 is negative. If you take the absolute values of the logarithms of the weights, the longest path under those weights corresponds to the shortest (minimum-product) path in the original multiplicative graph.
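As a concrete illustration of that transformation (the helper name is mine):

    import math

    def to_additive(edges):
        """Map multiplicative weights 0 < w < 1 to additive weights -log(w) > 0.
        Minimizing the product of the w's is the same as maximizing the sum of
        the -log(w)'s, i.e. a longest path problem on the transformed graph."""
        return [(u, v, -math.log(w)) for u, v, w in edges]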
If your graph is acyclic there is a straightforward algorithm (modified from Longest path problem).
Find a topological ordering of the DAG.
For each vertex, store the cost of the best path found so far; initialize the start vertex's cost to one (the empty product).
Traverse the DAG in topological order starting from your start vertex. At each vertex, check all of its children and update a child's cost if going through the current vertex yields a smaller cost. Also store the predecessor through which that lowest cost was reached.
After you reach your final vertex, you can recover the "shortest" path by walking back from the end vertex through the stored predecessors.
Of course, if your graph is not acyclic you can drive the cost arbitrarily close to zero by repeating a cycle, so no minimum exists.
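Here is a sketch of the acyclic case, working with the products directly (the function name, the adjacency-list representation, and the precomputed topological order are my assumptions):

    def min_product_path(order, adj, s, t):
        """order: the vertices in topological order (sources first);
        adj[u] = list of (v, w) with 0 < w <= 1.
        Returns (cost, path) minimizing the product of edge weights from s to t,
        or None if t is unreachable from s."""
        INF = float("inf")
        cost = {v: INF for v in order}
        pred = {v: None for v in order}
        cost[s] = 1.0                        # the empty product
        for u in order:
            if cost[u] == INF:
                continue                     # u is not reachable from s
            for v, w in adj[u]:
                if cost[u] * w < cost[v]:
                    cost[v] = cost[u] * w
                    pred[v] = u              # remember how v was reached cheapest
        if cost[t] == INF:
            return None
        path = [t]
        while path[-1] != s:                 # walk the stored predecessors back
            path.append(pred[path[-1]])
        return cost[t], list(reversed(path))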

Is it possible to use Dijkstra's Shortest Path Algorithm to find the shortest Hamiltonian path? (in Polynomial Time)

I've read that the problem of finding whether a Hamiltonian path exists in a graph is NP-Complete, and since Dijkstra's Shortest Path Algorithm runs in Polynomial Time, it cannot be modified to find the shortest Hamiltonian path. (Is this logic valid?)
But what if you are given two nodes (say A and Z) on an undirected graph (with all edges having non-negative costs), and it is given that there is at least one Hamiltonian path with the given nodes (A and Z) as end points. Given these specifications, would it now be possible to modify Dijkstra's algorithm to find the shortest Hamiltonian path with A and Z as endpoints? (in Polynomial Time)
Note: I'm only concerned with finding the shortest Hamiltonian path from two nodes specifically. For example, if there is a graph containing 26 nodes (labelled A to Z), what is the shortest path that passes through all points but starts at A and ends at Z. (I'm not concerned with finding other Hamiltonian paths with different endpoints, just A and Z)
Additional question: If the answer is "No" but there is another algorithm that can be used to solve this, what algorithm is it, and what is its time complexity?
(Note: This question has "hamiltonian-cycle" as a tag, even though I'm looking for a Hamiltonian PATH, because I do not have enough rep to create a "hamiltonian-path" tag. However, if A and Z are connected by exactly one edge, then the shortest Hamiltonian path can be found by finding the shortest Hamiltonian cycle and then removing the edge connecting A and Z.)
No, this is not possible. Your simplified problem is still NP-hard. Here is a reduction from the travelling salesman problem:
Given a graph (V, E), find the shortest tour that visits every vertex exactly once. Take an arbitrary vertex v in V and split it into two vertices, v_source and v_sink, each inheriting all of v's edges. Use your algorithm to find the shortest Hamiltonian path P from v_source to v_sink. P corresponds to the shortest cycle through v that visits every vertex of V exactly once, and since it is a cycle, the choice of starting vertex is irrelevant. Therefore P is also a solution to the travelling salesman problem.
The reduction is obviously polynomial time, so your problem is NP-hard.
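A sketch of the vertex-splitting step of that reduction (the representation, a dict of neighbour-to-weight dicts, and the names are mine):

    def split_vertex(adj, v):
        """adj: dict mapping each vertex to a dict {neighbour: weight} of an
        undirected weighted graph. Returns a new graph in which v is replaced
        by v_source and v_sink, each inheriting all of v's edges. A Hamiltonian
        v_source-to-v_sink path in the new graph corresponds to a tour through
        v in the original graph."""
        v_source, v_sink = (v, "source"), (v, "sink")
        new_adj = {u: dict(nbrs) for u, nbrs in adj.items() if u != v}
        new_adj[v_source] = {}
        new_adj[v_sink] = {}
        for u, w in adj[v].items():
            new_adj[v_source][u] = w
            new_adj[v_sink][u] = w
            new_adj[u].pop(v, None)          # replace u's edge to v ...
            new_adj[u][v_source] = w         # ... by edges to both copies
            new_adj[u][v_sink] = w
        return new_adj, v_source, v_sink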
But what if you are given two nodes (say A and Z) on an undirected graph (with all edges having non-negative costs), and it is given that there is at least one Hamiltonian path with the given nodes (A and Z) as end points. Given these specifications, would it now be possible to modify Dijkstra's algorithm to find the shortest Hamiltonian path with A and Z as endpoints? (in Polynomial Time)
How do you propose to modify it? That would only work if there were a single path between A and Z and it happened to visit all the other vertices of the graph. Otherwise, Dijkstra will terminate with some shorter path that visits only a subset of the nodes. If a Hamiltonian path between A and Z is guaranteed to exist, you could instead solve the longest path problem, but that is also NP-hard.

Why do all-pair shortest path algorithms work with negative weights?

I've been studying all-pairs shortest path algorithms recently, such as Floyd-Warshall and Johnson's algorithm, and I've noticed that these algorithms produce correct solutions even when a graph contains negative weight edges (but no negative weight cycles). For comparison, Dijkstra's algorithm (which is single-source shortest path) does not work with negative weight edges. What makes the all-pairs shortest path algorithms work with negative weights?
The Floyd-Warshall all-pairs shortest paths algorithm works for graphs with negative edge weights because its correctness does not depend on the edge weights being non-negative, while the correctness of Dijkstra's algorithm does rest on that assumption.
Correctness of Dijkstra's algorithm:
We maintain two sets of vertices at any step of the algorithm. Set A consists of the vertices to which we have already computed shortest paths; set B consists of the remaining vertices.
Inductive hypothesis: at each step we assume that all previous iterations are correct.
Inductive step: when we add a vertex V to set A and fix its distance to dist[V], we must prove that this distance is optimal. If it is not, there must be some other path to V that is shorter.
Suppose this other path leaves set A through some vertex X in set B.
Now, since dist[V] <= dist[X], any such path has length at least dist[V], unless the graph has negative edge lengths, in which case the part of the path after X could bring the total below dist[V].
Correctness of Floyd-Warshall's algorithm:
Any path from vertex S to vertex T may pass through some intermediate vertex U of the graph. Thus the shortest path from S to T can be computed as
min( shortest_path(S to U) + shortest_path(U to T) ) over all vertices U in the graph (together with the direct edge from S to T, if one exists).
As you can see, there is no requirement that the graph's edges be non-negative, as long as the sub-calls compute their paths correctly; and the sub-calls do compute them correctly as long as the base cases have been properly initialized.
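A minimal Floyd-Warshall sketch along those lines (the edge-list representation is an assumption on my part); note that nothing in it requires non-negative weights:

    def floyd_warshall(n, edges):
        """edges: list of (u, v, w) for a directed graph on vertices 0..n-1.
        Negative w is allowed; a negative cycle shows up as dist[v][v] < 0."""
        INF = float("inf")
        dist = [[INF] * n for _ in range(n)]
        for v in range(n):
            dist[v][v] = 0                    # base case: empty paths
        for u, v, w in edges:
            dist[u][v] = min(dist[u][v], w)   # base case: direct edges
        for k in range(n):                    # allow k as an intermediate vertex
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return dist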
Dijkstra's algorithm doesn't work with negative weight edges because it is based on the greedy assumption that once a vertex v has been added to the set S, d[v] already holds the minimum possible distance to v.
But suppose the last vertex added to S has outgoing negative weight edges: the decreases in distance those edges would cause are never applied, because S is already final.
The all-pairs shortest path algorithms, however, do capture those updates.
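A tiny concrete instance of that failure (the graph and weights are my own), reusing the floyd_warshall sketch above:

    edges = [(0, 1, 2), (0, 2, 3), (2, 1, -2)]   # one negative edge, no negative cycle
    dist = floyd_warshall(3, edges)
    print(dist[0][1])   # 1, via 0 -> 2 -> 1
    # Textbook Dijkstra from vertex 0 finalizes vertex 1 at distance 2 before it
    # processes vertex 2, so the relaxation of the edge 2 -> 1 is never applied
    # and it reports 2 instead of 1.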

Is there any algorithm which can find all critical paths in DAG?

I'm writing a paper about some graph algorithms (which are used in CPM), and I need the name of an algorithm which can find all critical paths in a DAG. I have looked at the Floyd-Warshall algorithm, and I don't know whether it could be helpful for finding all critical paths in a DAG. If a critical path and a longest path are the same thing, then the Floyd-Warshall algorithm could be modified to find all longest, rather than shortest, paths in a graph. And even if it can be modified, is there a better way of finding all critical paths?
For finding one critical path, Floyd--Warshall with negated weights is markedly inferior to the following folklore (?) algorithm, which computes in linear time the length of the longest path from each vertex.
for vertices v in topological order (sinks before sources):
    set longest-path(v) := the maximum of 0 and length(v->w) + longest-path(w) for all arcs v->w
The Floyd--Warshall version would set longest-path(v) := the maximum of -distance(v, w) for all vertices w after computing the distance array.
To find all of the critical paths, compute the longest-path array and, retaining only those arcs v->w such that longest-path(v) = length(v->w) + longest-path(w), enumerate all paths in the residual DAG using recursion.
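A sketch of that two-phase approach (the function name and the adjacency-list representation are mine; arc lengths are assumed positive, as is usual in CPM):

    def all_critical_paths(order, adj):
        """order: the vertices in topological order (sources first);
        adj[u] = list of (v, w) with w > 0.
        Returns (longest_path, critical), where longest_path[v] is the length of
        the longest path starting at v and critical lists every longest
        (critical) path of the DAG. Beware: there can be exponentially many."""
        # Phase 1: longest path from each vertex, processed sinks before sources.
        longest_path = {v: 0 for v in order}
        for u in reversed(order):
            for v, w in adj[u]:
                longest_path[u] = max(longest_path[u], w + longest_path[v])

        # Phase 2: keep only the tight arcs and enumerate paths in the residual DAG.
        tight = {u: [(v, w) for v, w in adj[u]
                     if longest_path[u] == w + longest_path[v]]
                 for u in order}
        best = max(longest_path.values())
        critical = []

        def extend(path):
            u = path[-1]
            if not tight[u]:                  # no tight arc left: path is complete
                critical.append(list(path))
                return
            for v, _ in tight[u]:
                path.append(v)
                extend(path)
                path.pop()

        for s in order:
            if longest_path[s] == best:       # critical paths start at maximal vertices
                extend([s])
        return longest_path, critical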
This can be done with Floyd Warshall by just negating all the weights (since it's a DAG, there won't be any negative cycles). However, Floyd Warshall is O(n^3), while a faster linear time algorithm exists.
From Wikipedia
A longest path between two given vertices s and t in a weighted graph G is the same thing as a shortest path in a graph −G derived from G by changing every weight to its negation. Therefore, if shortest paths can be found in −G, then longest paths can also be found in G.[4] For most graphs, this transformation is not useful because it creates cycles of negative length in −G. But if G is a directed acyclic graph, then no negative cycles can be created, and longest paths in G can be found in linear time by applying a linear time algorithm for shortest paths in −G, which is also a directed acyclic graph.[4] For instance, for each vertex v in a given DAG, the length of the longest path ending at v may be obtained by the following steps:
Find a topological ordering of the given DAG.
For each vertex v of the DAG, in the topological ordering, compute the length of the longest path ending at v by looking at its incoming neighbors and adding one to the maximum length recorded for those neighbors. If v has no incoming neighbors, set the length of the longest path ending at v to zero. In either case, record this number so that later steps of the algorithm can access it.
Once this has been done, the longest path in the whole DAG may be obtained by starting at the vertex v with the largest recorded value, then repeatedly stepping backwards to its incoming neighbor with the largest recorded value, and reversing the sequence of vertices found in this way.
Note that finding all longest paths is more problematic, since there may be exponentially many of them. There is therefore no worst-case efficient way to list them all, though they are easy to enumerate one by one or to represent implicitly.
