Is there any algorithm which can find all critical paths in a DAG?

I'm writing a paper about some graph algorithms used in CPM, and I need the name of an algorithm that can find all critical paths in a DAG. I have looked at the Floyd–Warshall algorithm, but I don't know whether it could help find all critical paths in a DAG. If a critical path and a longest path are the same thing, then Floyd–Warshall could be modified to find all longest, rather than shortest, paths in a graph. And even if it can be modified, is there a better way of finding all critical paths?

For finding one critical path, Floyd–Warshall with negated weights is markedly inferior to the following folklore (?) algorithm, which computes in linear time the length of the longest path from each vertex.
for each vertex v in topological order (sinks before sources):
    longest-path(v) := max(0, max over arcs v->w of length(v->w) + longest-path(w))
The Floyd–Warshall version would, after computing the distance array, set longest-path(v) := the maximum over all vertices w of -distance(v, w).
To find all of the critical paths, compute the longest-path array and, retaining only those arcs v->w such that longest-path(v) = length(v->w) + longest-path(w), enumerate all paths in the residual DAG using recursion.
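A minimal sketch of this in Python, assuming the DAG is given as an adjacency list mapping each vertex to a list of (successor, arc-length) pairs; the function names here are illustrative, not standard:

```python
def all_critical_paths(dag):
    # Topological order via DFS post-order: all successors of a vertex are
    # appended before the vertex itself, so sinks come before sources.
    order, seen = [], set()
    def dfs(v):
        seen.add(v)
        for w, _ in dag[v]:
            if w not in seen:
                dfs(w)
        order.append(v)
    for v in dag:
        if v not in seen:
            dfs(v)

    # longest[v] = length of the longest path starting at v (0 at sinks).
    longest = {}
    for v in order:                      # sinks before sources
        longest[v] = max([0] + [l + longest[w] for w, l in dag[v]])

    # Keep only "tight" arcs, i.e. arcs lying on some longest path, then
    # enumerate all paths in this residual DAG by recursion.
    tight = {v: [(w, l) for w, l in dag[v] if longest[v] == l + longest[w]]
             for v in dag}
    total = max(longest.values())
    paths = []
    def walk(v, path):
        if longest[v] == 0:              # reached the end of a critical path
            paths.append(path)
        for w, _ in tight[v]:
            walk(w, path + [w])
    for v in dag:
        if longest[v] == total:          # critical paths attain the maximum
            walk(v, [v])
    return total, paths
```

Note that the number of critical paths can be exponential in the size of the DAG, so the enumeration step cannot be worst-case efficient even though computing the longest-path array is linear.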

This can be done with Floyd–Warshall by just negating all the weights (since it's a DAG, there won't be any negative cycles). However, Floyd–Warshall is O(n^3), while a faster linear-time algorithm exists.
From Wikipedia
A longest path between two given vertices s and t in a weighted graph G is the same thing as a shortest path in a graph −G derived from G by changing every weight to its negation. Therefore, if shortest paths can be found in −G, then longest paths can also be found in G.[4] For most graphs, this transformation is not useful because it creates cycles of negative length in −G. But if G is a directed acyclic graph, then no negative cycles can be created, and longest paths in G can be found in linear time by applying a linear time algorithm for shortest paths in −G, which is also a directed acyclic graph.[4] For instance, for each vertex v in a given DAG, the length of the longest path ending at v may be obtained by the following steps:

1. Find a topological ordering of the given DAG.
2. For each vertex v of the DAG, in the topological ordering, compute the length of the longest path ending at v by looking at its incoming neighbors and adding one to the maximum length recorded for those neighbors. If v has no incoming neighbors, set the length of the longest path ending at v to zero. In either case, record this number so that later steps of the algorithm can access it.

Once this has been done, the longest path in the whole DAG may be obtained by starting at the vertex v with the largest recorded value, then repeatedly stepping backwards to its incoming neighbor with the largest recorded value, and reversing the sequence of vertices found in this way.
Note that finding all longest paths is more problematic, since there may be exponentially many of them. Therefore there is no worst-case-efficient way to list them all, though they can easily be counted or represented implicitly.
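The quoted steps can be sketched in Python for an unweighted DAG, given as an adjacency list of outgoing arcs plus a topological order (incoming neighbors are derived first); the names are illustrative:

```python
def longest_path_unweighted(dag, topo_order):
    # Derive incoming neighbors from the outgoing adjacency list.
    incoming = {v: [] for v in dag}
    for u in dag:
        for v in dag[u]:
            incoming[v].append(u)

    # Length of the longest path ending at each vertex, in topological order.
    length = {}
    for v in topo_order:
        length[v] = max((length[u] + 1 for u in incoming[v]), default=0)

    # Start at the vertex with the largest recorded value and repeatedly step
    # back to the incoming neighbor with the largest value, then reverse.
    v = max(length, key=length.get)
    path = [v]
    while incoming[v]:
        v = max(incoming[v], key=length.get)
        path.append(v)
    path.reverse()
    return path
```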

Related

Modified Dijkstra's Algorithm

We are given a directed graph with edge weights W lying between 0 and 1. The cost of a path from the source to the target node is the product of the weights of the edges on it. I would like an algorithm that finds the minimum-cost path in polynomial time, or failing that, a usable heuristic.
I thought of taking the logarithms of the edge weights (using their absolute values) and then applying Dijkstra to the resulting graph, but I suspect there will be precision problems.
Is there a better way, or can I improve upon the log approach?
In Dijkstra's algorithm, when you visit a node, you know that there is no shorter path to it. This is not true if you multiply edge weights between 0 and 1: visiting more vertices makes the product smaller.
Basically this is equivalent to finding the longest path in a graph. This can also be seen using your idea of taking logarithms, as the logarithm of a number between 0 and 1 is negative. If you take the absolute values of the logarithms of the weights, the longest path corresponds to the shortest path in the multiplicative graph.
If your graph is acyclic there is a straightforward algorithm (modified from Longest path problem).
1. Find a topological ordering of the DAG.
2. For each vertex, store the cost of the best path found so far; initialize it to one.
3. Travel through the DAG in topological order, starting from your start vertex. At each vertex, check all of its children and update a child's cost if the product through the current vertex is smaller. Also store the predecessor through which each vertex is reached at the lowest cost.
4. After you reach your final vertex, recover the "shortest" path by walking back from the end vertex using the stored predecessors.
Of course, if your graph is not acyclic, you can always drive the cost toward zero by repeating a loop indefinitely.
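The steps above can be sketched directly in Python, multiplying costs in topological order rather than taking logarithms (so there are no precision concerns from the log transform); the function name and graph representation are illustrative:

```python
def min_product_path(dag, topo_order, start, end):
    # dag maps each vertex to a list of (child, weight) pairs, 0 < weight < 1.
    cost = {v: float('inf') for v in dag}
    parent = {}
    cost[start] = 1.0                         # empty product
    for v in topo_order:
        if cost[v] == float('inf'):           # unreachable from start
            continue
        for w, weight in dag[v]:
            if cost[v] * weight < cost[w]:    # found a smaller product
                cost[w] = cost[v] * weight
                parent[w] = v
    # Recover the path by walking the stored predecessors back from `end`.
    path = [end]
    while path[-1] != start:
        path.append(parent[path[-1]])
    return cost[end], path[::-1]
```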

Algorithm for finding shortest "k stride" path in graph

The graph is unweighted and undirected.
Given two vertices s and t, and stride k, I want to find the shortest path between them that is:
Of length divisible by k
Every k "steps" in the path, counting from the first, are a simple path.
Meaning, the shortest non-simple path that can only "jump" k vertices at every step (exactly k), and every "jump" must be a simple sub-path.
I think this problem is equivalent to constructing a second graph G', where (u, v) is an edge in G' if there exists a simple path of length k between u and v in G, because a BFS on G' would then give the required path -- but I haven't been able to construct such a graph in reasonable time (apparently the construction itself is NP-hard). Is there an alternative algorithm?
The problem you're describing is NP-hard overall because you can reduce the Hamiltonian path problem to it. Specifically, given an n-node graph and a pair of nodes s and t, you can determine whether there's a Hamiltonian path from s to t by checking whether the shortest (n-1)-stride path from s to t has length exactly n-1, since in that case there's a simple path from s to t passing through every node exactly once. As a result, if k is allowed to be large relative to n, you shouldn't expect a particularly efficient algorithm for this problem.
In case it helps, there are some fast algorithms for finding long simple paths in graphs that might work really well for reasonable values of k. Look up the color-coding technique, in particular, as an example of this.

The shortest combination of paths that starts and ends with a single node and covers all points in an undirected graph

I need an algorithm(k, s) where
k is the number of paths
s is the starting and ending node
and, given n nodes in an undirected graph in which all nodes are linked to each other, returns k paths that together traverse all nodes such that the sum of the distances covered by the k paths is minimal.
E.g. Given n = 10, algorithm(2,5) might give me an array of two arrays such that the sum of the distances covered by the two paths are the shortest and that all nodes are traversed.
[[5,1,2,3,10,5],[5,4,6,7,8,9,5]]
Dijkstra's algorithm finds the shortest path from one node to another, but not the shortest combination of k paths.
Yen's algorithm finds k number of shortest paths from one node to another, but not the shortest combination of k paths.
What algorithm can help me find the shortest combination of k paths that starts and end with node s such that all n nodes are covered?
What you are describing is essentially the classical Travelling Salesman Problem (with k routes from a single depot, a vehicle-routing variant), for which many optimization techniques exist. One such technique is Ant Colony Optimization.
There are many libraries that implement ACO, and you could find one for Ruby. As it stands, there is no easy exact solution to the Travelling Salesman Problem with a classical (mathematical) approach.
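As a rough baseline (not ACO, and far from optimal), a greedy nearest-neighbor heuristic can split the nodes among k routes starting and ending at the depot s; everything here, from the function name to the full distance matrix `dist`, is a hypothetical sketch:

```python
def greedy_k_routes(dist, n, k, s):
    # dist is an n-by-n matrix of pairwise distances; s is the depot.
    unvisited = set(range(n)) - {s}
    routes = [[s] for _ in range(k)]
    while unvisited:
        # Extend whichever route can reach some unvisited node most cheaply.
        # Note: with this rule a route may stay at the depot if the others
        # absorb all nodes more cheaply.
        _, i, v = min((dist[r[-1]][v], i, v)
                      for i, r in enumerate(routes) for v in unvisited)
        routes[i].append(v)
        unvisited.remove(v)
    for r in routes:
        r.append(s)              # close each route back at the depot
    return routes
```

A metaheuristic such as ACO or local search (2-opt, node swaps between routes) would then improve on this starting solution.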

Does a polynomial time shortest path algorithm exist when a negative cycle is reachable from the source and the target is reachable from it?

I'm not asking for an algorithm to check for the existence of a negative cycle in a graph (Bellman–Ford or Floyd–Warshall can do that), but rather whether there exists a polynomial-time algorithm to find the shortest path between two points when the graph contains at least one negative cycle that is reachable from the source vertex and from which the target vertex can be reached.
If you are looking for a path (no repeating vertices) then this problem is NP hard.
You can reduce longest path problem to this one by simply multiplying weights by -1.
The main difference between the classical shortest path problem (only positive edge weights, which is in P) and the longest path problem (which is NP-complete) is the following:
All polynomial shortest-path algorithms basically calculate the shortest walk, but with positive edge weights the shortest walk is also the shortest path (repeating edges cannot decrease the length of a walk).
In the longest path problem, you have to somehow check for uniqueness of the vertices on the path.
When there is a negative cycle, the Bellman–Ford algorithm calculates the length of the shortest walk that has at most N edges.
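That last point can be made concrete with a sketch of Bellman–Ford that relaxes each round against the previous round's array, so that after i passes `dist[v]` is exactly the length of the shortest walk using at most i edges, which stays well defined even with a negative cycle present; the function name is illustrative:

```python
def bellman_ford_bounded(edges, n, source, max_edges):
    # edges is a list of (u, v, weight) arcs over vertices 0..n-1.
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    for _ in range(max_edges):
        new = dist[:]                    # relax against last round only, so
        for u, v, w in edges:            # each pass adds at most one edge
            if dist[u] + w < new[v]:
                new[v] = dist[u] + w
        dist = new
    return dist
```

Raising `max_edges` past n-1 lets the walk go around a negative cycle, so the values keep decreasing, which is exactly why this no longer yields shortest simple paths.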

Longest Simple Path

So, I understand that the problem of finding the longest simple path in a graph is NP-hard, since you could then easily solve the Hamiltonian path problem by setting all edge weights to 1 and seeing whether the length of the longest simple path equals the number of vertices minus one.
My question is: What kind of path would you get if you took a graph, found the maximum edge weight, m, replaced each edge weight w with m - w, and ran a standard shortest path algorithm on that? It's clearly not the longest simple path, since if it were, then NP = P, and I think the proof for something like that would be a bit more complicated =P.
If you could solve shortest-path problems with negative weights, you could find a longest path; the two are equivalent, via replacing each weight w with -w.
The standard algorithm for negative weights is the Bellman–Ford algorithm. However, the algorithm will not work if there is a cycle whose edge weights sum to a negative value. In the graph you create, every cycle has a negative sum of weights, so the algorithm won't work. Unless, of course, you have no cycles, in which case you have a tree (or a forest) and the problem is solvable via dynamic programming.
If we replace a weight of w by m-w, which guarantees that all weights will be positive, then the shortest path can be found via standard algorithms. If the shortest path P in this graph has k edges then the length is k*m-w(P) where w(P) is the length of the path with the original weights. This path is not necessarily the longest one, however, of all paths with k edges, P is the longest one.
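A tiny brute-force check of this point, on a hypothetical 3-vertex DAG with two s-to-t paths: the m-w transform penalizes paths with many edges, so the shortest path in the transformed graph need not be the longest path in the original.

```python
# Two paths from s to t: a direct edge of weight 9, and a 2-edge path of
# total weight 10, so the 2-edge path is the longest one.
edges = {('s', 'a'): 5, ('a', 't'): 5, ('s', 't'): 9}
paths = [[('s', 't')], [('s', 'a'), ('a', 't')]]
m = max(edges.values())                                   # m = 9

orig_len = [sum(edges[e] for e in p) for p in paths]      # original lengths
trans_len = [sum(m - edges[e] for e in p) for p in paths] # after w -> m - w

longest_original = paths[orig_len.index(max(orig_len))]
shortest_transformed = paths[trans_len.index(min(trans_len))]
# The transform picks the 1-edge path even though the 2-edge path is longer.
```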
(Figure: an example graph, with its longest path drawn in red, and the same graph transformed by the w -> m - w rule; the original image link is broken.)
The graph above is transformed to the one below using your algorithm. The longest path is the red line in the original graph, and depending on how ties are broken and which algorithm you use, the shortest path in the transformed graph could be either the blue line or the red line. So transforming the edge weights using the constant you mention yields no significant result: this is why you cannot find the longest path using shortest-path algorithms, no matter how clever the transformation. A simpler transformation would be to negate all the edge weights and run the algorithm. I don't know if I have answered your question, but as far as path properties go, the transformed graph carries no useful information about the original distances.
However, this particular transformation is useful in other areas. For example, when you have more than one constraint, you can force the algorithm to select a particular edge in bipartite matching by adding a huge constant to the weights.
Edit: I have been told to add this statement: the graph above is not about physical distance, and the weights need not satisfy the triangle inequality. Thanks.
