Longest Simple Path - algorithm

So, I understand that the problem of finding the longest simple path in a graph is NP-hard, since you could then easily solve the Hamiltonian path problem by setting every edge weight to 1 and checking whether the length of the longest simple path equals n - 1, where n is the number of vertices.
My question is: What kind of path would you get if you took a graph, found the maximum edge weight, m, replaced each edge weight w with m - w, and ran a standard shortest path algorithm on that? It's clearly not the longest simple path, since if it were, then NP = P, and I think the proof for something like that would be a bit more complicated =P.

If you could solve shortest-path problems with negative weights, you could find a longest path; the two are equivalent, since you can simply replace each weight w with -w.
The standard algorithm for negative weights is the Bellman-Ford algorithm. However, the algorithm does not work if the graph contains a cycle whose edge weights sum to a negative value. In the negated graph you would create, every cycle with positive total weight becomes exactly such a negative cycle, so the algorithm won't work. Unless, of course, you have no cycles at all, in which case you have a tree (or a forest) and the problem is solvable with dynamic programming.
If we instead replace each weight w by m - w, which guarantees that all weights are non-negative, then the shortest path can be found with standard algorithms. If the shortest path P in this graph has k edges, then its length is k*m - w(P), where w(P) is the length of the path under the original weights. This path is not necessarily the longest one; however, of all paths with k edges, P is the longest.
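To make the k*m - w(P) observation concrete, here is a minimal Python sketch (the graph and its weights are invented for illustration) that applies the m - w transform and runs a plain Dijkstra on the result:

    import heapq

    def dijkstra(adj, source, target):
        """Plain Dijkstra for non-negative weights; returns (distance, path)."""
        dist = {source: 0}
        prev = {}
        pq = [(0, source)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == target:
                break
            if d > dist.get(u, float('inf')):
                continue
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(pq, (nd, v))
        path, node = [target], target
        while node != source:
            node = prev[node]
            path.append(node)
        return dist[target], path[::-1]

    # Invented example graph: (u, v, original weight).
    edges = [('a', 'b', 5), ('b', 'c', 1), ('a', 'c', 2), ('c', 'd', 4)]
    m = max(w for _, _, w in edges)

    # Transformed graph: every weight w becomes m - w (all non-negative).
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, m - w))
        adj.setdefault(v, []).append((u, m - w))

    d, path = dijkstra(adj, 'a', 'd')
    k = len(path) - 1                # number of edges on the path
    original_weight = k * m - d      # w(P) = k*m - (length in the transformed graph)
    print(path, original_weight)     # ['a', 'c', 'd'] 6

On this toy graph the transformed shortest path is a-c-d with original weight 6, even though a-b-c-d has weight 10, which illustrates why the transform does not recover the longest path.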

(figure: an example graph and its transformed version, http://dl.getdropbox.com/u/317805/path2.jpg)
The upper graph is transformed into the lower one using your algorithm.
The longest path is the red line in the upper graph. Depending on how ties are broken and which algorithm you use, the shortest path in the transformed graph could be either the blue line or the red line. So transforming the edge weights with the constant you mention yields nothing significant, which is why you cannot find the longest path with shortest-path algorithms, no matter how clever you are. A simpler transformation would be to negate all the edge weights and run the algorithm. I don't know whether I have answered your question, but as far as path properties go, the transformed graph carries no useful information about the original distances.
However, this particular transformation is useful in other areas. For example, you could force the algorithm to select a particular edge in bipartite matching, when you have more than one constraint, by adding a huge constant.
Edit: I have been told to add this statement: the graphs above are not just about physical distance, and the edge weights need not satisfy the triangle inequality. Thanks.

Related

Is finding a positive simple s-t path in a graph NP-complete?

I'm trying to find something in the literature to help me solve this problem:
Given a graph with non-negative edge weights, find a simple s-t positive path of any size, i.e., a path that goes from s to t with length greater than 0. I was trying to use Dijkstra's algorithm to find a positive shortest path by not relaxing the edges that have cost zero, but this is wrong. I don't want to believe that this problem is NP-complete =/. Sometimes it seems to be NP-complete, because there may be a case where the only positive path is the longest path. But this is such a specific case; I think that on this kind of instance the longest path problem is polynomially solvable.
Can someone help me identify this problem or show that it is NP-complete?
Observations: the only requirement is that it be some positive path (not necessarily the shortest or longest). If there are multiple positive paths, any one of them will do. If no such path exists, the algorithm should report that the graph has no positive path.
Dijkstra's algorithm produces an answer in polynomial time (so the shortest-path problem is in P) and provides a shortest path between any two points on the graph.
Please refer to Dijkstra's Algorithm Wiki for further details.
You don't really need any more proof.
I'm not quite sure how "relax the edges that have cost 0" has any impact on this question at all.
The problem is NP-complete even with only one single edge having value 1, and all the others having value 0. Reduce from Two Directed Paths:
Two Directed Paths
Input: A directed graph G and two pairs of vertices (s1, t1) and (s2, t2)
Output: Two vertex-disjoint paths, one from s1 to t1 and the other from s2 to t2.
Create a new instance with all edges having weight 0, and add an edge from t1 to s2 with weight 1. A positive simple path from s1 to t2 must use that edge, and its two halves are exactly the required vertex-disjoint paths, so the two problems stand or fall together.
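Spelling the reduction out as code may help. This is only a sketch, and positive_simple_path below is a hypothetical solver for the question's problem (a simple s-t path with total weight greater than 0), not an existing library call:

    def two_directed_paths(G, s1, t1, s2, t2):
        """G: dict mapping each vertex to the set of its out-neighbours (directed).
        Returns True iff G has vertex-disjoint paths s1 -> t1 and s2 -> t2."""
        # All original edges get weight 0; one extra edge t1 -> s2 gets weight 1.
        H = {u: set(vs) for u, vs in G.items()}
        H.setdefault(t1, set()).add(s2)
        weights = {(u, v): 0 for u in G for v in G[u]}
        weights[(t1, s2)] = 1
        # A positive simple s1 -> t2 path must use the t1 -> s2 edge, and its two
        # halves are exactly the vertex-disjoint paths we are looking for.
        return positive_simple_path(H, weights, s1, t2) is not None  # hypothetical solver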

Shortest path on graph with changing weights

I was trying to solve a local programming contest question. The problem is basically about finding the shortest path in a weighted graph. I am pretty new to these types of problems and I thought I could use Dijkstra's algorithm. However, there is a small complication: certain edge weights change depending on the path taken so far.
Problem
There are two types of weights: normal weights and conditional weights (let's call them K). The condition is this: once you move through an edge with weight K, all other weights of type K become 0. This creates a few more problems, because the apparent shortest path can be beaten by a combination of edges with weights of type K.
Example
Below is this type of problem. If no weights changed their value, we could find the shortest path easily with Dijkstra. However, once the K weights change their value, we can find a shorter path, because the weight of the edge C-D becomes 0 after moving through the edge A-C.
Question
How can I find the shortest path?
Can I use Dijkstra's algorithm here or is it better to use another algorithm like A* or BFS?
How many K's are there?
If it's only one, Dijkstra is good.
I will add that BFS does not handle weights well.
Reminder: Dijkstra finds the shortest path from a vertex to all vertexes.
Run Dijkstra twice and define a different weight function for each run. In the first run, the weight of every K edge is infinite (effectively, remove those edges). In the second run, the weight of every K edge is 0.
Then compare the result of run 1 with run 2 + K.
This is correct because if the shortest path uses no K edge, the first run will find it; otherwise it uses at least one K edge, and the second run will find it. Either way the algorithm finds it.
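A sketch of that two-run idea in Python; the representation (an edge list of (u, v, weight, is_k) tuples) and the function names are my own, and this only covers the single-K-value case the answer asks about:

    import heapq

    def dijkstra(adj, source, target):
        """Plain Dijkstra; returns the shortest distance, or infinity if unreachable."""
        dist = {source: 0}
        pq = [(0, source)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == target:
                return d
            if d > dist.get(u, float('inf')):
                continue
            for v, w in adj.get(u, []):
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return float('inf')

    def shortest_with_k_edges(edges, source, target, K):
        """edges: (u, v, weight, is_k) tuples for an undirected graph.
        Run 1: drop the K edges entirely (weight "infinity").
        Run 2: make every K edge free, then pay K once for the first one used."""
        without_k, k_free = {}, {}
        for u, v, w, is_k in edges:
            for a, b in ((u, v), (v, u)):
                if is_k:
                    k_free.setdefault(a, []).append((b, 0))
                else:
                    without_k.setdefault(a, []).append((b, w))
                    k_free.setdefault(a, []).append((b, w))
        run1 = dijkstra(without_k, source, target)
        run2 = dijkstra(k_free, source, target) + K
        return min(run1, run2)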

Given a weighted undirected graph, how do I find a path which has the total weights close to a given value?

Suppose I have a weighted, undirected graph. Each edge has a positive weight. I would like to find a simple path (no vertices appear in the path twice) from a given source node (s) to a target node (t) which has the total sum of weights close to a given value (P).
Even though it sounds like a well-studied problem, I couldn't find a satisfying solution. Many graph algorithms aim to find the shortest path (in terms of steps or cost), not a path whose cost matches a target value.
A naive solution would be finding all paths from s to t, compute sum of weights for each path and select the one that is close to P. However, finding all paths between two nodes in a graph is known to be #P-hard.
A possible solution could be a modified A* algorithm: for each node in the frontier we keep the cost from the root to that node (g) and estimate the cost from that node to the goal (h), and then, instead of choosing the node with the smallest g + h, we choose the node with the smallest |P - (g + h)|. However, I am not sure this is the best solution.
Another thought is inspired by linear programming, since the objective of this problem is sum(weights of a path from s to t) - P = 0. I know the shortest path problem can be formulated as a linear program, but I am not sure how to formulate this problem as one.
Please help, thanks in advance!
This problem is NP-hard via a reduction from the Hamiltonian path problem. In that problem, you are given a graph and a pair of nodes s and t and are asked whether there's a simple path from s to t that passes through all the nodes in the graph. You can solve the Hamiltonian path problem via your problem as follows:
Assign each edge in the graph weight 1.
Find the s-t simple path whose weight is as close to n-1 as possible, where n is the number of nodes in the graph.
Return whether this path has cost exactly n-1.
If the graph has a Hamiltonian path, then that path has cost n-1. Otherwise no path does, and the best path found will have a cost strictly lower than n-1.
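As code, the reduction is tiny; closest_weight_path below stands in for a hypothetical solver of the original problem and is not a real library function:

    def has_hamiltonian_path(graph, s, t):
        """graph: dict mapping each node to an iterable of its neighbours (unweighted).
        Gives every edge weight 1, asks the hypothetical closest_weight_path solver
        for an s-t simple path with weight as close to n - 1 as possible, and checks
        whether it hit n - 1 exactly (assumes the solver returns a vertex list)."""
        n = len(graph)
        weighted = {u: {v: 1 for v in nbrs} for u, nbrs in graph.items()}
        path = closest_weight_path(weighted, s, t, target_weight=n - 1)  # hypothetical
        return path is not None and len(path) - 1 == n - 1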

Algorithm for finding shortest "k stride" path in graph

The graph is unweighted and undirected.
Given two vertices s and t, and stride k, I want to find the shortest path between them that is:
Of length divisible by k
Every k "steps" in the path, counting from the first, form a simple path.
In other words, I want the shortest (possibly non-simple) path that advances exactly k vertices per "jump", where every jump must be a simple sub-path.
I think this problem is equivalent to constructing a second graph, G', where (u, v) is an edge in G' if there exists a simple path of length k between u and v in G, because a BFS scan of G' would then give the required path, but I haven't been able to construct such a graph in reasonable time (the construction itself appears intractable). Is there an alternative algorithm?
The problem you're describing is NP-hard overall because you can reduce the Hamiltonian path problem to it. Specifically, given an n-node graph and a pair of nodes s and t, you can determine whether there's a Hamiltonian path from s to t by checking if the shortest (n-1)-stride path from s to t has length exactly n-1, since in that case there's a simple path from s to t passing through every node once and exactly once. As a result, if k is allowed to be large relative to n, you shouldn't expect there to be a particularly efficient algorithm for solving this problem.
In case it helps, there are some fast algorithms for finding long simple paths in graphs that might work really well for reasonable values of k. Look up the color-coding technique, in particular, as an example of this.
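For what it's worth, when k is small the G' construction from the question is workable by brute force. Here is a rough Python sketch (names are mine; the inner DFS is exponential in k, so this is only sensible for small strides):

    from collections import deque

    def build_stride_graph(adj, k):
        """adj: dict vertex -> iterable of neighbours (unweighted, undirected).
        Returns G', where v is a successor of u iff G contains a simple path of
        exactly k edges from u to v."""
        gprime = {u: set() for u in adj}

        def dfs(start, node, depth, visited):
            if depth == k:
                gprime[start].add(node)
                return
            for nxt in adj[node]:
                if nxt not in visited:
                    visited.add(nxt)
                    dfs(start, nxt, depth + 1, visited)
                    visited.remove(nxt)

        for u in adj:
            dfs(u, u, 0, {u})
        return gprime

    def shortest_k_stride_length(adj, s, t, k):
        """BFS on G' counts the strides; multiply by k for the path length in G."""
        gprime = build_stride_graph(adj, k)
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                return dist[u] * k
            for v in gprime[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return None  # no k-stride path from s to t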

Is there any algorithm which can find all critical paths in DAG?

I'm writing a paper about some graph algorithms (which are used in CPM), and I need the name of an algorithm that can find all critical paths in a DAG. I have looked at the Floyd-Warshall algorithm, and I don't know whether it could help find all of the critical paths in a DAG. If the critical path and the longest path are the same thing, then the Floyd-Warshall algorithm could be modified to find all longest, rather than shortest, paths in a graph. And even if it can be modified, is there a better way of finding all critical paths?
For finding one critical path, Floyd-Warshall with negated weights is markedly inferior to the following folklore (?) algorithm, which computes in linear time the length of the longest path from each vertex.
for vertices v in topological order (sinks before sources):
    set longest-path(v) := the maximum of 0 and length(v->w) + longest-path(w) over all arcs v->w
The Floyd-Warshall version would instead set longest-path(v) := the maximum of -distance(v, w) over all vertices w, after computing the distance array.
To find all of the critical paths, compute the longest-path array and, retaining only those arcs v->w such that longest-path(v) = length(v->w) + longest-path(w), enumerate all paths in the residual DAG using recursion.
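A compact Python version of that idea, for concreteness; the graph representation (a successor dict plus an arc-weight dict) is my own choice:

    def all_critical_paths(arcs, weight):
        """arcs: dict vertex -> list of successors (must form a DAG).
        weight: dict mapping (v, w) arcs to their lengths.
        Returns every path whose total weight equals the maximum longest-path value."""
        longest = {}

        def lp(v):  # longest-path(v), memoised; recursion stands in for topological order
            if v not in longest:
                longest[v] = max((weight[(v, w)] + lp(w) for w in arcs.get(v, [])),
                                 default=0)
            return longest[v]

        for v in list(arcs):
            lp(v)

        # Residual DAG: keep only arcs lying on some longest path from their tail.
        tight = {v: [w for w in arcs.get(v, [])
                     if longest[v] == weight[(v, w)] + longest[w]]
                 for v in longest}

        def paths_from(v):  # enumerate all maximal tight paths starting at v
            if not tight[v]:
                return [[v]]
            return [[v] + rest for w in tight[v] for rest in paths_from(w)]

        best = max(longest.values())
        return [p for v in longest if longest[v] == best for p in paths_from(v)]

As noted further down, there can be exponentially many such paths, so the enumeration step can blow up on adversarial inputs.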
This can be done with Floyd-Warshall by just negating all the weights (since it's a DAG, there won't be any negative cycles). However, Floyd-Warshall is O(n^3), while a faster linear-time algorithm exists.
From Wikipedia
A longest path between two given vertices s and t in a weighted graph G is the same thing as a shortest path in a graph −G derived from G by changing every weight to its negation. Therefore, if shortest paths can be found in −G, then longest paths can also be found in G.[4] For most graphs, this transformation is not useful because it creates cycles of negative length in −G. But if G is a directed acyclic graph, then no negative cycles can be created, and longest paths in G can be found in linear time by applying a linear time algorithm for shortest paths in −G, which is also a directed acyclic graph.[4] For instance, for each vertex v in a given DAG, the length of the longest path ending at v may be obtained by the following steps:

Find a topological ordering of the given DAG. For each vertex v of the DAG, in the topological ordering, compute the length of the longest path ending at v by looking at its incoming neighbors and adding one to the maximum length recorded for those neighbors. If v has no incoming neighbors, set the length of the longest path ending at v to zero. In either case, record this number so that later steps of the algorithm can access it.

Once this has been done, the longest path in the whole DAG may be obtained by starting at the vertex v with the largest recorded value, then repeatedly stepping backwards to its incoming neighbor with the largest recorded value, and reversing the sequence of vertices found in this way.
Note that finding all longest paths is more problematic, since there might be exponentially many of them. There is therefore no worst-case efficient way to list them all, though they can easily be enumerated or represented implicitly.
