Shortest path on graph with changing weights - algorithm

I was trying to solve a local programming contest question. The problem is basically about finding the shortest path in a weighted graph. I am pretty new to these types of problems and I thought I could use Dijkstra's algorithm. However, there is a small complication: certain edge weights change depending on the path taken so far.
Problem
There are two types of weights: normal weights and conditional weights (let's call them K). The condition is this: once you traverse an edge with weight K, every other edge of type K has weight 0. This complicates things, because the apparent shortest path can be beaten by a combination of edges with weights of type K.
Example
Below is an instance of this type of problem. If no weights changed their value, we could find the shortest path easily with Dijkstra. However, because the K weights do change, there is a shorter path: the weight of the edge C-D becomes 0 after moving through the edge A-C.
Question
How can I find the shortest path?
Can I use Dijkstra's algorithm here or is it better to use another algorithm like A* or BFS?

How many K's are there?
If it's only one, Dijkstra is good.
I will add that BFS does not handle weights well.
Reminder: Dijkstra finds the shortest path from a vertex to all vertices.
Run Dijkstra twice and define a different weight function for each run. In the first run the weight function for K edges is infinite; in the second run the weight function for K edges is 0.
Then compare the result of run1 with run2 + K.
This works because if the shortest path uses no K edge, the first run will find it; otherwise it uses K edges and the second run will find it. Either way the algorithm finds it.
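A minimal sketch of this two-run approach, assuming the graph is given as an adjacency list where each entry is a (neighbor, weight, is_K) tuple and the conditional weight K is a known constant (both are assumptions about the input format, not part of the original question):

```python
import heapq

def dijkstra(n, adj, source, k_weight):
    # Standard Dijkstra, except every K-type edge is charged k_weight
    # (float("inf") to forbid K edges, 0 to make them free).
    dist = [float("inf")] * n
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w, is_k in adj[u]:
            nd = d + (k_weight if is_k else w)
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def shortest_with_conditional_weights(n, adj, source, target, K):
    # Run 1: pretend K edges do not exist.
    run1 = dijkstra(n, adj, source, float("inf"))[target]
    # Run 2: all K edges are free, then pay K once for the first one used.
    run2 = dijkstra(n, adj, source, 0)[target] + K
    return min(run1, run2)
```

If the true shortest path uses no K edge, run 1 already returns it; if it uses at least one K edge, run 2 plus the single charge of K returns it, so the minimum of the two values is correct.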

Related

Is finding a positive simple s-t path on a graph NP-complete?

I'm trying to find something in the literature to help me solve this problem:
Given a graph with non-negative edge weights, find a simple positive s-t path of any size, i.e., a path that goes from s to t with length greater than 0. I was trying to use Dijkstra's algorithm to find a positive shortest path by avoiding relaxing the edges that have cost zero, but this is wrong. I don't want to believe that this problem is NP-complete =/. Sometimes it seems to be NP-complete, because there may be a case where the only positive path is the longest path. But that is such a specific case; I think that on this kind of instance the longest path problem is polynomially solvable.
Can someone help me identify this problem or show that it is NP-complete?
Observations: The only requirement is that it be some positive path (not necessarily the shortest or longest). If there are multiple positive paths, any one of them will do. If no such path exists, the algorithm should report that the graph has no positive path.
Dijkstra's algorithm produces an answer in polynomial time (so in P rather than NP) and provides a shortest path between any 2 points on the graph.
Please refer to Dijkstra's Algorithm Wiki for further details.
You don't really need any more proof.
I'm not quite sure how "relax the edges that have cost 0" has any impact on this question at all.
The problem is NP-complete even with only a single edge having weight 1 and all the others having weight 0. Reduce from Two Directed Paths:
Two Directed Paths
Input: A directed graph G and two pairs of vertices (s1, t1) and (s2, t2)
Output: Two vertex-disjoint paths, one from s1 to t1 and one from s2 to t2.
Create a new instance with all edges having weight 0, and add an edge from t1 to s2 with weight 1. A positive simple path from s1 to t2 must use that single weight-1 edge, so it decomposes into the required vertex-disjoint paths from s1 to t1 and from s2 to t2.
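A small sketch of this construction, assuming the Two Directed Paths instance arrives as a plain edge list (the function name and input format are just illustrative):

```python
def build_positive_path_instance(edges, s1, t1, s2, t2):
    # Every original edge gets weight 0.
    weighted = [(u, v, 0) for (u, v) in edges]
    # The single positive edge that a positive s1-t2 path is forced to use.
    weighted.append((t1, s2, 1))
    # The positive-path instance: source s1, target t2.
    return weighted, s1, t2
```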

Given a weighted undirected graph, how do I find a path whose total weight is close to a given value?

Suppose I have a weighted, undirected graph where each edge has a positive weight. I would like to find a simple path (no vertex appears in the path twice) from a given source node (s) to a target node (t) whose total sum of weights is close to a given value (P).
Even though it sounds like a well-studied problem, I couldn't find a satisfying solution. Many graph algorithms aim to find the shortest path (in terms of steps or cost), but not a path with a "matched" cost.
A naive solution would be to find all paths from s to t, compute the sum of weights for each path, and select the one closest to P. However, even counting all simple paths between two nodes in a graph is known to be #P-hard.
A possible solution could be a modified A* algorithm: for each node in the frontier we get the cost from the root to that node (g) and estimate the cost from that node to the goal (h), and then instead of choosing the node with the smallest g+h we choose the node with the smallest |P - (g+h)|. However, I am not sure if this is the best solution.
Another thought is inspired by linear programming, since the objective of this problem is sum(weights of a path from s to t) - P = 0. I know the shortest path problem can be formulated as a linear program, but I am not sure how to formulate this problem as one.
Please help, thanks in advance!
This problem is NP-hard via a reduction from the Hamiltonian path problem. In that problem, you are given a graph and a pair of nodes s and t and are asked whether there's a simple path from s to t that passes through all the nodes in the graph. You can solve the Hamiltonian path problem via your problem as follows:
Assign each edge in the graph weight 1.
Find the s-t simple path whose weight is as close to n-1 as possible, where n is the number of nodes in the graph.
Return whether this path has cost exactly n-1.
If the graph has a Hamiltonian path, then that path will have cost n-1. Otherwise, no Hamiltonian path exists, and the best path found will have a cost lower than n-1, since a simple path that is not Hamiltonian uses fewer than n-1 edges.
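A sketch of that reduction, assuming a hypothetical solver closest_weight_path(n, edges, s, t, P) for the problem in the question, which returns the simple s-t path (as a vertex list) whose total weight is closest to P, or None if no s-t path exists (the solver and its signature are invented for illustration):

```python
def has_hamiltonian_path(n, edges, s, t, closest_weight_path):
    # Step 1: give every edge weight 1.
    weighted = [(u, v, 1) for (u, v) in edges]
    # Step 2: ask for the simple s-t path whose weight is closest to n - 1.
    path = closest_weight_path(n, weighted, s, t, P=n - 1)
    # Step 3: with unit weights, cost == number of edges, so cost n - 1
    # means the path visits all n vertices, i.e. it is Hamiltonian.
    return path is not None and len(path) - 1 == n - 1
```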

Bellman-Ford algorithm's intermediate optimality property, is it correct?

The Bellman-Ford algorithm is famously known to solve the single-source shortest path problem (SSSPP) for any connected graph G(V,E) with additive edge weights, whenever a shortest path exists.
The basic implementation of the algorithm (e.g. the Bellman-Ford Wiki page) and its proof of correctness, when parallel relaxation of all edges is used, imply, as per my understanding, an interesting by-product, which I call the "intermediate optimality property" (and which might be very helpful for some applications like this question). It is stated below:
After k iterations, every node is labeled with its shortest path from the source, under the constraint that the path uses at most k edges.
Under the assumption that a simple shortest path exists, this ensures the SSSPP solution is produced for every destination node after at most |V|-1 iterations.
Is the above property correct?
According to some people (e.g. the comments just below this question), it is not correct, and I fail to understand why!
(UPDATED: I am using a parallel update on all the vertices.)
Suppose we have a simple graph A->B->C->D with all weights equal to 1.
If we visit the vertices in the order A,B,C,D then during the first iteration we will relax all of the following:
A->B, finds shortest path to B is 1
B->C, finds shortest path to C is 2
C->D, finds shortest path to D is 3.
So in the first iteration we have found the shortest path to D despite this path needing 3 edges.
However, if the vertices were visited in the order D,C,B,A it would take more iterations to find the shortest path to D.
In other words, after k iterations we will certainly have found any shortest path with #edges <= k; however, we may also have found a better route that uses more edges.
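To make the distinction concrete, here is a small sketch of the synchronous ("parallel") variant the question asks about, where iteration k only reads the distances from iteration k-1, contrasted with the in-place, order-dependent behaviour described in this answer (graph representation and names are illustrative):

```python
def bellman_ford_parallel(n, edges, source, iterations):
    # Synchronous relaxation: each iteration reads the previous iteration's
    # distances, so after k iterations dist[v] is the shortest path to v
    # using at most k edges -- exactly the intermediate optimality property.
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(iterations):
        new_dist = dist[:]                     # write into a fresh copy
        for u, v, w in edges:
            if dist[u] + w < new_dist[v]:
                new_dist[v] = dist[u] + w
        dist = new_dist
    return dist

# A -> B -> C -> D with unit weights (vertices numbered 0..3).
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1)]
print(bellman_ford_parallel(4, edges, 0, 1))   # [0, 1, inf, inf]
print(bellman_ford_parallel(4, edges, 0, 3))   # [0, 1, 2, 3]
# The usual in-place variant, scanning edges in the order above, would
# already report a distance of 3 to D after a single pass.
```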

What modifications could you make to a graph to allow Dijkstra's algorithm to work on it?

So I've been thinking: without resorting to another algorithm, what modifications could you make to a graph with negative edge weights to allow Dijkstra's algorithm to work on it and still get the correct answer at the end of the day? If it's even possible at all?
I first thought of adding a constant equal to the magnitude of the most negative weight to all weights, but I found that this messes everything up and changes the original single-source shortest paths.
Then I thought of traversing the graph, putting all weights that are less than zero into an array or something of the sort, and then multiplying them by -1. I think this would work (disregarding running time constraints), but maybe I'm looking at it the wrong way.
EDIT:
Another idea: how about permanently setting all negative weights to infinity, thereby ensuring that they are ignored?
So I just want to hear some opinions on this; what do you guys think?
It seems you're looking for something similar to Johnson's algorithm:
First, a new node q is added to the graph, connected by zero-weight edges to each of the other nodes.
Second, the Bellman–Ford algorithm is used, starting from the new vertex q, to find for each vertex v the minimum weight h(v) of a path from q to v. If this step detects a negative cycle, the algorithm is terminated.
Next the edges of the original graph are reweighted using the values computed by the Bellman–Ford algorithm: an edge from u to v, having length w(u,v), is given the new length w(u,v) + h(u) − h(v).
Finally, q is removed, and Dijkstra's algorithm is used to find the shortest paths from each node s to every other vertex in the reweighted graph.
Whatever algorithm you use, you should first check for negative cycles, and only if there is no negative cycle find the shortest paths.
In your case you need to run Dijkstra's algorithm just once. Also note that in Johnson's algorithm the Bellman–Ford step runs only from the newly added node q, not from all vertices.
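A compact sketch of that single-source variant, under the assumption that the graph is a plain edge list of (u, v, w) triples (the function name and representation are illustrative, not from the original answers):

```python
import heapq

def johnson_single_source(n, edges, source):
    INF = float("inf")
    # Bellman-Ford from a virtual node q connected to every vertex by a
    # zero-weight edge; starting with h[v] = 0 for all v has the same effect.
    h = [0] * n
    for _ in range(n):                 # n + 1 vertices counting q => n passes
        for u, v, w in edges:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    for u, v, w in edges:              # one extra pass detects negative cycles
        if h[u] + w < h[v]:
            raise ValueError("graph contains a negative cycle")

    # Reweight: w'(u, v) = w(u, v) + h(u) - h(v) is always non-negative.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))

    # One run of Dijkstra on the reweighted graph, then undo the shift.
    dist = [INF] * n
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return [d + h[v] - h[source] if d < INF else INF
            for v, d in enumerate(dist)]
```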

Longest Simple Path

So, I understand that the problem of finding the longest simple path in a graph is NP-hard, since you could then easily solve the Hamiltonian path problem by setting all edge weights to 1 and seeing whether the length of the longest simple path equals the number of vertices minus 1.
My question is: What kind of path would you get if you took a graph, found the maximum edge weight, m, replaced each edge weight w with m - w, and ran a standard shortest path algorithm on that? It's clearly not the longest simple path, since if it were, then NP = P, and I think the proof for something like that would be a bit more complicated =P.
If you could solve shortest path problems with negative weights, you could find a longest path; the two are equivalent. This could be done by putting a weight of -w instead of w.
The standard algorithm for negative weights is the Bellman-Ford algorithm. However, the algorithm will not work if there is a cycle in the graph whose edge weights sum to a negative value. In the graph you create, every cycle has a negative total weight, so the algorithm won't work. Unless of course you have no cycles, in which case you have a tree (or a forest) and the problem is solvable via dynamic programming.
If we instead replace a weight of w by m-w, which guarantees that all weights are non-negative, then the shortest path can be found via standard algorithms. If the shortest path P in this graph has k edges, then its length is k*m - w(P), where w(P) is the length of the path with the original weights. This path is not necessarily the longest one; however, of all paths with k edges, P is the longest one.
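A tiny worked example of why the result is not the longest path, assuming nothing beyond the w -> m - w substitution itself (the numbers are made up for illustration):

```python
def transformed_length(path_weights, m):
    # Under w -> m - w, a k-edge path of original weight w(P)
    # gets length k * m - w(P).
    return sum(m - w for w in path_weights)

m = 10  # assume the maximum edge weight in the graph is 10
print(transformed_length([10, 10], m))     # 2 edges, original weight 20 -> 0
print(transformed_length([9, 9, 9], m))    # 3 edges, original weight 27 -> 3
# The 3-edge path is longer in the original graph (27 > 20) but looks worse
# after the transformation, so a shortest-path run on the transformed graph
# would pick the 2-edge path instead.
```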
(Figure: an example graph and its transformed version, originally at http://dl.getdropbox.com/u/317805/path2.jpg)
The graph above is transformed into the one below using your transformation.
The longest path is the red line in the graph above. Depending on how ties are broken and which algorithm you use, the shortest path in the transformed graph could be either the blue line or the red line. So transforming the edge weights using the constant you mentioned yields no significant result, and this is why you cannot find the longest path using shortest-path algorithms, no matter how clever you are. A simpler transformation would be to negate all the edge weights and run the algorithm. I don't know if I have answered your question, but as far as path properties go, the transformed graph doesn't carry any useful information about the original distances.
However, this particular transformation is useful in other areas. For example, you could force the algorithm to select a particular edge in bipartite matching, when you have more than one constraint, by adding a huge constant to the weights.
Edit: I have been told to add this statement: the graphs above are not about physical distance, and the edge weights need not satisfy the triangle inequality. Thanks.
