All pairs shortest path with varying weights - algorithm

Imagine you are given a weighted undirected complete graph with n nodes and non-negative weights Cij, where Cii = 0 and Cij > 0 for i != j. Suppose you have to find the maximal shortest path between any two nodes i and j. Here you can easily use Floyd-Warshall, or run Dijkstra n times, and then just find the maximum among all n^2 shortest paths.
Now assume that the Cij are not constant, but rather can take two values, Aij and Bij, where 0 <= Aij <= Bij, and Aii = Bii = 0. You again need to find the maximal shortest path, but under the constraint that m edges must take the value Bij and the others Aij (and if m > n^2, then all edges are equal to Bij). However, when evaluating a shortest path i -> p1 -> ... -> pk -> j, you are interested in the worst case: once the nodes on the path are fixed, the edges that take the value Bij are chosen so that the path value is maximal.
For example, suppose you have the four-node path i-k-l-j and, in the optimal solution, only one weight on that path is changed to Bij while the others take the value Aij. Let m1 = Bik + Akl + Alj, m2 = Aik + Bkl + Alj, m3 = Aik + Akl + Blj; then the value of that path is max{m1, m2, m3}. So, among all paths between i and j, you have to choose the one whose maximal value (defined as in this example) is minimal (which is a variant of the definition of shortest path). And you have to do this for all pairs i and j.
You are not given a constraint on how many weights to vary on each path, but rather the value of m, the number of weights that must be varied in the complete graph. The problem is to find the maximum value of the shortest path, as described.
So my question is: is this problem NP-hard, or does a polynomial-time solution exist?

Related

DAG Kth shortest path dynamic programming

This is not for homework. I am working through a practice test (not graded) in preparation for a final in a couple of weeks. I have no idea where to go with this one.
Let G = (V, E) be a DAG (directed acyclic graph) with n vertices and m edges.
Each edge (u, v) of E has a weight w(u, v) that is an arbitrary value (positive, zero, or negative).
Let k be an input positive integer.
A path in G is called a k-link path if the path has no more than k edges. Let s and t be two vertices of G. A k-link shortest path from s to t is defined as a k-link path from s to t that has the minimum total sum of edge weights among all possible k-link s-to-t paths in G.
Design an O(k(m + n)) time algorithm to compute a k-link shortest path from s to t.
Any help on the algorithm would be greatly appreciated.
Let dp[amount][currentVertex] give us the length of the shortest path in G which starts from s, ends at currentVertex and consists of amount edges.
make all values of dp unset
dp[0][s] = 0
for pathLength in (0, 1, ..., k-1) // (1)
    for vertex in V
        if dp[pathLength][vertex] is set
            for each u where (vertex, u) is in E // (2), other vertex of the edge
                if dp[pathLength+1][u] is unset or greater than dp[pathLength][vertex] + cost(vertex, u)
                    set dp[pathLength+1][u] = dp[pathLength][vertex] + cost(vertex, u)
best = unset
for pathLength in (0, 1, ..., k)
    if dp[pathLength][t] is set and (best is unset or dp[pathLength][t] < best)
        best = dp[pathLength][t]
The algorithm above will give you the length of the k-link shortest path from s to t in G. Its time complexity is dominated by loop (1). Loop (1) alone runs O(k) times, while its inner part (2) simply traverses the graph. If you use an adjacency list, (2) can be implemented in O(n + m). Therefore the overall complexity is O(k*(n+m)).
However, this will give you only the length of the path, and not the path itself. You can modify this algorithm by storing the previous vertex for each value of dp[][]. Thus, whenever you set the value of dp[pathLength+1][u] with the value of dp[pathLength][vertex] + cost(vertex, u) for some variables vertex, u, pathLength you would know that the previous used vertex was vertex. Therefore, you would store it like prev[pathLength+1][u] = vertex.
After that, you can recover the path itself. The idea is to go backwards using the links you created in prev:
pLen = the pathLength such that dp[pathLength][t] is minimal
curVertex = t
path = [] // empty array
while pLen >= 0
    insert curVertex at the beginning of path
    curVertex = prev[pLen][curVertex]
    pLen = pLen - 1
Now path stores the k-link shortest path from s to t in G.
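The pseudocode above translates fairly directly into Python. The following is a sketch (the function name and the (u, v, w) edge-list representation are my own); it relaxes every edge once per path length, matching the O(k(n + m)) bound, and records prev[][] to reconstruct the path:

```python
from math import inf

def k_link_shortest_path(n, edges, s, t, k):
    """Shortest s-to-t path using at most k edges in a DAG.

    n: number of vertices (labelled 0..n-1); edges: list of (u, v, w).
    Returns (length, path), or (inf, None) if t is unreachable in k edges.
    """
    # dp[p][v] = length of the shortest path from s to v using exactly p edges
    dp = [[inf] * n for _ in range(k + 1)]
    prev = [[None] * n for _ in range(k + 1)]
    dp[0][s] = 0

    for p in range(k):                      # loop (1) from the pseudocode
        for (u, v, w) in edges:             # loop (2): relax every edge once
            if dp[p][u] + w < dp[p + 1][v]:
                dp[p + 1][v] = dp[p][u] + w
                prev[p + 1][v] = u

    # take the best over all path lengths 0..k
    best_p = min(range(k + 1), key=lambda p: dp[p][t])
    if dp[best_p][t] == inf:
        return inf, None

    # walk the prev links backwards to recover the path itself
    path, cur, p = [], t, best_p
    while cur is not None:
        path.append(cur)
        cur = prev[p][cur]
        p -= 1
    return dp[best_p][t], path[::-1]
```

Negative edge weights need no special handling here, since the DP is indexed by edge count and a DAG has no cycles to get stuck in.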

Path finding algorithm on graph considering both nodes and edges

I have an undirected graph. For now, assume that the graph is complete. Each node has a certain value associated with it. All edges have a positive weight.
I want to find a path between any 2 given nodes such that the sum of the values associated with the path nodes is maximum while at the same time the path length is within a given threshold value.
The solution should be "global", meaning that the path obtained should be optimal among all possible paths. I tried a linear programming approach but am not able to formulate it correctly.
Any suggestions or a different method of solving would be of great help.
Thanks!
If you are looking for an algorithm on a general graph, your problem is NP-complete. Assume the path length threshold is n-1 and each vertex has value 1: if you can solve your problem, you can decide whether the given graph has a Hamiltonian path. In fact, if your maximum-value path has value n, then you have a Hamiltonian path. For good approximate solutions, I think you can use something like the Held-Karp relaxation.
This might not be perfect, but if the threshold value (T) is small enough, there's a simple algorithm that runs in O(n^3 T^2). It's a small modification of Floyd-Warshall.
d = int array with size n x n x (T + 1)
initialize all d[i][j][k] to -infty
for i in nodes:
    d[i][i][0] = value[i]
for e:(u, v) in edges:
    d[u][v][w(e)] = value[u] + value[v]
for t in 1 .. T:
    for k in nodes:
        for t' in 1 .. t-1:
            for i in nodes:
                for j in nodes:
                    d[i][j][t] = max(d[i][j][t],
                                     d[i][k][t'] + d[k][j][t-t'] - value[k])
The result is the pair (i, j) with the maximum d[i][j][t] for all t in 0..T
EDIT: this assumes that the paths are allowed to be not simple, they can contain cycles.
EDIT2: This also assumes that if a node appears more than once in a path, it will be counted more than once. This is apparently not what OP wanted!
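A direct Python transcription of this DP might look as follows (a sketch under the same assumptions: integer weights, walks may revisit nodes, and revisited nodes are counted again; the function name is my own):

```python
def best_value_path(n, value, edges, T):
    """Max total node value over i-to-j walks of edge-weight total <= T.

    value: list of node values; edges: list of (u, v, w) with integer w >= 1
    for an undirected graph on vertices 0..n-1.
    """
    NEG = float('-inf')
    # d[i][j][t] = best value of an i-to-j walk of total length exactly t
    d = [[[NEG] * (T + 1) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        d[i][i][0] = value[i]
    for (u, v, w) in edges:
        if w <= T:
            val = value[u] + value[v]
            d[u][v][w] = max(d[u][v][w], val)
            d[v][u][w] = max(d[v][u][w], val)   # undirected edge

    for t in range(1, T + 1):
        for k in range(n):
            for tp in range(1, t):              # split t into t' and t - t'
                for i in range(n):
                    for j in range(n):
                        if d[i][k][tp] > NEG and d[k][j][t - tp] > NEG:
                            cand = d[i][k][tp] + d[k][j][t - tp] - value[k]
                            if cand > d[i][j][t]:
                                d[i][j][t] = cand

    # the answer is the best d[i][j][t] over all pairs and all t in 0..T
    return max(d[i][j][t] for i in range(n) for j in range(n)
               for t in range(T + 1))
```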
Integer program (this may be a good idea or maybe not):
For each vertex v, let xv be 1 if vertex v is visited and 0 otherwise. For each arc a, let ya be the number of times arc a is used. Let s be the source and t be the destination. The objective is
maximize ∑v value(v) xv .
The constraints are
∑a length(a) ya ≤ threshold
∀v, ∑a has head v ya - ∑a has tail v ya = {-1 if v = s; 1 if v = t; 0 otherwise} (conserve flow)
∀v ≠ s, xv ≤ ∑a has head v ya (must enter a vertex to visit it)
∀v, xv ≤ 1 (visit each vertex at most once)
∀v ∉ {s, t}, ∀ cuts S that separate vertex v from {s, t}, xv ≤ ∑a such that tail(a) ∉ S ∧ head(a) ∈ S ya (benefit only from vertices not on isolated loops).
To solve, do branch and bound with the relaxation values. Unfortunately, the last group of constraints are exponential in number, so when you're solving the relaxed dual, you'll need to generate columns. Typically for connectivity problems, this means using a min-cut algorithm repeatedly to find a cut worth enforcing. Good luck!
If you just add the weight of a node to the weights of its outgoing edges, you can forget about the node weights. Then you can use any of the standard algorithms for the shortest path problem.
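As a minimal illustration of this trick (the helper name and the (u, v, w) edge representation are my own):

```python
def fold_node_weights(node_value, directed_edges):
    """Absorb node weights into edge weights.

    After reweighting, the edge-weight total of any path equals the old
    edge total plus the values of all nodes on the path except the last,
    so a standard shortest-path algorithm can be run on the result.
    """
    return [(u, v, w + node_value[u]) for (u, v, w) in directed_edges]
```

The value of the final node is not included, so either add it back at the end or fold node values into incoming edges instead, depending on the convention you need.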

All pairs shortest paths with dynamic programming

All,
I am reading about the relationship between all pairs shortest path and matrix multiplication.
Consider the multiplication of the weighted adjacency matrix with
itself - except, in this case, we replace the multiplication operation
in matrix multiplication by addition, and the addition operation by
minimization. Notice that the product of weighted adjacency matrix
with itself returns a matrix that contains shortest paths of length 2
between any pair of nodes.
It follows from this argument that A to the power of n contains all shortest paths.
Question number 1:
My question is: in a graph, a path between two nodes has at most n-1 edges, so on what basis is the author discussing paths of length "n"?
Following slides
www.infosun.fim.uni-passau.de/br/lehrstuhl/.../Westerheide2.PPT
On slide 10 it is mentioned as below.
dij(1) = cij
dij(m) = min( dij(m-1), min{1≤k≤n} { dik(m-1) + ckj } )   --> Eq 1
       = min{1≤k≤n} { dik(m-1) + ckj }                    --> Eq 2
Question 2: how did the author conclude Eq 2 from Eq 1?
In Cormen et al book on introduction to algorithms, it is mentioned as below:
What are the actual shortest-path weights delta(i, j)? If the graph contains no negative-weight cycles, then all shortest paths are simple and thus contain at most n - 1 edges. A path from vertex i to vertex j with more than n - 1 edges cannot have less weight than a shortest path from i to j. The actual shortest-path weights are therefore given by
delta(i,j) = dij(n-1) = dij(n) = dij(n+1) = ...
Question 3: in the above equation, how did the author come up with n and n+1 edges when we have at most n-1, and why does the above chain of equalities hold?
Thanks!
The n vs n-1 is just an unfortunate variable name choice. He should have used a different letter instead to be more clear.
A^1 has the shortest paths of length up to 1 (trivially)
A^2 has the shortest paths of length up to 2
A^k has the shortest paths of length up to k
Eq 2 does not directly come from Eq 1; Eq 2 is just the second term of Eq 1. I presume this is a formatting or copy-paste error (I can't check, as your ppt link is broken).
The author is just explicitly pointing out that you have nothing to gain by adding more than n-1 edges to the path:
A^(n-1) // the shortest paths of length up to (n-1)
is equal to A^n // the shortest paths of length up to n
is equal to A^(n+1) // the shortest paths of length up to (n+1)
...
This is just so that you can safely stop your computations at (n-1) and be sure that you have the minimum paths among all paths of all lengths. (This is kind of obvious but the textbook has a point in being strict here...)
In a graph we will be having at most n-1 edges between two nodes in a path; on what basis is the author discussing paths of length "n"?
You're confusing the multiple measures being discussed:
A^n represents the "shortest paths" (by weight) of length n between vertices.
"at most n-1 edges between two nodes" -- I presume in this case you're thinking of n as the size of the graph.
The graph could have hundreds of vertices but your adjacency matrix A^3 shows the shortest paths of length 3. Different n measures.
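To make the (min, +) product concrete, here is a small Python sketch (using inf for absent edges and 0 on the diagonal, as the text describes):

```python
from math import inf

def min_plus(A, B):
    """Matrix 'multiplication' with + replaced by min and * replaced by +."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Weighted adjacency matrix: 0 on the diagonal, inf where there is no edge.
A = [[0,   3, inf],
     [3,   0,   4],
     [inf, 4,   0]]

A2 = min_plus(A, A)        # shortest paths using at most 2 edges
A3 = min_plus(A2, A)       # n = 3, so A^(n-1) = A^2 already stabilizes
```

Here A2[0][2] is 7 (the two-edge path 0-1-2), and A3 equals A2, which illustrates why the computation can safely stop at A^(n-1).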

shortest path with specified number of edges

I'm looking for an algorithm that finds the shortest path between two vertices (i and j) in a graph, where the path must contain a specified number of edges, n. I have a dynamic program that looks at the shortest path to the destination with n-1 edges, but how can I be sure that the shortest path being found starts at i?
I guess the edges have different costs / lengths and that the constraint is that there are n edges, and among all paths from i to j that have exactly n individual edges, the goal is to find the one that has least total cost / length.
If you do this using dynamic programming, the recurrences are
spath(f, t, n) --- returns the shortest path from 'f' to 't' that has 'n' edges
spath(x, x, 0) = [x] --- the path that has only one vertex
spath(x, y, 0) = NO PATH --- when x != y
spath(f, t, n) = min-cost path over all x adjacent to t of:
    spath(f, x, n-1) + [t] --- x can be extended with t because edge (x, t) exists
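The recurrence can be evaluated bottom-up in O(n * m) time. A Python sketch (tracking only the costs, not the paths themselves; the adjacency-dict representation and names are my own):

```python
from math import inf

def spath_cost(adj, f, t, n):
    """Least cost of a walk from f to t using exactly n edges (inf if none).

    adj: dict mapping each vertex to a list of (neighbor, cost) pairs.
    """
    # dist[v] = cheapest cost from f to v using exactly the edges so far
    dist = {v: inf for v in adj}
    dist[f] = 0                            # base case: spath(x, x, 0)
    for _ in range(n):
        nxt = {v: inf for v in adj}        # costs with one more edge
        for v in adj:
            if dist[v] < inf:
                for (u, c) in adj[v]:
                    nxt[u] = min(nxt[u], dist[v] + c)
        dist = nxt
    return dist[t]
```

Because the only finite starting state is dist[f] = 0, every cost found belongs to a path that really starts at i (here f), which answers the asker's concern.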
Your wording is ambiguous: is n the number of edges in the graph or the number of hops in the path? If you mean the former, then any popular shortest-path algorithm such as Dijkstra's will work. If you mean the latter, then it's not an ordinary shortest path anymore: do n iterations of BFS starting at i and look for j in the nth frontier. If it's not there, then there's no path from i to j in n hops; otherwise walk back down the BFS frontiers to read off your path.

How to find the shortest simple path in a Tree in a linear time?

Here is a problem from Algorithms book by Vazirani
The input to this problem is a tree T with integer weights on the edges. The weights may be negative,
zero, or positive. Give a linear time algorithm to find the shortest simple path in T. The length of a
path is the sum of the weights of the edges in the path. A path is simple if no vertex is repeated. Note
that the endpoints of the path are unconstrained.
HINT: This is very similar to the problem of finding the largest independent set in a tree.
How can I solve this problem in linear time?
Here is my algorithm, but I wonder whether it runs in linear time, since it is nothing other than a depth-first search:
1. Traverse the tree (depth-first)
2. Keep the indexes (nodes)
3. Add up the values
4. Repeat (1) till the end of the tree
5. Compare the sums and print the path and sum
This problem is similar to this topic, but there is no definite answer there.
This problem is pretty much equivalent to the minimum sum subsequence problem, and can be solved in a similar manner by dynamic programming.
We will calculate the following arrays by using DF searches:
dw1[i] = minimum sum achievable by only using node i and its descendants.
pw1[i] = predecessor of node i in the path found for dw1[i].
dw2[i] = second minimum sum achievable by only using node i and its descendants,
         via a path that is edge-disjoint from the path found for dw1[i].
If you can calculate these, take min(dw1[k], dw1[k] + dw2[k]) over all k. This is because your path will take one of these basic shapes:

    k        k
    |   or  / \
    |      /   \
    |

All of which are covered by the sums we're taking.
Calculating dw1
Run a DFS from the root node. In the DFS, keep track of the current node and its father. At a node i with children d1, d2, ..., dk, set dw1[i] = min over each child d of min(dw1[d] + cost[i, d], cost[i, d]) (the second term covers the path consisting of the single edge i-d). Set dw1[i] = 0 for leaf nodes. Don't forget to update pw1[i] with the selected child.
Calculating dw2
Run a DFS from the root node. Do the same thing you did for dw1, except that when going from a node i to one of its children k, you only update dw2[i] if pw1[i] != k. You still call the function recursively for all children, however. It would look something like this in pseudocode:
df(node, father)
    dw2[node] = infinity
    for all children k of node
        df(k, node)
        if pw1[node] != k
            dw2[node] = min(dw2[node], dw1[k] + cost[node, k], cost[node, k])
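Putting the two passes together, a single post-order traversal suffices, since dw2 only needs the best and second-best downward candidate at each node. A Python sketch of the whole computation (iterative DFS to avoid recursion limits; names are my own, and the empty path of weight 0 is allowed, matching dw1 = 0 at leaves):

```python
def shortest_path_in_tree(n, edges):
    """Minimum-weight simple path in a tree, per the dw1/dw2 DP above.

    edges: list of (u, v, w) for a tree on vertices 0..n-1.
    """
    adj = [[] for _ in range(n)]
    for (u, v, w) in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    # iterative DFS from vertex 0 to get a parent array and a DFS order
    parent = [-1] * n
    order, stack, seen = [], [0], [False] * n
    seen[0] = True
    while stack:
        v = stack.pop()
        order.append(v)
        for (u, _) in adj[v]:
            if not seen[u]:
                seen[u] = True
                parent[u] = v
                stack.append(u)

    best = 0                    # empty path; any negative path beats it
    dw1 = [0] * n
    for v in reversed(order):   # children are processed before their parent
        d1 = d2 = 0             # best and second-best downward sums at v
        for (u, w) in adj[v]:
            if parent[u] == v:
                cand = min(dw1[u] + w, w)   # extend child's path, or stop
                if cand < d1:
                    d1, d2 = cand, d1
                elif cand < d2:
                    d2 = cand
        dw1[v] = d1
        best = min(best, d1 + d2)           # path bending (or ending) at v
    return best
```

Each vertex and edge is touched a constant number of times, so this is linear time, as the exercise asks.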
