I have an undirected graph. For now, assume that the graph is complete. Each node has a certain value associated with it. All edges have a positive weight.
I want to find a path between any 2 given nodes such that the sum of the values associated with the path nodes is maximum while at the same time the path length is within a given threshold value.
The solution should be "global", meaning that the path obtained should be optimal among all possible paths. I tried a linear programming approach but am not able to formulate it correctly.
Any suggestions or a different method of solving would be of great help.
Thanks!
If you are looking for an algorithm for general graphs, your problem is NP-hard. Assume the path length threshold is n-1 and each vertex has value 1; if you could solve your problem, you could decide whether the given graph has a Hamiltonian path. In fact, the maximum-value path has value n exactly when a Hamiltonian path exists. I think you can use something like the Held-Karp relaxation to find a good solution.
This might not be perfect, but if the threshold value (T) is small enough, there's a simple algorithm that runs in O(n^3 T^2). It's a small modification of Floyd-Warshall.
d = int array with size n x n x (T + 1)
initialize all d[i][j][t] to -infinity

for i in nodes:
    d[i][i][0] = value[i]
for e:(u, v) in edges:
    d[u][v][w(e)] = value[u] + value[v]

for t in 1..T:
    for k in nodes:
        for t' in 1..t-1:
            for i in nodes:
                for j in nodes:
                    d[i][j][t] = max(d[i][j][t],
                                     d[i][k][t'] + d[k][j][t-t'] - value[k])
The result is the pair (i, j) with the maximum d[i][j][t] for all t in 0..T
EDIT: this assumes that the paths need not be simple; they may contain cycles.
EDIT2: This also assumes that if a node appears more than once in a path, it will be counted more than once. This is apparently not what OP wanted!
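For concreteness, here is a rough C++ sketch of the DP above (a sketch only, not a polished implementation). It assumes integer edge lengths stored in an adjacency matrix w, with -1 marking a missing edge, and uses the per-visit counting from EDIT2; all names are illustrative.

#include <algorithm>
#include <limits>
#include <vector>

// Sketch of the DP above: d[i][j][t] = best total node value of a walk from i
// to j whose edge lengths sum to exactly t. Walks may repeat vertices, and a
// vertex's value is counted once per visit (as in EDIT2).
long long bestValueWithinLength(const std::vector<std::vector<int>>& w,   // w[u][v] = edge length, -1 if absent
                                const std::vector<long long>& value, int T) {
    const int n = (int)value.size();
    const long long NEG = std::numeric_limits<long long>::min() / 4;
    std::vector<std::vector<std::vector<long long>>> d(
        n, std::vector<std::vector<long long>>(n, std::vector<long long>(T + 1, NEG)));

    for (int i = 0; i < n; ++i) d[i][i][0] = value[i];
    for (int u = 0; u < n; ++u)
        for (int v = 0; v < n; ++v)
            if (u != v && w[u][v] >= 0 && w[u][v] <= T)
                d[u][v][w[u][v]] = std::max(d[u][v][w[u][v]], value[u] + value[v]);

    for (int t = 1; t <= T; ++t)
        for (int k = 0; k < n; ++k)
            for (int tp = 1; tp < t; ++tp)                 // split the walk at k
                for (int i = 0; i < n; ++i)
                    for (int j = 0; j < n; ++j)
                        if (d[i][k][tp] > NEG && d[k][j][t - tp] > NEG)
                            d[i][j][t] = std::max(d[i][j][t],
                                d[i][k][tp] + d[k][j][t - tp] - value[k]);  // k counted once

    long long best = NEG;                                  // best over all pairs and all t <= T
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            for (int t = 0; t <= T; ++t)
                best = std::max(best, d[i][j][t]);
    return best;
}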
Integer program (this may be a good idea or maybe not):
For each vertex v, let x_v be 1 if vertex v is visited and 0 otherwise. For each arc a, let y_a be the number of times arc a is used. Let s be the source and t be the destination. The objective is
maximize ∑_v value(v) · x_v.
The constraints are
∑_a weight(a) · y_a ≤ threshold (stay within the length threshold)
∀v: ∑_{a has head v} y_a − ∑_{a has tail v} y_a = −1 if v = s; 1 if v = t; 0 otherwise (conserve flow)
∀v ≠ s: x_v ≤ ∑_{a has head v} y_a (must enter a vertex to visit it)
∀v: x_v ≤ 1 (visit each vertex at most once)
∀v ∉ {s, t}, ∀ cuts S that separate vertex v from {s, t}: x_v ≤ ∑_{a : tail(a) ∉ S ∧ head(a) ∈ S} y_a (benefit only from vertices not on isolated loops).
To solve, do branch and bound with the relaxation values. Unfortunately, the last group of constraints is exponential in number, so when you're solving the relaxed dual, you'll need to generate columns. Typically for connectivity problems, this means using a min-cut algorithm repeatedly to find a cut worth enforcing. Good luck!
If you just add the weight of a node to the weights of its outgoing edges, you can forget about the node weights. Then you can use any of the standard algorithms for the shortest path problem.
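As a small illustration of that transformation (a sketch with illustrative names; it treats each undirected edge {u,v} as the two arcs u->v and v->u):

#include <vector>

// Fold each node's weight into its outgoing arcs, then run any standard
// edge-weighted path algorithm on the resulting graph.
void foldNodeWeights(std::vector<std::vector<double>>& w,        // w[u][v]: edge weight
                     const std::vector<double>& nodeWeight) {
    const int n = (int)nodeWeight.size();
    for (int u = 0; u < n; ++u)
        for (int v = 0; v < n; ++v)
            if (u != v)
                w[u][v] += nodeWeight[u];   // charge u's weight on every arc leaving u
    // The destination's own weight is not covered by any outgoing arc, so it
    // still has to be added to the final path cost separately.
}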
Related
Imagine you are given a weighted undirected complete graph with n nodes and non-negative weights Cij, where Cii = 0 for i = j and Cij > 0 for i != j. Suppose you have to find the maximal shortest path between any two nodes i and j. Here, you can easily use Floyd-Warshall, or run Dijkstra n times, or whatever, and then just find the maximum among all n^2 shortest paths.
Now assume that Cij are not constant, but rather can take one of two values, Aij or Bij, where 0 <= Aij <= Bij, and again Aii = Bii = 0. Assume you also need to find the maximal shortest path, but under the constraint that m edges must take the value Bij and the others Aij (if m > n^2, then all edges take Bij). However, when evaluating a shortest path i -> p1 -> ... -> pk -> j, you are interested in the worst case: for that fixed sequence of nodes, the edges that take their Bij values are chosen so that the path value is maximal.
For example, take the path i-k-l-j and suppose that, in the optimal solution, only one weight on that path is switched to its B value while the others keep their A values. Let m1 = Bik + Akl + Alj, m2 = Aik + Bkl + Alj, m3 = Aik + Akl + Blj; then the value of that path is max{m1, m2, m3}. So, among all paths between i and j, you have to choose the one whose maximal value (defined as in this example) is minimal (a variant of the definition of shortest path), and you have to do this for all pairs i and j.
You are not given a constraint on how many weights to vary on each individual path, but rather the value m, the number of weights that are varied in the complete graph. The problem is to find the maximum value of the shortest path, as described.
Also, my question is: is this problem NP-hard, or does a polynomial-time solution exist?
I'm trying to come up with a reasonable algorithm for this problem:
Let's say we have a bunch of locations. We know the distance between each pair of locations. Each location also has a point value. The goal is to maximize the sum of the points collected while travelling from a starting location to a destination location without exceeding a given amount of distance.
Here is a simple example:
Starting location: C , Destination: B, Given amount of distance: 45
Solution: C-A-B route with 9 points
I'm just curious if there is some kind of dynamic programming algorithm for this type of problem. What would be the best, or rather the easiest, approach for this problem?
Any help is greatly appreciated.
Edit: You are not allowed to visit the same location many times.
EDIT: Under the newly added restriction that every node can be visited only once, the problem is most definitely NP-hard, via a reduction from Hamiltonian path: for a general undirected, unweighted graph, set all edge weights to zero and every vertex weight to 1. Then the maximum reachable score is n iff there is a Hamiltonian path in the original graph.
So it might be a good idea to look into integer linear programming solvers for instance families that are not constructed specifically to be hard.
The solution below assumes that a vertex can be visited more than once and makes use of the fact that node weights are bounded by a constant.
Let p(x) be the point value for vertex x and w(x,y) be the distance weight of the edge {x,y} or w(x,y) = ∞ if x and y are not adjacent.
If we are allowed to visit a vertex multiple times and if we can assume that p(x) <= C for some constant C, we might get away with the following recurrence: Let f(x,y,P) be the minimum distance we need to get from x to y while collecting P points. We have
f(x,y,P) = ∞ for all P < 0
f(x,x,p(x)) = 0 for all x
f(x,y,P) = min over z of ( w(x,z) + f(z, y, P - p(x)) )
We can compute f using dynamic programming. Now we just need to find the largest P such that
f(start, end, P) <= distance upper bound
This P is the solution.
The complexity of this algorithm with a naive implementation is O(n^4 * C). If the graph is sparse, we can get O(n^2 * m * C) by using adjacency lists for the MIN aggregation.
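Here is a hedged C++ sketch of this DP with the destination fixed, so f[x][P] stands for f(x, end, P). It assumes positive integer point values (so P strictly decreases along the recurrence) and a distance matrix with INF for non-adjacent pairs; maxP (e.g. n·C) and the other names are illustrative.

#include <algorithm>
#include <limits>
#include <vector>

// Returns the largest P whose minimum distance fits the bound, or -1 if none.
int bestScore(const std::vector<std::vector<long long>>& w,   // w[x][z] = distance, INF if non-adjacent
              const std::vector<int>& p,                      // p[x] = positive integer point value
              int start, int end, long long distanceBound, int maxP) {
    const int n = (int)p.size();
    const long long INF = std::numeric_limits<long long>::max() / 4;
    std::vector<std::vector<long long>> f(n, std::vector<long long>(maxP + 1, INF));
    if (p[end] <= maxP) f[end][p[end]] = 0;               // base case: f(end, end, p(end)) = 0

    for (int P = 0; P <= maxP; ++P)                       // increasing P, since p[x] >= 1
        for (int x = 0; x < n; ++x) {
            if (P - p[x] < 0) continue;
            for (int z = 0; z < n; ++z) {                 // first step x -> z
                if (z == x || w[x][z] >= INF) continue;
                if (f[z][P - p[x]] >= INF) continue;
                f[x][P] = std::min(f[x][P], w[x][z] + f[z][P - p[x]]);
            }
        }

    int best = -1;                                        // largest P within the distance bound
    for (int P = 0; P <= maxP; ++P)
        if (f[start][P] <= distanceBound) best = P;
    return best;
}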
We are given a directed graph G (possibly with cycles) with positive edge weights, and the minimum distance D[v] to every vertex v from a source s is also given (D is an array this way).
The problem is to find the array N[v] = number of paths of length D[v] from s to v,
in linear time.
Now this is a homework problem that I've been struggling with for quite long. I was working along the following thought : I'm trying to remove the cycles by suitably choosing an acyclic subgraph of G, and then try to find shortest paths from s to v in the subgraph.
But I cannot figure out explicitly what to do, so I'd appreciate any help, as in a qualitative idea on what to do.
You can use a dynamic programming approach here and fill in the number of paths as you go: whenever D[u] + w(u,v) = D[v], the edge (u,v) lies on a shortest path. Something like:
N = [0,...,0]
N[s] = 1 //empty path
For each vertex v, in *ascending* order of `D[v]`:
    for each edge (u,v) such that D[u] < D[v]:
        if D[u] + w(u,v) = D[v]: //just found new shortest paths, using (u,v)!
            N[v] += N[u]
Complexity is O(VlogV + E); assuming the graph is not sparse, the O(E) term dominates.
Explanation:
If there is a shortest path v0->v1->...->v_(k-1)->v_k from v0 to v_k, then v0->...->v_(k-1) is a shortest path from v0 to v_(k-1). Thus, when iterating v_k, N[v_(k-1)] has already been fully computed (remember, all edges have positive weights, so D[v_(k-1)] < D[v_k], and we iterate by increasing value of D[v]).
Therefore, the path v0->...->v_(k-1) is already counted in N[v_(k-1)] at this point.
Since v0->...->v_(k-1)->v_k is a shortest path, it means D[v_(k-1)] + w(v_(k-1),v_k) = D[v_k], so the condition holds and we add the count of this path to N[v_k].
Note that the proof for this algorithm will basically be induction that will follow the guidelines from this explanation more formally.
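For reference, a C++ sketch of the counting step (illustrative names; it assumes D[] already contains the shortest distances from s):

#include <algorithm>
#include <numeric>
#include <vector>

struct Edge { int to; long long w; };

// Process vertices by ascending D[], and push N[u] across every edge (u,v)
// that lies on some shortest path from s.
std::vector<long long> countShortestPaths(const std::vector<std::vector<Edge>>& adj,
                                          const std::vector<long long>& D, int s) {
    const int n = (int)adj.size();
    std::vector<long long> N(n, 0);
    N[s] = 1;                                             // the empty path

    std::vector<int> order(n);
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return D[a] < D[b]; });

    for (int u : order) {
        if (N[u] == 0) continue;                          // unreachable, contributes nothing
        for (const Edge& e : adj[u])
            if (D[u] + e.w == D[e.to])                    // (u, e.to) is on a shortest path
                N[e.to] += N[u];
    }
    return N;
}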
Suppose we have a DIRECTED, WEIGHTED and CYCLIC graph.
Suppose we are only interested in paths with a total weight of less than MAX_WEIGHT
What is the most appropriate (or any) algorithm to find the number of distinct paths between two nodes A and B that have a total weight of less than MAX_WEIGHT?
P.S: It's not my homework. Just personal, non-commercial project.
If the number of nodes and MAX_WEIGHT aren't too large (and all weights are integers), you can use dynamic programming
unsigned long long int num_of_paths[MAX_WEIGHT+1][num_nodes];
initialize to 0, except num_of_paths[0][start] = 1;.
for (int w = 0; w < MAX_WEIGHT; ++w) {
    for (int n = 0; n < num_nodes; ++n) {
        if (num_of_paths[w][n] > 0) {
            // children[n] is assumed to hold (child, edge weight) pairs for node n
            for (auto [c, wt] : children[n]) {
                if (w + wt <= MAX_WEIGHT)
                    num_of_paths[w + wt][c] += num_of_paths[w][n];
            }
        }
    }
}
solution is sum of num_of_paths[w][target], 0 <= w <= MAX_WEIGHT .
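The surrounding pieces the snippet assumes might look like this (illustrative, not from the original code): the adjacency container and the final summation over all weights.

#include <utility>
#include <vector>

std::vector<std::pair<int,int>> children[num_nodes];    // children[n]: (child, edge weight) pairs

unsigned long long int total = 0;
for (int w = 0; w <= MAX_WEIGHT; ++w)
    total += num_of_paths[w][target];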
Simple recursion. It runs in exponential time. Obviously, no zero-weight cycles are allowed.
function noe(node N, limit weight W)
the number of paths is zero if all outgoing edges have weight > W;
otherwise the number of paths is the sum noe(C1, W-W1) + noe(C2, W-W2) + ... + noe(Cn, W-Wn), where C1 ... Cn are the nodes connected to N for which W-Wi is not negative, and Wi is the weight of the connecting edge. Write it in your favorite language.
A more efficient solution should exist, along the lines of Dijkstra's algorithm, but I think this is enough for homework.
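A rough C++ sketch of that recursion, with the destination passed in explicitly (illustrative names; it also counts reaching the target as one path, which the description above leaves implicit):

#include <vector>

struct Edge { int to; long long w; };

// noe counts walks from n to `target` whose total weight stays within `budget`.
// Reaching the target counts as one path, and the walk may continue past it if
// the budget allows. Assumes positive edge weights, so the recursion terminates;
// exponential time.
long long noe(const std::vector<std::vector<Edge>>& adj, int target, int n, long long budget) {
    long long paths = (n == target) ? 1 : 0;   // the walk that stops here, if we are at the target
    for (const Edge& e : adj[n])
        if (e.w <= budget)                     // skip edges that would exceed the budget
            paths += noe(adj, target, e.to, budget - e.w);
    return paths;
}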
Your problem is a more general case of the k-disjoint paths problem in directed planar graphs, with k not fixed.
The k-disjoint paths problem for directed planar graphs is this:
given: a directed planar graph G = (V, E) and k pairs (r1, s1), ..., (rk, sk) of vertices of G;
find: k pairwise vertex-disjoint directed paths P1, ..., Pk in G, where Pi runs from ri to si (i = 1, ..., k).
Given a k-disjoint paths instance, you can add an arc from every si to B and an arc from A to every ri; this creates a graph G' from G.
Now if you could solve your problem on G' in polynomial time, you could solve k-disjoint paths on G, so P = NP.
But if you read the linked paper, it gives some ideas for general graphs (solving k-disjoint paths with fixed k) that you can use to get a good approximation.
There is also a more complicated algorithm that solves this problem in polynomial time (for fixed k) in general graphs, but it is not easy (it is by Seymour).
So your best choice currently is to use brute-force algorithms.
Edit: Because MAX_WEIGHT is independent of your input size (your graph size), it doesn't change this argument; and because the problem is NP-hard even for undirected unweighted graphs, you can still simply conclude that it is NP-hard.
Find the shortest path through a graph in efficient time, with the additional constraint that the path must contain exactly n nodes.
We have a directed, weighted graph. It may or may not contain a loop. We can easily find the shortest path using Dijkstra's algorithm, but Dijkstra's makes no guarantee about the number of edges.
The best we could come up with was to keep a list of the best n paths to a node, but this uses a huge amount of memory over vanilla Dijkstra's.
It is a simple dynamic programming algorithm.
Let us assume that we want to go from vertex x to vertex y.
Make a table D[.,.], where D[v,k] is the cost of the shortest path from the starting vertex x to the vertex v that uses exactly k nodes.
Initially D[x,1] = 0. Set D[v,1] = infinity for all v != x.
For k = 2 to n:
    For each vertex v:
        D[v,k] = min_u ( D[u,k-1] + wt(u,v) ), where we assume that wt(u,v) is infinite for missing edges.
        P[v,k] = the u that gave us the above minimum.
The length of the shortest path will then be stored in D[y,n].
If we have a graph with fewer edges (sparse graph), we can do this efficiently by only searching over the u that v is connected to. This can be done optimally with an array of adjacency lists.
To recover the shortest path:
Path = empty list
v = y
For k = n downto 2:
    Path.append(v)
    v = P[v,k]
Path.append(x)
Path.reverse()
The last node is y. The node before that is P[y,n]. We can keep following backwards, and we will eventually arrive at P[v,2] = x for some v.
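A C++ sketch of the table filling and traceback described above, assuming wt[u][v] holds a large INF for missing edges (names are illustrative):

#include <algorithm>
#include <limits>
#include <vector>

std::vector<int> pathWithExactlyNNodes(const std::vector<std::vector<long long>>& wt,
                                       int x, int y, int nNodes) {
    const int n = (int)wt.size();
    const long long INF = std::numeric_limits<long long>::max() / 4;
    // D[v][k]: cheapest cost to reach v from x using exactly k nodes; P[v][k]: predecessor.
    std::vector<std::vector<long long>> D(n, std::vector<long long>(nNodes + 1, INF));
    std::vector<std::vector<int>> P(n, std::vector<int>(nNodes + 1, -1));
    D[x][1] = 0;

    for (int k = 2; k <= nNodes; ++k)
        for (int v = 0; v < n; ++v)
            for (int u = 0; u < n; ++u)
                if (D[u][k - 1] < INF && wt[u][v] < INF &&
                    D[u][k - 1] + wt[u][v] < D[v][k]) {
                    D[v][k] = D[u][k - 1] + wt[u][v];
                    P[v][k] = u;
                }

    std::vector<int> path;
    if (D[y][nNodes] >= INF) return path;                 // no such path exists
    int v = y;
    for (int k = nNodes; k >= 2; --k) { path.push_back(v); v = P[v][k]; }
    path.push_back(x);
    std::reverse(path.begin(), path.end());               // x ... y, nNodes entries
    return path;
}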
The alternative that comes to my mind is a depth first search (as opposed to Dijkstra's breadth first search), modified as follows:
stop "depth"-ing if the required vertex count is exceeded
record the shortest found (thus far) path having the correct number of nodes.
Run time may be abysmal, but it should come up with the correct result while using a very reasonable amount of memory.
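A sketch of what that modified DFS might look like in C++ (illustrative names; the cost-based pruning assumes non-negative weights):

#include <algorithm>
#include <limits>
#include <utility>
#include <vector>

// Explore walks from v, stop descending once the required node count is
// reached, and remember the cheapest walk that ends at `target` with that count.
void dfs(const std::vector<std::vector<std::pair<int, long long>>>& adj,
         int v, int target, int nodesUsed, int required,
         long long costSoFar, long long& best) {
    if (nodesUsed == required) {                 // stop "depth"-ing at the node limit
        if (v == target) best = std::min(best, costSoFar);
        return;
    }
    if (costSoFar >= best) return;               // prune; valid if weights are non-negative
    for (const auto& [to, w] : adj[v])
        dfs(adj, to, target, nodesUsed + 1, required, costSoFar + w, best);
}

// Usage: long long best = std::numeric_limits<long long>::max();
//        dfs(adj, start, target, 1, requiredNodeCount, 0, best);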
Interesting problem. Did you discuss using a heuristic graph search (such as A*), adding a penalty for going over or under the node count? This may or may not be admissible, but if it did work, it may be more efficient than keeping a list of all the potential paths.
In fact, you may be able to use backtracking to limit the amount of memory being used for the Dijkstra variation you discussed.
A rough idea of an algorithm:
Let A be the start node, and let S be a set of nodes, each paired with a path. The invariant is that at the end of step n, S will contain all nodes that are exactly n steps from A, and the paths will be the shortest paths of that length. When n is 0, that set is {A (empty path)}. Given such a set at step n - 1, you get to step n by starting with an empty set S1 and
for each (node X, path P) in S:
    for each edge E from X to a node Y:
        if Y is not in S1, add (Y, P + Y) to S1
        if (Y, P1) is already in S1, set its path to the shorter of P1 and P + Y
There are only n steps, and each step takes at most O(max(N, E)) work, which makes the entire algorithm O(n^3) for a dense graph and O(n^2) for a sparse graph.
This algorithm was inspired by Dijkstra's, although it is a different algorithm.
Let's say we want the shortest distance from node x to node y using exactly t steps (edges).
A simple DP solution would be
A[t][x][y] = min over k of { A[1][x][k] + A[t-1][k][y] }
where the intermediate node k varies over 0 to n-1.
// r[i][j]: the edge-weight matrix, BG: a large "infinity" sentinel,
// A[t][i][j]: shortest distance from i to j using exactly t edges,
// p[t][i][j]: the node that follows i on such a path (-1 if none).
for(i=0; i<n; i++) for(j=0; j<n; j++) { A[1][i][j] = r[i][j]; p[1][i][j] = j; }

for(t=2; t<=n; t++)
    for(i=0; i<n; i++) for(j=0; j<n; j++)
    {
        A[t][i][j] = BG; p[t][i][j] = -1;
        for(k=0; k<n; k++)
            if(A[1][i][k] < BG && A[t-1][k][j] < BG)
                if(A[1][i][k] + A[t-1][k][j] < A[t][i][j])
                {
                    A[t][i][j] = A[1][i][k] + A[t-1][k][j];
                    p[t][i][j] = k;
                }
    }
To trace back the path (print the nodes of the t-step path from a to b):
void output(int a, int b, int t)
{
while(t)
{
cout<<a<<" ";
a = p[t][a][b];
t--;
}
cout<<b<<endl;
}