Shortest Path with Contingent Costs

I am looking for an efficient algorithm to solve the following problem:
Given a directed, weighted graph G = (V, E), a source vertex S, a destination vertex T, and a subset M of V, find the shortest path from S to T.
The special feature of the vertices in M is that once such a vertex is 'visited', the weight of a certain edge changes to another value. Both the affected edge and its new weight are given for each vertex in M.
To help in understanding the problem, I have drawn an example using mspaint. (sorry for the quality).
In this example, the 'regular' shortest path from S to T is 1000.
However, visiting the vertex C will reduce the edge weight from 1000 to just 500, so the shortest path in this case is 200+100+500=800.

This problem is NP-hard, and it is clearly in NP. The proof is a relatively straightforward gadget reduction.
This more or less rules out any significant improvement on the trivial, brute-force algorithm for this problem. So what exactly are you hoping for when you say "efficient" here?
===
Proof
It might be that the problem statement is unclear somehow and the version the OP cares about isn't actually NP-complete, so I'll give some details of the proof.
For technical reasons, when we want to show that a search problem is NP-hard, we usually do it for an associated decision problem that is interreducible with the search problem. The decision problem here is: "Given a directed weighted graph as described, with its edge-weight-changing data, and a numeric value V, does the shortest path have value at most V?" Clearly, an algorithm for the search problem solves the decision problem easily. Conversely, an algorithm for the decision problem can be used for the search problem: essentially, binary search determines the optimal value of V to precision greater than that of the input numbers, and then altering the instance by deleting edges and checking whether the optimal value changes determines whether a given edge is on the path. So in the sequel I discuss the decision version of the problem.
The problem is in NP
First to see that it is in NP, we want to see that "yes" instances of the decision problem are certifiable in polynomial time. The certificate here is simply the shortest path. It is easy to see that the shortest path does not take more bits to describe than the graph itself. It is also easy to calculate the value of any particular path, you just go through the steps of the path and check what the value of the next edge was at that time. So the problem is in NP.
The problem is NP-hard
To see that it is NP-hard we reduce from 3SAT to the decision problem. That is the problem of determining the satisfiability of a boolean formula in CNF form in which each clause has at most 3 literals. For a complete definition of 3SAT see here: https://en.wikipedia.org/wiki/Boolean_satisfiability_problem
The reduction which I will describe is a transformation which takes an instance of 3SAT and produces an input to the decision problem, with the property that the 3SAT instance is satisfiable if and only if the shortest path value is less than the specified threshold.
For any given 3SAT formula, the graph which we will produce has the following high-level structure. For each variable, there will be a "cloud" of vertices associated to it which are connected in certain ways, and some of those vertices are in M. The graph is arranged so that the shortest path must pass through each cloud exactly once, passing through the cloud for x1 first, then the cloud for x2, and so on. Then, there is also a (differently arranged) cloud for each clause of the formula. After passing through the last variable's cloud, the path must pass through the clouds of the clauses in succession. Then it reaches the terminal.
The basic idea is that, when going through the cloud for variable xi, there are exactly two possible paths, and the choice represents a commitment to a truth value of xi. All of the edge costs in the variable clouds are the same, so the choice doesn't directly affect the path value. But, since some of the vertices are in M, the choice of path there changes what the costs will be later in the clause clouds. The clause clouds enforce that if the chosen values don't satisfy a clause, then we pay a price.
The variable cloud looks like this:
        *_*_*_*
       /       \
Entry *         * Exit
       \       /
        *_*_*_*
Here, the stars are vertices, the lines are edges, all edges are directed to the right, and all edges have the same cost. We can take the cost to be zero, or, if that's a problem, they could all be one; it doesn't really matter. I have shown 4 vertices on each of the two paths, but the actual number of vertices will depend on the formula, as we will see.
The clause cloud looks like this:
         *
        / \
Entry *---*---* Exit
        \ /
         *
Where, again all edges are directed to the right.
Each of the 3 central vertices is "labelled" (in our minds) and corresponds to one of the three literals in the clause. All of these edges again have cost 0.
The idea is that when I go through a variable cloud and pick a value for that variable, if that choice did not satisfy the literal of some clause, then the cost of the corresponding edge in that clause's cloud goes up. Thus, as long as I actually satisfied at least one literal of the clause, I have a path from the entry to the exit which costs 0. And if every one of the literals was missed by this assignment, then I have to pay something larger than zero, say 100; the exact value doesn't really matter.
Going back to the variable cloud now: the variable cloud for xi has 2m vertices in the middle part, where m is the number of clauses in which xi appears. Then, depending on whether xi appears positively or negatively in the k'th such clause, the k'th vertex of the top or the bottom path is in M and changes the corresponding edge in that clause's cloud to have cost 100 (or whatever fixed value).
The overall graph is made by simply pasting together the variable and clause clouds at their entry/exit nodes in succession. The threshold value is, say, 50.
The point is that, if there is a satisfying assignment to the 3SAT instance, then we can construct from it a path through the graph of cost 0: we never pay anything in the variable clouds, and in each clause cloud we can always pick a path through a satisfied literal and pay nothing there either. If there is no satisfying assignment, then any path must get a clause wrong at some point and then pay at least 100. So if we set the threshold to 50 and make that part of the instance, the reduction is sound.
If the problem statement doesn't actually allow 0-cost edges, we can easily change the construction so that it has only positive-cost edges: the total number of edges in any path from start to finish is the same for every path, so if we add 1 to every edge cost and take the threshold to be 50 + (path length) instead, nothing changes.
That same trick of adding a fixed value to all of the edges and adjusting the threshold can be used to see that the problem is very strongly inapproximable also, as pointed out by David Eisenstat in comments.
Running time implications
If you are economical in how many vertices you assign to each variable cloud, the reduction takes a 3SAT instance with n variables (and hence input length O(n)) to a graph instance with O(n) vertices, and the graph is sparse. (100n vertices should be way more than sufficient.) As a result, an algorithm for the stated problem with running time 2^{o(n)} on sparse graphs with n vertices would imply a 2^{o(n)} algorithm for 3SAT, which would be a major breakthrough in algorithms and would disprove the "Exponential Time Hypothesis" of Impagliazzo and Paturi. So you can't really hope for more than a constant-factor-in-the-exponent improvement over the trivial algorithm for this problem.
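For reference, the trivial algorithm referred to above can be sketched as Dijkstra over an expanded state space of (vertex, set of visited M-vertices) pairs. The representation and all names below are my own, not from the original post:

```python
import heapq
from itertools import count

def contingent_shortest_path(edges, triggers, s, t):
    """Dijkstra over states (vertex, frozenset of visited M-vertices).

    edges:    dict mapping (u, v) to the initial weight of arc u -> v
    triggers: dict mapping each vertex of M to ((a, b), new_weight),
              the arc whose weight changes once that vertex is visited
    Runs in time exponential in |M|, which the hardness argument above
    says is essentially unavoidable."""
    adj = {}
    for (u, v) in edges:
        adj.setdefault(u, []).append(v)
    tick = count()  # tie-breaker so the heap never compares frozensets
    seen0 = frozenset([s]) if s in triggers else frozenset()
    dist = {(s, seen0): 0}
    pq = [(0, next(tick), s, seen0)]
    while pq:
        d, _, u, seen = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get((u, seen), float('inf')):
            continue
        for v in adj.get(u, []):
            w = edges[(u, v)]
            for m in seen:  # apply any triggered weight change to this arc
                arc, new_w = triggers[m]
                if arc == (u, v):
                    w = new_w
            nseen = seen | {v} if v in triggers else seen
            if d + w < dist.get((v, nseen), float('inf')):
                dist[(v, nseen)] = d + w
                heapq.heappush(pq, (d + w, next(tick), v, nseen))
    return float('inf')
```

On an instance shaped like the question's example (a direct S-to-T edge of weight 1000, and a detour through C that drops a later edge from 1000 to 500), this finds the 200 + 100 + 500 = 800 path.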

Related

Finding a "positive cycle"

Suppose E is an n x n matrix where E[i,j] represents the exchange rate from currency i to currency j (how much of currency j can be obtained with 1 unit of currency i). Note that E[i,j] * E[j,i] is not necessarily 1.
Come up with an algorithm to find a positive cycle if one exists, where a positive cycle is defined by: if you start with 1 of currency i, you can keep exchanging currency such that eventually you can come back and have more than 1 of currency i.
I've been thinking about this problem for a long time, but I can't seem to get it. The only thing I can come up with is to represent everything as a directed graph with matrix E as an adjacency matrix, where log(E[i,j]) is the weight between vertices i and j. And then you would look for a cycle with a negative path or something. Does that even make sense? Is there a more efficient/easier way to think of this problem?
First, take logs of exchange rates (this is not strictly necessary, but it means we can talk about "adding lengths" as usual). Then you can apply a slight modification of the Floyd-Warshall algorithm to find the length of a possibly non-simple path (i.e. a path that may loop back on itself several times, and in different places) between every pair of vertices that is at least as long as the longest simple path between them. The only change needed is to flip the sign of the comparison, so that we always look for the longest path (more details below). Finally you can look through all O(n^2) pairs of vertices u and v, taking the sum of the lengths of the 2 paths in each direction (from u to v, and from v to u). If any of these are > 0 then you have found a (possibly non-simple) cycle having overall exchange rate > 1. Overall the FW part of the algorithm dominates, making this O(n^3)-time.
In general, the problem with trying to use an algorithm like FW to find maximum-weight paths is that it might join together 2 subpaths that share one or more vertices, and we usually don't want this. (This can't ever happen when looking for minimum-length paths in a graph with no negative cycles, since such a path would necessarily contain a positive-weight cycle that could be removed, so it would never be chosen as optimal.) This would be a problem if we were looking for the maximum-weight simple cycle; in that case, to get around this we would need to consider a separate subproblem for every subset of vertices, which pushes the time and space complexity up to O(2^n). Fortunately, however, we are only concerned with finding some positive-weight cycle, and it's reasonably easy to see that if the path found by FW happens to use some vertex more than once, then it must contain a nonnegative-weight cycle -- which can either be removed (if it has weight 0), or (if it has weight > 0) is itself a "right answer"!
If you care about finding a simple cycle, this is easy to do in a final step that is linear in the length of the path reported by FW (which, by the way, may be O(2^|V|) -- if all paths have positive length then all "optimal" lengths will double with each outermost iteration -- but that's pretty unlikely to happen here). Take the optimal path pair implied by the result of FW (each path can be calculated in the usual way, by keeping a table of "optimal predecessor" values of k for each vertex pair (i, j)), and simply walk along it, assigning to each vertex you visit the running total of the length so far, until you hit a vertex that you have already visited. At this point, either currentTotal - totalAtAlreadyVisitedVertex > 0, in which case the cycle you just found has positive weight and you're finished, or this difference is 0, in which case you can delete the edges corresponding to this cycle from the path and continue as usual.
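A minimal sketch of the modified Floyd-Warshall approach described above (the function name and the floating-point tolerance are my own choices):

```python
import math

def find_positive_cycle(rates):
    """Return True iff some (possibly non-simple) cycle multiplies
    to an overall exchange rate > 1. rates[i][j] is the rate i -> j."""
    n = len(rates)
    # work with logs so products of rates become sums of lengths
    d = [[0.0 if i == j else math.log(rates[i][j]) for j in range(n)]
         for i in range(n)]
    # Floyd-Warshall with the comparison flipped: keep the LONGEST
    # (possibly non-simple) path length found for every pair
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] > d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # a pair (u, v) with d[u][v] + d[v][u] > 0 closes a positive cycle
    eps = 1e-12  # tolerance for floating-point log arithmetic
    return any(d[u][v] + d[v][u] > eps
               for u in range(n) for v in range(n) if u != v)
```

For example, rates [[1, 2], [0.6, 1]] contain an arbitrage (2 * 0.6 = 1.2 > 1), while [[1, 2], [0.5, 1]] do not (2 * 0.5 = 1).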

Solving a TSP-related task

I have a problem similar to the basic TSP but not quite the same.
I have a starting position for a player character, and he has to pick up n objects in the shortest time possible. He doesn't need to return to the original position and the order in which he picks up the objects does not matter.
In other words, the problem is to find the minimum-weight (distance) Hamiltonian path with a given (fixed) start vertex.
What I have currently, is an algorithm like this:
best_total_weight_so_far = Inf
foreach possible end vertex:
    add a vertex with 0-weight edges to the start and end vertices
    current_solution = solve TSP for this graph
    remove the 0-weight vertex
    total_weight = Weight(current_solution)
    if total_weight < best_total_weight_so_far:
        best_solution = current_solution
        best_total_weight_so_far = total_weight
However this algorithm seems to be somewhat time-consuming, since it has to solve the TSP n-1 times. Is there a better approach to solving the original problem?
It is a rather minor variation of TSP and clearly NP-hard. Any heuristic algorithm for TSP (and you really shouldn't try to do anything better than a heuristic for a game, IMHO) should be easily modifiable to your situation. Even nearest neighbor probably wouldn't be bad; in fact, for your situation it would probably work better than it does for TSP, since in nearest neighbor the return edge is often the worst one. Perhaps you can use NN + 2-opt to eliminate edge crossings.
On edit: Your problem can easily be reduced to the TSP problem for directed graphs. Double all of the existing edges so that each is replaced by a pair of opposite arcs. The cost of each arc is simply the cost of the corresponding edge, except for the arcs that go into the start node; make those cost 0 (no cost in returning at the end of the day). If you have code that solves the TSP for directed graphs, you can thus use it for your case as well.
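A small sketch of this reduction (the matrix helper and the tiny brute-force checker are illustrative, not from the answer):

```python
from itertools import permutations

def fixed_start_path_costs(dist, start):
    """Directed TSP cost matrix for the fixed-start path problem:
    every arc INTO the start node costs 0, so the tour's final
    'return home' leg is free and the tour value equals the path value."""
    n = len(dist)
    cost = [row[:] for row in dist]
    for i in range(n):
        if i != start:
            cost[i][start] = 0
    return cost

def brute_force_atsp(cost, start):
    """Tiny exact directed-TSP solver, only to demonstrate the reduction."""
    n = len(cost)
    rest = [i for i in range(n) if i != start]
    best = float('inf')
    for perm in permutations(rest):
        order = [start, *perm, start]
        best = min(best, sum(cost[order[i]][order[i + 1]] for i in range(n)))
    return best
```

For dist = [[0, 2, 9], [2, 0, 6], [9, 6, 0]] with start 0, the cheapest directed tour is 0 -> 1 -> 2 -> 0 at cost 2 + 6 + 0 = 8, exactly the cost of the best Hamiltonian path starting at 0.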
At the risk of it getting slow (20 points should be fine), you can use the good old exact TSP algorithms in the way John describes. 20 points is really easy for TSP - instances with thousands of points are routinely solved and instances with tens of thousands of points have been solved.
For example, use linear programming and branch & bound.
Make an LP problem with one variable per arc (there are more of them now because the graph is directed). The variables are between 0 and 1, where 0 means "don't take this arc in the solution", 1 means "take it", and fractional values sort of mean "take it .. a bit" (whatever that means).
The costs are obviously the distances, except for returning to the start. See John's answer.
Then you need constraints: for each node, the sum of its incoming arcs is 1, and the sum of its outgoing arcs is 1. Also, the sum of each pair of arcs that was previously one edge must be at most 1. The solution now will typically consist of disconnected triangles, which is the cheapest way to connect the nodes so that each has both an incoming and an outgoing arc that are not two copies of the same edge. So the sub-tours must be eliminated. The simplest way to do that (probably strong enough for 20 points) is to decompose the solution into connected components, and then for each component add the constraint that the sum of the arcs entering it must be at least 1 (it can be more than 1), and likewise for the arcs leaving it. Solve the LP problem again and repeat until there is only one component. There are more cuts you can add, such as the obvious Gomory cuts, but also fancy special TSP cuts (comb cuts, blossom cuts, crown cuts; there are whole books about this), but you won't need any of that for 20 points.
What this gives you is, sometimes, directly the solution. Usually to begin with it will contain fractional edges. In that case it still gives you a good underestimation of how long the tour will be, and you can use that in the framework of branch & bound to determine the actual best tour. The idea there is to pick an edge that was fractional in the result, and pick it either 0 or 1 (this often turns edges that were previously 0/1 fractional, so you have to keep all "chosen edges" fixed in the whole sub-tree in order to guarantee termination). Now you have two sub-problems, solve each recursively. Whenever the estimation from the LP solution becomes longer than the best path you have found so far, you can prune the sub-tree (since it's an underestimation, all integral solutions in this part of the tree can only be even worse). You can initialize the "best so far solution" with a heuristic solution but for 20 points it doesn't really matter, the techniques I described here are already enough to solve 100-point problems.

Minimum spanning tree with two edges tied

I'd like to solve a harder version of the minimum spanning tree problem.
There are N vertices. Also there are 2M edges numbered by 1, 2, .., 2M. The graph is connected, undirected, and weighted. I'd like to choose some edges to make the graph still connected and make the total cost as small as possible. There is one restriction: an edge numbered by 2k and an edge numbered by 2k-1 are tied, so both should be chosen or both should not be chosen. So, if I want to choose edge 3, I must choose edge 4 too.
So, what is the minimum total cost to make the graph connected?
My thoughts:
Let's call two tied edges 2k-1 and 2k an edge set.
Let's call an edge valid if it merges two different components.
Let's call an edge set good if both of the edges are valid.
First add exactly m good edge sets in increasing order of cost. Then iterate over all the remaining edge sets in increasing order of cost, and add a set if at least one of its edges is valid. Iterate m from 0 to M.
Run Kruskal's algorithm with a variation: the cost of an edge e varies.
If an edge set which contains e is good, the cost is: (the cost of the edge set) / 2.
Otherwise, the cost is: (the cost of the edge set).
I cannot prove whether Kruskal's algorithm is still correct when the costs change like this.
Sorry for the poor English, but I'd like to solve this problem. Is it NP-hard or something, or is there a good solution? :D Thanks to you in advance!
As I speculated earlier, this problem is NP-hard. I'm not sure about inapproximability; there's a very simple 2-approximation (split each pair in half, retaining the whole cost for both halves, and run your favorite vanilla MST algorithm).
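The 2-approximation mentioned here can be sketched as follows, assuming each tied pair is given as two endpoint tuples plus the cost of the whole pair (this representation is my own):

```python
def tied_mst_2approx(n, pairs):
    """2-approximation: price each half of a tied pair at the FULL
    pair cost, run vanilla Kruskal on the split edges, and buy every
    pair at least one of whose halves was selected.

    pairs: list of ((u1, v1), (u2, v2), cost) with vertices 0..n-1.
    Returns the total cost of the chosen pairs."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    split = []
    for k, (e1, e2, c) in enumerate(pairs):
        split.append((c, e1, k))
        split.append((c, e2, k))
    split.sort()
    chosen = set()
    for c, (u, v), k in split:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            chosen.add(k)
    return sum(pairs[k][2] for k in chosen)
```

A pair whose two halves are both used by Kruskal is only paid for once, which is why the result costs at most twice the optimum: each selected pair's cost is at most twice what the relaxed MST charged for it.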
Given an algorithm for this problem, we can solve the NP-hard Hamilton cycle problem as follows.
Let G = (V, E) be the instance of Hamilton cycle. Clone all of the vertices, denoting the clone of vi by vi'. We duplicate each edge e = {vi, vj} (making a multigraph; we could do this reduction with simple graphs at the cost of clarity), and, letting v0 be an arbitrary original vertex, we pair one copy with {v0, vi'} and the other with {v0, vj'}.
No MST can use fewer than n pairs, one to connect each cloned vertex to v0. The interesting thing is that the other halves of the pairs of a candidate with n pairs like this can be interpreted as an oriented subgraph of G where each vertex has out-degree 1 (use the index in the cloned bit as the tail). This graph connects the original vertices if and only if it's a Hamilton cycle on them.
There are various ways to apply integer programming. Here's a simple one and a more complicated one. First we formulate a binary variable x_i for each i that is 1 if edge pair 2i-1, 2i is chosen. The problem template looks like
minimize    sum_i w_i x_i        (drop the w_i if the problem is unweighted)
subject to
    <connectivity>
    x_i in {0, 1}   for all i.
Of course I have left out the interesting constraints :). One way to enforce connectivity is to solve this formulation with no constraints at first, then examine the solution. If it's connected, then great -- we're done. Otherwise, find a set of vertices S such that there are no edges between S and its complement, and add a constraint
sum_{i : pair i connects S with its complement} x_i >= 1
and repeat.
Another way is to generate constraints like this inside of the solver working on the linear relaxation of the integer program. Usually MIP libraries have a feature that allows this. The fractional problem has fractional connectivity, however, which means finding min cuts to check feasibility. I would expect this approach to be faster, but I must apologize, as I don't have the energy to describe it in detail.
I'm not sure if it's the best solution, but my first approach would be a search using backtracking:
Of all edge pairs, mark those that could be removed without disconnecting the graph.
Remove one of these pairs and find the optimal solution for the remaining graph.
Put the pair back and remove the next one instead, find the best solution for that.
This works, but it is slow and inelegant. It might be possible to rescue this approach, though, with a few adjustments that avoid unnecessary branches.
Firstly, the set of edge pairs that could still be removed only shrinks as you go deeper. So, in the next recursion, you only need to check the pairs that were in the previous set of possibly removable edge pairs. Also, since the order in which you remove the edge pairs doesn't matter, you shouldn't reconsider any edge pairs that were already considered before.
Then, checking whether two nodes are connected is expensive. If you cache the alternative route for an edge, you can check relatively quickly whether that route still exists. If it doesn't, you have to run the expensive check, because even though that one route ceased to exist, there might still be others.
Then, some more pruning of the tree: Your set of removable edge pairs gives a lower bound to the weight that the optimal solution has. Further, any existing solution gives an upper bound to the optimal solution. If a set of removable edges doesn't even have a chance to find a better solution than the best one you had before, you can stop there and backtrack.
Lastly, be greedy. A regular greedy algorithm will not give you an optimal solution, but it will quickly raise the bar for any solution, making pruning more effective. Therefore, attempt to remove the edge pairs in decreasing order of the weight they would save.

Algorithm for determining largest covered area

I'm looking for an algorithm which I'm sure must have been studied, but I'm not familiar enough with graph theory to even know the right terms to search for.
In the abstract, I'm looking for an algorithm to determine the set of routes between a certain starting vertex and the reachable vertices x1, x2, ..., xn, when each edge has a weight and each route can only have a given maximum total weight x.
In more practical terms, I have a road network with, for each road segment, a length and a maximum travel speed. I need to determine the area that can be reached within a certain time span from any starting point on the network. If I can find the farthest points that are reachable within that time, then I will use a convex hull algorithm to determine the area (this approximates well enough for my use case).
So my question, how do I find those end points? My first intuition was to use Dijkstra's algorithm and stop once I've 'consumed' a certain 'budget' of time, subtracting from that budget on each road segment; but I get stuck when the algorithm should backtrack but has used its budget. Is there a known name for this problem?
If I understood the problem correctly, your initial guess is right. Dijkstra's algorithm, or any other algorithm finding a shortest path from a vertex to all other vertices (like A*) will fit.
In the simplest case you can construct the graph, where weight of edges stands for minimum time required to pass this segment of road. If you have its length and maximum allowed speed, I assume you know it. Run the algorithm from the starting point, pick those vertices with the shortest path less than x. As simple as that.
If you want to optimize things, note that during the work of Dijkstra's algorithm, currently known shortest paths to the vertices are increasing monotonically with each iteration. Which is kind of expected when you deal with graphs with non-negative weights. Now, on each step you are picking an unused vertex with minimum current shortest path. If this path is greater than x, you may stop. There is no chance that you have any vertices with shortest path less than x from now on.
If you need to exactly determine points between vertices, that a vehicle can reach in a given time, it is just a small extension to the above algorithm. As a next step, consider all (u, v) edges, where u can be reached in time x, while v cannot. I.e. if we define shortest path to vertex w as t(w), we have t(u) <= x and t(v) > x. Now use some basic math to interpolate point between u and v with the coefficient (x - t(u)) / (t(v) - t(u)).
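Both steps can be sketched like this (the adjacency-list format and names are my own; the boundary point is interpolated along each outgoing edge, which matches the formula above whenever the shortest path to v goes through u):

```python
import heapq

def reachable_within(adj, source, budget):
    """Dijkstra that stops once the smallest tentative time exceeds
    the budget. adj maps a vertex to a list of (neighbor, travel_time).

    Returns (times, boundary): exact arrival times for the reachable
    vertices, and for each edge that cannot be fully traversed in time
    the fraction of it that can still be covered before time runs out."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    times = {}
    while pq:
        d, u = heapq.heappop(pq)
        if d > budget:      # monotone: nothing cheaper remains in the heap
            break
        if u in times:
            continue
        times[u] = d
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    boundary = []
    for u, t_u in times.items():
        for v, w in adj.get(u, []):
            if t_u + w > budget:
                # fraction of edge (u, v) traversable before time runs out
                boundary.append((u, v, (budget - t_u) / w))
    return times, boundary
```

For example, with adj = {0: [(1, 4), (2, 10)], 1: [(2, 3)]} and a budget of 5, vertices 0 and 1 are reachable, and you can get halfway along the edge from 0 toward 2.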
Using breadth-first search from the starting node seems a good way to solve the problem in O(V+E) time. That is essentially what Dijkstra's algorithm does, but it stops after finding the shortest paths. In your case, however, you must continue collecting routes for your set of routes until no route can be extended while keeping its weight less than or equal to the maximum total weight.
And I don't think there is any backtracking in Dijkstra's algorithm.

Approximation algorithm for TSP variant, fixed start and end anywhere but starting point + multiple visits at each vertex ALLOWED

NOTE: Due to the fact that the trip does not end at the same place it started and also the fact that every point can be visited more than once as long as I still visit all of them, this is not really a TSP variant, but I put it due to lack of a better definition of the problem.
So..
Suppose I am going on a hiking trip with n points of interest. These points are all connected by hiking trails. I have a map showing all trails with their distances, giving me a directed graph.
My problem is how to approximate a tour that starts at a point A and visits all n points of interest, while ending the tour anywhere but the point where I started and I want the tour to be as short as possible.
Due to the nature of hiking, I figured this would sadly not be a symmetric problem (or can I convert my asymmetric graph to a symmetric one?), since going from high to low altitude is obviously easier than the other way around.
Also, I believe it has to be an algorithm that works for non-metric graphs, where the triangle inequality is not satisfied, since going from a to b to c might be faster than taking a really long and weird road from a to c directly. I did consider whether the triangle inequality effectively still holds: since there are no restrictions on how many times I visit each point, as long as I visit all of them, I would always choose the shorter of two distinct routes from a to c and thus never take the long and weird road.
I believe my problem is easier than TSP, so those algorithms do not fit this problem. I thought about using a minimum spanning tree, but I have a hard time convincing myself that they can be applied to a non-metric asymmetric directed graph.
What I really want are some pointers as to how I can come up with an approximation algorithm that will find a near optimal tour through all n points
To reduce your problem to asymmetric TSP, introduce a new node u and add arcs of length L from u to A and from every node except A to u, where L is very large (large enough that no optimal solution revisits u). Delete u from the tour to obtain a path from A to some other node via all the others. Unfortunately, this reduction preserves the objective only additively, which makes approximation guarantees worse by a constant factor.
The target of the reduction Evgeny pointed out is non-metric symmetric TSP, so that reduction is not useful to you, because all known approximations require metric instances. Assuming that the collection of trails forms a planar graph (or is close to one), there is a constant-factor approximation due to Gharan and Saberi, which may unfortunately be rather difficult to implement, and may not give reasonable results in practice. Frieze, Galbiati, and Maffioli give a simple log-factor approximation for general graphs.
If there are a reasonable number of trails, branch and bound might be able to give you an optimal solution. Both G&S and branch and bound require solving the Held-Karp linear program for ATSP, which may be useful in itself for evaluating other approaches. For many symmetric TSP instances that arise in practice, it gives a lower bound on the cost of an optimal solution within 10% of the true value.
You can simplify this problem to a normal TSP problem with n+1 vertices. To do this, take node 'A' and all the points of interest and compute the shortest path between each pair of these points. You can use an all-pairs shortest path algorithm on the original graph, or, if n is significantly smaller than the original graph size, a single-source shortest path algorithm from each of these n+1 vertices. You can also set the length of every path ending at 'A' to some constant larger than any other path length, which allows the trip to end anywhere (this may be needed only for TSP algorithms that find a round trip).
As a result, you get a complete graph, which is metric, but still asymmetric. All you need now is to solve a normal TSP problem on this graph. If you want to convert this asymmetric graph to a symmetric one, Wikipedia explains how to do it.
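A sketch of this preprocessing step (the function name and the choice of the large constant are mine):

```python
def metric_closure_for_path_tsp(w, a):
    """Replace each pairwise distance with the shortest-path distance
    (Floyd-Warshall), yielding a complete, metric, still-asymmetric
    instance; then give every arc INTO the start 'a' one large constant
    cost so the tour is free to end anywhere.

    w: full distance matrix (use float('inf') for missing arcs)."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # any constant larger than every finite path length will do
    big = 1 + sum(x for row in d for x in row if x != float('inf'))
    for i in range(n):
        if i != a:
            d[i][a] = big
    return d
```

Because every arc into 'A' gets the same constant, the 'return' leg of any tour contributes the same amount, so minimizing the tour is the same as minimizing the path that ends anywhere.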