Algorithm for determining largest covered area

I'm looking for an algorithm which I'm sure must have been studied, but I'm not familiar enough with graph theory to even know the right terms to search for.
In the abstract, I'm looking for an algorithm to determine the set of routes between the reachable vertices [x1, x2, ..., xn] and a certain starting vertex, where each edge has a weight and each route can only have a given maximum total weight of x.
In more practical terms, I have a road network, and for each road segment a length and a maximum travel speed. I need to determine the area that can be reached within a certain time span from any starting point on the network. If I can find the farthest points that are reachable within that time, I will use a convex hull algorithm to determine the area (this approximates it well enough for my use case).
So my question is: how do I find those end points? My first intuition was to use Dijkstra's algorithm and stop once I've 'consumed' a certain 'budget' of time, subtracting from that budget on each road segment; but I get stuck when the algorithm should backtrack but has used up its budget. Is there a known name for this problem?

If I understood the problem correctly, your initial guess is right. Dijkstra's algorithm, or any other algorithm that finds shortest paths from one vertex to all other vertices (like A*), will fit.
In the simplest case you can construct a graph where the weight of each edge is the minimum time required to pass that segment of road; since you have its length and maximum allowed speed, I assume you can compute that. Run the algorithm from the starting point and pick the vertices whose shortest path is less than x. As simple as that.
If you want to optimize things, note that during the run of Dijkstra's algorithm, the currently known shortest paths to the vertices increase monotonically with each iteration, which is expected when you deal with graphs with non-negative weights. Now, at each step you pick an unvisited vertex with the minimum current shortest path. If this path is greater than x, you may stop: there is no chance that any vertex will end up with a shortest path less than x from now on.
If you need to determine exactly which points between vertices a vehicle can reach in the given time, it is just a small extension of the above algorithm. As a next step, consider all edges (u, v) where u can be reached in time x while v cannot. I.e., if we define the shortest path to vertex w as t(w), we have t(u) <= x and t(v) > x. Now use some basic math to interpolate the point between u and v with the coefficient (x - t(u)) / (t(v) - t(u)).
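For illustration, here is a minimal sketch of that approach in Python: a budget-limited Dijkstra that stops as soon as the cheapest unvisited vertex exceeds the time budget, followed by the interpolation step described above. The graph format, the function name, and the decision to return fractional positions along edges are my own assumptions, not something from the question.

```python
import heapq

def reachable_within(graph, start, budget):
    """graph: dict mapping vertex -> list of (neighbor, travel_time) pairs.
    Returns (times, boundary): `times` holds the shortest travel time to every
    vertex reachable within `budget`; `boundary` lists fractional cut-off
    points on edges that cross the time limit."""
    times = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > times.get(u, float("inf")):
            continue                  # stale heap entry
        if t > budget:
            break                     # every remaining vertex is farther than the budget
        for v, w in graph.get(u, []):
            if t + w < times.get(v, float("inf")):
                times[v] = t + w
                heapq.heappush(heap, (t + w, v))

    reachable = {u: tu for u, tu in times.items() if tu <= budget}

    # For every edge (u, v) with t(u) <= budget < t(v), interpolate the
    # cut-off point with the coefficient (budget - t(u)) / (t(v) - t(u)).
    boundary = []
    for u, tu in reachable.items():
        for v, w in graph.get(u, []):
            tv = times.get(v, float("inf"))
            if tv > budget:
                boundary.append((u, v, (budget - tu) / (tv - tu)))
    return reachable, boundary
```

The reachable vertices plus the interpolated boundary points can then be fed into the convex hull step mentioned in the question.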

Using breadth-first search from the starting node seems a good way to solve the problem in O(V+E) time complexity. Well, that's what Dijkstra does, but it stops after finding the smallest path. In your case, however, you must continue collecting routes for your set of routes until no route can be extended while keeping its weight less than or equal to the maximum total weight.
And I don't think there is any backtracking in Dijkstra's algorithm.

least cost path, destination unknown

Question
How would one go about finding a least-cost path when the destination is unknown, but the number of edges traversed is a fixed value? Is there a specific name for this problem, or for an algorithm to solve it?
Note that maybe the term "walk" is more appropriate than "path", I'm not sure.
Explanation
Say you have a weighted graph and you start at vertex V1. The goal is to find a path of length N (where N is the number of edges traversed; you can cross the same edge multiple times and revisit vertices) that has the smallest cost. This process would need to be repeated for all possible starting vertices.
As a concrete example, consider a turn-based game where rooms are connected by corridors. Each corridor has a cost associated with it, and your final score is lowered by an amount equal to each cost 'paid'. It takes 1 turn to traverse a corridor, and the game lasts 10 turns. You can stay in a room (a self-loop), but staying put has a cost associated with it too. If you know the cost of all corridors (and of staying put in each room; i.e., you know the weighted graph), what is the optimal (highest-scoring) path to take for a 10-turn (or N-turn) game? You can revisit rooms and corridors.
Possible Approach (likely to fail)
I was originally thinking of using Dijkstra's algorithm to find the least-cost path between all pairs of vertices, and then, for each starting vertex, selecting the LCPs of length N. However, I realized that this might not give the LCP of length N for a given starting vertex. For example, Dijkstra's LCP between V1 and V2 might have length < N, and Dijkstra's might have excluded an unnecessary but low-cost edge which, if included, would have made the path length equal N.
It's an interesting fact that if A is the adjacency matrix and you compute A^k using addition and min in place of the usual multiply and sum used in normal matrix multiplication, then A^k[i,j] is the length of the shortest path from node i to node j with exactly k edges. Now the trick is to use repeated squaring so that A^k needs only log k matrix multiply ops.
If you need the path in addition to the minimum length, you must track where the result of each min operation came from.
For your purposes, you want the location of the min of each row of the result matrix and corresponding path.
This is a good algorithm if the graph is dense. If it's sparse, then doing one breadth-first search per node to depth k will be faster.
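As a hedged illustration of that trick, here is a short (min, +) matrix-power sketch in Python; the function names and the list-of-lists matrix format are assumptions made for the example.

```python
import math

def min_plus_multiply(A, B):
    """(min, +) matrix product: C[i][j] = min over m of A[i][m] + B[m][j]."""
    n = len(A)
    return [[min(A[i][m] + B[m][j] for m in range(n)) for j in range(n)]
            for i in range(n)]

def min_cost_walk_exact_k(adj, k):
    """adj[i][j]: weight of edge i -> j (math.inf if absent; adj[i][i] can hold
    the cost of staying put).  Returns M with M[i][j] = minimum cost of a walk
    from i to j using exactly k edges, via repeated squaring, O(n^3 log k)."""
    n = len(adj)
    # Identity element for (min, +): 0 on the diagonal, inf elsewhere.
    result = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    power = [row[:] for row in adj]
    while k:
        if k & 1:
            result = min_plus_multiply(result, power)
        power = min_plus_multiply(power, power)
        k >>= 1
    return result
```

The best N-edge walk from a start vertex i is then the minimum of row i of the result; recovering the walk itself requires also recording where each min came from, as noted above.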

Can I use Dijkstra's shortest path algorithm in my graph?

I have a directed graph in which all edges are non-negative except the edge(s) that leave the source (S). There are no edges from any other vertices to the source. To find the shortest distance from the source (S) to a vertex (T) in the graph, can I use Dijkstra's shortest path algorithm even though the edges leaving the source are negative?
Assuming only source-adjacent edges can have negative weights, and there is no path back to the source from any of the source-adjacent nodes (as mentioned in the comment), you can just add a constant C to all edges leaving the source to make them all non-negative. Then subtract C from the final result.
On a more general note, Dijkstra can be used to solve shortest-path in any graph with negative edge weights (but no negative cycles) after applying Johnson's reweighting algorithm (which is essentially Bellman-Ford, but needs to be performed only once).
Yes, you can use Dijkstra on that type of directed graph.
If you use an existing Dijkstra implementation that cannot handle negative values, it can be good practice to find the most negative source edge and add its absolute value to all starting edges, so that there are no negative weights at all. You subtract that number after finishing.
If you code it yourself (which is actually pretty easy, and I recommend it), you barely have to change anything: just start with the lowest value (as usual for Dijkstra) and allow that lowest value to be negative. It will work in your case.
The reason you generally can't use Dijkstra's algorithm for (directed) graphs with negative edges is that Dijkstra's algorithm is greedy. It assumes that once you pick a vertex with minimum distance, there is no way it can later be reached by a shorter path.
In your particular graph, all possible negative edges are traversed in the very first step, and Dijkstra's assumption holds from then on. Even though the vertices directly connected to the start now have negative values, once you identify the one with the minimum distance, it can never again be reached with a smaller distance (since all edges you would traverse from this point on have non-negative weights).
If you think about the conditions Dijkstra's algorithm places on the edges for it to work, the only requirement is that the tentative distances never decrease after initialisation.
Thus it doesn't actually matter if the first step is negative: from those vertices onwards the distances only increase, so the correct output will be found (provided there is no way to get back to the starting vertex).
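To make the point concrete, here is a small sketch: a textbook lazy-deletion Dijkstra run unchanged on a graph whose only negative edges leave the source. The graph and its weights are made up purely for the demonstration.

```python
import heapq

def dijkstra(graph, source):
    """graph: dict vertex -> list of (neighbor, weight); lazy-deletion Dijkstra."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Only the edges leaving S are negative and nothing points back into S,
# so the greedy argument still holds and the result is correct.
graph = {
    "S": [("A", -5), ("B", -2)],
    "A": [("C", 4)],
    "B": [("C", 1)],
    "C": [],
}
print(dijkstra(graph, "S"))   # {'S': 0, 'A': -5, 'B': -2, 'C': -1}
```

The same code also works after the add-a-constant trick from the earlier answers, as long as the constant is subtracted from the results afterwards.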

Metric travelling salesman, force an edge into solution

Normally the TSP solution is the one for which the total cost of the edges is minimal.
However, in my case I need a specific edge to be in the solution; it does not matter if the solution is no longer optimal.
It does matter, however, that among all Hamiltonian cycles containing that edge the obtained solution is optimal, or at least bounded.
More formally, the problem is: given a complete metric graph and a specific edge, what is the minimum-cost Hamiltonian cycle passing through that specific edge?
Edit:
Transforming the graph is probably a good idea, but keep in mind the resulting graph must still be metric and complete. A non-complete graph is equivalent to a non-metric one in this case; just think of the missing edge as an overly expensive one.
This is important because there cannot be a polynomial-time algorithm with a bounded approximation ratio for general (non-metric) distances.
If you are curious, the proof of this fact is in "P-complete approximation problems" by S. Sahni and T. Gonzalez (1976).
How about making that edge's cost low enough that no Hamiltonian cycle that contains it could be more costly than a Hamiltonian cycle that does not contain it?
Let S be the sum of all distances in the graph. Add 2*S to every edge's cost, except the fixed one. That way every Hamiltonian cycle that contains the fixed edge will have cost at most (N-1)*2*S+S, and every cycle that does not contain it will have cost at least N*2*S.
The triangle inequality is also preserved, since every triangle (x, y, z) becomes either (x+2*S, y+2*S, z+2*S) or (x, y+2*S, z+2*S).
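If it helps, here is a tiny sketch of that transformation, assuming the instance is given as a symmetric distance matrix; the function name and indexing are illustrative only.

```python
def force_edge(dist, i, j):
    """dist: symmetric distance matrix of a complete metric TSP instance.
    Returns a copy in which 2*S (S = sum of all pairwise distances) has been
    added to every edge except (i, j); any tour through (i, j) then beats any
    tour avoiding it, and the triangle inequality is preserved."""
    n = len(dist)
    S = sum(dist[a][b] for a in range(n) for b in range(a + 1, n))
    shifted = [[dist[a][b] + 2 * S if a != b else dist[a][b] for b in range(n)]
               for a in range(n)]
    shifted[i][j] = dist[i][j]   # leave the fixed edge untouched
    shifted[j][i] = dist[j][i]
    return shifted
```

A tour of the transformed instance that uses the fixed edge contains N-1 shifted edges, so the cost of the original tour is the solver's result minus (N-1)*2*S.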
If X-Y is the edge, you can introduce a new vertex Z such that Z is connected only to X and Y, and remove X-Y. Distance(X,Z) + Distance(Z,Y) = Distance(X,Y).

Correctness of Bellman-Ford Algorithm, can we still do better?

I learned that the Bellman-Ford algorithm has a running time of O(|E|*|V|), where E is the number of edges and V the number of vertices. Assume the graph does not have any negative-weight cycles.
My first question is: how do we prove that within (|V|-1) iterations (where every iteration checks every edge in E), it updates the shortest path to every possible node, given a particular start node? Is it possible that we have iterated (|V|-1) times but still do not end up with shortest paths to every node?
Assuming the correctness of the algorithm, can we actually do better than that? It occurs to me that not all edges are negatively weighted in a particular graph. The Bellman-Ford algorithm seems expensive, as every iteration goes through every edge.
The longest possible shortest path from the source to any vertex involves at most all the other vertices in the graph. In other words, you won't have a shortest path that goes through the same vertex more than once, since that would necessarily increase its weight (this is true only thanks to the fact that there are no negative cycles).
On each iteration you update the shortest-path weight of the next vertex along that path, so after |V|-1 iterations your updates must have reached the end of the path. After that there won't be any vertices with non-tight values, since the updates have covered all shortest paths up to that length.
This bound is tight (at least for Bellman-Ford): think of a long line of connected vertices and pick the leftmost as the source. The updating process has to work its way from there to the other side, one vertex at a time. Now you might argue that you don't have to check each edge that way, so let's throw in a few random edges with a very large weight (N > |V| * max-weight). They can't help you, but your algorithm can't know that for sure, so it has to go through the process of updating the vertices with these weights (they're still better than the initial infinity).
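For completeness, a minimal Bellman-Ford sketch with the usual practical shortcut: stop as soon as a full pass produces no update. The edge-list input format is an assumption for the example.

```python
def bellman_ford(n, edges, source):
    """n: number of vertices 0..n-1; edges: list of (u, v, w) tuples.
    Runs at most n-1 passes but exits early once no distance improves."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        updated = False
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:        # all shortest paths are already tight
            break
    return dist
```

On the adversarial line-graph example above the early exit buys nothing, which is exactly why the O(|V|*|E|) bound is tight.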

What modifications could you make to a graph to allow Dijkstra's algorithm to work on it?

So I've been thinking, without resorting to another algorithm, what modifications could you make to a graph to allow Dijkstra's algorithm to work on it, and still get the correct answer at the end of the day? If it's even possible at all?
I first thought of adding a constant equal to the magnitude of the most negative weight to all weights, but I found that this will mess up everything and change the original single-source shortest paths.
Then I thought of traversing the graph, putting all weights that are less than zero into an array or something of the sort, and then multiplying them by -1. I think this would work (disregarding running time constraints), but maybe I'm looking at it the wrong way.
EDIT:
Another idea: how about permanently setting all negative weights to infinity, thereby ensuring that they are ignored?
So I just want to hear some opinions on this; what do you guys think?
It seems you are looking for something similar to Johnson's algorithm:
First, a new node q is added to the graph, connected by zero-weight edges to each of the other nodes.
Second, the Bellman–Ford algorithm is used, starting from the new vertex q, to find for each vertex v the minimum weight h(v) of a path from q to v. If this step detects a negative cycle, the algorithm is terminated.
Next, the edges of the original graph are reweighted using the values computed by the Bellman–Ford algorithm: an edge from u to v, having length w(u,v), is given the new length w(u,v) + h(u) − h(v).
Finally, q is removed, and Dijkstra's algorithm is used to find the shortest paths from each node s to every other vertex in the reweighted graph.
Whatever algorithm you use, you should check for negative cycles first, and if there is no negative cycle, find the shortest paths.
In your case you only need to run Dijkstra's algorithm once. Also note that in Johnson's algorithm, the Bellman–Ford step is run only from the newly added node (not from all vertices).
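Here is a condensed sketch of that procedure, with the virtual node q handled implicitly by initialising h to zero everywhere; the graph format (dict of adjacency lists) and the function name are assumptions made for the example.

```python
import heapq

def johnson_single_source(graph, source):
    """graph: dict u -> list of (v, w); negative edges allowed, negative cycles not.
    Reweights via Bellman-Ford from a virtual node q, then runs Dijkstra once."""
    nodes = set(graph) | {v for edges in graph.values() for v, _ in edges}

    # Bellman-Ford from q: starting with h = 0 everywhere is equivalent to
    # relaxing the zero-weight q -> v edges first.
    h = {u: 0 for u in nodes}
    for i in range(len(nodes)):
        changed = False
        for u in graph:
            for v, w in graph[u]:
                if h[u] + w < h[v]:
                    h[v] = h[u] + w
                    changed = True
        if not changed:
            break
        if i == len(nodes) - 1:          # still relaxing after the extra pass
            raise ValueError("negative cycle detected")

    # Reweight: w'(u, v) = w(u, v) + h(u) - h(v) is always non-negative.
    reweighted = {u: [(v, w + h[u] - h[v]) for v, w in graph[u]] for u in graph}

    # One Dijkstra run from `source` on the reweighted graph.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in reweighted.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))

    # Undo the reweighting to recover the true distances.
    return {v: d + h[v] - h[source] for v, d in dist.items()}
```

As the answer notes, the Bellman–Ford step runs only once from q; if you need shortest paths from several sources, only the final Dijkstra step has to be repeated.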

Resources