I have a weighted graph G and a pair of nodes s and t. I want to find, of all the paths from s to t with the fewest number of edges, the one that has the lowest total cost. I'm not sure how to do this. Here are my thoughts:
I am thinking of finding the shortest path, and if there is more than one such path, comparing the number of steps of those paths.
I think I can find the number of steps by setting the weights of all edges to 1 and calculating the distance.
A reasonable first guess for a place to start here is Dijkstra's algorithm, which can solve each individual piece of this problem (minimize number of edges, or minimize total length). The challenge is getting it to do both at the same time.
Normally, when talking about shortest paths, we think of paths as having a single cost. However, you could imagine assigning paths two different costs: one cost based purely on the number of edges, and one cost based purely on the weights of those edges. You could then represent the cost of a path as a pair (length, weight), where length is the number of edges in the path and weight is the total weight of all of those edges.
Imagine running Dijkstra's algorithm on a graph with the following modifications. First, instead of tracking a candidate distance to each node in the graph, you track a pair of candidate distances to each node: a candidate length and a candidate weight. Second, whenever you need to fetch the lowest-cost node, pick the node that has the shortest length (not weight). If there's a tie between multiple nodes with the same length, break the tie by choosing the one with the lowest weight. (If you've heard of lexicographic orderings, you can think of this as taking the node whose (length, weight) pair is lexicographically smallest.) Finally, whenever you update a distance by extending a path by one edge, update both the candidate length and the candidate weight of that node. You can show that this process will compute the best path to each node, where "best" means "of all the paths with the minimum number of edges, the one with the lowest cost."
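As a concrete illustration, here is a minimal Python sketch of that pair-based Dijkstra. The graph representation (a dict mapping each node to a list of (neighbor, weight) pairs) is my own assumption, and weights are assumed non-negative:

```python
import heapq

def fewest_edges_cheapest_path(graph, s, t):
    """Dijkstra over (length, weight) pairs compared lexicographically.

    `graph` is assumed to be a dict: node -> list of (neighbor, edge_weight),
    with non-negative weights.  Returns (num_edges, total_weight) of the best
    s-t path, or None if t is unreachable.
    """
    INF = float('inf')
    best = {s: (0, 0)}                       # node -> best (length, weight) found so far
    pq = [(0, 0, s)]                         # heap ordered by (length, weight)
    while pq:
        length, weight, u = heapq.heappop(pq)
        if (length, weight) > best.get(u, (INF, INF)):
            continue                         # stale heap entry
        if u == t:
            return length, weight
        for v, w in graph.get(u, []):
            cand = (length + 1, weight + w)
            if cand < best.get(v, (INF, INF)):   # lexicographic comparison
                best[v] = cand
                heapq.heappush(pq, (cand[0], cand[1], v))
    return None
```

For example, on `{'s': [('a', 2), ('b', 1)], 'a': [('t', 2)], 'b': [('c', 1)], 'c': [('t', 1)], 't': []}` this returns (2, 4) for s to t: the two-edge path through 'a' beats the cheaper three-edge path through 'b' and 'c'.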
You could alternatively implement the above technique by modifying the costs of the edges in the graph. Suppose that the maximum-cost edge in the graph has cost U and the graph has n nodes. Pick a constant C = nU + 1, which is larger than the total cost of any simple path, add C to every edge cost, then run an ordinary Dijkstra's algorithm on the result. The net effect is that the shortest path in this new graph will be the one that minimizes the number of edges used, with ties broken by the original weights. Why? A path with k edges and original weight W now costs kC + W, and since the weight of any simple path is less than C, comparing these new costs is the same as comparing the pairs (k, W) lexicographically: a path with fewer edges always wins, and among paths with the same number of edges the cheaper one wins. In fact, you can prove that this approach is essentially identical to the one above using pairs of weights - it's a good exercise!
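The transformation itself is tiny; here is a sketch under the same assumed dict-of-adjacency-lists format as above (it also returns C so the original path weight can be recovered as shifted_distance - C * edge_count):

```python
def reweight_for_fewest_edges(graph):
    """Add C = n*U + 1 to every edge cost, where U is the largest edge cost and
    n the number of nodes, so that an ordinary Dijkstra on the shifted graph
    minimizes the edge count first and the original weight second.
    Assumes every node appears as a key of `graph`."""
    n = len(graph)
    U = max((w for edges in graph.values() for _, w in edges), default=0)
    C = n * U + 1
    shifted = {u: [(v, w + C) for v, w in edges] for u, edges in graph.items()}
    return shifted, C
```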
Overall, both of these approaches will run in the same time as a normal Dijkstra's algorithm (O(m + n log n) with a Fibonacci heap, O(m log n) with another type of heap), which is pretty cool!
Getting from one node to another is a shortest-path problem (e.g. Dijkstra).
Whether you use a heuristic function to estimate the remaining distance to the goal node depends on your input.
If you have such a heuristic, you might want to choose A* search instead. There you just accumulate the weight to each node and add the heuristic value to it.
If you want to get all paths from any node to any other node, you might consider Kruskal's or Prim's algorithm.
Both do basically the same thing, including pruning.
Related
Let G be a directed weighted graph with nodes colored black or white, and all weights non-negative. No other information is specified--no start or terminal vertex.
I need to find a path (not necessarily simple) of minimal weight which alternates colors at least n times. My first thought is to run Kosaraju's algorithm to get the component graph, then find a minimal path between the components. Then you could select nodes with in-degree equal to zero, since those will have at least as many color alternations as paths which start at components with positive in-degree. However, that also means that you may have an unnecessarily long path.
I've thought about maybe modifying the graph somehow, perhaps by making copies of the graph that black-to-white edges or white-to-black edges point into, or by copying or deleting edges, but nothing that I'm brainstorming seems to work.
The comments mention using Dijkstra's algorithm, and in fact there is a way to make this work. If we create a new "root" vertex in the graph and connect it to every other vertex with a directed edge (of weight 0, say), we can run a modified Dijkstra's algorithm from the root outwards, terminating when a given path's inversion count reaches n. It is important to note that we must allow revisiting each vertex in the implementation, so the key of each vertex in our priority queue will not be merely node_id, but a tuple (node_id, inversion_count), representing that vertex reached with a given number of inversions so far. In doing so, we implicitly make n copies of each vertex, one per possible inversion count. Visually, we are effectively making n copies of our graph, and redirecting the edges between each (black_vertex, white_vertex) pair so that they connect the ith inversion graph to the (i+1)th. We run the algorithm until we reach a path with n inversions. Alternatively, we can connect each vertex on the nth inversion graph to a "sink" vertex and run any conventional path-finding algorithm on this graph, unmodified. This will run in O(n(E + V log(nV))) time. You could optimize this quite heavily, and also consider using A* instead, with smallest_inversion_weight * (n - inversion_count) as a heuristic.
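Here is one way that state-space Dijkstra might look in Python. The input conventions (an edge list, a color map, capping the inversion count at n) are my own assumptions, and the "root" is modeled by seeding every vertex at distance 0:

```python
import heapq

def min_weight_with_alternations(edges, color, n):
    """Dijkstra over (vertex, inversions) states, as sketched above.

    Assumed inputs: `edges` is a list of directed (u, v, w) edges with w >= 0,
    `color` maps each vertex to 'B' or 'W', and we want the cheapest walk whose
    number of color changes across edges is at least n.  Returns the minimum
    total weight, or None if no such walk exists.
    """
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))

    INF = float('inf')
    dist = {(v, 0): 0 for v in color}        # (vertex, inversions) -> best weight found
    # The implicit "root": every vertex is a valid starting point at cost 0.
    pq = [(0, v, 0) for v in color]          # (weight, vertex, inversions so far)
    heapq.heapify(pq)

    while pq:
        d, u, inv = heapq.heappop(pq)
        if d > dist.get((u, inv), INF):
            continue                         # stale queue entry
        if inv >= n:
            return d                         # first settled state with n inversions is optimal
        for v, w in adj.get(u, []):
            ninv = min(n, inv + (1 if color[u] != color[v] else 0))
            nd = d + w
            if nd < dist.get((v, ninv), INF):
                dist[(v, ninv)] = nd
                heapq.heappush(pq, (nd, v, ninv))
    return None
```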
Furthermore, another idea hit me regarding using knowledge of the inversion requirement to speed up the search, but I was unable to find a way to implement it without exceeding O(V^2) time. The idea is that you can use an addition chain (as in binary exponentiation) to decompose the shortest n-inversion path into two smaller paths, and rinse and repeat in a divide-and-conquer fashion. The issue is that you would need to construct tables of the shortest i-inversion path between any two vertices, which would be O(V^2) entries per i, and O(V^2 log n) overall. To construct each table, for every entry in the preceding table you'd need to append V other paths, so it'd be O(V^3 log n) time overall. Maybe someone else will see a way to merge these two ideas into an O((log n)(E + V log(V log n))) time algorithm or something.
Question
How would one go about finding a least-cost path when the destination is unknown, but the number of edges traversed is a fixed value? Is there a specific name for this problem, or for an algorithm to solve it?
Note that maybe the term "walk" is more appropriate than "path", I'm not sure.
Explanation
Say you have a weighted graph and you start at vertex V1. The goal is to find a path of length N (where N is the number of edges traversed; you can cross the same edge multiple times and revisit vertices) that has the smallest cost. This process would need to be repeated for all possible starting vertices.
As an additional illustration, consider a turn-based game where there are rooms connected by corridors. Each corridor has a cost associated with it, and your final score is lowered by an amount equal to each cost 'paid'. It takes 1 turn to traverse a corridor, and the game lasts 10 turns. You can stay in a room (a self-loop), but staying put has a cost associated with it too. If you know the cost of all corridors (and of staying put in each room; i.e., you know the weighted graph), what is the optimal (highest-scoring) path to take for a 10-turn (or N-turn) game? You can revisit rooms and corridors.
Possible Approach (likely to fail)
I was originally thinking of using Dijkstra's algorithm to find the least-cost path between all pairs of vertices, and then, for each starting vertex, taking the subset of LCPs of length N. However, I realized that this might not give the LCP of length N for a given starting vertex. For example, Dijkstra's LCP between V1 and V2 might have length < N, and Dijkstra's might have excluded an unnecessary but low-cost edge which, if included, would have made the path length equal to N.
It's an interesting fact that if A is the adjacency matrix and you compute A^k using addition and min in place of the usual multiply and sum used in normal matrix multiplication, then A^k[i,j] is the minimum total weight of a walk from node i to node j that uses exactly k edges. Now the trick is to use repeated squaring so that computing A^k needs only about log k matrix products.
If you need the path in addition to the minimum length, you must track where the result of each min operation came from.
For your purposes, you want the location of the min of each row of the result matrix and corresponding path.
This is a good algorithm if the graph is dense. If it's sparse, then doing one breadth-first search per node to depth k will be faster.
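Here is a sketch of the (min, +) repeated-squaring idea in Python, using an adjacency matrix with math.inf for missing edges; the argmin bookkeeping needed to recover the actual paths is omitted:

```python
import math

def min_plus_multiply(A, B):
    """(min, +) product of two square matrices; math.inf means "no walk"."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def cheapest_k_edge_walks(A, k):
    """A[i][j] = weight of the edge i -> j (math.inf if absent; put the cost of
    staying put on the diagonal if self-loops are allowed).  Returns the matrix
    whose (i, j) entry is the minimum weight of a walk from i to j using
    exactly k edges, computed with O(log k) (min, +) products."""
    n = len(A)
    # (min, +) identity: 0 on the diagonal, inf elsewhere (the 0-edge walks).
    result = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    power = [row[:] for row in A]
    while k > 0:
        if k & 1:
            result = min_plus_multiply(result, power)
        power = min_plus_multiply(power, power)
        k >>= 1
    return result
```

For the question above, row i of cheapest_k_edge_walks(A, N) gives the cheapest N-edge walk from vertex i to every possible destination, and the minimum of that row answers the unknown-destination variant.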
I have a weighted directed graph with negative and positive weights. I want to find a tree, rooted at a given node of the graph, that minimizes the total cost of its arcs.
Note that covering all the nodes is not important. I want to minimize the cost of the branches/arcs, so it's not an MDST.
What is the known name for this problem?
I want to find an integer programming formulation to make the implementation easier.
Edit: To clarify further, given a root, I need to generate a tree that minimizes the cost of the arcs in that tree... In other words, I need to find a tree of paths that minimizes the sum of the arc costs. In the example I gave, the path doesn't go to the upper-right corner node because it costs 100 in both possible paths, and this would increase my path value (which I want to minimize).
Analogy: think of a person on an island. On that island there are multiple paths (arcs) that lead to various treasures (negative numbers), but on some paths there are traps (positive numbers) that cost us some of the treasure. I want to find a path that accumulates the maximum treasure possible.
Keep in mind that we can't avoid all traps: imagine a path where we lose 100 coins but that path is connected to another that gives us 10000 coins.
It's like the minimum spanning tree problem, but in this case I have negative numbers too, the graph is directed, and I don't need to cover all nodes in the solution.
I think that you want to find the minimum-weight path from the given root to the other nodes. For a graph without negative weights this can be solved with Dijkstra's algorithm, and for a graph with negative weights it can be solved with the Bellman-Ford algorithm. I think this can help you find the answer.
I learned that the Bellman-Ford algorithm has a running time of O(|E|*|V|), in which |E| is the number of edges and |V| the number of vertices. Assume the graph does not have any negative-weight cycles.
My first question is: how do we prove that within (|V|-1) iterations (where every iteration checks every edge in E), it updates the shortest path to every possible node, given a particular start node? Is it possible that we have iterated (|V|-1) times but still do not end up with shortest paths to every node?
Assuming the correctness of the algorithm, can we actually do better than that? It occurs to me that not all edges are negatively weighted in a particular graph. The Bellman-Ford algorithm seems expensive, as every iteration goes through every edge.
The longest possible shortest path from the source to any vertex involves at most all the other vertices in the graph. In other words, you never need a path that goes through the same vertex more than once, since dropping the repeated cycle can never increase the weight (this is true only because there are no negative cycles).
On each iteration you update the shortest-path estimate of the next vertex along such a path, so after |V|-1 iterations your updates must have reached the end of that path. After that there won't be any vertices with non-tight values, since the passes have covered all shortest paths, none of which is longer than that.
This complexity is tight (at least for Bellman-Ford): think of a long line of connected vertices and pick the leftmost as the source. Your updating process has to work its way from there to the other side one vertex at a time. Now you might argue that you don't have to check each edge that way, so let's throw in a few random edges with a very large weight (N > |V| * max-weight): they can't help you, but your algorithm can't know that for sure, so it has to go through the process of updating the vertices with these weights (they're still better than the initial infinity).
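For reference, here is a minimal Bellman-Ford sketch with the standard early-exit check, which stops as soon as a full pass over the edges relaxes nothing; the vertex labels 0..n-1 and the edge-list input are my own assumptions:

```python
import math

def bellman_ford(n, edges, source):
    """Textbook Bellman-Ford: n vertices 0..n-1, `edges` a list of (u, v, w).
    Assumes no negative cycles.  Stops early once a full pass relaxes nothing,
    the practical counterpart of the |V|-1 bound argued above."""
    dist = [math.inf] * n
    dist[source] = 0
    for _ in range(n - 1):                   # at most |V|-1 passes are ever needed
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                      # all shortest-path values are already tight
            break
    return dist
```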
Suppose that the number of edges of a connected graph is known and the weight of each edge is distinct. Would it be possible to create a minimal spanning tree in linear time?
To do this we must look at each edge, and during this loop we cannot perform any searches, otherwise it would take at least n log n time. I'm not sure how to do this without searching in the loop. It would mean that, somehow, we must only look at each edge once and decide whether to include it or not based on some "static" previously computed values that do not involve a growing data structure.
So... let's say we keep the endpoints of the edge in question, then look at the next edge; if the next edge has the same endpoints as the previous one, compare the weights of the previous and current edges and keep the lower one. If the current edge's endpoints are not equal to the previous one's, then it is in a different component... Now I am stuck, because we cannot create a hash or array to keep track of the component's nodes that have already been added while looking through each edge in linear time.
Another approach I thought of is to find the edge with the minimal weight; since the edge weights are distinct, this edge will be part of any MST. Then... I am stuck, since we cannot repeat this for n - 1 edges in linear time.
Any hints?
EDIT
What if we know the number of nodes, the number of edges, and also that each edge weight is distinct? Say, for example, there are n nodes and n + 6 edges.
Then we would only have to find and remove the correct 7 edges, right?
To the best of my knowledge there is no way to compute an MST faster just by knowing how many edges there are in the graph and that their weights are distinct. In the worst case, you would have to look at every edge in the graph before even finding the minimum-cost edge (which must be in the MST), so any MST algorithm must take Ω(m) time in the worst case.
However, since we're already doing Ω(m) work in the worst case, we could apply the following preprocessing step to any MST algorithm:
Scan over the edges and count up how many there are.
Add an epsilon value to each edge weight to ensure the edges are unique.
This can be done in O(m) time as well. Consequently, if there were a way to speed up MST computation by knowing the number of edges and that the edge costs are distinct, we could just apply this preprocessing step to any current MST algorithm to get faster performance. Since, to the best of my knowledge, no MST algorithm actually does this for performance reasons, I suspect that there isn't a (known) way to get a faster MST algorithm from this extra knowledge.
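As a hedged sketch of that preprocessing (assuming an edge-list input), one common way to realize the "make the weights distinct" step in linear time without a literal floating-point epsilon is to break ties by edge index and compare (weight, index) pairs lexicographically:

```python
def preprocess(edges):
    """Linear-time preprocessing sketch: count the edges, then make the weights
    distinct by breaking ties with the edge index.  Comparing the (weight, index)
    pairs lexicographically preserves every comparison between edges of unequal
    weight, so any MST of the modified graph is an MST of the original.
    `edges` is assumed to be a list of (u, v, w) tuples."""
    m = len(edges)                                                    # step 1: count the edges
    distinct = [(u, v, (w, i)) for i, (u, v, w) in enumerate(edges)]  # step 2: distinct "weights"
    return m, distinct
```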
Hope this helps!
There's a famous randomized algorithm for minimum spanning trees whose expected running time is linear in the number of edges. See "A randomized linear-time algorithm to find minimum spanning trees" by Karger, Klein, and Tarjan.
The key result in the paper is their "sampling lemma": if you independently select each edge with probability p and find the minimum spanning forest of the sampled subgraph, then, in expectation, only |V|/p of the remaining edges are lighter than the heaviest edge on the forest path connecting their endpoints.
As templatetypedef noted, you can't beat linear time. That all edge weights are distinct is a common assumption that simplifies analysis; if anything, enforcing it makes MST algorithms run a little slower.
The fact that the number of edges (N) is known does not influence the complexity in any way. N is still a finite but unbounded variable, and each graph will have a different N. If you place an upper bound on N, say 1 million, then the complexity is O(1 million log 1 million) = O(1).
The fact that each edge has a distinct weight does not influence the algorithm either, because it says nothing about the graph's structure. Therefore knowledge about the current case cannot influence further processing, as we cannot predict what the graph's structure will look like in the next step.
If the number of edges is close to n, as in this case with n + 6 edges (after the edit), we know that we only need to remove 7 of them, since every spanning tree has exactly n - 1 edges.
The cycle property says that the most expensive edge in a cycle does not belong to any minimum spanning tree (assuming all edge weights are distinct) and thus should be removed.
Now you can simply apply BFS or DFS to identify a cycle and remove its most expensive edge. Overall, we need to run this search 7 times, which takes 7*n time and gives a time complexity of O(n). Again, this only works when the number of edges is close to the number of nodes.
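A sketch of that cycle-deletion loop (my own illustration, using DFS so the cycle is easy to recover; vertices are assumed to be labeled 0..n-1 and edges given as (u, v, w) tuples):

```python
import sys

def mst_with_few_extra_edges(n, edges):
    """Cycle-deletion sketch for a connected undirected graph on vertices
    0..n-1 whose edge count m is only a constant above n-1, all weights
    distinct.  Repeat m - (n-1) times: find one cycle by DFS and delete its
    heaviest edge (cycle property).  Each round costs O(n), so the whole
    procedure is O(n) when m - (n-1) is a constant (7 rounds for m = n + 6)."""
    sys.setrecursionlimit(max(10_000, 2 * n))
    current = list(edges)                      # remaining edges as (u, v, w) tuples

    def find_cycle(edge_list):
        adj = {v: [] for v in range(n)}
        for e in edge_list:
            u, v, _ = e
            adj[u].append((v, e))
            adj[v].append((u, e))
        parent = {0: None}                     # vertex -> edge used to reach it in the DFS tree

        def dfs(u):
            for v, e in adj[u]:
                if e is parent[u]:
                    continue                   # don't walk straight back along the tree edge
                if v in parent:                # back edge: closes a cycle with the tree path u..v
                    cycle, x = [e], u
                    while x != v:
                        cycle.append(parent[x])
                        a, b, _ = parent[x]
                        x = b if a == x else a # step to the other endpoint of the parent edge
                    return cycle
                parent[v] = e
                found = dfs(v)
                if found:
                    return found
            return None

        return dfs(0)

    while len(current) > n - 1:                # 7 iterations when m = n + 6
        cycle = find_cycle(current)
        current.remove(max(cycle, key=lambda e: e[2]))   # cycle property: drop the heaviest edge
    return current                             # the n - 1 MST edges
```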