Given a weighted directed graph with n vertices where edge weights are integers (positive, zero, or negative), determining whether there are paths of arbitrarily large weight can be performed in time:
O(n)
O(n log n) but not O(n)
O(n^1.5) but not O(n log n)
O(n^3) but not O(n^1.5)
O(2^n) but not O(n^3)
I don't understand which algorithm to use, since finding the longest path is an NP-hard problem. The answer given, though, is O(n^3).
Briefly: negate all the edge weights and then run the Floyd-Warshall algorithm; a positive-weight cycle in the original graph shows up as a negative cycle in the negated one. It takes O(n^3).
As mentioned here,
The graphs must be acyclic, otherwise paths can have arbitrarily large weights. We can find longest paths just by negating all of the edge weights, and then using a shortest path algorithm. Unfortunately, Dijkstra's algorithm does not work if edges are allowed to have negative weights. However, the Floyd-Warshall algorithm does work, as long as there are no negative weight cycles, and so it can be used to find longest weight paths in acyclic graphs.
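The answer above can be sketched in a few lines (a minimal sketch, assuming an adjacency-matrix input where `None` marks a missing edge; the function name is illustrative): negate every weight, run Floyd-Warshall, and report "paths of arbitrarily large weight" exactly when some diagonal entry goes negative, i.e. when the negated graph contains a negative cycle, which is a positive cycle in the original.

```python
INF = float("inf")

def has_unbounded_path(w):
    """w[i][j] is the weight of edge i->j, or None if there is no edge."""
    n = len(w)
    # Distance matrix over the NEGATED weights.
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for i in range(n):
        for j in range(n):
            if w[i][j] is not None:
                dist[i][j] = min(dist[i][j], -w[i][j])
    # Standard Floyd-Warshall triple loop: O(n^3).
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # A negative cycle in the negated graph (dist[i][i] < 0) is a
    # positive cycle in the original, so path weights are unbounded.
    return any(dist[i][i] < 0 for i in range(n))
```

For example, a two-vertex graph with edges 0→1 of weight 2 and 1→0 of weight 1 has a cycle of total weight 3, so weights are unbounded; a DAG never is.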
Related
Given a dense graph (dense in the sense of Ore's theorem: the sum of the degrees of any two non-adjacent vertices is at least N, where N is the total number of vertices), we can find a Hamiltonian cycle in such a graph using Palmer's algorithm in O(n^2) time. My question is: can we do better than this, in terms of time complexity?
Assume that we have an undirected graph G=(V,E) and we have two nodes S and X.
Can we come up with an algorithm that finds a path from S to X on which the largest edge weight is minimized? Note that this is not the shortest path problem, since we are not interested in minimizing the sum of the edge weights.
What is the complexity of this algorithm?
Is a minimum spanning tree algorithm (such as Prim's) a solution to this problem?
I don't know if minimum spanning tree will solve it or not, but it's certainly solvable by making some modifications to Dijkstra's algorithm.
Define the "length" of a path as the maximum edge in that path. Now find the shortest path from S to X using Dijkstra's algorithm. This is the path you are looking for.
Complexity is O((N+M)log N) if you use a binary heap and O(N * log N + M) with a Fibonacci heap.
Note that for this new definition of path length, if the length of a path is l, then appending an edge to the end of the path cannot decrease its length, since the maximum edge on that path can only increase. This property is necessary for Dijkstra's algorithm to work correctly.
For instance, if you instead defined the length of a path as its minimum edge, Dijkstra's algorithm could fail, just as it fails when the graph contains negative edges.
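The modification described above can be sketched as follows (assuming an adjacency-list representation `graph[u] = [(v, weight), ...]`; the function name is illustrative). The only change to textbook Dijkstra is that the relaxation uses `max(d, w)` instead of `d + w`:

```python
import heapq

def minimax_path(graph, s, x):
    # best[v] = smallest possible "maximum edge" over all paths s -> v.
    best = {s: 0}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == x:
            return d
        if d > best.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = max(d, w)  # path "length" = maximum edge on the path
            if nd < best.get(v, float("inf")):
                best[v] = nd
                heapq.heappush(heap, (nd, v))
    return None  # x is unreachable from s
```

On the graph with edges 0-1 (weight 5), 0-2 (weight 1), 2-1 (weight 2), the path 0→2→1 has maximum edge 2, which beats the direct edge of weight 5.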
You can use a minimum spanning tree (Prim's algorithm) to solve this problem: start from vertex S and keep growing the tree with Prim's algorithm until you reach X. The complexity is O((V+E) log V).
It works because Prim's algorithm always chooses the minimum-weight edge crossing the cut first, so the tree path from S to X never uses a heavier edge than necessary.
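A minimal sketch of this early-stopping Prim variant, under the same adjacency-list assumption (each undirected edge listed in both directions; names are illustrative): when a vertex v joins the tree via an edge of weight w from u, the largest edge on its tree path from S is max(bottleneck of u, w), and by the cut property that tree path is a minimax path.

```python
import heapq

def prim_bottleneck(graph, s, x):
    in_tree = set()
    # Heap entries: (edge_weight, vertex, bottleneck if added now).
    # Prim's selection rule: pop the lightest edge crossing the cut.
    heap = [(0, s, 0)]
    while heap:
        w, v, b = heapq.heappop(heap)
        if v in in_tree:
            continue
        in_tree.add(v)
        if v == x:
            return b  # largest edge on the tree path from s to x
        for u, wu in graph.get(v, []):
            if u not in in_tree:
                heapq.heappush(heap, (wu, u, max(b, wu)))
    return None  # x is unreachable from s
```

On the triangle with edges 0-1 (weight 5), 0-2 (weight 1), 1-2 (weight 2), the tree reaches X = 1 through vertex 2, giving bottleneck 2, the same result as the modified Dijkstra above.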
Johnson's algorithm uses Bellman-Ford to convert a graph with negative edge weights (but no negative cycles) into a graph with the same shortest paths and all edge weights non-negative, in O(mn) time.
Suppose we are given a DAG. How could we use an alternative method to convert the DAG into another graph with the same shortest paths, but in linear time instead of the O(mn) above?
I'm assuming we could modify the Bellman-Ford step of Johnson's algorithm, but I'm unsure how to make it linear. Essentially, how can we reweight all edges of the graph to be non-negative in linear O(n + m) time?
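For a DAG, the Bellman-Ford step can indeed be replaced by a single pass in topological order: compute Johnson's potentials h(v) as shortest distances from an implicit super-source (zero-weight edge to every vertex), then reweight each edge as w'(u,v) = w(u,v) + h(u) - h(v), which is non-negative and preserves shortest paths. A minimal sketch (names are illustrative):

```python
from collections import deque

def reweight_dag(n, edges):
    """edges: list of (u, v, w) with 0 <= u, v < n; the graph must be a DAG."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    # Step 1: topological order via Kahn's algorithm -- O(n + m).
    queue = deque(u for u in range(n) if indeg[u] == 0)
    topo = []
    while queue:
        u = queue.popleft()
        topo.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # Step 2: h[v] = shortest distance from an implicit super-source with a
    # zero-weight edge to every vertex, relaxed in topological order --
    # one O(n + m) pass instead of Bellman-Ford's O(nm).
    h = [0] * n
    for u in topo:
        for v, w in adj[u]:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    # Step 3: Johnson's reweighting; by the triangle inequality on h,
    # every new weight is >= 0, and shortest paths are unchanged.
    return [(u, v, w + h[u] - h[v]) for u, v, w in edges]
```

For example, on the DAG with edges (0,1,-2), (1,2,3), (0,2,-1), the potentials are h = [0, -2, -1] and every reweighted edge comes out non-negative.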
I have a graph and I want to walk through it (not necessarily through all vertices), always taking the edge with the greatest weight. I cannot go through the same vertex twice, and I stop when there are no more moves I can make. What is the complexity? I assume it's O(n) (where n is the number of vertices), but I'm not sure.
If you can't go through the same vertex twice, your upper bound on edge traversals is n.
It's easy to think of examples where that bound is tight (a single chain of connected vertices, for example).
However, keep in mind that complexity is defined for a given algorithm, not for a general task. You haven't described your algorithm or how your graph is represented, so the question as stated doesn't have a single answer.
If, for example, the graph is a clique, then picking the highest-weight edge at each step could itself take n computation steps (if the edges are kept in an unsorted list at each vertex), making the naive algorithm O(n^2) in this case. Other representations may have different complexities, but require different algorithms.
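The naive greedy walk described above can be sketched like this (assuming unsorted adjacency lists `graph[u] = [(v, weight), ...]`; names are illustrative). Each step scans up to n candidate edges and the walk takes up to n steps, which is the O(n^2) bound mentioned for a clique:

```python
def greedy_walk(graph, start):
    visited = {start}
    path = [start]
    u = start
    while True:
        # Linear scan of u's (unsorted) adjacency list: O(deg(u)) per step.
        candidates = [(w, v) for v, w in graph.get(u, []) if v not in visited]
        if not candidates:
            return path  # no more moves
        _, u = max(candidates)  # take the heaviest available edge
        visited.add(u)
        path.append(u)
```

On the triangle 0-1 (weight 1), 0-2 (weight 5), 1-2 (weight 3), starting at 0 the walk takes the weight-5 edge to 2, then the weight-3 edge to 1, and stops.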
EDIT
If you're asking about finding the path with the greatest overall weight (which may require you, at some steps, to pick an edge that doesn't have the highest weight), then the problem is NP-hard. If it had a polynomial algorithm, you could take an unweighted graph and find its longest path (a known NP-hard problem, as jimifiki pointed out) by solving it with that algorithm.
From Longest Path Problem
This problem is called the Longest Path Problem, and is NP-complete.
For an undirected, unweighted graph, is there any difference between the time complexity of computing its average shortest path length and the complexity of computing its diameter, i.e., the longest shortest path between two vertices?
According to Wikipedia, to calculate the diameter of a graph you should first find the all-pairs shortest paths. After that, both problems reduce to an O(V^2) pass over the results, so their complexities are the same.
I haven't read much literature on the subject but I suspect that the two are the same. However if there is a discrepancy, I'd say calculating the diameter of a graph might be asymptotically faster.
My algorithm for both would be to compute all-pairs shortest paths using Dijkstra's algorithm, which runs in O(V(E + V log V)). For the mean, you would then take the arithmetic average over all these values. I don't see a way to speed this up.
However, for calculating the diameter there might be some clever tricks to speed up the process. But as an upper bound, you can simply take the maximum over the all-pairs shortest path distances to get the diameter, which has the same run-time complexity as calculating the mean shortest path length.
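Since the question is about unweighted graphs, a BFS from every vertex already gives all-pairs shortest paths in O(V(V+E)), and both quantities are then a single pass over the same distance table. A minimal sketch for a connected undirected graph (adjacency-list assumption; names are illustrative):

```python
from collections import deque

def bfs_dists(graph, s):
    # Single-source shortest path lengths by BFS (unit edge weights).
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def mean_and_diameter(graph):
    # One BFS per vertex: O(V * (V + E)) for all pairs; the mean and
    # the maximum (diameter) are both computed from the same values.
    lengths = []
    for s in graph:
        d = bfs_dists(graph, s)
        lengths.extend(d[v] for v in d if v != s)
    return sum(lengths) / len(lengths), max(lengths)
```

On the path graph 0-1-2, the six ordered pairwise distances are 1, 2, 1, 1, 2, 1, so the mean is 8/6 and the diameter is 2.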
No, there should not be any difference in the time complexity between the two.
You can find the longest shortest path (the diameter) between two vertices by tweaking the same all-pairs shortest path algorithm.