Dijkstra's algorithm to compute the shortest paths from a given source vertex s in O((V+E) log W) time

G = (V, E) is a weighted, directed graph with a non-negative weight function w : E -> {0, 1, 2, ..., W}, where W is a non-negative integer. I want to modify Dijkstra's algorithm to compute the shortest paths from a given source vertex s in O((V+E) log W) time.

Let's take a standard Dijkstra's algorithm with a priority queue and change it a little bit:
We'll have a map from an int (a distance) to a vector that keeps all vertices with that distance.
At each iteration we'll pop an element from the first (smallest-key) vector to get the current vertex, and delete the key once its vector becomes empty.
We'll push an element onto the corresponding vector when we add it to the queue.
There are at most W + 1 different keys in the map at any point. Why? Let v be the current vertex and d[v] its distance. All keys less than d[v] have already been deleted. Any undiscovered vertex is not in the map. For any discovered vertex u, d[u] <= d[v] + max_edge_length = d[v] + W. Thus, any key in the map must lie in the range [d[v], d[v] + W].
The size of the map is O(W), so every map operation works in O(log W) time (pushing/popping an element from a vector is O(1)).
Of course, it's useful only as long as W is relatively small (otherwise, you can go with a standard O((V + E) log V) implementation).
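For concreteness, here is a minimal C++ sketch of this map-of-buckets variant, assuming an adjacency list adj where adj[u] holds (neighbor, weight) pairs with weights in {0, ..., W}; the names adj, dist and buckets are illustrative, not from the question:

#include <vector>
#include <map>
#include <limits>
#include <utility>

std::vector<long long> dijkstra_small_weights(
        const std::vector<std::vector<std::pair<int, int>>>& adj, int s) {
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(adj.size(), INF);
    dist[s] = 0;
    // Map from tentative distance to the vertices queued with that distance;
    // as argued above, it holds at most W + 1 distinct keys at any time.
    std::map<long long, std::vector<int>> buckets;
    buckets[0].push_back(s);
    while (!buckets.empty()) {
        auto it = buckets.begin();               // smallest tentative distance
        long long d = it->first;
        int u = it->second.back();
        it->second.pop_back();
        if (it->second.empty()) buckets.erase(it);
        if (d != dist[u]) continue;              // stale entry, skip it
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                buckets[dist[v]].push_back(v);   // O(log W) insertion
            }
        }
    }
    return dist;
}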

Related

Find the N highest-cost vertices that have a path to S, where S is a vertex in an undirected graph G

I would like to know the most efficient way (w.r.t. space and time) to solve the following problem:
Given an undirected graph G = (V, E), a positive number N and a vertex S in V, and assuming that every vertex in V has a cost value, find the N highest-cost vertices that are connected to S.
For example:
G = (V, E)
V = {v1, v2, v3, v4},
E = {(v1, v2),
(v1, v3),
(v2, v4),
(v3, v4)}
v1 cost = 1
v2 cost = 2
v3 cost = 3
v4 cost = 4
N = 2, S = v1
result: {v3, v4}
This problem can be solved easily by a graph traversal algorithm (e.g., BFS or DFS). To find the vertices connected to S, we can run either BFS or DFS starting from S. As BFS and DFS have the same time and space complexity (time: O(V+E), space: O(V+E) including the adjacency list), here I am going to show the pseudocode using DFS:
Parameter Definition:
* G -> Graph
* S -> Starting node
* N -> Number of connected (highest cost) vertices to find
* Cost -> Array of size V, contains the vertex cost value
procedure DFS-traversal(G, S, N, Cost):
    let St be a stack
    let Q be a min-priority-queue of <cost, vertex-id> pairs
    let discovered be an array of size V used to mark already visited vertices
    St.push(S)
    // Comment: if you do not want to consider the case "S is connected to S",
    // you can comment out the following line
    Q.push(make-pair(Cost[S], S))
    label S as discovered
    while St is not empty:
        v = St.pop()
        for all edges (v, w) in G.adjacentEdges(v):
            if w is not labeled as discovered:
                label w as discovered
                St.push(w)
                Q.push(make-pair(Cost[w], w))
                if Q.size() == N + 1:
                    Q.pop()    // discard the current minimum, keeping the N largest costs
    let ret be an array of size at most N
    while Q is not empty:
        ret.append(Q.top().second)
        Q.pop()
    return ret
Let's first describe the process. Here, I run the iterative version of DFS to traverse the graph starting from S. During the traversal, I use a min-priority-queue to keep the N highest-cost vertices that are reachable from S. Instead of the priority queue, we could use a simple array (or even reuse the discovered array) to record the reachable vertices together with their costs, and sort it at the end.
Analysis of space-complexity:
To store the graph (adjacency list): O(V+E)
Priority-queue: O(N)
Stack: O(V)
For labeling discovered: O(V)
So the overall space complexity is O(V+E).
Analysis of time-complexity:
DFS-traversal: O(V+E)
To track N highest cost vertices:
By maintaining priority-queue: O(V*logN)
Or, alternatively, by sorting a simple array of the reachable vertices at the end: O(V*logV)
The overall time-complexity would be: O(V*logN + E) or O(V*logV + E)
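For concreteness, a compact C++ sketch of the same DFS-plus-bounded-min-heap idea, assuming an adjacency list adj and a cost array indexed by vertex id (the function and variable names are illustrative):

#include <vector>
#include <queue>
#include <stack>
#include <algorithm>
#include <functional>
#include <utility>

// Returns up to N vertex ids reachable from S (S itself included) with the highest costs.
std::vector<int> top_n_connected(const std::vector<std::vector<int>>& adj,
                                 const std::vector<int>& cost, int S, int N) {
    std::vector<bool> discovered(adj.size(), false);
    // Min-heap of (cost, vertex) pairs; its size is capped at N.
    std::priority_queue<std::pair<int, int>,
                        std::vector<std::pair<int, int>>,
                        std::greater<>> Q;
    std::stack<int> St;
    St.push(S);
    discovered[S] = true;
    Q.push({cost[S], S});          // drop this line if S should not count itself
    while (!St.empty()) {
        int v = St.top();
        St.pop();
        for (int w : adj[v]) {
            if (!discovered[w]) {
                discovered[w] = true;
                St.push(w);
                Q.push({cost[w], w});
                if ((int)Q.size() == N + 1) Q.pop();   // drop the cheapest
            }
        }
    }
    std::vector<int> ret;
    while (!Q.empty()) {
        ret.push_back(Q.top().second);
        Q.pop();
    }
    std::reverse(ret.begin(), ret.end());   // highest cost first (optional)
    return ret;
}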

Very hard and elegant question on shortest path

Given a weighted, connected, directed graph G = (V, E) with n vertices and m edges, and given a pre-calculated n*n shortest-path distance matrix S, where S(i, j) denotes the weight of the shortest path from vertex i to vertex j.
We know that the weight of just one edge (u, v) has changed (increased or decreased).
For two specific vertices s and t, we want to update the shortest-path length between them.
This can be done in O(1).
How is this possible? What is the trick behind this answer?
You certainly can for decreases. I assume S will always refer to the old distances. Let l be the new weight of the edge (u, v). Check whether
S(s, u) + l + S(v, t) < S(s, t)
If so, the left-hand side is the new optimal distance between s and t; otherwise the old distance S(s, t) is still optimal, since a decrease can only improve paths that use (u, v).
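As a minimal sketch of this check (assuming the old all-pairs matrix S is stored as a 2-D array; the names are illustrative):

#include <vector>
#include <algorithm>

// Returns the updated shortest distance from s to t after the weight of the
// edge (u, v) has been *decreased* to l, given the old distance matrix S.
long long updated_dist_after_decrease(const std::vector<std::vector<long long>>& S,
                                      int s, int t, int u, int v, long long l) {
    long long via_edge = S[s][u] + l + S[v][t];   // best path forced through (u, v)
    return std::min(S[s][t], via_edge);           // a decrease can only help
}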
Increases are impossible. Consider a graph in which s and t are connected by many parallel paths, each containing exactly one weighted edge, with all other edges having zero weight.
Suppose m is the minimum edge weight among these paths, excluding the edge (u, v), which used to be lower. Now we update (u, v) to some weight l > m. To find the new optimum length we must find m.
Suppose we could do this in O(1) time. Then we could find the minimum of any array in O(1) time by feeding it into this algorithm: add the edge (u, v) with weight -BIGNUMBER and then 'update' it to +BIGNUMBER (we can construct the distance matrix lazily, because all distances are either 0, infinity, or just the edge weights). That is clearly not possible, so we can't solve this problem in O(1) either.

Pairs of vertices in graph having specific distance

Given a tree with N vertices and a positive number K. Find the number of distinct pairs of the vertices which have a distance of exactly K between them. Note that pairs (v, u) and (u, v) are considered to be the same pair (1 ≤ N ≤ 50000, 1 ≤ K ≤ 500).
I am not able to find an optimal solution for this. I can do a BFS from each vertex and count the number of vertices reachable from it with distance less than or equal to K, but in the worst case the complexity of that is on the order of N^2. Is there any faster way?
You can achieve that in a simpler way.
Run a DFS on the tree and, for each vertex, calculate its distance from the root; save these in an array (O(1) access).
For each pair of vertices in your graph:
Find their LCA (lowest common ancestor; there are algorithms that answer this in O(1) after preprocessing).
Assume u and v are two arbitrary vertices and w is their LCA. Subtract the distance from w to the root from the distance from u to the root; now you have the distance between u and w. Do the same for v, so in O(1) you get the distances (v, w) and (u, w). Sum them together and you get the (v, u) distance; now all you have to do is compare it to K.
The final complexity is O(n^2).
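A short C++ sketch of this pairwise check; depth[] and parent[] are assumed to be filled by a DFS from the root, and the lca() here is a simple parent-climbing version (the answer assumes a faster LCA structure, which could be swapped in without changing the counting loop):

#include <vector>

struct TreeDist {
    std::vector<int> depth, parent;   // filled by a DFS; parent[root] = root

    int lca(int u, int v) const {
        while (depth[u] > depth[v]) u = parent[u];
        while (depth[v] > depth[u]) v = parent[v];
        while (u != v) { u = parent[u]; v = parent[v]; }
        return u;
    }

    int dist(int u, int v) const {
        int w = lca(u, v);
        return (depth[u] - depth[w]) + (depth[v] - depth[w]);
    }
};

long long count_pairs_at_distance(const TreeDist& t, int n, int K) {
    long long count = 0;
    for (int u = 0; u < n; ++u)
        for (int v = u + 1; v < n; ++v)
            if (t.dist(u, v) == K) ++count;   // O(n^2) pairs checked overall
    return count;
}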
Improving upon the other answer's approach, we can make some key observations.
To calculate the distance between two nodes you need their LCA (lowest common ancestor) and their depths, as the other answer does. The formula used here is:
Dist(u, v) = depth[u] + depth[v] - 2 * depth[ lca(u, v) ]
depth[x] denotes the distance of x from the root, precomputed once using a DFS starting from the root node.
Now here comes the key observation: you already have the distance value K. Assume that dist(u, v) = K, and use this assumption to calculate (predict) the depth of the LCA. Putting K into the formula above, we get:
depth[ lca(u, v) ] = (depth[u] + depth[v] - K) / 2
Now that you have the depth of the LCA, you know that the distance between u and the LCA is depth[u] - depth[ lca(u, v) ], and between v and the LCA it is depth[v] - depth[ lca(u, v) ]; let these be X and Y respectively.
Since the LCA is the lowest common ancestor, the Xth ancestor of u and the Yth ancestor of v should be the LCA. So if the Xth ancestor of u and the Yth ancestor of v are indeed the same node, we can say that our assumption about the distance was true and the distance between the two nodes is K.
You can calculate the Xth and Yth ancestors of the nodes in O(log N) time using the binary lifting technique, with O(N log N) preprocessing; this preprocessing can be folded directly into your DFS when a node is visited for the first time.
Corner Cases:
The calculated depth of the LCA must not be fractional or negative.
If the depth of either node u or v equals the calculated LCA depth, then that node is itself the ancestor of the other node.
Consider this tree: a root whose child is node 1, node 1's child is node 2, and u and v are the two children of node 2 (so depth[u] = depth[v] = 3).
Assuming K = 4, we get depth[lca] = 1 using the formula above, and taking the Xth and Yth ancestors of u and v we reach the same node, node 1, which would seem to validate our assumption. But this is not true, since the distance between u and v is actually 2. That is because the LCA in this case is actually node 2; to handle this case, also calculate the (X-1)th and (Y-1)th ancestors of u and v, respectively, and check that they are different.
Final complexity: O(N log N) preprocessing, plus O(log N) per pair checked.
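A sketch of the per-pair check using binary lifting, assuming depth[x] and up[j][x] (the 2^j-th ancestor of x, or -1 if it does not exist) have been precomputed during the DFS; LOG, up and depth are illustrative names:

#include <vector>

const int LOG = 17;                 // enough for N up to ~10^5
std::vector<int> depth;             // depth[x], filled by the DFS (root has depth 0)
std::vector<std::vector<int>> up;   // up[j][x] = 2^j-th ancestor of x, or -1

int kth_ancestor(int x, int k) {
    for (int j = 0; j < LOG && x != -1; ++j)
        if (k & (1 << j)) x = up[j][x];
    return x;
}

// Checks whether dist(u, v) == K using the predicted-LCA idea above.
bool pair_has_distance_K(int u, int v, int K) {
    int sum = depth[u] + depth[v] - K;
    if (sum < 0 || sum % 2 != 0) return false;    // LCA depth must be a non-negative integer
    int d = sum / 2;                              // predicted depth of the LCA
    if (d > depth[u] || d > depth[v]) return false;
    int X = depth[u] - d, Y = depth[v] - d;
    int a = kth_ancestor(u, X), b = kth_ancestor(v, Y);
    if (a == -1 || a != b) return false;          // predicted LCA must coincide
    // Corner case: the true LCA must not be deeper than predicted.
    if (X > 0 && Y > 0 && kth_ancestor(u, X - 1) == kth_ancestor(v, Y - 1)) return false;
    return true;
}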

Maximum weighted path between two vertices in a directed acyclic Graph

I'd love some guidance on this problem:
G is a directed acyclic graph. You want to move from vertex c to vertex z. Some edges reduce your profit and some increase your profit. How do you get from c to z while maximizing your profit? What is the time complexity?
Thanks!
The problem has optimal substructure. To find the longest path from vertex c to vertex z, we first need to find the longest path from c to all the predecessors of z. Each of these is a smaller subproblem (the longest path from c to a specific predecessor).
Let's denote the predecessors of z as u1, u2, ..., uk and dist[z] as the longest path from c to z; then dist[z] = max over i of (dist[ui] + w(ui, z)).
Here is an illustration with 3 predecessors, omitting the edge weights:
So to find the longest path to z, we first find the longest path to each of its predecessors and take the maximum of (their values plus their edge weights to z).
This requires that whenever we visit a vertex u, all of u's predecessors have already been analyzed and computed.
So the question is: for any vertex u, how do we make sure that once we set dist[u], it will never be changed later on? Put another way: how do we make sure that we have considered all paths from c to u before considering any edge originating at u?
Since the graph is acyclic, we can guarantee this condition by computing a topological sort of the graph. A topological sort is like a chain of vertices where all edges point from left to right. So if we are at vertex vi, then we have already considered all paths leading to vi and have the final value of dist[vi].
The time complexity: topological sort takes O(V+E). In the worst case, where z is a leaf and all other vertices point to it, we will visit all the graph's edges, which gives O(V+E).
Let f(u) be the maximum profit you can get going from c to u in your DAG. Then you want to compute f(z). This can be easily computed in linear time using dynamic programming/topological sorting.
Initialize f(u) = -infinity for every u other than c, and f(c) = 0. Then compute the values of f in some topological order of your DAG. Because the order is topological, for every incoming edge (v, u) of the node u being computed, f(v) has already been calculated, so just pick the maximum possible value for this node: f(u) = max(f(v) + cost(v, u)) over all incoming edges (v, u).
It's better to use topological sorting instead of Bellman-Ford, since the graph is a DAG.
Source: http://www.utdallas.edu/~sizheng/CS4349.d/l-notes.d/L17.pdf
EDIT:
G is a DAG with negative edges.
Some edges reduce your profit and some increase your profit: edges that increase profit have a positive weight, and edges that decrease profit have a negative weight.
After the topological sort (TS), for each vertex u in TS order, relax each outgoing edge.
dist[] = {-INF, -INF, ...}
dist[c] = 0                        // source
for every vertex u in topological order:
    if (u == z) break              // dest vertex; later vertices cannot affect dist[z]
    if (dist[u] == -INF) continue  // u is not reachable from c, skip it
    for every adjacent vertex v of u:
        if (dist[v] < dist[u] + weight(u, v))   // "<" for longest path = max profit
            dist[v] = dist[u] + weight(u, v)
ans = dist[z]
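A self-contained C++ sketch of this topological-order relaxation, assuming the DAG is given as an adjacency list of (to, weight) pairs where weights may be negative (names are illustrative):

#include <vector>
#include <queue>
#include <limits>
#include <utility>

// Longest (maximum-profit) path value from c to z in a DAG.
// Returns a very negative sentinel if z is unreachable from c.
long long max_profit_path(const std::vector<std::vector<std::pair<int, long long>>>& adj,
                          int c, int z) {
    const long long NEG_INF = std::numeric_limits<long long>::min() / 2;
    int n = adj.size();
    // Kahn's algorithm for a topological order.
    std::vector<int> indeg(n, 0), order;
    for (int u = 0; u < n; ++u)
        for (const auto& e : adj[u]) indeg[e.first]++;
    std::queue<int> q;
    for (int u = 0; u < n; ++u)
        if (indeg[u] == 0) q.push(u);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        order.push_back(u);
        for (const auto& e : adj[u])
            if (--indeg[e.first] == 0) q.push(e.first);
    }
    // Relax edges in topological order.
    std::vector<long long> dist(n, NEG_INF);
    dist[c] = 0;
    for (int u : order) {
        if (dist[u] == NEG_INF) continue;          // u is not reachable from c
        for (const auto& [v, w] : adj[u])
            if (dist[u] + w > dist[v]) dist[v] = dist[u] + w;
    }
    return dist[z];
}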

Counting the sum of every node's neighbors' degrees?

For each node u in an undirected graph, let twodegree[u] be the sum of the degrees of u's neighbors. Show how to compute the entire array of twodegree[.] values in linear time, given a graph in adjacency list format.
This is the solution
for all u ∈ V:
    degree[u] = 0
for all (u, w) ∈ E:
    degree[u] = degree[u] + 1
for all u ∈ V:
    twodegree[u] = 0
for all (u, w) ∈ E:
    twodegree[u] = twodegree[u] + degree[w]
Can someone explain what degree[u] does in this case, and how twodegree[u] = twodegree[u] + degree[w] ends up being the sum of the degrees of u's neighbors?
Here, degree[u] is the degree of the node u (that is, the number of nodes adjacent to it). You can see this computed by the first loop, which iterates over every edge (u, w) in the graph and increments degree[u] for each one; since each undirected edge appears once in each direction in the adjacency-list format, both endpoints get counted.
The second loop then iterates over every node in the graph and computes the sum of all its neighbors' degrees. It uses the fact that degree[u] is precomputed in order to run in O(m + n) time.
Hope this helps!
In addition to what templatetypedef has said, the statement twodegree[u] = twodegree[u] + degree[w] simply accumulates the twodegree of u by iteratively adding the degrees of its neighbors w.
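A short C++ sketch of the same two passes over an adjacency list (the names adj, degree and twodegree mirror the pseudocode):

#include <vector>

// Computes degree[] and twodegree[] in O(n + m) for an undirected graph
// given in adjacency-list format.
void compute_twodegree(const std::vector<std::vector<int>>& adj,
                       std::vector<int>& degree, std::vector<int>& twodegree) {
    int n = adj.size();
    degree.assign(n, 0);
    twodegree.assign(n, 0);
    for (int u = 0; u < n; ++u)
        degree[u] = adj[u].size();          // first pass: one increment per edge endpoint
    for (int u = 0; u < n; ++u)
        for (int w : adj[u])
            twodegree[u] += degree[w];      // second pass: sum of neighbors' degrees
}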

Resources