In a directed graph with V nodes and E edges, the Bellman-Ford algorithm relaxes every vertex (or rather, the edges going out of every vertex) (V - 1) times. This is because the shortest path from the source to any other node contains at most (V - 1) edges. In the V-th iteration, if an edge can be relaxed, it indicates the presence of a negative cycle.
Now, I need to find the other nodes "ruined" by this negative cycle: nodes that are not on the cycle themselves, but whose distance from the source becomes negative infinity because some node on a path from the source to them lies on a negative cycle.
One way to accomplish this is to run Bellman-Ford and take note of the nodes on negative cycles. Then, run DFS/BFS from these nodes to mark other nodes.
However, why can't we run Bellman-Ford for 2 * (V - 1) iterations to detect such nodes without resorting to DFS/BFS? If my understanding is right, relaxing every vertex 2 * (V - 1) times should allow the negative cycles to "propagate" their values to all other connected nodes.
Additional Details: I encountered this situation when solving this online problem: https://open.kattis.com/problems/shortestpath3
The Java code that I used (along with BFS/DFS that is not shown here) is as follows:
// Relax all vertices n - 1 times,
// then relax one more time to find negative cycles.
for (int vv = 1; vv <= n; vv++) {
    // Relax the edges out of each vertex
    for (int v = 0; v < n; v++) {
        if (distTo[v] != (int) 1e9) {
            // For each outgoing edge (v, dest)
            for (int i = 0; i < adjList[v].size(); i++) {
                int dest = adjList[v].get(i).fst;
                int wt = adjList[v].get(i).snd;
                if (distTo[v] + wt < distTo[dest]) {
                    distTo[dest] = distTo[v] + wt;
                    if (vv == n) {
                        isInfinite[v] = true;
                        isInfinite[dest] = true;
                    }
                }
            }
        }
    }
}
Consider a graph with N=4, M=5:
A -> B weight 1000
A -> C weight 1000
C -> D weight -1
D -> C weight -1
D -> B weight 1000
Let A be our source and B be the destination.
Now obviously there is a negative cycle (C <-> D). But whether we run the algorithm N times or 2N times or even 3N times, the shortest path from A to B stays 1000. Since the negative cycle only reduces the distance by a small amount each time it is traversed, it does not propagate to the other nodes as we would expect.
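A small runnable sketch of that point (node labels A=0, B=1, C=2, D=3 are my own): plain relaxation rounds never push the cycle's effect past the expensive D -> B edge, no matter how many rounds we run.

```java
import java.util.*;

public class PropagationDemo {
    // Runs plain Bellman-Ford relaxation rounds on the A,B,C,D graph above
    // (A=0, B=1, C=2, D=3) and reports dist(B) after the given number of rounds.
    static long distToB(int rounds) {
        int[][] edges = {{0, 1, 1000}, {0, 2, 1000}, {2, 3, -1}, {3, 2, -1}, {3, 1, 1000}};
        long[] dist = new long[4];
        Arrays.fill(dist, Long.MAX_VALUE / 4); // "infinity" sentinel, safe to add to
        dist[0] = 0;
        for (int r = 0; r < rounds; r++)
            for (int[] e : edges)
                if (dist[e[0]] + e[2] < dist[e[1]])
                    dist[e[1]] = dist[e[0]] + e[2];
        return dist[1];
    }
}
```

The cycle lowers dist(D) by only 2 per round, so dist(D) + 1000 never undercuts the direct 1000 within any fixed number of rounds.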
A solution would be to mark the distance as negative infinity once a cycle affecting a node is identified. That way the negative cycle "takes precedence" over other shortest paths through other nodes.
Yours sincerely,
A fellow coder who has spent lots of time on this problem.
In the classical situation, all nodes "on" a negative-length cycle have an arbitrarily small distance from the source.
So in every iteration after the (V - 1)-th, the distance from the source to such a node keeps shrinking.
The task requires you to return -infinity for all such nodes.
You could use a modified version of the Bellman-Ford algorithm that marks the distance of all such nodes as -infinity and run it another (V - 1) times to propagate the -infinity to all other nodes connected to the cycle. But this takes a lot of extra time compared to just running DFS or BFS from the nodes on the cycle.
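A minimal sketch of the DFS/BFS approach (class and method names are my own): run the standard V - 1 relaxation rounds, use a V-th round only to detect still-relaxable nodes, then flood -infinity to everything reachable from them.

```java
import java.util.*;

public class NegInfBellmanFord {
    static final long INF = Long.MAX_VALUE / 4;
    static final long NEG_INF = Long.MIN_VALUE / 4;

    // edges[i] = {from, to, weight}; returns dist[] with NEG_INF for "ruined" nodes
    static long[] shortestPaths(int n, int[][] edges, int src) {
        long[] dist = new long[n];
        Arrays.fill(dist, INF);
        dist[src] = 0;
        // Standard V - 1 relaxation rounds
        for (int round = 0; round < n - 1; round++)
            for (int[] e : edges)
                if (dist[e[0]] < INF && dist[e[0]] + e[2] < dist[e[1]])
                    dist[e[1]] = dist[e[0]] + e[2];
        // V-th round: any node still relaxable sits on or behind a negative cycle
        Deque<Integer> queue = new ArrayDeque<>();
        boolean[] ruined = new boolean[n];
        for (int[] e : edges)
            if (dist[e[0]] < INF && dist[e[0]] + e[2] < dist[e[1]] && !ruined[e[1]]) {
                ruined[e[1]] = true;
                queue.add(e[1]);
            }
        // BFS: -infinity propagates to everything reachable from a marked node
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) adj.get(e[0]).add(e[1]);
        while (!queue.isEmpty()) {
            int u = queue.poll();
            dist[u] = NEG_INF;
            for (int w : adj.get(u))
                if (!ruined[w]) { ruined[w] = true; queue.add(w); }
        }
        return dist;
    }
}
```

On the A, B, C, D example above (A=0, B=1, C=2, D=3), B ends up at -infinity because it is reachable from the C <-> D cycle, even though no relaxation ever drove its distance below 1000.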
How can we count the number of node-disjoint paths between two nodes such that the distance between them is at most K?
Details about node-disjoint paths can be found here.
We are given a directed graph in which we have to count the number of node-disjoint paths from vertex u to v such that each path has at most K - 2 nodes between the endpoints (u and v are subtracted from the K, hence K - 2). The number of vertices in the graph can be up to 10^5 and the number of edges up to 6 * 10^5. I thought of running BFS from every node for as long as the distance from the source node is less than K, but I am not getting an idea for the implementation. Can anybody please help me?
If anybody has an idea of how to solve it using DFS, please share it.
DFS is the key to solving such problems. We can easily enumerate all the possible paths between 2 vertices using DFS, but we have to take care of the distance constraint.
My algorithm treats the number of edges traversed as the constraint. You can easily convert it to the number of nodes traversed; take that as an exercise.
We keep track of the number of edges traversed in a variable e. If e becomes greater than K - 2, we terminate that recursive DFS call.
To record which vertices have been visited, we keep a boolean array visited. If a recursive call terminates without finding a successful path, we discard any changes it made to the array visited.
Only if a recursive DFS call succeeds in finding a path do we retain the visited array for the rest of the program.
So the pseudocode for the algorithm would be:
main function()
{
    visited[source] = 1
    e = 0          // edges traversed so far
    sum = 0        // the answer
    found = false  // found a path
    dfs(source, e)
    print sum
    .
    .
    .
}

dfs(v, e)
{
    if (e > max_distance)
    {
        return
    }
    if (e <= max_distance and v == destination)
    {
        found = true
        sum++
        return
    }
    for all unvisited neighbouring vertices X of v
    {
        if (found and v != source)
            return
        if (found and v == source)
        {
            found = false
            visited[destination] = 0
        }
        visited[X] = 1
        dfs(X, e + 1)
        if (!found)
            visited[X] = 0   // discard the marks of a failed attempt
    }
}
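Here is a runnable Java sketch of the greedy idea above (names and structure are mine). Like the pseudocode, it permanently keeps the interior marks of each successful path; note that this greedy choice is not guaranteed to maximize the number of disjoint paths in every graph.

```java
import java.util.*;

public class DisjointPaths {
    // Counts paths from src to dst of at most maxEdges edges, greedily keeping
    // each found path's interior vertices marked so later paths avoid them.
    static int countPaths(List<List<Integer>> adj, int src, int dst, int maxEdges) {
        boolean[] visited = new boolean[adj.size()];
        visited[src] = true;
        visited[dst] = true; // endpoints are shared by all paths
        int count = 0;
        // a direct src -> dst edge forms one path by itself (counted at most once)
        if (maxEdges >= 1 && adj.get(src).contains(dst)) count++;
        while (dfs(adj, visited, src, dst, src, 0, maxEdges)) count++;
        return count;
    }

    static boolean dfs(List<List<Integer>> adj, boolean[] visited,
                       int src, int dst, int v, int e, int maxEdges) {
        if (e >= maxEdges) return false;
        for (int x : adj.get(v)) {
            if (x == dst) {
                if (v != src) return true; // path found; keep its interior marks
                continue;                  // the direct edge was already counted
            }
            if (visited[x]) continue;
            visited[x] = true;
            if (dfs(adj, visited, src, dst, x, e + 1, maxEdges)) return true;
            visited[x] = false;            // failed attempt: discard its marks
        }
        return false;
    }
}
```

Each successful call permanently marks at least one interior vertex, so the outer loop terminates after at most |V| successes.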
Given an undirected weighted graph G(V, E) and any three vertices u, v and w, find a vertex x of G such that dist(u, x) + dist(v, x) + dist(w, x) is minimum.
x can be any vertex in G (u, v and w included). Is there any particular algorithm for this problem?
You can do it with a stack-based algorithm like the pseudo-code below:
node FindNeigh(node node1, node node2, int graphsize)
{
    byte[graphsize] isGraphProcessed; // all zeroes
    stack nodes1, nodes2;             // empty stacks
    nodes1.push(node1);
    nodes2.push(node2);
    while (!nodes1.empty() && !nodes2.empty())
    {
        stack tmp = empty;
        for (node : nodes1)
            for (neigh : node.neighbors)
                if (!isGraphProcessed[neigh.id])
                {
                    tmp.push(neigh);
                    isGraphProcessed[neigh.id] = 1; // flag for node1's side
                }
                else if (isGraphProcessed[neigh.id] == 2) // node2's flag is set
                    return neigh;
        nodes1 = tmp;
        tmp = empty;
        for (node : nodes2)
            for (neigh : node.neighbors)
                if (!isGraphProcessed[neigh.id])
                {
                    tmp.push(neigh);
                    isGraphProcessed[neigh.id] = 2; // flag for node2's side
                }
                else if (isGraphProcessed[neigh.id] == 1) // node1's flag is set
                    return neigh;
        nodes2 = tmp;
    }
    return NULL; // no meeting node exists
}
How does it work
You start the search from both nodes at once.
You collect each side's unprocessed neighbors into a frontier.
If a neighbor has already been added to the other node's frontier, it has already been reached from the other side, so it is the closest meeting node and we return it.
If nothing is found, we do the same thing with the neighbors of the neighbors (and so on) until something is found.
If node2 cannot be reached from node1, the function returns NULL.
Note: this algorithm finds the node minimizing the number of hops to 2 vertices; it ignores edge weights. If you want to do it for 3 vertices, you can add a third frontier and look for the first node carrying all 3 flags (e.g. 1, 2 and 4).
Hope it helps :)
If the graph is not too large and there are no negative-cost cycles, the Floyd-Warshall algorithm works. It runs in O(|V|^3) time, and after it completes we have the entire shortest-distance matrix, so the shortest distance between any two vertices is available in O(1). Then just scan for the vertex x that gives the least sum of distances to the given vertices (this works for any number k of query vertices, not just three).
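A sketch of that approach in Java (names are mine; the input is assumed to be a symmetric adjacency matrix with a large sentinel for missing edges and 0 on the diagonal):

```java
public class MeetingVertex {
    static final long INF = Long.MAX_VALUE / 4; // safe to add two of these

    // Floyd-Warshall all-pairs shortest paths, then scan for the vertex x
    // minimizing dist(u,x) + dist(v,x) + dist(w,x). O(|V|^3) time overall.
    static int bestMeetingPoint(long[][] dist, int u, int v, int w) {
        int n = dist.length;
        long[][] d = new long[n][];
        for (int i = 0; i < n; i++) d[i] = dist[i].clone();
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (d[i][k] + d[k][j] < d[i][j]) d[i][j] = d[i][k] + d[k][j];
        int best = -1;
        long bestSum = INF;
        for (int x = 0; x < n; x++) {
            long s = d[u][x] + d[v][x] + d[w][x];
            if (s < bestSum) { bestSum = s; best = x; }
        }
        return best;
    }
}
```

For larger graphs, running Dijkstra three times (once from each of u, v, w) and scanning the three distance arrays gives the same answer in O(3 |E| log |V|).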
This was asked in an exam. Can you please help me find a solution?
Design an algorithm to find the number of ancestors of a given node (of a tree or a graph) using:
O(m) memory
Unlimited memory size
Assuming there are no cycles in the graph (otherwise "ancestors" make no sense), a DFS loop can be used to count the ancestors of any node k: just mark counted nodes in each iteration and do not count visited nodes twice.
for i in graph: visited[i] = false   // for DFS
for i in graph: counted[i] = false   // for ancestors
int globalcount = 0                  // counts the number of ancestors
for i in graph: DFS(i, k)            // DFS loop

def bool DFS(u, k) {                 // k is the node whose ancestors we want
    if (!visited[u]) {
        visited[u] = true            // prevent re-entering
        totalret = false             // whether there is a path from u to k
        for each edge (u, v) {
            retval = DFS(v, k)
            if (retval && !counted[u] && u != k) {  // path from u to k and u not yet counted
                globalcount++
                counted[u] = true
                totalret = true
            }
        }
        if (u == k) return true
        else return totalret
    }
    return (u == k) || counted[u]    // already visited: is it k or a known ancestor of k?
}

print globalcount                    // total number of ancestors of k
space complexity: O(V), where V is the number of vertices
time complexity: O(V + E), where E is the number of edges in the graph
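The same count can be obtained from the other direction: every ancestor of k has a path to k, so a DFS from k over the reversed edges visits exactly the ancestors. A sketch (names mine), also O(V + E) time and O(V) extra space:

```java
import java.util.*;

public class Ancestors {
    // Counts the ancestors of k in a DAG given as adjacency lists:
    // reverse all edges, then DFS from k; every vertex reached is an ancestor.
    static int countAncestors(List<List<Integer>> adj, int k) {
        int n = adj.size();
        List<List<Integer>> rev = new ArrayList<>();
        for (int i = 0; i < n; i++) rev.add(new ArrayList<>());
        for (int u = 0; u < n; u++)
            for (int v : adj.get(u)) rev.get(v).add(u);
        boolean[] seen = new boolean[n];
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(k);
        seen[k] = true;
        int count = 0;
        while (!stack.isEmpty()) {
            int u = stack.pop();
            for (int p : rev.get(u))
                if (!seen[p]) { seen[p] = true; count++; stack.push(p); }
        }
        return count; // k itself is not counted as its own ancestor
    }
}
```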
The algorithm depends on the design of the tree. The simplest case is nodes that contain a pointer to their parent, in which case it reduces to
int ancestors = 0;
while( node = node->parent) ancestors++;
The stated memory constraints do not pose an issue in any reasonable implementation.
If the node does not contain its parent, it depends on the structure of the tree.
In the most complex case, an unordered tree, it entails a full tree search, counting ancestors along the way.
For a search tree, all you need to do is a search.
I have a graph with n nodes as an adjacency matrix.
Is it possible to detect a sink in less than O(n) time?
If yes, how? If no, how do we prove it?
A sink vertex here is a vertex that has incoming edges from the other nodes and no outgoing edges.
Reading the link provided by SPWorley, I was reminded of the tournament-tree algorithm for finding the minimum element of an array of numbers: the node at the top of the tree is the minimum. The algorithm in the link likewise describes a competition between two nodes (v, w), which w wins if v -> w and v wins otherwise. So we can use an algorithm similar to finding the minimum element to find a candidate sink. However, a node is returned whether or not a sink exists; if a sink does exist, it is always the returned node. Hence we finally have to check that the returned node really is a sink.
// pseudo code
// M -> adjacency matrix
int a = 0
for (int i = 1; i < vertices; ++i)
{
    if (M[a, i]) a = i;
}
// check that a is a sink by reading out 2V entries from the matrix
return a; // if a is a sink, otherwise -1
This page answers your exact question.
The linear time algorithm is
def find-possible-sink(vertices):
    if there's only one vertex, return it
    good-vertices := empty-set
    pair vertices into at most n/2 pairs
    add any left-over vertex to good-vertices
    for each pair (v, w):
        if v -> w:
            add w to good-vertices
        else:
            add v to good-vertices
    return find-possible-sink(good-vertices)

def find-sink(vertices):
    v := find-possible-sink(vertices)
    if v is actually a sink, return it
    return "there is no spoon^H^H^H^Hink"
Suppose to the contrary that there exists an algorithm that queries fewer than (n-2)/2 edges, and let the adversary answer these queries arbitrarily. By the Pigeonhole Principle, there exist (at least) two nodes v, w that are not an endpoint of any edge queried. If the algorithm outputs v, then the adversary makes it wrong by putting in every edge with sink w, and similarly if the algorithm outputs w.
In the case of a general directed graph, no, and I don't think it needs a formal proof. Detecting a sink requires you either to identify the node and check that it has no outgoing edges, or to inspect every other node and see that none of them has a connection coming from it. In practice, you combine the two in an elimination algorithm, but there is no shortcut.
By the way, there is disagreement over the definition of sink. It's not usual to require all other nodes connect to the sink, because you can have multiple sinks. For instance, the bottom row in this diagram are all sinks, and the top row are all sources. However, it allows you to reduce the complexity to O(n). See here for some slightly garbled discussion.
I've been working on this problem and I believe this does it:
int graph::containsUniversalSink() {
    /****************************************************************
    Returns: Universal Sink, or -1 if it doesn't exist
    Parameters: Expects an adjacency-matrix to exist called matrix
    ****************************************************************/
    //a universal sink is a vertex with in-degree |V|-1 and out-degree 0
    //a vertex in a graph represented as an adjacency-matrix is a universal sink
    //if and only if its row is all 0s and its column is all 1s except the [i,i] entry - path to itself (hence in-degree |V|-1)
    //for i=0..|V|-1, j=0..|V|-1
    //if m[i][j]==0 then j is not a universal sink (unless i==j) - its column is not all 1s
    //if m[i][j]==1 then i is not a universal sink - its row is not all 0s
    int i = 0, j = 1;
    while (i < numVertices && j < numVertices) {
        if (j > i && matrix[i][j] == true) {
            //we found a 1, disqualifying vertex i, and we're not in the last row (j>i),
            //so we move to row j to see if it's all 0s
            i = j;
            if (j < numVertices - 1) {
                //if the row we're moving to is not the last row,
                //start checking one element after the diagonal entry
                //to avoid the possibility of an infinite loop
                j++;
            }
            continue;
        }
        if (j == numVertices - 1 && matrix[i][j] == false) {
            //the last element in a row is a 0,
            //thus this is the only possible universal sink
            //we have checked the row for 0s from i+1 (or i, in the last row) to numVertices-1 (inclusive);
            //we still need to check from 0 to i (inclusive)
            for (j = 0; j < i + 1; j++) {
                if (matrix[i][j] == true || (matrix[j][i] == false && i != j)) {
                    //this row is not all 0s, or this column is not all 1s, so return -1 (false)
                    return -1;
                }
            }
            //the row is all 0s, but we don't yet know the column is all 1s,
            //because we have only checked the column from 0 to i (inclusive);
            //if i < numVertices-1 there could still be a 0 in the column,
            //so check from i+1 to numVertices-1 (inclusive)
            for (j = i + 1; j < numVertices; j++) {
                if (matrix[j][i] == false) {
                    return -1;
                }
            }
            //if we get here we have a universal sink; return it
            return i;
        }
        j++;
    }
    //if we exit the loop there is no universal sink
    return -1;
    /********************
    Runtime Analysis
    The while loop executes at most |V| times: j is incremented by 1 on every iteration.
    If we reach the end of a row - this can happen only once - the first for loop
    executes i times and the second numVertices-i times, a combined numVertices iterations.
    So we have 2|V| loop executions, for a run-time bound of O(|V|).
    ********************/
}
I have figured out a solution to this.
I'm assuming arrays are initialized with all 0s (otherwise N needs to be filled with 0s) and that M is an adjacency matrix for the graph. Let n be the number of nodes (n = |V|).
j = i = 1
N = new int[n]
while (j <= n && i <= n) {
    if (N[i] == 1) {
        i++
    } else if (N[j] == 1) {
        j++
    } else if (M[i,j] == 1) {
        N[i] = 1
        i++
    } else if (i == j) {
        j++
    } else {
        N[j] = 1
        j++
    }
}
for (z = 1 to n) {
    if (N[z] == 0) {
        // verify the surviving candidate: its row must be all 0s
        // and its column all 1s (except M[z,z])
        for (q = 1 to n) {
            if (M[z,q] == 1) return NULL
            if (q != z && M[q,z] == 0) return NULL
        }
        return z
    }
}
return NULL
Why this works (not a formal proof):
Any node with an edge going out of it is not a universal sink. Thus, if M[i,j] is 1 for any j, then i cannot be a sink.
If M[i,j] is 0, then i has no edge to j, and j cannot be a universal sink.
A 1 at N[i] means I know i isn't a sink, and any node known not to be a sink can be skipped by both i and j. I stop when either exceeds n.
This way I keep checking nodes I haven't yet ruled out, until 1 or 0 possible sinks remain.
Any node still 0 at the end of the loop is therefore the only possible sink; the final check of its row and column confirms whether it actually is one (if no sink exists, the surviving candidate fails that check).
Why it is O(n):
The loop always increments either i or j and stops when either exceeds n, so it runs at most 2n times with constant work per iteration. The final scan and verification take at most another 2n steps. Hence the algorithm is O(n).
This solution is based on the idea of the celebrity problem, which is another way of looking at the same question.
There are many algorithms that show how to find the universal sink in O(n), but they are complex and not easily understood. I found one in a paper on the internet that finds a universal sink in O(n) very smoothly.
1) First create a "SINK" set consisting of all vertices of the graph, and build the adjacency matrix A of the graph.
2) While more than one vertex remains in the set, choose any 2 elements x and y of the set.
3) if (A(x,y) == 1) {
       remove x;   // x has an outgoing edge, so remove it from the "SINK" set
   } else {
       remove y;   // y is missing an incoming edge, so remove it from the "SINK" set
   }
By this algorithm you end up with a single candidate in your SINK set after n - 1 comparisons, that is O(n) time; a final O(n) scan of the candidate's row and column confirms whether it really is a universal sink.
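A compact Java sketch of this elimination plus the final check (names are mine; matrix[x][y] == true means an edge x -> y):

```java
public class UniversalSink {
    // Elimination pass over the "SINK set" idea above: matrix[candidate][y]
    // being true eliminates the candidate (it has an outgoing edge); otherwise
    // y is eliminated (it lacks an incoming edge). The survivor is then
    // verified in O(n), for O(n) total.
    static int findUniversalSink(boolean[][] m) {
        int n = m.length;
        int candidate = 0;
        for (int y = 1; y < n; y++)
            if (m[candidate][y]) candidate = y; // candidate has an out-edge: eliminated
        // verify: the candidate's row is all false, its column all true (bar the diagonal)
        for (int j = 0; j < n; j++) {
            if (m[candidate][j]) return -1;
            if (j != candidate && !m[j][candidate]) return -1;
        }
        return candidate;
    }
}
```

The verification step is essential: when no universal sink exists, the elimination pass still produces a survivor, which must be rejected here.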