Is connected graph a requirement for Dijkstra algorithm? - algorithm

Apart from the graph having non-negative weights, does Dijkstra's algorithm require connectedness? E.g. would Dijkstra's algorithm work on a disconnected graph, where 3 vertices are connected in one component and 2 other vertices are in another component?

Here is Dijkstra's algorithm from Wikipedia:
1. Mark all nodes unvisited. Create a set of all the unvisited nodes called the unvisited set.
2. Assign to every node a tentative distance value: set it to zero for our initial node and to infinity for all other nodes. During the run of the algorithm, the tentative distance of a node v is the length of the shortest path discovered so far between the node v and the starting node. Since initially no path is known to any other vertex than the source itself (which is a path of length zero), all other tentative distances are initially set to infinity. Set the initial node as current.
3. For the current node, consider all of its unvisited neighbors and calculate their tentative distances through the current node. Compare the newly calculated tentative distance to the one currently assigned to the neighbor and assign it the smaller one. For example, if the current node A is marked with a distance of 6, and the edge connecting it with a neighbor B has length 2, then the distance to B through A will be 6 + 2 = 8. If B was previously marked with a distance greater than 8 then change it to 8. Otherwise, the current value will be kept.
4. When we are done considering all of the unvisited neighbors of the current node, mark the current node as visited and remove it from the unvisited set. A visited node will never be checked again (this is valid and optimal in connection with the behavior in step 6: the next nodes to visit are always taken in order of 'smallest distance from the initial node first', so any later visit would have a greater distance).
5. If the destination node has been marked visited (when planning a route between two specific nodes) or if the smallest tentative distance among the nodes in the unvisited set is infinity (when planning a complete traversal; occurs when there is no connection between the initial node and the remaining unvisited nodes), then stop. The algorithm has finished.
6. Otherwise, select the unvisited node that is marked with the smallest tentative distance, set it as the new current node, and go back to step 3.
Note the stopping condition in step 5: the algorithm also stops when the smallest tentative distance among the unvisited nodes is infinity. Laid out this way, if there is no path between the initial node and the destination node, the algorithm still stops once all nodes in the component of the initial node are visited.
So no, the graph being connected is not a requirement. The algorithm will tell you whether the path exists.

According to Dijkstra's algorithm as given in pseudocode on Wikipedia (copied below for convenience), the disconnected vertices' distances from the source would simply remain INFINITY. So the answer to your question is no: Dijkstra's algorithm still returns a correct result even if the graph is disconnected.
function Dijkstra(Graph, source):

    for each vertex v in Graph.Vertices:
        dist[v] ← INFINITY
        prev[v] ← UNDEFINED
        add v to Q
    dist[source] ← 0

    while Q is not empty:
        u ← vertex in Q with min dist[u]
        remove u from Q

        for each neighbor v of u still in Q:
            alt ← dist[u] + Graph.Edges(u, v)
            if alt < dist[v]:
                dist[v] ← alt
                prev[v] ← u

    return dist[], prev[]
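For concreteness, here is a minimal runnable version of the same idea in C++ (the adjacency-list representation, the INF sentinel and the small 5-vertex example are my own choices, not part of the pseudocode above). Vertices unreachable from the source simply keep the INFINITY distance, and the algorithm terminates normally on a disconnected graph:

#include <bits/stdc++.h>
using namespace std;

const long long INF = numeric_limits<long long>::max();

// Returns dist[v] for every vertex; vertices unreachable from source keep INF.
vector<long long> dijkstra(int n, const vector<vector<pair<int,long long>>>& adj, int source) {
    vector<long long> dist(n, INF);
    dist[source] = 0;
    // min-heap of (tentative distance, vertex)
    priority_queue<pair<long long,int>, vector<pair<long long,int>>, greater<>> pq;
    pq.push({0, source});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d != dist[u]) continue;                  // stale heap entry, skip
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}

int main() {
    // 5 vertices: {0,1,2} form one component, {3,4} another.
    int n = 5;
    vector<vector<pair<int,long long>>> adj(n);
    auto addEdge = [&](int u, int v, long long w) {
        adj[u].push_back({v, w});
        adj[v].push_back({u, w});
    };
    addEdge(0, 1, 4); addEdge(1, 2, 1); addEdge(3, 4, 7);
    auto dist = dijkstra(n, adj, 0);
    for (int v = 0; v < n; ++v)
        cout << v << ": " << (dist[v] == INF ? "INF" : to_string(dist[v])) << "\n";
    // vertices 3 and 4 print INF -- the algorithm handles the disconnected graph fine
}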

Related

Path with least maximum edge weight

Let's say we have a directed non-negative-weighted graph.
We have to find the least cost path between (u, v).
The cost of a path is defined as the cost of the second most expensive edge that the path contains.
Here's an example.
Graph with 4 nodes and 4 edges:
from 1 to 2 at cost 3
from 1 to 3 at cost 7
from 2 to 3 at cost 5
from 3 to 4 at cost 2
The optimal path between 1 and 4 should be 1 - 3 - 4 with cost 2 (the edge costs are 7 and 2; the second highest one is 2).
Standard Dijkstra SSSP (reconstructing the path and finding the second highest edge) obviously doesn't work.
I've thought about using an MST (which should be OK), but it's not guaranteed to cover the best path between (u, v).
We can get O(E + V log V), which is o(E log E) for sufficiently dense graphs. Using Dijkstra with a Fibonacci heap, compute two max-weight (as opposed to second max-weight) shortest path trees, one directed leafward from the root u, one directed rootward to the root v. For each edge s->t, consider the path consisting of the max-weight shortest path from u to s, the edge s->t, and the max-weight shortest path from t to v, whose second max-weight is bounded by taking the maximum of the u->s and t->v segments.
Consider binary search for the optimum cost. Sort weights of all edges, and search for the least value X satisfying the condition:
There is a u -> v path which has at most one edge with weight greater than X.
How to check the condition? For a given X:
Run a DFS from u and find the set U of vertices reachable from u using only edges of weight at most X. If v is in U, the condition is satisfied.
Otherwise, find the corresponding set V with a DFS from v.
The condition is satisfied if and only if there exists an edge with one endpoint in U and the other in V.
Time complexity: O(E log E).
You can binary search over the answer (sort the edges by weight beforehand).
For a fixed answer c, call edges with weight > c heavy and the other edges light.
All you need to check is whether there is a path with at most 1 heavy edge. You can do this by assigning cost 0 to light edges and cost 1 to heavy ones and running a 0-1 BFS. If the resulting distance is <= 1, then it is possible to obtain a path with cost at most c.
The time complexity is O(E log E).
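Here is a rough C++ sketch of the feasibility check described above (the graph representation and the name feasible are my own, not from the answer). The outer binary search over the sorted edge weights would call it once per candidate c:

#include <bits/stdc++.h>
using namespace std;

struct Edge { int to; long long w; };

// 0-1 BFS: edges heavier than c cost 1, all other edges cost 0.
// Returns true if some u -> v path uses at most one edge heavier than c.
bool feasible(int n, const vector<vector<Edge>>& adj, int u, int v, long long c) {
    vector<int> dist(n, INT_MAX);
    deque<int> dq;
    dist[u] = 0;
    dq.push_back(u);
    while (!dq.empty()) {
        int x = dq.front(); dq.pop_front();
        for (const Edge& e : adj[x]) {
            int cost = (e.w > c) ? 1 : 0;             // heavy edge costs 1, light edge costs 0
            if (dist[x] + cost < dist[e.to]) {
                dist[e.to] = dist[x] + cost;
                if (cost == 0) dq.push_front(e.to);   // 0-edges go to the front of the deque
                else           dq.push_back(e.to);    // 1-edges go to the back
            }
        }
    }
    return dist[v] <= 1;
}

The binary search then looks for the smallest candidate c among the sorted edge weights for which feasible returns true, exactly as described above.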

Using BFS or DFS to determine the connectivity in a non connected graph?

How can I design an algorithm using BFS or DFS in order to determine the connected components of a non-connected graph? The algorithm must be able to denote the set of vertices of each connected component.
This is my approach:
1) Initialize all vertices as not visited.
2) Do a DFS traversal of the graph starting from any arbitrary vertex v. If the DFS traversal doesn't visit all vertices, then return false.
3) Reverse all arcs (or find the transpose or reverse of the graph).
4) Mark all vertices as not visited in the reversed graph.
5) Do a DFS traversal of the reversed graph starting from the same vertex v (same as step 2). If the DFS traversal doesn't visit all vertices, then return false. Otherwise return true.
The idea is, if every node can be reached from a vertex v, and every node can reach v, then the graph is strongly connected. In step 2, we check if all vertices are reachable from v. In step 4, we check if all vertices can reach v (in the reversed graph, if all vertices are reachable from v, then all vertices can reach v in the original graph).
Any idea of how to improve this solution?
How about
let vertices = input
let results = empty list
while there are vertices in vertices:
    create a set S
    choose an arbitrary unexplored vertex, and put it in S
    run BFS/DFS from that vertex, and with each vertex found, remove it from vertices and add it to S
    add S to results
return results
When this completes, you'll have a list of sets of vertices, where each set was made from graph searching from some vertex (making the vertices in each set connected). Assuming an undirected graph, this should work OK (off the top of my head).
This can be done easily using either BFS or DFS in time complexity of O(V+E).
// this is the DFS solution
numCC = 0;
dfs_num.assign(V, UNVISITED);            // sets all vertices' state to UNVISITED
for (int i = 0; i < V; i++)              // for each vertex i in [0..V-1]
    if (dfs_num[i] == UNVISITED)         // if vertex i is not visited yet
        printf("CC %d:", ++numCC), dfs(i), printf("\n");
The output of above code for 3 connected components would be something like :
// CC 1: 0 1 2 3 4
// CC 2: 5
// CC 3: 6 7 8
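For completeness, here is a self-contained version of the same snippet that compiles and runs end to end (the adjacency list, the dfs helper and the sample graph are my additions; the original excerpt assumes them from surrounding code):

#include <bits/stdc++.h>
using namespace std;

const int UNVISITED = -1;
int V;                                   // number of vertices
vector<vector<int>> adjList;             // adjacency list of the undirected graph
vector<int> dfs_num;                     // visited marker per vertex

void dfs(int u) {                        // marks and prints every vertex of u's component
    dfs_num[u] = 1;
    printf(" %d", u);
    for (int v : adjList[u])
        if (dfs_num[v] == UNVISITED)
            dfs(v);
}

int main() {
    V = 9;
    adjList.assign(V, vector<int>());
    auto addEdge = [&](int a, int b) { adjList[a].push_back(b); adjList[b].push_back(a); };
    addEdge(0, 1); addEdge(1, 2); addEdge(2, 3); addEdge(3, 4);   // component {0,1,2,3,4}
    addEdge(6, 7); addEdge(7, 8);                                 // component {6,7,8}; vertex 5 is isolated

    int numCC = 0;
    dfs_num.assign(V, UNVISITED);
    for (int i = 0; i < V; i++)          // same loop as in the snippet above
        if (dfs_num[i] == UNVISITED)
            printf("CC %d:", ++numCC), dfs(i), printf("\n");
}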
A standard approach for solving this problem is to run DFS starting from each node.
Start by labeling all nodes as unvisited. Then, iterate over the nodes in any order. For each node, if it's not already labeled as being in a connected component, run DFS from that node and mark all reachable nodes as being in the same CC. If the node was already marked, skip it. This then discovers all CC's of the graph one CC at a time.
Moreover, this is very efficient. If there are m edges and n nodes, the runtime is O(n) for the first step (marking all nodes as unvisited) and O(m + n) for the second, since each node and edge are visited at most twice. Thus the overall runtime is O(m + n).
Hope this helps!
Since you seem to be working with a directed graph, and you want to find the connected components (not the strongly connected ones), you have to treat your graph as undirected first. So for each edge, add a temporary edge in the opposite direction. Then you can use a simple DFS starting from each vertex which hasn't been visited yet to find the connected components. Finally, you can remove the temporary edges.

Explanation of Algorithm for finding articulation points or cut vertices of a graph

I have searched the net and could not find any explanation of a DFS algorithm for finding all articulation vertices of a graph. There is not even a wiki page.
From reading around, I got to know the basic facts from this PDF.
There is a variable at each node which looks at back edges and finds the closest node to the root that can be reached; after processing all edges it is known.
But I do not understand how to compute this variable at each node during the execution of the DFS. What is this variable doing exactly?
Please explain the algorithm.
Thanks.
Finding articulation vertices is an application of DFS.
In a nutshell,
Apply DFS on a graph. Get the DFS tree.
A node which is visited earlier is a "parent" of those nodes which are reached by it and visited later.
If any child of a node does not have a path to any of the ancestors of its parent, it means that removing this node would make this child disjoint from the graph.
There is an exception: the root of the tree. If it has more than one child, then it is an articulation point, otherwise not.
Point 3 essentially means that this node is an articulation point.
Now for a child, this path to the ancestors of the node would be through a back-edge from it or from any of its children.
All this is explained beautifully in this PDF.
I'll try to develop an intuitive understanding on how this algorithm works and also give commented pseudocode that outputs Bi-Components as well as bridges.
It's actually easy to develop a brute force algorithm for articulation points. Just take out a vertex, and run BFS or DFS on a graph. If it remains connected, then the vertex is not an articulation point, otherwise it is. This will run in O(V(E+V)) = O(EV) time. The challenge is how to do this in linear time (i.e. O(E+V)).
Articulation points connect two (or more) subgraphs. This means there are no edges from one subgraph to another. So imagine you are within one of these subgraphs and visiting its node. As you visit the node, you flag it and then move on to the next unflagged node using some available edge. While you are doing this, how do you know you are within still same subgraph? The insight here is that if you are within the same subgraph, you will eventually see a flagged node through an edge while visiting an unflagged node. This is called a back edge and indicates that you have a cycle. As soon as you find a back edge, you can be confident that all the nodes through that flagged node to the one you are visiting right now are all part of the same subgraph and there are no articulation points in between. If you didn't see any back edges then all the nodes you visited so far are all articulation points.
So we need an algorithm that visits vertices and marks all nodes between the target of a back edge and the currently-being-visited node as belonging to the same subgraph. There may obviously be subgraphs within subgraphs, so we need to select the largest subgraph we have so far. These subgraphs are called bi-components. We can implement this algorithm by assigning each bi-component an ID which is initialized as just the count of the number of vertices we have visited so far. Later, as we find back edges, we can reset the bi-component ID to the lowest we have found so far.
We obviously need two passes. In the first pass, we want to figure out which vertex we can see from each vertex through back edges, if any. In the second pass we want to visit vertices in the opposite direction and collect the minimum bi-component ID (i.e. earliest ancestor accessible from any descendants). DFS naturally fits here. In DFS we go down first and then come back up so both of the above passes can be done in a single DFS traversal.
Now without further ado, here's the pseudocode:
time = 0
visited[i] = false for all i
parent[i] = null for all i

GetArticulationPoints(u)
    visited[u] = true
    u.st = time++
    u.low = u.st                       // earliest-discovered (closest to root) ancestor reachable from u's subtree
    dfsChild = 0                       // counts DFS children; only needed for the root check below
    for each ni in adj[u]
        if not visited[ni]
            parent[ni] = u
            GetArticulationPoints(ni)
            ++dfsChild
            u.low = Min(u.low, ni.low)               // coming back up, inherit the lowest reachable ancestor from the child
            if parent[u] != null and ni.low >= u.st  // nothing in ni's subtree climbs above u
                Output u as articulation point
            if ni.low > u.st                         // nothing in ni's subtree reaches back even to u
                Output edge (u, ni) as bridge
        else if ni <> parent[u]                      // going down, note the back edges
            u.low = Min(u.low, ni.st)
    // The DFS root can't be judged by the low values: disconnecting it may not
    // decompose the graph, so it gets its own check.
    if parent[u] = null and dfsChild > 1
        Output u as articulation point
    Output u.low as bi-component ID
One fact that seems to be left out of all the explanations:
Fact #1: In a depth first search spanning tree (DFSST), every backedge connects a vertex to one of its ancestors.
This is essential for the algorithm to work, it is why an arbitrary spanning tree won't work for the algorithm. It is also the reason why the root is an articulation point iff it has more than 1 child: there cannot be a backedge between the subtrees rooted at the children of the spanning tree's root.
A proof of the statement is, let (u, v) be a backedge where u is not an ancestor of v, and (WLOG) u is visited before v in the DFS. Let p be the deepest ancestor of both u and v. Then the DFS would have to visit p, then u, then somehow revisit p again before visiting v. But it isn't possible to revisit p before visiting v because there is an edge between u and v.
Call V(c) the set of vertices in the subtree rooted at c in the DFSST
Call N(c) the set of vertices that have a neighbor in V(c) (by edge or by backedge)
Fact #2:
For a non root node u,
If u has a child c such that N(c) ⊆ V(c) ∪ {u} then u is an articulation point.
Reason: for every vertex w in V(c), every path from the root to w must contain u. If not, such a path would have to contain a back edge that connects an ancestor of u to a descendant of u (due to Fact #1), which would put an ancestor of u, a vertex outside V(c) ∪ {u}, into N(c).
Fact #3:
The converse of fact #2 is also true.
Reason: Every descendant of u has a path to the root that doesn't pass through u.
A descendant in V(c) can bypass u with a path through a backedge that connects V(c) to N(c) \ (V(c) ∪ {u}).
So for the algorithm, you only need to know 2 things about each non-root vertex u:
The depth of the vertex, say D(u)
The minimum depth of N(u), also called the lowpoint, let's say L(u)
So if a vertex u has a child c, and L(c) is less than D(u), then the subtree rooted at c has a backedge that reaches out to an ancestor of u, so c does not make u an articulation point (Fact #3). Conversely, if some child c has L(c) >= D(u), then u is an articulation point (Fact #2).
If the low value of a child of u is greater than or equal to the dfsnum of u, then u is said to be an articulation point.
int adjMatrix[256][256];
int low[256], num = 0, dfsnum[256];     // initialize every dfsnum[] entry to -1 before the first call

void cutvertex(int u, int parent){
    low[u] = dfsnum[u] = num++;
    int children = 0;
    for (int v = 0; v < 256; ++v)
    {
        if (!adjMatrix[u][v] || v == parent) continue;   // skip non-edges and the tree edge back to the parent
        if (dfsnum[v] == -1)                             // tree edge: v not visited yet
        {
            ++children;
            cutvertex(v, u);
            if (low[v] >= dfsnum[u] && parent != -1)     // no back edge from v's subtree climbs above u
                cout << "Cut Vertex: " << u << "\n";
            low[u] = min(low[u], low[v]);
        }
        else                                             // back edge: v was discovered earlier
            low[u] = min(low[u], dfsnum[v]);
    }
    if (parent == -1 && children > 1)                    // the DFS root is a cut vertex iff it has 2+ children
        cout << "Cut Vertex: " << u << "\n";
}
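A possible way to drive it (assuming adjMatrix has been filled in and n is the number of vertices actually used): set every dfsnum entry to -1 and start a DFS from each still-unvisited vertex, passing -1 as the "no parent" sentinel:

memset(dfsnum, -1, sizeof dfsnum);   // mark all vertices unvisited
for (int u = 0; u < n; ++u)
    if (dfsnum[u] == -1)
        cutvertex(u, -1);            // -1 = no parent, i.e. u is a DFS root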

Second min cost spanning tree

I'm writing an algorithm for finding the second min cost spanning tree. My idea was as follows:
Use Kruskal's to find the lowest MST.
Delete the lowest cost edge of the MST.
Run Kruskal's again on the entire graph.
Return the new MST.
My question is: Will this work? Is there a better way perhaps to do this?
You can do it in O(V²). First compute the MST using Prim's algorithm (can be done in O(V²)).
Compute max[u, v] = the cost of the maximum cost edge on the (unique) path from u to v in the MST. Can be done in O(V²).
Find an edge (u, v) that's NOT part of the MST that minimizes abs(max[u, v] - weight(u, v)). Can be done in O(E) = O(V²).
Return MST' = MST - {the edge of weight max[u, v] on that path} + {(u, v)}, which will give you the second best MST.
Here's a link to pseudocode and more detailed explanations.
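Here is a compilable C++ sketch of this approach (the O(V²) Prim implementation, the per-vertex BFS that fills max[u, v], and the small example graph are my own choices, not taken from the answer):

#include <bits/stdc++.h>
using namespace std;

int main() {
    const long long INF = 1e18;
    int n = 4;
    // Weight matrix; INF means "no edge".
    vector<vector<long long>> w(n, vector<long long>(n, INF));
    auto addEdge = [&](int a, int b, long long c) { w[a][b] = w[b][a] = c; };
    addEdge(0, 1, 1); addEdge(1, 2, 2); addEdge(2, 3, 3); addEdge(0, 2, 4); addEdge(1, 3, 5);

    // Step 1: Prim's algorithm in O(V^2); parent[] describes the MST.
    vector<long long> best(n, INF);
    vector<int> parent(n, -1);
    vector<bool> inTree(n, false);
    best[0] = 0;
    long long mstCost = 0;
    for (int it = 0; it < n; ++it) {
        int u = -1;
        for (int v = 0; v < n; ++v)
            if (!inTree[v] && (u == -1 || best[v] < best[u])) u = v;
        inTree[u] = true;
        mstCost += best[u];
        for (int v = 0; v < n; ++v)
            if (!inTree[v] && w[u][v] < best[v]) { best[v] = w[u][v]; parent[v] = u; }
    }

    // Step 2: maxEdge[u][v] = heaviest edge on the unique tree path between u and v
    // (a BFS over the tree edges, started once from every vertex).
    vector<vector<int>> tree(n);
    for (int v = 1; v < n; ++v) { tree[v].push_back(parent[v]); tree[parent[v]].push_back(v); }
    vector<vector<long long>> maxEdge(n, vector<long long>(n, 0));
    for (int s = 0; s < n; ++s) {
        vector<bool> seen(n, false);
        queue<int> q;
        seen[s] = true; q.push(s);
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int v : tree[u])
                if (!seen[v]) {
                    seen[v] = true;
                    maxEdge[s][v] = max(maxEdge[s][u], w[u][v]);
                    q.push(v);
                }
        }
    }

    // Step 3: for every non-tree edge (u, v), swap it in and drop the heaviest
    // edge on the tree path between u and v; keep the best total weight.
    long long secondBest = INF;
    for (int u = 0; u < n; ++u)
        for (int v = u + 1; v < n; ++v) {
            if (w[u][v] == INF) continue;                      // no such edge
            if (parent[u] == v || parent[v] == u) continue;    // already an MST edge
            secondBest = min(secondBest, mstCost - maxEdge[u][v] + w[u][v]);
        }

    cout << "MST cost: " << mstCost << ", second-best MST cost: " << secondBest << "\n";
}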
Consider this case:
------100----
|           |
A--1--B--3--C
      |     |
      |     3
      |     |
      2-----D
The MST consists of A-B-D-C (cost 6). The second min cost is A-B-C-D (cost 7). If you delete the lowest cost edge, you will get A-C-B-D (cost 105) instead.
So your idea will not work. I have no better idea though...
You can do this -- try removing the edges of the MST from the graph, one at a time, recompute the MST each time, and take the minimum over the results. So this is similar to yours, except iterative:
Use Kruskal's to find the MST.
For each edge in the MST:
    Remove the edge from the graph
    Calculate MST' of the remaining graph
    Keep track of the smallest MST' found so far
    Add the edge back to the graph
Return the smallest MST'.
This is similar to Larry's answer.
After finding the MST,
for each new_edge that is not an edge in the MST:
    Add new_edge to the MST.
    Find the cycle that is formed.
    Find the edge with maximum weight in the cycle that is not the non-MST edge you just added.
    Record the weight increase as W_Inc = w(new_edge) - w(max_weight_edge_in_cycle).
    If W_Inc < Min_W_Inc_Seen_So_Far then
        Min_W_Inc_Seen_So_Far = W_Inc
        edge_to_add = new_edge
        edge_to_remove = max_weight_edge_in_cycle
Solution from following link.
http://web.mit.edu/6.263/www/quiz1-f05-sol.pdf
A slight edit to your algo:
Use Kruskal's to find the lowest MST.
for all edges i of the MST:
    Delete edge i from the graph.
    Run Kruskal's again on the entire graph.
    loss = cost of the new edge introduced - cost of edge i
return the MST for which loss is minimum
Here is an algorithm which computes the 2nd minimum spanning tree in O(n^2):
First find out the minimum spanning tree (T). It will take O(n^2) without using a heap.
Repeat for every edge e in T. = O(n^2)
    Let's say the current tree edge is e = (u, v). This tree edge divides the tree into two trees, say T1 and T - T1, with u in T1 and v in T - T1.
    Repeat for every vertex v' in T - T1. = O(n^2)
        Select the minimum-weight edge e' = (u', v') with u' in T1, v' in T - T1 and e' in G (the original graph).
    Calculate the weight of the newly formed tree: W = weight(T) - weight(e) + weight(e').
Select the newly formed tree which has the minimum weight W.
Your approach will not work, as it might be the case that the minimum-weight edge of the MST is a bridge (the only edge connecting 2 parts of the graph), so deleting this edge from the graph leaves the graph disconnected and you end up with 2 separate trees instead of one spanning tree.
Based on IVlad's answer:
Detailed explanation of the O(V² log V) algorithm
Find the minimum spanning tree (MST) using Kruskal's (or Prim's) algorithm, save its total weight, and for every node in the MST store its tree neighbors (i.e. the parent and all children) -> O(V² log V)
Compute the maximum edge weight between any two vertices in the minimum spanning tree. Starting from every vertex in the MST, traverse the entire tree with a depth- or breadth-first search by using the tree node neighbor lists computed earlier and store the maximum edge weight encountered so far at every new vertex visited. -> O(V²)
Find the second minimum spanning tree and its total weight. For every edge not belonging to the original MST, try disconnecting the two vertices it connects by removing the tree edge with the maximum weight between those two vertices, and then reconnecting them with the currently considered edge (note: the MST should be restored to its original state after every iteration). The total weight can be calculated by subtracting the weight of the removed edge and adding that of the added one. Store the minimum of the total weights obtained.
To practice you could try the competitive programming problem UVa 10600 - ACM Contest and Blackout, which involves finding the second minimum spanning tree in a weighted graph, as asked by the OP. My implementation (in modern C++) can be found here.
An MST is a spanning tree with the minimum total edge weight. Thus, the 2nd minimum MST is the spanning tree with the 2nd smallest total edge weight.
Let T -> BEST_MST (sort the edges in the graph, then find the MST using Kruskal's algorithm),
and T' -> the 2nd best MST.
Let's say T has 7 edges. To find T', we will remove those 7 edges one by one and find a replacement for each removed edge (the cost of the replacement will be greater than or equal to the cost of the edge we just removed from T).
Let's say the original graph has 15 edges;
our best MST (T) has 7 edges,
and the 2nd best MST (T') will also have exactly 7 edges.
How to find T':
there are 7 edges in T; for each of those 7 edges, remove it and find a replacement for it.
Let's say the edges in the MST (T) are {a, b, c, d, e, f, g},
and let's say our answer 2nd_best_MST initially has infinite weight (I know it doesn't sound good, let's just assume it for now).
for each edge i in BEST_MST:
    current_edge = i
    find a replacement for that edge; the replacement will have weight greater than or equal to the weight of edge i (one of the 7 edges)
    how do we find the replacement? Using Kruskal's algorithm (we are finding an MST again, so we use Kruskal's algorithm again, but this time we don't have to sort the edges, because we already did that when finding BEST_MST (T))
    a NEW_MST is generated
    2nd_best_MST = min(NEW_MST, 2nd_best_MST)
return 2nd_best_MST
ALGORITHM
Let's say the original graph has 10 edges.
Find the BEST_MST (using Kruskal's algo) and assume the BEST_MST has only 6 edges.
Now there are 4 remaining edges which are not in the BEST_MST (because their weights are larger), and one of those edges will give us our 2nd_best_MST.
For each edge 'X' not present in the BEST_MST (i.e. the 4 edges left): add that edge to our BEST_MST, which will create a cycle.
    Find the edge 'K' with the maximum weight in the cycle (other than the newly added edge 'X').
    Remove edge 'K' temporarily, which will form a new spanning tree.
    Calculate the difference in weight and map the weight difference to the edge 'X'.
Repeat this for all 4 of those edges, and return the spanning tree with the smallest weight difference to the BEST_MST.

Algorithm to check if directed graph is strongly connected

I need to check if a directed graph is strongly connected, or, in other words, if all nodes can be reached by any other node (not necessarily through direct edge).
One way of doing this is running a DFS or BFS from every node and checking that all other nodes are reachable.
Is there a better approach to do that?
Consider the following algorithm.
Start at a random vertex v of the graph G, and run a DFS(G, v).
If DFS(G, v) fails to reach every other vertex in the graph G, then there is some vertex u, such that there is no directed path from v to u, and thus G is not strongly connected.
If it does reach every vertex, then there is a directed path from v to every other vertex in the graph G.
Reverse the direction of all edges in the directed graph G.
Again run a DFS starting at v.
If the DFS fails to reach every vertex, then there is some vertex u, such that in the original graph there is no directed path from u to v.
On the other hand, if it does reach every vertex, then in the original graph there is a directed path from every vertex u to v.
Thus, if G "passes" both DFSs, it is strongly connected. Furthermore, since a DFS runs in O(n + m) time, this algorithm runs in O(2(n + m)) = O(n + m) time, since it requires 2 DFS traversals.
Tarjan's strongly connected components algorithm (or Gabow's variation) will of course suffice; if there's only one strongly connected component, then the graph is strongly connected.
Both are linear time.
As with a normal depth first search, you track the status of each node: new, seen but still open (it's in the call stack), and seen and finished. In addition, you store the depth when you first reached a node, and the lowest such depth that is reachable from the node (you know this after you finish a node). A node is the root of a strongly connected component if the lowest reachable depth is equal to its own depth. This works even if the depth by which you reach a node from the root isn't the minimum possible.
To check just for whether the whole graph is a single SCC, initiate the dfs from any single node, and when you've finished, if the lowest reachable depth is 0, and every node was visited, then the whole graph is strongly connected.
To check if every node has both paths to and from every other node in a given graph:
1. DFS/BFS from all nodes:
Tarjan's algorithm supposes every node has a depth d[i]. Initially, the root has the smallest depth. And we do the post-order DFS updates d[i] = min(d[j]) for any neighbor j of i. Actually BFS also works fine with the reduction rule d[i] = min(d[j]) here.
function dfs(i)
    d[i] = i
    mark i as visited
    for each neighbor j of i:
        if j is not visited then dfs(j)
        d[i] = min(d[i], d[j])
If there is a forward path from u to v, then d[u] <= d[v]. Within an SCC, d[v] <= d[u] <= d[v]; thus, all the nodes in an SCC will have the same depth. To tell whether a graph is a single SCC, we check whether all nodes have the same d[i].
2. Two DFS/BFS from the single node:
It is a simplified version of the Kosaraju’s algorithm. Starting from the root, we check if every node can be reached by DFS/BFS. Then, reverse the direction of every edge. We check if every node can be reached from the same root again. See C++ code.
You can calculate the All-Pairs Shortest Path and see if any is infinite.
Tarjan's Algorithm has been already mentioned. But I usually find Kosaraju's Algorithm easier to follow even though it needs two traversals of the graph. IIRC, it is also pretty well explained in CLRS.
test-connected(G)
{
    choose a vertex x
    make a list L of vertices reachable from x,
    and another list K of vertices to be explored.
    initially, L = K = x.
    while K is nonempty
        find and remove some vertex y in K
        for each edge (y, z)
            if (z is not in L)
                add z to both L and K
    if L has fewer than n items
        return disconnected
    else return connected
}
You can use Kosaraju’s DFS based simple algorithm that does two DFS traversals of graph:
The idea is, if every node can be reached from a vertex v, and every node can reach v, then the graph is strongly connected.
In step 2 of the algorithm, we check if all vertices are reachable from v. In step 4, we check if all vertices can reach v (In reversed graph, if all vertices are reachable from v, then all vertices can reach v in original graph).
Algorithm :
1) Initialize all vertices as not visited.
2) Do a DFS traversal of graph starting from any arbitrary vertex v. If DFS traversal doesn’t visit all vertices, then return false.
3) Reverse all arcs (or find transpose or reverse of graph)
4) Mark all vertices as not-visited in reversed graph.
5) Do a DFS traversal of reversed graph starting from same vertex v (Same as step 2). If DFS traversal doesn’t visit all vertices, then return false. Otherwise return true.
Time Complexity: Time complexity of above implementation is same as Depth First Search which is O(V+E) if the graph is represented using adjacency list representation.
One way of doing this would be to generate the Laplacian matrix for the graph, then calculate the eigenvalues, and finally count the number of zeros. The graph is strongly connected if there exists only one zero eigenvalue.
Note: Pay attention to the slightly different method for creating the Laplacian matrix for directed graphs.
The algorithm to check if a graph is strongly connected is quite straightforward. But why does the below algorithm work?
Algorithm: suppose there is a graph with vertices [A, B, C......Z]
Choose any random node, say J, and perform DFS from it. If all the nodes are reachable then continue to step 2.
Reverse the directions of the edges of the graph by doing transpose.
Again run DFS from node J and check if all the nodes are visited. If yes then the graph is strongly connected and return true.
Performing step 1 makes sense because we have to check whether we can reach all the nodes from that node. After this, the next logical step could be either:
i) Now do this for all other nodes
ii) or try to reach node J from every other node. Because once you reach node J, you are sure that you can reach every other node because of step 1.
This is what we are trying to do in steps 2 & 3. If in a transposed graph node J is able to reach all other nodes then this implies that in original graph all other nodes can reach J.
