Is the worst-case time complexity of BFS in a graph traversal n+2E?

I understand that the time complexity of BFS in a graph traversal is O(V + E), since every vertex and every edge will be explored in the worst case.
But is the exact time complexity n + 2E?
Every vertex is explored once, plus every adjacent vertex.
The sum of the degrees of all the vertices in a graph = number of edges * 2 = 2E.
Thus the time complexity is n + 2E. Am I correct?

For a random graph, the time complexity is O(V+E): Breadth-first search
As stated in the link, depending on the topology of your graph, O(E) may vary from O(V) (if your graph is acyclic) to O(V^2) (if every pair of vertices is connected by an edge).
Therefore the time complexity varies from O(V + V) = O(V) to O(V + V^2) = O(V^2) according to the topology of your graph.
Besides, if the graph has no isolated vertices then |V| <= 2|E|, so O(V + E) = O(3E) = O(E) is also correct, but the bound is looser.

Assumptions
Let's assume that G is connected and undirected. If it's not connected, you can apply the idea below to every connected component of G independently. In addition, let's assume that G is represented using adjacency lists, and that for every vertex v we can decide in O(1) time whether v has been visited, for example using a lookup table.
Analysis
If you want to count the exact number of steps in the BFS, you can observe the following:
Since G is connected, BFS will visit every vertex exactly once, so we count |V| vertex visits. Notice that in one visit you may perform more operations than just marking the current vertex visited, not counting the loop over its edges.
For every vertex v, we want to count how many edges the BFS examines at this vertex.
You have to loop over all edges of v to perform the BFS; if you skip one edge, it's easy to show that BFS is no longer correct. So every edge is examined twice, once from each endpoint.
One question may arise here: do we need to examine the edge (p, v) at vertex v, where p is the parent of v in the already constructed BFS tree, i.e. the vertex from which we came to v directly? Of course you don't have to consider this edge, but deciding to skip it also costs at least one additional operation:
for (v, u) in v.edges:
    if u == p:  # p is the parent of v in the already constructed tree
        continue
    if not visited[u]:
        BFS(u, parent=v)
It examines the same number of edges as the code below, but does more work, because for all but one edge it runs two if-statements rather than one.
for (v, u) in v.edges:
    if not visited[u]:  # if p is the parent of v, then p is already visited
        BFS(u, parent=v)
Conclusion
You may even devise a different method to skip the edge (v, p), but it will always take at least one operation, so it's a wasted effort.
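To see the n + 2E count concretely, here is a minimal instrumented sketch in Python (the sample graph, function and counter names are just for illustration): each vertex is dequeued exactly once, and each undirected edge is examined once from each endpoint.

from collections import deque

def bfs_count(adj, source):
    # adj: adjacency lists of an undirected, connected graph
    visited = [False] * len(adj)
    vertex_ops = edge_ops = 0
    visited[source] = True
    q = deque([source])
    while q:
        v = q.popleft()
        vertex_ops += 1        # one visit per vertex: n in total
        for u in adj[v]:
            edge_ops += 1      # each edge is examined from both endpoints: 2E in total
            if not visited[u]:
                visited[u] = True
                q.append(u)
    return vertex_ops, edge_ops

# A triangle plus one pendant vertex: n = 4, E = 4
adj = [[1, 2, 3], [0, 2], [0, 1], [0]]
print(bfs_count(adj, 0))       # (4, 8): n vertex visits and 2E edge examinations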

Related

Time Complexity Analysis of BFS

I know that there are a ton of questions out there about the time complexity of BFS, which is O(V+E).
However, I still struggle to understand why the time complexity is O(V+E) and not O(V*E).
I know that O(V+E) stands for O(max[V,E]), and my only guess is that it has something to do with the density of the graph and not with the algorithm itself, unlike, say, Merge Sort, whose time complexity is always O(n*log n).
Examples I've thought of are:
A directed graph with |E| = |V|-1, where the time complexity will be O(V)
A directed graph with |E| = |V|*(|V|-1), where the complexity would in fact be O(|E|) = O(|V|*|V|), as each vertex has an outgoing edge to every other vertex besides itself
Am I on the right track? Any insight would be really helpful.
Your "examples of thought" illustrate that the complexity is not O(V*E), but O(E). True, E can be a large number in comparison with V, but it doesn't matter when you say the complexity is O(E).
When the graph is connected, then you can always say it is O(E). The reason to include V in the time complexity, is to cover for the graphs that have many more vertices than edges (and thus are disconnected): the BFS algorithm will not only have to visit all edges, but also all vertices, including those that have no edges, just to detect that they don't have edges. And so we must say O(V+E).
The complexity comes off easily if you walk through the algorithm. Let Q be the FIFO queue where initially it contains the source node. BFS basically does the following
while Q not empty
    pop u from Q
    for each adjacency v of u
        if v is not marked
            mark v
            push v into Q
Since each node is added once and removed once, the while loop runs O(V) times. Also, each time we pop u we perform |adj[u]| operations, where |adj[u]| is the number of adjacencies of u.
Therefore the total complexity is the sum of (1 + |adj[u]|) over all vertices, which is O(V+E), since the sum of the adjacencies is O(E) (2E for an undirected graph and E for a directed one).
Consider a situation where you have a graph, maybe even with cycles: you start the search from the root and your target is the last leaf. In this case you will traverse all the edges before you reach your destination.
E.g.
0 - 1
1 - 2
0 - 2
0 - 3
In this scenario you will check 4 edges before you actually find node #3.
It depends on how the adjacency list is implemented. A properly implemented adjacency list is a list/array of vertices with a list of related edges attached to each vertex entry.
The key is that the edge entries point directly to their corresponding vertex array/list entry; they never have to search through the vertex array/list for a matching entry, they can just look it up directly. This ensures that the total number of edge accesses is 2E and the total number of vertex accesses is V+2E. This makes the total time O(E+V).
In improperly implemented adjacency lists, the vertex array/list is not directly indexed, so to go from an edge entry to a vertex entry you have to search through the vertex list, which is O(V). This means that the total time is O(E*V).
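As a minimal sketch of the "properly implemented" case (names illustrative), the vertices are just integers 0..n-1 and each edge entry stores the neighbour's index directly, so hopping from an edge to its endpoint is an O(1) lookup rather than an O(V) search:

n = 4
adj = [[] for _ in range(n)]      # adj[u] holds the indices of u's neighbours
for u, v in [(0, 1), (1, 2), (0, 2), (0, 3)]:
    adj[u].append(v)              # each undirected edge is stored twice,
    adj[v].append(u)              # once per endpoint: 2E entries in total
print(adj[0])                     # [1, 2, 3] -- direct access, no searching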

Find an O(n) algorithm that returns all interesting vertices of the graph

Problem: A directed graph G with n vertices and a special vertex u is provided. We call a vertex v ‘interesting’ if there is a path from v to a vertex w such that there is a cycle containing the vertices w and u. Write an O(n) time algorithm which takes G (the whole graph) and the node u as input and returns all the interesting vertices.
Inefficient algorithm: My idea initially was to consider the node u and compute all the cycles that contain u. (This itself seems like traversing the nodes using DFS, with forward-tracking as well when you encounter a visited node.) Now, from each vertex on these cycles we can compute the number of nodes in the graph that do not belong to the cycle(s) but are connected with each particular vertex w not equal to u on a cycle. Add all these values to get the desired answer. This isn't an O(n) algorithm.
There are two cases:
If there are no cycles containing u, then no vertex can be "interesting".
If there are any cycles containing u, then a vertex v is "interesting" if and only if there's a path from v to u. (We don't need to worry about the w in the problem description, because if a cycle contains two vertices u and w, then any path that ends at u can be extended to end at w and vice versa.)
So, the most efficient algorithm would be as follows:
Using DFS, determine if u is in a cycle. (We don't need to find all cycles; we just need to determine whether there are any.)
Using DFS in the "reverse" direction, find all vertices from which u is reachable.
DFS requires O(|V| + |E|) time, which is greater than O(n) = O(|V|) unless |E| is in O(n); but then, there's no way to even read the entire graph definition in less than O(|E|) time, so this is unavoidable. Whoever gave you this question must not have really thought it through.
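Still, here is a sketch of the two DFS passes in Python (illustrative; it assumes the reversed adjacency lists radj have been precomputed, which itself takes O(|V| + |E|) time):

def interesting_vertices(adj, radj, u):
    # adj: dict mapping each vertex to its list of successors
    # radj: the same graph with every edge reversed
    def reach(starts, graph):
        # iterative DFS; returns every vertex reachable from the start set
        seen, stack = set(), list(starts)
        while stack:
            x = stack.pop()
            if x not in seen:
                seen.add(x)
                stack.extend(graph[x])
        return seen

    # Step 1: u lies on a cycle iff u is reachable from one of its successors.
    if u not in reach(adj[u], adj):
        return set()               # no cycle through u, so no vertex is interesting
    # Step 2: every vertex that can reach u is interesting.
    return reach([u], radj)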

Graph In-degree Calculation from Adjacency-list

I came across this question in which it was required to calculate the in-degree of each node of a graph from its adjacency list representation.
for each u
    for each Adj[i] where i != u
        if (i, u) ∈ E
            in-degree[u] += 1
Now, in my opinion, its time complexity should be O(|V||E| + |V|^2), but the solution I referred to instead described it as equal to O(|V||E|).
Please help and tell me which one is correct.
Rather than O(|V||E|), the complexity of computing in-degrees is O(|E|). Let us consider the following pseudocode for computing the in-degree of each node:
for each u
    indegree[u] = 0
for each u
    for each v ∈ Adj[u]
        indegree[v]++
The first loop has linear complexity O(|V|). For the second part: for each u, the innermost loop executes at most |E| times, while the outermost loop executes |V| times, so the second part appears to have complexity O(|V||E|). In fact, the code executes an operation exactly once for each edge, so a more accurate bound is O(|E|).
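As a runnable illustration (the sample graph is arbitrary), each edge increments exactly one counter, so the nested loops do O(|V| + |E|) work in total:

def indegrees(adj):
    # adj: dict mapping each vertex to the list of its successors
    indeg = {u: 0 for u in adj}    # first loop: O(|V|)
    for u in adj:                  # the two nested loops together touch
        for v in adj[u]:           # each edge (u, v) exactly once: O(|E|)
            indeg[v] += 1
    return indeg

print(indegrees({0: [1, 2], 1: [2], 2: []}))   # {0: 0, 1: 1, 2: 2}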
According to http://www.cs.yale.edu/homes/aspnes/pinewiki/C(2f)Graphs.html, Section 4.2, with an adjacency list representation,
Finding predecessors of a node u is extremely expensive, requiring looking through every list of every node in time O(n+m), where m is the total number of edges.
So, in the notation used here, the time complexity of computing the in-degree of a node is O(|V| + |E|).
This can be reduced at the cost of using extra space, however. The Wiki also states that
adding a second copy of the graph with reversed edges lets us find all predecessors of u in O(d-(u)) time, where d-(u) is u's in-degree.
An example of a package which implements this approach is the Python package NetworkX. As you can see from the constructor of the DiGraph object for directed graphs, networkx keeps track of both self._succ and self._pred, which are dictionaries representing the successors and predecessors of each node, respectively. This allows it to compute each node's in_degree efficiently.
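For example (a small sketch; requires networkx to be installed):

import networkx as nx

G = nx.DiGraph()
G.add_edges_from([(0, 1), (2, 1), (1, 3)])
print(G.in_degree(1))   # 2, answered from the stored predecessor dict
                        # rather than by scanning every adjacency list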
O(|V|+|E|) is the correct answer: you visit each vertex once, which is O(|V|), and each visit scans some subset of the edges, which adds up to O(|E|) in total. Also, usually |E| >> |V|, in which case O(|E|) alone is also correct.

Why is the complexity of BFS O(V+E) instead of O(V*E)?

Some pseudocode here (disregard my style):
Starting from v1 (enqueued):
function BFS(queue Q)
    v2 = dequeue Q
    enqueue all unvisited connected nodes of v2 into Q
    BFS(Q)
end  // maybe minor problems here
Since there are V vertices in the graph, these V vertices are connected by E edges, and getting the connected nodes of a vertex (equivalent to visiting its edges) happens in the inner loop (the outer loop being the recursion itself), it seems to me that the complexity should be O(V*E) rather than O(V+E). Can anyone explain this to me?
E is not the number of edges adjacent to each vertex - it's actually the total number of edges in the graph. Defining it this way is useful because you don't necessarily have the same number of edges on every single vertex.
Since each edge gets visited once by the time the BFS ends, you get O(E) complexity from that part. Then you add the O(V) for visiting each vertex once and get O(V + E) in total.

Prim and Kruskal's algorithms complexity

Given an undirected connected graph with weights w: E -> {1,2,3,4,5,6,7}, meaning there are only 7 possible weights.
I need to find a spanning tree using Prim's algorithm in O(n+m) and Kruskal's algorithm in O(m*α(m,n)).
I have no idea how to do this and really need some guidance about how the weights can help me here.
You can sort the edge weights faster.
In Kruskal's algorithm you don't need the O(M lg M) sort; you can just use counting sort (or any other O(M) algorithm). The final complexity is then O(M) for sorting and O(M α(m)) for the union-find phase. In total it is O(M α(m)).
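A sketch of this in Python (names illustrative; weights are assumed to be integers in 1..7, and the union-find uses path halving only, for brevity):

def kruskal_small_weights(n, edges, max_w=7):
    # Counting sort: bucket the edges by weight in O(M + max_w) time.
    buckets = [[] for _ in range(max_w + 1)]
    for u, v, w in edges:
        buckets[w].append((u, v, w))

    parent = list(range(n))
    def find(x):                   # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w in range(1, max_w + 1):  # edges come out in nondecreasing weight order
        for u, v, wt in buckets[w]:
            ru, rv = find(u), find(v)
            if ru != rv:           # the edge joins two components: take it
                parent[ru] = rv
                mst.append((u, v, wt))
    return mst

print(kruskal_small_weights(4, [(0, 1, 3), (1, 2, 1), (0, 2, 2), (0, 3, 7)]))
# [(1, 2, 1), (0, 2, 2), (0, 3, 7)]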
For the case of Prim's algorithm, you don't need a heap; you need 7 lists/queues/arrays/anything (with constant-time insert and retrieval), one for each weight. Then, when you are looking for the cheapest outgoing edge, you check whether one of these lists is nonempty (starting from the cheapest) and use that edge. Since 7 is a constant, the whole algorithm runs in O(M) time.
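A sketch of the bucket-based Prim (illustrative; it assumes a connected graph given as adjacency lists of (neighbour, weight) pairs with weights in 1..7; stale edges are discarded lazily, so every edge is enqueued and dequeued at most once per endpoint, giving O(n + m) overall):

from collections import deque

def prim_small_weights(adj, max_w=7):
    # adj[u] = list of (v, w) pairs; the graph is assumed connected
    n = len(adj)
    buckets = [deque() for _ in range(max_w + 1)]   # one queue per weight
    in_tree = [False] * n

    def add_edges(u):              # enqueue all edges leaving a newly added vertex
        for v, w in adj[u]:
            if not in_tree[v]:
                buckets[w].append((u, v, w))

    in_tree[0] = True              # grow the tree from vertex 0
    add_edges(0)
    mst = []
    while len(mst) < n - 1:
        for w in range(1, max_w + 1):          # scan buckets from cheapest up
            while buckets[w] and in_tree[buckets[w][0][1]]:
                buckets[w].popleft()           # lazily discard stale edges
            if buckets[w]:
                u, v, wt = buckets[w].popleft()
                in_tree[v] = True
                mst.append((u, v, wt))
                add_edges(v)
                break              # restart from weight 1: cheaper edges may exist now
    return mst

adj = [[(1, 3), (2, 2), (3, 7)], [(0, 3), (2, 1)], [(0, 2), (1, 1)], [(0, 7)]]
print(prim_small_weights(adj))     # [(0, 2, 2), (2, 1, 1), (0, 3, 7)]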
I understand that it is not popular to answer homework assignments, but this could hopefully be useful for other people than just you ;)
Prim:
Prim is an algorithm for finding a minimum spanning tree (MST), just as Kruskal is.
An easy way to visualize the algorithm is to draw the graph out on a piece of paper.
Then you create a movable line (a cut) around the nodes you have selected. In the summary below, the set A will be the nodes inside the cut. You then choose the smallest edge running through the cut, i.e. from a node inside the line to a node on the outside. Always choose the edge with the lowest weight. After adding the new node, you move the cut so it contains the newly added node. Then you repeat until all nodes are within the cut.
A short summary of the algorithm is:
Create a set, A, which will contain the chosen vertices. It will initially contain a random starting node, chosen by you.
Create another set, B. This will initially be empty and will be used to hold all chosen edges.
Choose an edge (u, v), that is, an edge from node u to node v, with the smallest weight such that node u is within the set A and v is not inside A. (If there are several edges with equal weight, any one can be chosen at random.)
Add the edge (u, v) to the set B and v to the set A.
Repeat steps 3 and 4 until A = V, where V is the set of all vertices.
The sets A and B now describe your spanning tree! The MST will contain the nodes within A, and B will describe how they connect.
Kruskal:
Kruskal is similar to Prim, except that there is no cut. So you always choose the smallest edge:
Create a set A, which is initially empty. It will be used to store the chosen edges.
Choose the edge e with minimum weight from the edge set E that has not been considered yet. ((u, v) = (v, u), so each edge is considered only once.)
Add e to A, unless doing so would create a cycle (i.e. unless its endpoints are already connected by edges in A).
Repeat steps 2 and 3 until all edges have been considered; A then describes your spanning tree.
I am unsure about the exact performance of these algorithms, but I assume Kruskal is O(E log E), and the performance of Prim depends on which data structure you use to store the edges. If you use a binary heap, searching for the smallest edge is faster than if you use an adjacency matrix for storing the minimum edge.
Hope this helps!
