Let's say I have implemented Dijkstra's algorithm using a PriorityQueue, so that adding to and removing from the set of unvisited nodes takes O(log n).
The PQ will contain at most E entries, so emptying it takes O(E) iterations. While the PQ is not empty we take the best node, remove it, visit it if not yet visited, and go through all of its neighbors (potentially adding them to the PQ).
What I do not understand: how can going through all neighbors (at worst V) for at worst E items not have a time complexity of O(E*V)? I have seen so many explanations saying that we are supposed to just look at the operations, observe how many times they will execute, and draw our conclusions from that. I do not see how we can disregard the fact that we are looping through V neighbors; an empty for-loop over n items is still O(n)?
For me the final complexity seems to be O(V + E*V log E) instead of O(V + V log E). I mean there are a lot of variants, but the main point is that I am missing something trivial :P
First, a point of terminology that you seem to have confused: E is not the number of items, it is the number of edges between vertices. V is the number of vertices, which (depending on context) is likely to be the number of items.
Next, "this vertex is a neighbor of that vertex" means that there is an edge between them. Each edge contributes 2 neighbor relationships (one in each direction). Therefore 2E is the total number of neighbor relationships that can exist.
Your intuition that every one of V nodes can have up to V-1 neighbors, for a total of V^2 - V neighbor relationships, is correct - but you can tell how close you are to that worst case from the number of edges.
Therefore we wind up with the following potential work:
for each of E edges:
for each vertex on that edge:
O(1) work to check whether it was processed yet
(processing the vertex if needed accounted for separately)
for each of V vertices:
O(log(V)) to add to the priority queue
O(log(V)) to remove from the priority queue
(processing its edges accounted for separately)
The first chunk is O(E). The second chunk is O(V log(V)). The total is O(E + V log(V)).
Hopefully this explanation clarifies why the complexity is what it is.
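To make the accounting concrete, here is a minimal sketch of Dijkstra's algorithm with a binary-heap priority queue and lazy deletion of stale entries (the example graph is made up):

```python
import heapq

def dijkstra(graph, source):
    """graph: {u: [(v, weight), ...]} adjacency lists. Returns shortest distances."""
    dist = {source: 0}
    pq = [(0, source)]          # (distance, vertex); may contain stale entries
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)            # O(log size) per pop
        if u in visited:
            continue                        # stale entry: O(1) to skip
        visited.add(u)
        for v, w in graph.get(u, []):       # each edge examined once per endpoint
            if v not in visited and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))   # O(log size) per push
    return dist

graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 5)],
    "c": [("d", 1)],
    "d": [],
}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 4}
```

Each edge causes at most one push and one pop, which is where the O(E log E) = O(E log V) heap work comes from; the neighbor loops themselves add up to O(E) in total, not O(E*V).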
I know that there are a ton of questions out there about the time complexity of BFS which is : O(V+E)
However, I still struggle to understand why the time complexity is O(V+E) and not O(V*E).
I know that O(V+E) stands for O(max[V, E]), and my only guess is that it has something to do with the density of the graph and not with the algorithm itself, unlike, say, Merge Sort, whose time complexity is always O(n log n).
Examples I've thought of are :
A directed graph with |E| = |V| - 1, where the time complexity will indeed be O(V)
A directed graph with |E| = |V| * (|V| - 1), where the complexity would in fact be O(|E|) = O(|V|^2), as each vertex has an outgoing edge to every other vertex besides itself
Am I in the right direction? Any insight would be really helpful.
Your examples illustrate that the complexity is not O(V*E), but O(E). True, E can be a large number compared with V, but that doesn't matter when you say the complexity is O(E).
When the graph is connected, you can always say it is O(E). The reason to include V in the time complexity is to cover graphs that have many more vertices than edges (and are thus disconnected): the BFS algorithm will not only have to visit all edges, but also all vertices, including those that have no edges, just to detect that they have none. And so we must say O(V+E).
The complexity follows easily if you walk through the algorithm. Let Q be the FIFO queue, which initially contains the source node. BFS basically does the following:
while Q not empty
pop u from Q
for each adjacency v of u
if v is not marked
mark v
push v into Q
Since each node is added at most once and removed at most once, the while loop runs O(V) times. Also, each time we pop u we perform |adj[u]| operations, where |adj[u]| is the number of adjacencies of u.
Therefore the total complexity is the sum of (1 + |adj[u]|) over all V vertices, which is O(V + E), since the sum of the adjacencies is O(E) (2E for an undirected graph and E for a directed one).
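This counting argument can be checked directly with an instrumented BFS; a small sketch over a made-up undirected graph:

```python
from collections import deque

def bfs_count(adj, source):
    """adj: {u: [neighbors]} for an undirected graph. Returns (pops, edge_checks)."""
    visited = {source}
    q = deque([source])
    pops = edge_checks = 0
    while q:
        u = q.popleft()
        pops += 1                  # one pop per vertex: V total
        for v in adj[u]:
            edge_checks += 1       # each undirected edge is seen from both ends: 2E total
            if v not in visited:
                visited.add(v)
                q.append(v)
    return pops, edge_checks

# Connected undirected graph with V = 4 vertices and E = 4 edges
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
pops, edge_checks = bfs_count(adj, 0)
print(pops, edge_checks)  # 4 8, i.e. V pops and 2E edge examinations
```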
Consider a situation where you have a graph (maybe even with cycles): you start the search from the root, and your target is the last leaf of the tree. In this case you will traverse all the edges before you reach your destination.
E.g.
0 - 1
1 - 2
0 - 2
0 - 3
In this scenario you will check 4 edges before you actually find node #3.
It depends on how the adjacency list is implemented. A properly implemented adjacency list is a list/array of vertices with a list of related edges attached to each vertex entry.
The key is that the edge entries point directly to their corresponding vertex array/list entries; they never have to search through the vertex array/list for a matching entry, they can just look it up directly. This ensures that the total number of edge accesses is 2E and the total number of vertex accesses is V + 2E, making the total time O(E + V).
In an improperly implemented adjacency list, the vertex array/list is not directly indexed, so going from an edge entry to a vertex entry requires searching through the vertex list, which is O(V); the total time then becomes O(E*V).
This is a generic BFS implementation:
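A minimal queue-based sketch of such a generic BFS, assuming the graph is given as adjacency lists:

```python
from collections import deque

def bfs(adj, source):
    """Generic BFS over adjacency lists: adj[u] is the list of u's neighbors."""
    visited = [False] * len(adj)
    visited[source] = True
    q = deque([source])
    order = []
    while q:                      # outer loop: runs once per reachable vertex
        u = q.popleft()
        order.append(u)
        for v in adj[u]:          # inner loop: each undirected edge considered twice overall
            if not visited[v]:
                visited[v] = True
                q.append(v)
    return order

# Undirected square graph: edges 0-1, 0-2, 1-3, 2-3
print(bfs([[1, 2], [0, 3], [0, 3], [1, 2]], 0))  # [0, 1, 2, 3]
```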
For a connected graph with V nodes and E total number of edges, we know that every edge will be considered twice in the inner loop. So if the total number of iterations in the inner loop of BFS is going to be 2 * number of edges E, isn't the runtime going to be O(E) instead?
This is a case where one needs to look a little deeper at the implementation. In particular, how do I determine if a node is visited or not?
The traditional algorithm does this by coloring the vertices. All vertices are colored white at first, and they get colored black as they are visited. Thus visitation can be determined simply by looking at the color of the vertex. If you use this approach, then you have to do O(V) worth of initialization work setting the color of each vertex to white at the start.
You could manage your colors differently. You could maintain a data structure containing all visited nodes. If you did this, you could avoid the O(V) initialization cost. However, you will pay that cost elsewhere in the data structure. For example, if you stored them all in a balanced tree, each "if w is not visited" check now costs O(log V).
This obviously gives you a choice. You can have O(V+E) using the traditional coloring approach, or you can have O(E log V) by storing this information in your own data structure.
You specify a connected graph in your problem. In this case, O(V+E) == O(E) because the number of vertices can never be more than E+1. However, the time complexity of BFS is typically given with respect to an arbitrary graph, which can include a very sparse graph.
If a graph is sufficiently sparse (say, a million vertices and five edges), the cost of initialization may be great enough that you want to switch to an O(E log V) algorithm. However, such graphs are pretty rare in a practical setting. In practice, the speed of the traditional approach (giving each vertex a color) is just so blindingly fast compared to the fancier data structures that you choose the traditional coloring scheme for everything except the most extraordinarily sparse graphs.
If you maintained a dedicated color property on your vertices, with the invariant that all nodes are black between algorithm invocations, you could drop the cost to O(E) by doing each BFS twice. On the first pass you set the visited vertices to white, and then you do a second pass to turn them all black again. For a very sparse graph, this can be more efficient.
Well, let's break it up into easy pieces...
You keep a visited array, and by looking it up you decide whether to push a node into the queue or not. Once a node is visited, you don't push it again. So, how many nodes get pushed into the queue? (Of course) V nodes, and the complexity of that is O(V).
Now, each time you take a node out of the queue, you visit all of its neighboring nodes. Following this process for all V nodes, how many nodes will you come across? The number of edges if the graph is directed, or 2 * the number of edges if the graph is bidirectional. So the complexity would be O(E) for a directed graph and O(2 * E) for a bidirectional one.
So the ultimate (i.e. total) complexity would be O(V + E) or O(V + 2 * E), or generally, we may say O(V + E).
Because a graph might have fewer edges than vertices.
Consider this graph:
1 ---- 2
|
|
3 ---- 4
There are 4 vertices but only 3 edges, and in BFS you have to traverse each and every vertex. That's why the time complexity is O(V+E): it accounts for both V and E.
I have a certain problem understanding the complexity of Dijkstra's algorithm and hope someone can correct me.
For my example I took a complete graph with n vertices.
You pick a starting vertex, let's say a1, mark it, and then compute all n-1 weights on the edges. O(n)
You pick the smallest one. Let's say vertex a2 and mark it. O(n)
After that you compute n-2 new weights on the edges and look for the next yet unmarked vertex to add your set of marked vertices.
And so on...
The algorithm runs until all vertices are marked. Complexity: (n-1) + (n-2) + ... + (n-(n-1)) = Binom(n,2), which is in O(n^2), not the O(n log n) I want.
I read about many many times people use a heap for optimization, however I still don't see how to avoid Binom(n,2) computations.
I have to be wrong at some point, but do not see where.
If you have a complete graph, then of course you can't do any better than O(n^2) -- because, that's the size of your input.
If you don't have a complete graph, and you store your edges in an adjacency list, then you can do better. You still need to look at all your edges, but with a binary-heap priority queue you can manage O((e + n) log n), where e is the number of edges in your adjacency list (or O(e + n log n) with a Fibonacci heap).
Can someone please help me solve this problem?
We have a set E of roads, a set H of highways, and a set V of different cities. We also have a cost x(i) associated with each road i and a cost y(i) associated with each highway i. We want to build roads to connect the cities, with the conditions that there is always a path between any pair of cities and that we can build at most one highway, which may be cheaper than a road.
Set E and set H are different, and their respective costs are unrelated.
Design an algorithm to build the roads (with at most one highway) that minimize the total cost.
So, what we have is a connected graph of road edges.
Solution steps:
1. Find the minimum spanning tree for the roads alone and take its cost as the current minimum.
2. Add one highway to the road graph and calculate the minimum spanning tree cost again.
3. Compare the step-2 cost with the current minimum and replace it if it is smaller.
4. Remove that highway.
5. Go back to step 2 and repeat for each remaining highway.
Cost: m * mst_cost(n), i.e., one MST computation per highway.
Using Prim's or Kruskal's to build an MST: O(E log V).
The problem is the constraint of at most 1 highway.
1. Naive method to solve this:
For each possible highway, build the MST from scratch.
Time complexity of this solution: O(H E log V)
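The naive method can be sketched with Kruskal's algorithm and a union-find; the road and highway lists below are made-up examples (each entry is (cost, u, v)):

```python
def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def kruskal(n, edges):
    """edges: list of (cost, u, v). Returns MST cost, or None if disconnected."""
    parent = list(range(n))
    total, used = 0, 0
    for cost, u, v in sorted(edges):        # O(E log E) = O(E log V)
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv
            total += cost
            used += 1
    return total if used == n - 1 else None

def cheapest_with_one_highway(n, roads, highways):
    """Naive method: rebuild the MST once per candidate highway -> O(H * E log V)."""
    best = kruskal(n, roads)
    for hw in highways:
        cost = kruskal(n, roads + [hw])
        if cost is not None and (best is None or cost < best):
            best = cost
    return best

roads = [(4, 0, 1), (3, 1, 2), (5, 0, 2), (6, 2, 3)]
highways = [(1, 0, 3)]
print(cheapest_with_one_highway(4, roads, highways))  # 8: highway 0-3 replaces road 2-3
```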
2. Alternative
Idea: If you build an MST, you can refine the MST with a better MST if you have an additional available edge you have not considered before.
Suppose the new edge connects (u,v). If you use this edge, you can remove the most expensive edge in the path between vertices u and v in the MST. You can find the path naively in O(V) time.
Using this idea, the time complexity is the cost to build the initial MST O(E log V) and the time to try to refine the MST with each of the H highways. The total algorithmic complexity is therefore O(E log V + H V), which is better than the first solution.
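A sketch of this refinement step: a naive O(V) walk over the MST finds the heaviest edge on the u-v path, which the highway would replace (the tree and weights below are made-up numbers; swapping only helps when the highway is cheaper than that edge):

```python
def max_edge_on_path(tree, u, v):
    """tree: {node: [(neighbor, weight)]} for the MST. Iterative DFS from u to v, O(V)."""
    stack = [(u, None, 0)]                 # (node, parent, max edge weight so far)
    while stack:
        node, parent, best = stack.pop()
        if node == v:
            return best
        for nxt, w in tree[node]:
            if nxt != parent:              # don't walk back up the tree
                stack.append((nxt, node, max(best, w)))
    return None                            # v unreachable (cannot happen in a tree)

# Example MST: path 0 -(4)- 1 -(3)- 2 -(6)- 3, total cost 13
tree = {0: [(1, 4)], 1: [(0, 4), (2, 3)], 2: [(1, 3), (3, 6)], 3: [(2, 6)]}
mst_cost = 13
u, v, w = 0, 3, 1                          # candidate highway between 0 and 3, cost 1
improved = mst_cost - max_edge_on_path(tree, u, v) + w
print(improved)  # 13 - 6 + 1 = 8
```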
3. Optimized refinement
Instead of doing a naive path search as in the second method, we can find a faster way. One related problem is LCA (lowest common ancestor). A good way of solving LCA is using jump pointers. First you root the tree; then each vertex gets jump pointers towards the root (1 step, 2 steps, 4 steps, etc.). Pre-processing costs O(V log V) time, and finding the LCA of 2 vertices is O(log V) in the worst case (actually O(log(depth of tree)), which is usually better).
Once you have found the LCA, that implicitly gives you the path between vertices u and v. However, to find the most expensive edge to delete could be expensive since traversing the path is costly.
In 1-dimensional problems, the range-maximum-query (RMQ) can be employed. This uses a segment tree to solve the RMQ in O(log N) time.
Instead of a 1-dimensional space (like an array), we have a tree. However, we can apply the same idea, and build a segment tree-like structure. In fact, this is equivalent to bundling an extra piece of information with each jump pointer. To find the LCA, each vertex in the tree will have log(tree depth) jump pointers towards the root. We can bundle the maximum edge weight of the edges we jump over with the jump pointer. The cost of adding this information is the same as creating the jump pointer in the first place. Therefore, a slight refinement to the LCA algorithm allows us to find the maximum edge weight on the path between vertices u and v in O(log (depth)) time.
Finally, putting it together, the algorithmic complexity of this 3rd solution is O(E log V + H log V) or equivalently O((E+H) log V).
I understand that the time complexity of BFS graph traversal is O(V + E), since every vertex and every edge will be explored in the worst case.
Well, is the exact time complexity V + 2E?
Every vertex is explored once, plus all of its adjacent vertices:
The sum of the degrees of all the vertices in a graph = number of edges * 2 = 2E.
Thus the time complexity is V + 2E. Am I correct?
For a random graph, the time complexity is O(V+E): Breadth-first search
As stated in the link, according to the topology of your graph, O(E) may vary from O(V) (if your graph is acyclic) to O(V^2) (if all vertices are connected with each other).
Therefore the time complexity varies from O(V + V) = O(V) to O(V + V^2) = O(V^2), according to the topology of your graph.
Besides, if every vertex has at least one edge, then |V| <= 2|E|, so O(V + 2E) = O(3E) = O(E) is also correct, though the bound is looser.
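The degree-sum identity used here is easy to check on a small made-up graph, where each undirected edge is stored once:

```python
# Undirected graph with 4 edges; count each endpoint's degree
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
print(sum(degree.values()), 2 * len(edges))  # both are 8: sum of degrees = 2E
```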
Assumptions
Let's assume that G is connected and undirected. If it's not connected, then you can apply the idea below to every connected component of G independently. In addition, let's assume that G is represented as adjacency lists and that, for every vertex v, we can decide in O(1) time whether v has been visited, for example using a lookup table.
Analysis
If you want to count the exact number of steps in the BFS you can observe that:
Since G is connected, BFS will visit every vertex exactly once, so we count |V| vertex visits. Notice that a single visit may perform more operations than just marking the current vertex visited (not counting the looping over edges).
For every vertex v we want to count how many edges the BFS examines at this vertex.
You have to loop over all edges of v to perform the BFS; if you skip one edge, it's easy to show that BFS is no longer correct. So every edge is examined twice, once from each endpoint.
One question may arise here: is there really a need to examine the edge (p, v) at vertex v, where p is the parent of v in the already constructed BFS tree, i.e., we came to v directly from p? Of course you don't have to consider this edge, but deciding to skip it also costs at least one additional operation:
for (v, u) in v.edges:
if u == p: # p is the parent of v in already constructed tree
continue
if not visited[u]:
BFS(u, parent=v)
It examines the same number of edges as the code below, but has higher complexity, because for all but one edge we run two if-statements rather than one.
for (v, u) in v.edges:
if not visited[u]: # if p is the parent of v, then p is already visited
BFS(u, parent=v)
Conclusion
You may even devise a different method to skip the edge (v, p), but it always takes at least one operation, so it's a wasted effort.