Why is the time complexity of Dijkstra O((V + E) log V)?

I was reading about the worst-case time complexity of Dijkstra's algorithm using a binary heap (with the graph represented as an adjacency list).
According to Wikipedia (https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm#Running_time) and various Stack Overflow questions, this is O((V + E) log V), where E is the number of edges and V the number of vertices. However, I found no explanation of why it can't be done in O(V + E log V).
With a self-balancing binary search tree or binary heap, the algorithm requires Θ((E + V) log V) time in the worst case
In case E >= V, the complexity reduces to O(E log V) anyway. Otherwise, there are at most O(E) vertices in the same connected component as the start vertex (and the algorithm ends once we have processed them). On each iteration we extract one of these vertices, taking O(log V) time to remove it from the heap.
Each update of the distance to an adjacent vertex takes O(log V), and the number of these updates is bounded by the number of edges E, so in total we do O(E) such updates. Adding O(V) time for initializing distances, we get a final complexity of O(V + E log V).
Where am I wrong?

Related

What is the time complexity of Dijkstra's algorithm using an adjacency list and priority queue?

Let's say I have code like this:
enqueue source vertex
while (queue is not empty) {
    dequeue min vertex
    add it to the shortest-path set
    iterate over the vertex's edges
        if not in shortest-path set and new distance is smaller, enqueue
}
What is the time complexity if the while loop runs once per edge in the graph instead of only V times, once per vertex? Is it still O(E log V), since it's O(E + E) * O(log V)?
Yes, this is pretty much how you implement Dijkstra's algorithm when your priority queue doesn't support a DECREASE_KEY operation. The priority queue contains (cost, vertex) records, and whenever you find a vertex cost that is lower than the previous one, you just insert a new record.
The complexity becomes O(E log E), which is no bigger than O(E log V), since log E <= 2 log V.
This is the same complexity that Dijkstra's algorithm has when you use a binary heap that does support the DECREASE_KEY operation, because DECREASE_KEY takes O(log V) time. To get down to O(E + V log V), you need to use a Fibonacci heap, which can do DECREASE_KEY in amortized constant time.
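The pseudocode above can be sketched as a runnable Python function using the standard-library heapq, which has no DECREASE_KEY; the graph representation and names here are my own for illustration, not from the question:

```python
import heapq

def dijkstra(adj, source):
    """Dijkstra without DECREASE_KEY: stale heap entries are skipped on pop.

    adj: {vertex: [(neighbor, weight), ...]} with non-negative weights.
    Returns {vertex: shortest distance from source} for reachable vertices.
    """
    dist = {source: 0}
    done = set()                      # the "shortest path set"
    heap = [(0, source)]              # (cost, vertex) records
    while heap:
        d, u = heapq.heappop(heap)    # dequeue min vertex
        if u in done:                 # stale record: a cheaper one was popped earlier
            continue
        done.add(u)
        for v, w in adj.get(u, ()):   # iterate over the vertex's edges
            nd = d + w
            if v not in done and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))  # insert a new record instead of decreasing
    return dist

# Example: shortest distances from 'a' in a small weighted graph.
adj = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
print(dijkstra(adj, 'a'))  # → {'a': 0, 'b': 1, 'c': 3}
```

Since each edge can push at most one record, the heap holds O(E) entries, which is where the O(E log E) bound comes from.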

Dijkstra Time Complexity using Binary Heap

Let G(V, E) be an undirected graph with positive edge weights. Dijkstra's single-source shortest path algorithm can be implemented using the binary heap data structure with time complexity:
1. O(|V|^2)
2. O(|E|+|V|log|V|)
3. O(|V|log|V|)
4. O((|E|+|V|)log|V|)
Correct answer is: O((|E| + |V|) log |V|)
My approach is as follows:
O(V + V + V log V + E log V) = O(E log V)
O(V) to initialize.
O(V) to build the heap.
O(V log V) to perform Extract_Min.
O(E log V) to perform Decrease_Key.
Now, since I get O(E log V), a part of me says the correct option is O(|V| log |V|), because for a sparse graph |E| = O(|V|); but as I said, the given answer is O((|E| + |V|) log |V|). So, where am I going wrong?
Well, you are correct that the complexity is actually O(E log V).
Since E can be as large as (V^2 - V)/2, this is not the same as O(V log V).
If every vertex has at least one edge, then V <= 2E, so in that case O(E log V) = O((E + V) log V). That is the usual case, and it corresponds to the "correct" answer.
But technically, O(E log V) is not the same as O((E + V) log V), because there may be a whole bunch of disconnected vertices in V. When that is the case, however, Dijkstra's algorithm will never see those vertices, since it only finds vertices connected to the single source. So, when the difference between these two complexities matters, you are right and the "correct answer" is not.
Let me put it this way: the correct answer is O((E + V) log V). If some vertices are not reachable from the source, V log V can be larger than E log V. But if we assume every vertex is reachable from the source, the graph has at least V - 1 edges, so the bound becomes O(E log V). It is really about reachability from the source vertex.
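For comparison, the V Extract_Min + E Decrease_Key accounting in the question can be sketched with a hand-rolled indexed binary heap that does support DECREASE_KEY. This is an illustrative implementation with names of my own choosing, not code from the thread:

```python
class IndexedMinHeap:
    """Binary min-heap keyed by vertex, supporting DECREASE_KEY in O(log V)."""

    def __init__(self):
        self.heap = []   # list of (key, vertex)
        self.pos = {}    # vertex -> index in self.heap

    def _swap(self, i, j):
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.pos[self.heap[i][1]] = i
        self.pos[self.heap[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self.heap[i][0] < self.heap[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            smallest = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < n and self.heap[c][0] < self.heap[smallest][0]:
                    smallest = c
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest

    def insert(self, vertex, key):
        self.heap.append((key, vertex))
        self.pos[vertex] = len(self.heap) - 1
        self._sift_up(len(self.heap) - 1)

    def decrease_key(self, vertex, key):      # O(log V): overwrite, then sift up
        i = self.pos[vertex]
        self.heap[i] = (key, vertex)
        self._sift_up(i)

    def extract_min(self):                    # O(log V): swap root to end, sift down
        self._swap(0, len(self.heap) - 1)
        key, vertex = self.heap.pop()
        del self.pos[vertex]
        if self.heap:
            self._sift_down(0)
        return vertex, key

    def __contains__(self, vertex):
        return vertex in self.pos

    def __len__(self):
        return len(self.heap)


def dijkstra_decrease_key(adj, source):
    """Textbook Dijkstra: V Extract_Min calls + at most E Decrease_Key calls."""
    dist = {v: float("inf") for v in adj}
    dist[source] = 0
    heap = IndexedMinHeap()
    for v in adj:                       # build heap by insertion
        heap.insert(v, dist[v])
    while heap:
        u, d = heap.extract_min()       # V times, O(log V) each
        for v, w in adj[u]:
            if v in heap and d + w < dist[v]:
                dist[v] = d + w
                heap.decrease_key(v, d + w)  # at most E times, O(log V) each
    return dist
```

Counting the two loops gives exactly the V log V + E log V terms from the approach above.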

Which one is better, O(V + E) or O(E log E)?

I am trying to develop an algorithm that finds a minimum spanning tree of a graph. I know there are already many existing algorithms for this; however, I am trying to eliminate the edge sorting required in Kruskal's algorithm. The algorithm I have developed so far has a part where counting disjoint sets is needed, and I need an efficient method for it. After a lot of study I came to know that the only possible way is using BFS or DFS, which has a complexity of O(V + E), whereas Kruskal's algorithm has a complexity of O(E log E). Now my question is: which one is better, O(V + E) or O(E log E)?
In general, E = O(V^2), but that bound may not be tight for all graphs. In particular, in a sparse graph, E = O(V), but for an algorithm complexity is usually stated as a worst-case value.
O(V + E) is a way of indicating that the complexity depends on how many edges there are. In a sparse graph, O(V + E) = O(V + V) = O(V), while in a dense graph O(V + E) = O(V + V^2) = O(V^2).
Another way of looking at it is to see that in big-O notation, O(X + Y) means the same thing as O(max(X, Y)).
Note that this is only useful when V and E might have the same magnitude. For Kruskal's algorithm, the dominating factor is that you need to sort the list of edges. Whether you have a sparse graph or a dense graph, that step dominates anything that might be O(V), so one simply writes O(E lg E) instead of O(V + E lg E).
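To make the "sorting dominates" point concrete, here is a minimal sketch of Kruskal's algorithm in Python with a union-find cycle check; the function and variable names are my own, not from the thread:

```python
def kruskal(num_vertices, edges):
    """Kruskal's MST: sorting the E edges dominates, hence O(E log E).

    edges: list of (weight, u, v) with vertices numbered 0..num_vertices-1.
    Returns (total weight, list of chosen edges).
    """
    parent = list(range(num_vertices))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):     # O(E log E): the dominating step
        ru, rv = find(u), find(v)
        if ru != rv:                  # near-O(1) cycle check via disjoint sets
            parent[ru] = rv
            total += w
            chosen.append((u, v))
    return total, chosen

# Example: 4 vertices, MST picks the three cheapest non-cycle edges.
print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))  # → (7, [(0, 1), (1, 2), (2, 3)])
```

Everything after the sort is near-linear, which is why the complexity is usually quoted as O(E log E) rather than O(V + E log E).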

Bellman-Ford Algorithm Space Complexity

I have been searching for the Bellman-Ford algorithm's space complexity, but Wikipedia's Bellman-Ford article says the space complexity is O(V), while this link says O(V^2). My question is: what is the true space complexity, and why?
It depends on the way we define it.
If we assume that the graph is given, the extra space complexity is O(V) (for an array of distances).
If we assume that the graph also counts, it can be O(V^2) for an adjacency matrix and O(V+E) for an adjacency list.
They both are "true" in some sense. It's just about what we want to count in a specific problem.
There are two cases:
If we assume that the graph is given, then we have to create two arrays (one for distances and one for parents), so the extra space complexity is O(V).
If we also count storing the graph, then:
a) O(V^2) for an adjacency matrix
b) O(V + E) for an adjacency list
c) O(E) if we just create an edge list that stores only the edges
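The O(V) extra-space case can be sketched as follows: Bellman-Ford over an edge list, where the only extra storage beyond the input is the two V-sized arrays. This is an illustrative implementation with my own naming, not code from the answers:

```python
def bellman_ford(num_vertices, edges, source):
    """Bellman-Ford over an edge list: input is O(E), extra space is O(V).

    edges: list of (u, v, weight) triples. Returns (dist, parent) arrays,
    or raises ValueError on a negative cycle reachable from the source.
    """
    INF = float("inf")
    dist = [INF] * num_vertices        # O(V) extra: distance array
    parent = [None] * num_vertices     # O(V) extra: parent array
    dist[source] = 0
    for _ in range(num_vertices - 1):  # V - 1 relaxation rounds
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                parent[v] = u
                changed = True
        if not changed:                # early exit once distances stabilize
            break
    for u, v, w in edges:              # one extra pass detects negative cycles
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist, parent
```

Note the algorithm itself never needs more than the two O(V) arrays; the O(V^2) figure only appears once you also count an adjacency-matrix representation of the input.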
It does not matter whether we use an adjacency list or an adjacency matrix: if the given graph is complete, then
space complexity = input + extra
1. With an adjacency matrix: input O(V^2) + extra O(V) (using a min-heap) = O(V^2)
2. With an adjacency list: in a complete graph E = O(V^2), so input O(V + E) + extra O(V) (min-heap) = O(V^2)
Because when we talk about the space complexity of an algorithm we usually take the worst case that can happen, and in the worst case Dijkstra or Bellman-Ford runs on a complete graph, the space complexity is O(V^2).

What is the overall Big O run time of Kruskal’s algorithm if BFS was used to check whether adding an edge creates a cycle?

If Kruskal's algorithm was implemented using BFS to check whether adding an edge will create a cycle, what would the overall Big-O run time of the algorithm be?
It would be O(V * E + E log E). Each BFS takes O(V) time, because the partial spanning forest has at most V - 1 edges (fewer while the tree is not completely built yet), and a BFS is run for each of the E edges (V is the number of vertices, E is the number of edges). So the cycle checks take O(V * E) in total. The E log E term comes from sorting the edges.
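That variant can be sketched like this: Kruskal's algorithm where the union-find structure is replaced by a BFS over the partial forest for each candidate edge. The names here are my own for illustration:

```python
from collections import deque

def kruskal_bfs(num_vertices, edges):
    """Kruskal's MST using BFS (instead of union-find) to test for cycles.

    Each candidate edge triggers a BFS over the partial forest, O(V) per edge,
    giving O(V * E) for the checks plus O(E log E) for the sort.
    edges: list of (weight, u, v). Returns (total weight, chosen edges).
    """
    forest = {v: [] for v in range(num_vertices)}  # adjacency of chosen edges

    def connected(u, v):              # BFS: is v already reachable from u?
        seen = {u}
        queue = deque([u])
        while queue:
            x = queue.popleft()
            if x == v:
                return True
            for y in forest[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        return False

    total, chosen = 0, []
    for w, u, v in sorted(edges):     # O(E log E)
        if not connected(u, v):       # adding (u, v) creates no cycle
            forest[u].append(v)
            forest[v].append(u)
            total += w
            chosen.append((u, v))
    return total, chosen
```

Swapping `connected` for a union-find `find` is what brings the per-edge check from O(V) down to near-constant time.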

Resources