Graph algorithms time complexities

Here are my questions.
1. For Prim's algorithm using a binary heap as the priority queue, with the edges represented in an adjacency list, what is the time complexity?
In my view it is O(V^2 + E log V) with O(V + E) extra space: we scan the entire matrix, which is O(V^2), and store all the edges in an adjacency list.
Without the extra space, would it be O(V^2 log V)?
2. Can we find connected components using Prim's and Kruskal's algorithms?
What is the time complexity of union and find for connected components? (See the union-find sketch below.)
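For question 2, here is a minimal union-find (disjoint-set) sketch in Python (my own illustration, assuming the graph is given as an edge list over vertices 0..n-1): with union by rank and path compression, each union/find is amortized near O(1) (inverse Ackermann), so labelling all components costs about O(E α(V)).

```python
# Minimal union-find sketch for connected components (hypothetical helper,
# not from the original question). Assumes vertices are 0..n-1 and the
# graph is given as a list of (u, v) edges.

def connected_components(n, edges):
    parent = list(range(n))
    rank = [0] * n

    def find(x):
        # Path compression: point x toward the root as we walk up.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        # Union by rank keeps the trees shallow.
        if rank[ra] < rank[rb]:
            ra, rb = rb, ra
        parent[rb] = ra
        if rank[ra] == rank[rb]:
            rank[ra] += 1

    for u, v in edges:
        union(u, v)

    # Number of distinct roots = number of connected components.
    return len({find(v) for v in range(n)})

# Example: two components {0, 1, 2} and {3, 4}.
print(connected_components(5, [(0, 1), (1, 2), (3, 4)]))  # -> 2
```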

Related

Find the shortest path from source to target in a weighted-undirected graph in O(V + E) time

I've been tasked with designing an algorithm that finds the shortest path in a weighted, undirected graph with V nodes and E edges in O(V + E) time. The edge weights are all positive integers, and no weight is greater than 15.
I believe I can use Dijkstra's algorithm to find the shortest path from a source node to a target node, but I don't think it satisfies the runtime constraint.
Knowing the runtimes of BFS and DFS, I'm thinking that some modification of those algorithms will get me to O(V + E), but I'm not sure what direction to head in or how I can leverage the <= 15 weight constraint on the edges.
Any help is appreciated.
You can use Dijkstra's algorithm, but you have to be a little careful with the priority queue.
Since all the weights are integers from 1 to 15, there can only be 16 different priorities in the queue at any one time. You can use this fact to implement all your priority queue operations in constant time. That will change the complexity of the algorithm from O(|V| + |E| log |V|) to O(|V| + |E|).
There are lots of ways to make that priority queue. Basically you partition the entries into lists of entries with the same priority, and then you only have to prioritize the 16 lists. It's reasonable to keep those 16 lists in a circular array.
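Here is a minimal sketch of that idea in Python (my own illustration, not code from the answer): Dijkstra with a circular array of 16 buckets, so every queue operation is O(1) and the whole run is O(|V| + |E|) for integer weights in 1..15.

```python
from collections import deque

def shortest_path(adj, source, target, max_w=15):
    """Dijkstra with a circular bucket queue.

    adj: adjacency list, adj[u] = list of (v, w) with 1 <= w <= max_w.
    Runs in O(V + E) when weights are integers bounded by the constant max_w.
    """
    INF = float("inf")
    n = len(adj)
    dist = [INF] * n
    dist[source] = 0

    # max_w + 1 buckets suffice: every queued distance lies in
    # [d, d + max_w], where d is the current minimum being processed.
    buckets = [deque() for _ in range(max_w + 1)]
    buckets[0].append(source)
    d = 0                      # current distance being processed
    remaining = 1              # entries still sitting in some bucket

    while remaining > 0:
        bucket = buckets[d % (max_w + 1)]
        while bucket:
            u = bucket.popleft()
            remaining -= 1
            if dist[u] != d:   # stale entry: a shorter path was found later
                continue
            if u == target:
                return d
            for v, w in adj[u]:
                nd = d + w
                if nd < dist[v]:
                    dist[v] = nd
                    buckets[nd % (max_w + 1)].append(v)
                    remaining += 1
        d += 1

    return dist[target]

# Example: 0 - 1 (weight 7), 1 - 2 (weight 2), 0 - 2 (weight 15)
adj = [[(1, 7), (2, 15)], [(0, 7), (2, 2)], [(1, 2), (0, 15)]]
print(shortest_path(adj, 0, 2))  # -> 9
```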
The algorithm that you're looking for is called Dial's algorithm, and it also works in graphs that contain cycles. Its complexity is O(E + WV), where W is the maximum edge weight. If W >> V, you can replace one bucket per weight value with buckets for weight ranges 1, 2-3, 4-7, 8-15, etc.
It's an optimization of Dijkstra that uses the fact that, given the bounded range of weights, you can replace the Fibonacci heap with buckets, which reduces the extract-min operation from O(log n) to O(1).
The algorithm is described in detail on GeeksForGeeks and Wikipedia, among others.
You may also be interested in the shortest-path algorithm for directed acyclic graphs in Cormen's Introduction to Algorithms (p. 655) or on GeeksForGeeks.
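For reference, the DAG shortest-path approach mentioned above can be sketched like this in Python (my own illustration, not taken from the cited sources): compute a topological order, then relax each edge exactly once, for O(V + E) total.

```python
from collections import deque

def dag_shortest_paths(adj, source):
    """Single-source shortest paths in a DAG in O(V + E).

    adj: adjacency list, adj[u] = list of (v, w); the graph must be acyclic.
    """
    n = len(adj)
    indeg = [0] * n
    for u in range(n):
        for v, _ in adj[u]:
            indeg[v] += 1

    # Kahn's algorithm for a topological order.
    order, q = [], deque(u for u in range(n) if indeg[u] == 0)
    while q:
        u = q.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)

    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    # Relax edges in topological order; each edge is relaxed exactly once.
    for u in order:
        if dist[u] == INF:
            continue
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Example DAG: 0 -> 1 (2), 0 -> 2 (6), 1 -> 2 (3)
print(dag_shortest_paths([[(1, 2), (2, 6)], [(2, 3)], []], 0))  # -> [0, 2, 5]
```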

Time Complexity of Dijkstra's Algorithm when using Adjacency Matrix vs Adjacency Linked List

For a graph with n vertices and e edges, and a fringe stored in a binary min-heap, the worst-case runtime is O((n + e) lg n). However, this assumes we use an adjacency list to represent the graph. An adjacency matrix takes O(n^2) to traverse, while an adjacency list representation can be traversed in O(n + e).
Therefore, would using a matrix to represent the graph change the runtime of Dijkstra's to O(n^2 lg n)?
The O(lg n) cost is paid for processing edges, not for walking the graph, so if you know the actual number of edges in the graph, then Dijkstra's algorithm on an adjacency matrix with a min-heap runs within O(n^2 + (n + e) lg n) time.
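A minimal sketch of what that looks like in Python (my own illustration): the row scan contributes the O(n^2) term, and only actual edges touch the heap, which contributes the O((n + e) lg n) term.

```python
import heapq

def dijkstra_matrix(W, source):
    """Dijkstra over an adjacency matrix W with a binary min-heap.

    W[u][v] is the edge weight, or None if there is no edge.
    Scanning each row costs O(n) per extracted vertex -> O(n^2) total;
    heap pushes/pops happen only for real edges -> O((n + e) lg n).
    """
    n = len(W)
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    heap = [(0, source)]
    done = [False] * n

    while heap:
        d, u = heapq.heappop(heap)
        if done[u]:
            continue
        done[u] = True
        for v in range(n):            # O(n) row scan per extracted vertex
            w = W[u][v]
            if w is not None and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))  # only for real edges
    return dist

# Example: 3 vertices, edges 0-1 (4), 1-2 (1), 0-2 (7)
N = None
W = [[N, 4, 7], [4, N, 1], [7, 1, N]]
print(dijkstra_matrix(W, 0))  # -> [0, 4, 5]
```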

To implement Dijkstra’s shortest path algorithm on unweighted graphs so that it runs in linear time, which data structure should be used?

To implement Dijkstra’s shortest path algorithm on unweighted graphs so that it runs in linear time, the data structure to be used is:
Queue
Stack
Heap
B-Tree
I found the following answers:
A queue, because we can find single-source shortest paths in an unweighted graph using breadth-first search (BFS), which uses a queue data structure and runs in O(m + n) time (i.e., linear in the number of vertices and edges).
A min-heap is required to implement it in linear time, because deleting a node from the min-heap requires no re-adjustment when all keys have the same weight, so each deletion takes O(1), and for n-1 nodes it is O(n).
Can someone explain which one is the correct answer?
Please note that if the graph is unweighted, no Dijkstra is needed; a simple BFS will work perfectly in O(E + V), i.e., linear time.
A simple implementation of the algorithm needs a queue data structure.
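As a small illustration (my own sketch, not from the answers), here is BFS computing single-source shortest paths in an unweighted graph with a queue in O(V + E):

```python
from collections import deque

def bfs_shortest_paths(adj, source):
    """Single-source shortest paths in an unweighted graph in O(V + E).

    adj: adjacency list, adj[u] = list of neighbours v.
    Returns the distance (number of edges) from source to every vertex.
    """
    dist = [-1] * len(adj)
    dist[source] = 0
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == -1:        # first visit = shortest distance
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Example: path 0 - 1 - 2 plus edge 0 - 3
print(bfs_shortest_paths([[1, 3], [0, 2], [1], [0]], 0))  # -> [0, 1, 2, 1]
```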

Bellman-Ford Algorithm Space Complexity

I have been searching for the Bellman-Ford algorithm's space complexity, but Wikipedia's Bellman-Ford article says the space complexity is O(V), while this link says O(V^2). My question is: what is the true space complexity, and why?
It depends on the way we define it.
If we assume that the graph is given, the extra space complexity is O(V) (for an array of distances).
If we assume that the graph also counts, it can be O(V^2) for an adjacency matrix and O(V+E) for an adjacency list.
They both are "true" in some sense. It's just about what we want to count in a specific problem.
There are two cases:
If we assume that the graph is given, then we have to create two arrays (an array of distances and an array of parents), so the extra space complexity is O(V).
If we also consider storing the graph, then (see the sketch below):
a) O(V^2) for an adjacency matrix
b) O(V+E) for an adjacency list
c) O(E) if we just create an edge list that stores only the edges
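To make the accounting concrete, here is a minimal Bellman-Ford sketch over an edge list in Python (my own illustration): the edge list input is O(E), and the algorithm itself allocates only the O(V) distance and parent arrays.

```python
def bellman_ford(n, edges, source):
    """Bellman-Ford over an edge list.

    n: number of vertices (0..n-1); edges: list of (u, v, w).
    Extra space: two O(V) arrays (dist, parent); the edge list itself is O(E).
    Time: O(V * E).
    """
    INF = float("inf")
    dist = [INF] * n          # O(V) extra
    parent = [None] * n       # O(V) extra
    dist[source] = 0

    for _ in range(n - 1):    # relax every edge up to n-1 times
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                parent[v] = u
                changed = True
        if not changed:       # early exit once distances stabilise
            break

    # One more pass detects a negative-weight cycle reachable from source.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist, parent

# Example: 0 -> 1 (4), 0 -> 2 (5), 1 -> 2 (-2)
dist, parent = bellman_ford(3, [(0, 1, 4), (0, 2, 5), (1, 2, -2)], 0)
print(dist)  # -> [0, 4, 2]
```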
It does not matter whether we use an adjacency list or an adjacency matrix: if the given graph is a complete graph, then
space complexity = input + extra
1. If we use an adjacency matrix: space = input + extra = O(V^2) + O(V) (for the min-heap) = O(V^2).
2. If we use an adjacency list: in a complete graph E = O(V^2), so space = O(V + E) + O(V) (for the min-heap) = O(V^2).
Because when we talk about the space complexity of an algorithm we consider the worst case that can happen, and for Dijkstra or Bellman-Ford the worst case is a complete graph, the space complexity is O(V^2).

Linear Time Algorithm For MST

I was wondering if anyone can point me to a linear-time algorithm for finding the MST of a graph when there is a small number of distinct weights (i.e., edges can only have 2 different weights).
I could not find anything on Google other than Prim's, Kruskal's, and Borůvka's, none of which seem to have any properties that would reduce the running time in this special case. I'm guessing that to make it linear time it would have to be some sort of modification of BFS (which finds an MST when the weights are uniform).
The cause of the lg V factor in Prim's O(E lg V) runtime is the heap that is used to find the next candidate edge. I'm pretty sure it is possible to design a priority queue that does insertion and removal in constant time when there's a limited number of possible weights, which would reduce Prim's to O(V + E).
For the priority queue, I believe an array whose indices cover all the possible weights would suffice, where each slot points to a linked list containing the elements with that weight. You'd still have a factor of d (the number of distinct weights) for figuring out which list to take the next element from (the lowest non-empty one), but if d is a constant, then you'll be fine.
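A minimal sketch of such a bucket priority queue in Python (my own illustration, assuming the set of distinct weights is known up front): push is O(1) and pop-min is O(d), which is constant when d is.

```python
from collections import deque

class BucketPQ:
    """Priority queue for a small, fixed set of integer priorities.

    One deque per distinct weight; push is O(1), pop_min is O(d),
    where d is the number of distinct weights.
    """
    def __init__(self, weights):
        self.weights = sorted(set(weights))
        self.buckets = {w: deque() for w in self.weights}
        self.size = 0

    def push(self, weight, item):
        self.buckets[weight].append(item)
        self.size += 1

    def pop_min(self):
        for w in self.weights:              # scan at most d buckets
            if self.buckets[w]:
                self.size -= 1
                return w, self.buckets[w].popleft()
        raise IndexError("pop from empty queue")

# Example usage with two distinct edge weights:
pq = BucketPQ([1, 5])
pq.push(5, "edge-a")
pq.push(1, "edge-b")
print(pq.pop_min())  # -> (1, 'edge-b')
```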
Elaborating on Aasmund Eldhuset's answer: if the edge weights are restricted to integers in the range 0, 1, 2, 3, ..., U - 1, then you can adapt many of the existing algorithms to run in (near) linear time when U is a constant.
For example, let's take Kruskal's algorithm. The first step in Kruskal's algorithm is to sort the edges into ascending order of weight. You can do this in time O(m + U) if you use counting sort or time O(m lg U) if you use radix sort. If U is a constant, then both of these sorting steps take linear time. Consequently, the runtime for running Kruskal's algorithm in this case would be O(m α(m)), where α(m) is the inverse Ackermann function, because the limiting factor is going to be the runtime of maintaining the disjoint-set forest.
Alternatively, look at Prim's algorithm. You need to maintain a priority queue of the candidate distances to the nodes. If you know that all the edge weights are in the range [0, U), then you can do this in a super naive way by just storing an array of U buckets, one per possible priority. Inserting into the priority queue then just requires you to dump an item into the right bucket. You can do a decrease-key by evicting an element and moving it to a lower bucket. You can then do a find-min by scanning the buckets. This makes the algorithm's runtime O(m + nU), which is linear when U is a constant.
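To make the counting-sort variant of Kruskal's described above concrete, here is a minimal sketch in Python (my own illustration, not from the answer): bucket the edges by weight in O(m + U), then run the usual union-find pass, giving O(m α(m)) overall when U is a constant.

```python
def mst_kruskal_counting_sort(n, edges, U):
    """Kruskal's MST with counting sort on integer weights in [0, U).

    n: number of vertices (0..n-1); edges: list of (u, v, w) with 0 <= w < U.
    Sorting is O(m + U); union-find adds the inverse-Ackermann factor.
    """
    # Counting sort: bucket the edges by weight.
    buckets = [[] for _ in range(U)]
    for u, v, w in edges:
        buckets[w].append((u, v, w))

    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    mst, total = [], 0
    for bucket in buckets:                  # weights in ascending order
        for u, v, w in bucket:
            ru, rv = find(u), find(v)
            if ru != rv:                    # edge joins two components
                parent[ru] = rv
                mst.append((u, v, w))
                total += w
    return mst, total

# Example: triangle 0-1 (1), 1-2 (1), 0-2 (2) -> MST weight 2
print(mst_kruskal_counting_sort(3, [(0, 1, 1), (1, 2, 1), (0, 2, 2)], 3))
```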
Barder and Burkhardt proposed this approach in 2019 for finding MSTs in linear time, provided the non-MST edges are given in ascending order of their weights.
