For a given graph G = (V,E) how can you sort its adjacency list representation in O(E+V) time?

Because we know that the integers representing vertices take values in the range [0, ..., |V|-1], we can use counting sort to sort each entry of the adjacency list in O(V) time.
Since we have V lists to sort, that would give us an O(V^2) time algorithm. I don't see how we can turn this into an O(V+E) time algorithm...

In fact you need to sort only E elements in total - the number of edges. So the O(V^2) estimate is not tight. The catch is that you should not run a separate counting sort on each list (each run would cost O(V) on its own, which is exactly where O(V^2) comes from). Instead, make one global pass: visit the vertices in increasing order, and for each vertex append it to the new adjacency list of each of its neighbours. Every list then comes out sorted, and the total work across all lists is O(E). Of course, as you have V lists, you can't get lower than O(V), and thus the bound O(V + E).
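Here is a minimal Python sketch of that single pass (the function name and graph encoding are illustrative), assuming vertices are numbered 0..|V|-1 and the graph is undirected so each edge appears in both endpoints' lists; for a directed graph the same pass sorts the lists of the transpose, so apply it twice:

```python
def sort_adjacency_lists(adj):
    """Sort every adjacency list of an undirected graph in O(V + E) total time.

    adj is a list of lists: adj[u] holds the neighbours of vertex u,
    with vertices numbered 0 .. len(adj) - 1.
    """
    n = len(adj)
    sorted_adj = [[] for _ in range(n)]
    # Visit vertices in increasing order; appending u to each
    # neighbour's new list therefore produces sorted lists.
    for u in range(n):
        for v in adj[u]:
            sorted_adj[v].append(u)
    return sorted_adj

# Example: triangle 0-1, 0-2, 1-2 with unsorted lists.
adj = [[2, 1], [0, 2], [1, 0]]
print(sort_adjacency_lists(adj))   # [[1, 2], [0, 2], [0, 1]]
```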

Related

Prim's algorithm using heap or unsorted array?

There's a question in my CS class asking whether, for a sparse graph (one where n^2 >> m), an unsorted array with O(n^2) is more efficient than a binary heap with O((m+n)log n). I'm a little confused by this one because I thought (m+n)log n is always better than n^2. Can anyone give some insight on this?
A dense graph is a graph in which the number of edges is close to the maximal number of edges; the opposite, a graph with only a few edges, is a sparse graph (from Wikipedia).
Suppose we have a sparse graph with n nodes and m edges. The time complexity of Prim's algorithm is O((n + m) log(n)) if we use a binary heap data structure. If we use an unsorted array (assuming you meant an adjacency matrix), it becomes O(n^2), as you stated.
Compare the time complexities: O((n + m)log(n)) and O(n^2).
If our graph is sparse, then m is small compared to n^2 (in the extreme, m < n). Then m*log(n) < n*log(n) < n^2, which means it's a better idea to implement the algorithm with a binary heap rather than an adjacency matrix.
If we have a dense simple graph, then we can have at most m = n*(n-1) edges. That means n^2 < m*log(n) ~ n^2*log(n), so now it's logical to use an adjacency matrix. If our graph is not simple, there may be more than one edge between two nodes, so we can have even more than n*(n-1) edges. However, the distinction between sparse and dense graphs is rather vague and depends on the context, so there is no guarantee that a dense graph has m >= n*(n-1) edges.
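For intuition, here is a quick numeric comparison of the two bounds (the graph sizes are made-up examples):

```python
import math

# Compare the two bounds for a hypothetical graph with n nodes and m edges.
def compare(n, m):
    heap = (n + m) * math.log2(n)   # binary-heap Prim
    array = n * n                   # unsorted-array / adjacency-matrix Prim
    winner = "heap" if heap < array else "array"
    print(f"n={n}, m={m}: (n+m)log n = {heap:.0f}, n^2 = {array} -> {winner}")

compare(10_000, 30_000)        # sparse: m ~ 3n, the heap wins
compare(10_000, 40_000_000)    # dense: m ~ n^2, the array wins
```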

Time complexity of deleting an edge in an adjacency list

For a graph (V, E), where V is the total number of vertices and E the total number of edges, what is the time complexity of deleting an edge? I thought it would be O(V) in the worst case, since the maximum number of edges any vertex can have is V-1. But I have been told the time complexity is O(M), where M is the number of edges a vertex has. Which is correct?
Depends on the structure of your graph.
If you choose to implement the graph as an adjacency list, removing an element from a list is O(V), since you may have to iterate through the whole list.
However, you can implement the graph as a list of sets (each set holding the adjacent nodes of a node), and then the time complexity is O(log V) if the set is sorted (e.g. a balanced BST) or O(1) on average if it is a hash set.
If your graph is represented as an adjacency matrix, it is also O(1), since you just have to erase E[u][v] and E[v][u].
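A minimal sketch of the set-based idea, assuming an undirected graph stored as a Python dict of hash sets (class and method names are illustrative):

```python
class Graph:
    """Undirected graph as a dict of hash sets: O(1) average edge deletion."""

    def __init__(self):
        self.adj = {}  # vertex -> set of adjacent vertices

    def add_edge(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)

    def remove_edge(self, u, v):
        # set.discard is O(1) on average; with a sorted structure
        # (e.g. a balanced BST) this would be O(log V) instead.
        self.adj[u].discard(v)
        self.adj[v].discard(u)

g = Graph()
g.add_edge(1, 2)
g.add_edge(1, 3)
g.remove_edge(1, 2)
print(g.adj)  # {1: {3}, 2: set(), 3: {1}}
```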

Time Complexity of Dijkstra's Algorithm when using Adjacency Matrix vs Adjacency Linked List

For a graph with n vertices and e edges, and a fringe stored in a binary min-heap, the worst-case runtime is O((n+e)lg(n)). However, this assumes we use an adjacency linked list to represent the graph. An adjacency matrix takes O(n^2) to traverse, while a linked-list representation can be traversed in O(n+e).
Therefore, would using the matrix to represent the graph change the runtime of Dijkstra's to O(n^2 lg(n))?
No - the two costs add rather than multiply. The O(log n) cost is paid for processing edges (heap pushes and pops), not for walking the graph, so Dijkstra's algorithm on an adjacency matrix with a min-heap runs within O(n^2 + (n+e)log(n)) time: O(n^2) to scan the matrix plus O((n+e)log(n)) for the heap operations.
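A hedged Python sketch of Dijkstra over an adjacency matrix with a binary min-heap, assuming math.inf marks absent edges, to show where the two cost terms come from:

```python
import heapq
import math

def dijkstra_matrix(W, source):
    """Dijkstra on an adjacency matrix W with a binary min-heap.

    W[u][v] is the edge weight, or math.inf if there is no edge.
    Scanning each popped vertex's row costs O(n) -> O(n^2) total;
    the heap pushes/pops cost O((n + e) log n) on top of that.
    """
    n = len(W)
    dist = [math.inf] * n
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)          # O(log n) per pop
        if d > dist[u]:
            continue                        # stale heap entry, skip
        for v in range(n):                  # O(n) row scan per vertex
            if W[u][v] != math.inf and d + W[u][v] < dist[v]:
                dist[v] = d + W[u][v]
                heapq.heappush(heap, (dist[v], v))   # O(log n) per edge
    return dist
```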

Bellman-Ford Algorithm Space Complexity

I have been searching for the Bellman-Ford algorithm's space complexity, but the Wikipedia article on the Bellman-Ford algorithm says the space complexity is O(V), while this link says O(V^2). My question is: which is the true space complexity, and why?
It depends on the way we define it.
If we assume that the graph is given, the extra space complexity is O(V) (for an array of distances).
If we assume that the graph also counts, it can be O(V^2) for an adjacency matrix and O(V+E) for an adjacency list.
They both are "true" in some sense. It's just about what we want to count in a specific problem.
There are two cases:
If we assume that the graph is given, then we have to create two arrays (an array of distances and an array of parents; see the sketch after this list), so the extra space complexity is O(V).
If we also count storing the graph, then:
a) O(V^2) for an adjacency matrix
b) O(V+E) for an adjacency list
c) O(E) if we just create an edge list that stores all the edges
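A minimal Bellman-Ford sketch over an edge list (case c above; names are illustrative), where the only extra space is the two O(V) arrays:

```python
import math

def bellman_ford(n, edges, source):
    """Bellman-Ford over an edge list. `edges` holds (u, v, w) triples.

    Extra space is just the two O(V) arrays below; the graph itself
    costs O(E) as an edge list (or O(V+E) / O(V^2) in other forms).
    """
    dist = [math.inf] * n    # O(V) array of distances
    parent = [None] * n      # O(V) array of parents
    dist[source] = 0
    for _ in range(n - 1):   # relax every edge n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                parent[v] = u
    return dist, parent
```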
It does not matter whether we are using an adjacency list or an adjacency matrix: if the given graph is a complete one, then
space complexity = input + extra
1. If we use an adjacency matrix: space = input O(V^2) + extra O(V) for the min-heap = O(V^2).
2. If we use an adjacency list: in a complete graph E = O(V^2), so space = input O(V + E) + extra O(V) for the min-heap = O(V^2).
When we talk about the space complexity of an algorithm we always go with the worst case that can happen. For Dijkstra or Bellman-Ford the worst case is a complete graph, so the space complexity is O(V^2).

Linear Time Algorithm For MST

I was wondering if anyone can point to a linear-time algorithm to find the MST of a graph when there is a small number of distinct weights (i.e. edges can only have 2 different weights).
I could not find anything on Google other than Prim's, Kruskal's, and Boruvka's, none of which seem to have any properties that would reduce the runtime in this special case. I'm guessing that to make it linear time it would have to be some sort of modification of BFS (which finds the MST when the weights are uniform).
The cause of the lg V factor in Prim's O(E lg V) runtime is the heap that is used to find the next candidate edge. I'm pretty sure it is possible to design a priority queue that does insertion and removal in constant time when there's a limited number of possible weights, which would reduce Prim to O(V + E).
For the priority queue, I believe an array whose indices cover all the possible weights would suffice, with each element pointing to a linked list that contains the elements of that weight. You'd still have a factor of d (the number of distinct weights) for figuring out which list to take the next element from (the "lowest" non-empty one), but if d is a constant, you'll be fine. A sketch of such a queue follows.
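A minimal sketch of such a bucket queue, assuming the set of possible weights is known up front (class and method names are illustrative):

```python
from collections import deque

class BucketQueue:
    """Priority queue for a small fixed set of possible weights.

    One deque per distinct weight; insertion is O(1) and pop-min is
    O(d), where d is the number of distinct weights.
    """

    def __init__(self, weights):
        # Map each allowed weight to its own bucket, in ascending order.
        self.order = sorted(set(weights))
        self.buckets = {w: deque() for w in self.order}

    def push(self, weight, item):
        self.buckets[weight].append(item)   # O(1)

    def pop_min(self):
        for w in self.order:                # O(d) scan for lowest non-empty
            if self.buckets[w]:
                return w, self.buckets[w].popleft()
        raise IndexError("pop from empty BucketQueue")

q = BucketQueue([1, 5])
q.push(5, "edge-a")
q.push(1, "edge-b")
print(q.pop_min())  # (1, 'edge-b')
```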
Elaborating on Aasmund Eldhuset's answer: if the weights in the MST are restricted to numbers in the range 0, 1, 2, 3, ..., U-1, then you can adapt many of the existing algorithms to run in (near) linear time if U is a constant.
For example, let's take Kruskal's algorithm. The first step in Kruskal's algorithm is to sort the edges into ascending order of weight. You can do this in time O(m + U) if you use counting sort or time O(m lg U) if you use radix sort. If U is a constant, then both of these sorting steps take linear time. Consequently, the runtime for running Kruskal's algorithm in this case would be O(m α(m)), where α(m) is the inverse Ackermann function, because the limiting factor is going to be the runtime of maintaining the disjoint-set forest.
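A hedged sketch of Kruskal's algorithm with the counting-sort step, assuming integer weights in [0, U) and a deliberately simple union-find (path compression only):

```python
def kruskal_small_weights(n, edges, U):
    """Kruskal's algorithm with edge weights restricted to 0 .. U-1.

    Counting sort groups edges by weight in O(m + U); the union-find
    then dominates, giving O(m alpha(m)) overall for constant U.
    """
    # Counting sort: one bucket per possible weight.
    buckets = [[] for _ in range(U)]
    for u, v, w in edges:
        buckets[w].append((u, v, w))

    parent = list(range(n))

    def find(x):                 # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for bucket in buckets:       # visits edges in ascending weight order
        for u, v, w in bucket:
            ru, rv = find(u), find(v)
            if ru != rv:         # edge connects two components: keep it
                parent[ru] = rv
                mst.append((u, v, w))
    return mst
```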
Alternatively, look at Prim's algorithm. You need to maintain a priority queue of the candidate distances to the nodes. If you know that all the edges are in the range [0, U), then you can do this in a super naive way by just storing an array of U buckets, one per possible priority. Inserting into the priority queue then just requires you to dump an item into the right bucket. You can do a decrease-key by evicting an element and moving it to a lower bucket. You can then do a find-min by scanning the buckets. This causes the algorithm runtime to be O(m + nU), which is linear if U is a constant.
Bader and Burkhardt in 2019 proposed an approach that finds MSTs in linear time, provided the non-MST edges are given in ascending order of their weights.
