Bellman-Ford Algorithm Space Complexity

I have been reading about the Bellman-Ford algorithm's space complexity. The Wikipedia article on the Bellman-Ford algorithm says the space complexity is O(V), but on this link it says O(V^2). My question is: what is the true space complexity, and why?

It depends on the way we define it.
If we assume that the graph is given, the extra space complexity is O(V) (for an array of distances).
If the storage of the graph itself also counts, it can be O(V^2) for an adjacency matrix and O(V+E) for an adjacency list.
Both answers are "true" in some sense; it's just a matter of what we choose to count in a specific problem.

There are two cases (a sketch illustrating this follows the list):
If we assume that the graph is given, we only need to create 2 arrays (an array of distances and an array of parents), so the extra space complexity is O(V).
If we also count the storage of the graph itself, then:
a) O(V^2) for an adjacency matrix
b) O(V+E) for an adjacency list
c) O(E) if we just keep an edge list that stores all the edges
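For illustration, here is a minimal Bellman-Ford sketch in Python under the edge-list representation from case c); the function name and input format are assumptions, and the negative-cycle check is omitted for brevity. The only extra storage is the two O(V) arrays.

def bellman_ford(n, edges, source):
    # Bellman-Ford over an edge list of (u, v, w) triples, vertices 0..n-1.
    # Extra space is just the two arrays below, i.e. O(V); the edge list
    # itself (the input) takes O(E).
    INF = float("inf")
    dist = [INF] * n          # array of distances
    parent = [None] * n       # array of parents
    dist[source] = 0
    for _ in range(n - 1):    # relax every edge V-1 times
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                parent[v] = u
    return dist, parent

# Example
edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)]
print(bellman_ford(4, edges, 0))   # ([0, 3, 1, 4], [None, 2, 0, 1])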

It does not matter whether we use an adjacency list or an adjacency matrix: if the given graph is a complete one, then
space complexity = input + extra
1. If we use an adjacency matrix: space = input O(V^2) + extra O(V) (e.g. a min-heap) = O(V^2)
2. If we use an adjacency list: in a complete graph E = O(V^2), so space = input O(V+E) + extra O(V) (min-heap) = O(V^2)
When we talk about the space complexity of an algorithm, we usually consider the worst case that can happen. For Dijkstra or Bellman-Ford on a complete graph, the space complexity is therefore O(V^2).

Related

Prim's algorithm using heap or unsorted array?

There's a question in my CS class asking whether, for a sparse graph (where n^2 >> m), an unsorted array with O(n^2) is more efficient than a binary heap with O((m+n)log n). I'm a little confused with this one because I thought (m+n)log n is always better than n^2. Can anyone give some insight on this?
A dense graph is a graph in which the number of edges is close to the maximal number of edges. The opposite, a graph with only a few edges, is a sparse graph.
(from Wikipedia)
Suppose we have a sparse graph with n nodes and m edges. The time complexity of Prim's algorithm is O((n + m) log(n)) if we use a binary heap data structure. If we use an unsorted array (assuming you meant an adjacency matrix), then it becomes O(n^2), as you stated.
Compare the time complexities: O((n + m)log(n)) and O(n^2).
If our graph is sparse, then n^2 >> m, so (n + m)log(n) is much smaller than n^2, which means it's a better idea to implement the algorithm with a binary heap rather than an adjacency matrix.
If we have a dense graph, then we can have at most m = n*(n-1) edges in a simple graph. That means n^2 < m*log(n) ~ n^2 log(n), so now it's logical to use an adjacency matrix. If our graph is not simple, then there may be more than one edge between two nodes, and therefore we can have more than n*(n-1) edges. However, the distinction between sparse and dense graphs is rather vague and depends on the context, so it is not guaranteed that in a dense graph we will have m >= n*(n-1) edges.
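To make the comparison concrete, here is a minimal sketch of Prim's algorithm with a binary heap (Python's heapq, using lazy deletion); the function name and the adjacency-list-of-(neighbor, weight)-pairs input format are assumptions. On a sparse graph this runs in O((n + m) log n), versus O(n^2) for the unsorted-array version.

import heapq

def prim_mst_weight(adj):
    # Prim's algorithm with a binary heap and lazy deletion.
    # adj: list of lists of (neighbor, weight) pairs, vertices 0..n-1,
    # graph assumed connected and undirected.
    n = len(adj)
    in_tree = [False] * n
    total = 0
    heap = [(0, 0)]              # (key, vertex); start from vertex 0
    while heap:
        key, u = heapq.heappop(heap)
        if in_tree[u]:
            continue             # stale entry left over from lazy deletion
        in_tree[u] = True
        total += key
        for v, w in adj[u]:
            if not in_tree[v]:
                heapq.heappush(heap, (w, v))
    return total

# Example: a small sparse graph (m close to n)
adj = [[(1, 4), (2, 1)], [(0, 4), (2, 2)], [(0, 1), (1, 2), (3, 7)], [(2, 7)]]
print(prim_mst_weight(adj))      # 10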

Space Complexity of DFS and BFS in graph

I am trying to understand the space complexity of DFS and BFS on a graph.
I understand that the space complexity of BFS when using an adjacency matrix would be O(v^2), where v is the number of vertices.
Using an adjacency list, the space would be smaller in the average case, i.e. < v^2, but in the worst case it would still be O(v^2).
Even including the queue, it would be O(v^2) (neglecting the lower-order term).
But what is the scenario with DFS?
Even if we use the adjacency matrix/list, the space complexity would be O(v^2). But that seems to be a very loose bound, and that is without even considering stack frames.
Am I correct regarding these complexities?
If not, what are the space complexities of BFS/DFS?
And while calculating the space complexity of DFS, do we consider the stack frames or not?
What is the tight bound on space complexity for BFS and DFS on a graph?
As shown in Pseudocode 1, the space consumed by the adjacency matrix or adjacency list is not allocated by the BFS algorithm itself. The adjacency matrix or adjacency list is the input to the BFS algorithm, so it is not included in the calculation of its space complexity. The same applies to DFS.
Pseudocode 1
Input: A graph Graph and a starting vertex root of Graph
Output: Goal state. The parent links trace the shortest path back to root
procedure BFS(G,start_v):
    let Q be a queue
    label start_v as discovered
    Q.enqueue(start_v)
    while Q is not empty
        v = Q.dequeue()
        if v is the goal:
            return v
        for all edges from v to w in G.adjacentEdges(v) do
            if w is not labeled as discovered:
                label w as discovered
                w.parent = v
                Q.enqueue(w)
The space complexity of BFS can be expressed as O(|V|), where |V| is the cardinality of the set of vertices, because in the worst case you would need to hold all the vertices in the queue.
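As a runnable counterpart to Pseudocode 1, here is a Python sketch assuming the graph is given as an adjacency-list dict; the max_queue counter is only there to illustrate the O(|V|) queue bound and is not part of the algorithm.

from collections import deque

def bfs(adj, start, goal):
    # BFS over an adjacency-list dict {vertex: [neighbors]}.
    # Extra space: the discovered set and the queue, both O(|V|).
    discovered = {start}
    parent = {start: None}
    queue = deque([start])
    max_queue = 1                      # tracks the peak queue size
    while queue:
        v = queue.popleft()
        if v == goal:
            return parent              # parent links trace the path back
        for w in adj[v]:
            if w not in discovered:
                discovered.add(w)
                parent[w] = v
                queue.append(w)
        max_queue = max(max_queue, len(queue))
    return parent

# Example: star graph -- every other vertex ends up in the queue at once
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(bfs(adj, 0, 4))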
The space complexity of DFS depends on the implementation. A non-recursive implementation of DFS with worst-case space complexity O(|E|) is shown below, where |E| is the cardinality of the set of edges:
procedure DFS-iterative(G,v):
    let S be a stack
    S.push(v)
    while S is not empty
        v = S.pop()
        if v is not labeled as discovered:
            label v as discovered
            for all edges from v to w in G.adjacentEdges(v) do
                S.push(w)
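A runnable Python version of this DFS (input format assumed to be an adjacency-list dict, as above); the max_stack counter only illustrates why the stack can grow to O(|E|) when neighbors are pushed before being checked.

def dfs_iterative(adj, start):
    # Iterative DFS matching the pseudocode above: neighbors are pushed
    # before they are checked, so the stack can hold roughly one entry
    # per edge in the worst case, giving the O(|E|) space bound.
    discovered = set()
    stack = [start]
    max_stack = 1                      # tracks the peak stack size
    order = []
    while stack:
        v = stack.pop()
        if v not in discovered:
            discovered.add(v)
            order.append(v)
            for w in adj[v]:
                stack.append(w)
            max_stack = max(max_stack, len(stack))
    return order

# Example: complete graph on 4 vertices -- the stack peaks near |E|
adj = {v: [w for w in range(4) if w != v] for v in range(4)}
print(dfs_iterative(adj, 0))           # [0, 3, 2, 1]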
Breadth-first search is complete, while depth-first search is not.

Time Complexity of Dijkstra's Algorithm when using Adjacency Matrix vs Adjacency Linked List

For a graph with n vertices and e edges, and a fringe stored in a binary min-heap, the worst-case runtime is O((n+e)lg(n)). However, this assumes we use an adjacency linked list to represent the graph. Using an adjacency matrix takes O(n^2) to traverse, while a linked-list representation can be traversed in O(n+e).
Therefore, would using the matrix to represent the graph change the runtime of Dijkstra's to O(n^2 lg(n))?
The O(log n) cost is paid for processing edges, not for walking the graph, so if you know the actual number of edges in the graph then Dijkstra's algorithm on an adjacency matrix with min-heap is within O(n^2 + (n+e)log(n)) time.
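To see where the two terms come from, here is a minimal Dijkstra sketch over an adjacency matrix with a binary min-heap; the function name and the convention that matrix[u][v] is None when there is no edge are assumptions. The row scan contributes the O(n^2) term, while heap operations happen only for actual edges, contributing O((n + e) log n).

import heapq

def dijkstra_matrix(matrix, source):
    # Dijkstra's algorithm over an adjacency matrix with a binary min-heap.
    # Scanning each popped vertex's row costs O(n), O(n^2) in total; heap
    # pushes happen only for real edges, costing O((n + e) log n) overall.
    n = len(matrix)
    dist = [float("inf")] * n
    dist[source] = 0
    done = [False] * n
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if done[u]:
            continue
        done[u] = True
        for v in range(n):                           # O(n) row scan
            w = matrix[u][v]
            if w is not None and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))   # only for real edges
    return dist

# Example: 4-vertex graph
matrix = [
    [None, 1,    4,    None],
    [1,    None, 2,    6   ],
    [4,    2,    None, 3   ],
    [None, 6,    3,    None],
]
print(dijkstra_matrix(matrix, 0))                    # [0, 1, 3, 6]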

For a given graph G = (V,E) how can you sort its adjacency list representation in O(E+V) time?

Because we know that the integers representing a vertex take values in the range [0, ..., |V|-1], we can use counting sort to sort each entry of the adjacency list in O(V) time.
Since we have V lists to sort, that would give us an O(V^2) time algorithm. I don't see how we can transform this into an O(V+E) time algorithm...
In fact you need to sort E elements in total - the number of edges. Thus your estimate of O(V^2) is not quite correct. You sort each of the adjacency lists in linear time with respect to the number of edges it contains, and since in total you have E edges, the complexity of sorting all the lists is O(E). Of course, as you have V lists, you can't get lower than O(V), and thus the estimate is O(V+E).
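One concrete way to get the O(V + E) bound is to transpose the graph twice: each transpose scans source vertices in increasing order, so the rebuilt lists come out sorted without any per-list counting sort. A minimal Python sketch (the function names and the list-of-lists input format are assumptions):

def sort_adjacency_lists(adj):
    # Sort every adjacency list in O(V + E) total time.
    # adj: list of lists, vertices numbered 0..V-1 (directed edges u -> v).
    n = len(adj)

    def transpose(g):
        rev = [[] for _ in range(n)]
        # Scanning sources u in increasing order appends them to rev[v]
        # in increasing order, so rev's lists are already sorted.
        for u in range(n):
            for v in g[u]:
                rev[v].append(u)
        return rev

    return transpose(transpose(adj))   # transposing twice restores the
                                       # original graph, now with sorted lists

# Example
adj = [[3, 1, 2], [2, 0], [], [0]]
print(sort_adjacency_lists(adj))       # [[1, 2, 3], [0, 2], [], [0]]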

Breadth-first search algorithm (graph represented by the adjacency list) has a quadratic time complexity?

A friend told me that the breadth-first search algorithm (with the graph represented by an adjacency list) has quadratic time complexity. But all the sources say that the complexity of the BFS algorithm is exactly O(|V| + |E|), or O(n + m). So where does a quadratic complexity come from?
All the sources are right :-) With BFS you visit each vertex and each edge exactly once, resulting in linear complexity. Now, if it's a completely connected graph, i.e. each pair of vertices is connected by an edge, then the number of edges grows quadratic with the number of vertices:
|E| = |V| * (|V|-1) / 2
Then one might say the complexity of BFS is quadratic in the number of vertices: O(|V|+|E|) = O(|V|^2)
BFS is O(|E|+|V|), so in terms of the input size it is a linear-time algorithm. But if we measure only in terms of the number of vertices, the number of edges can be O(|V|^2) in dense graphs; hence, if we express the time complexity in terms of the vertices of the graph, BFS is O(|V|^2) and can be considered quadratic in the number of vertices.
O(n + m) is linear in complexity and not quadratic. O(n*m) is quadratic.
0. Initially all the vertices are labelled as unvisited. We start from a given vertex as the current vertex.
1. BFS visits all the adjacent unvisited vertices of the current vertex, queuing up these children.
2. It then labels the current vertex as 'visited' so that it is not visited (queued) again.
3. BFS then takes the first vertex out of the queue and repeats from step 1 until no unvisited vertices remain.
The runtime of the above algorithm is linear in the total number of vertices and edges together, because the algorithm visits each vertex once and checks each of its edges once; thus it takes (number of vertices + number of edges) steps to completely search the graph.
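A small instrumented BFS makes the counting argument concrete; the operation counters are illustrative additions, not part of the algorithm. On a complete graph the edge checks grow quadratically in |V|, even though the total work stays linear in |V| + |E|.

from collections import deque
from itertools import combinations

def bfs_operation_count(adj, start):
    # Counts vertex visits and edge checks during BFS: the total is
    # |V| plus the sum of adjacency-list lengths, i.e. linear in |V| + |E|.
    discovered = {start}
    queue = deque([start])
    vertex_visits = edge_checks = 0
    while queue:
        v = queue.popleft()
        vertex_visits += 1
        for w in adj[v]:
            edge_checks += 1
            if w not in discovered:
                discovered.add(w)
                queue.append(w)
    return vertex_visits, edge_checks

# Complete graph on n vertices: m = n*(n-1)/2, so edge checks grow as n^2
n = 100
adj = {v: [] for v in range(n)}
for u, v in combinations(range(n), 2):
    adj[u].append(v)
    adj[v].append(u)
print(bfs_operation_count(adj, 0))     # (100, 9900)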
