Complexity of non-recursive DFS code

I think the complexity of this code is:
Time: O(V), where V is the number of vertices
Space: O(V), where V is the number of vertices
public void dfs() {
    Stack<Integer> stack = new Stack<Integer>();
    stack.push(source);
    visited[source] = true;                   // mark the source so it cannot be pushed again
    while (!stack.empty()) {
        int vertex = stack.pop();
        System.out.println(" print v: " + vertex);
        for (int v : graph.adj(vertex)) {     // examine every edge leaving the current vertex
            if (!visited[v]) {
                visited[v] = true;            // mark on push so each vertex enters the stack only once
                stack.push(v);
                edgeTo[v] = vertex;
            }
        }
    }
}
Please correct me if I am wrong

Assuming that graph.adj() always produces a bounded number of vertices (maybe just one), then you are right.
However, if it depends in any way on the total number of vertices in the graph, then it does not hold. If this dependency is linear, then the algorithm is O(n^2).
Generalizing, if f(n) is the average number of vertices returned by graph.adj() per vertex, then the answer is O(n*f(n)).

You are traversing the adjacency list of each node, and each node is visited exactly once. Thus, you are in effect examining each edge once, so the complexity is O(V + E), which can be as much as O(V^2) in the worst case.
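For the worst case above: a simple graph has at most one edge per pair of vertices, so E <= V*(V-1)/2 = O(V^2), with equality for a complete graph.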

Related

Time Complexity of Printing a Graph in Adjacency List Representation

What is the order of growth of the running time of the following code if the graph uses an adjacency-list representation, where V is the number of vertices, and E is the total number of edges?
// G.V() returns the number of vertices; G is the graph.
for (int v = 0; v < G.V(); v++) {
    for (int w : G.adj(v)) {
        System.out.println(v + "-" + w);
    }
}
Why is the time complexity of the above code Theta(V+E), where V is the number of vertices and E is the number of edges?
I believe that if we let printing be the cost function, then it should be Theta(sum of degrees of each v) = Theta(2E) = Theta(E) because we enter the inner loop deg(v) times for vertex v.
"if we let printing be the cost function, then..."
Under that assumption, yes, there will be Theta(E) println calls.
However, in general the running time does not depend only on the printing; it also includes other instructions, such as the v++ increments of the outer loop, and there are Theta(V + E) operations in total.
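Counting both kinds of work explicitly (a rough tally, using the fact that the vertex degrees of an undirected graph sum to 2E): total operations = sum over v of (1 + deg(v)) = V + 2E = Theta(V + E).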

Time Complexity of Straight Forward Dijkstra's Algorithm

I am having a hard time seeing the O(mn) bound for the straightforward implementation of Dijkstra's algorithm (without a heap). In my implementation, and in others I have found, the main loop iterates n - 1 times (once for each vertex other than the source). In each iteration, finding the minimum vertex is O(n) (examining each vertex in the queue and taking the minimum distance to the source), and each extracted minimum vertex has at most n - 1 neighbors, so updating all neighbors is O(n). This would seem to lead to a bound of O(n^2). My implementation is provided below.
public int[] dijkstra(int s) {
    int[] dist = new int[vNum];
    LinkedList<Integer> queue = new LinkedList<Integer>();
    for (int i = 0; i < vNum; i++) {
        queue.add(i);                        // add all vertices to the queue
        dist[i] = Integer.MAX_VALUE;         // set all initial shortest paths to the max int value
    }
    dist[s] = 0;                             // the source is 0 away from itself
    while (!queue.isEmpty()) {               // iterates over n - 1 vertices, O(n)
        int minV = getMinDist(queue, dist);  // get vertex with minimum distance from source, O(n)
        queue.remove(Integer.valueOf(minV)); // remove the Integer object, not the element at that index
        for (int neighbor : adjList[minV]) { // O(n), at most n - 1 neighbors
            int shortestPath = dist[minV] + edgeLenghts[minV][neighbor];
            if (shortestPath < dist[neighbor]) {
                dist[neighbor] = shortestPath; // a new shortest path has been found
            }
        }
    }
    return dist;
}
I don't think this is correct, but I am having trouble seeing where m factors in.
Your implementation indeed removes the M factor, at least if we consider only simple graphs (no multiple edges between two vertices). It is O(N^2)! The complexity would be O(N*M) if you iterated over the edges instead of the vertices.
EDIT: To be more specific, it is actually O(M + N^2). Changing the value of a vertex takes O(1) time in your algorithm, and it can happen each time you consider an edge, in other words up to M times. That's why M appears in the complexity.
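Written out as a rough tally of the loops above (n vertices, m edges): n iterations * O(n) per getMinDist scan, plus O(m) for all edge relaxations combined, gives O(n^2 + m).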
Unfortunately, if you want to use a simple heap, the complexity is going to be O(M log M) (or M log N). Why? You cannot quickly change values inside a heap. So if dist[v] suddenly decreases, because you've found a new, better path to v, you can't just change it in the heap, because you don't know its location. You may instead put a duplicate of v in your heap, but the size of the heap would then be O(M). Even if you store the locations and update them cleverly, you might have only O(N) items in the heap, but you still have to fix up the heap after each change, which takes O(log(heap size)). A value may change up to O(M) times, which gives O(M log M) (or O(M log N)) complexity.
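A minimal sketch of that "push a duplicate" idea (my own illustration, not the code from the question; the adjacency structure adj[v] is assumed to hold {target vertex, edge weight} pairs):
import java.util.*;

class LazyDijkstra {
    // Lazy-deletion Dijkstra: instead of a decrease-key, push a new {distance, vertex}
    // entry and skip stale entries when they are popped. The heap may hold O(M) entries,
    // hence the O(M log M) bound discussed above.
    static int[] dijkstra(int n, List<int[]>[] adj, int s) {
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[s] = 0;
        PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt(a -> a[0]));
        pq.add(new int[]{0, s});                        // entries are {distance, vertex}
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int d = top[0], v = top[1];
            if (d > dist[v]) continue;                  // stale duplicate: a better path was already found
            for (int[] e : adj[v]) {
                int w = e[0], weight = e[1];
                if (dist[v] + weight < dist[w]) {
                    dist[w] = dist[v] + weight;
                    pq.add(new int[]{dist[w], w});      // push a duplicate instead of a decrease-key
                }
            }
        }
        return dist;
    }
}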

Difference between Prim's and Dijkstra's graph algorithms

I'm reading about graph algorithms in the Cormen book (CLRS). Below is pseudocode from that book.
Prim's algorithm for MST:
MST-PRIM(G, w, r)
    for each u in G.V
        u.key = infinity
        u.p = NIL
    r.key = 0
    Q = G.V
    while Q is not empty
        u = EXTRACT-MIN(Q)
        for each v in G.Adj[u]
            if (v in Q) and (w(u,v) < v.key)
                v.p = u
                v.key = w(u,v)
Dijkstra algorithm to find single source shortest path.
INITIALIZE-SINGLE-SOURCE(G, s)
    for each vertex v in G.V
        v.d = infinity
        v.par = NIL
    s.d = 0

DIJKSTRA(G, w, s)
    INITIALIZE-SINGLE-SOURCE(G, s)
    S = NULL
    Q = G.V
    while Q is not empty
        u = EXTRACT-MIN(Q)
        S = S U {u}
        for each vertex v in G.Adj[u]
            RELAX(u, v, w)
My question is: why are we checking whether the vertex belongs to Q (v in Q), i.e. that the vertex is not yet in the tree, whereas in Dijkstra's algorithm we do not check for that?
Is there any reason why?
The algorithms called Prim and Dijkstra solve different problems in the first place. 'Prim' finds a minimum spanning tree of an undirected graph, while 'Dijkstra' solves the single-source shortest path problem for directed graphs with nonnegative edge weights.
In both algorithms, the queue Q contains all vertices that are not 'done' yet, i.e. the white and gray ones in the common white/gray/black terminology (see here).
In Dijkstra's algorithm, a black vertex cannot be relaxed, because if it could be, that would mean its distance was not correct beforehand (contradicting the defining property of black vertices). So it makes no difference whether you check v in Q or not.
In Prim's algorithm, it is possible to find an edge of small weight that leads to an already black vertex. That's why, if we do not check v in Q, the value stored at vertex v can indeed change. Normally this does not matter, because we never read the min-weight value of black vertices. However, your pseudocode uses a min-heap data structure, and in that case every modification of a vertex value must be accompanied by a DECREASE-KEY. Clearly, it is not valid to call DECREASE-KEY on black vertices, because they are no longer in the heap. That's why I suppose the author decided to check v in Q explicitly.
Speaking generally, the code for Dijkstra's and Prim's algorithms is usually almost identical, except for a minor difference:
Prim's algorithm checks whether w(u, v) is less than D(v) in RELAX.
Dijkstra's algorithm checks whether D(u) + w(u, v) is less than D(v) in RELAX.
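To make the difference concrete, here is a minimal sketch of the two relaxation steps side by side (my own illustration with hypothetical array names, not from the Cormen pseudocode):
class RelaxCompare {
    // Dijkstra: compare the full path length through u against the best known dist[v].
    static void relaxDijkstra(int u, int v, int weight, int[] dist, int[] parent) {
        if (dist[u] != Integer.MAX_VALUE && dist[u] + weight < dist[v]) {
            dist[v] = dist[u] + weight;
            parent[v] = u;
        }
    }
    // Prim: compare only the weight of the single edge (u, v) against the best known key[v].
    static void relaxPrim(int u, int v, int weight, int[] key, int[] parent) {
        if (weight < key[v]) {
            key[v] = weight;
            parent[v] = u;
        }
    }
}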
Take a look at my own implementations of both Dijkstra and Prim, written in C++.
They are very similar; I modified the Dijkstra code into Prim.
Dijkstra:
#include <vector>
#include <queue>
#include <climits>
using namespace std;

const int INF = INT_MAX / 4;
struct node { int v, w; };
bool operator<(node l, node r) { if (l.w == r.w) return l.v > r.v; return l.w > r.w; }
vector<int> Dijkstra(int max_v, int start_v, vector<vector<node>>& adj_list) {
    vector<int> min_dist(max_v + 1, INF);
    priority_queue<node> q;
    q.push({ start_v, 0 });
    min_dist[start_v] = 0;
    while (q.size()) {
        node n = q.top(); q.pop();
        for (auto adj : adj_list[n.v]) {
            if (min_dist[adj.v] > n.w + adj.w) {
                min_dist[adj.v] = n.w + adj.w;
                q.push({ adj.v, adj.w + n.w });
            }
        }
    }
    return min_dist;
}
Prim:
struct node { int v, w; };
bool operator<(node l, node r) { return l.w > r.w; }
int MST_Prim(int max_v, int start_v, vector<vector<node>>& adj_list) {
    vector<int> visit(max_v + 1, 0);
    priority_queue<node> q; q.push({ start_v, 0 });
    int sum = 0;
    while (q.size()) {
        node n = q.top(); q.pop();
        if (visit[n.v]) continue;
        visit[n.v] = 1;
        sum += n.w;
        for (auto adj : adj_list[n.v]) {
            q.push({ adj.v, adj.w });
        }
    }
    return sum;
}

Bellman-Ford Algorithm

I know that the Bellman-Ford algorithm takes at most |V| - 1 iterations to find the shortest paths if the graph does not contain a negative-weight cycle. Is there a way to modify the Bellman-Ford algorithm so it finds the shortest paths in one iteration?
No. The worst-case running time of Bellman-Ford is O(E*V), which comes from the need to iterate over the graph V - 1 times. However, in practice we can improve Bellman-Ford to a typical running time of O(E + V) by using a queue-based variant.
Here's a queue-based Bellman-Ford implementation, with code inspired by the booksite for Algorithms, 4th edition, by Robert Sedgewick and Kevin Wayne:
private void findShortestPath(int src) {
    queue.add(src);
    distTo[src] = 0;
    edgeTo[src] = -1;
    while (!queue.isEmpty()) {
        int v = queue.poll();
        onQueue[v] = false;
        for (Edge e : adj(v)) {
            int w = e.dest;
            if (distTo[w] > distTo[v] + e.weight) {
                distTo[w] = distTo[v] + e.weight;
                edgeTo[w] = v;
                if (!onQueue[w]) {        // enqueue w only if it is not already on the queue
                    onQueue[w] = true;
                    queue.add(w);
                }
            }
            // check whether a negative cycle exists after every V relaxation calls
            if (cost++ % V == 0) {
                if (findNegativeCycle())
                    return;
            }
        }
    }
}
The worst-case running time of this algorithm is still O(E*V), but in practice it typically runs in O(E + V).

Running time of minimum spanning tree? (Prim's method)

I have written code that solves MST using Prim's method. I read that this kind of implementation (using a priority queue) should run in O(E + V log V) = O(V log V), where E is the number of edges and V the number of vertices, but when I look at my code it simply doesn't look that way. I would appreciate it if someone could clear this up for me.
To me it seems the running time is this:
The while loop runs O(E) times (until we go through all the edges).
Inside that loop we extract an element from Q, which takes O(log E) time.
The second, inner loop takes O(V) time (although we don't run it every time, it is clear that it will run V times in total, since we have to add all the vertices).
My conclusion would be that the running time is O(E * (log E + V)) = O(E * V).
This is my code:
#include <queue>
#include <vector>
#include <utility>
using namespace std;

#define p_int pair < int, int >

int N, M;                       //N - number of vertices, M - number of edges
int graph[100][100] = { 0 };    //adjacency matrix
bool in_tree[100] = { false };  //whether a node is already in the MST
priority_queue< p_int, vector < p_int >, greater < p_int > > Q;
/*
Keeps track of the smallest edge connecting a node in the MST and a node outside
the tree. The first part of the pair is the weight of the edge and the second is
the node. We don't remember the parent node because we don't need it :-)
*/
int mst_prim()
{
    Q.push( make_pair( 0, 0 ) );
    int nconnected = 0;
    int mst_cost = 0;
    while( nconnected < N )
    {
        p_int node = Q.top(); Q.pop();
        if( in_tree[ node.second ] == false )
        {
            mst_cost += node.first;
            in_tree[ node.second ] = true;
            for( int i = 0; i < N; ++i )
                if( graph[ node.second ][i] > 0 && in_tree[i] == false )
                    Q.push( make_pair( graph[ node.second ][i], i ) );
            nconnected++;
        }
    }
    return mst_cost;
}
You can use adjacency lists to speed your solution up (though not for dense graphs), but even then you are not going to get O(V log V) without a Fibonacci heap.
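For reference, the standard bounds here are O(E log V) for Prim's algorithm with a binary heap and O(E + V log V) with a Fibonacci heap.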
Maybe Kruskal's algorithm would be simpler for you to understand. It uses no priority queue; you only have to sort an array of edges once. Basically, it goes like this:
Insert all edges into an array and sort them by weight
Iterate over the sorted edges, and for each edge connecting nodes i and j, check if i and j are connected. If they are, skip the edge, else add the edge into the MST.
The only catch is being able to quickly tell whether two nodes are already connected. For this you use the Union-Find data structure, which goes like this:
#include <cstdlib>

const int MAX_NODES = 100000;   // upper bound on the number of nodes; pick what fits your input
int T[MAX_NODES];

int getParent(int a)
{
    if (T[a] == -1) return a;
    return T[a] = getParent(T[a]);   // path compression
}

void Unite(int a, int b)
{
    if (rand() & 1)
        T[a] = b;
    else
        T[b] = a;
}
In the beginning, just initialize T to all -1. Then, every time you want to find out whether nodes A and B are connected, just compare their parents - if they are the same, they are connected (i.e. getParent(A) == getParent(B)). When you insert an edge into the MST, make sure to update the Union-Find with Unite(getParent(A), getParent(B)).
The analysis is simple: you sort the edges and then iterate over them, and each Union-Find operation takes close to O(1) amortized time. So it is O(E log E + E), which equals O(E log E).
That is it ;-)
I did not have to deal with this algorithm before, but what you have implemented does not match the algorithm as explained on Wikipedia. The algorithm there works as follows.
Put all vertices into the queue. O(V)
While the queue is not empty... O(V)
Take the edge with the minimum weight from the queue. O(log(V))
Update the weights of adjacent vertices. O(E / V), this is the average number of adjacent vertices.
Reestablish the queue structure. O(log(V))
This gives
O(V) + O(V) * (O(log(V)) + O(E / V))
= O(V) + O(V) * O(log(V)) + O(V) * O(E / V)
= O(V) + O(V * log(V)) + O(E)
= O(V * log(V)) + O(E)
exactly what one expects.

Resources