Dropping non-constants in Algorithm Complexity

So, basically I'm implementing an algorithm to calculate distances from one source node to every other node in a weighted graph, and if a node is in a negative cycle, it detects and marks that node as such.
My question regards the total time complexity of my algorithm. Assume V is number of nodes and E the number of edges.
The algorithm starts by reading E lines of input that specify the edges of the graph and inserting each one into the corresponding adjacency list. This operation is O(E).
I run the Bellman-Ford relaxation pass V-1 times to compute the distances, and then I run it V-1 more times to detect the nodes affected by a negative cycle. This is 2 * O(VE) = O(VE).
I print a distances vector of size V to display the distances and/or whether each node is in a negative cycle or not. O(V).
So I guess my total complexity would be O(VE + V + E). Now my question is: Since VE is almost always bigger than V+E (for large numbers, it's always!), can I drop the V+E in the complexity and make it simply O(VE)?

Yes, O(VE + V + E) simplifies to O(VE) given that V and E represent the number of vertices and edges in a graph. For a highly connected graph, E = O(V^2) and so in that case VE + V + E = O(V^3) = O(VE). For a sparse graph, E = O(V) (note, this is not necessarily a tight upper bound) and so VE + V + E = O(V^2) = O(VE). In all cases O(VE) is an appropriate upper bound on the complexity.

Yes, when dealing with asymptotic complexity, you always assume that V and E are very large (in theory, you study complexity by calculating limits as V and E approach infinity). In much the same way that you can write n^2 + n = O(n^2), in your case VE + V + E is O(VE).
Note that the worst-case complexity of Bellman-Ford actually is O(VE), which confirms that your reasoning is correct.
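For reference, here is a minimal Java sketch of the approach described in the question (class and field names such as Edge, INF, and negCycle are illustrative, not from the original post): V-1 relaxation passes over all edges give the distances, and a second round of passes (V of them here, to be safe in corner cases) marks every node whose distance still improves, or that is reachable from one that does.

import java.util.*;

class BellmanFordSketch {
    record Edge(int from, int to, long weight) {}

    static final long INF = Long.MAX_VALUE / 4;

    // dist[v] = shortest distance from source; negCycle[v] = true if dist[v]
    // is not well-defined because v is affected by a negative cycle.
    static void run(int V, List<Edge> edges, int source, long[] dist, boolean[] negCycle) {
        Arrays.fill(dist, INF);
        dist[source] = 0;

        // Phase 1: V-1 relaxation passes over all edges -> O(VE)
        for (int i = 0; i < V - 1; i++)
            for (Edge e : edges)
                if (dist[e.from()] < INF && dist[e.from()] + e.weight() < dist[e.to()])
                    dist[e.to()] = dist[e.from()] + e.weight();

        // Phase 2: V more passes; anything that still relaxes (or has a marked
        // predecessor) gets marked -> another O(VE)
        for (int i = 0; i < V; i++)
            for (Edge e : edges) {
                if (dist[e.from()] >= INF) continue;
                if (dist[e.from()] + e.weight() < dist[e.to()]) {
                    dist[e.to()] = dist[e.from()] + e.weight();
                    negCycle[e.to()] = true;                      // still improving after V-1 passes
                }
                if (negCycle[e.from()]) negCycle[e.to()] = true;  // propagate the mark
            }
    }
}

Reading the E edges is O(E), the two phases are O(VE) each, and printing the V results is O(V), so the whole thing is O(VE + V + E) = O(VE), as concluded above.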

Related

Clarkson's 2-approximation Weighted Vertex Cover Algorithm Runtime analysis

A well-known 2-approximation for a Minimum Weighted Vertex Cover Problem is the one proposed by Clarkson:
Clarkson, Kenneth L. "A modification of the greedy algorithm for vertex cover." Information Processing Letters 16.1 (1983): 23-25.
Easy-to-read pseudocode of the algorithm can be found here (see Section 32.1.2).
The algorithm, according to the paper, has a runtime complexity of O(|E|*log|V|) where E is the set of edges and V the set of vertices. I'm not entirely sure how they get this result.
Let d(v) be the degree of vertex v in a graph, and w(v) be some weight function.
Excluding some of the technicalities from the algorithm, the algorithm looks like this:
while( |E| != 0 ){ //While there are still edges in the graph
    Pick a vertex v \in V for which w(v)/d(v) is minimized;
    for( u : (u,v) \in E ){
        update w(u);
        ...
    }
    delete v and all edges incident to it from the graph.
}
The outer loop produces the |E| term in the runtime complexity. That would mean that picking, out of a list of vertices, the one that minimizes the ratio can be done in log n time. As far as I can tell, finding the minimum of a list of values takes n-1 comparisons, not log n. Finally, the inner for loop runs once per neighbor of v, so it contributes d(v), which is bounded by n-1. Hence I would conclude that the algorithm has a runtime complexity of O(|E|*|V|).
What am I missing here?
Keep the vertices in a balanced binary search tree ordered by w(v)/d(v). Finding the min is O(log |V|). Each time we delete an edge uv, we have to update u's key (by removing it and reinserting it into the tree with the new key), which takes time O(log |V|). Each of these steps is done at most |E| times.
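A small Java sketch of that bookkeeping (the class and method names are mine, not from the paper): keep the vertices in a TreeSet ordered by w(v)/d(v), and route every weight or degree change through a remove-then-reinsert, so each of the at most |E| updates costs O(log |V|).

import java.util.*;

class RatioOrderedVertices {
    static class Vertex {
        final int id;
        double w;   // current weight
        int d;      // current degree
        Vertex(int id, double w, int d) { this.id = id; this.w = w; this.d = d; }
        double ratio() { return d == 0 ? Double.POSITIVE_INFINITY : w / d; }
    }

    // Order by w(v)/d(v); break ties by id so distinct vertices never compare equal.
    private final TreeSet<Vertex> tree = new TreeSet<>(
        Comparator.comparingDouble(Vertex::ratio).thenComparingInt(v -> v.id));

    void add(Vertex v) { tree.add(v); }

    Vertex extractMin() { return tree.pollFirst(); }    // O(log |V|)

    // Every change to w or d must go through here, otherwise the tree's
    // ordering invariant breaks.
    void updateKey(Vertex v, double newW, int newD) {   // O(log |V|)
        tree.remove(v);
        v.w = newW;
        v.d = newD;
        tree.add(v);
    }
}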

Breadth First Search time complexity analysis

The time complexity to go over each adjacent edge of a vertex is, say, O(N), where N is the number of adjacent edges. So, for V vertices the time complexity becomes O(V*N) = O(E), where E is the total number of edges in the graph. Since removing and adding a vertex from/to a queue is O(1), why is it added to the overall time complexity of BFS as O(V+E)?
I hope this is helpful to anybody having trouble understanding computational time complexity for Breadth First Search a.k.a BFS.
Queue graphTraversal = new Queue();
graphTraversal.add(firstVertex);
firstVertex.markVisited();
// This while loop will run V times, where V is the total number of vertices in the graph.
while (!graphTraversal.isEmpty()) {
    currentVertex = graphTraversal.getVertex();               // O(1)
    // This inner loop will run Eaj times, where Eaj is the number of edges adjacent to currentVertex.
    for (adjacentVertex : currentVertex.getAdjacentVertices()) {
        if (!adjacentVertex.isVisited()) {
            adjacentVertex.markVisited();
            graphTraversal.add(adjacentVertex);
        }
    }
    graphTraversal.remove(currentVertex);                     // O(1)
}
Time complexity is as follows:
V * (O(1) + O(Eaj) + O(1))
= V + (sum of Eaj over all vertices) + V
= 2V + E (where E is the total number of edges in the graph, since the adjacency lists together contain every edge)
= O(V + E)
I have tried to simplify the code and the complexity computation, but if you still have any questions, let me know.
Considering the following graph, we see how the time complexity is O(|V|+|E|) and not O(|V|*|E|).
Adjacency List
V E
v0:{v1,v2}
v1:{v3}
v2:{v3}
v3:{}
How BFS works, step by step:
Step1:
Adjacency lists:
V E
v0: {v1,v2} mark, enqueue v0
v1: {v3}
v2: {v3}
v3: {}
Step2:
Adjacency lists:
V E
v0: {v1,v2} dequeue v0;mark, enqueue v1,v2
v1: {v3}
v2: {v3}
v3: {}
Step3:
Adjacency lists:
V E
v0: {v1,v2}
v1: {v3} dequeue v1; mark,enqueue v3
v2: {v3}
v3: {}
Step4:
Adjacency lists:
V E
v0: {v1,v2}
v1: {v3}
v2: {v3} dequeue v2, check its adjacency list (v3 already marked)
v3: {}
Step5:
Adjacency lists:
V E
v0: {v1,v2}
v1: {v3}
v2: {v3}
v3: {} dequeue v3; check its adjacency list
Step6:
Adjacency lists:
V E
v0: {v1,v2} |E0|=2
v1: {v3} |E1|=1
v2: {v3} |E2|=1
v3: {} |E3|=0
Total number of steps:
|V| + |E0| + |E1| + |E2| +|E3| == |V|+|E|
4 + 2 + 1 + 1 + 0 == 4 + 4
8 == 8
Assume an adjacency list representation, V is the number of vertices, E the number of edges.
Each vertex is enqueued and dequeued at most once.
Scanning for all adjacent vertices takes O(|E|) time, since sum of lengths of adjacency lists is |E|.
Hence BFS has a time complexity of O(|V|+|E|).
The other answers here do a great job showing how BFS runs and how to analyze it. I wanted to revisit your original mathematical analysis to show where, specifically, your reasoning gives you a lower estimate than the true value.
Your analysis goes like this:
Let N be the average number of edges incident to each node (N = E / V).
Each node, therefore, spends O(N) time doing operations on the queue.
Since there are V nodes, the total runtime is O(V) · O(N) = O(V) · O(E / V) = O(E).
You are very close to having the right estimate here. The question is where the missing V term comes from. The issue here is that, weirdly enough, you can't say that O(V) · O(E / V) = O(E).
You are totally correct that the average work per node is O(E / V). That means that the total work done asymptotically is bounded from above by some multiple of E / V. If we think about what BFS is actually doing, the work done per node probably looks more like c1 + c2E / V, since there's some baseline amount of work done per node (setting up loops, checking basic conditions, etc.), which is what's accounted for by the c1 term, plus some amount of work proportional to the number of edges visited (E / V, times the work done per edge). If we multiply this by V, we get that
V · (c1 + c2E / V)
= c1V + c2E
= Θ(V + E)
What's happening here is that those lovely lower-order terms that big-O so conveniently lets us ignore are actually important here, so we can't easily discard them. So that's mathematically at least what's going on.
What's actually happening here is that no matter how many edges there are in the graph, there's some baseline amount of work you have to do for each node independently of those edges. That's the setup to do things like run the core if statements, set up local variables, etc.
Performing an O(1) operation L times results in O(L) complexity.
Thus, removing and adding a vertex from/to the Queue is O(1), but when you do that for V vertices, you get O(V) complexity.
Therefore, O(V) + O(E) = O(V+E)
Won't the time complexity of BFS be just O(V), considering we only have to traverse the vertices in the adjacency list? Am I missing something here?
For the below graph represented using adjacency list for ex:
0 ->1->2->null
1->3->4->null
3->null
4->null
While creating the graph we have the adjacency list which is an array of linked lists. So my understanding is during traversal this array is available to us and it's enough if we only traverse all the elements of this array and mark each element as visited to not visit it twice. What am I missing here?
I would just like to add to the above answers that if we are using an adjacency matrix instead of an adjacency list, the time complexity will be O(V^2), as we will have to go through a complete row for each vertex to check which nodes are adjacent.
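For illustration, a sketch of the matrix version (names are made up): the inner loop always scans a full row of length V for every dequeued vertex, no matter how few of those entries are actual edges, so the total work is O(V^2) even on a sparse graph.

import java.util.*;

class BfsMatrixSketch {
    static void bfs(boolean[][] adjMatrix, int start) {
        int V = adjMatrix.length;
        boolean[] visited = new boolean[V];
        Deque<Integer> queue = new ArrayDeque<>();
        visited[start] = true;
        queue.add(start);
        while (!queue.isEmpty()) {                 // each vertex is dequeued once: V iterations
            int v = queue.poll();
            for (int u = 0; u < V; u++) {          // full row scan: always V steps
                if (adjMatrix[v][u] && !visited[u]) {
                    visited[u] = true;
                    queue.add(u);
                }
            }
        }
    }
}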
You are saying that the total complexity should be O(V*N) = O(E). Suppose there is no edge between any pair of vertices, i.e. Adj[v] is empty for every vertex v. Will BFS take constant time in this case? The answer is no. It will take O(V) time (more accurately Θ(V)). Even if Adj[v] is empty, the line where you check Adj[v] will itself take constant time for each vertex. So the running time of BFS is O(V+E), which means O(max(V,E)).
One of the ways I grasped the intuition behind the time complexity
O(V + E)
is to look at what happens when we traverse the graph (let's take BFS pseudocode in Java):
for (v : V) {                                  // segment 1: runs once per vertex
    if (!v.isVisited) {
        q = new Queue<>();
        q.add(v);
        v.isVisited = true;
        while (!q.isEmpty) {
            curr = q.poll();
            for (u : curr.adjacencyList) {     // segment 2: runs once per edge overall
                // do some processing
                if (!u.isVisited) {
                    u.isVisited = true;
                    q.add(u);
                }
            }
        }
    }
}
As we can see, there are two important segments, 1 and 2, which determine the time complexity.
Case 1: Consider a graph with many vertices but only a few edges, i.e. a sparsely connected graph (say 100 vertices and 2 edges).
In that case, segment 1 dominates the traversal.
Hence the time complexity is O(V), as segment 1 checks every vertex in the graph exactly once.
Therefore, T.C. = O(V) (since E is negligible).
Case 2: Consider a graph with few vertices that is complete (6 vertices and 15 = C(6,2) edges).
Here segment 2 dominates, since the number of edges is large and segment 2 is evaluated 2|E| times for an undirected graph.
The T.C. of processing the first vertex would be
O(1) * O(2|E|) = O(E).
The remaining vertices do not start a new traversal in segment 1 (they are already visited by segment 2); they just add V-1 constant-time checks, which is O(V).
Thus, in this case it's better to say T.C. = O(E) + O(V).
So, in both the sparse and the dense case, we have
T.C.(traversal) = O(E) + O(V)
= O(E + V)

Understanding Time complexity calculation for Dijkstra Algorithm

As per my understanding, I have calculated the time complexity of Dijkstra's algorithm in big-O notation using an adjacency list, as given below. It didn't come out as it was supposed to, and that led me to work through it step by step.
Each vertex can be connected to (V-1) vertices, hence the number of adjacent edges to each vertex is V - 1. Let us say E represents V-1 edges connected to each vertex.
Finding & Updating each adjacent vertex's weight in min heap is O(log(V)) + O(1) or O(log(V)).
Hence from steps 1 and 2 above, the time complexity for updating all adjacent vertices of one vertex is E*(log V), i.e. E*log V.
Hence the time complexity for all V vertices is V * (E*log V), i.e. O(VElogV).
But the time complexity for Dijkstra Algorithm is O(ElogV). Why?
Dijkstra's shortest path algorithm is O(ElogV) where:
V is the number of vertices
E is the total number of edges
Your analysis is correct, but your symbols have different meanings! You say the algorithm is O(VElogV) where:
V is the number of vertices
E is the maximum number of edges attached to a single node.
Let's rename your E to N. So one analysis says O(ElogV) and another says O(VNlogV). Both are correct and in fact E = O(VN). The difference is that ElogV is a tighter estimation.
Adding a more detailed explanation as I understood it just in case:
O(for each vertex using min heap: for each edge linearly: push vertices to min heap that edge points to)
V = number of vertices
O(V * (pop vertex from min heap + find unvisited vertices in edges * push them to min heap))
E = number of edges on each vertex
O(V * (pop vertex from min heap + E * push unvisited vertices to min heap)). Note that we can push the same node multiple times here before we get to "visit" it.
O(V * (log(heap size) + E * log(heap size)))
O(V * ((E + 1) * log(heap size)))
O(V * (E * log(heap size)))
E = V because each vertex can reference all other vertices
O(V * (V * log(heap size)))
O(V^2 * log(heap size))
The heap size is V^2 because we push to it every time we want to update a distance, and each vertex can be pushed up to V times. E.g. for the last vertex, the 1st relaxation might set its distance to 10, the 2nd to 9, the 3rd to 8, etc., so we push a new entry for each update.
O(V^2 * log(V^2))
O(V^2 * 2 * log(V))
O(V^2 * log(V))
V^2 is also the total number of edges, so if we let E = V^2 (as in the standard naming), we get O(E * log(V)).
Let n be the number of vertices and m the number of edges.
Since with Dijkstra's algorithm you have O(n) delete-mins and O(m) decrease-keys, each costing O(log n), the total run time using binary heaps will be O(log(n)(m + n)). It is entirely possible to amortize the cost of decrease-key down to O(1) using Fibonacci heaps, resulting in a total run time of O(nlogn + m), but in practice this is often not done, since the constant-factor penalties of FHs are pretty big and on random graphs the number of decrease-keys is far below its upper bound (more in the range of O(n log(m/n)), which is much better on sparse graphs where m = O(n)). So always be aware of the fact that the total run time depends both on your data structures and on the input class.
In a dense (or complete) graph, E log V > V^2.
So using an adjacency list and a binary heap is not always best.
In that case, I prefer to use a plain adjacency matrix and keep the minimum distance per row.
Just V^2 time is needed.
In case E < V^2 / log V,
or the maximum number of edges per vertex is less than some constant K,
then use a binary heap.
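A sketch of that matrix-based variant (method and array names are mine): keep a dist array and a done flag, and in each of the V rounds do a linear scan for the unvisited minimum and then relax the whole row. Two nested V-loops give O(V^2) with no heap at all, which beats O(E log V) on dense graphs.

import java.util.*;

class DijkstraMatrixSketch {
    static final long INF = Long.MAX_VALUE / 4;

    // adj[u][v] holds the edge weight, or INF if there is no edge u -> v.
    static long[] dijkstra(long[][] adj, int source) {
        int V = adj.length;
        long[] dist = new long[V];
        boolean[] done = new boolean[V];
        Arrays.fill(dist, INF);
        dist[source] = 0;
        for (int i = 0; i < V; i++) {
            int u = -1;
            for (int v = 0; v < V; v++)            // linear scan for the minimum: O(V)
                if (!done[v] && (u == -1 || dist[v] < dist[u])) u = v;
            if (dist[u] == INF) break;             // remaining vertices are unreachable
            done[u] = true;
            for (int v = 0; v < V; v++)            // relax the whole row: O(V)
                if (adj[u][v] < INF && dist[u] + adj[u][v] < dist[v])
                    dist[v] = dist[u] + adj[u][v];
        }
        return dist;
    }
}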
I find it easier to think of this complexity in the following way:
The nodes are first inserted into a priority queue and then extracted one by one, leading to O(V log V).
Once a node is extracted, we iterate through its edges and update the priority queue accordingly. Note that every edge is explored only once; moreover, updating the priority queue is O(log V), leading to an overall O(E log V).
TLDR. You have V extractions from the priority queue and E updates to the priority queue, leading to an overall O((V + E) log V).
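A compact Java sketch of the binary-heap version (illustrative names; it uses lazy deletion instead of decrease-key, so stale heap entries are simply skipped): every edge relaxation may push one (distance, vertex) pair, so there are O(E) heap operations of O(log E) = O(log V) each, matching the O((V + E) log V) bound above.

import java.util.*;

class DijkstraHeapSketch {
    record Edge(int to, long w) {}

    static final long INF = Long.MAX_VALUE / 4;

    static long[] dijkstra(List<List<Edge>> adj, int source) {
        int V = adj.size();
        long[] dist = new long[V];
        Arrays.fill(dist, INF);
        dist[source] = 0;
        // Heap entries are {distance, vertex}; duplicates are allowed (lazy deletion).
        PriorityQueue<long[]> pq = new PriorityQueue<>(Comparator.comparingLong((long[] a) -> a[0]));
        pq.add(new long[]{0, source});
        while (!pq.isEmpty()) {
            long[] top = pq.poll();
            long d = top[0];
            int u = (int) top[1];
            if (d > dist[u]) continue;             // stale entry, skip it
            for (Edge e : adj.get(u)) {
                if (dist[u] + e.w() < dist[e.to()]) {
                    dist[e.to()] = dist[u] + e.w();
                    pq.add(new long[]{dist[e.to()], e.to()});
                }
            }
        }
        return dist;
    }
}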
Let's try to analyze the algorithm as given in the CLRS book.
The for-each loop in line 7: for any vertex, say 'u', the number of times the loop runs equals the number of vertices adjacent to 'u'.
The number of adjacent vertices of a node is always less than or equal to the total number of edges in the graph.
If we take V (because of the while loop in line 4) and E (because of the for-each in line 7) and compute the complexity as VElog(V), it would be equivalent to assuming that each vertex has E edges incident on it, but in reality a single vertex has at most E incident edges. (The case of a single vertex having close to E adjacent vertices happens, for example, at the centre of a star graph.)
V: number of vertices,
E: total number of edges.
Suppose the graph is dense.
The complexity would be O(V*logV) + O((1+2+...+(V-1))*logV).
1+2+...+(V-1) = V*(V-1)/2 ~ V^2 ~ E, because the graph is dense.
So the complexity would be O(ElogV).
The sum 1+2+...+(V-1) refers to the step: for each vertex v in G.Adj[u] but not in S.
If you think about Q: before the first vertex is extracted it has V vertices, then V-1, then V-2,
... then 1.
E is the number of edges and V the number of vertices. The number of edges is at most
(V * (V-1)) / 2,
which is approximately
V^2.
So we can add at most V^2 entries to the min heap, and restoring heap order after an insertion takes
O(log(V^2)).
Every time we insert a new element into the min heap, the heap is re-ordered. We have E edges, so we do this E times, giving a total time complexity of
O(E * log(V^2)) = O(2 * E * log(V)).
Omitting the constant 2:
O(E * log(V))

Worst case running time in a graph with n vertices and m edges

Let G be a connected graph with n vertices and m edges. Which of the following best corresponds to the notion of "linear time," when this graph is the input to an algorithm?
a) O(n)
b) O(m)
c) O(n^2)
d) O((n+m)^2)
I didn't think this question would trip me up as much as it did, but I have to figure it out now. By the definition of linear time, I would assume it is either (a) or (b). If I had to choose one, I would go with (b), as there may be more edges than there are vertices. But I know that may not be the case and there may be more vertices than edges, so (a) doesn't sound too bad either, and (d) is the only one that actually takes both n and m into account.
Yes, you are right, the answer is (b).
Note that G is a connected graph, so we have the following basic fact:
m ≥ n-1
Since the input to the algorithm is the graph G, the input size to the algorithm is n + m, and we have:
n + m
≤ (m + 1) + m
= 2m + 1
Therefore linear time with respect to the input is O(n + m) = O(m).

Graph Minimum Spanning Tree using BFS

This is a problem from a practice exam that I'm struggling with:
Let G = (V, E) be a weighted undirected connected graph, with positive
weights (you may assume that the weights are distinct). Given a real
number r, define the subgraph Gr = (V, {e in E | w(e) <= r}). For
example, G0 has no edges (obviously disconnected), and Ginfinity = G
(which by assumption is connected). The problem is to find the
smallest r such that Gr is connected.
Describe an O(mlogn)-time algorithm that solves the problem by
repeated applications of BFS or DFS.
The real problem is doing it in O(mlogn). Here's what I've got:
r = min( w(e) )                                => O(m)
while true do                                  => O(m)
    Gr = G with edges e | w(e) > r removed     => O(m)
    if | BFS( Gr ).V | < |V|                   => O(m + n)
        r++ (or r = next smallest w(e))
    else
        return r
That's a whopping O(m^2 + mn). Any ideas for getting it down to O(mlogn)? Thanks!
You are iterating over all possible edge costs, which gives the outer loop of O(m). Notice that if the graph is disconnected when you discard all edges with weight > w(e), it is also disconnected when you discard all edges with weight > w(e') for any w(e') < w(e). You can use this property to do a binary search over the edge costs and thus reduce the number of connectivity checks to O(log n).
lo = min(w(e) for e in edges), hi = max(w(e) for e in edges)   # assumes integer weights
while lo < hi:
    mid = (lo + hi) / 2
    if connected(graph after discarding all e where w(e) > mid):
        hi = mid        # connected, so a smaller threshold may still work
    else:
        lo = mid + 1    # disconnected, so the threshold must be larger
return lo
The binary search has a complexity of O(log(max_e - min_e)) (you can actually bring it down to O(log(edges)) by binary searching over the sorted edge weights instead), and discarding edges plus determining connectivity can be done in O(edges + vertices), so this can be done in O((edges + vertices) * log(edges)).
Warning: I have not tested this in code yet, so there may be bugs. But the idea should work.
How about the following algorithm?
First take a list of all edges (or all distinct edge lengths) from the graph and sort them. That takes O(m*log m) = O(m*log n) time: m is at most n^2, so O(log m) = O(log n^2) = O(2*log n) = O(log n).
It is obvious that r should be equal to the weight of some edge. So you can do a binary search on the index of the edge in the sorted array.
For each index you try, you take the length of the corresponding edge as r, and check the graph for connectivity, only using the edges of length <= r, with BFS or DFS.
Each iteration of the binary search takes O(m), and you have to make O(log m)=O(log n) iterations.
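A rough Java sketch of the index-based binary search described in both answers (all names here are mine): sort the edges once, binary-search on the index of the threshold edge, and test connectivity with a BFS that only follows edges of weight <= r. Each check is O(m + n) and there are O(log m) = O(log n) checks, so the total is O(m log n) including the initial sort.

import java.util.*;

class SmallestConnectingThreshold {
    record Edge(int u, int v, long w) {}

    // Smallest r such that the subgraph with edges of weight <= r is connected.
    // Assumes the full graph is connected, as in the problem statement.
    static long smallestR(int n, List<Edge> edges) {
        List<Edge> sorted = new ArrayList<>(edges);
        sorted.sort(Comparator.comparingLong(Edge::w));               // O(m log n)
        int lo = 0, hi = sorted.size() - 1;
        while (lo < hi) {                                             // O(log m) iterations
            int mid = (lo + hi) / 2;
            if (connected(n, sorted, sorted.get(mid).w())) hi = mid;  // connected: try smaller r
            else lo = mid + 1;                                        // disconnected: need larger r
        }
        return sorted.get(lo).w();
    }

    // BFS restricted to edges of weight <= r: O(m + n) per call.
    static boolean connected(int n, List<Edge> edges, long r) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (Edge e : edges)
            if (e.w() <= r) { adj.get(e.u()).add(e.v()); adj.get(e.v()).add(e.u()); }
        boolean[] seen = new boolean[n];
        Deque<Integer> q = new ArrayDeque<>();
        seen[0] = true;
        q.add(0);
        int reached = 1;
        while (!q.isEmpty()) {
            int x = q.poll();
            for (int y : adj.get(x))
                if (!seen[y]) { seen[y] = true; reached++; q.add(y); }
        }
        return reached == n;
    }
}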
