Suppose that all the edge weights in a graph are integers in the range from 1 to |V|. How fast can you make Prim's algorithm run? What if edge weights are integers in the range 1 to W for some constant W?
I think that, since Prim's algorithm is based on a min-heap implementation, knowledge about the edge weights will not help speed up the procedure. Is this correct?
With this constraint, you can implement a priority queue that uses O(V) or O(W) space respectively, but has O(1) insert and O(1) extract-min operations. In fact, you can get O(1) for all the operations Prim's algorithm requires. Since the time complexity of the heap operations determines the complexity of the main algorithm, you can do better than the default generic implementation.
I think the main idea for solving this problem is to remember that W is a constant: if you represent your priority queue as some structure whose size is bounded by W, traversing the entire structure at each iteration will not change the time complexity of your algorithm...
For example, if you represent your priority queue as an array T with W + 1 positions, where each position holds a linked list of vertices such that T[i] is the list of all vertices with priority equal to i, and T[W + 1] stores the vertices with priority equal to infinity, then you need:
O(V) to build your priority queue (just insert all the vertices into the list T[W + 1]),
O(W) to extract the minimum element (just scan T for the first non-empty position),
O(1) to decrease a key (if vertex v had key i and it is updated to j, just remove v from list T[i] and insert it at the front of list T[j]).
So this gives you complexity O(VW + E) instead of O(V log V + E).
(Of course, it will not work if the range is from 1 to V, because V^2 + E is greater than V log V + E.)
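For concreteness, here is a minimal Python sketch of such a bucket structure. The class and method names are illustrative (not from the original answer), and Python sets stand in for the linked lists, which makes removal by value O(1) on average rather than worst case:

class BucketPQ:
    # Bucket-based priority queue for integer keys in 1..W (plus "infinity").
    # insert and decrease_key are O(1); extract_min scans at most W + 1 buckets,
    # which is O(1) when W is a constant.
    def __init__(self, max_key):
        self.max_key = max_key                 # W
        self.INF = max_key + 1                 # bucket index used for "infinite" keys
        self.buckets = [set() for _ in range(max_key + 2)]
        self.key = {}                          # current key of each item

    def insert(self, item, key=None):
        k = self.INF if key is None else key
        self.key[item] = k
        self.buckets[k].add(item)

    def decrease_key(self, item, new_key):
        old = self.key[item]
        if new_key < old:
            self.buckets[old].discard(item)
            self.buckets[new_key].add(item)
            self.key[item] = new_key

    def extract_min(self):
        for k in range(1, self.INF + 1):       # O(W) scan for the first non-empty bucket
            if self.buckets[k]:
                item = self.buckets[k].pop()
                del self.key[item]
                return item, k
        raise IndexError("empty queue")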
Pseudocode for Prim's algorithm that is not tied to a binary heap can be found in Cormen, Introduction to Algorithms, 3rd edition.
Knowing that the range is 1...k, we can create an array of size k and walk through the edge list, adding each edge to the position corresponding to its weight. By the nature of this storage, the edges are then sorted by weight. This takes O(n + m) time.
Relying on the pseudocode for Prim's algorithm in Cormen, we can analyze its complexity to be O(n log n + m log n) = O((n + m) log n) (Cormen, page 636). Specifically, steps 7 and 11 contribute the log n factor, iterated over the n-loop and the m-loop respectively. The n log n part comes from the EXTRACT-MIN operation, and the m log n part from the implicit DECREASE-KEY operation. Both can be replaced with operations on our edge-weight array, each a loop of O(k). With this modified Prim's algorithm, we get an O(nk + mk) = O(k(n + m)) algorithm.
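A hedged sketch of Prim's algorithm built directly on such a bucket array, assuming a connected graph given as a list of adjacency dicts (adj[u][v] = weight of edge (u, v)); the function name and graph format are illustrative:

def prim_bucket(adj, max_weight, source=0):
    # Prim's MST for a connected graph with integer weights in 1..max_weight (= k).
    # Each EXTRACT-MIN is an O(k) bucket scan and each implicit DECREASE-KEY is an
    # O(1) bucket move, so the total running time is O(k * (n + m)) as argued above.
    n = len(adj)
    INF = max_weight + 1
    key = [INF] * n                    # best edge weight connecting each vertex to the tree
    parent = [None] * n
    in_tree = [False] * n
    buckets = [set() for _ in range(max_weight + 2)]   # buckets[w] = vertices with key w
    key[source] = 0
    buckets[0].add(source)
    for v in range(n):
        if v != source:
            buckets[INF].add(v)

    for _ in range(n):
        # EXTRACT-MIN: scan the buckets for the first non-empty one.
        u = None
        for w in range(INF + 1):
            if buckets[w]:
                u = buckets[w].pop()
                break
        in_tree[u] = True
        # Implicit DECREASE-KEY for each neighbor, O(1) per edge.
        for v, w_uv in adj[u].items():
            if not in_tree[v] and w_uv < key[v]:
                buckets[key[v]].discard(v)
                buckets[w_uv].add(v)
                key[v] = w_uv
                parent[v] = u
    return parent, key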
I just want to have confirmation of the time complexity of the algorithm below.
EDIT: in what follows, we do not assume that the optimal data structures are being used. However, feel free to propose solutions that use such structures (clearly mentioning what data structures are used and what beneficial impact they have on complexity).
Notation: in what follows n=|V|, m=|E|, and avg_neigh denotes the average number of neighbors of any given node in the graph.
Assumption: the graph is unweighted, undirected, and we have its adjacency list representation loaded in memory (a list of lists containing the neighbors of each vertex).
Here is what we have so far:
Line 1: computing the degrees is O(n), as it simply involves getting the length of each sublist in the adjacency list representation, i.e., performing n O(1) operations.
Line 3: finding the lowest value requires checking all values, which is O(n). Since this is nested in the while loop which visits all nodes once, it becomes O(n^2).
Lines 6-7: removing vertex v is O(avg_neigh^2) as we know the neighbors of v from the adjacency list representation and removing v from each of the neighbors' sublists is O(avg_neigh). Lines 6-7 are nested in the while loop, so it becomes O(n * avg_neigh^2).
Line 9: it is O(1), because it simply involves getting the length of one list. It is nested in the for loop and while loop so it becomes O(n * avg_neigh).
Summary: the total complexity is O(n) + O(n^2) + O(n * avg_neigh^2) + O(n * avg_neigh) = O(n^2).
Note 1: if the length of each sublist is not available (e.g., because the adjacency list cannot be loaded in memory), computing the degrees in line 1 is O(n * avg_neigh), as each sublist features avg_neigh elements on average and there are n sublists. And for line 9, the total complexity becomes O(n * avg_neigh^2).
Note 2: if the graph is weighted, we can store the edge weights in the adjacency list representation. However, getting the degrees in line 1 then requires summing over each sublist and is now O(n * avg_neigh) if the adjacency list is loaded in RAM and O(n * avg_neigh^2) otherwise. Similarly, line 9 becomes O(n * avg_neigh^2) or O(n * avg_neigh^3).
There is an algorithm that (1) is recognizable as an implementation of Algorithm 1 and (2) runs in time O(|E| + |V|).
First, let's consider the essence of Algorithm 1. Until the graph is empty, do the following: record the priority of the node with the lowest priority as the core number of that node, and delete that node. The priority of a node is defined dynamically as the maximum over (1) its degree in the residual graph and (2) the core numbers of its deleted neighbors. Observe that the priority of a node never increases, since its degree never increases, and a higher-priority neighbor would not be deleted first. Also, priorities always decrease by exactly one in each outer loop iteration.
The data structure support sufficient to achieve O(|E| + |V|) time splits nicely into
A graph that, starting from the initial graph (initialization time O(|E| + |V|)), supports
Deleting a node (O(degree + 1), summing to O(|E| + |V|) over all nodes),
Getting the residual degree of a node (O(1)).
A priority queue that, starting from the initial degrees (initialization time O(|V|)), supports
Popping the minimum priority element (O(1) amortized),
Decreasing the priority of an element by one (O(1)), never less than zero.
A suitable graph implementation uses doubly linked adjacency lists. Each undirected edge has two corresponding list nodes, which are linked to each other in addition to the previous/next nodes originating from the same vertex. Degrees can be stored in a hash table. It turns out we only need the residual degrees to implement this algorithm, so don't bother storing the residual graph.
Since the priorities are numbers between 0 and |V| - 1, a bucket queue works very nicely. This implementation consists of a |V|-element vector of doubly linked lists, where the list at index p contains the elements of priority p. We also track a lower bound on the minimum priority of an element in the queue. To find the minimum priority element, we examine the lists in order starting from this bound. In the worst case, this costs O(|V|), but if we credit the algorithm whenever it increases the bound and debit it whenever the bound decreases, we obtain an amortization scheme that offsets the variable cost of this scanning.
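A minimal sketch of such a bucket queue, assuming initial priorities in 0..n-1 (e.g., the initial degrees); the names are illustrative and sets stand in for the doubly linked lists:

class BucketQueue:
    # Bucket queue with a tracked lower bound on the minimum priority.
    # decrease_by_one is O(1); pop_min is amortized O(1) because the bound only
    # moves down by at most one per decrease, so the upward scanning is paid for
    # by earlier credits, as described above.
    def __init__(self, priorities):
        n = len(priorities)                     # priorities: dict element -> initial priority
        self.buckets = [set() for _ in range(n)]
        self.priority = dict(priorities)
        for elem, p in priorities.items():
            self.buckets[p].add(elem)
        self.bound = 0                          # lower bound on the minimum priority

    def pop_min(self):
        # Assumes the queue is non-empty.
        while not self.buckets[self.bound]:
            self.bound += 1
        elem = self.buckets[self.bound].pop()
        del self.priority[elem]
        return elem, self.bound

    def decrease_by_one(self, elem):
        p = self.priority[elem]
        if p > 0:                               # never below zero
            self.buckets[p].discard(elem)
            self.buckets[p - 1].add(elem)
            self.priority[elem] = p - 1
            self.bound = min(self.bound, p - 1)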
If you choose a data structure that supports efficient lookup of the minimum value and efficient deletions (such as a self-balancing binary tree or a min-heap), this can be done in O(|E| log |V|).
Creating the data structure in line 1 is done in O(|V|) or O(|V| log |V|).
The loop goes exactly once over each node v in V.
For each such node, it needs to peek at the min value and adjust the data structure so that the next min can be peeked at efficiently in the next iteration. O(log|V|) is required for that (otherwise you could do heap sort in o(n log n); note the little-o notation here).
Getting the neighbors, and removing items from V and E, can be done in O(|neighbors|) using an efficient set implementation such as a hash table. Since each edge and node is chosen exactly once for this step, it sums over all iterations to O(|V| + |E|).
The for loop over neighbors, again, sums to O(|E|), thanks to the fact that each edge is counted exactly once here.
In this loop, however, you might need to update p, and such an update costs O(log|V|), so this sums to O(|E|log|V|).
This sums to O(|V|log|V| + |E|log|V|). Assuming the graph is not very sparse (|E| is in Omega(|V|)), this is O(|E|log|V|).
We have a directed graph G=(V,E), in which each edge (u, v) in E has a relative value r(u, v) in R with 0 <= r(u, v) <= 1 that represents the reliability of a communication channel from vertex u to vertex v.
Consider r(u, v) to be the probability that the channel from u to v will not fail the transfer, and assume that these probabilities are independent.
I want to write an efficient algorithm that finds the most reliable path between two given nodes.
I have tried the following:
DIJKSTRA(G,r,s,t)
1. INITIALIZE-SINGLE-SOURCE(G,s)
2. S=Ø
3. Q=G.V
4. while Q != Ø
5. u<-EXTRACT-MAX(Q)
6. if (u=t) return d[t]
7. S<-S U {u}
8. for each vertex v in G.Adj[u]
9. RELAX(u,v,r)
INITIALIZE-SINGLE-SOURCE(G,s)
1. for each vertex v in G.V
2. d[v]=-inf
3. pi[v]=NIL
4. d[s]=1
RELAX(u,v,r)
1. if d[v]<d[u]*r(u,v)
2. d[v]<-d[u]*r(u,v)
3. pi[v]<-u
and I wanted to find the complexity of the algorithm.
The time complexity of INITIALIZE-SINGLE-SOURCE(G,s) is O(|V|).
The time complexity of line 4 is O(1).
The time complexity of line 5 is O(|V|).
The time complexity of line 7 is O(log(|V|)).
The time complexity of line 8 is O(1).
What is the time complexity of the command S <- S U {u}?
Line 9 (the call to RELAX) is executed O(Σ_{v \in V} deg(v)) = O(E) times in total, and the time complexity of RELAX itself is O(1).
So the time complexity of the algorithm is equal to the time complexity of lines 3-9 plus O(E).
What is the time complexity of the union?
So the time complexity of the algorithm is equal to the time complexity of lines 3-9 plus O(E). What is the time complexity of the union?
No, it is not the complexity of the union; a union can be done quite efficiently if you use a hash table, for example. Moreover, since you use S only for the union, it seems to be redundant.
The complexity of the algorithm also depends heavily on your EXTRACT-MAX(Q) function (usually it is logarithmic in the size of the Q, so logV per iteration), and on RELAX(u,v,r) (which is also usually logarithmic in the size of Q, since you need to update entries in your priority queue).
As expected, this brings us to the same complexity as the original Dijkstra's algorithm, which is O(E + V log V) or O(E log V), depending on the implementation of your priority queue.
I think the solution should be based on the classic Dijkstra algorithm (whose complexity is well known), as you suggested; however, in your solution you define the "shortest path" problem incorrectly.
Note that the probability of A and B occurring is p(A) * p(B) (if they are independent). Hence, you should find a path that maximizes the product of the edge values, whereas Dijkstra's algorithm finds the path that minimizes the sum of the edge weights.
To overcome this issue you should define the weight of your edges as:
R*(u, v) = -log ( R(u, v) )
By introducing the logarithm, you convert the multiplicative problem into an additive one.
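For illustration, a short Python sketch of this transformation that runs a standard min-Dijkstra (via heapq) over the -log weights; the adjacency format and function name are assumptions, not part of the original answer:

import heapq
from math import exp, log

def most_reliable_path(adj, s, t):
    # adj[u] is a list of (v, r) pairs with 0 < r <= 1.
    # Minimizing the sum of -log(r) maximizes the product of the r values,
    # so a standard Dijkstra on the transformed weights finds the most
    # reliable s-t path. Returns (reliability, path).
    dist = {s: 0.0}                        # dist[v] = -log(best reliability from s to v)
    prev = {s: None}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            break
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, r in adj[u]:
            nd = d - log(r)                # -log of a product = sum of the -logs
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if t not in dist:
        return 0.0, None
    path, node = [], t
    while node is not None:                # reconstruct the path from the predecessors
        path.append(node)
        node = prev[node]
    return exp(-dist[t]), path[::-1]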
Kruskal's algorithm is the following:
MST-KRUSKAL(G,w)
1. A={}
2. for each vertex v∈ G.V
3. MAKE-SET(v)
4. sort the edges of G.E into nondecreasing order by weight w
5. for each edge (u,v) ∈ G.E, taken in nondecreasing order by weight w
6. if FIND-SET(u)!=FIND-SET(v)
7. A=A U {(u,v)}
8. Union(u,v)
9. return A
According to my textbook:
Initializing the set A in line 1 takes O(1) time, and the time to sort
the edges in line 4 is O(E lgE). The for loop of lines 5-8 performs
O(E) FIND-SET and UNION operations on the disjoint-set forest. Along
with the |V| MAKE-SET operations, these take a total of O((V+E)α(V))
time, where α is a very slowly growing function. Because we assume
that G is connected, we have |E| >= |V|-1, and so the disjoint-set
operations take O(E α(V)) time. Moreover, since α(V)=O(lgV)=O(lgE),
the total running time of Kruskal's algorithm is O(E lgE). Observing
that |E|<|V|^2, we have lg |E|=O(lgV), and so we can restate the
running time of Kruskal's algorithm as O(E lgV).
Could you explain why we deduce that the time to sort the edges in line 4 is O(E lg E)?
Also how do we get that the total time complexity is O((V+E)α(V)) ?
In addition, suppose that all edge weights in a graph are integers from 1 to |V|. How fast can you make Kruskal's algorithm run? What if the edge weights are integers in the range from 1 to W for some constant W?
How does the time complexity depend on the weight of the edges?
EDIT:
In addition, suppose that all edge weights in a graph are integers
from 1 to |V|. How fast can you make Kruskal's algorithm run?
I have thought the following:
In order for Kruskal's algorithm to run faster, we can sort the edges using Counting Sort.
Line 1 requires O(1) time.
Lines 2-3 require O(|V|) time.
Line 4 requires O(|V|+|E|) time.
Lines 5-8 require O(|E|α(|V|)) time.
Line 9 requires O(1) time.
So if we use Counting Sort to sort the edges, the time complexity of Kruskal's algorithm will be O(|V| + |E|) + O(|E|α(|V|)) = O(|V| + |E|α(|V|)).
Could you tell me if my idea is right?
Also:
What if the edge weights are integers in the range from 1 to W for
some constant W?
We will again use Counting Sort. The algorithm will be the same. We find the time complexity as follows:
Line 1 requires O(1) time.
Lines 2-3 require O(|V|) time.
Line 4 requires O(W+|E|) = O(W) + O(|E|) = O(1) + O(|E|) = O(|E|) time, since W is a constant.
Lines 5-8 require O(|E|α(|V|)) time.
Line 9 requires O(1) time.
So the time complexity will be O(|V|) + O(|E|) + O(|E|α(|V|)) = O(|V| + |E|α(|V|)).
Could you explain why we deduce that the time to sort the edges in line 4 is O(E lg E)?
To sort a set of N items we use an O(N lg N) algorithm such as merge sort, heapsort, or (in expectation) quicksort. To sort E edges we therefore need O(E lg E) time. This, however, is not necessary in some cases, as we can use a sorting algorithm with better complexity (read further).
Also how do we get that the total time complexity is O((V+E)α(V))?
I don't think the total complexity is O((V+E)α(V)); that is only the cost of the disjoint-set operations (lines 2-3 and the loop of lines 5-8). The O((V+E)α(V)) bound comes from the V MAKE-SET operations and the E FIND-SET/UNION operations. To see why this is multiplied by α(V), you will need to read an in-depth analysis of the disjoint-set data structure in an algorithms book.
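For reference, here is a compact sketch of a disjoint-set forest with union by rank and path compression, the combination that yields the α(V) factor; the class and method names simply mirror the MAKE-SET/FIND-SET/UNION operations from the pseudocode:

class DisjointSet:
    # A sequence of m MAKE-SET / FIND-SET / UNION operations on n elements runs
    # in O(m * alpha(n)) time with union by rank plus path compression.
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, x):
        self.parent[x] = x
        self.rank[x] = 0

    def find_set(self, x):
        # Path compression: point x and its ancestors directly at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find_set(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find_set(x), self.find_set(y)
        if rx == ry:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1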
How fast can you make Kruskal's algorithm run?
For the first part, line 4, we have O(E lg E) complexity, and for the second part, lines 5-8, we have O((E+V)α(V)) complexity. These two summed up yield O(E lg E). If we use an O(N lg N) sort, this cannot be improved.
What if the edge weights are integers in the range from 1 to W for
some constant W?
If that is the case, then we could use counting sort for the first part, giving line 4 a complexity of O(E+W) = O(E). In that case the algorithm would have O((E+V)α(V)) total complexity. Note, however, that O(E + W) in reality hides a constant that could be rather large and might be impractical for large W.
How does the time complexity depend on the weight of the edges?
As said, if the edge weights are small enough we can use counting sort and speed up the algorithm.
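A minimal sketch of that counting-sort step, assuming the edges are given as (w, u, v) triples with integer weights between 1 and W; the function name is illustrative:

def counting_sort_edges(edges, max_weight):
    # Sort edges by integer weight in O(|E| + W) time, replacing the
    # O(E lg E) comparison sort in line 4 of MST-KRUSKAL.
    buckets = [[] for _ in range(max_weight + 1)]
    for w, u, v in edges:
        buckets[w].append((w, u, v))
    # Concatenating the buckets yields the edges in nondecreasing order of weight.
    return [edge for bucket in buckets for edge in bucket]

Lines 5-9 of MST-KRUSKAL are then run over this sorted list exactly as before.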
EDIT:
In addition, suppose that all edge weights in a graph are integers
from 1 to |V|. How fast can you make Kruskal's algorithm run? I have
thought the following:
In order for Kruskal's algorithm to run faster, we can sort the edges
using Counting Sort.
Line 1 requires O(1) time. Lines 2-3 require O(|V|α(|V|)) time.
Line 4 requires O(|V|+|E|) time. Lines 5-8 require
O(|E|α(|V|)) time. Line 9 requires O(1) time.
Your idea is correct; however, you can make the bounds tighter.
Lines 2-3 require O(|V|) rather than O(|V|α(|V|)) time. We simply bounded it by O(|V|α(|V|)) in the previous calculations to make them easier.
With this you get the time of:
O(1) + O(|V|) + O(|V| + |E|) + O(|E|α(|V|)) + O(1) = O(|V| + |E|) + O(|E|α(|V|))
You can simplify this to either O((|V| + |E|) * α(|V|)) or to O(|V| + |E| * α(|V|)).
So you were indeed correct, and this is an improvement, since O((|V| + |E|) * α(|V|)) < O((|V| + |E|) * lg(|E|)).
The calculations for the range 1 to W are analogous.
I know Prim's algorithm and I know its implementation, but I always skip a part that I want to ask about now. It is written that Prim's algorithm implemented with a Fibonacci heap runs in O(E + V log(V)), and my questions are:
What is a Fibonacci heap, in brief?
How is it implemented?
How can you implement Prim's algorithm with a Fibonacci heap?
A Fibonacci heap is a fairly complex priority queue that has excellent amortized asymptotic behavior on all its operations: insertion, find-min, and decrease-key all run in O(1) amortized time, while delete and extract-min run in O(lg n) amortized time. If you want a good reference on the subject, I strongly recommend picking up a copy of "Introduction to Algorithms, 2nd Edition" by CLRS and reading the chapter on it. It's remarkably well-written and very illustrative. The original paper on Fibonacci heaps by Fredman and Tarjan is available online, and you might want to check it out. It's dense, but gives a good treatment of the material.
If you'd like to see an implementation of Fibonacci heaps and Prim's algorithm, I have to give a shameless plug for my own implementations:
My implementation of a Fibonacci heap.
My implementation of Prim's algorithm using a Fibonacci heap.
The comments in both of these implementations should provide a pretty good description of how they work; let me know if there's anything I can do to clarify!
Prim's algorithm selects the edge with the lowest weight between the group of vertices already selected and the rest of the vertices.
So to implement Prim's algorithm, you need a min-heap. Each time you select an edge, you add the new vertex to the group of vertices you have already chosen, and all its adjacent edges go into the heap.
Then you choose the edge with the minimum value again from the heap.
So the time complexities we get are:
Fibonacci:
Choosing minimum edge = O(time of removing minimum) = O(log(E)) = O(log(V))
Inserting edges into the heap = O(time of inserting an item into the heap) = O(1)
Min heap:
Choosing minimum edge = O(time of removing minimum from heap) = O(log(E)) = O(log(V))
Inserting edges to heap = O(time of inserting item to heap) = O(log(E)) = O(log(V))
(Remember that E = O(V^2), so log(E) = log(V^2) = 2 log(V) = O(log(V)).)
So, in total you have E inserts and V minimum extractions (it is a tree in the end).
So in Min heap you'll get:
O(V log(V) + E log(V)) = O(E log(V))
And in Fibonacci heap you'll get:
O(Vlog(V) + E)
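For concreteness, a short sketch of this edge-based Prim's algorithm with a binary min-heap, using Python's heapq; the adjacency-list format (lists of (weight, vertex) pairs) is an assumption:

import heapq

def prim_min_heap(adj, source=0):
    # Prim's MST with a binary min-heap of edges: O(E log V) overall.
    # adj[u] is a list of (weight, v) pairs for an undirected, connected graph.
    # Returns the list of MST edges as (weight, u, v).
    n = len(adj)
    in_tree = [False] * n
    in_tree[source] = True
    heap = [(w, source, v) for w, v in adj[source]]    # edges leaving the current tree
    heapq.heapify(heap)
    mst = []
    while heap and len(mst) < n - 1:
        w, u, v = heapq.heappop(heap)                  # choose the minimum edge: O(log E)
        if in_tree[v]:
            continue                                   # edge no longer crosses the cut
        in_tree[v] = True
        mst.append((w, u, v))
        for w2, x in adj[v]:
            if not in_tree[x]:
                heapq.heappush(heap, (w2, v, x))       # insert adjacent edges: O(log E)
    return mst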
I implemented Dijkstra using Fibonacci heaps a few years ago, and the problem is pretty similar. Basically, the advantage of Fibonacci heaps is that they make finding the minimum of a set a constant-time operation; so they are very appropriate for Prim's and Dijkstra's algorithms, where at each step you have to perform this operation.
Why it's good
The complexity of those algorithms using a binomial heap (which is the more "standard" way) is O(E * log V), because - roughly - you will try every edge (E), and for each of them you will either add the new vertex to your binomial heap (log V) or decrease its key (log V), and then have to find the minimum of your heap (another log V).
Instead, when you use a Fibonacci heap, the cost of inserting a vertex or decreasing its key is constant, so you only pay O(E) for that. BUT deleting a vertex is O(log V), and since in the end every vertex will be removed, that adds O(V * log V), for a total of O(E + V * log V).
So if your graph is dense enough (e.g., E >> V), using a Fibonacci heap is better than a binomial heap.
How to
The idea is thus to use the Fibonacci heap to store all the vertices accessible from the subtree you have already built, indexed by the weight of the smallest edge leading to them. If you understood the implementation of Prim's algorithm with another data structure, there is no real difficulty in using a Fibonacci heap instead: just use the insert and delete-min methods of the heap as you normally would, and use the decrease-key method to update a vertex when you relax an edge leading to it.
The only hard part is to implement the actual Fibonacci heap.
I can't give you all the implementation details here (that would take pages), but when I did mine I relied heavily on Introduction to Algorithms (Cormen et al.). If you don't have it yet but are interested in algorithms, I highly recommend that you get a copy of it! It's language agnostic, it provides detailed explanations of all the standard algorithms along with their proofs, and it will definitely boost your knowledge and your ability to use them, as well as to design and prove new ones. This PDF (from the Wikipedia page you linked) provides some of the implementation details, but it's definitely not as clear as Introduction to Algorithms.
I have a report and a presentation I wrote after doing that, which explain a bit how to proceed (for Dijkstra; see the end of the presentation for the Fibonacci heap functions in pseudocode), but it's all in French... and my code is in Caml (and French), so I'm not sure if that helps! And if you can understand some of it, please be indulgent: I was just starting programming, so my coding skills were pretty poor at the time...
So I'm curious to know what the running time of the algorithm is on a priority queue implemented with a sorted list/array. I know that for an unsorted list/array it is O(n^2 + m), where n is the number of vertices and m the number of edges, which equates to O(n^2) time. But would it be faster if I used a sorted list/array? What would the running time be? I know extract-min would be constant time.
Well, let's review what we need for Dijkstra's algorithm (for future reference, the numbers of vertices and edges are usually written V and E, as in O(V log E)):
Merging together all the sorted adjacency lists: O(E)
Extract Minimum : O(1)
Decrease Key : O(V)
Dijkstra uses O(V) extract minimum operations, and O(E) decrease key operations, therefore:
O(1)*O(V) = O(V)
O(E)*O(V) = O(EV) = O(V^2)
Taking the most asymptotically significant portion:
Eventual asymptotic runtime is O(V^2).
Can this be made better? Yes. Look into binary heaps, and better implementations of priority queues.
Edit: I actually made a mistake, now that I look at it again. E cannot be any higher than V^2, or in other words E = O(V^2).
Therefore, in the worst-case scenario, the algorithm that we concluded runs in O(EV) actually runs in O(V^2 * V) = O(V^3).
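For illustration, a hedged sketch of the sorted-list variant discussed in the question: the list is kept sorted so extract-min is a pop from the end, decrease-key is handled lazily by skipping stale entries, and each insertion costs O(V):

import bisect

def dijkstra_sorted_list(adj, source):
    # Dijkstra with the priority queue kept as a sorted Python list.
    # adj[u] is a list of (v, w) pairs with non-negative weights.
    # Extract-min is O(1) (a pop from the end), but each insertion costs O(V)
    # because of element shifting, so the overall running time is O(E * V),
    # versus O((E + V) log V) with a binary heap.
    INF = float("inf")
    dist = {source: 0}
    # Entries are (-distance, vertex); keeping the list in ascending order puts
    # the minimum distance at the END, so extract-min is a cheap pop().
    queue = [(0, source)]
    while queue:
        neg_d, u = queue.pop()                 # extract-min: O(1)
        d = -neg_d
        if d > dist.get(u, INF):
            continue                           # stale entry (lazy decrease-key)
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, INF):
                dist[v] = nd
                bisect.insort(queue, (-nd, v)) # keep the list sorted: O(V) per insert
    return dist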
I use a SortedList:
http://blog.devarchive.net/2013/03/fast-dijkstras-algorithm-inside-ms-sql.html
It is about 20-50 times faster than sorting the List once per iteration.