Why does Dijkstra's algorithm use decrease-key? - algorithm

Dijkstra's algorithm was taught to me as follows:

    while pqueue is not empty:
        distance, node = pqueue.delete_min()
        if node has been visited:
            continue
        else:
            mark node as visited
            if node == target:
                break
            for each neighbor of node:
                pqueue.insert(distance + distance_to_neighbor, neighbor)
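For concreteness, here's a runnable sketch of that reinsert-based version using Python's standard-library heapq module (the graph representation and function name are my own choices, not from any particular textbook):

```python
import heapq

def dijkstra(graph, source, target=None):
    """Reinsert-based Dijkstra: stale heap entries are skipped on pop.

    graph: dict mapping node -> list of (neighbor, edge_weight) pairs.
    Returns a dict of shortest distances from source to each reached node.
    """
    dist = {}
    pqueue = [(0, source)]
    while pqueue:
        distance, node = heapq.heappop(pqueue)  # delete_min()
        if node in dist:                        # node has been visited
            continue
        dist[node] = distance                   # mark node as visited
        if node == target:
            break
        for neighbor, weight in graph.get(node, []):
            if neighbor not in dist:
                heapq.heappush(pqueue, (distance + weight, neighbor))
    return dist

graph = {'a': [('b', 1), ('c', 4)],
         'b': [('c', 2), ('d', 6)],
         'c': [('d', 3)],
         'd': []}
print(dijkstra(graph, 'a'))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```

Note that a node can sit in the heap several times with different priorities; only the first (smallest) copy popped matters, which is exactly why the queue can grow to O(m) entries.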
But I've been doing some reading regarding the algorithm, and a lot of versions I see use decrease-key as opposed to insert.
Why is this, and what are the differences between the two approaches?

The reason for using decrease-key rather than reinserting nodes is to keep the number of nodes in the priority queue small, thus keeping the total number of priority queue dequeues small and the cost of each priority queue balance low.
In an implementation of Dijkstra's algorithm that reinserts nodes into the priority queue with their new priorities, one node is added to the priority queue for each of the m edges in the graph. This means that there are m enqueue operations and m dequeue operations on the priority queue, giving a total runtime of O(m Te + m Td), where Te is the time required to enqueue into the priority queue and Td is the time required to dequeue from the priority queue.
In an implementation of Dijkstra's algorithm that supports decrease-key, the priority queue holding the nodes begins with n nodes in it and on each step of the algorithm removes one node. This means that the total number of heap dequeues is n. Each node will have decrease-key called on it potentially once for each edge leading into it, so the total number of decrease-keys done is at most m. This gives a runtime of O(n Te + n Td + m Tk), where Tk is the time required to call decrease-key.
So what effect does this have on the runtime? That depends on what priority queue you use. Here's a quick table that shows off different priority queues and the overall runtimes of the different Dijkstra's algorithm implementations:
Queue | T_e | T_d | T_k | w/o Dec-Key | w/Dec-Key
---------------+--------+--------+--------+-------------+---------------
Binary Heap |O(log N)|O(log N)|O(log N)| O(M log N) | O(M log N)
Binomial Heap |O(log N)|O(log N)|O(log N)| O(M log N) | O(M log N)
Fibonacci Heap | O(1) |O(log N)| O(1) | O(M log N) | O(M + N log N)
As you can see, with most types of priority queues, there really isn't a difference in the asymptotic runtime, and the decrease-key version isn't likely to do much better. However, if you use a Fibonacci heap implementation of the priority queue, then indeed Dijkstra's algorithm will be asymptotically more efficient when using decrease-key.
In short, using decrease-key, plus a good priority queue, can drop the asymptotic runtime of Dijkstra's beyond what's possible if you keep doing enqueues and dequeues.
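To make the contrast concrete, here is one possible sketch of the decrease-key variant: a hand-rolled binary min-heap that tracks each node's position so decrease-key runs in O(log n). The class and method names are mine, and the code assumes nonnegative edge weights:

```python
import math

class IndexedMinHeap:
    """Binary min-heap of (key, node) pairs with position tracking,
    so decrease_key can find a node's slot in O(1) and fix it in O(log n)."""
    def __init__(self):
        self.heap = []   # list of (key, node)
        self.pos = {}    # node -> index in self.heap

    def insert(self, key, node):
        self.heap.append((key, node))
        self.pos[node] = len(self.heap) - 1
        self._sift_up(len(self.heap) - 1)

    def decrease_key(self, node, new_key):
        i = self.pos[node]
        if new_key < self.heap[i][0]:
            self.heap[i] = (new_key, node)
            self._sift_up(i)

    def delete_min(self):
        self._swap(0, len(self.heap) - 1)
        key, node = self.heap.pop()
        del self.pos[node]
        if self.heap:
            self._sift_down(0)
        return key, node

    def _swap(self, i, j):
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.pos[self.heap[i][1]] = i
        self.pos[self.heap[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self.heap[i] < self.heap[(i - 1) // 2]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            smallest = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < n and self.heap[c] < self.heap[smallest]:
                    smallest = c
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest

def dijkstra_decrease_key(graph, source):
    dist = {v: math.inf for v in graph}
    dist[source] = 0
    pq = IndexedMinHeap()
    for v in graph:                       # heap starts with all n nodes
        pq.insert(dist[v], v)
    while pq.heap:
        d, u = pq.delete_min()            # exactly n dequeues in total
        for v, w in graph[u]:
            if d + w < dist[v]:           # with nonnegative weights, this
                dist[v] = d + w           # implies v is still in the heap
                pq.decrease_key(v, d + w) # at most one call per edge
    return dist
```

Compared with the reinsert version, the heap never holds more than n entries, matching the O(n Te + n Td + m Tk) accounting above.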
Besides this point, some more advanced algorithms, such as Gabow's Shortest Paths Algorithm, use Dijkstra's algorithm as a subroutine and rely heavily on the decrease-key implementation. They use the fact that if you know the range of valid distances in advance, you can build a super efficient priority queue based on that fact.
Hope this helps!

In 2007, there was a paper that studied the differences in execution time between using the decrease-key version and the insert version. See http://www.cs.sunysb.edu/~rezaul/papers/TR-07-54.pdf
Their basic conclusion was not to use the decrease-key for most graphs. Especially for sparse graphs, the non-decrease key is significantly faster than the decrease-key version. See the paper for more details.

There are two ways to implement Dijkstra: one uses a heap that supports decrease-key, and the other uses a heap that doesn't.
They are both valid in general, but the latter is usually preferred.
In the following I'll use 'm' to denote the number of edges and 'n' to denote the number of vertices of our graph:
If you want the best possible worst-case complexity, you would go with a Fibonacci heap that supports decrease-key: you'll get a nice O(m + n log n).
If you care about the average case, you could use a binary heap as well: you'll get O(m + n log(m/n) log n). Proof is here, pages 99/100. If the graph is dense (m >> n), both this one and the previous tend to O(m).
If you want to know what happens if you run them on real graphs, you could check this paper, as Mark Meketon suggested in his answer.
What the experiments results will show is that a "simpler" heap will give the best results in most cases.
In fact, among the implementations that use a decrease-key, Dijkstra performs better when using a simple Binary heap or a Pairing heap than when it uses a Fibonacci heap. This is because Fibonacci heaps involve larger constant factors and the actual number of decrease-key operations tends to be much smaller than what the worst case predicts.
For similar reasons, a heap that doesn't have to support a decrease-key operation has even smaller constant factors and actually performs best, especially if the graph is sparse.

Related

Which implementation is best for Prim's algorithm, using a Set or a Priority Queue? Why?

I know about the implementation of both data structures; I want to know which is better considering time complexity.
Both have the same insertion and erase complexity, O(log n), while get-min is O(1) for both.
A priority queue only gives you access to one element in sorted order, i.e., you can get the highest/lowest-priority item, and when you remove that, you can get the next one, and so on. A set allows you full access in sorted order; for example, you can find two elements somewhere in the middle of the set, then traverse in order from one to the other.
In a priority queue you can have multiple elements with the same priority value, while in a set you can't.
A set is generally backed by a binary search tree, while a priority queue is backed by a heap.
So the question is when should you use a binary tree instead of a heap?
In my opinion you should use neither of them. Check binomial and Fibonacci heaps. For Prim's algorithm they will have better performance.
If you insist on using one of them, I would go with the priority queue, as it has a smaller memory footprint and can hold multiple elements with the same priority value.
Theoretically speaking, both will give you an O(E log V)-time algorithm. This is not optimal; Fibonacci heaps give you O(E + V log V), which is better for dense graphs (E >> V).
Practically speaking, neither is ideally suited. Since a set has long-lived iterators, it's possible to implement a decrease-key operation, reducing the extra storage from O(E) to O(V) (the workaround otherwise is to enqueue vertices multiple times), but the space constant is worse than priority_queue's, and the time constant probably is as well. You should measure your use case.
I will second Jim Mischel's recommendation of binary heap (a.k.a., priority_queue) -> pairing heap if the builtin isn't fast enough.
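To show the set-based idea in executable form, here is a sketch in Python (names are mine). With a C++ std::set, erase, insert, and begin() are all O(log n) or better; Python's built-in set is unordered, so the min() below degrades to an O(n) scan, but the bookkeeping is the same: keep at most one (dist, node) entry per vertex by erasing the stale pair before inserting the new one.

```python
import math

def dijkstra_set(graph, source):
    """Set-as-priority-queue Dijkstra: at most one (dist, node) entry
    per vertex, so extra storage is O(V) rather than O(E).
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {v: math.inf for v in graph}
    dist[source] = 0
    pq = {(0, source)}
    while pq:
        d, u = min(pq)                     # with a tree set this is *begin()
        pq.remove((d, u))
        for v, w in graph[u]:
            if d + w < dist[v]:
                pq.discard((dist[v], v))   # erase the stale entry, if any
                dist[v] = d + w
                pq.add((d + w, v))         # reinsert with the new priority
    return dist
```

The erase-then-insert pair is exactly the "decrease-key via long-lived handles" trick the answer above describes.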

Difference between the runtime of Dijkstra's Algorithm: Priority Queue vs. Doubly Linked List

What is the difference, regarding runtime complexity, between the following and why?:
(1) DIJKSTRA's algorithm using regular Priority Queue (Heap)
(2) DIJKSTRA's algorithm using a doubly linked list
(Unless there isn't a difference)
The most general version of Dijkstra's algorithm assumes that you have access to some sort of priority queue structure that supports the following operations:
make-heap(s, n): build a heap of n nodes at initial distance ∞, except for the start node s, which has distance 0.
dequeue-min(): remove and return the element with the lowest priority.
decrease-key(obj, key): given an existing object obj in the priority queue, reduce its priority to the level given by key.
Dijkstra's algorithm requires one call to make-heap, O(n) calls to dequeue-min, and O(m) calls to decrease-key, where n is the number of nodes and m is the number of edges. The overall runtime can actually be given as O(Tm-h + nTdeq + mTd-k), where Tm-h, Tdeq, and Td-k are the average (amortized) costs of doing a make-heap, a dequeue, and a decrease-key, respectively.
Now, let's suppose that your priority queue is a doubly-linked list. There are actually several ways you could use a doubly-linked list as a priority queue: you could keep the nodes sorted by distance, or you could keep them in unsorted order. Let's consider each of these.
In a sorted doubly-linked list, the cost of doing a make-heap is O(n): just insert the start node followed by n - 1 other nodes at distance infinity. The cost of doing a dequeue-min is O(1): just delete the first element. However, the cost of doing a decrease-key is O(n), since if you need to change a node's priority, you may have to move it, and you can't find where to move it without (in the worst case) doing a linear scan over the nodes. This means that the runtime will be O(n + n + nm) = O(mn).
In an unsorted doubly-linked list, the cost of doing a make-heap is still O(n) because you need to create n different nodes. The cost of a dequeue-min is now O(n) because you have to do a linear scan over all the nodes in the list to find the minimum value. However, the cost of a decrease-key is now O(1), since you can just update the node's key in-place. This means that the runtime is O(n + n² + m) = O(n² + m) = O(n²), since the number of edges is never more than O(n²). This is an improvement from before.
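The unsorted-structure variant is easy to sketch in Python (a set plus a distance dict plays the role of the unsorted list; the names are mine):

```python
import math

def dijkstra_unsorted(graph, source):
    """Unsorted-list flavor of Dijkstra: decrease-key is an O(1) dict
    update, dequeue-min is an O(n) scan, giving O(n^2 + m) = O(n^2).
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {v: math.inf for v in graph}
    dist[source] = 0
    pending = set(graph)                          # the "unsorted list"
    while pending:
        u = min(pending, key=lambda v: dist[v])   # O(n) dequeue-min
        pending.remove(u)
        for v, w in graph[u]:
            if v in pending and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w             # O(1) decrease-key
    return dist
```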
With a binary heap, the cost of doing a make-heap is O(n) if you use the standard linear-time heapify algorithm. The cost of doing a dequeue is O(log n), and the cost of doing a decrease-key is O(log n) as well (just bubble the element up until it's in the right place). This means that the runtime of Dijkstra's algorithm with a binary heap is O(n + n log n + m log n) = O(m log n), since if the graph is connected we'll have that m ≥ n.
You can do even better with a Fibonacci heap, in an asymptotic sense. It's a specialized priority queue invented specifically to make Dijkstra's algorithm fast. It can do a make-heap in time O(n), a dequeue-min in time O(log n), and a decrease-key in (amortized) O(1) time. This makes the runtime of Dijkstra's algorithm O(n + n log n + m) = O(m + n log n), though in practice the constant factors make Fibonacci heaps slower than binary heaps.
So there you have it! The different priority queues really do make a difference. It's interesting to see how "Dijkstra's algorithm" is more of a family of algorithms than a single algorithm, since the choice of data structure is so critical to the algorithm running quickly.

Big-O of Dijkstra's Algorithm with D-Ary Heap

I'm looking for a complete walkthrough on the runtime of Dijkstra's algorithm when implemented with a D-Ary heap.
My best understanding as of now is that the depth of the tree is at most log_d(n), so the max time of insertion and bubbling up is log_d(n). Wouldn't bubble down be the same on deleting a node?
I'm just having trouble piecing things together to find the total Big-O runtime here. My understanding is that it should be O(m log_{m/n} n), but I'd like a walkthrough of why that is the case.
In a d-ary heap, up-heaps (e.g., insert, decrease-key if you track heap nodes as they move around) take time O(log_d n) and down-heaps (e.g., delete-min) take time O(d log_d n), where n is the number of nodes. The reason that down-heaps are more expensive is that we have to find the minimum child to promote, whereas up-heaps just compare with the parent.
Assuming a connected graph, Dijkstra uses at most m - (n - 1) decrease-keys and at most n - 1 inserts/deletes (assuming that we never insert the root). The running time of Dijkstra using a d-ary heap as a priority queue is thus O((m + n d) log_d n), which is worth it for dense graphs.
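A minimal sketch of the two sift directions (class and layout are my own), showing why up-heaps cost O(log_d n) comparisons while down-heaps cost O(d log_d n):

```python
class DaryHeap:
    """d-ary min-heap stored in a flat array: the parent of index i is
    (i - 1) // d, and its children are d*i + 1 ... d*i + d."""
    def __init__(self, d=4):
        self.d = d
        self.a = []

    def insert(self, key):
        # Up-heap: one comparison per level, O(log_d n) total.
        self.a.append(key)
        i = len(self.a) - 1
        while i > 0 and self.a[i] < self.a[(i - 1) // self.d]:
            parent = (i - 1) // self.d
            self.a[i], self.a[parent] = self.a[parent], self.a[i]
            i = parent

    def delete_min(self):
        # Down-heap: scan all d children per level to find the minimum
        # one to promote, O(d log_d n) total.
        top = self.a[0]
        last = self.a.pop()
        if self.a:
            self.a[0] = last
            i, n = 0, len(self.a)
            while True:
                first = self.d * i + 1
                if first >= n:
                    break
                smallest = min(range(first, min(first + self.d, n)),
                               key=lambda c: self.a[c])
                if self.a[i] <= self.a[smallest]:
                    break
                self.a[i], self.a[smallest] = self.a[smallest], self.a[i]
                i = smallest
        return top
```

Plugging these costs into Dijkstra's operation counts (O(m) up-heaps for decrease-key, O(n) down-heaps for delete-min) gives the O((m + nd) log_d n) bound above.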

How to implement Prim's algorithm with a Fibonacci heap?

I know Prim's algorithm and I know its implementation, but I always skip a part that I want to ask about now. It was written that Prim's algorithm implemented with a Fibonacci heap is O(E + V log(V)), and my question is:
what is a Fibonacci heap in brief?
How is it implemented? And
How can you implement Prim's algorithm with a Fibonacci heap?
A Fibonacci heap is a fairly complex priority queue that has excellent amortized asymptotic behavior on all its operations - insertion, find-min, and decrease-key all run in O(1) amortized time, while delete and extract-min run in amortized O(lg n) time. If you want a good reference on the subject, I strongly recommend picking up a copy of "Introduction to Algorithms, 2nd Edition" by CLRS and reading the chapter on it. It's remarkably well-written and very illustrative. The original paper on Fibonacci heaps by Fredman and Tarjan is available online, and you might want to check it out. It's dense, but gives a good treatment of the material.
If you'd like to see an implementation of Fibonacci heaps and Prim's algorithm, I have to give a shameless plug for my own implementations:
My implementation of a Fibonacci heap.
My implementation of Prim's algorithm using a Fibonacci heap.
The comments in both of these implementations should provide a pretty good description of how they work; let me know if there's anything I can do to clarify!
Prim's algorithm selects the edge with the lowest weight between the group of vertices already selected and the rest of the vertices.
So to implement Prim's algorithm, you need a minimum heap. Each time you select an edge, you add the new vertex to the group of vertices you've already chosen, and all its adjacent edges go into the heap.
Then you choose the edge with the minimum value again from the heap.
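That loop can be sketched in runnable Python using the lazy (reinsert-and-skip) style with the standard heapq module rather than decrease-key; the adjacency format and function name are my own:

```python
import heapq

def prim_mst_weight(graph, start):
    """Prim's algorithm: repeatedly take the minimum-weight edge crossing
    from the chosen vertices to the rest. graph: node -> [(neighbor, w)],
    undirected (each edge listed in both directions).
    Returns the total weight of the minimum spanning tree."""
    chosen = {start}
    heap = [(w, v) for v, w in graph[start]]    # all edges out of start
    heapq.heapify(heap)
    total = 0
    while heap and len(chosen) < len(graph):
        w, v = heapq.heappop(heap)              # minimum crossing edge
        if v in chosen:
            continue                            # stale: both ends chosen
        chosen.add(v)
        total += w
        for u, wu in graph[v]:                  # adjacent edges into heap
            if u not in chosen:
                heapq.heappush(heap, (wu, u))
    return total
```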
So the time complexities we get are:
Fibonacci:
Choosing minimum edge = O(time of removing minimum) = O(log(E)) = O(log(V))
Inserting edges into heap = O(time of inserting item into heap) = O(1)
Min heap:
Choosing minimum edge = O(time of removing minimum from heap) = O(log(E)) = O(log(V))
Inserting edges into heap = O(time of inserting item into heap) = O(log(E)) = O(log(V))
(Remember that E ≈ V², so log(E) = log(V²) = 2 log(V) = O(log(V)).)
So, in total you have E inserts and V minimum extractions (it's a tree in the end).
So with a min heap you'll get:
O(V log(V) + E log(V)) = O(E log(V))
And with a Fibonacci heap you'll get:
O(V log(V) + E)
I implemented Dijkstra using Fibonacci heaps a few years ago, and the problem is pretty similar. Basically, the advantage of Fibonacci heaps is that it makes finding the minimum of a set a constant operation; so that's very appropriate for Prim and Dijkstra, where at each step you have to perform this operation.
Why it's good
The complexity of those algorithms using a binomial heap (which is the more "standard" way) is O(E * log V), because - roughly - you will try every edge (E), and for each of them you will either add the new vertex to your binomial heap (log V) or decrease its key (log V), and then have to find the minimum of your heap (another log V).
Instead, when you use a Fibonacci heap the cost of inserting a vertex or decreasing its key in your heap is constant so you only have a O(E) for that. BUT deleting a vertex is O(log V), so since in the end every vertex will be removed that adds a O(V * log V), for a total O(E + V * log V).
So if your graph is dense enough (eg E >> V), using a Fibonacci heap is better than a binomial heap.
How to
The idea is thus to use the Fibonacci heap to store all the vertices accessible from the subtree you already built, indexed by the weight of the smallest edge leading to it. If you understood the implementation of Prim's algorithm using another data structure, there is no real difficulty in using a Fibonacci heap instead: just use the insert and delete-min methods of the heap as you would normally, and use the decrease-key method to update a vertex when you relax an edge leading to it.
The only hard part is to implement the actual Fibonacci heap.
I can't give you all the implementation details here (that would take pages), but when I did mine I relied heavily on Introduction to Algorithms (Cormen et al). If you don't have it yet but are interested in algorithms, I highly recommend that you get a copy of it! It's language-agnostic, and it provides detailed explanations about all the standard algorithms, as well as their proofs, and will definitely boost your knowledge and ability to use all of them, and design and prove new ones. This PDF (from the Wikipedia page you linked) provides some of the implementation details, but it's definitely not as clear as Introduction to Algorithms.
I have a report and a presentation I wrote after doing that, that explain a bit how to proceed (for Dijkstra - see the end of the ppt for the Fibonacci heap functions in pseudo-code) but it's all in French... and my code is in Caml (and French) so I'm not sure if that helps!!! And if you can understand something of it please be indulgent, I was just starting programming so my coding skills were pretty poor at the time...

Running time for Dijkstra's algorithm on a priority queue implemented by sorted list/array

So I'm curious to know what the running time for the algorithm is on a priority queue implemented by a sorted list/array. I know for an unsorted list/array it is O(n² + m), where n is the number of vertices and m the number of edges, which equates to O(n²) time. But would it be faster if I used a sorted list/array? What would the running time be? I know extract-min would be constant time.
Well, let's review what we need for Dijkstra's algorithm (for future reference, vertices and edges are usually denoted V and E, as in O(V log E)):
Merging together all the sorted adjacency lists: O(E)
Extract Minimum : O(1)
Decrease Key : O(V)
Dijkstra uses O(V) extract minimum operations, and O(E) decrease key operations, therefore:
O(1)*O(V) = O(V)
O(E)*O(V) = O(EV) = O(V^2)
Taking the most asymptotically significant portion:
Eventual asymptotic runtime is O(V^2).
Can this be made better? Yes. Look into binary heaps, and better implementations of priority queues.
Edit: I actually made a mistake, now that I look at it again. E cannot be any higher than V^2, or in other words E = O(V^2).
Therefore, in the worst-case scenario, the algorithm that we concluded runs in O(EV) is actually O(V² * V) = O(V³).
I use a SortedList:
http://blog.devarchive.net/2013/03/fast-dijkstras-algorithm-inside-ms-sql.html
It is about 20-50 times faster than re-sorting the list once per iteration.