Min Fibonacci Heap - How to implement increase-key operation? - algorithm

I have been trying to implement heap data structures for use in my research work. As part of that, I am trying to implement an increase-key operation for min-heaps. I know that min-heaps generally support decrease-key. I was able to write the increase-key operation for a binary min-heap, wherein I repeatedly exchange the increased key with its smallest child.
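For concreteness, here is a minimal sketch of that binary-min-heap increase-key on an array-backed heap of ints (the function name and layout are my own, not from any particular library):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Increase the key stored at index i of an array-backed binary min-heap,
// then restore the heap property by sifting the node down: repeatedly swap
// it with its smaller child while a child is smaller. O(log n).
void increaseKey(std::vector<int>& heap, std::size_t i, int newKey) {
    heap[i] = newKey;  // assumes newKey >= the current key at i
    for (;;) {
        std::size_t smallest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < heap.size() && heap[l] < heap[smallest]) smallest = l;
        if (r < heap.size() && heap[r] < heap[smallest]) smallest = r;
        if (smallest == i) break;
        std::swap(heap[i], heap[smallest]);
        i = smallest;
    }
}
```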
In the case of the Fibonacci heap, this reference says that it also supports an increase-key operation. But I couldn't find anything about it in the original paper on Fibonacci heaps, nor in CLRS (Introduction to Algorithms by Cormen et al.).
Can someone tell me how I can go about implementing the increase-key operation efficiently and also without disturbing the data structure's amortized bounds for all the other operations?

First, note that increase-key must be O(log n) if we wish insert and find-min to stay O(1), as they are in a Fibonacci heap.
If it weren't, you'd be able to sort in O(n) time: do n inserts, then repeatedly use find-min to read off the minimum and call increase-key on it with some value ω satisfying ω > x for every key x, pushing it to the end.
Now, knowing that increase-key must be O(log n), we can provide a straightforward, asymptotically optimal implementation for it. To increase a node v to value x, first decrease-key(v, −∞), then delete-min(), followed by insert(v, x).
Refer here
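A sketch of that recipe, assuming a Fibonacci heap type that exposes the usual decrease-key, extract-min and insert operations (the member names and handle type here are illustrative, not a specific library's API):

```cpp
#include <limits>

// increase-key built from operations a Fibonacci heap already provides,
// as described above. `Heap` is assumed to expose decrease_key, extract_min
// and insert with their usual meanings; `NodeHandle` is the heap's handle type.
template <typename Heap, typename NodeHandle>
auto increase_key(Heap& heap, NodeHandle node, double new_key) {
    // 1. Push the node to the very front: decrease its key to -infinity. O(1) amortized.
    heap.decrease_key(node, -std::numeric_limits<double>::infinity());
    // 2. Remove it: it is now guaranteed to be the minimum. O(log n) amortized.
    heap.extract_min();
    // 3. Re-insert it with the larger key. O(1) amortized.
    return heap.insert(new_key);
}
```

The cost is dominated by the extract-min, so the whole operation is O(log n) amortized, matching the lower-bound argument above.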

Related

Delete and Increase key for Binomial heap

I am currently studying binomial heaps.
I learned that the following operations on a binomial heap can be completed in Θ(log n) time:
Get-max
Insert
Extract Max
Merge
Increase-Key
Delete
But for the two operations Increase-key and Delete, it is said that they need a pointer to the element in order to complete in Θ(log n) time.
Here are 3 questions I want to ask:
Is this because, if Increase-key and Delete don't have a pointer to the element, they have to search for it before the operation takes place?
What is the time complexity of searching for an element in a binomial heap? (I believe O(n).)
If a pointer to the element is not given for the Increase-key and Delete operations, will those two operations take O(n) time, or can it be lower than that?
It's good that you're thinking about this!
Yes, that's exactly right. The nodes in a binomial heap are organized in a way that makes it very quick to find the minimum value, but the relative ordering of the remaining elements is not constrained in a way that makes it easy to find a particular one.
There isn't a general way to search a binomial heap for an element faster than O(n). Or, stated differently, the worst-case cost of any way of searching a binomial heap is Ω(n). Here's one way to see this. Form a binomial heap where n-1 items have priority 137 and one item has priority 42. The item with priority 42 must be a leaf node. There are (roughly) n/2 leaves in the heap, and since there is no ordering on them to find that one item you'd have to potentially look at all the leaves. To formalize this, you could form multiple different binomial heaps with these items, and whatever algorithm was looking for the item of priority 42 would necessarily have to find it in the last place it looks at least once.
For the reasons given above, no, there's no way to implement those operations quickly without having pointers to the elements, since in the worst case you have to search everywhere.
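To make the "you may have to look everywhere" point concrete, here is a sketch of about the best a search can do: prune subtrees using heap order, which still degrades to Ω(n) on inputs like the one above (node layout and names are mine; max-heap order to match the question's Get-max flavor):

```cpp
#include <vector>

// Binomial-heap node for a max-heap; each node's children all have keys <= its own.
struct Node {
    int key;
    std::vector<Node*> children;
};

// Search one tree. Heap order only lets us prune a subtree whose root key is
// already smaller than the target; on the "n-1 copies of 137, one 42" heap
// above that never helps, so the worst case visits Omega(n) nodes.
bool contains(const Node* root, int target) {
    if (root == nullptr || root->key < target) return false;  // everything below is <= root->key
    if (root->key == target) return true;
    for (const Node* child : root->children)
        if (contains(child, target)) return true;
    return false;
}

// Search the whole heap: a binomial heap is a list of tree roots.
bool heapContains(const std::vector<Node*>& roots, int target) {
    for (const Node* r : roots)
        if (contains(r, target)) return true;
    return false;
}
```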

Is it possible to create a LinkedList implementation with O(1) Insertion, Deletion, and O(1) Access to Minimum?

Assume you have unlimited space for this problem. I believe I was shown the solution and I have completely forgotten it. If I recall correctly, one solution involved a stack to keep track of the min, and the other involved adding data values to the LinkedList node.
A min-heap implementation would give O(log n) insertion and deletion, but is there a way to make them O(1)?
What would be the implementation of a data structure that can do this, if it is even possible?
If you had such a data structure, you could sort n items using O(n) comparisons: add them to the list, then repeatedly find the minimum and remove it. That would contradict the Ω(n log n) lower bound for comparison-based sorting.
So it's not possible in general for this data structure to exist with these performance bounds.
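For what it's worth, the "stack to keep track of the min" the question half-remembers is most likely the classic min-stack, which does give O(1) push, O(1) pop of the most recent element, and O(1) get-min; it doesn't contradict the argument above because it never removes the minimum, only the newest element. A minimal sketch (names mine):

```cpp
#include <algorithm>
#include <stack>
#include <utility>

// Classic min-stack: each entry stores the element together with the minimum
// of everything at or below it, so getMin is always available at the top.
class MinStack {
    std::stack<std::pair<int, int>> s_;  // (value, running minimum)
public:
    void push(int x) {
        int m = s_.empty() ? x : std::min(x, s_.top().second);
        s_.push({x, m});
    }
    void pop()          { s_.pop(); }            // removes the most recent element only
    int  top() const    { return s_.top().first; }
    int  getMin() const { return s_.top().second; }
    bool empty() const  { return s_.empty(); }
};
```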

Which implementation is best for Prim's algorithm, using Set or Priority Queue? Why?

I know about the implementation of both data structures; I want to know which is better considering time complexity.
Both have the same insertion and erase complexity, O(log n), while get-min is O(1) for both.
A priority queue only gives you access to one element in sorted order, i.e., you can get the highest/lowest-priority item, and when you remove that, you can get the next one, and so on. A set allows you full access in sorted order, for example, to find two elements somewhere in the middle of the set and then traverse in order from one to the other.
In a priority queue you can have multiple elements with the same priority value, while in a set you can't.
A set is generally backed by a binary search tree, while a priority queue is backed by a heap.
So the question is: when should you use a binary search tree instead of a heap?
In my opinion you should use neither of them. Check out binomial and Fibonacci heaps; for Prim's algorithm they will have better performance.
If you insist on using one of them, I would go with the priority queue, as it has a smaller memory footprint and can hold multiple elements with the same priority value.
Theoretically speaking, both will give you an O(E log V)-time algorithm. This is not optimal; Fibonacci heaps give you O(E + V log V), which is better for dense graphs (E >> V).
Practically speaking, neither is ideally suited. Since std::set has long-lived iterators, it's possible to implement a decrease-key operation with it, reducing the extra storage from O(E) to O(V) (with priority_queue, the workaround is to enqueue vertices multiple times), but the space constant is worse than priority_queue's, and the time constant probably is as well. You should measure your use case.
I will second Jim Mischel's recommendation of a binary heap (i.e., priority_queue), moving to a pairing heap if the builtin isn't fast enough.
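For reference, here is a minimal sketch of the "enqueue vertices multiple times" workaround mentioned above, i.e., Prim's algorithm with std::priority_queue and lazy deletion (graph representation and names are my own):

```cpp
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Prim's MST with std::priority_queue and lazy deletion: a vertex may be
// pushed several times; stale entries are skipped when popped.
// Runs in O(E log E) = O(E log V) time with O(E) extra space.
long long primMstWeight(int n,
                        const std::vector<std::vector<std::pair<int, int>>>& adj) {
    // adj[u] = list of (neighbor, edge weight); graph assumed connected.
    std::vector<bool> inTree(n, false);
    using Entry = std::pair<int, int>;  // (key, vertex), ordered by key
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;
    pq.push({0, 0});                    // start from vertex 0 with key 0
    long long total = 0;
    while (!pq.empty()) {
        auto [key, u] = pq.top();
        pq.pop();
        if (inTree[u]) continue;        // stale entry: skip it
        inTree[u] = true;
        total += key;
        for (auto [v, w] : adj[u])
            if (!inTree[v]) pq.push({w, v});  // "decrease-key" by re-inserting
    }
    return total;
}
```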

Best algorithm/data structure for a continually updated priority queue

I need to frequently find the minimum value object in a set that's being continually updated. I need to have a priority queue type of functionality. What's the best algorithm or data structure to do this? I was thinking of having a sorted tree/heap, and every time the value of an object is updated, I can remove the object, and re-insert it into the tree/heap. Is there a better way to accomplish this?
A binary heap is hard to beat for simplicity, but it has the disadvantage that decrease-key takes O(n) time. I know, the standard references say that it's O(log n), but first you have to find the item. That's O(n) for a standard binary heap.
By the way, if you do decide to use a binary heap, changing an item's priority doesn't require a remove and re-insert. You can change the item's priority in-place and then either bubble it up or sift it down as required.
If the performance of decrease-key is important, a good alternative is a pairing heap, which is theoretically slower than a Fibonacci heap, but is much easier to implement and in practice is faster than the Fibonacci heap due to lower constant factors. In practice, pairing heap compares favorably with binary heap, and outperforms binary heap if you do a lot of decrease-key operations.
You could also marry a binary heap and a dictionary or hash map, and keep the dictionary updated with the position of the item in the heap. This gives you faster decrease-key at the cost of more memory and increased constant factors for the other operations.
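A minimal sketch of that "heap plus hash map of positions" idea, assuming integer item ids and integer keys (the class and method names are mine):

```cpp
#include <unordered_map>
#include <utility>
#include <vector>

// Binary min-heap plus a hash map from item id -> index in the heap array,
// so decrease-key (or increase-key) can locate an item in O(1) and fix the
// heap up in O(log n). Sketch only: ids are assumed unique and present.
class IndexedMinHeap {
    std::vector<std::pair<int, int>> heap_;  // (key, id)
    std::unordered_map<int, std::size_t> pos_;  // id -> index in heap_

    void swapNodes(std::size_t a, std::size_t b) {
        std::swap(heap_[a], heap_[b]);
        pos_[heap_[a].second] = a;
        pos_[heap_[b].second] = b;
    }
    void bubbleUp(std::size_t i) {
        while (i > 0 && heap_[(i - 1) / 2].first > heap_[i].first) {
            swapNodes(i, (i - 1) / 2);
            i = (i - 1) / 2;
        }
    }
    void siftDown(std::size_t i) {
        for (;;) {
            std::size_t smallest = i, l = 2 * i + 1, r = 2 * i + 2;
            if (l < heap_.size() && heap_[l].first < heap_[smallest].first) smallest = l;
            if (r < heap_.size() && heap_[r].first < heap_[smallest].first) smallest = r;
            if (smallest == i) return;
            swapNodes(i, smallest);
            i = smallest;
        }
    }

public:
    void insert(int id, int key) {
        heap_.push_back({key, id});
        pos_[id] = heap_.size() - 1;
        bubbleUp(heap_.size() - 1);
    }
    std::pair<int, int> extractMin() {  // returns (key, id); assumes non-empty
        auto top = heap_.front();
        swapNodes(0, heap_.size() - 1);
        heap_.pop_back();
        pos_.erase(top.second);
        if (!heap_.empty()) siftDown(0);
        return top;
    }
    void changeKey(int id, int newKey) {  // handles both decrease and increase
        std::size_t i = pos_.at(id);
        int old = heap_[i].first;
        heap_[i].first = newKey;
        if (newKey < old) bubbleUp(i); else siftDown(i);
    }
    bool empty() const { return heap_.empty(); }
};
```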
Quoting Wikipedia:
To improve performance, priority queues typically use a heap as their backbone, giving O(log n) performance for inserts and removals, and O(n) to build initially. Alternatively, when a self-balancing binary search tree is used, insertion and removal also take O(log n) time, although building trees from existing sequences of elements takes O(n log n) time; this is typical where one might already have access to these data structures, such as with third-party or standard libraries.
If you are looking for a better way, there must be something special about the objects in your priority queue. For example, if the keys are numbers from 1 to 10, a countsort-based approach may outperform the usual ones.
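A sketch of that countsort-flavored idea, i.e., a bucket queue for keys known to lie in a small range (the names and the string payload are illustrative):

```cpp
#include <queue>
#include <string>
#include <vector>

// Bucket-based priority queue for keys in [0, maxKey]. insert is O(1);
// extract-min is O(maxKey) in the worst case, but effectively O(1) amortized
// when extractions sweep the buckets roughly in order of increasing key.
class BucketQueue {
    std::vector<std::queue<std::string>> buckets_;  // buckets_[k] holds items with key k
    int size_ = 0;

public:
    explicit BucketQueue(int maxKey) : buckets_(maxKey + 1) {}

    void insert(int key, const std::string& item) {
        buckets_[key].push(item);
        ++size_;
    }
    std::string extractMin() {  // assumes !empty()
        for (auto& b : buckets_)
            if (!b.empty()) {
                std::string item = b.front();
                b.pop();
                --size_;
                return item;
            }
        return {};  // not reached when non-empty
    }
    bool empty() const { return size_ == 0; }
};
```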
If your application looks anything like repeatedly choosing the next scheduled event in a discrete event simulation, you might consider the options listed in e.g. http://en.wikipedia.org/wiki/Discrete_event_simulation and http://www.acm-sigsim-mskr.org/Courseware/Fujimoto/Slides/FujimotoSlides-03-FutureEventList.pdf. The latter summarizes results from different implementations in this domain, including many of the options considered in other comments and answers, and a search will find a number of papers in this area. Priority queue overhead really does make a difference in how many times faster than real time you can get your simulation to run, and if you wish to simulate something that takes weeks of real time this can be important.

Priority Queue - Skip List vs. Fibonacci Heap

I am interested in implementing a priority queue to enable an efficient A* implementation that is also relatively simple (I mean that the priority queue itself is simple).
It seems that because a skip list offers a simple O(1) extract-min operation and an O(log n) insert operation, it may be competitive with the more difficult-to-implement Fibonacci heap, which has O(log n) extract-min and O(1) insert. I suppose the skip list would be better for a graph with sparse nodes, whereas a Fibonacci heap would be better for an environment with more densely connected nodes.
This would probably make the Fibonacci heap usually better, but am I correct in assuming that, big-O-wise, these would be similar?
The raison d'etre of the Fibonacci heap is the O(1) decrease-key operation, enabling Dijkstra's algorithm to run in time O(|V| log |V| + |E|). In practice, however, if I needed an efficient decrease-key operation, I'd use a pairing heap, since the Fibonacci heap has awful constants. If your keys are small integers, it may be even better just to use bins.
Fibonacci heaps are very very very slow except for very very very very large and dense graphs (on the order of hundreds of millions of edges). They are also notoriously difficult to implement correctly.
On the other hand, skip lists are very nice data structures and relatively simple to implement.
However, I wonder why you're not considering using a simple binary heap. I believe binary-heap-based priority queues are even faster than skip-list-based priority queues. Skip lists are mainly used to take advantage of concurrency.

Resources