How to convert a binary heap to binomial queue - algorithm

Given a binary heap, how can I convert it to a binomial queue in linear time, i.e. O(n)? I thought of splitting the heap, but I got stuck because each deletion takes O(lg n) time.

Assuming that you have access to the backing array that contains the binary heap and that you can iterate over it in O(n) time, you can create your binomial heap simply by doing n inserts. As the Wikipedia article says:
Inserting a new element to a heap can be done by simply creating a new heap containing only this element and then merging it with the original heap. Due to the merge, insert takes O(log n) time. However, across a series of n consecutive insertions, insert has an amortized time of O(1) (i.e. constant).
In other words, doing n inserts into the binomial heap will require O(n) time.
You cannot do this in O(n) time by using the standard binary heap remove operation. As you noted, that would be O(log n) for each removal, resulting in O(n log n) complexity.
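For concreteness, here is a minimal Python sketch of that insert-based conversion. The BinomialHeap class, the link helper and binary_heap_to_binomial are my own illustrative names, not from any library; insert works like incrementing a binary counter, which is the argument behind the O(1) amortized bound over n consecutive inserts.

    class Node:
        def __init__(self, key):
            self.key = key
            self.degree = 0
            self.children = []  # roots of sub-trees, in increasing degree order

    def link(a, b):
        # Combine two binomial trees of equal degree; the smaller key becomes the root.
        if b.key < a.key:
            a, b = b, a
        a.children.append(b)
        a.degree += 1
        return a

    class BinomialHeap:
        def __init__(self):
            self.trees = {}  # degree -> root of the (unique) tree of that degree

        def insert(self, key):
            # Like incrementing a binary counter: keep linking while a tree of equal
            # degree already exists. Amortized O(1) over a sequence of n inserts.
            t = Node(key)
            while t.degree in self.trees:
                t = link(self.trees.pop(t.degree), t)
            self.trees[t.degree] = t

        def find_min(self):
            return min(t.key for t in self.trees.values())

    def binary_heap_to_binomial(backing_array):
        # One pass over the binary heap's backing array: O(n) in total.
        h = BinomialHeap()
        for x in backing_array:
            h.insert(x)
        return h

The order in which the array is scanned does not matter; the total number of link operations across all n inserts is at most n, which is why the whole conversion stays linear.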

Related

Find log n greatest entries in O(n) time

Is there a way to find the log n greatest elements in an array with n elements in O(n) time?
I would create an array-based HeapPriorityQueue, because if all elements are available up front, the heap can be created in O(n) time using bottom-up heap construction.
Then removing the first element of this priority queue should take O(1) time, shouldn't it?
That will be O(log n), since you actually remove the first element; merely looking at it without removing is O(1). Repeating this removal log n times will be O(log^2 n), which is still in O(n), so this solution will indeed meet the requirements.
Another option is to use a selection algorithm to find the log(n)-th biggest element directly, which will be O(n) as well.
Basically, yes. The creation of the heap takes O(n) and this dominates the algorithm.
Removing the first element may take either O(1), if the heap does not restore the heap property after removal, or O(log n) if it does. Either way, removing log(n) elements from the heap costs O(log n * log n) with restoring and O(log n) without, both of which are sublinear.
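A rough sketch of that approach with Python's heapq (the function name is mine): heapify builds the heap bottom-up in O(n), negating the keys simulates a max-heap, and ceil(log2 n) pops cost O(log^2 n), so the O(n) construction dominates.

    import heapq
    import math

    def log_n_greatest(values):
        heap = [-v for v in values]      # negate keys to get a max-heap
        heapq.heapify(heap)              # bottom-up construction: O(n)
        k = max(1, math.ceil(math.log2(len(values))))
        # k = O(log n) removals at O(log n) each: O(log^2 n), sublinear
        return [-heapq.heappop(heap) for _ in range(k)]

    print(log_n_greatest(list(range(1000))))  # the 10 greatest values of 0..999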

amortized analysis on a binary heap

A regular binary heap has an operation extract_min which is O(log n) worst-case time. Suppose the amortized cost of extract_min is O(1), and let n be the size of the heap.
Now consider a sequence of n extract_min operations performed on a heap that initially contained n elements. Does this mean that the entire sequence would be processed in O(n) time, since each operation is amortized O(1)?
Let's get this out of the way first: removing ALL the elements from a heap via extract_min operations takes O(N log N) time.
This is a fact, so when you ask "Does constant amortized time extract_min imply linear time for removing all the elements?", what you are really asking is "Can extract_min take constant amortized time even though it takes O(N log N) time to extract all the elements?"
The answer to this actually depends on what operations the heap supports.
If the heap supports only the add and extract_min operations, then every extract_min that doesn't fail (in constant time) must correspond to a previous add. We can then say that add takes amortized O(log N) time and extract_min takes amortized O(1) time, because we can assign all of extract_min's non-constant costs to a previous add.
If the heap also supports an O(N)-time make_heap operation (amortized or not), however, then it's possible to perform N extract_min operations, without doing much else, whose cost adds up to O(N log N). The whole O(N log N) cost would then have to be assigned to the N extract_min operations, and we could not claim that extract_min takes amortized constant time.
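To see where the O(N log N) comes from, note that make_heap followed by N extract_min operations is exactly heapsort, and sorting N comparable items requires Ω(N log N) comparisons. A small Python sketch, with heapq.heapify standing in for make_heap:

    import heapq

    def heapsort(values):
        heap = list(values)
        heapq.heapify(heap)   # make_heap: O(N), amortized or not
        # N extract_min operations: Theta(N log N) in total, so in this sequence
        # their cost cannot be charged to anything else.
        return [heapq.heappop(heap) for _ in range(len(heap))]

    assert heapsort([5, 1, 4, 2, 3]) == [1, 2, 3, 4, 5]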

What are the worst-case time bounds for each operation on a Fibonacci heap?

Fibonacci heaps are efficient in an amortized sense, but how efficient are they in the worst case? Specifically, what is the worst-case time complexity of each of these operations on an n-node Fibonacci heap?
find min
delete-min
insert
decrease-key
merge
The find-min operation on a Fibonacci heap always takes worst-case O(1) time. There's always a pointer maintained that directly points to that object.
The cost of a delete-min in the worst case is Θ(n). To see this, imagine starting with an empty heap and doing a series of n insertions into it. Each node will be stored in its own tree, and the next delete-min will coalesce all these nodes into O(log n) trees, requiring Θ(n) work to visit every node at least once.
The cost of an insertion is worst-case O(1); this is just creating a single node and adding it to the list. A merge is similarly O(1) since it just splices two lists together.
The cost of a decrease-key in the worst case is Θ(n). It's possible to build a degenerate Fibonacci heap in which all the elements are stored in a single tree consisting of a linked list of n marked nodes. Doing a decrease-key on the bottommost node then triggers a run of cascading cuts that will convert the tree into n independent nodes.
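To illustrate why insert and merge are worst-case O(1), here is a sketch of just the lazy parts of a Fibonacci-style heap, under my own naming and with delete-min and decrease-key deliberately omitted: both operations only splice root lists and update the min pointer.

    class _RootNode:
        def __init__(self, key):
            self.key = key
            self.next = None

    class LazyHeap:
        def __init__(self):
            self.head = None   # singly linked root list
            self.tail = None
            self.min = None    # direct pointer to the minimum root: find-min is O(1)

        def insert(self, key):
            # Append a one-node tree to the root list: a constant amount of work.
            node = _RootNode(key)
            if self.tail is None:
                self.head = self.tail = node
            else:
                self.tail.next = node
                self.tail = node
            if self.min is None or key < self.min.key:
                self.min = node

        def find_min(self):
            return None if self.min is None else self.min.key

        def merge(self, other):
            # Splice the two root lists: a constant number of pointer updates.
            if other.head is None:
                return
            if self.head is None:
                self.head, self.tail, self.min = other.head, other.tail, other.min
            else:
                self.tail.next = other.head
                self.tail = other.tail
                if other.min.key < self.min.key:
                    self.min = other.min
            other.head = other.tail = other.min = None

All the deferred work (consolidating the roots into O(log n) trees) is paid for by delete-min, which is exactly why that operation can degenerate to Θ(n).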
I almost agree with the great answer from @templatetypedef.
There cannot be a tree of a classical Fibonacci heap with n marked nodes. That would mean the height of the tree is O(n), but since for each subtree of rank k its children have ranks ≥ 0, ≥ 1, ..., ≥ k-1, it is easy to see that the depth of the tree is at most O(log n). Therefore a single decrease-key operation can cost at most O(log n).
I checked this thread, and the construction it describes requires some modification of the Fibonacci heap, as it has marked nodes in the root list and performs operations that do not belong to the standard Fibonacci heap.

Time complexity to find an element if it is already present in a binary heap?

What will be the time complexity of finding an element if it is already present in a binary heap?
I think traversal operations are not possible in heap trees!
In the worst case, one would end up traversing the entire binary heap to search for an element, and therefore the time complexity would be O(N), i.e. linear in the number of elements in the binary heap.
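A minimal sketch of that worst case (the function name is mine): the heap order says nothing about where an arbitrary key sits in the backing array, so a search has to scan it.

    def heap_contains(heap_array, target):
        # Worst case O(N): every slot may have to be inspected.
        return any(x == target for x in heap_array)

    # The array [1, 3, 2, 7, 4] is a valid min-heap.
    print(heap_contains([1, 3, 2, 7, 4], 7))   # True
    print(heap_contains([1, 3, 2, 7, 4], 5))   # False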
Binary heaps are not meant to be used as a search data structure. They are used for implementing priority queues, and they handle the frequently used operations in better than linear time:
insert: O(log n)
delete: O(log n)
increase_key: O(log n)
decrease_key: O(log n)
extract_max or extract_min: O(log n)
If you want the search operation to always be O(log n), use balanced search trees such as AVL or red-black trees.

Is there a set-like data structure supporting merging in O(log n) time and k-th element search in O(log n) time? (n is the size of this set)

Is there a set-like data structure supporting merging in O(log n) time and k-th element search in O(log n) time? Here n is the size of the set.
You might try a Fibonacci heap, which does merge in constant amortized time and decrease-key in constant amortized time. Most of the time, such a heap is used where you repeatedly pull the minimum value, so a check-for-membership function isn't implemented. However, it is simple enough to add one by reusing the decrease-key lookup logic and simply leaving out the decrease step.
If k is a constant, then any meldable heap will do this, including leftist heaps, skew heaps, pairing heaps and Fibonacci heaps. Both merging and getting the first element in these structures typically take O(1) or O(lg n) amortized time, so O(k lg n) maximum.
Note, however, that getting to the k'th element may be destructive in the sense that the first k-1 items may have to be removed from the heap.
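For instance, with Python's heapq standing in for any meldable heap (the function name is mine), reaching the k-th smallest element means popping k times; the first k-1 elements are gone afterwards unless they are pushed back.

    import heapq

    def kth_smallest_destructive(heap, k):
        # k pops at O(log n) each: O(k log n), so O(log n) for constant k,
        # but the heap loses its first k-1 elements as a side effect.
        popped = [heapq.heappop(heap) for _ in range(k)]
        return popped[-1]

    h = [9, 4, 7, 1, 3]
    heapq.heapify(h)
    print(kth_smallest_destructive(h, 3))  # 4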
If you're willing to accept amortization, you could achieve the desired bounds of O(lg n) time for both meld and search by using a binary search tree to represent each set. Melding two trees of size m and n together requires time O(m log(n / m)) where m < n. If you use amortized analysis and charge the cost of the merge to the elements of the smaller set, at most O(lg n) is charged to each element over the course of all of the operations. Selecting the kth element of each set takes O(lg n) time as well.
I think you could also use a collection of sorted arrays to represent each set, but the amortization argument is a little trickier.
As stated in the other answers, you can use heaps, but getting O(lg n) for both meld and select requires some work.
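The O(lg n) selection in the tree-based answer relies on each node knowing the size of its subtree (an order-statistic tree). Here is a sketch of just that select step, assuming a size-augmented BST; balancing and the small-into-large meld are omitted.

    class BSTNode:
        def __init__(self, key, left=None, right=None):
            self.key = key
            self.left = left
            self.right = right
            self.size = 1 + _size(left) + _size(right)

    def _size(node):
        return node.size if node else 0

    def select(node, k):
        # Return the k-th smallest key (1-indexed) in O(height) time,
        # i.e. O(lg n) when the tree is balanced.
        while node:
            left = _size(node.left)
            if k == left + 1:
                return node.key
            if k <= left:
                node = node.left
            else:
                k -= left + 1
                node = node.right
        raise IndexError("k out of range")

    # The keys 1..7 arranged as a balanced BST.
    tree = BSTNode(4,
                   BSTNode(2, BSTNode(1), BSTNode(3)),
                   BSTNode(6, BSTNode(5), BSTNode(7)))
    print(select(tree, 5))  # 5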
Finger trees can do this and some more operations:
http://en.wikipedia.org/wiki/Finger_tree
There may be something even better if you are not restricted to purely functional ("persistent") data structures, where "persistent" means not "backed up on non-volatile disk storage" but "all previous 'versions' of the data structure are available even after 'adding' additional elements".
