I am stuck on an exercise in Problem Solving with Algorithms and Data Structures. The exercise says that one can implement a queue in which enqueue and dequeue are both O(1) on average and that there is one circumstance when dequeue is O(n).
The only thing I could think of was to use a list in which the front (dequeue side) of the queue is tracked by an index into the list. Enqueue is then an append at the end (O(1)), and dequeue copies the current "front" element and moves the front index forward. But this wastes an enormous amount of space (dequeued slots are never reclaimed), and it can't be the intended answer because both operations are always O(1).
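For reference, the list-plus-front-index attempt described above could be sketched like this (class and method names are mine, not from the book):

```python
class IndexedQueue:
    """List-backed queue: O(1) enqueue and dequeue, but dequeued
    slots are never reclaimed, so space grows without bound."""

    def __init__(self):
        self._items = []
        self._front = 0

    def enqueue(self, x):
        self._items.append(x)        # O(1) append at the back

    def dequeue(self):
        x = self._items[self._front]
        self._front += 1             # old slot is leaked, never freed
        return x
```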
Any thoughts on this?
There are lots of ways to implement a queue and use only O(n) space.
How to implement a queue using two stacks?
C Program to Implement Queue Data Structure using Linked List
Circular buffer
... and I don't think you need to know the implementation that takes O(n) time to dequeue.
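For what it's worth, the two-stack queue linked above is the classic example of the profile the exercise describes: enqueue is O(1), dequeue is O(1) amortized, and the one circumstance where dequeue costs O(n) is when the outbox stack is empty and every element must be transferred. A minimal sketch:

```python
class TwoStackQueue:
    def __init__(self):
        self._in, self._out = [], []

    def enqueue(self, x):
        self._in.append(x)           # always O(1)

    def dequeue(self):
        if not self._out:
            # O(n) transfer, but each element is moved at most once
            # over its lifetime, so dequeue is O(1) amortized.
            while self._in:
                self._out.append(self._in.pop())
        return self._out.pop()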
I'm thinking of a queue (FIFO), but what if at some point I need to prioritize orders? For example: any order that includes milk should be pushed to the back of the queue until milk is available, and once the milk is there I should put those orders back in their previous positions.
If I use a plain queue here, I will end up with at least O(n log n) time complexity.
Any suggestions?
One possibility is to have two data structures. Use the FIFO to hold orders for which you have all the ingredients. Use a different data structure for orders that are waiting for something. Once that thing comes in, then you can put it in the FIFO. Adding to the end of the queue is of course O(1). Putting it back in order will require O(n) time.
If you want the order that was held to get put back into the queue in the place it should have gone, then you probably want a priority queue that uses order number (if they're sequential), or time.
While it takes O(log n) time to insert into or remove from a priority queue, that's not going to be a problem unless you're processing thousands of orders a second. Worst-case insertion is O(log n), but in a system such as yours, where the priority queue is mostly just a FIFO with a few exceptions, expected insertion will be close to O(1).
I should clarify that most priority queue implementations (in C++, Java, Python, and other mainstream languages) use a binary heap for the backing store. Insertion into a binary heap is worst case O(log n), but analysis shows that it's usually closer to O(1). See Argument for O(1) average-case complexity of heap insertion. You could also implement a pairing heap or other advanced heap type, which has O(1) amortized insertion. But again, unless you're processing thousands of orders per second, that's probably overkill.
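Putting those two structures together, a sketch might use a heap keyed by sequential order number as the "ready" queue plus a map of blocked orders (the variable names and the milk example here are illustrative, not from any library):

```python
import heapq

ready = []                           # min-heap keyed by order number: acts as a FIFO
waiting = {"milk": [(7, "latte")]}   # orders blocked until an ingredient arrives

# Normal orders enter keyed by their sequential order number.
for number, order in [(5, "tea"), (6, "espresso"), (8, "mocha")]:
    heapq.heappush(ready, (number, order))

# Milk arrives: release the blocked orders back into their original slots.
for number, order in waiting.pop("milk"):
    heapq.heappush(ready, (number, order))

# Orders now come out in their original sequence.
served = [heapq.heappop(ready)[1] for _ in range(4)]
```

Because the heap key is the sequential order number, a released order slots back exactly where it would have been.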
Another option is, when you get these exception orders, just push them to the front of the queue once all the necessary items are available. That's easy to do with a double-ended queue. How effective that will be kind of depends on how long things usually sit in the queue, and how long it takes to re-stock items that you run out of.
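The push-to-the-front alternative is a one-liner with Python's collections.deque (the order names here are made up for illustration):

```python
from collections import deque

orders = deque(["order-7", "order-8"])   # normal FIFO flow
# "order-5" was waiting on milk; milk just arrived, so it jumps the line.
orders.appendleft("order-5")
served = orders.popleft()                # processed next
```

appendleft and popleft are both O(1) on a deque, so the exception path costs nothing extra.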
I have been trying to implement heap data structures for use in my research work. As part of that, I am trying to implement the increase-key operation for min-heaps. I know that min-heaps generally support decrease-key. I was able to write the increase-key operation for a binary min-heap, wherein I exchange the increased key with its least child recursively.
In the case of the Fibonacci heap, this reference says that the Fibonacci heap also supports an increase-key operation. But I couldn't find anything about it in the original paper on Fibonacci heaps, nor in CLRS (Introduction to Algorithms by Cormen et al.).
Can someone tell me how I can go about implementing the increase-key operation efficiently and also without disturbing the data structure's amortized bounds for all the other operations?
First, note that increase-key must be Ω(log n) if we wish for insert and find-min to stay O(1) as they are in a Fibonacci heap.
If it weren't, you'd be able to sort in O(n) time by doing n inserts, followed by repeatedly using find-min to get the minimum and then increase-key on the head by some Δ with Δ > x for all keys x, to push the head to the end.
Now, knowing that increase-key must be Ω(log n), we can provide a straightforward asymptotically optimal implementation for it. To increase a node n to value x, first decrease-key(n, −∞), then delete-min(), followed by insert(n, x).
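For the array-based binary min-heap from the question, the sift-down increase-key the asker describes (exchange the increased key with its least child recursively) might be sketched as follows; increase_key is a hypothetical helper, not part of any standard library:

```python
def increase_key(heap, i, new_value):
    """Raise heap[i] to new_value (new_value >= heap[i]) and restore
    heap order by sifting the node down toward its least child."""
    assert new_value >= heap[i]
    heap[i] = new_value
    n = len(heap)
    while True:
        left, right = 2 * i + 1, 2 * i + 2
        smallest = i
        if left < n and heap[left] < heap[smallest]:
            smallest = left
        if right < n and heap[right] < heap[smallest]:
            smallest = right
        if smallest == i:        # heap property restored
            return
        heap[i], heap[smallest] = heap[smallest], heap[i]
        i = smallest
```

Each step descends one level, so the cost is O(log n), matching the lower bound above.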
Refer here
I know about the implementation of both data structures; I want to know which is better considering time complexity.
Both have the same insertion and erase complexity, O(log n), while get-min is O(1) for both.
A priority queue only gives you access to one element in sorted order, i.e., you can get the highest/lowest-priority item, and when you remove that, you can get the next one, and so on. A set allows full access in sorted order; for example, you can find two elements somewhere in the middle of the set, then traverse in order from one to the other.
In a priority queue you can have multiple elements with the same priority value, while in a set you can't.
Sets are generally backed by a binary search tree, while a priority queue is backed by a heap.
So the question is: when should you use a binary tree instead of a heap?
In my opinion you should use neither of them. Check out binomial and Fibonacci heaps. For Prim's algorithm they will have better performance.
If you insist on using one of the two, I would go with the priority queue, as it has a smaller memory footprint and can hold multiple elements with the same priority value.
Theoretically speaking, both will give you an O(E log V)-time algorithm. This is not optimal; Fibonacci heaps give you O(E + V log V), which is better for dense graphs (E >> V).
Practically speaking, neither is ideally suited. Since set has long lived iterators, it's possible to implement a DecreaseKey operation, reducing the extra storage from O(E) to O(V) (the workaround is to enqueue vertices multiple times), but the space constant is worse than priority_queue, and the time constant probably is as well. You should measure your use case.
I will second Jim Mischel's recommendation of binary heap (a.k.a., priority_queue) -> pairing heap if the builtin isn't fast enough.
Currently reading Algorithms book. Q&A section for chapter 2.4 on heapsort implementation based on priority queue (p.328) has the following passage (let's focus on priority queue heap, not on heapsort):
Q. I'm still not clear on the purpose of priority queues. Why exactly
don't we just sort and then consider the items in increasing order in
the sorted array?
A. In some data-processing examples such as TopM and Multiway, the
total amount of data is far too large to consider sorting (or even
storing in memory). If you are looking for the top ten entries among a
billion items, do you really want to sort a billion-entry array? With
a priority queue, you can do it with a ten-entry priority queue. In
other examples, all the data does not even exist together at any point
in time: we take something from the priority queue, process it, and as
a result of processing it perhaps add some more things to the priority
queue.
TopM and Multiway are simple clients of a priority queue. The book describes two phases of heapsort:
heap construction (the author uses a priority-queue heap; this is the phase we're interested in)
sortdown
In my understanding, heap construction is almost sorting ("heap order"). To build a heap you practically need to visit each item in the original dataset.
Question: can anyone illustrate the author's point in the quote above? How can we build a heap without visiting all the items? What am I missing here? Cheers for clarification.
Of course, you have to visit all entries. Just visiting them takes O(n) time. But sorting them usually requires O(n log n) time. And as the author states, you don't have to sort all of them. Only the ten greatest elements. The basic program would look as follows:
allocate priority queue q with space for t entries
visit each entry e in the input array
    queueIsFull := size(q) == t
    if !queueIsFull || e > min(q)
        if !queueIsFull
            insert e into q
        else
            exchange min(q) with e and bubble up
next
The basic point here is that you remove elements from the queue as soon as you know that they are not amongst the top-t entries. Hence, the insertion and exchange do not take O(log n) time but only O(log t). This reduces the overall time from O(n log n) to O(n log t), where log t is usually much smaller than log n.
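The pseudocode above might look like this in Python using heapq, where heapreplace performs the exchange-and-bubble step in one O(log t) operation (top_t is a hypothetical helper name):

```python
import heapq

def top_t(items, t):
    # Min-heap q holds the t largest entries seen so far; its minimum,
    # q[0], is the cheapest current member to evict.
    q = []
    for e in items:
        if len(q) < t:
            heapq.heappush(q, e)
        elif e > q[0]:
            heapq.heapreplace(q, e)   # evict the smallest, insert e: O(log t)
    return sorted(q, reverse=True)
```

The heap never exceeds t entries, so the whole scan runs in O(n log t) time and O(t) extra space.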
I am looking for a data structure to support a kind of advanced priority queueing. The idea is as follows: I need to sequentially process a number of items, and at any given point in time I know the "best" one to do next (based on some metric). The thing is, processing an item changes the metric for a few of the other items, so a static queue doesn't do the trick.
In my problem I know which items need their priorities updated, so the data structure I am looking for should have the methods
enqueue(item, priority)
dequeue()
requeue(item, new_priority)
Ideally I would like to requeue in O(log n) time. Any ideas?
There is an algorithm with time complexity close to what you asked for, but it achieves O(log n) only on average, if that is acceptable. With it you can use an existing priority queue without a requeue() function.
Assume you maintain a link between the nodes in your graph and the elements in the priority queue, and let each priority-queue element store an extra bit called ignore. The modified dequeue runs as follows:
Call dequeue()
If the ignore bit in the element is true, go back to 1, otherwise return the item id.
The modified enqueue runs as follows:
Call enqueue(item, priority)
Visit the neighbor nodes v of the item in the graph one by one
    set the ignore bit to true on the element currently linked to v in the queue
    enqueue(v, new_priority(v))
    update v's link to point to the newly enqueued element
    num_ignore++
If the number of ignored elements (num_ignore) exceeds the number of non-ignored elements, rebuild the priority queue:
    dequeue all elements, keep them, and then enqueue only the non-ignored elements again
In this algorithm, setting the ignore bit takes constant time, so you basically delay the O(log n) "requeue" until you have accumulated O(n) ignored elements, then clear all of them at once, which takes O(n log n). Therefore, on average, each "requeue" takes O(log n).
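A minimal sketch of this lazy-deletion scheme on top of Python's heapq might look like the following (the class and its names are illustrative, and the periodic rebuild step is omitted for brevity; the tick counter breaks priority ties so items themselves are never compared):

```python
import heapq
import itertools

class LazyPQ:
    def __init__(self):
        self._heap = []
        self._live = {}              # item -> its current (active) heap entry
        self._tick = itertools.count()

    def enqueue(self, item, priority):
        entry = [priority, next(self._tick), item, True]
        self._live[item] = entry
        heapq.heappush(self._heap, entry)

    def requeue(self, item, new_priority):
        self._live[item][3] = False  # tombstone: old entry is skipped on dequeue
        self.enqueue(item, new_priority)

    def dequeue(self):
        while self._heap:
            _priority, _tick, item, active = heapq.heappop(self._heap)
            if active:
                del self._live[item]
                return item
        raise IndexError("dequeue from empty queue")
```

requeue itself is a marking plus a push, so its amortized cost stays O(log n) as described above.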
You cannot achieve the complexity you are asking for in general, since when updating elements the complexity must also depend on the number of updated elements.
However, if we assume that the number of elements updated in a given step is p, most typical heap implementations will give you O(1) to get the max element's value, O(log n) for dequeue, and O(p log n) for the update operations. I would personally go for a binary heap, as it is fairly easy to implement and will work for what you are asking for.
A priority queue is exactly for this. You can implement it, for example, using a max-heap.
http://www.eecs.wsu.edu/~ananth/CptS223/Lectures/heaps.pdf describes increaseKey(), decreaseKey() and remove() operations. This would let you do what you want. I haven't figured out if the C++ stdlib implementation supports it yet.
Further, http://theboostcpplibraries.com/boost.heap seems to support update() for some subclasses, but I haven't found a full reference yet.