Why insert from front of queue (deque) - data-structures

Is there any reason why people would want to insert something at the front of a queue? I'm writing a report on the double-ended queue and this question is bugging me.
I'm assuming that maybe something of higher importance would be inserted at the front when needed, but then a priority queue would seem more relevant.

One example where a deque can be used is the work-stealing job scheduling algorithm. This algorithm implements task scheduling for several processors. A separate deque of threads to be executed is maintained for each processor. To execute the next thread, the processor gets the first element from its deque (using the "remove first element" deque operation). If the current thread forks, it is put back on the front of the deque ("insert element at front") and the new thread is executed. When one of the processors finishes executing its own threads (i.e. its deque is empty), it can "steal" a thread from another processor: it gets the last element from the deque of another processor ("remove last element") and executes it. The work-stealing algorithm is used by Intel's Threading Building Blocks (TBB) library for parallel programming.
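The three deque operations involved can be sketched with java.util.ArrayDeque. This is a simplified, single-threaded illustration of the discipline only; real work stealers such as TBB use concurrent lock-free deques, and the class and method names here are illustrative:

```java
import java.util.ArrayDeque;

// Single-threaded sketch of the work-stealing discipline:
// the owner works at the front of its deque, thieves take from the back.
public class WorkStealingSketch {
    static String takeOwn(ArrayDeque<String> own) {
        return own.pollFirst();          // "remove first element"
    }

    static void fork(ArrayDeque<String> own, String task) {
        own.addFirst(task);              // "insert element at front"
    }

    static String steal(ArrayDeque<String> victim) {
        return victim.pollLast();        // "remove last element"
    }

    public static void main(String[] args) {
        ArrayDeque<String> p0 = new ArrayDeque<>();
        p0.addLast("A");
        p0.addLast("B");
        fork(p0, "A1");                  // task A forks a child task
        System.out.println(takeOwn(p0)); // owner resumes the fork first: A1
        System.out.println(steal(p0));   // an idle processor steals the oldest task: B
    }
}
```

Taking the forked task from the front keeps the owner on the freshest (cache-hot) work, while thieves take the oldest, typically largest, task from the back.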

Note that a priority queue is inherently different from a deque, with elements in a PQ being processed according to their priority while in a deque you can only remove and insert at the front or back of the queue. A possible application of a deque I can quickly think of is an "undo" feature enabling you to fall back to a previous state.

Related

How to avoid starving out lower-priority work item sources

Suppose you have sources of items to complete with differing priorities, and the capacity to complete items is always saturated. How do you ensure the lower priority items are not starved out completely?
I am thinking I could combine the priority of a source with the time since it was last serviced to come up with a dynamic 'effective' priority. That way, lower priority sources would slowly be raised until they were high enough to be serviced.
I didn't want to reinvent the wheel here without at least asking, in case a more elegant solution to this problem exists. Thanks!
What you are thinking is a standard idea and is called Aging.
Aging is used to ensure that jobs with lower priority will eventually complete their execution. This technique can be used to reduce starvation of low priority tasks. There are many ways to implement aging, but all have the same principle that the priority of a process should increase as it waits in the ready queue. The increase in priority may or may not be equal to the waiting time of the process.
Your current thought is to assign a priority to each process. Generally you would do that by putting all the processes in a min (or max, depending upon your implementation) heap and then polling the heap...
Alternatively you can assign processes to priorities. You do that by keeping multiple queues/lists, one per priority level (highest, high, med, low, lowest etc.).
Keep multiple queues of each type;
Get item from highest priority list and finish it OR assign a time quantum to each of the high priority processes in a round robin fashion;
If all processes in high priority list are served, start serving processes from lower priority until there is something added to the higher priority list again;
Whenever a process in a lower-priority list has been waiting too long, remove it from that list and add it to the next higher priority level.
This too is a standard algorithm taught in Operating Systems.
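A minimal sketch of the aging idea, assuming the simplest possible weighting (one priority level per tick of waiting; a real scheduler would tune this, and all names here are illustrative):

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Aging sketch: effective priority = base priority minus ticks waited,
// so long-waiting low-priority jobs eventually beat fresh high-priority ones.
// The 1-level-per-tick weight is an arbitrary, illustrative choice.
public class AgingSketch {
    record Job(String name, int basePriority, int arrivalTick) {}

    // Lower effective value = served first (min-heap convention).
    static int effectivePriority(Job j, int nowTick) {
        return j.basePriority() - (nowTick - j.arrivalTick());
    }

    public static void main(String[] args) {
        final int now = 10;
        PriorityQueue<Job> ready = new PriorityQueue<>(
            Comparator.comparingInt((Job j) -> effectivePriority(j, now)));
        ready.add(new Job("low-but-old", 8, 0));   // effective: 8 - 10 = -2
        ready.add(new Job("high-but-new", 1, 9));  // effective: 1 - 1  =  0
        System.out.println(ready.poll().name());   // low-but-old wins
    }
}
```

Note that a real implementation would re-evaluate the effective priorities as time advances rather than fixing `now` at insertion, since a heap's ordering must stay consistent while elements are inside it.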

Implementation of data structures

I just want to know what the suitable data structures are for:
1. storing the recently visited web addresses on a web browser?
2. the processes to be scheduled on the CPU of a computer?
3. the undo mechanism in a text editor like Notepad?
storing the recently visited web addresses on a web browser?
If you want to store the k last addresses, you can use a queue.
If the queue holds fewer than k entries, just add the new address.
If it is already of size k, delete the element at the head of the queue (the one inserted first, the "oldest" entry), and insert the new one.
You might want to combine it with a set (or a map that maps to the queue's entries) to make sure an entry does not fill multiple slots in your queue.
If you don't need to ever delete entries (and the number of "visited" elements is unbounded), you can use a set.
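One sketch of the queue-plus-set bookkeeping: Java's LinkedHashMap in access order performs exactly this eviction, so it can stand in for the combined structure (the capacity of 3 is an arbitrary choice for the demo):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Keep the k most recently visited addresses with no duplicates:
// a LinkedHashMap in access order evicts the eldest entry once size exceeds k.
public class RecentAddresses {
    static <K> LinkedHashMap<K, Boolean> lruSet(int k) {
        return new LinkedHashMap<K, Boolean>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Boolean> eldest) {
                return size() > k;
            }
        };
    }

    public static void main(String[] args) {
        LinkedHashMap<String, Boolean> recent = RecentAddresses.<String>lruSet(3);
        for (String url : new String[]{"a.com", "b.com", "c.com", "a.com", "d.com"}) {
            recent.put(url, Boolean.TRUE);  // revisiting moves an entry to the back
        }
        System.out.println(recent.keySet()); // [c.com, a.com, d.com]
    }
}
```

Revisiting "a.com" moves it to the most-recent end instead of duplicating it, which is the role the set plays in the answer above.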
the processes to be scheduled on the CPU of a computer?
There are many options for that, but some simple ones are using a queue, or a priority queue (heap).
the undo mechanism in a text editor like Notepad?
A stack. Each "do" is a push, and to "Undo", you pop the last element and revert its action.
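A minimal sketch of that stack-based undo, using ArrayDeque (which the Java documentation recommends over the legacy java.util.Stack):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Undo as a stack: each edit is pushed, undo pops the most recent one.
public class UndoSketch {
    public static void main(String[] args) {
        Deque<String> undo = new ArrayDeque<>();
        undo.push("type 'hello'");
        undo.push("delete word");
        System.out.println("undo: " + undo.pop()); // undo: delete word
        System.out.println("undo: " + undo.pop()); // undo: type 'hello'
    }
}
```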

What exactly is the queue Abstract Data Type?

I am not clear on the idea of a Queue. It seems that this term is ambiguous or at least I am confused about it.
While it seems that the most common explanation of a queue (e.g. in wikipedia) is that it is an Abstract Data Type that adheres to the FIFO principle, in practice this term appears to have a broader meaning.
For example, we have
Priority Queues, where each item is retrieved according to a priority;
we have a stack, which is a form of inverse queue (LIFO);
we have message queues, which seem to be just a list of items with no
ordering, thereby classifying a simple list as a queue; etc.
So could someone please help me out here on why exactly a queue has so many different meanings?
A queue is inherently a data structure following the FIFO principle as its default nature.
Let us treat this queue as a queue in our natural day-to-day lives. Take an example of a queue on the railway station for purchasing tickets.
Normal queue: The person standing front-most in the queue gets the ticket, and any new person arriving stands at the end of the queue, waiting for his turn to get a ticket.
Priority queue: Suppose you are a VIP standing in the middle of that queue. The ticket vendor immediately notices you and calls you to the front of the queue to get your tickets, even though it's not your turn to purchase. Had you not been important, the queue would have kept playing its usual role, but as soon as any element is considered more important than the others, it's picked up, irrespective of its position in the queue. Otherwise, the default nature of the queue remains the same.
Stack: Let's not confuse it with the queue at all. The purpose of the stack is inherently different from that of a queue. Take an example of dishes washed and kept in your kitchen, where the last dish washed is the first one to be picked for serving. So, stack and queue have a different role to play in different situations, and should not be confused with each other.
Message queue: As with the priority queue, the default nature of this queue is that the message that comes first is read first, while later messages line up in the queue waiting for their turn, unless a message is considered more important than the others and is called to the front of the queue before its usual turn.
So, the default nature of any type of queue remains the same: it continues to follow the FIFO principle unless it's made to do otherwise, in special circumstances.
Hope it helps
In general, a queue models a waiting area where items enter and are eventually selected and removed. However, different queues can have different scheduling policies such as First-In-First-Out (FIFO), Last-In-First-Out (LIFO), Priority, or Random. For example, queueing theory addresses all of these as queues.
However, in computer science/programming, we typically use the word "queue" to refer specifically to FIFO queues, and use the other words (stack, priority queue, etc.) for the other scheduling policies. In practice, you should assume FIFO when you hear the word queue, but don't completely rule out the possibility that the speaker might be using the word more generally.
As an aside, similar issues arise with the word "heap" which, depending on context, might refer to a specific implementation of priority queues or to priority queues in general (or, in a mostly unrelated meaning, to an area of memory used for dynamic allocation).
Priority Queue: In Java it actually does implement the Queue interface (take a look: http://docs.oracle.com/javase/7/docs/api/java/util/PriorityQueue.html), but it orders elements by priority rather than by arrival, so it is not a FIFO queue.
Stack: It's not a queue. Take a look: http://docs.oracle.com/javase/7/docs/api/java/util/Stack.html; it does not implement the Queue interface.
Message queue: I do not know what it is.
Queue: A queue has only one meaning: whoever comes first gets served first.
Queue ADT: It is an interface, meaning it has a bunch of functions. The most common ones are add, which adds to the end of the line, and remove, which removes from the beginning of the line. http://docs.oracle.com/javase/7/docs/api/java/util/Queue.html
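A small illustration of those two operations through the Queue interface (ArrayDeque is one of several implementations that could be used here):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// The Queue ADT in Java: add() enqueues at the tail, remove() dequeues the head.
public class QueueAdtSketch {
    public static void main(String[] args) {
        Queue<String> q = new ArrayDeque<>();
        q.add("first");
        q.add("second");
        System.out.println(q.remove()); // first  (FIFO order)
        System.out.println(q.remove()); // second
    }
}
```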

Efficiently implementing Birman-Schiper-Stephenson(BSS) protocol's delay queue

I am using the Birman-Schiper-Stephenson protocol of distributed system with the current assumption that peer set of any node doesn't change. As the protocol dictates, the messages which have come out of causal order to a node have to be put in a 'delay queue'. My problem is with the organisation of the delay queue where we must implement some kind of order with the messages. After deciding the order we will have to make a 'Wake-Up' protocol which would efficiently search the queue after the current timestamp is modified to find out if one of the delayed messages can be 'woken-up' and accepted.
I was thinking of segregating the delayed messages into bins based on the points of difference of their vector-timestamps with the timestamp of this node. But the number of bins can be very large and maintaining them won't be efficient.
Please suggest some designs for such a queue(s).
Sorry about the delay -- didn't see your question until now. Anyhow, if you look at Isis2.codeplex.com you'll see that in Isis2, I have a causalsend implementation that employs the same vector timestamp scheme we described in the BSS paper. What I do is to keep my messages in a partial order, sorted by VT, and then when a delivery occurs I can look at the delayed queue and deliver off the front of the queue until I find something that isn't deliverable. Everything behind it will be undeliverable too.
But in fact there is a deeper insight here: you actually never want to allow the queue of delayed messages to get very long. If the queue gets longer than a few messages (say, 50 or 100) you run into the problem that the guy with the queue could be holding quite a few bytes of data and may start paging or otherwise running slowly. So it becomes a self-perpetuating cycle in which because he has a queue, he is very likely to be dropping messages and hence enqueuing more and more. Plus in any case from his point of view, the urgent thing is to recover that missed message that caused the others to be out of order.
What this adds up to is that you need a flow control scheme in which the amount of pending asynchronous stuff is kept small. But once you know the queue is small, searching every single element won't be very costly! So this deeper perspective says flow control is needed no matter what, and then because of flow control (if you have a flow control scheme that works) the queue is small, and because the queue is small, the search won't be costly!
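The "deliver off the front until something isn't deliverable" loop can be sketched as below. This is an illustration of the BSS delivery rule only, not code from Isis2; the field names and the sum-of-entries ordering are assumptions (any ordering consistent with causality would do):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch of the delay-queue discipline: keep delayed messages ordered by
// vector timestamp, then on each delivery drain from the front while the
// head is deliverable. Everything behind an undeliverable head stays queued.
public class DelayQueueSketch {
    record Msg(int sender, int[] vt) {}

    // BSS deliverability at a node with local vector timestamp `local`:
    // vt[sender] == local[sender] + 1, and vt[k] <= local[k] for all other k.
    static boolean deliverable(Msg m, int[] local) {
        for (int k = 0; k < local.length; k++) {
            if (k == m.sender() ? m.vt()[k] != local[k] + 1 : m.vt()[k] > local[k])
                return false;
        }
        return true;
    }

    public static void main(String[] args) {
        int[] local = {0, 0};
        // Cheap total order refining causality: sum of VT entries.
        PriorityQueue<Msg> delayed = new PriorityQueue<>(
            Comparator.comparingInt((Msg m) -> Arrays.stream(m.vt()).sum()));
        delayed.add(new Msg(1, new int[]{0, 2}));  // arrived out of causal order
        delayed.add(new Msg(1, new int[]{0, 1}));
        while (!delayed.isEmpty() && deliverable(delayed.peek(), local)) {
            Msg m = delayed.poll();
            local[m.sender()] = m.vt()[m.sender()]; // merge the sender's entry
            System.out.println("deliver " + Arrays.toString(m.vt()));
        }
    }
}
```

Delivering {0,1} first makes {0,2} deliverable in turn, so both drain in one pass, which is exactly the "wake-up" behavior the question asks about.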

A priority queue which allows efficient priority update?

UPDATE: Here's my implementation of Hashed Timing Wheels. Please let me know if you have an idea to improve the performance and concurrency. (20-Jan-2009)
// Sample usage:
public static void main(String[] args) throws Exception {
    Timer timer = new HashedWheelTimer();
    for (int i = 0; i < 100000; i++) {
        timer.newTimeout(new TimerTask() {
            public void run(Timeout timeout) throws Exception {
                // Extend another second.
                timeout.extend();
            }
        }, 1000, TimeUnit.MILLISECONDS);
    }
}
UPDATE: I solved this problem by using Hierarchical and Hashed Timing Wheels. (19-Jan-2009)
I'm trying to implement a special-purpose timer in Java which is optimized for timeout handling. For example, a user can register a task with a deadline, and the timer notifies the user's callback method when the deadline passes. In most cases, a registered task will be done within a very short amount of time, so most tasks will be canceled (e.g. task.cancel()) or rescheduled to the future (e.g. task.rescheduleToLater(1, TimeUnit.SECOND)).
I want to use this timer to detect idle socket connections (e.g. close the connection when no message is received in 10 seconds) and write timeouts (e.g. raise an exception when a write operation is not finished in 30 seconds). In most cases the timeout will not occur: the client will send a message and the response will be sent back, unless there's a weird network issue.
I can't use java.util.Timer or java.util.concurrent.ScheduledThreadPoolExecutor because they assume most tasks are supposed to time out. If a task is cancelled, the cancelled task is stored in the internal heap until ScheduledThreadPoolExecutor.purge() is called, which is a very expensive operation (O(N log N) perhaps?).
In the traditional heaps and priority queues I learned in my CS classes, updating the priority of an element was an expensive operation (O(log N) in many cases) because it can only be achieved by removing the element and re-inserting it with a new priority value. Some heaps, like the Fibonacci heap, have O(1) decreaseKey() and min() operations, but what I need at least is fast increaseKey() and min() (or decreaseKey() and max()).
Do you know any data structure which is highly optimized for this particular use case? One strategy I'm thinking of is just storing all tasks in a hash table and iterating all tasks every second or so, but it's not that beautiful.
How about trying to separate the handling of the normal case, where things complete quickly, from the error cases?
Use both a hash table and a priority queue. When a task is started it gets put in the hash table and if it finishes quickly it gets removed in O(1) time.
Every second you scan the hash table, and any tasks that have been in it a long time, say 0.75 seconds, get moved to the priority queue. The priority queue should always be small and easy to handle. This assumes that one second is much less than the timeout times you are looking for.
If scanning the hash table is too slow, you could use two hash tables, essentially one for even-numbered seconds and one for odd-numbered seconds. When a task gets started it is put in the current hash table. Every second move all the tasks from the non-current hash table into the priority queue and swap the hash tables so that the current hash table is now empty and the non-current table contains the tasks started between one and two seconds ago.
These options are a lot more complicated than just using a priority queue, but they are pretty easily implemented and should be stable.
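A sketch of the hash-table/priority-queue split described above; the Task fields, the 750 ms migration threshold, and the class names are illustrative choices, not a definitive implementation:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.PriorityQueue;

// Fresh tasks sit in a hash table where cancel is O(1); a periodic sweep
// moves survivors older than a threshold into a heap ordered by deadline.
public class SplitTimerSketch {
    record Task(String id, long startMillis, long deadlineMillis) {}

    final Map<String, Task> young = new HashMap<>();
    final PriorityQueue<Task> old =
        new PriorityQueue<>(Comparator.comparingLong(Task::deadlineMillis));

    void start(Task t)        { young.put(t.id(), t); }
    boolean cancel(String id) { return young.remove(id) != null; } // O(1) fast path

    // Called roughly once per second.
    void sweep(long nowMillis) {
        Iterator<Task> it = young.values().iterator();
        while (it.hasNext()) {
            Task t = it.next();
            if (nowMillis - t.startMillis() >= 750) { // outlived the fast path
                old.add(t);
                it.remove();
            }
        }
    }

    public static void main(String[] args) {
        SplitTimerSketch timer = new SplitTimerSketch();
        timer.start(new Task("quick", 0, 10_000));
        timer.start(new Task("slow", 0, 30_000));
        timer.cancel("quick");         // completed fast: never touches the heap
        timer.sweep(1_000);            // "slow" migrates after 750 ms
        System.out.println(timer.old.peek().id()); // slow
    }
}
```

Since most tasks cancel quickly, the heap only ever sees the rare stragglers, which is the whole point of the split.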
To the best of my knowledge (I wrote a paper about a new priority queue, which also reviewed past results), no priority queue implementation gets the bounds of Fibonacci heaps, as well as constant-time increase-key.
There is a small problem with achieving that literally. If you could get increase-key in O(1), then you could get delete in O(1): just increase the key to +infinity (you can handle the queue being full of lots of +infinitys using some standard amortization tricks). But if find-min is also O(1), that means delete-min = find-min + delete becomes O(1). That's impossible in a comparison-based priority queue because the sorting bound (insert everything, then remove one-by-one) implies that
n * insert + n * delete-min > n log n.
The point here is that if you want a priority-queue to support increase-key in O(1), then you must accept one of the following penalties:
Not be comparison based. Actually, this is a pretty good way to get around things, e.g. vEB trees.
Accept O(log n) for inserts and also O(n log n) for make-heap (given n starting values). This sucks.
Accept O(log n) for find-min. This is entirely acceptable if you never actually do find-min (without an accompanying delete).
But, again, to the best of my knowledge, no one has done the last option. I've always seen it as an opportunity for new results in a pretty basic area of data structures.
Use Hashed Timing Wheel - Google 'Hashed Hierarchical Timing Wheels' for more information. It's a generalization of the answers made by people here. I'd prefer a hashed timing wheel with a large wheel size to hierarchical timing wheels.
Some combination of hashes and O(logN) structures should do what you ask.
I'm tempted to quibble with the way you're analyzing the problem. In your comment above, you say
Because the update will occur very, very frequently. Let's say we are sending M messages per connection; then the overall time becomes O(MN log N), which is pretty big. – Trustin Lee
which is absolutely correct as far as it goes. But most people I know would concentrate on the cost per message, on the theory that as your app has more and more work to do, it's obviously going to require more resources.
So if your application has a billion sockets open simultaneously (is that really likely?) the insertion cost is only about 60 comparisons per message.
I'll bet money that this is premature optimization: you haven't actually measured the bottlenecks in your system with a performance analysis tool like CodeAnalyst or VTune.
Anyway, there's probably an infinite number of ways of doing what you ask, once you just decide that no single structure will do what you want, and you want some combination of the strengths and weaknesses of different algorithms.
One possibility is to divide the socket domain N into some number of buckets of size B, and then hash each socket into one of those (N/B) buckets. In each bucket is a heap (or whatever) with O(log B) update time. If an upper bound on N isn't fixed in advance but can vary, you can create more buckets dynamically, which adds a little complication but is certainly doable.
In the worst case, the watchdog timer has to search (N/B) queues for expirations, but I assume the watchdog timer is not required to kill idle sockets in any particular order!
That is, if 10 sockets went idle in the last time slice, it doesn't have to search that domain for the one that time-out first, deal with it, then find the one that timed-out second, etc. It just has to scan the (N/B) set of buckets and enumerate all time-outs.
If you're not satisfied with a linear array of buckets, you can use a priority queue of queues, but you want to avoid updating that queue on every message, or else you're back where you started. Instead, define some time that's less than the actual time-out (say, 3/4 or 7/8 of it), and only put a low-level queue into the high-level queue if its longest time exceeds that.
And at the risk of stating the obvious, you don't want your queues keyed on elapsed time. The keys should be start time. For each record in the queues, elapsed time would have to be updated constantly, but the start time of each record doesn't change.
There's a VERY simple way to do all inserts and removes in O(1), taking advantage of the fact that 1) priority is based on time and 2) you probably have a small, fixed number of timeout durations.
Create a regular FIFO queue to hold all tasks that timeout in 10 seconds. Because all tasks have identical timeout durations, you can simply insert to the end and remove from the beginning to keep the queue sorted.
Create another FIFO queue for tasks with 30-second timeout duration. Create more queues for other timeout durations.
To cancel, remove the item from the queue. This is O(1) if the queue is implemented as a linked list.
Rescheduling can be done as cancel-insert, as both operations are O(1). Note that tasks can be rescheduled to different queues.
Finally, to combine all the FIFO queues into a single overall priority queue, have the head of every FIFO queue participate in a regular heap. The head of this heap will be the task with the soonest expiring timeout out of ALL tasks.
If you have m number of different timeout durations, the complexity for each operation of the overall structure is O(log m). Insertion is O(log m) due to the need to look up which queue to insert to. Remove-min is O(log m) for restoring the heap. Cancelling is O(1) but worst case O(log m) if you're cancelling the head of a queue. Because m is a small, fixed number, O(log m) is essentially O(1). It does not scale with the number of tasks.
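A sketch of the scheme: one FIFO per timeout duration (each stays sorted because its entries share a duration), with the globally soonest expiry found from the queue heads. For brevity this rebuilds the head collection on each peek rather than maintaining an incremental heap, and all names are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// One FIFO queue per timeout duration; the overall minimum is among the heads.
public class FifoPerTimeout {
    record Task(String id, long expiryMillis) {}

    final Map<Long, ArrayDeque<Task>> byDuration = new HashMap<>();

    // O(1): same-duration tasks expire in insertion order, so append keeps order.
    void add(String id, long nowMillis, long durationMillis) {
        byDuration.computeIfAbsent(durationMillis, d -> new ArrayDeque<>())
                  .addLast(new Task(id, nowMillis + durationMillis));
    }

    // Inspect only the m queue heads (a maintained heap would make this O(log m)).
    Task peekSoonest() {
        PriorityQueue<Task> heads =
            new PriorityQueue<>(Comparator.comparingLong(Task::expiryMillis));
        for (ArrayDeque<Task> q : byDuration.values())
            if (!q.isEmpty()) heads.add(q.peekFirst());
        return heads.peek();
    }

    public static void main(String[] args) {
        FifoPerTimeout t = new FifoPerTimeout();
        t.add("read-idle", 0, 10_000);
        t.add("write", 0, 30_000);
        t.add("read-idle-2", 5_000, 10_000);
        System.out.println(t.peekSoonest().id()); // read-idle: expires at 10 s
    }
}
```

The key invariant is that within one duration class, expiry order equals insertion order, so no per-queue sorting is ever needed.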
Your specific scenario suggests a circular buffer to me. If the max. timeout is 30 seconds and we want to reap sockets at least every tenth of a second, then use a buffer of 300 doubly-linked lists, one for each tenth of a second in that period. To 'increaseTime' on an entry, remove it from the list it's in and add it to the one for its new tenth-second period (both constant-time operations). When a period ends, reap anything left over in the current list (maybe by feeding it to a reaper thread) and advance the current-list pointer.
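A sketch of that circular buffer, assuming the 30-second horizon with tenth-second slots described above; plain lists stand in for the doubly-linked lists, and the names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Circular buffer of 300 buckets, one per tenth of a second over 30 seconds.
// Scheduling, rescheduling, and advancing the wheel are all O(1) per entry.
public class TimingWheelSketch {
    static final int SLOTS = 300;                 // 30 s / 0.1 s
    final List<List<String>> wheel = new ArrayList<>();
    int current = 0;                              // slot representing "now"

    TimingWheelSketch() {
        for (int i = 0; i < SLOTS; i++) wheel.add(new ArrayList<>());
    }

    void schedule(String id, int tenthsFromNow) {
        wheel.get((current + tenthsFromNow) % SLOTS).add(id);
    }

    // Advance one tenth of a second; whatever is in the new slot has expired.
    List<String> tick() {
        current = (current + 1) % SLOTS;
        List<String> expired = wheel.get(current);
        wheel.set(current, new ArrayList<>());    // reap the slot
        return expired;
    }

    public static void main(String[] args) {
        TimingWheelSketch w = new TimingWheelSketch();
        w.schedule("socket-42", 2);               // expires in 0.2 s
        System.out.println(w.tick());             // []
        System.out.println(w.tick());             // [socket-42]
    }
}
```

To "increaseTime" on an entry you would remove it from its current slot and re-add it at the new offset, which is the constant-time move the answer describes (a doubly-linked list makes the removal itself O(1)).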
You've got a hard-limit on the number of items in the queue - there is a limit to TCP sockets.
Therefore the problem is bounded. I suspect any clever data structure will be slower than using built-in types.
Is there a good reason not to use java.lang.PriorityQueue? Doesn't remove() handle your cancel operations in log(N) time? Then implement your own waiting based on the time until the item on the front of the queue.
I think storing all the tasks in a list and iterating through them would be best.
You must be planning to run the server on some pretty beefy machine to reach the limits where this cost will matter.
