Two stacks with a deque: what's the purpose of implementing it?

From Algorithms 4th:
1.3.48 Two stacks with a deque. Implement two stacks with a single deque so that each
operation takes a constant number of deque operations (see Exercise 1.3.33).
What's the meaning of implementing 2 stacks with 1 single deque? Any practical reasons? Why don't I just create 2 stacks directly?
1.3.49 Queue with three stacks. Implement a queue with three stacks so that each
queue operation takes a constant (worst-case) number of stack operations. Warning :
high degree of difficulty.
Related question: How to implement a queue with three stacks?
Also, why do I have to implement a queue with three stacks? Can't I just create a queue directly too?

That first problem looks more like it's designed as an exercise than as anything else. I doubt there are many cases where you'd want to implement two stacks using a single deque, though I'm happy to be proven wrong. I think the purpose of the question is to get you to think about the "geometry" of deques and stacks. There's a really elegant solution to the problem, and if you see how it works, it'll give you a much deeper appreciation for how all these types work.
To your second question: in imperative programming languages, there isn't much of a reason to implement a queue with three stacks. However, in functional programming languages like Lisp, stacks are typically fairly simple to implement, but it's actually quite difficult to get a queue working with a constant number of operations per operation. In fact, if I remember correctly, for a while it was believed that this simply wasn't possible. You can implement a queue with two stacks (a very common exercise, and a good one, because the resulting queue is extremely fast), but this usually gives good amortized performance rather than good worst-case performance, and in functional languages, where amortization either doesn't apply or is much harder to achieve, that isn't necessarily good enough. Getting a queue out of three stacks with constant complexity is therefore a Big Deal: it unlocks a number of classical algorithms that rely on queues and that otherwise wouldn't be available in a functional context.
But again, in both cases, these are primarily designed as exercises to help you build a better understanding of the fundamentals. Would you actually do either of these things in practice? Probably not - some library designer will likely do it for you. But will doing these exercises give you a much deeper understanding of how these data types work, the sorts of things they're good and bad at, and an appreciation for how hard library designers have to work? Yes, totally!

Stacks from Deques
Stacks are last-in, first-out structures. Deques let you push and pop at both the front and the back. If you keep a count of the items stored at each end, you can use the front as one stack and the back as the other, returning NULL when the corresponding counter reaches zero.
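As a minimal sketch of that idea (assuming Java's `ArrayDeque`; all names here are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Two stacks sharing one deque: stack A lives at the front,
// stack B at the back. Counters track how many items belong to
// each stack so emptiness can be reported per stack.
class TwoStacks<T> {
    private final Deque<T> deque = new ArrayDeque<>();
    private int sizeA = 0, sizeB = 0;

    public void pushA(T item) { deque.addFirst(item); sizeA++; }
    public void pushB(T item) { deque.addLast(item);  sizeB++; }

    public T popA() {
        if (sizeA == 0) return null;   // stack A is empty
        sizeA--;
        return deque.removeFirst();
    }

    public T popB() {
        if (sizeB == 0) return null;   // stack B is empty
        sizeB--;
        return deque.removeLast();
    }
}
```

Each stack operation is exactly one deque operation plus a counter update, which satisfies the "constant number of deque operations" requirement from the exercise.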
Why would you do this? Who knows, but read on.
Queues from Stacks
You can implement a queue so that it has O(1) amortized time on all of its operations by using two stacks. When you're placing items on the queue place them in one stack. When you need to pull things off the queue, empty that stack into the other stack and pop from the top of that stack (while filling up the other stack with new incoming items).
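The two-stack construction described above can be sketched like this in Java (a minimal illustration, not a production queue):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A FIFO queue built from two LIFO stacks. Enqueue pushes onto
// 'in'; dequeue pops from 'out', refilling 'out' by draining 'in'
// only when 'out' is empty. Each element is moved at most once
// between stacks, so operations are O(1) amortized.
class TwoStackQueue<T> {
    private final Deque<T> in = new ArrayDeque<>();
    private final Deque<T> out = new ArrayDeque<>();

    public void enqueue(T item) { in.push(item); }

    public T dequeue() {
        if (out.isEmpty()) {
            // Draining reverses the order, turning LIFO into FIFO.
            while (!in.isEmpty()) out.push(in.pop());
        }
        return out.isEmpty() ? null : out.pop();
    }
}
```

The occasional expensive dequeue (the drain) is exactly the operation with the bad worst case discussed below.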
Why would you want to do this?
Because, this is, roughly speaking, how you make a queue. Data structures have to be implemented somehow. In a computer you allocate memory starting from a base address and building outwards. Thus, a stack is a very natural data structure because all you need to do is keep track of a single positive offset to know where the top of your stack is.
Implementing a queue directly is more difficult... you are adding items to one end but pulling them off of the other. Two stacks gives you a way to do it.
But why 3 stacks?
Because this algorithm (if it exists) ensures that there is a constant bound on the time complexity of every queue operation. With our 2-stack solution, each operation takes O(1) time on average, but if the first stack has grown really big, once in a while there'll be an operation that takes a long time. While that's happening, the car crashes, the rocket blows up, or your patient dies.
You don't want a crummy algorithm that gives unpredictable performance.
You want guarantees.
This StackOverflow answer explains that a 3-stack solution isn't known, but that there is a 6-stack solution.
Why Stacks From Deques
Let's return to your first question. As we've seen, there are good reasons to be able to build queues from stacks. For a computer it's a natural way of building a complex data structure from a simple one. It can even offer us performance guarantees.
Stacks from deques doesn't strike me as being practical in a computer. But computer science isn't about computers; it's about finding efficient algorithmic solutions to problems. Maybe you're storing stuff on a multi-directional conveyor belt in a factory. You can program the belt to act like a deque and use the deque to make stacks. In this context the question makes more sense.

It seems there is no practical use for these implementations.
The main purpose is to encourage students to build complex solutions from simple tools, an important skill for any qualified developer.
(Perhaps the secondary goal is to teach programmers to implement the boss's ridiculous visions :))

Related

Deciding between a Priority Queue and Sorting Algorithm

I'm relatively new to data structures and algorithms (learning off YouTube as a high-school student), and I've come to a crossroads in thinking for a project I am working on.
My project is to create software for taking a test. I'm thinking of weighting the individual questions by difficulty, so that when taking the test, the least difficult questions display first and the most difficult last (through a min-heap). This in and of itself would work, and would be efficient. However, in my test program, a user might not want the next question; they may want to go back a question. Currently, this is solved by having an array of my Question class (Java).
Question[] quizQuestions = {question1, question2, question3, ...}
To go to a question, the program gets the index of the question, and displays it.
However, with a priority queue, I lose this functionality.
I can think of several ways to avoid this, such as creating an array that stores the questions after the priority queue hands them to the user.
But to me, that raises the question: would it be more effective to use a sorting algorithm instead, based on the question's difficulty integer, and just have a single array? Knowing that the time complexity of the priority queue is faster than that of any sorting algorithm, I am leaning towards the queue. But being very new to this, I'd like some outside input. Thanks in advance.
This would be a good time to learn about the "separation of concerns".
You absolutely want to sort the questions first, provide them to the quiz interface as a sorted list, and then have the quiz UI ask the questions in the order that they are provided.
This separates the concerns about question ordering from the concerns about the features of the quiz user interface, and lets you maintain and modify each of those independently. If you want to change the quiz ordering to something else, you can do that without worrying about all the quiz UI code, and if you want to change the quiz UI, you can do that without worrying about how the questions are ordered.
Matt has answered from a software engineering perspective. Algorithmically, the priority queue is O(n) time to set up, then O(log n) each time you pop the min element. If you pop all of the elements, you get an O(n log n)-time sorting algorithm called heapsort, which is asymptotically efficient but tends to be slower in practice than whatever sort method your programming environment provides. Honestly even quadratic time would be fine on any reasonably sized test, so I would just sort.
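The sort-once approach might look like this (a sketch only: the `Question` class here is a stand-in for the asker's real class, and the field names are assumptions):

```java
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical stand-in for the asker's Question class.
class Question {
    final String text;
    final int difficulty;
    Question(String text, int difficulty) {
        this.text = text;
        this.difficulty = difficulty;
    }
}

class Quiz {
    public static void main(String[] args) {
        Question[] quizQuestions = {
            new Question("hard one", 9),
            new Question("easy one", 1),
            new Question("medium one", 5),
        };
        // One sort call replaces the priority queue entirely.
        Arrays.sort(quizQuestions, Comparator.comparingInt(q -> q.difficulty));
        // Navigation is now just an index: next is i + 1, back is i - 1.
        System.out.println(quizQuestions[0].text);
    }
}
```

After the sort, "go back a question" is simply decrementing an index into the array, which is the functionality the priority queue was losing.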

Floodfill: Stack vs. Queue

It is possible to write a flood fill function that uses either a queue or a stack. Which is faster under which circumstances (if at all), and why?
Provided you implement them correctly, they should be equally fast. That is: avoid recursion, and implement the queue using a vector, not a linked list.
Both have O(N) complexity (N is the number of cells to be filled).
For very large examples (I would guess 10k x 10k), you might implement the stack approach so that it favors memory cache lines, which would give you a slight advantage. This is hard to do right and reliably, since it is hardware dependent.
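An iterative flood fill along these lines, shown here with an explicit stack over an int grid (a sketch; swapping the stack for a FIFO queue changes the fill order but not the O(N) work per cell):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Iterative flood fill: no recursion, one explicit stack of cells.
class FloodFill {
    static void fill(int[][] grid, int r, int c, int target, int replacement) {
        if (target == replacement || grid[r][c] != target) return;
        Deque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{r, c});
        while (!stack.isEmpty()) {
            int[] cell = stack.pop();
            int y = cell[0], x = cell[1];
            if (y < 0 || y >= grid.length || x < 0 || x >= grid[0].length) continue;
            if (grid[y][x] != target) continue;  // already filled or a different color
            grid[y][x] = replacement;
            // Push the four neighbors; bounds are checked on pop.
            stack.push(new int[]{y + 1, x});
            stack.push(new int[]{y - 1, x});
            stack.push(new int[]{y, x + 1});
            stack.push(new int[]{y, x - 1});
        }
    }
}
```

Replacing the `Deque` push/pop pair with `addLast`/`removeFirst` turns the same code into the queue (breadth-first) variant.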

What are appropriate applications for a linked (doubly as well) list?

I have a question about fundamentals in data structures.
I understand that an array's access time is faster than a linked list's: O(1) for an array vs. O(N) for a linked list.
But a linked list beats an array at removing an element, since there is no shifting needed: O(N) for an array vs. O(1) for a linked list.
So my understanding is that if the majority of operations on the data is delete then using a linked list is preferable.
But if the use case is:
delete elements but not too frequently
access ALL elements
Is there a clear winner? In the general case, I understand that the downside of using the list is that each node I access could be on a separate page, while an array has better locality.
But is this a theoretical or an actual concern that I should have?
And is the mixed type, i.e. creating a linked list from an array (using extra fields), a good idea?
Also, does my question depend on the language? I assume that shifting elements in an array has the same cost in all languages (at least asymptotically).
Singly-linked lists are very useful and can be better performance-wise relative to arrays if you are doing a lot of insertions/deletions, as opposed to pure referencing.
I haven't seen a good use for doubly-linked lists for decades.
I suppose there are some.
In terms of performance, never make decisions without understanding relative performance of your particular situation.
It's fairly common to see people asking about things that, comparatively speaking, are like getting a haircut to lose weight.
Before writing an app, I first ask if it should be compute-bound or IO-bound.
If IO-bound I try to make sure it actually is, by avoiding inefficiencies in IO, and keeping the processing straightforward.
If it should be compute-bound then I look at what its inner loop is likely to be, and try to make that swift.
Regardless, no matter how much I try, there will be (sometimes big) opportunities to make it go faster, and to find them I use this technique.
Whatever you do, don't just try to think it out or go back to your class notes.
Your problem is different from anyone else's, and so is the solution.
The problem with a list is not just the fragmentation, but mostly the data dependency. If you access every Nth element in array you don't have locality, but the accesses may still go to memory in parallel since you know the address. In a list it depends on the data being retrieved, and therefore traversing a list effectively serializes your memory accesses, causing it to be much slower in practice. This of course is orthogonal to asymptotic complexities, and would harm you regardless of the size.

Is there a parallel flood fill implementation?

I've got OpenMP and MPI at my disposal, and was wondering if anyone has come across a parallel version of any flood fill algorithm (preferably in C). If not, I'd be interested in sketches of how to parallelise it - is it even possible, given that it's based on recursion?
Wikipedia's got a pretty good article if you need to refresh your memory on flood fills.
Many thanks for your help.
There's nothing "inherently" recursive about flood fill, just that to do some work, you need some information about previously discovered "frontier" cells. If you think of it that way, it's clear that parallelism is eminently possible: even with a single queue, you could use four threads (one for each direction), and only move the tail of the queue when the cell has been examined by each thread. Or, equivalently, four queues. Thinking in this way, one might even imagine partitioning the space into multiple queues, bucketed by coordinate ranges, perhaps.
One basic problem is that the problem definition usually includes the proviso that no cell is ever revisited. This implies that each worker needs an up-to-date map of which cells have been considered (globally). Mutable global information is problematic, performance-wise, though it's not hard to think of ways to limit the necessity of propagating updates globally...

Is there any practical usage of Doubly Linked List, Queues and Stacks?

I've been coding for quite some time now, and my work pertains to solving real-world business scenarios. However, I have not really come across any practical usage of some of the data structures like the linked list, queue and stack.
Not even at the business framework level. Of course, there is the ubiquitous HashTable, ArrayList and of late the List...but is there any practical usage of some of the other basic data structures?
It would be great if someone gave a real-world solution where a Doubly Linked List "performs" better than the obvious easily usable counterpart.
Of course it’s possible to get by with only a Map (aka HashTable) and a List. A Queue is only a glorified List, but if you use a Queue everywhere you really need a queue, your code gets a lot more readable because nobody has to guess what you are using that List for.
And then there are algorithms that work a lot better when the underlying data structure is not a plain List but a DoublyLinkedList due to the way they have to navigate the list. The same is valid for all other data structures: there’s always a use for them. :)
Stacks can be used for parsing, such as matching open brackets to closing brackets.
Queues can be used for messaging, or activity processing.
Linked lists, or doubly linked lists, can be used for circular navigation.
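The bracket-matching use of a stack can be sketched in a few lines (illustrative code; the class and method names are made up):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Balanced-bracket check: every opener pushes its expected closer;
// every closer must match the most recent unmatched opener, which
// is exactly the stack's top element.
class BracketChecker {
    static boolean isBalanced(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char ch : s.toCharArray()) {
            switch (ch) {
                case '(': stack.push(')'); break;
                case '[': stack.push(']'); break;
                case '{': stack.push('}'); break;
                case ')': case ']': case '}':
                    if (stack.isEmpty() || stack.pop() != ch) return false;
                    break;
                default: break;  // ignore non-bracket characters
            }
        }
        return stack.isEmpty();  // leftover openers mean unbalanced
    }
}
```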
Most of these structures usually sit at a lower level than your usual "business" application. For example, indexes in a database are a variation of a multiply linked list. The implementation of a function-calling mechanism (or a parse tree) is a stack. Queues and FIFOs are used for servicing network requests, etc.
These are just examples of collection structures that are optimized for speed in various scenarios.
LIFO-Stack and FIFO-Queue are reasonably abstract (behavioral spec-level) data structures, so of course there are plenty of practical uses for them. For example, LIFO-Stack is a great way to help remove recursion (stack up the current state and loop, instead of making a recursive call); FIFO-Queue helps "buffer up" and "peel away" work nuggets in a coroutine arrangement; etc, etc.
Doubly-linked-List is more of an implementation issue than a behavioral spec-level one, mostly... it can be a good way to implement a FIFO-Queue, for example. If you need a sequence with fast splicing and removal given a pointer to one sequence item, you'll find plenty of other real-world uses, too.
I use queues, linked lists etc. in business solutions all the time.
Except they are implemented by Oracle, IBM, JMS etc.
These constructs are generally at a much lower level of abstraction than you would want while implementing a business solution. Where a business problem would benefit from such low-level constructs (e.g. delivery route planning, production line scheduling etc.), there is usually a package available to do it for you.
I don't use them very often, but they do come up. For example, I'm using a queue in a current project to process asynchronous character equipment changes that must happen in the order the user makes them.
A linked list is useful if you have a subset of "selected" items out of a larger set of items, where you must perform one type of operation on a "selected" item and a default operation (or no operation at all) on a normal item, and the set of "selected" items can change at will (possibly due to user input). Because linked-list removal can be done nearly instantaneously (vs. the traversal time an array search would take), if the subsets are large enough, it's faster to maintain a linked list than to either maintain an array or regenerate the whole subset by scanning through the whole larger set every time you need it.
With a hash table or binary tree, you could search for a single "selected" item, but you couldn't search for all "selected" items without checking every item (or having a separate dictionary for every permutation of selected items, which is obviously impractical).
A queue can be useful if you are in a scenario where you have a lot of requests coming in and you want to make sure to handle them fairly, in order.
I use stacks whenever I have a recursive algorithm, which usually means it's operating on some hierarchical data structure, and I want to print an error message if I run out of memory instead of simply letting the software crash if the program stack runs out of space. Instead of calling the function recursively, I store its local variables in an object, run a loop, and maintain a stack of those objects.
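That recursion-to-explicit-stack pattern can be sketched like this (illustrative names; a recursive tree sum rewritten as a loop over a stack of nodes):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal binary tree node for the sketch.
class Node {
    int value;
    Node left, right;
    Node(int value) { this.value = value; }
}

class TreeSum {
    // Instead of recursing, push the "local variables" of each
    // would-be recursive call (here just the node) onto an explicit
    // stack. Depth is now limited by heap memory, not the program
    // stack, so running out can be detected and reported gracefully.
    static long sum(Node root) {
        long total = 0;
        Deque<Node> stack = new ArrayDeque<>();
        if (root != null) stack.push(root);
        while (!stack.isEmpty()) {
            Node n = stack.pop();
            total += n.value;
            if (n.left != null) stack.push(n.left);
            if (n.right != null) stack.push(n.right);
        }
        return total;
    }
}
```

The loop visits exactly the nodes the recursive version would, just with the call frames made explicit as stack entries.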

Resources