What is the Double Stack? - data-structures

I'm studying for the final exam and I see the following question at the end of the professor's PPT slides about the stack:
What is a Double Stack?
I know that a stack is an ordered collection of homogeneous elements (i.e. a list) in which all insertions and deletions are made at one end of the list, called the top of the stack, but what is a double stack? I tried searching through Google and had no luck finding an answer.

It could be two stacks stored in a single array, growing in opposite directions.
http://www.ceglug.org/index.php/labs/45-double-stack-implementationwith-structuresand
This is the only reference I found, though.

A DoubleStack is a stack of double values.
You can find more info at
http://www.cis.syr.edu/courses/cis351/docs/edu.colorado.collections.DoubleStack.html.gz

Double stack means two stacks implemented in a single array. To prevent memory wastage, the two stacks grow in opposite directions. The pointers tops1 and tops2 point to the top-most elements of stack 1 and stack 2 respectively. Initially, tops1 is set to -1 and tops2 to the capacity of the array. As elements are pushed onto stack 1, tops1 is incremented; as elements are pushed onto stack 2, tops2 is decremented. The array is full when tops1 == tops2 - 1; beyond this point, pushing onto either stack causes an overflow.
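A minimal sketch of that layout in C++ (class and member names are my own, not from any standard implementation):

#include <stdexcept>

template <class T, int N>
class DoubleStack {
    T arr[N];
    int tops1 = -1;  // stack 1 grows upward from index 0
    int tops2 = N;   // stack 2 grows downward from index N - 1
public:
    void push1(const T& v) {
        if (tops1 == tops2 - 1) throw std::overflow_error("array full");
        arr[++tops1] = v;
    }
    void push2(const T& v) {
        if (tops1 == tops2 - 1) throw std::overflow_error("array full");
        arr[--tops2] = v;
    }
    T pop1() {
        if (tops1 == -1) throw std::underflow_error("stack 1 empty");
        return arr[tops1--];
    }
    T pop2() {
        if (tops2 == N) throw std::underflow_error("stack 2 empty");
        return arr[tops2++];
    }
};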

Related

Implement and merge two stacks in O(1)

I've been asked this question somewhere.
I've been given 2 stacks. I have to implement the following operations:
// Pass one of the stacks and a value to insert
push(Stack stack, value)
pop(Stack stack, val)
merge(Stack s1, Stack s2)
I have to perform the above stack operations like push and pop in O(1). So far I've used a linked list to implement them successfully.
But how can I merge the two stacks in O(1)? I couldn't find how to do it in O(1).
Maybe I need to use some other data structure or something?
It's really easy if your stack objects keep track of both ends of the stack (top/bottom, start/end, head/tail, whatever). I'll use top/bottom for this answer.
When you implement push/pop you operate on the top object. The bottom remains the same (unless the stack is empty), and the node that represents it has its next pointer set to null.
So to merge two stacks you take the bottom of one, point it to the top of the other, and return a "new" stack formed from the other two pointers.
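For concreteness, this is the kind of layout that assumes (a hypothetical C++-style sketch; the pseudocode below writes dots where C++ would use ->):

struct Node {
    int value;
    Node* next;    // points toward the bottom of the stack
};

struct Stack {
    Node* top;     // newest element; push/pop happen here
    Node* bottom;  // oldest element; its next pointer is null
};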
Stack merge(Stack s1, Stack s2) {
    // Join the stacks: s2 ends up on top of s1.
    s2.bottom.next = s1.top;
    // Make a nice object to give back.
    Stack result;
    result.bottom = s1.bottom;
    result.top = s2.top;
    // Clean up the parameters so they don't alias the new structure.
    s1.bottom = s1.top = s2.bottom = s2.top = null;
    return result;
}
If you don't have the two pointers kept in the stack object, you would need to traverse one of the stacks to find what is kept here as bottom, making the complexity O(N).
I would like to give another perspective: the programming/object-oriented one. Suppose you do not have a pointer to the end of the stack, as suggested before, and suppose merging means "first return the elements of one stack, then the other", i.e. it defines an order between them (a real consideration the question does not address). You could follow this approach:
Create a StackList object which extends Stack. Java example:
class StackList extends Stack
Now hold a linked list of Stacks in it. Merging is trivial: just add the Stacks to the list; pop/push simply call the pop/push methods of the head Stack.
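A rough sketch of the same idea in C++, assuming std::list and std::stack (all names here are mine): merge splices the lists in O(1), and push/pop delegate to the head stack.

#include <list>
#include <stack>

template <class T>
class StackList {
    std::list<std::stack<T>> stacks;  // front = currently active stack
public:
    void push(const T& v) {
        if (stacks.empty()) stacks.emplace_front();
        stacks.front().push(v);
    }
    T pop() {  // assumes the structure is non-empty
        T v = stacks.front().top();
        stacks.front().pop();
        if (stacks.front().empty()) stacks.pop_front();  // advance to the next stack
        return v;
    }
    // O(1): splice the other StackList's stacks after ours, so its
    // elements are returned only after ours run out.
    void merge(StackList& other) {
        stacks.splice(stacks.end(), other.stacks);
    }
};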

Stack overflow solution with O(n) runtime

I have a problem related to the runtime of push and pop in a stack.
I implemented a stack using an array.
I want to avoid overflow when I insert a new element into a full stack, so when the stack is full I do the following (pseudo-code):
(I treat the stack as an array)
Generate a new array with double the size of the original array.
Copy all the elements of the original stack into the new array in the same order.
Now, I know that a single push onto a stack of size n executes in O(n) in the worst case.
I want to show that the runtime of n pushes to an initially empty stack is also O(n) in the worst case.
Also, how can I change this algorithm so that every push executes in constant time in the worst case?
Amortized constant-time is often just as good in practice if not better than constant-time alternatives.
Generate a new array with double the size of the original array.
Copy all the elements of the original stack into the new array in the same order.
This is actually a very decent and respectable solution for a stack implementation because it has good locality of reference and the cost of reallocating and copying is amortized to the point of being almost negligible: over n pushes, the total copying work is 1 + 2 + 4 + ... < 2n, i.e. O(1) amortized per push. Most generalized solutions to "growable arrays" like ArrayList in Java or std::vector in C++ rely on this type of solution, though they might not exactly double in size (many std::vector implementations grow by a factor closer to 1.5 than 2.0).
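A minimal sketch of that doubling strategy, in C++ for concreteness (names are illustrative):

#include <cstddef>
#include <utility>

template <class T>
class GrowingStack {
    T* data = nullptr;
    std::size_t count = 0;
    std::size_t cap = 0;
public:
    void push(const T& v) {
        if (count == cap) {                       // full: double the storage
            std::size_t newCap = cap ? cap * 2 : 1;
            T* bigger = new T[newCap];
            for (std::size_t i = 0; i < count; ++i)
                bigger[i] = std::move(data[i]);   // same order as before
            delete[] data;
            data = bigger;
            cap = newCap;
        }
        data[count++] = v;                        // O(1) amortized
    }
    T pop() { return std::move(data[--count]); }  // assumes non-empty
    ~GrowingStack() { delete[] data; }
};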
One of the reasons this is much better than it sounds is because our hardware is super fast at copying bits and bytes sequentially. After all, we often rely on millions of pixels being blitted many times a second in our daily software. That's a copying operation from one image to another (or frame buffer). If the data is contiguous and just sequentially processed, our hardware can do it very quickly.
Also, how can I change this algorithm so that every push executes in constant time in the worst case?
I have come up with stack solutions in C++ that are ever-so-slightly faster than std::vector for pushing and popping a hundred million elements, and that meet your requirements, but only for a strict LIFO pattern. We're talking about something like 0.22 secs for vector as opposed to 0.19 secs for my stack. That relies on allocating blocks like this:
... of course typically with more than 5 elements' worth of data per block! (I just didn't want to draw an epic diagram.) Each block stores an array of contiguous data, but when it fills up, it links to a new block. The blocks are linked (each storing only a previous link), and each one might store, say, 512 bytes' worth of data with 64-byte alignment. That allows constant-time pushes and pops without any need to reallocate/copy. When a block fills up, we just link a new block to it and start filling that one up. When you pop, you pop until the block becomes empty; once it's empty, you follow its previous link to get to the block before it and start popping from that (you can also free the now-empty block at this point).
Here's your basic pseudo-C++ example of the data structure:
template <class T>
struct UnrolledNode
{
    // Points to the previous block. We use this to get
    // back to a former block when this one becomes empty.
    UnrolledNode* prev;

    // Stores the number of elements in the block. If
    // this becomes full with, say, 256 elements, we
    // allocate a new block and link it to this one.
    // If this reaches zero, we deallocate this block
    // and begin popping from the previous block.
    size_t num;

    // Array of the elements. This has a fixed capacity,
    // say 256 elements, though determined at runtime
    // based on sizeof(T). The structure is a VLS to
    // allow node and data to be allocated together.
    T data[];
};

template <class T>
struct UnrolledStack
{
    // Stores the tail end of the list (the last
    // block we're going to be using to push to and
    // pop from).
    UnrolledNode<T>* tail;
};
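To make the push/pop mechanics concrete, here is a self-contained sketch along the same lines (I use a fixed in-struct array instead of the flexible array member, and a tiny block capacity just for illustration):

#include <cstddef>

template <class T>
struct Block {
    static constexpr std::size_t CAP = 4;  // real blocks would be much larger
    Block* prev = nullptr;                 // link back to the previous block
    std::size_t num = 0;                   // elements stored in this block
    T data[CAP];
};

template <class T>
class BlockStack {
    Block<T>* tail = nullptr;  // the active (last) block
public:
    void push(const T& v) {
        if (!tail || tail->num == Block<T>::CAP) {
            Block<T>* b = new Block<T>();  // start a new block; old elements
            b->prev = tail;                // are never copied or moved
            tail = b;
        }
        tail->data[tail->num++] = v;       // true O(1) worst case
    }
    T pop() {                              // assumes non-empty
        T v = tail->data[--tail->num];
        if (tail->num == 0) {              // block drained: free it and
            Block<T>* p = tail->prev;      // fall back to the previous one
            delete tail;
            tail = p;
        }
        return v;
    }
    bool empty() const { return tail == nullptr; }
};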
That said, I actually recommend your solution for performance, since mine barely has an edge over the simple reallocate-and-copy approach, and yours has a slight edge for traversal since it can walk the array in a straightforward sequential fashion (with straightforward random access if you need it). I didn't actually implement mine for performance reasons; I implemented it to prevent pointers from being invalidated when you push things to the container (the actual thing is a memory allocator in C). Again, in spite of achieving true constant-time pushes and pops, it's barely any faster than the amortized constant-time solution involving reallocation and memory copying.

dc: how do I pop (and discard) the top number of the stack?

In dc, how do I pop and discard a number from the top of the stack? A stack with three items (1 2 3) should become a stack with two items (2 3). Currently I'm shoving the number onto another stack (Sz) but that seems rather lame.
There are numerous ways to delete the top of the stack in dc, but most of them have side effects. Removing an element cleanly means picking a sequence of commands whose side effects cancel out.
To remove the top of the stack without side effects, ensure that the top is a number and then run d!=r. If the stack held [5], this does the following:
1. Start with the item to remove. Stack: [5]
2. Duplicate the top of the stack. Stack: [5, 5]
3. Pop the top two and test whether they are unequal: 5 != 5. Stack: []
4. If the test passed (which it can't), run register r. Stack: []
To ensure that the top of the stack is a number, I use Z, which calculates the length of a string or the number of digits of a number and pushes that back. There are other options, such as X. Anything that turns the top of the stack into a number will work, since != can only compare numbers.
So the full answer, for copy-pasting in all situations, is the following:
Zd!=r
I usually stick this in register D (for Drop):
[Zd!=r]sD
and then I can run
lDx
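For example, a hypothetical session (f prints the whole stack, top first; # starts a comment in GNU dc):

$ dc
1 2 3       # push three numbers; 3 is now on top
[Zd!=r]sD   # store the drop macro in register D
lDx         # run it: the 3 is discarded
f           # print the stack, top first
2
1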

How does the stack work?

I have a couple of questions about the stack. One thing I don't understand is the "pop" and "push" idea. Say I have integers a and b, with a above b on the stack. To access b, as I understand it, a would have to be popped off the stack first. So where is a stored when it is popped off the stack?
Also, if stack memory is more efficient to access than heap memory, why isn't heap memory structured like the stack? Thanks.
So where is "a" stored when it is popped off the stack.
It depends. It goes where the program that's reading the stack decides. It may store the value, ignore it, print it, anything.
Also if stack memory is more efficient to access than heap memory, why
isn't heap memory structured like the stack?
A stack isn't inherently more efficient to access than a heap; it depends on the usage. The program's flow gets deeper and shallower just like a stack does. Local variables, arguments, and return addresses are, in mainstream languages, stored in a stack structure because that structure most easily implements the semantics of what we call a function's stack frame. A function can very efficiently access its own stack frame, but not necessarily its callers' stack frames, that is, the whole stack.
On the other hand, the heap would be inefficient if it were implemented that way, because the heap is expected to allow access to, and possibly deletion of, items anywhere, not just at its top or bottom.
I'm not an expert, but you can sort of think of this like the Tower of Hanoi puzzle. To access a lower disc, you "pop" discs above it and place them elsewhere - in this case, on other stacks, but in the case of programming it could be just a simple variable or pointer or anything. When you've got the item you need, then the other ones can be put back on the stack or moved elsewhere entirely.
Let's take your scenario.
You have a stack with n elements on it; the last one is a, with b underneath.
The pop operation returns the popped value, so if you want to access the second element from the top, b, you could do:
var temp = stack.pop()
var b = stack.pop()
stack.push(temp)
However, a stack would rarely be used this way. It is a LIFO structure and works best when accessed like one.
It sounds like you actually need a collection with random, index-based access.
That collection would probably be stored on the heap. Hope this clarifies stack pop/push a little.
a is stored wherever you decide to store it. :-) You need to provide a variable in which to store the value at the top of the stack (a) when you remove it, then remove the next item (b) and store it in a different variable to use it, and then push the first value (a) back onto the stack.
Picture an actual pile of dirty plates sitting on your counter to your left. You pick one up to wash it (pop it from the "dirty" stack), wash it, dry it, and put it on the top of the clean stack (push it) on your right.
If you want to reach the second plate from the top in either stack, you have to move the top one to get to it. So you pick it up (pop it), put it somewhere temporarily, pick up the next plate (pop it) and put it somewhere, and then put the first one you removed back on the pile (push it back on the stack).
If you can't picture it with plates, use an actual deck of playing cards (or baseball cards, or a stack of paper - anything you can neatly pile ("stack")) and put it on your desk at your left hand. Then perform the steps in my last paragraph, replacing the word "plate" with "card" and physically performing the steps.
So to access b, you declare a variable to store a in, pop a and save it in that variable, pop b into its own variable, and then push a back onto the stack.

Is Stack with PushAt/PopAt still a Stack?

Let's say I have a data structure similar to a Stack, but in addition to the usual Push/Pop it also has functions such as PushAt/PopAt, both of which take an integer as input and add/return the item at that particular location in the data structure.
Now, a Stack is supposed to be LIFO. Does this data structure still qualify as a "Stack"?
In HP RPN calculators and in PostScript/PDF, operators other than push and pop exist:
swap or exch for permuting the top of the stack and the next element,
roll as a generalization of swap.
Their main data structure is still considered a stack.
pushAt and popAt can be written using only pop/push and roll, so your data structure can still be called a stack (see the sketch below).
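As a sketch of why that's true, here is popAt written with nothing but push/pop and an auxiliary stack (which plays the role roll plays on an RPN calculator); names and std::stack are mine, for brevity:

#include <cstddef>
#include <stack>

// Pop the element `depth` positions below the top (depth 0 = top),
// using only push/pop plus a scratch stack.
template <class T>
T popAt(std::stack<T>& s, std::size_t depth) {
    std::stack<T> aux;
    for (std::size_t i = 0; i < depth; ++i) {  // set aside the items above it
        aux.push(s.top());
        s.pop();
    }
    T v = s.top();          // the requested element
    s.pop();
    while (!aux.empty()) {  // put the others back in their original order
        s.push(aux.top());
        aux.pop();
    }
    return v;
}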
Technically not. LIFO means (as you know) last-in, first-out. If the last element going in isn't the first to come out, it doesn't satisfy the "semantic interface" (or contract) of a stack.
However, since it seems you are only adding methods and not modifying the existing ones, your data structure can be used interchangeably with a stack as long as it is used like one, i.e. pushAt() and popAt() are never called (for instance because they are "hidden" by passing it as a Stack to a function).
So in that sense your version is a stack in the same way that a list is a stack in Java, for example. It can be used as one, but it can also do other things.
It's not a stack, because it's not LIFO; you have complete control over where items are read and written. It's just a normal list, IMHO.
Implementation would be easy with a linked list, because the pointers can simply be reassigned. With an array it would be harder: popping an entry from the middle means shifting the elements after it to fill the gap, which costs O(n) writes per operation, whereas a linked list only needs an O(n) traversal to find the position and O(1) work to unlink it.
It might look like an ADT, but it just sounds like an array.
IMO it's a stack as long as it supports Push and Pop. It doesn't matter if it also supports other actions.
If it's not a LIFO object, it can't be qualified as a Stack. I would say it's simply a List with Push/Pop functionality, which is nothing but AddAtEnd() and RemoveFromEnd().
Your data structure can be used as a stack, or as an array/list.
A stack is just a specialized form of a list, and your implementation appears to negate that specialness, so I would lean towards calling it a list instead of a stack.
Actually, as long as you can Pop and Push elements you can still see it as a stack. A more flexible stack.
