Please help me with this question: given a stack A with n keys, propose an algorithm that prints the m smallest stack elements without changing the stack, in O(m log n) time.
I already know how to get the minimal value in O(1) (by creating an auxiliary stack and, on each push, comparing the new value with the previous minimum saved on the auxiliary stack).
Thanks a lot.
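The auxiliary-stack trick mentioned above can be sketched like this (the class and method names are mine, not part of the question):

```cpp
#include <stack>
using namespace std;

// Sketch of the auxiliary-stack trick: a second stack mirrors the running
// minimum, so getMin() is O(1). Names are illustrative, not from the question.
class MinStack {
    stack<int> data, mins;
public:
    void push(int x) {
        data.push(x);
        // Record the minimum of the stack as it is after this push.
        mins.push(mins.empty() || x < mins.top() ? x : mins.top());
    }
    int pop() {
        int x = data.top();
        data.pop();
        mins.pop();          // keeps mins in lockstep with data
        return x;
    }
    int getMin() const { return mins.top(); }   // O(1)
    bool empty() const { return data.empty(); }
};
```

Each entry of `mins` stores the minimum of everything at or below the same position in `data`, so popping both stacks together keeps the invariant intact.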
I have two stacks. One stack is empty and the other contains a list of numbers. We need to segregate even and odd numbers so that one stack contains only even numbers and the other only odd numbers. I am unable to find an optimal solution with O(n) or O(n log n) time complexity and O(1) space complexity. Please help.
Quadratic approach. Let stack A contain the values and B be empty.
Pseudocode:
while not A.Empty:
    x = A.Pop
    if IsOdd(x):
        while not B.Empty and IsEven(B.Peek):
            A.Push(B.Pop)
    B.Push(x)
while not B.Empty and IsEven(B.Peek):
    A.Push(B.Pop)
Now A contains even items, B contains odd ones.
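A minimal, runnable translation of the pseudocode above (the function name is mine):

```cpp
#include <stack>
using namespace std;

// Translation of the pseudocode: afterwards all even values are on A and
// all odd values are on B, using no storage beyond the two stacks
// (quadratic time, since evens can shuttle between the stacks repeatedly).
void segregate(stack<int>& A, stack<int>& B)
{
    while (!A.empty()) {
        int x = A.top(); A.pop();
        if (x % 2 != 0) {
            // x is odd: first move any evens off B's top so that
            // B stays "odds below, evens above".
            while (!B.empty() && B.top() % 2 == 0) {
                A.push(B.top()); B.pop();
            }
        }
        B.push(x);
    }
    // Return the trailing evens from B's top back to A.
    while (!B.empty() && B.top() % 2 == 0) {
        A.push(B.top()); B.pop();
    }
}
```

The invariant is that B always holds odd values below any even values; the final loop strips the even layer off the top, leaving only odds on B.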
Suppose there are 50,000+ integer entries in a stack. How can we efficiently find the maximum or minimum among them?
The stack can grow and shrink via push() and pop(), but we need to keep track of the max and min values in the stack at all times.
As Mischel pointed out, the question Stack with find-min/find-max more efficient than O(n)? already answers this.
However, that answer suggests storing the max and min at each and every point in the stack. A better solution is to keep a separate stack for the max and another for the min. For the max stack, push a new element only if it is greater than the current max, and vice versa for the min. Pop an element off the min or max stack when the element being popped off the main stack is equal to it and not equal to the next element in the main stack (the second check deals with duplicates).
Note that this requires O(n) extra space while providing O(1) operations.
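Here is a sketch of the separate max-stack idea, with one simplification: ties are also pushed (x >= max), so popping only needs to compare against the auxiliary stack's top and the "next element" check goes away. A min stack works symmetrically. The class name is mine:

```cpp
#include <stack>
using namespace std;

// Sketch of the separate max-stack idea. Pushing ties (x >= current max)
// means every occurrence of the maximum has its own entry on maxs, so
// pop() only has to compare against maxs.top().
class MaxTrackingStack {
    stack<int> data, maxs;
public:
    void push(int x) {
        data.push(x);
        if (maxs.empty() || x >= maxs.top())
            maxs.push(x);               // new (or tied) maximum
    }
    int pop() {
        int x = data.top(); data.pop();
        if (x == maxs.top())
            maxs.pop();                 // that maximum just left the stack
        return x;
    }
    int max() const { return maxs.top(); }   // O(1)
};
```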
The only idea I have is to inherit from Stack and keep the max and min inside it, updating them on every push and pop.
I was working through a tutorial sheet I found online and came across a question I couldn't figure out how to solve.
http://www.bowdoin.edu/~ltoma/teaching/cs231/fall08/Problems/amortized.pdf
An ordered stack S is a stack where the elements appear in increasing order. It supports the following operations:
Init(S): Create an empty ordered stack.
Pop(S): Delete and return the top element from the ordered stack.
Push(S, x): Insert x at top of the ordered stack and re-establish the increasing
order by repeatedly removing the element immediately below x until x is the
largest element on the stack.
Destroy(S): Delete all elements on the ordered stack.
Argue that the amortized running time of all operations is O(1). Can anyone help?
I think what you can do is:
First, note that Init(S) and Pop(S) genuinely take O(1) worst-case time: Init creates an empty stack and Pop removes a single element.
For Push(S, x), a single call can take O(n) time because of the repeated removals, but charge the cost of each removal to the earlier Push that inserted the removed element. Every element is inserted exactly once and removed at most once, whether by a Pop, by the reordering inside a later Push, or by Destroy. So any sequence of n operations performs at most n insertions and n removals in total, which is O(n) work overall and therefore O(1) amortized per operation. Destroy(S) is covered by the same accounting: its pops are paid for by the Pushes that inserted those elements.
(do comment if something is not correct)
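Here is a sketch of the Push operation from the problem statement (the struct name is mine). The comment marks the removals whose cost can be charged to earlier pushes, which is the heart of the O(1) amortized bound:

```cpp
#include <stack>
using namespace std;

// Ordered stack: elements are increasing from bottom to top.
// A single push may do many removals, but each removed element was
// pushed exactly once before, so total work over n operations is O(n).
struct OrderedStack {
    stack<int> s;
    void push(int x) {
        while (!s.empty() && s.top() > x)
            s.pop();     // removal charged to the push that inserted s.top()
        s.push(x);       // x is now the largest element on the stack
    }
    int pop() { int x = s.top(); s.pop(); return x; }
    bool empty() const { return s.empty(); }
};
```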
I recently saw a question which required reversing a stack in O(1) space.
1) The stack is not necessarily backed by an array, so we can't access elements by index.
2) The number of elements is not known.
I came up with the code below, but I'm not convinced that it is O(1) space, because an "int temp" is declared in every recursive call (if there are initially n elements in the stack, the recursion goes n levels deep), so it takes O(n) space.
Please tell me whether I am right, and whether there is a better way to find the solution.
code:
#include <bits/stdc++.h>
using namespace std;

stack<int> st;

// insert x below all current elements of st
void insertAtBottom(int x)
{
    if (st.empty()) {
        st.push(x);
        return;
    }
    int temp = st.top();
    st.pop();
    insertAtBottom(x);
    st.push(temp);
}

// pop each element, then re-insert it at the bottom: this reverses st
void rec()
{
    if (st.empty())
        return;
    int temp = st.top();
    st.pop();
    rec();
    insertAtBottom(temp);
}

int main()
{
    st.push(1);
    st.push(2);
    st.push(3);
    st.push(4);
    rec();  // from top to bottom the stack is now 1 2 3 4
}
You can build 2 stacks "back to back" in a single array with n elements. Basically stack #1 is a "normal" stack, and stack #2 grows "downwards" from the end of the array.
Whenever the 2 stacks together contain all n elements, there is no gap between them, so for example popping an element from stack #1 and immediately pushing it onto stack #2 in this situation can be accomplished without even moving any data: just move the top pointer for stack #1 down, and the top pointer for stack #2 physically down (but logically up).
Suppose we start with all elements in stack #1. Now you can pop all of them except the last one, immediately pushing each onto stack #2. The last element you can pop off and store in a temporary place x (O(1) extra storage, which we are allowed). Now pop all n-1 items in stack #2, pushing each in turn back onto stack #1, and then finally push x back onto (the now-empty) stack #2. At this point, we have succeeded in deleting the bottom element in stack #1, and putting it at the top of (well, it's the only element in) stack #2.
Now just recurse: pretend we only have n-1 items, and solve this smaller problem. Keep recursing until all elements have been pushed onto stack #2 in reverse order. In one final step, pop each of them off and push them back onto stack #1.
All in all, O(n^2) steps are required, but we manage with just O(1) space.
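Here is a sketch of that back-to-back layout in code (the function name is mine): both stacks share one array, and each round deletes the bottom element of stack #1 and finalizes it on stack #2.

```cpp
#include <vector>
using namespace std;

// Stack #1 lives in a[0..top1-1] growing up; stack #2 lives in a[top2..n-1]
// growing down. Each round pops all but the bottom element of stack #1 onto
// stack #2, saves the bottom in x, moves the shuttled elements back, and
// finalizes x on stack #2. O(n^2) time, O(1) extra space.
void reverseStack(vector<int>& a)
{
    int n = (int)a.size();
    int top1 = n;                      // stack #1 holds a[0..top1-1]
    int top2 = n;                      // stack #2 holds a[top2..n-1]
    while (top1 > 0) {
        int moved = 0;
        while (top1 > 1) {             // pop all but one from #1, push on #2
            a[--top2] = a[top1 - 1];
            --top1;
            ++moved;
        }
        int x = a[--top1];             // old bottom of stack #1, kept aside
        while (moved-- > 0)            // move the shuttled items back to #1
            a[top1++] = a[top2++];
        a[--top2] = x;                 // finalize x on stack #2
    }
    // Stack #2 now holds every element in reverse order; because the two
    // stacks share the same array cells, the final pop-everything-back step
    // from the description leaves the array exactly as it stands.
}
```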
The only way I can think of is to write your own stack using a linked list and then swap the head/tail pointers and a "direction" indicator which tells your routine to go forward or backwards when you push/pop. Any other way I can think of would be O(n).
If you know upper limit of n you can also use an array/index instead of a list.
Whether it makes sense to do so is probably dependent on the reason for doing so and the language.
I am reading a quick sort implementation using a stack at the following link.
link
My question is regarding the following paragraph.
The policy of putting the larger of the small subfiles on the stack
ensures that each entry on the stack is no more than one-half of the
size of the one below it, so that the stack needs to contain room for
only about lg N entries. This maximum stack usage occurs when the
partition always falls at the center of the file. For random files,
the actual maximum stack size is much lower; for degenerate files it
is likely to be small.
This technique does not necessarily work in a truly recursive
implementation, because it depends on end- or tail-recursion removal.
If the last action of a procedure is to call another procedure, some
programming environments will arrange things such that local variables
are cleared from the stack before, rather than after, the call.
Without end-recursion removal, we cannot guarantee that the stack size
will be small for quicksort.
What does the author mean by "each entry on the stack is no more than one-half of the size of the one below it"? Could you please give an example of this?
How did the author come to the conclusion that the stack needs space for only about lg N entries?
What does the author mean by "Without end-recursion removal, we cannot guarantee that the stack size will be small for quicksort"?
Thanks for your time and help.
The policy of putting the larger of the small subfiles on the stack ensures that each entry on the stack is no more than one-half of the size of the one below it,
That is not quite true. Suppose you want to sort a 100-element array, and the first pivot lands right in the middle. Then you have the stack
49
50
then you pop the 49-element part off the stack, partition it, and push the two parts onto the stack. Let's say the choice of pivot was not quite as good this time, and 20 elements were not larger than the pivot. Then you'd get the stack
20
28
50
and each stack entry is more than half of the one below.
But that cannot continue forever, and we have
During the entire sorting, if stack level k is occupied, its size is at most total_size / (2^k).
That is obviously true when the sorting begins, since then there is only one element on the stack, at level 0, which is the entire array of size total_size.
Now, assume the stated property holds on entering the loop (while(!stack.empty())).
A subarray of length s is popped from stack level m. If s <= 1, nothing else is done before the next loop iteration, and the invariant continues to hold. Otherwise, if s >= 2, partitioning it yields two new subarrays to be pushed on the stack, with s-1 elements between them (the pivot is placed and excluded). The smaller of the two has size smaller_size <= (s-1)/2, and the larger has size larger_size <= s-1. Stack level m will be occupied by the larger of the two, and we have
larger_size <= s-1 < s <= total_size / (2^m)
smaller_size <= (s-1)/2 < s/2 <= total_size / (2^(m+1))
for stack levels m and m+1 respectively at the end of the loop body. The invariant holds for the next iteration.
Since at most one subarray of size 0 is ever on the stack (it is then immediately popped off in the next iteration), there are never more than lg total_size + 1 stack levels occupied.
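To make the policy concrete, here is a sketch of an iterative quicksort that pushes the larger subarray and keeps looping on the smaller one. The pivot choice (last element, Lomuto partition) is my own, not the book's:

```cpp
#include <algorithm>
#include <stack>
#include <utility>
#include <vector>
using namespace std;

// Iterative quicksort illustrating the quoted policy: after partitioning,
// the larger subarray goes on the explicit stack and the loop continues
// with the smaller one, so the stack never holds more than about lg N
// ranges at once.
void quicksort(vector<int>& a)
{
    stack<pair<int, int>> ranges;            // inclusive [lo, hi] ranges
    if (!a.empty())
        ranges.push({0, (int)a.size() - 1});
    while (!ranges.empty()) {
        int lo = ranges.top().first, hi = ranges.top().second;
        ranges.pop();
        while (lo < hi) {
            int pivot = a[hi], i = lo;       // Lomuto partition around a[hi]
            for (int j = lo; j < hi; ++j)
                if (a[j] < pivot)
                    swap(a[i++], a[j]);
            swap(a[i], a[hi]);               // pivot now sits at index i
            if (i - lo < hi - i) {           // left side is smaller
                ranges.push({i + 1, hi});    // stack the larger right side
                hi = i - 1;                  // continue with the smaller
            } else {
                ranges.push({lo, i - 1});    // stack the larger left side
                lo = i + 1;
            }
        }
    }
}
```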
Regarding
What does author mean by "Without end-recursion removal, we cannot guarantee that the stack size will be small for quicksort" ?
In a recursive implementation you can have deep recursion, and when the stack frame is not reused for the tail call, you may need linear stack space. Consider a naive pivot selection that always chooses the first element as pivot, applied to an already sorted array:
[0,1,2,3,4]
partition, pivot goes in position 0, the smaller subarray is empty. The recursive call for the larger subarray [1,2,3,4], allocates a new stack frame (so there are now two stack frames). Same principle, the next recursive call with the subarray [2,3,4] allocates a third stack frame, etc.
If one has end-recursion removal, i.e. the stack frame is reused, one has the same guarantees as with the manual stack above.
I will try to answer your question (hopefully I am not wrong)...
Because the larger of the two subfiles always goes on the stack while the smaller one is processed next, the entry sitting on top of another is at most half the size of the subfile that was just split. A size can be halved at most about lg N times before it reaches 1, so at most about lg N entries can be stacked at once. That explains your first and second questions ("each entry on the stack is no more than one-half" and "lg N entries").