My teacher gave me the following exercise:
"Given two sorted stacks, stack1 and stack2, design an algorithm to create a new sorted stack, stack3, merge between stack1 and stack2"
I am having trouble finding the easiest way to solve this exercise. Can anyone recommend an easy approach? I thought that maybe I could store both stack1 and stack2 in some other structure (maybe an array?) and then sort it, but that seems long-winded; I wonder if there is some other easy way.
P.S.: I can only use push and pop to insert/extract an element from the stacks.
The thing to keep in mind about this problem is that the stacks are already sorted. How would you do this as a human?
My guess is that you would do something like the following:
1. Look at the top element of each stack and compare the two.
2. Whichever is larger (or smaller, depending on how the stacks are sorted) gets pushed onto the new stack.
3. Repeat.
Maybe try translating this human algorithm into pseudocode and implementing it; you may end up with something that works. I hope that guides you in the right direction!
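If it helps, here is one way the human algorithm might look in Java (a sketch, not the only way; I'm assuming both stacks are sorted with the smallest element on top, and I peek at the tops for the comparison. If you truly may only push and pop, pop both values, compare, and push the loser back):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class MergeSortedStacks {
    // Merges two stacks that each have their smallest element on top
    // into one stack that also has its smallest element on top.
    static Deque<Integer> merge(Deque<Integer> s1, Deque<Integer> s2) {
        Deque<Integer> helper = new ArrayDeque<>(); // receives values in ascending order
        while (!s1.isEmpty() && !s2.isEmpty()) {
            // Pop whichever top is smaller; helper's top is always the largest so far.
            helper.push(s1.peek() <= s2.peek() ? s1.pop() : s2.pop());
        }
        while (!s1.isEmpty()) helper.push(s1.pop());
        while (!s2.isEmpty()) helper.push(s2.pop());

        // Reverse helper so the smallest element ends up on top again.
        Deque<Integer> result = new ArrayDeque<>();
        while (!helper.isEmpty()) result.push(helper.pop());
        return result;
    }
}
```

The final reversal pass is needed because the comparison loop can only build the helper stack largest-on-top.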
The reason you have to reverse the order is that you can only access the top element of each stack at any given time to compare and pop.
This means that regardless of whether stack1 and stack2 are in descending or ascending order, you will need to create two new stacks.
The first new stack (helperStack) is what you will push your compared/popped elements into.
The second new stack (sortedStack) serves the sole purpose of putting them in the reverse order which can be done by:
// reverses the order of helperStack
// and pushes the result onto a new sortedStack
while (!helperStack.empty()) {
    sortedStack.push(helperStack.pop());
}
You just have to create a temporary stack into which you keep pushing the larger of the two stacks' tops until both stacks are empty. If you then want the result ordered in the original direction, push all the elements from the temporary stack onto a new stack, one after another.
I don't understand the sliding-window algorithm that is used to find the maximum of subarrays of an array. Given an array of length n, we want to print the maximum of every subarray of length k. We can go through the array sequentially with a deque, pushing elements into it and popping elements that cannot be the maximum (I don't understand how the removals from the deque are handled).
I have seen algorithms that remove from both the front and the end of the deque, but I don't understand why. It is too complicated, so any help explaining how this algorithm works would be greatly appreciated.
Considering you know how a deque works (review it if you don't):
It would be nice if you could first simulate the algorithm with some examples.
I'll try to give you the main ideas (they're somewhat mixed together in the solution and here as well, sorry); it's best to read them after working through the algorithm yourself.
For me, the main trick is: the deque will keep a list of potential maximum values for the current and any upcoming windows.
Because of that, those values should be ordered. But how? Verify below how we can keep the maximum at the front of the deque.
The intention is to traverse the array only once, to achieve O(n).
So, what to do when you have at hand the next value of the array?
1. Remove from the deque any values that are outside the current window. You can safely remove them starting from the front; you may not agree right now, but keep reading and try to understand the next step.
2. Insert this next value at the rear of the deque, throwing away any smaller values there. [Here we ensure that all deque values stay ordered.]
There is a good reason for throwing those values away: from now on (for any upcoming window), the current value will always 'beat' any previous smaller value. Cool, isn't it?
Therefore the maximum of the current window is the first item in the deque.
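The two steps above can be sketched in Java as follows (a sketch, assuming 0-based indices; the deque stores indices rather than values so we can tell when the front has slid out of the window):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class SlidingWindowMax {
    // Returns the maximum of every window of length k, in O(n) total time.
    static int[] maxOfWindows(int[] a, int k) {
        int n = a.length;
        int[] out = new int[n - k + 1];
        // Holds indices; the values a[dq] decrease from front to rear,
        // so the front is always the maximum of the current window.
        Deque<Integer> dq = new ArrayDeque<>();
        for (int i = 0; i < n; i++) {
            // Step 1: drop the front index if it has fallen out of the window.
            if (!dq.isEmpty() && dq.peekFirst() <= i - k) dq.pollFirst();
            // Step 2: a[i] "beats" any smaller values at the rear, forever.
            while (!dq.isEmpty() && a[dq.peekLast()] <= a[i]) dq.pollLast();
            dq.addLast(i);
            // Once the first full window is formed, record its maximum.
            if (i >= k - 1) out[i - k + 1] = a[dq.peekFirst()];
        }
        return out;
    }
}
```

Each index enters and leaves the deque at most once, which is why the whole pass is O(n) even though there is a loop inside the loop.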
Garwick's Algorithm is an algorithm for dealing with stack overflows. I know what the original algorithm is and how it works. However, there is a modified Garwick's algorithm, and I only have a very vague description of it: "even stacks growing in the left direction, and odd stacks in the right direction".
The illustration of the modified algorithm from my lecture notes is as follows, and it is also very vague.
Can anyone help give more details about this modified algorithm, or provide some reference? Thank you!
If you need to put 2 stacks in an array, you can start one stack at the start of the array, growing upward as you push elements, and the other at the end, growing downward.
This way you don't need to worry about redistributing free space when one of them fills up, because they both use the same free space, and you can freely push onto either stack until the whole array is full.
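As a minimal sketch of that idea (my own illustration, not Garwick's redistribution logic): both stacks share the same free space, and overflow happens only when the array is completely full.

```java
public class TwoStacks {
    private final int[] arr;
    private int top1 = -1; // stack 1 grows upward from index 0
    private int top2;      // stack 2 grows downward from the end

    TwoStacks(int capacity) {
        arr = new int[capacity];
        top2 = capacity;
    }

    void push1(int x) {
        // Full only when the two tops meet, regardless of which stack grew.
        if (top1 + 1 == top2) throw new IllegalStateException("array full");
        arr[++top1] = x;
    }

    void push2(int x) {
        if (top1 + 1 == top2) throw new IllegalStateException("array full");
        arr[--top2] = x;
    }

    int pop1() {
        if (top1 < 0) throw new IllegalStateException("stack 1 empty");
        return arr[top1--];
    }

    int pop2() {
        if (top2 == arr.length) throw new IllegalStateException("stack 2 empty");
        return arr[top2++];
    }
}
```

Notice there is no per-stack capacity: either stack may use any amount of the shared free space in the middle.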
The modified Garwick algorithm you refer to extends this idea to more than 2 stacks. With the original Garwick algorithm, the array is divided into N segments, and each segment has one stack, with all stacks growing in the same direction. In the modified version, the array is divided into N/2 segments, and each segment has 2 stacks, one growing upward from the start of the segment, and one growing downward from the end.
In the modified algorithm, when one segment fills up, free space is redistributed among segments (pairs of stacks) in the same way that the original algorithm redistributes space among single stacks.
From this, we can design a data structure, SpecialStack, with a method getMin() that returns the minimum element of the SpecialStack.
My question is: how do I implement a method getMed() that returns the median element of the SpecialStack?
From the "data structure to find median" problem, we know the best data structure is two heaps: the left is a max-heap and the right a min-heap. However, for my question that doesn't seem to work, because I must keep track of the most recently pushed element, which a heap cannot do. Am I right?
Edit: I do not know how to maintain the index of the latest pushed element with a heap.
Thanks a lot.
You could alternatively use an Order Statistic Tree.
You can use any balanced binary search tree here instead of a heap. It is easy to find the minimum or maximum element in a tree (the leftmost and the rightmost node), and it also supports deletion in O(log N).
So you can maintain a stack and two binary search trees (instead of two heaps). The push implementation is pretty straightforward. To pop an element, delete the top element from the tree where it is stored, adjust the trees (as in the two-heaps algorithm), and then pop it from the stack.
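Java has no built-in balanced BST with duplicate keys, so as a sketch of this two-tree idea you can use TreeMaps as multisets in their place (all names here are my own):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.TreeMap;

// Stack supporting getMed() in O(1) after O(log n) push/pop.
// "low" holds the smaller half of the values, "high" the larger half;
// the (lower) median is always low's maximum.
public class MedianStack {
    private final Deque<Integer> stack = new ArrayDeque<>();
    private final TreeMap<Integer, Integer> low = new TreeMap<>();
    private final TreeMap<Integer, Integer> high = new TreeMap<>();
    private int lowSize = 0, highSize = 0;

    private static void add(TreeMap<Integer, Integer> t, int x) {
        t.merge(x, 1, Integer::sum); // multiset insert
    }

    private static void remove(TreeMap<Integer, Integer> t, int x) {
        if (t.merge(x, -1, Integer::sum) == 0) t.remove(x); // multiset delete
    }

    public void push(int x) {
        stack.push(x);
        if (lowSize == 0 || x <= low.lastKey()) { add(low, x); lowSize++; }
        else { add(high, x); highSize++; }
        rebalance();
    }

    public int pop() {
        int x = stack.pop();
        // Every value <= low's max lives in low, so this pick is safe.
        if (lowSize > 0 && x <= low.lastKey()) { remove(low, x); lowSize--; }
        else { remove(high, x); highSize--; }
        rebalance();
        return x;
    }

    public int getMed() {
        return low.lastKey(); // lower median
    }

    // Keep lowSize == highSize or lowSize == highSize + 1.
    private void rebalance() {
        while (lowSize > highSize + 1) {
            int x = low.lastKey(); remove(low, x); lowSize--; add(high, x); highSize++;
        }
        while (highSize > lowSize) {
            int x = high.firstKey(); remove(high, x); highSize--; add(low, x); lowSize++;
        }
    }
}
```

The stack itself keeps track of insertion order, which is exactly the piece the heaps could not provide; the trees only answer the order queries.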
So I need to find a data structure for this situation that I'll describe:
This is not my actual problem, but it explains the data structure aspect I need more succinctly:
I have an army made up of platoons. Every platoon has a number of men and a rank number (higher is better). If an enemy were to attack my army, they would kill some amount of POWER of my army, starting from the weakest platoon and working up, where it takes (platoon rank) power to kill each soldier of a platoon.
I could easily simulate enemies attacking me by peeking and popping elements from my priority queue of platoons, ordered by rank number, but that is not what I need to do. What I need is to allow enemies to view all the soldiers they would kill if they attacked me, without actually attacking, i.e. without actually deleting elements from my priority queue (if I implemented it as a PQ).
Sidenote: Java's PriorityQueue.iterator() returns elements in no particular order. I know an iterator is all I need, just FYI.
The problem is that if I implemented this as a PQ, I could only see the top element, so I would have to pop platoons off as if they were dying and then push them back on once the hypothetical attack has been calculated. I could also implement this as a linked list or array, but insertion takes too long. Ultimately I would love to use a priority queue; I just need the ability either to view the (pick an index)'th element of the PQ, or to have every object in the PQ hold a pointer to the next object, like a linked list.
Is this idea of maintaining linked-list-style pointers possible within Java's PriorityQueue? Is it already implemented somewhere in PriorityQueue that I don't know about? Is the index lookup implemented? Is there another data structure that would better serve my purpose? Is it realistic for me to find the source code of Java's PriorityQueue and rewrite it on my machine to maintain these pointers like a linked list?
Any ideas are very welcome, not really sure which path I want to take on this one.
One thing you could do is an augmented binary search tree. That would allow efficient access to the nth smallest element while still keeping the elements ordered. You could also use a threaded binary search tree. That would allow you to step from one element to the next larger one in constant time, which is faster than in a normal binary tree. Both of these data structures are slower than a heap, though.
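If the simulated attacks are infrequent, one cheap alternative (a sketch, not the only way) is to copy the queue's contents and sort the copy: PriorityQueue's iterator gives no useful order, but streaming the elements out and sorting yields the kill order in O(n log n) without disturbing the queue at all.

```java
import java.util.Arrays;
import java.util.PriorityQueue;

public class PeekAll {
    // Returns the queue's elements in priority (min-first) order
    // without removing anything from the queue itself.
    static int[] inOrder(PriorityQueue<Integer> pq) {
        int[] copy = pq.stream().mapToInt(Integer::intValue).toArray();
        Arrays.sort(copy); // natural order, matching the PQ's ordering
        return copy;
    }
}
```

This avoids the pop-then-push-back dance entirely; the original queue is read-only during the simulation.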
I have a list that is frequently insertion sorted. Is there a good position (other than the end) for adding to this list to minimize the work that the insertion sort has to do?
The best place to insert would be where the element belongs in the sorted list. This would be similar to preemptively insertion sorting.
Your question doesn't quite make sense: if the list is insertion sorted, you can't append to the end by definition; the element will always end up in the place where it belongs (otherwise, the list wouldn't be sorted).
If you have to add lots of elements, then the best solution is to clone the list, add all elements, sort the new list once and then replace the first list with the clone.
[EDIT] In reply to your comments: after doing a couple of appends, you must sort the list before you can do the next sorted insertion. So the question isn't how to make the sorted insertion cheaper, but how to make the sort between the appends and the sorted insertions cheaper.
The answer is that most sorting algorithms do pretty well with partially sorted lists. The questions you need to ask are: what sorting algorithm is used, what properties does it have, and, most importantly, why you should care.
The last question means that you should measure performance before you do any kind of optimization because you have a 90% chance that it will hurt more than it helps unless it's based on actual numbers.
Back to the sorting. Java uses a version of quicksort to sort collections. Quicksort will select a pivot element to partition the collection. This selection is crucial for the performance of the algorithm. For best performance, the pivot element should be as close to the element in the middle of the result as possible. Usually, quicksort uses an element from the middle of the current partition as a pivot element. Also, quicksort will start processing the list with the small indexes.
So adding the new elements at the end might not give you good performance. It won't affect the pivot element selection but quicksort will look at the new elements after it has checked all the sorted elements already. Adding the new elements in the middle will affect the pivot selection and we can't really tell whether that will have an influence on the performance or not. My instinctive guess is that the pivot element will be better if quicksort finds sorted elements in the middle of the partitions.
That leaves adding new elements at the beginning. This way, quicksort will usually find a perfect pivot element (since the middle of the list will be sorted) and it will pick up the new elements first. The drawback is that you must copy the whole array for every insert. There are two ways to avoid that: a) As I said elsewhere, today's PCs copy huge amounts of RAM in almost no time at all, so you can just ignore this small performance hit. b) You can use a second ArrayList, put all the new elements in it, and then use addAll(). Java will do some optimizations internally for this case and move the existing elements only once.
[EDIT2] I completely misunderstood your question. For the insertion sort algorithm itself, the best place is probably somewhere in the middle. This should halve the chance that you have to move an element through the whole list. But since I'm not 100% sure, I suggest creating a couple of small tests to verify this.
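One way to run that experiment (a sketch with hypothetical helper names): count the element moves a plain insertion sort performs when the new value is placed at different positions in an otherwise sorted array.

```java
public class InsertionSortCost {
    // Insertion-sorts a in place and returns how many element moves it made.
    static long sortAndCountMoves(int[] a) {
        long moves = 0;
        for (int i = 1; i < a.length; i++) {
            int key = a[i], j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j]; // shift one slot right
                j--;
                moves++;
            }
            a[j + 1] = key;
        }
        return moves;
    }

    // Returns a copy of `sorted` with `value` spliced in at index `pos`.
    static int[] withValueAt(int[] sorted, int pos, int value) {
        int[] a = new int[sorted.length + 1];
        for (int i = 0, j = 0; i < a.length; i++) {
            a[i] = (i == pos) ? value : sorted[j++];
        }
        return a;
    }
}
```

Running this on a sorted 0..9 array with the value 4 placed at its correct position, at the front, and at the back shows that inserting at the element's own sorted position costs zero moves, which supports the first answer above.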