Garwick's Algorithm is an algorithm for dealing with stack overflows. I know what the original algorithm is and how it works. However, there is a modified Garwick's algorithm, of which I have only a very vague description: "even stacks growing in the left direction, and odd stacks in the right direction".
The illustration of the modified algorithm from my lecture notes is as follows, and it is also very vague.
Can anyone help give more details about this modified algorithm, or provide some reference? Thank you!
If you need to put 2 stacks in an array, then you can start one stack at the start of the array, growing upward as you push elements, and one at the end, growing downward.
This way you don't need to worry about redistributing free space when one of them fills up, because they both use the same free space, and you can freely push onto either stack until the whole array is full.
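As a minimal sketch (my own illustration; names and sizes are arbitrary), two stacks sharing one array might look like this:

#include <stdbool.h>

#define SIZE 100

int data[SIZE];
int top0 = 0;           /* stack 0 grows upward from index 0        */
int top1 = SIZE - 1;    /* stack 1 grows downward from index SIZE-1 */

/* Push onto stack 0 or stack 1; fails only when the whole array is full. */
bool push(int stack, int value) {
    if (top0 > top1)                /* the two tops have met: no free space */
        return false;
    if (stack == 0)
        data[top0++] = value;
    else
        data[top1--] = value;
    return true;
}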
The modified Garwick algorithm you refer to extends this idea to more than 2 stacks. With the original Garwick algorithm, the array is divided into N segments, and each segment has one stack, with all stacks growing in the same direction. In the modified version, the array is divided into N/2 segments, and each segment has 2 stacks, one growing upward from the start of the segment, and one growing downward from the end.
In the modified algorithm, when one segment fills up, free space is redistributed among segments (pairs of stacks) in the same way that the original algorithm redistributes space among single stacks.
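Extending the sketch above to segments, pushes in the modified scheme could look roughly like this. This follows the quoted "even stacks grow left, odd stacks grow right" description, but the exact layout is my own guess, and the Garwick-style redistribution is only indicated by a comment:

#include <stdbool.h>

#define SEGS 4                     /* N/2 segments, i.e. N = 8 stacks here */
#define SEG_SIZE 25

int data[SEGS * SEG_SIZE];
int top_lo[SEGS];                  /* next free slot of the rightward-growing stack */
int top_hi[SEGS];                  /* next free slot of the leftward-growing stack  */

void init(void) {
    for (int s = 0; s < SEGS; s++) {
        top_lo[s] = s * SEG_SIZE;              /* segment start */
        top_hi[s] = (s + 1) * SEG_SIZE - 1;    /* segment end   */
    }
}

/* Stacks 2s and 2s+1 share segment s: the odd stack grows rightward from
 * the segment's start, the even stack leftward from its end. */
bool push(int stack, int value) {
    int s = stack / 2;
    if (top_lo[s] > top_hi[s]) {
        /* The pair's tops collided. Here the Garwick-style redistribution
         * would move free space from other segments into this one; this
         * sketch simply fails instead. */
        return false;
    }
    if (stack % 2 == 1)
        data[top_lo[s]++] = value; /* odd stack grows to the right */
    else
        data[top_hi[s]--] = value; /* even stack grows to the left */
    return true;
}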
I'm sorry if this is a duplicate of some thread, but I'm really not sure how to describe the question.
I'm wondering what the minimal data structure is to prevent a 2D-grid traveler from repeating itself (i.e. traveling to some point it has already visited). The traveler can only move horizontally or vertically, 1 step each time. For my special case (below), the 2D grid is actually a lower-left triangle where one coordinate never exceeds the other.
For example, in the 1D case, this can be done simply by recording the direction of the last move: if the direction changes, the traveler is repeating itself.
In the 2D case it becomes complicated. The most trivial way would be to keep a list of the points traveled before, but I'm wondering whether there are more efficient ways to do that.
I'm implementing a more-or-less "4-finger" algorithm for 4-sum, where the 2 fingers in the middle move in two directions (the fingers are i, j, k, and l):
i=> <=j=> <=k=> <=l
1 2 3 ... 71 72 ... 123 124 ... 201 202 203
The directions the fingers travel are decided (or suggested) by some algorithm, but might lead to an infinite loop. Therefore, I have to refuse a suggestion if the 2 fingers in the middle start to repeat a historical position.
EDIT
In the days since, I have found 2 solutions. Neither is an ideal solution to this problem, but both are at least somewhat usable:
As @Sorin mentions below, one solution is to keep a bit array representing the state of all cells. For the triangular grid in this example, we can even condense the array to cut the memory cost in half (though computing the bit position then takes k^2 time, where k is the degree of freedom, i.e. 2 here; a standard array needs only linear time).
Another solution is to avoid backward travel altogether: set up the algorithm such that j and k only move in one direction (this is probably greedy).
But still, since the 2D-grid traveler has the nice property that it moves along an axis 1 step at a time, I'm wondering whether there is a more "specialized" representation for this kind of movement.
Thanks for your help!
If you are looking for optimal lookup complexity, then a hash set is the best thing. You need O(N) memory, but all lookups and insertions are O(1).
If you tend to visit most of the cells, then you can even skip the hash part and store a plain bit array: one bit for every cell, and you just check whether the corresponding bit is 0 or 1. This is much more compact in memory (at least 32x, one bit vs. one int, and likely more, since you also skip the 64-bit pointers internal to the data structure).
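As an illustration (my own sketch, not from the answer), a bit-array visited set for an n-by-n grid could look like this:

#include <stdbool.h>
#include <stdlib.h>

/* One bit per cell of an n-by-n grid. For the lower-left triangle (j <= i),
 * indexing with cell = i*(i+1)/2 + j would roughly halve the memory. */
typedef struct {
    unsigned char *bits;
    int n;
} visited_t;

visited_t visited_create(int n) {
    visited_t v = { calloc(((size_t)n * n + 7) / 8, 1), n };
    return v;
}

/* Marks (i, j) visited; returns whether it had been visited already. */
bool visited_test_and_set(visited_t *v, int i, int j) {
    size_t cell = (size_t)i * v->n + j;
    unsigned char mask = (unsigned char)(1u << (cell % 8));
    bool was_set = (v->bits[cell / 8] & mask) != 0;
    v->bits[cell / 8] |= mask;
    return was_set;
}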
If this still takes too much space, you could use a Bloom filter, but that will give you some false positives (it may tell you that you've visited a cell when in fact you didn't). If that's something you can live with, the space savings are fairly huge.
Other structures like BSP or k-d trees could work as well. Once a whole region is either entirely free or entirely occupied (ignoring the unused cells in the upper triangle), you can store all that information in a single node.
This is hard to recommend because of its complexity, and because it will likely also use O(N) memory in many cases, only with a larger constant. Also, all checks become O(log N).
Why do we prefer to sort the smaller partition of a file and push the larger one onto the stack after partitioning, in a non-recursive implementation of quicksort? Doing this reduces the space complexity of quicksort to O(log n) for random files. Could someone elaborate?
As you know, at each recursive step you partition an array. Push the larger part onto the stack and continue working on the smaller part.
Because the one you carry on working with is the smaller one, it is at most half the size of the one you were working with before. So for each range we push on the stack, we halve the size of the range we're working with.
That means we can't push more than log n ranges onto the stack before the range we're working with hits size 1 (and therefore is sorted). This bounds the amount of stack we need to complete the first descent.
When we start processing the "big parts", each "big part" B(k) is bigger than the "small part" S(k) produced at the same time, so we might need more stack to handle B(k) than we needed for S(k). But B(k) is still smaller than the previous "small part" S(k-1), and once we're processing B(k), we've taken it back off the stack, which is therefore one item smaller than when we processed S(k), and the same size as when we processed S(k-1). So we still have our bound.
Suppose we did it the other way around: push the small part and continue working with the large part. Then in the pathologically nasty case, we'd push a size-1 range onto the stack each time and continue working with a range only 2 smaller than before. Hence we'd need n/2 slots in our stack.
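To make the bound concrete, here is a sketch of an iterative quicksort that always defers the larger part; the partition scheme and all names are my own choices, not anyone's reference implementation:

/* Lomuto partition: returns the pivot's final index p, with
 * a[lo..p-1] <= a[p] <= a[p+1..hi]. */
static int partition(int a[], int lo, int hi) {
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    return i;
}

/* Iterative quicksort: defer the LARGER side to the explicit stack and
 * keep working on the smaller one, so the stack stays at about lg n. */
void quicksort(int a[], int lo, int hi) {
    struct { int lo, hi; } stack[64];    /* 64 levels: plenty, since depth <= lg n */
    int sp = 0;
    for (;;) {
        while (lo < hi) {
            int p = partition(a, lo, hi);
            if (p - lo < hi - p) {       /* left part is the smaller one   */
                stack[sp].lo = p + 1;    /* defer the larger right part    */
                stack[sp].hi = hi;
                sp++;
                hi = p - 1;              /* continue with the smaller part */
            } else {
                stack[sp].lo = lo;       /* defer the larger left part     */
                stack[sp].hi = p - 1;
                sp++;
                lo = p + 1;
            }
        }
        if (sp == 0) break;              /* nothing deferred: we are done  */
        sp--;
        lo = stack[sp].lo;               /* pop a deferred larger part     */
        hi = stack[sp].hi;
    }
}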
Consider the worst case, where your partition comes out 1 : n. If you sort the small subfile first, then you only need O(1) space: you push the large subfile, pop it back, and then push the (new) large subfile again. But if you sort the large subfile first, then you need O(n) space, because you keep pushing 1-element subfiles onto the stack.
Here is a quote from Algorithms by Robert Sedgewick (he wrote the original paper on this):

"For Quicksort, the combination of end-recursion removal and a policy of processing the smaller of the two subfiles first turns out to ensure that the stack need only contain room for about lg N entries, since each entry on the stack after the top one must represent a subfile less than half the size of the previous entry."
OK, am I right that you mean that if we make quicksort non-recursive, we have to use a stack onto which we put the partitions?
If so: an algorithm must allocate memory for each variable it uses. So if you run two instances of it in parallel, they allocate double the memory of one instance...
Now, in a recursive version, you start a new instance of the algorithm (which needs to allocate memory), BUT the instance that calls the recursive one DOES NOT end, so its allocated memory is still needed! So if we have started, say, 10 recursive instances, we need 10*X memory, where X is the memory needed by one instance.
Now we use the non-recursive algorithm. You only have to allocate the needed memory ONCE; the helper variables use the space of just one instance. To do the algorithm's job, we must remember which partitions we have already made (or rather, which ones we haven't processed yet), so we put them on a stack and take partitions off until we have done the last "recursion" step. So imagine you give the algorithm an array: the recursive algorithm would need to allocate the whole array plus some helper variables for each instance (again: if the recursion depth is 10, we need 10*X memory, and the array accounts for most of it).
The non-recursive one needs to allocate the array and helper variables only once, BUT it needs a stack. Still, in the end you won't put so many parts on the stack that the non-recursive version needs more memory than the recursive one, because we don't need to allocate the array again for each instance.
I hope I have described it so that you can understand it, but my English isn't so good. :)
The scenario is as follows:
I want to reverse the direction of a singly linked list; in other words, after the reversal all pointers should point backwards.
The algorithm should take linear time.
The solution I have thought of uses another data structure, a stack, with the help of which the singly linked list can easily be reversed, with all pointers pointing backwards. But I am in doubt whether that implementation yields linear time complexity. Please comment on this, and if there is a more efficient algorithm, please discuss it.
Thanks.
You could do it like this: As long as there are nodes in the input list, remove its first node and insert it at the beginning of the output list:
node* reverse(node *in) {
    node *out = NULL;
    while (in) {
        node *head = in;    /* detach the first node of the input list */
        in = in->next;
        head->next = out;   /* prepend it to the output list */
        out = head;
    }
    return out;
}
Two times O(n) is O(2n), which is still O(n). So first pushing n elements onto a stack and then popping n elements off it is indeed linear in time, as you expected.
See also the section Multiplication by a Constant in the Wikipedia entry on Big O notation.
If you put all of the nodes of your linked list in a stack, it will run in linear time, as you simply traverse the nodes on the stack backwards.
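For comparison, here is a sketch of that stack-based version, assuming the same node type as in the code above; note that only node pointers are pushed, so no node contents are copied:

#include <stdlib.h>

/* Two O(n) passes: push every node, then pop them in reverse order while
 * relinking. Still linear time, but O(n) extra space for the pointer stack. */
node* reverse_with_stack(node *head) {
    size_t n = 0;
    for (node *p = head; p; p = p->next)
        n++;                                   /* count the nodes */
    node **stack = malloc(n * sizeof *stack);
    size_t sp = 0;
    for (node *p = head; p; p = p->next)
        stack[sp++] = p;                       /* pass 1: push every node */
    node *out = NULL, **tail = &out;
    while (sp > 0) {                           /* pass 2: pop and append  */
        *tail = stack[--sp];
        tail = &(*tail)->next;
    }
    *tail = NULL;                              /* terminate the reversed list */
    free(stack);
    return out;
}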
However, I don't think you need a stack. All you need to remember is the node you were just at, so you can reverse the pointer of the current node; just make note of the next node before you overwrite the pointer at the current one.
The previous answers have already (and rightly) noted that both the pointer-manipulation solution and the stack solution are O(n).
The remaining question is to compare the real run time (machine-cycle) performance of the two different implementations of the reverse() function.
I expect that the following two aspects might be relevant:
1. The stack implementation. Does it require the maximum stack depth to be specified explicitly? If so, how is that specified? If not, how does the stack manage memory as its size grows arbitrarily large?
2. I guess that nodes have to be copied from the list to the stack. [Is there a way without copying?] In that case, the cost of copying a node needs to be accounted for, because the size of a node can be (arbitrarily) large.
Given these, in-place reversal by manipulating pointers seems more attractive to me.
For a list of size n, you call push n times and pop n times, both of which are O(1) operations, so the whole operation is O(n).
You can use a stack to achieve an O(n) implementation. But the recursive solution IS using a stack (THE call stack)! And, like all recursive algorithms, it is equivalent to looping. However, in this case, using recursion or an explicit stack creates O(n) space complexity, which is completely unnecessary.
Imagine I have a stack-based toy language that comes with the operations Push, Pop, Jump and If.
I have a program whose input is code in this toy language. For instance, I might get the sequence:
Push 1
Push 1
Pop
Pop
In that case the maximum stack depth would be 2. A more complicated example would use branches:
Push 1
Push true
If .success
Pop
Jump .continue
.success:
Push 1
Push 1
Pop
Pop
Pop
.continue:
In this case the maximum stack depth would be 3. However, it is not possible to get the maximum depth by simply walking the instructions from top to bottom: in this example, that walk would actually run into a stack underflow.
CFGs to the rescue: you can build a control-flow graph and walk every possible path through its basic blocks. However, since the number of paths can grow quickly, with n vertices you can get (n-1)! possible paths.
My current approach is to simplify the graph as much as possible so that there are fewer paths. This works, but I consider it ugly. Is there a better (read: faster) way to attack this problem? I am fine if the algorithm produces a stack depth that is not optimal: if the correct stack size is m, then my only constraint on the result n is that n >= m. Is there perhaps a greedy algorithm that would produce a good result here?
Update: I am aware of cycles and of the invariant that all control-flow merges have the same stack depth. I wrote down a simple toy-like language above to illustrate the issue. Basically, I have a deterministic stack-based language (JVM bytecode), so each operation has a known stack delta.
Please note that I do have a working solution to this problem that produces good results (a simplified CFG), but I am looking for a better/faster approach.
Given that your language doesn't seem to have any user input, every program will simply compute the same way every time. Therefore, you could execute the program and keep track of the maximum stack size during execution. Probably not what you want, though.
As for your path argument: be aware that jumping allows cycles; hence, without further analysis, a cycle might imply non-termination and stack overflow (i.e. the stack size increases with each execution of the cycle). [n nodes still means infinitely many paths if there is a cycle.]
Instead of actually executing the code, you might be able to do some form of abstract interpretation.
Regarding the comment from IVlad: simply counting the pushes is wrong because of possible cycles.
I am not sure what the semantics of your if-statements are, though, so this could be useful too: assume that an if-statement's label can only be a forward label (i.e., you can never jump back in your code). In that case your path-counting argument comes back to life; in effect, the resulting CFG will be a tree (or a DAG if you don't copy code). You could then do an approximate count by a bottom-up computation of the number of pushes, taking the maximum of the two branches at each if-statement, as in the sketch below. It's still not the optimal result, but it yields a better approximation than a simple count of push statements.
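A sketch of that bottom-up approximation over a forward-only CFG; the block arrays here are hypothetical placeholders:

#define MAXB 1024

int pushes[MAXB];     /* number of Push instructions in block b      */
int succ[MAXB][2];    /* up to two successor blocks, -1 where absent */
int memo[MAXB];       /* cached results; initialize all entries to -1 */

/* Upper bound on stack growth from block b onward: the block's own
 * pushes plus the worse of its branches. Memoization keeps this linear. */
int bound(int b) {
    if (b < 0) return 0;
    if (memo[b] >= 0) return memo[b];
    int left = bound(succ[b][0]);
    int right = bound(succ[b][1]);
    return memo[b] = pushes[b] + (left > right ? left : right);
}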
You generally want to have the stack depth invariant over jumps and loops.
That means that for every node, every incoming edge should have the same stack depth. This simplifies walking the CFG significantly, because backedges can no longer change the stack depth of already calculated instructions.
This is also a requirement for bounded stack depth. If it is not enforced, you will have loops in your code that keep growing the stack.
Another thing you should consider is making the stack effect of all opcodes deterministic. An example of a nondeterministic opcode would be: POP IF TopOfStack == 0.
Edit:
If you do have a deterministic set of opcodes and the stack-depth invariant, there is no need to visit every possible path of the program. It is enough to do a DFS/BFS through the CFG to determine the maximum stack depth. This can be done in time linear in the number of instructions, but not faster.
Evaluating whether the basic blocks pointed to by your current block's outgoing edges still need to be visited should not be performance-relevant. Even in the worst case, where every instruction is an If, there are only 2N edges to evaluate.
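As an illustration of that walk, here is my own sketch; each instruction is modeled only by its stack delta and up to two successors, and all names are hypothetical:

#define MAXI 1024

int delta[MAXI];      /* stack effect of instruction i (Push = +1, Pop = -1, ...) */
int succ[MAXI][2];    /* fall-through and/or branch target, -1 where absent       */
int n_instr;

/* Worklist DFS over the instruction graph. depth_at[i] is the stack depth
 * on entry to instruction i (-1 = not yet seen). With the invariant that
 * every incoming edge carries the same depth, each instruction is visited
 * once, so the whole walk is linear in the number of instructions. */
int max_stack_depth(void) {
    int depth_at[MAXI], work[MAXI], top = 0, max = 0;
    for (int i = 0; i < n_instr; i++)
        depth_at[i] = -1;
    depth_at[0] = 0;
    work[top++] = 0;
    while (top > 0) {
        int i = work[--top];
        int d = depth_at[i] + delta[i];        /* depth after executing i */
        if (d > max) max = d;
        for (int k = 0; k < 2; k++) {
            int s = succ[i][k];
            if (s >= 0 && depth_at[s] < 0) {   /* first visit: record depth */
                depth_at[s] = d;
                work[top++] = s;
            }
            /* otherwise the invariant guarantees depth_at[s] == d */
        }
    }
    return max;
}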