In a linked list implementation of a queue, why does the rear element (the one inserted last) point to null? And in a linked list implementation of a stack, why does the very first element (the one inserted first) point to null?
The reason I am asking is that I am implementing a stack and a queue in Java using linked lists, and the implementation changes depending on whether null is at the front of the queue, or at the next of the top element in the stack.
Both structures are linked lists. Each node has a next pointer to the next node. When there is no next node, then we need to decide which value to put there. It is convenient to store a null-pointer value there, as that value can never be confused with a real pointer.
This null-pointer is often important to identify a node as being the last one.
For instance, the pop method in a stack implementation will need to handle what happens when the last (bottom) element is removed. A pop would be implemented along these lines:
if (top == nullptr) return nullptr; // Stack is empty
Node *node = top;
top = top->next;
delete node; // assuming the node was allocated with new
So, as an empty stack is identified by top == nullptr, it is important that the bottom element of the stack has its next pointer set to nullptr. Only then will the empty condition be correctly detected after removing the bottom element.
In case the data structure maintains a reference to the last node (like *rear in the case of the queue), we could in theory leave the last node's next pointer undefined (i.e. set to any value). We could then identify a node as the last one by comparing its address with the rear pointer, without ever having to look at that (undefined, and thus unsafe) next pointer. But it is better practice to store an explicit null pointer there: it leads to more elegant code, and to better error reporting in case a bug makes the code mistakenly follow this last node's next pointer.
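To make the role of that null concrete, here is a minimal Java sketch (the class and method names are illustrative, not from your code): the first node ever pushed keeps next == null, and pop relies on top == null to detect emptiness.

```java
// Minimal linked-list stack; LinkedStack/Node are illustrative names.
class LinkedStack {
    private static class Node {
        final int value;
        final Node next;          // null only in the bottom node
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    private Node top = null;      // top == null means the stack is empty

    void push(int value) {
        // The very first push stores next == null: that node is the bottom.
        top = new Node(value, top);
    }

    Integer pop() {
        if (top == null) return null;   // stack is empty
        int value = top.value;
        top = top.next;                 // null again once the bottom is popped
        return value;
    }

    public static void main(String[] args) {
        LinkedStack s = new LinkedStack();
        s.push(1);
        s.push(2);
        System.out.println(s.pop());    // 2
        System.out.println(s.pop());    // 1
        System.out.println(s.pop());    // null -- empty again
    }
}
```

Because the bottom node carries next == null, popping the last element makes top null again, so the same emptiness test keeps working.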
Do existing OpenMesh iterators change when I add elements?
Example code:
auto vh1 = mesh.vertex_handle(0);
auto vh2 = mesh.vertex_handle(1);
auto vh3 = mesh.vertex_handle(2);
for (auto fh : mesh.faces()) {
    mesh.add_face(vh1, vh2, vh3);
}
I did not find anything about this in the documentation.
The example seems to work, but I want to know whether it is undefined behavior, or whether OpenMesh promises that the iterator stays valid while elements are added during the loop.
OpenMesh does not change the iterators when you add elements, but I don't think OpenMesh gives you a promise on that.
OpenMesh iterators are basically just ints. (They hold a SmartHandle and some information about which elements should be skipped. A SmartHandle holds a Handle and a reference to the mesh. A Handle is just a strongly typed integer.)
Incrementing the iterator will just increment the integer (until an element is reached that should not be skipped). Since you always access elements via the mesh and a handle, relocation of the actual memory that stores the elements is not a problem.
Note that depending on how you code your loop the new element may or may not be iterated over.
for (auto it = mesh_.vertices_begin(); it != mesh_.vertices_end(); ++it)
{
    mesh_.add_vertex(point);
}
The loop above will include newly added vertices, as mesh_.vertices_end() is reevaluated for each comparison and thus grows along with the mesh. Since a vertex is added on every iteration, this is an infinite loop.
auto end = mesh_.vertices_end();
for (auto it = mesh_.vertices_begin(); it != end; ++it)
{
    mesh_.add_vertex(point);
}
In this case, the newly added elements will not be visited by the loop. That is because end is evaluated only once, at the beginning, and basically just holds the number of vertices the mesh had at that point.
for (auto vh : mesh_.vertices())
{
    mesh_.add_vertex(point);
}
This behaves like the second version, since here, too, vertices_end() is only evaluated once at the beginning.
Deletion
Since it was brought up in the other answer I want to quickly talk about deletion.
Deleting an element will only mark it as deleted. Thus, deleting elements while iterating over the elements is fine.
When you delete elements which have not been visited yet, they may or may not be iterated over later. If you use skipping iterators the deleted elements will be skipped, otherwise they won't be skipped.
For OpenMesh 7.0 or newer, for (auto fh : mesh_.faces()) {...} will not include deleted elements.
In contrast, for (auto fh : mesh_.all_faces()) {...} will include deleted elements.
Garbage Collection
You should probably not call garbage collection inside your loop. If you have deleted elements, garbage collection will cause two problems. First, it reduces the size of the container storing the elements. Thus, versions of the loop that evaluate the end iterator once will likely run too far and crash.
If you use the other version of the loop or manage to create more new elements than you remove, you still have the problem that garbage collection will move elements from the back into the spots of the elements that were marked as deleted. Thus, you will miss those elements if they are moved to spots that you already passed.
You can search for typedef std::vector< in the OpenMesh sources to find the underlying container. But
add_face won't invalidate these iterators, because the new vertex or face handle is simply pushed to the back of that vector. Meanwhile, to keep lookups fast, OpenMesh builds at least three layers of iterators, and the vector discussed here is only the bottom layer. As for the middle and top layers, I only use them through member functions, so I'm not sure whether they can be invalidated; you can find them in PolyConnectivity.hh and TriConnectivity.hh.
I've been asked this question somewhere.
I've been given 2 stacks. I have to implement the following operations:
// Pass one of the stacks and a value to insert
push(Stack stack, value)
pop(Stack stack, val)
merge(Stack s1, Stack s2)
I have to perform the above stack operations, like push and pop, in O(1). So far I've used a linked list to implement these operations successfully.
But how can I merge the two stacks in O(1)? I couldn't figure out how to do it in O(1).
Maybe I need to use some other data structure or something?
It's really easy if your stack objects keep track of both ends of the stack (top/bottom, start/end, head/tail, whatever). I'll use top/bottom for this answer.
When you implement push/pop, you operate on the top object. The bottom remains the same (unless the stack is empty), and the node that represents it has its next pointer set to null.
So to merge two stacks you take the bottom of one, point it to the top of the other and return a "new" stack formed of the other pointers.
Stack merge(Stack s1, Stack s2) {
    // join the stacks
    s2.bottom.next = s1.top;
    // make a nice object to give back
    Stack result;
    result.bottom = s1.bottom;
    result.top = s2.top;
    // clean up the parameters so they don't mess up the new structure
    s1.bottom = s1.top = s2.bottom = s2.top = null;
    return result;
}
If you don't keep both pointers in the stack object, you would need to traverse one of the stacks to find what is kept here as bottom, making the complexity O(N).
I would like to give another perspective: the programming/object-oriented one. Suppose you do not have a pointer to the end of the stack as suggested above, and suppose merging means first returning the elements of one stack and then those of the other, i.e. defining an order between them (a really important consideration you did not address). You could then follow this approach:
Create a StackList object which extends Stack. Java example:
class StackList extends Stack
Now hold a linked list of Stacks in it. Merging is trivial: just add the Stacks to the list; pop/push simply call the pop/push methods of the head Stack.
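A minimal sketch of that idea in Java (all names here are illustrative, and I use ArrayDeque for the inner stacks rather than java.util.Stack): merging is O(1) because it only links another stack onto the head of the list, and push/pop always work on the head stack.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedList;
import java.util.NoSuchElementException;

// Illustrative sketch: a "stack of stacks" with an O(1) merge step.
class StackList {
    private final LinkedList<Deque<Integer>> stacks = new LinkedList<>();

    // O(1) merge step: the given stack becomes the new head,
    // so its elements are popped before the older ones.
    void addStack(Deque<Integer> stack) {
        stacks.addFirst(stack);
    }

    void push(int value) {
        if (stacks.isEmpty()) stacks.addFirst(new ArrayDeque<>());
        stacks.getFirst().addFirst(value);
    }

    int pop() {
        while (!stacks.isEmpty() && stacks.getFirst().isEmpty()) {
            stacks.removeFirst();       // drop exhausted head stacks
        }
        if (stacks.isEmpty()) throw new NoSuchElementException("empty");
        return stacks.getFirst().removeFirst();
    }

    public static void main(String[] args) {
        Deque<Integer> s1 = new ArrayDeque<>();
        s1.addFirst(1); s1.addFirst(2);       // s1: top = 2
        Deque<Integer> s2 = new ArrayDeque<>();
        s2.addFirst(3); s2.addFirst(4);       // s2: top = 4
        StackList merged = new StackList();
        merged.addStack(s1);                  // O(1)
        merged.addStack(s2);                  // O(1): s2 drains first
        System.out.println(merged.pop());     // 4
        System.out.println(merged.pop());     // 3
        System.out.println(merged.pop());     // 2
        System.out.println(merged.pop());     // 1
    }
}
```

Note the trade-off: pop is no longer strictly O(1) in the worst case, since it may skip over several exhausted inner stacks, though each exhausted stack is skipped only once.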
How to check whether a linked list is circular or not, without using extra memory, if the head is given
We can use two pointers here:
a slow pointer, which starts at the head;
a fast pointer, which also starts at the head.
Now the slow pointer traverses the list one node at a time (slow = slow.next),
while the fast pointer jumps two nodes at a time (fast = fast.next.next).
So if they both meet at any position, there is a loop in the linked list; if the fast pointer reaches null, there is no loop.
condition -> if (fast == slow) then there is a loop. (Compare the node references, not their values: two different nodes may hold equal values.)
You can refer to this link for more explanation -> https://www.geeksforgeeks.org/detect-loop-in-a-linked-list/
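As a sketch, the steps above translate into Java roughly like this (Node and CycleCheck are hypothetical names; note that the meeting test compares node references, not values):

```java
// Floyd's "tortoise and hare" cycle detection -- no extra memory
// beyond two pointers. Node is an illustrative minimal node type.
class CycleCheck {
    static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    static boolean isCircular(Node head) {
        Node slow = head;           // moves one node per step
        Node fast = head;           // moves two nodes per step
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
            if (slow == fast) {     // same node object: there is a loop
                return true;
            }
        }
        return false;               // fast reached null: no loop
    }

    public static void main(String[] args) {
        Node a = new Node(1), b = new Node(2), c = new Node(3);
        a.next = b;
        b.next = c;
        System.out.println(isCircular(a));  // false
        c.next = a;                         // close the loop
        System.out.println(isCircular(a));  // true
    }
}
```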
I have a couple of questions about the stack. One thing I don't understand about the stack is the "pop" and "push" idea. Say I have integers a and b, with a above b on the stack. To access b, as I understand it, a would have to be popped off the stack. So where is a stored when it is popped off the stack?
Also, if stack memory is more efficient to access than heap memory, why isn't heap memory structured like the stack? Thanks.
So where is "a" stored when it is popped off the stack.
It depends. It goes where the program that's reading the stack decides. It may store the value, ignore it, print it, anything.
Also if stack memory is more efficient to access than heap memory, why isn't heap memory structured like the stack?
A stack isn't more efficient to access than a heap is, it depends on the usage. The program's flow gets deeper and shallower just like a stack does. Local variables, arguments and return addresses are, in mainstream languages, stored in a stack structure because this kind of structure implements more easily the semantics of what we call a function's stack frame. A function can very efficiently access its own stack frame, but not necessarily its caller functions' stack frames, that is, the whole stack.
On the other hand, the heap would be inefficient if it were implemented that way, because it's expected for the heap to be able to access and possibly delete items anywhere, not just from its top/bottom.
I'm not an expert, but you can sort of think of this like the Tower of Hanoi puzzle. To access a lower disc, you "pop" discs above it and place them elsewhere - in this case, on other stacks, but in the case of programming it could be just a simple variable or pointer or anything. When you've got the item you need, then the other ones can be put back on the stack or moved elsewhere entirely.
Let's take your scenario.
You have a stack with n elements on it; the last one is a, and b is underneath.
The pop operation returns the popped value, so if you want to access the second element from the top, b, you could do:
var temp = stack.pop()
var b = stack.pop()
stack.push(temp)
However, a stack would rarely be used this way. It is a LIFO structure and works best when accessed as one.
It seems you rather need a collection with random, index-based access.
That collection would probably be stored on the heap. Hope this clarified stack pop/push a little.
a is stored wherever you decide to store it. :-) You need to provide a variable in which to store the value at the top of the stack (a) when you remove it, then remove the next item (b) and store it in a different variable to use it, and then push the first value (a) back on the stack.
Picture an actual pile of dirty plates sitting on your counter to your left. You pick one up to wash it (pop it from the "dirty" stack), wash it, dry it, and put it on the top of the clean stack (push it) on your right.
If you want to reach the second plate from the top in either stack, you have to move the top one to get to it. So you pick it up (pop it), put it somewhere temporarily, pick up the next plate (pop it) and put it somewhere, and then put the first one you removed back on the pile (push it back on the stack).
If you can't picture it with plates, use an actual deck of playing cards (or baseball cards, or a stack of paper - anything you can neatly pile ("stack")) and put it on your desk at your left hand. Then perform the steps in my last paragraph, replacing the word "plate" with "card" and physically performing the steps.
So to access b, you declare a variable to store a in, pop a and save it in that variable, pop b into its own variable, and then push a back onto the stack.
Assuming the tree is balanced, how much stack space will the routine use for a tree of 1,000,000 elements?
void printTree(const Node *node) {
    char buffer[1000];
    if (node) {
        printTree(node->left);
        getNodeAsString(node, buffer);
        puts(buffer);
        printTree(node->right);
    }
}
This was one of the algorithm questions in "The Pragmatic Programmer", where the answer was that 21 buffers are needed (lg(1M) ~= 20, plus 1 for the call at the very top).
But I am thinking that it requires more than 1 buffer per level below the top, due to the 2 recursive calls for the left and right nodes. Is there something I missed?
*Sorry, but this is really not homework. I don't see this on the book site's errata.
First the left node call is made, then that call returns (and so its stack is available for re-use), then there's a bit of work, then the right node call is made.
So it's true that there are two buffers at the next level down, but those two buffers are required consecutively, not concurrently. So you only need to count one buffer in the high-water-mark stack usage. What matters is how deep the function recurses, not how many times in total the function is called.
This assuming of course that the code is written in a language similar to C, and that the C implementation uses a stack for automatic variables (I've yet to see one that doesn't), blah blah.
The first call will recurse all the way to the leaf node, then return. Then the second call will start -- but by the time the second call takes place, all activation records from the first call will have been cleared off the stack. IOW, there will only be data from one of those on the stack at any given time.
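A quick way to convince yourself is a hypothetical simulation (the names here are mine, not from the book): count every call, but also track how many frames, and hence buffers, are live at once. For a balanced tree of ~1M nodes there are about 2 million calls in total, yet the peak number of simultaneously live buffers is only 21.

```java
// Illustrative simulation: walk(h) stands in for printTree() on a
// perfectly balanced tree; every call "allocates" one buffer on entry
// (even the calls that receive a null child and return immediately).
class StackDepth {
    long calls = 0;
    int depth = 0, maxDepth = 0;

    void walk(int height) {
        depth++;                              // buffer[1000] allocated on entry
        maxDepth = Math.max(maxDepth, depth);
        calls++;
        if (height > 0) {
            walk(height - 1);                 // left child's buffer is freed...
            walk(height - 1);                 // ...before the right child's exists
        }
        depth--;                              // buffer freed on return
    }

    public static void main(String[] args) {
        StackDepth s = new StackDepth();
        s.walk(20);                           // tree with 2^20 - 1 ≈ 1M nodes
        System.out.println(s.calls + " calls, peak depth " + s.maxDepth);
        // prints: 2097151 calls, peak depth 21
    }
}
```

The peak of 21 matches the book's answer: 20 levels of real nodes, plus the one extra frame for the call that hits a null child at the bottom.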