What is the technical error if we declare a Deque as a general one that allows input and output from both sides?

We know that a deque has two sub-categories: input-restricted and output-restricted.
Now, what is the technical error in designing a deque in such a way that it has no restrictions, i.e. the user can insert and remove data at both the front and the back at any time, without any restriction?

Under "Distinctions and sub-types", the Wikipedia article says, "This general data class has some possible sub-types:" and then goes on to list the input-restricted and output-restricted variants. Note that they are possible sub-types. Nothing in the article, or in any other literature I've seen, says that you can't have an unrestricted deque, and in fact many runtime libraries provide one.
So there is a deque (unrestricted double-ended queue), and there are input-restricted and output-restricted deques.
It's perhaps a bit of a stretch, but one could make the argument that both FIFO queues and LIFO stacks are also deque sub-types. The FIFO queue is input-restricted at one end and output-restricted at the other. A LIFO stack is input- and output-restricted at the same end.
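To make this concrete, here is a minimal C++ sketch using std::deque (one of those unrestricted library deques): insertion and removal work at both ends, and the restricted sub-types are just what you get by forbidding some of these operations.
#include <deque>
#include <iostream>

int main() {
    std::deque<int> d;               // unrestricted: all four end operations are allowed
    d.push_back(1);                  // insert at the back
    d.push_front(2);                 // insert at the front
    d.push_back(3);                  // deque is now 2 1 3
    std::cout << d.front() << '\n';  // 2 -- remove from the front...
    d.pop_front();
    std::cout << d.back() << '\n';   // 3 -- ...or from the back
    d.pop_back();
    // A FIFO queue is this deque restricted to push_back/pop_front;
    // a LIFO stack is this deque restricted to push_back/pop_back.
    return 0;
}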

Why do I need stack and queue?

Well, this question has been asked before and I read about the implementations of stack and queue. But I feel like those things can also be implemented with an array or a list. For example, LIFO (last in, first out) can easily be implemented in Python by using a list.
Then why do we need stack and queue?
Stack and queue are data structures. Each of them has certain properties: a stack is LIFO (last in, first out) whereas a queue is FIFO (first in, first out).
As for implementation, it is entirely up to you how you implement them. For example, if you are using C++, you can use an array, a vector, or even a linked list to implement either one. The same goes for Python: you can adapt a list to the behaviour you expect (a stack or a queue). Put more simply, stacks are basically arrays or lists with the LIFO property, and queues are arrays or lists with the FIFO property.
Now, why do you need a stack or a queue? Suppose you need a data structure with the LIFO (or FIFO) property. You can tweak a list to your needs. But if your program needs multiple stacks (or queues), what do you do then? You can implement a stack (which uses a list underneath), which gives you a generic template that you can reuse multiple times.
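As a rough sketch of that idea (the class name here is made up for the example), a reusable stack "template" in C++ can simply wrap a std::vector and expose only the LIFO operations, so you can create as many independent stacks as you need:
#include <vector>

// Hypothetical generic stack that reuses std::vector underneath.
template <typename T>
class Stack {
public:
    void push(const T& value) { data_.push_back(value); }   // add to the top
    void pop() { data_.pop_back(); }                         // remove the top
    const T& top() const { return data_.back(); }            // peek at the top
    bool empty() const { return data_.empty(); }
private:
    std::vector<T> data_;   // the underlying array/list
};

int main() {
    Stack<int> a, b;   // multiple independent stacks from one template
    a.push(1);
    b.push(2);
    a.push(3);
    // a.top() == 3, b.top() == 2
    return 0;
}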

Can someone explain how std::greater is used to implement priority_queue

std::priority_queue<int, vector<int>, std::greater<int> > pq;
I cannot understand the role of std::greater in the priority queue.
I am replacing a min-heap with the priority queue.
This code is taken from the
GeeksforGeeks implementation of Prim's algorithm using the STL.
The std::priority_queue type is what’s called a container adapter. It works by starting with a type you can use to represent a sequence, then uses that type to build the priority queue (specifically, as a binary heap). By default, it uses a vector.
In order to do this, the priority queue type has to know how to compare elements against one another in a way that determines which elements are “smaller” than other elements. By default, it uses the less-than operator.
If you make a standard std::priority_queue<int>, you get back a priority queue that
uses a std::vector for storage, and
uses the less-than operator to compare elements.
In many cases, this is what you want. If you insert elements into a priority queue created this way, you’ll read them back out from greatest to least.
In some cases, though, this isn’t the behavior you want. In Prim’s algorithm and Dijkstra’s algorithm, for example, you want the values to come back in ascending order rather than descending order. To do this, you need to, in effect, reverse the order of comparisons by using the greater-than operator instead of the less-than operator.
To do this, you need to tell the priority queue to use a different comparison method. Unfortunately, the priority queue type is designed so that if you want to do that, you also need to specify which underlying container you want to use. I think this is a mistake in the design - it would be really nice to just be able to specify the comparator rather than the comparator and the container - but c’est la vie. The syntax for this is
std::priority_queue<int,               // store integers...
                    std::vector<int>,  // ...in a vector...
                    std::greater<int>> pq;  // ...comparing using >
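For example, a quick sketch (not part of the original answer) showing the effect of the comparator: with std::greater, elements come back smallest first, which is exactly the min-heap order Prim's and Dijkstra's algorithms want.
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

int main() {
    std::priority_queue<int, std::vector<int>, std::greater<int>> pq;
    pq.push(3);
    pq.push(1);
    pq.push(2);
    while (!pq.empty()) {
        std::cout << pq.top() << ' ';   // prints 1 2 3: smallest first (min-heap behaviour)
        pq.pop();
    }
    std::cout << '\n';
    return 0;
}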

Why list ++ requires to scan all elements of list on its left?

The Haskell tutorial says to be cautious that when we use "Hello" ++ " World", constructing the new list has to visit every single element (here, every character of "Hello"), so if the list on the left of ++ is long, using ++ will hurt performance.
Perhaps I am not understanding this correctly, but do Haskell's developers never tune the performance of list operations? Why does this operation remain slow? Is it to keep some kind of syntactic consistency with lambda functions or currying?
Any hints? Thanks.
In some languages, a "list" is a general-purpose sequence type intended to offer good performance for concatenation, splitting, etc. In Haskell, and most traditional functional languages, a list is a very specific data structure, namely a singly-linked list. If you want a general-purpose sequence type, you should use Data.Sequence from the containers package (which is already installed on your system and offers very good big-O asymptotics for a wide variety of operations), or perhaps some other one more heavily optimized for common usage patterns.
If you have an immutable list that has a head and a reference to its tail, you cannot change its tail. If you want to add something to the 'end' of the list, you have to reach the end and then put all the items, one by one, onto the head of your right-hand list. It is a fundamental property of immutable lists: concatenation is expensive.
Haskell lists are like singly-linked lists: they are either empty or they consist of a head and a (possibly empty) tail. Hence, when appending something to a list, you'll first have to walk the entire list to get to the end. So you end up traversing the entire list (the list to which you append, that is), which needs O(n) runtime.
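To see why, here is a rough C++ sketch (not the Haskell implementation itself, just an illustration of the same cons-cell structure): appending two immutable singly-linked lists forces you to walk and copy the left operand, while the right operand is shared unchanged.
struct Node {
    char head;
    const Node* tail;   // nullptr plays the role of the empty list []
};

// Equivalent of Haskell's (++): copy every node of 'left',
// then point the last copy at 'right', which is reused as-is.
// Cost: O(length of left), independent of the length of 'right'.
const Node* append(const Node* left, const Node* right) {
    if (left == nullptr) return right;                        // []     ++ ys = ys
    return new Node{left->head, append(left->tail, right)};   // (x:xs) ++ ys = x : (xs ++ ys)
}

int main() {
    // "Hi" as a cons list: 'H' : 'i' : []
    const Node* hi   = new Node{'H', new Node{'i', nullptr}};
    const Node* bang = new Node{'!', nullptr};
    const Node* r = append(hi, bang);   // walks and copies "Hi", shares "!"
    return (r->head == 'H' && r->tail->tail->head == '!') ? 0 : 1;   // (memory not freed in this sketch)
}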

Purpose of Xor Linked List?

I stumbled on a Wikipedia article about the Xor linked list, and it's an interesting bit of trivia, but the article seems to imply that it occasionally gets used in real code. I'm curious what it's good for, since I don't see why it makes sense even in severely memory/cache constrained systems:
The main advantage of a regular doubly linked list is the ability to insert or delete elements from the middle of the list given only a pointer to the node to be deleted/inserted after. This can't be done with an xor-linked list.
If one wants O(1) prepending or O(1) insertion/removal while iterating then singly linked lists are good enough and avoid a lot of code complexity and caveats with respect to garbage collection, debugging tools, etc.
If one doesn't need O(1) prepending/insertion/removal then using an array is probably more efficient in both space and time. Even if one only needs efficient insertion/removal while iterating, arrays can be pretty good since the insertion/removal can be done while iterating.
Given the above, what's the point? Are there any weird corner cases where an xor linked list is actually worthwhile?
Apart from saving memory, it allows for O(1) reversal while still supporting all the other destructive update operations efficiently, like
concatenating two lists destructively in O(1)
insertAfter/insertBefore in O(1), when you only have a reference to the node and its successor/predecessor (which differs slightly from standard doubly linked lists)
remove in O(1), also with a reference to either the successor or predecessor.
I don't think the memory aspect is really important, since for most scenarios where you might use a XOR list, you can use a singly-linked list instead.
It is about saving memory. I had a situation where my data structure was 40 bytes. The memory manager aligned allocations on a 16-byte boundary, so each allocation was 48 bytes, even though I only needed 40. By using an XOR-chained list I was able to eliminate 8 bytes and drop my data structure size down to 32 bytes. Now I can fit two nodes in a 64-byte cache line at the same time. So I was able to reduce memory usage and improve performance.
Its purpose is (or more precisely was) just to save memory.
With an XOR linked list you can do anything you can do with an ordinary doubly linked list. The only difference is that you have to decode the previous and next memory addresses from the XOR-ed pointer of each node every time you need them.
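As a rough sketch of that decoding step (assuming, as the technique requires, that pointers round-trip through uintptr_t on the platform and that a missing neighbour is represented by 0):
#include <cstdint>

struct XorNode {
    int value;
    std::uintptr_t link;   // prev XOR next, stored in place of two separate pointers
};

// Recover the next node while walking forward; you must already know 'prev'.
XorNode* next(XorNode* prev, XorNode* cur) {
    return reinterpret_cast<XorNode*>(
        cur->link ^ reinterpret_cast<std::uintptr_t>(prev));
}

int main() {
    XorNode a{1, 0}, b{2, 0}, c{3, 0};
    // Each node stores prev XOR next; the end nodes use 0 for the missing neighbour.
    a.link = reinterpret_cast<std::uintptr_t>(&b);
    b.link = reinterpret_cast<std::uintptr_t>(&a) ^ reinterpret_cast<std::uintptr_t>(&c);
    c.link = reinterpret_cast<std::uintptr_t>(&b);
    XorNode* second = next(nullptr, &a);   // decodes to &b
    XorNode* third  = next(&a, second);    // decodes to &c; walking backward is symmetric
    return (second == &b && third == &c) ? 0 : 1;
}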

Is Stack with PushAt/PopAt still a Stack?

Let's say I have a data structure similar to a stack, but in addition to the usual Push/Pop it also has functions such as PushAt/PopAt, both of which take an integer as input and add/return the item at that particular location in the data structure.
Now, a stack is supposed to be LIFO. Does this data structure qualify as a "stack"?
In HP RPN calculators and in Postscript/PDF, other operators than push and pop exist:
swap or exch, for exchanging the top of the stack with the next element,
roll, as an extension of swap.
Their main data structure is still considered a stack.
pushAt and popAt can be written using only push/pop and roll, so your data structure can still be called a stack.
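For illustration, here is a sketch of popAt built from push and pop alone, with an auxiliary stack standing in for roll (the function name and the 'depth' convention are made up for this example):
#include <stack>

// Remove and return the element 'depth' positions below the top,
// using only push and pop (the auxiliary stack plays the role of roll).
int popAt(std::stack<int>& s, int depth) {
    std::stack<int> aux;
    for (int i = 0; i < depth; ++i) {   // peel off the elements above the target
        aux.push(s.top());
        s.pop();
    }
    int value = s.top();                // the requested element
    s.pop();
    while (!aux.empty()) {              // restore the peeled-off elements
        s.push(aux.top());
        aux.pop();
    }
    return value;
}

int main() {
    std::stack<int> s;
    s.push(1); s.push(2); s.push(3);    // top is 3
    int v = popAt(s, 1);                // removes 2, the element one below the top
    return (v == 2 && s.top() == 3) ? 0 : 1;
}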
Technically not. LIFO means (as you know) last-in, first-out. If the last element going in isn't the first to come out, it doesn't satisfy the "semantic interface" (or contract) of a stack.
However, since it seems you are only adding methods and not modifying the existing ones, your data structure could be used interchangeably with a stack as long as it is being used like one, i.e. pushAt() and popAt() are never called (for instance because they are "hidden" by passing it as a Stack to a function).
So in that sense your version is a stack in the same way that a list is a stack in Java, for example. It can be used as one, but it can also do other things.
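A quick sketch of that "hidden by passing it as a Stack" idea (the class and function names here are invented for the example): code that receives the object through the plain stack interface simply cannot call pushAt or popAt.
#include <cstddef>
#include <vector>

// The plain stack contract: only the LIFO operations.
class Stack {
public:
    virtual ~Stack() = default;
    virtual void push(int v) = 0;
    virtual int pop() = 0;
};

// The extended structure: still a Stack, but with random access added.
class RandomAccessStack : public Stack {
public:
    void push(int v) override { data_.push_back(v); }
    int pop() override { int v = data_.back(); data_.pop_back(); return v; }
    void pushAt(std::size_t i, int v) { data_.insert(data_.begin() + i, v); }
    int popAt(std::size_t i) { int v = data_[i]; data_.erase(data_.begin() + i); return v; }
private:
    std::vector<int> data_;
};

// This caller only sees the Stack contract; pushAt/popAt are invisible here,
// so the object behaves exactly like a stack.
int useAsStack(Stack& s) {
    s.push(10);
    s.push(20);
    return s.pop();   // 20: last in, first out
}

int main() {
    RandomAccessStack ras;
    return useAsStack(ras) == 20 ? 0 : 1;
}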
It's not a stack, because it's not LIFO: you have complete control over where items are read and written. It's just a normal list, IMHO.
Implementing this would be easy with a linked list, because the pointers can simply be reassigned to unlink or insert a node. With an array it would be harder: popping an item from the middle leaves a gap, and shifting the remaining elements to fill it costs O(n) per operation. With a linked list the relinking itself is O(1), although walking to the given position is still O(n).
It might look like an ADT, but it just sounds like an array.
IMO it's a stack as long as it supports Push and Pop. It doesn't matter if it also supports other actions.
If it's not a LIFO object, it can't qualify as a stack. I would say it's simply a list with push/pop functionality, which again is nothing but AddAtEnd() and RemoveFromEnd().
Your data structure can be used as a stack, or as an array/list.
A stack is just a specialized form of a list, and your implementation appears to negate that specialness, so I would lean towards calling it a list instead of a stack.
Actually, as long as you can still push and pop elements, you can see it as a stack. A more flexible stack.

Resources