Append element in linkedlist without duplication - data-structures

I'm in the first stage of learning linked lists in data structures. I need an algorithm which explains how to append an element without duplication.

As you are learning linked lists -
To search for an element in a linked list, a linear search is required - looping over the list to find the element.
Now, to insert a non-duplicate element, the list needs to be searched once, to check whether the element is already in it.
If the element is not in the list, we should append it.
So, you should try this approach:
insert(x, linkedlist) {
    1. Check if x is in linkedlist
       <code to search x in linked list>
    2. If not found, then append x
       <code to append x in linked list>
}
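A minimal Java sketch of this search-then-append approach, assuming a hand-rolled list of ints; the SinglyLinkedList and Node names are illustrative, not a standard library class.

class SinglyLinkedList {
    private static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    private Node head;

    // Append x only if it is not already in the list; returns false on a duplicate.
    boolean appendIfAbsent(int x) {
        // 1. Linear search for x.
        for (Node cur = head; cur != null; cur = cur.next) {
            if (cur.value == x) return false;   // duplicate found, nothing to do
        }
        // 2. Not found, so append at the end.
        Node node = new Node(x);
        if (head == null) {
            head = node;
        } else {
            Node cur = head;
            while (cur.next != null) cur = cur.next;
            cur.next = node;
        }
        return true;
    }
}

Both the search and the append walk the list, so the whole operation is O(n); a tail pointer would make the append part O(1), but the duplicate check stays linear.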
There is another, more efficient approach using a HashSet. As a beginner, I would not recommend it for now. Please comment if you want that approach as well.

Related

Tree that efficiently avoids duplicate values in ancestors

The question is pretty simple:
I have a (potentially very unbalanced) tree.
At every iteration, new children are appended to some node.
However, children with values duplicated in their ancestors must be filtered out.
Is there a (hopefully simple) way to maintain this data structure efficiently?
The obvious ways require O(depth(node)) time per append, which I'm trying to avoid.
Use AVL trees or binary search trees (BSTs). You have to apply a small rule to avoid duplicates in the AVL/BST: use only the > and < operators when building the tree, never >= or <=.
// Pseudocode:
if (new_node_value < present_node_value)
    insert_in_left_side
else if (new_node_value > present_node_value)
    insert_in_right_side
else // means duplicate entry
    print "Duplicate Entry"
    return
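A minimal Java sketch of that rule for a plain (unbalanced) BST; for an AVL tree you would add rebalancing after the insert. The class and method names are illustrative.

class Bst {
    private static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    private Node root;

    // Insert x; returns false (and changes nothing) if x is already present.
    boolean insert(int x) {
        if (root == null) { root = new Node(x); return true; }
        Node cur = root;
        while (true) {
            if (x < cur.value) {                  // strictly smaller: go left
                if (cur.left == null) { cur.left = new Node(x); return true; }
                cur = cur.left;
            } else if (x > cur.value) {           // strictly greater: go right
                if (cur.right == null) { cur.right = new Node(x); return true; }
                cur = cur.right;
            } else {
                return false;                     // equal: duplicate entry
            }
        }
    }
}

Note that this enforces "no duplicates anywhere in the tree", which is stronger than the question's "no duplicates among a node's ancestors".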

LinkedList does not provide index based access, so why does it have get(index) method?

I understand that ArrayList is an index-based data structure that allows you to access its elements by index, but LinkedList is not supposed to be index based, so why does it have a get(index) method that allows direct access to an element?
It may not be efficient to retrieve items from a linked list by index, but linked lists do have indices, and sometimes you just need to retrieve an item at a certain index. When that happens, it's much better to have a get method than to force users to grab an iterator and iterate to the desired position. As long as you don't call it too often, or the list is small, it's fine.
This is really just an implementation decision. While an array would probably be a fairly useless data structure if you couldn't look up elements by index, adding a by-index lookup to a linked-list implementation doesn't do any harm (well, unless users assume it's fast - see below), and it does come in handy sometimes.
One can assign every element a number as follows:
0 1 2 3 4
Head (Element0) -> Element1 -> Element2 -> Element3 -> Element4 -> NULL
From here, it's trivial to write a function to return the element at some given index.
Note that a by-index lookup on a linked list will be slow - if you're looking for, say, the element in the middle, you'll need to walk through half the list to get there.
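To make that concrete, here is a minimal Java sketch of what such a by-index lookup boils down to (the Node type is illustrative); java.util.LinkedList does essentially the same walk internally, starting from whichever end of the list is closer to the index.

class IndexedLookup {
    static class Node<T> {
        T value;
        Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    static <T> T get(Node<T> head, int index) {
        Node<T> cur = head;
        for (int i = 0; i < index; i++) {              // one hop per index step: O(index)
            if (cur == null) throw new IndexOutOfBoundsException();
            cur = cur.next;
        }
        if (cur == null) throw new IndexOutOfBoundsException();
        return cur.value;
    }
}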
The previous answers imply that LinkedLists have indices.
However, a fixed index for every element in the data structure would defeat the purpose of a LinkedList and, for example, make some remove/add operations slower, because the structure would need to be reindexed every time. Reindexing would take linear time, even for elements at the beginning and at the end of the list, which are exactly the positions that make Java's LinkedList efficient.
From Java's LinkedList implementation you can see that there is no constant-time access by index; instead, a linear traversal locates the element on the go.

Traversal to print two BSTs ordered using recursion. Use of extra memory like arrays is not allowed

I was asked this question in an interview. Given are two BSTs (Binary Search Trees). We need to traverse the two trees in such a way that a merged sorted output is the result. The constraint is that we cannot use extra memory like arrays. I suggested a combined inorder traversal of both trees. The approach was correct, but I got stuck in the recursion and was not able to write the code.
Note: We can't merge the two trees into one.
Please someone guide me in this direction.
Thanks in advance.
I am assuming that there are no links to parent or next nodes in the tree, because otherwise this would be quite easy: you just iterate both trees by following these links and write your merge algorithm as you would for linked lists.
If you don't have next or parent links, you cannot write a simple recursion. You'll need two "recursion" stacks.
You can implement the following structure, which allows you to iterate each of the trees separately.
class Iterator
{
    stack<Node*> st;

public:
    // Start at the root and descend to the leftmost (smallest) node.
    Iterator(Node* root) {
        st.push(root);
        while (st.top()->left != nullptr) st.push(st.top()->left);
    }

    int item() {
        return st.top()->item();
    }

    void advance() {
        if (st.top()->right != nullptr) {
            st.push(st.top()->right);
            // Go as far left as possible
            while (st.top()->left != nullptr) st.push(st.top()->left);
        } else {
            int x = st.top()->item();
            // We pop until we see a node with a higher value
            while (!st.empty() && st.top()->item() <= x) st.pop();
        }
    }
};
Then write your merge algorithm using two of these iterators.
You will need O(log n) space, but asymptotically this isn't more than any recursive iteration.
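For example, the merge step might look like the following Java sketch, assuming each tree iterator exposes the same item()/advance() operations as the class above plus a hasNext() test (which that sketch omits); all names here are illustrative.

class MergeTraversal {
    interface TreeIterator {
        boolean hasNext();
        int item();
        void advance();
    }

    static void mergePrint(TreeIterator a, TreeIterator b) {
        while (a.hasNext() && b.hasNext()) {
            if (a.item() <= b.item()) {        // emit the smaller of the two current values
                System.out.println(a.item());
                a.advance();
            } else {
                System.out.println(b.item());
                b.advance();
            }
        }
        for (TreeIterator rest : new TreeIterator[]{a, b}) {
            while (rest.hasNext()) {           // drain whichever iterator still has values
                System.out.println(rest.item());
                rest.advance();
            }
        }
    }
}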
The "simplest" way would be to:
Convert tree A to a doubly linked list (sorted)
Convert tree B to a doubly linked list (sorted)
Traverse the sorted lists printing minimum (easy)
Convert list A to tree A
Convert list B to tree B
You can find algorithms for these steps online.
I don't think doing a parallel traversal of the trees is possible. You would need additional information, e.g. a visited flag, to eliminate an already-visited left subtree, and even then you would run into other problems.
If anyone knows how this would be possible with a parallel traversal, I would be happy to hear it.
print $ merge (inorder treeA) (inorder treeB)
what's the problem?
(Notice that the above is actual Haskell code, which really runs and performs the task.) inorder is trivial to implement with recursion. merge is a nearly-standard feature, merging its two ordered (non-decreasing) argument lists into an ordered output list, keeping the duplicates.
Because of lazy evaluation and garbage collection, the lists are not actually created - at most one produced element is retained for each tree, and is discarded when the next one is produced, in effect creating iterators for the traversals (each with its own internal state).
Here's the solution if your language does not support the above, or an equivalent yield mechanism, or Scheme's explicit continuations (which allow switching between two contexts deep inside the control stack, making it possible to run "two recursions" in parallel, as above):
They don't say anything about time complexity, so we can do a recursive traversal of the 1st tree and, for each node of the 1st tree, traverse the 2nd tree anew, while saving the previous value from the 1st tree. That gives us two consecutive values from the 1st tree, and we print all values from the 2nd tree that fall between them, with a fresh recursive traversal restarting from the top of the 2nd tree for each new pair of values from the 1st tree.
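A hedged Java sketch of that idea (the Node type and all names are illustrative): an inorder walk of the first tree that, between consecutive values, re-traverses the second tree from its root and prints the values that fall in between.

class RestartingMerge {
    static class Node { int val; Node left, right; }

    // Print, in sorted order, every value v of BST t with lo < v <= hi.
    static void printBetween(Node t, long lo, long hi) {
        if (t == null) return;
        if (t.val > lo) printBetween(t.left, lo, hi);
        if (t.val > lo && t.val <= hi) System.out.println(t.val);
        if (t.val <= hi) printBetween(t.right, lo, hi);
    }

    // Inorder walk of tree a; 'prev' is the last value of a that has been printed.
    static long walk(Node a, Node b, long prev) {
        if (a == null) return prev;
        prev = walk(a.left, b, prev);
        printBetween(b, prev, a.val);            // fresh traversal of b for this pair
        System.out.println(a.val);
        return walk(a.right, b, a.val);
    }

    static void mergePrint(Node a, Node b) {
        long last = walk(a, b, Long.MIN_VALUE);
        printBetween(b, last, Long.MAX_VALUE);   // whatever of b is larger than all of a
    }
}

This stays within the recursion stacks, at the cost of re-walking the second tree once per node of the first.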

Immutability of Node-based data structures

Is there any general approach if one wanted to provide an immutable version of e.g. LinkedList, implemented as a linked sequence of nodes? I understand that in the case of ArrayList you would copy the underlying array, but in this case it is not that obvious to me...
Immutable lists are basically represented the same way as regular linked lists, except that all operations that would normally modify the list return a new one instead. This new list does not necessarily need to contain a copy of the entire previous list but can reuse parts of it.
I recommend implementing the following operations in the following ways:
Popping the element at the front: simply return a pointer to the next node. Complexity: O(1).
Pushing an element to the front: create a new node that points to the first node of the old list and return it. O(1).
Concatenating list a with list b: copy the entire list a and let the pointer in the final node point to the beginning of list b. Note that this is faster than the same operation on mutable lists. O(length(a)).
Inserting at position x: Copy everything up to x, add a node with the new element to the back of the copy, and let that node point to the old list at position x + 1. O(x).
Removing the element at position x: practically the same as inserting. O(x).
Sorting: you can just use plain quick- or mergesort. It's not much faster or slower than it would be on mutable lists. The only difference is that you can't sort in place but will have to sort to a copy. O(n*log n).
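A minimal Java sketch of such a structure, covering the first two operations above; the class and method names are illustrative.

final class ImmutableList<T> {
    final T head;                 // value at the front (null in the empty list)
    final ImmutableList<T> tail;  // rest of the list, shared and never modified

    private ImmutableList(T head, ImmutableList<T> tail) {
        this.head = head;
        this.tail = tail;
    }

    static <T> ImmutableList<T> empty() {
        return new ImmutableList<>(null, null);
    }

    // Push to the front: O(1), the whole old list is reused as the tail.
    ImmutableList<T> push(T value) {
        return new ImmutableList<>(value, this);
    }

    // Pop the front: O(1), simply hand back the (shared) tail.
    ImmutableList<T> pop() {
        return tail;
    }
}

For example, after list2 = list1.push(x), both list1 and list2 remain valid and share all of list1's nodes; this structural sharing is what makes the O(1) and O(x) bounds above possible.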

Hashtable with doubly linked lists?

Introduction to Algorithms (CLRS) states that a hash table using doubly linked lists is able to delete items more quickly than one with singly linked lists. Can anybody tell me what the advantage is of using doubly linked lists instead of singly linked lists for deletion in a hashtable implementation?
The confusion here is due to the notation in CLRS. To be consistent with the original question, I use the CLRS notation in this answer.
We use the hash table to store key-value pairs. The value portion is not mentioned in the CLRS pseudocode, while the key portion is defined as k.
In my copy of CLR (I am working off of the first edition here), the routines listed for hashes with chaining are insert, search, and delete (with more verbose names in the book). The insert and delete routines take argument x, which is the linked list element associated with key key[x]. The search routine takes argument k, which is the key portion of a key-value pair. I believe the confusion is that you have interpreted the delete routine as taking a key, rather than a linked list element.
Since x is a linked list element, having it alone is sufficient to do an O(1) deletion from the linked list in the h(key[x]) slot of the hash table, if it is a doubly-linked list. If, however, it is a singly-linked list, having x is not sufficient. In that case, you need to start at the head of the linked list in slot h(key[x]) of the table and traverse the list until you finally hit x to get its predecessor. Only when you have the predecessor of x can the deletion be done, which is why the book states the singly-linked case leads to the same running times for search and delete.
Additional Discussion
Although CLRS says that you can do the deletion in O(1) time, assuming a doubly-linked list, it also requires you have x when calling delete. The point is this: they defined the search routine to return an element x. That search is not constant time for an arbitrary key k. Once you get x from the search routine, you avoid incurring the cost of another search in the call to delete when using doubly-linked lists.
The pseudocode routines are lower level than you would use if presenting a hash table interface to a user. For instance, a delete routine that takes a key k as an argument is missing. If that delete is exposed to the user, you would probably just stick to singly-linked lists and have a special version of search to find the x associated with k and its predecessor element all at once.
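A hedged Java sketch of that kind of user-facing interface (this is not CLRS's pseudocode, and all names are illustrative): delete-by-key on a chained hash table with singly linked buckets, where a single pass finds both the node and its predecessor.

class ChainedHashTable<K, V> {
    private static class Node<K, V> {
        final K key;
        V value;
        Node<K, V> next;
        Node(K key, V value, Node<K, V> next) { this.key = key; this.value = value; this.next = next; }
    }

    private final Node<K, V>[] table;

    @SuppressWarnings("unchecked")
    ChainedHashTable(int capacity) { table = (Node<K, V>[]) new Node[capacity]; }

    private int slot(K key) { return (key.hashCode() & 0x7fffffff) % table.length; }

    void put(K key, V value) {
        int s = slot(key);
        table[s] = new Node<>(key, value, table[s]);   // insert at the bucket head, O(1)
    }

    boolean deleteByKey(K key) {
        int s = slot(key);
        Node<K, V> prev = null, cur = table[s];
        while (cur != null && !cur.key.equals(key)) {  // one pass finds cur and its predecessor
            prev = cur;
            cur = cur.next;
        }
        if (cur == null) return false;                 // key not present
        if (prev == null) table[s] = cur.next;         // cur was the bucket head
        else prev.next = cur.next;                     // unlink via the tracked predecessor
        return true;
    }
}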
Unfortunately my copy of CLRS is in another country right now, so I can't use it as a reference. However, here's what I think it is saying:
Basically, a doubly linked list supports O(1) deletions because if you know the address of the item, you can just do something like:
x.left.right = x.right;
x.right.left = x.left;
to delete the object from the linked list, whereas in a singly linked list, even if you have the address, you need to search through the list to find its predecessor in order to do:
pred.next = x.next
So, when you delete an item from the hash table, you look it up, which is O(1) due to the properties of hash tables, then delete it in O(1), since you now have the address.
If this was a singly linked list, you would need to find the predecessor of the object you wish to delete, which would take O(n).
However:
I am also slightly confused about this assertion in the case of chained hash tables, because of how lookup works. In a chained hash table, if there is a collision, you already need to walk through the linked list of values in order to find the item you want, and thus would need to also find its predecessor.
But, the way the statement is phrased gives clarification: "If the hash table supports deletion, then its linked lists should be doubly linked so that we can delete an item quickly. If the lists were only singly linked, then to delete element x, we would first have to find x in the list T[h(x.key)] so that we could update the next attribute of x’s predecessor."
This is saying that you already have element x, which means you can delete it in the above manner. If you were using a singly linked list, even if you had element x already, you would still have to find its predecessor in order to delete it.
I can think of one reason, but this isn't a very good one. Suppose we have a hash table of size 100. Now suppose values A and G are each added to the table. Maybe A hashes to slot 75. Now suppose G also hashes to 75, and our collision resolution policy is to jump forward by a constant step size of 80. So we try to jump to (75 + 80) % 100 = 55. Now, instead of starting at the front of the list and traversing forward 85, we could start at the current node and traverse backwards 20, which is faster. When we get to the node that G is at, we can mark it as a tombstone to delete it.
Still, I recommend using arrays when implementing hash tables.
A hashtable is often implemented as a vector of lists, where the index in the vector is the key (hash).
If you don't have more than one value per key and you are not interested in any logic regarding those values, a singly linked list is enough. A more complex/specific design for selecting one of the values may require a doubly linked list.
Let's design the data structures for a caching proxy. We need a map from URLs to content; let's use a hash table. We also need a way to find pages to evict; let's use a FIFO queue to track the order in which URLs were last accessed, so that we can implement LRU eviction. In C, the data structure could look something like
struct node {
struct node *queueprev, *queuenext;
struct node **hashbucketprev, *hashbucketnext;
const char *url;
const void *content;
size_t contentlength;
};
struct node *queuehead; /* circular doubly-linked list */
struct node **hashbucket;
One subtlety: to avoid a special case and wasting space in the hash buckets, x->hashbucketprev points to the pointer that points to x. If x is first in the bucket, it points into hashbucket; otherwise, it points into another node. We can remove x from its bucket with
if (x->hashbucketnext != NULL)
    x->hashbucketnext->hashbucketprev = x->hashbucketprev;
*(x->hashbucketprev) = x->hashbucketnext;
When evicting, we iterate over the least recently accessed nodes via the queuehead pointer. Without hashbucketprev, we would need to hash each node and find its predecessor with a linear search, since we did not reach it via hashbucketnext. (Whether that's really bad is debatable, given that the hash should be cheap and the chain should be short. I suspect that the comment you're asking about was basically a throwaway.)
If the items in your hashtable are stored in "intrusive" lists, they can be aware of the linked list they are a member of. Thus, if the intrusive list is also doubly-linked, items can be quickly removed from the table.
(Note, though, that the "intrusiveness" can be seen as a violation of abstraction principles...)
An example: in an object-oriented context, an intrusive list might require all items to be derived from a base class.
class BaseListItem {
  BaseListItem *prev, *next;
  ...
public: // list operations
  void insertAfter(BaseListItem *item);
  void insertBefore(BaseListItem *item);
  void removeFromList();
};
The performance advantage is that any item can be quickly removed from its doubly-linked list without locating or traversing the rest of the list.
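A minimal Java sketch of that O(1) removal (the class and field names are illustrative; in C++ the same logic would live in removeFromList() on the class above):

class IntrusiveItem {
    IntrusiveItem prev, next;   // links owned by the item itself ("intrusive")

    // Unlink this item from whatever list it is in, in O(1), without knowing the list head.
    void removeFromList() {
        if (prev != null) prev.next = next;
        if (next != null) next.prev = prev;
        prev = next = null;
    }

    // Insert 'item' immediately after this one, in O(1).
    void insertAfter(IntrusiveItem item) {
        item.prev = this;
        item.next = next;
        if (next != null) next.prev = item;
        next = item;
    }
}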
