Why is the complexity of get(index) in a doubly linked list O(n) and not O(1)? Why isn't it O(1) like in an array? Is it because we have to traverse through the previous nodes to get to one?
This is by definition. As you suspected, to get to the i-th element in the list, all previous items must be traversed.
As an exercise, implement a linked list for yourself.
Yes, having to "traverse through previous nodes to get one" is exactly it.
In a linked list, to find element # n, you would use something like:
def getNodeNum(n):
    # walk forward from the head, skipping n nodes
    node = head
    while n > 0 and node is not None:
        n = n - 1
        node = node.next
    return node
The reason an array is O(1) is because all the elements are laid out in contiguous memory. To get the address of element 42, you simply multiply 42 by the element size and then add the array base. This has the same cost for element number 3 as it does for element number 999.
You can't do that with a list because the elements are not necessarily contiguous in memory, hence you have to walk the list to find the one you desire. Thus the cost for finding element number 3 is actually far less than the cost for finding element number 999.
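To make the contrast concrete, here is a tiny Python sketch of the address arithmetic; the base address and element size are made-up numbers, purely for illustration:

ELEMENT_SIZE = 4          # hypothetical element size in bytes
BASE_ADDRESS = 0x1000     # hypothetical base address of the array

def element_address(index):
    # one multiplication and one addition, regardless of which index is requested
    return BASE_ADDRESS + index * ELEMENT_SIZE

print(hex(element_address(3)))    # 0x100c
print(hex(element_address(999)))  # 0x1f9c -- exactly the same amount of work

A linked list has no such formula, which is why it must walk node by node.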
As per this lecture from UC Berkeley's CS61B (lecture 16), if a priority queue is implemented as a sorted array, removeMin is constant time.
If you just remove the element at the zeroth index, then wouldn't you have to move all the remaining items over to the left? In that case wouldn't it be Θ(n)? Otherwise removeMin will not be constant time.
If you implement a priority queue as a sorted array, there are two different ways to ensure that removeMin is an O(1) operation.
If the array is sorted in ascending order, then the smallest element is at the front of the array. In this case you maintain an index that tells you where the beginning of the queue is. When you first build it, then of course the index is at the front of the array (a[0] or a[1], depending on your choice of language). When you want to remove the smallest element, you return the item at a[idx], and then increment idx.
Any time you insert an item, you move everything back up to the front of the array again and reset idx to 0 (or 1, as appropriate).
The other way is to maintain the array in descending order. The smallest element is the last element of the array. You already have to keep track of how many elements are in the array. So you have an index, call it ixEnd. When you want to remove the smallest element, you return a[ixEnd], and subtract 1 from ixEnd.
Either way, insertion is O(n) and removeMin is O(1).
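For illustration, here is a minimal Python sketch of the second variant (array kept in descending order); the class and method names are mine, not from the lecture:

class SortedArrayPQ:
    def __init__(self):
        self.items = []                 # maintained in descending order

    def insert(self, x):                # O(n): find the spot, shifting the tail right
        i = 0
        while i < len(self.items) and self.items[i] > x:
            i += 1
        self.items.insert(i, x)

    def remove_min(self):               # O(1): the smallest element is the last one
        return self.items.pop()

Popping from the end of the array needs no shifting, which is what keeps removeMin constant time.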
I need to implement a function that finds the kth minimum from a doubly linked list.
I searched on the internet and came to know about this:
Quickselect and the k-th order statistic algorithm would be effective for an array or vector, but here I am using a linked list, where I do not know the size of the list, so it is hard to divide it into groups of 5 elements.
My function's test case looks like this:
for(int i = 0; i < 1000; ++i)
{
    // create linked list with 1000 elements
    int kthMinimum = findKthMin(LinkedList, i);
    // validate kthMinimum answer.
}
Here the linked list can be in any order; we just have to assume it is randomized.
Any idea or suggestion to find the kth minimum from a doubly linked list in efficient time?
Thanks
Algorithm
You can maintain a heap of size k by doing the following:
Fill the array with the first k elements of the list.
Heapify the array (using a max-heap).
Process the remaining elements of the list:
If the top of the heap (the max) is greater than the current list element e, replace it with e (and restore the heap invariant).
If the element is greater than or equal to the top of the heap, just ignore it and carry on.
At the end of the algorithm, the k-th smallest element will be at the top of the heap.
Complexity
Accumulate the first k elements + heapify the array: O(k)
Process the remaining part of the list: O((n-k)·log(k)).
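A minimal Python sketch of the steps above, assuming the list is simply iterable and has at least k elements; heapq is a min-heap, so values are negated to simulate the max-heap (the function name is illustrative):

import heapq

def find_kth_min(values, k):
    it = iter(values)
    heap = [-next(it) for _ in range(k)]     # first k elements, negated for max-heap behaviour
    heapq.heapify(heap)                      # O(k)
    for v in it:                             # O((n - k) * log k)
        if v < -heap[0]:                     # smaller than the largest of the k smallest so far
            heapq.heapreplace(heap, -v)      # pop the max, push the new element
    return -heap[0]                          # the k-th smallest element

For example, find_kth_min([5, 1, 9, 3, 7], 2) returns 3.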
If the list is doubly-linked, you can run the QuickSort algorithm on it. In my experience QuickSort is the fastest sorting algorithm (measured by generating random lists and pitting it against HeapSort and MergeSort). After that, simply walk the list k positions to get your k-th smallest element.
QuickSort average time is O(n*log(n)), walking the list will be O(k), which in its worst case is O(n). So, total time is O(n*log(n)).
One of my friends had the following interview question, and neither of us are quite sure what the correct answer is. Does anyone have an idea about how to approach this?
Given an unbalanced binary tree, describe an algorithm to select a node at random such that each node has an equal probability of being selected.
You can do it with a single pass of the tree. The algorithm is the same as with a list.
When you see the first item in the tree, you set it as the selected item.
When you see the second item, you pick a random number in the range (0,2]. If it's 1, then the new item becomes the selected item. Otherwise you skip that item.
For each node you see, you increase the count, and with probability 1/count, you select it. So at the 101st node, you pick a random number in the range (0,101]. If it's 100, that node is the new selected node.
When you're done traversing the tree, return the selected node. The operation is O(n) in time, with n being the number of nodes in the tree, and O(1) in space. No preprocessing required.
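A minimal Python sketch of this idea, assuming nodes expose left and right children; the explicit stack used for traversal costs O(depth) extra, while the selection state itself stays O(1):

import random

def random_node(root):
    count = 0
    selected = None
    stack = [root]
    while stack:
        node = stack.pop()
        if node is None:
            continue
        count += 1
        if random.randrange(count) == 0:   # replace the selection with probability 1/count
            selected = node
        stack.append(node.left)
        stack.append(node.right)
    return selected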
We can do this recursively in one pass by selecting the random node while traversing the tree and counting the number of nodes in the left and right subtrees. At every step of the recursion, we return the number of nodes in the subtree rooted at the current node, along with a node selected uniformly at random from that subtree.
Let's say the number of nodes in the left subtree is n_l and the number of nodes in the right subtree is n_r. Also, let the randomly selected nodes from the left and right subtrees be R_l and R_r respectively. Then draw a uniform random number in [0,1] and select R_l with probability n_l/(n_l+n_r+1), the root with probability 1/(n_l+n_r+1), or R_r with probability n_r/(n_l+n_r+1).
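A small Python sketch of this recursive scheme, again assuming left/right child fields (names are illustrative):

import random

def pick(node):
    if node is None:
        return 0, None
    n_l, r_l = pick(node.left)
    n_r, r_r = pick(node.right)
    total = n_l + n_r + 1
    r = random.randrange(total)   # uniform over 0 .. total - 1
    if r < n_l:
        return total, r_l         # probability n_l / total
    if r == n_l:
        return total, node        # probability 1 / total
    return total, r_r             # probability n_r / total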
Note
If you're only doing a single query, and you don't already have a count at each node, the best time complexity you can get is O(n), so the depth-first-search approach would be the best one.
For repeated queries, the best option depends on the given constraints (the fastest per-query approach is using a supplementary array).
Supplementary array
O(n) space, O(n) preprocessing, O(1) insert / remove, O(1) query
Have a supplementary array containing all the nodes.
Also have each node store its own index, so you can remove it from the array in O(1): swap it with the last element in the array, update the stored index of the node that was at the last position accordingly, and decrease the size of the array (removing the last element).
To get a random node, simply generate a random index in the array.
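A minimal Python sketch of this bookkeeping; the pool_index attribute and class name are illustrative:

import random

class NodePool:
    def __init__(self):
        self.nodes = []

    def add(self, node):                    # O(1)
        node.pool_index = len(self.nodes)
        self.nodes.append(node)

    def remove(self, node):                 # O(1): swap with the last element, then shrink
        i = node.pool_index
        last = self.nodes[-1]
        self.nodes[i] = last
        last.pool_index = i
        self.nodes.pop()

    def random_node(self):                  # O(1)
        return self.nodes[random.randrange(len(self.nodes))]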
Per-node count
Modified tree (O(n) space), N/A (or O(n)) preprocessing, O(depth) insert / remove, O(depth) query
Let each node contain the number of elements in its subtree.
When generating a random node, go left or right based on the value of a random number generated and the counts of the left or right subtrees.
// note that subtreeCount = leftCount + rightCount + 1
val = getRandomNumber(subtreeCount)
if val == 0
    return this node
else if val <= leftCount
    go left
else
    go right
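A Python sketch of the full query that iterates this decision from the root downwards, assuming every node stores left, right and a subtree_count field (names are illustrative):

import random

def random_node_by_count(root):
    node = root
    while True:
        left_count = node.left.subtree_count if node.left else 0
        val = random.randrange(node.subtree_count)   # uniform over 0 .. subtree_count - 1
        if val == 0:
            return node                              # this node, probability 1/subtree_count
        elif val <= left_count:
            node = node.left                         # go left
        else:
            node = node.right                        # go right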
Depth-first-search
O(depth) space, O(1) preprocessing, O(1) insert / remove, O(n) query
Count the number of nodes in the tree (if you don't already have the count).
Generate a random number between 0 and the number of nodes.
Simply do a depth-first-search through the tree and stop when you've processed the desired number of nodes.
This presumes a node doesn't have a parent member; having one would make this O(1) space.
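A minimal Python sketch of this approach, assuming a non-empty tree whose nodes expose left and right children (two passes: one to count, one to stop at the chosen index):

import random

def random_node_dfs(root):
    def count(node):
        return 0 if node is None else 1 + count(node.left) + count(node.right)

    remaining = random.randrange(count(root))   # 0-based index of the node to return

    def dfs(node):
        nonlocal remaining
        if node is None:
            return None
        if remaining == 0:
            return node
        remaining -= 1
        return dfs(node.left) or dfs(node.right)

    return dfs(root)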
I implemented @jim-mischel's algorithm in C# and it works great:
private void SelectRandomNode(ref int count, Node curNode, ref Node selectedNode)
{
    foreach( var childNode in curNode.Children )
    {
        ++count;
        // replace the current selection with probability 1/count
        if( random.Next(count) == count - 1 )
            selectedNode = childNode;
        SelectRandomNode(ref count, childNode, ref selectedNode);
    }
}
Call it like this:
var count = 1;
Node selected = root;
SelectRandomNode(ref count, root, ref selected);
I'm trying to find the best algorithm for converting an "ordinary" linked list into an `ideal skip list`.
Here the definition of an ideal skip list is that the first level contains all the elements, the level above it half of them, the one after that a quarter of them, and so on.
I'm thinking about an O(n) run-time solution that involves flipping a coin for each node in the original linked list to decide whether or not that specific node should go up a level, and if so creating a duplicate of the current node in the level above. Eventually this algorithm would be O(n); is there any better algorithm?
Regards
I am assuming the linked list is sorted; otherwise it cannot be done with a comparison-based algorithm, since you would need to sort it first, which takes Ω(n log n).
Iterate on the "highest level" of the list, and add a "link up node" every second node.
Repeat until the highest level has only one node.
The idea is to generate a new list, half the size of the original, which is linked to the original in every 2nd link, and then recursively invoke on the smaller list, until you reach a list of size 1.
You will end up with lists of size 1,2,4,...,n/2 linked to each other.
pseudo code:
makeSkipList(list):
    if (list == null || list.next == null): //stop clause - a list of size 1
        return
    //root is the next level list, which will have n/2 elements.
    root <- new link node
    root.linkedNode <- list //linkedNode links "down" in the skip list.
    root.next <- null //next links "right" in the skip list.
    lastLinkNode <- root
    i <- 1
    //we create a link every second element
    for each node in list, excluding the first element:
        if (i++ % 2 == 0): //for every 2nd element, create a link node.
            lastLinkNode.next <- new link node
            lastLinkNode <- lastLinkNode.next
            lastLinkNode.linkedNode <- node //set the "down" field to the element in the list
            lastLinkNode.next <- null
    makeSkipList(root) //recursively invoke on the new list, which is of size n/2.
Complexity is O(n), since the running time can be described by the recurrence T(n) = n + T(n/2), which gives T(n) = n + n/2 + n/4 + ... ≤ 2n.
It is easy to see it cannot be done better than O(n): at the very least you will have to add a node in the second half of the original list, and just getting there already takes O(n).
I am given an array of real numbers, A. It has n+1 elements.
It is known that there are at least 2 elements of the array, x and y, such that:
abs(x-y) <= (max(A)-min(A))/n
I need to create an algorithm for finding 2 such items (if there are more, any pair will do) in O(n) time.
I've been trying for a few hours and I'm stuck, any clues/hints?
woo I got it! The trick is in the Pigeonhole Principle.
Okay.. think of the numbers as being points on a line. Then min(A) and max(A) define the start and end points of the line respectively. Now divide that line into n equal intervals of length (max(A)-min(A))/n. Since there are n+1 points, at least two of them must fall into the same interval.
Note that we don't need to rely on the question telling us that there are two points that satisfy the criterion. There are always two points that satisfy it.
The algorithm itself: You can use a simplified form of bucket sort here, since you only need one item per bucket (hit two and you're done). First loop once through the array to get min(A) and max(A) and create an integer array buckets[n] initialized to some default value, say -1. Then go for a second pass:
for (int i = 0; i < len; i++) {
    int bucket_num = find_bucket(array[i]);
    if (buckets[bucket_num] == -1) {
        buckets[bucket_num] = i;
    } else {
        // found pair at (i, buckets[bucket_num])
        break;
    }
}
Where find_bucket(x) returns the rounded-down integer result of (x - min(A)) / ((max(A)-min(A))/n), clamped to n-1 so that x == max(A) still lands in the last bucket.
Let's re-word the problem: we need to find two elements such that abs(x-y) <= c, where c is a constant we can compute in O(n) time. (Indeed, we can compute both max(A) and min(A) in linear time and just assign c = (max-min)/n.)
Let's imagine we have a set of buckets, so that elements 0 <= x < c are placed in the first bucket, elements c <= x < 2c in the second bucket, and so on. For each element, we can determine its bucket in O(1) time. Note that the number of occupied buckets will be no more than the number of elements in the array.
Let's iterate over the array and place each element into its bucket. If the bucket we're about to place it in already holds another element, then we've just found the proper pair of x and y!
If we've iterated over the whole array and every element has fallen into its own bucket, no worries! Now iterate over the buckets (there are no more than n buckets, as we said above) and, for each bucket element x, if the element y in the next bucket satisfies abs(x-y) <= c, then we've found the solution.
If we iterated all the buckets and found no proper elements, then there is no solution. OMG, I really missed that pigeonhole stuff (see the other answer).
Buckets may be implemented as a hash map, where each bucket holds one array index (placing an element in a bucket looks like this: buckets[a[i] / c] = i). We compute c in O(n) time, assign items to buckets in O(n)·O(1) time (O(1) being the hash-map access), and traverse the buckets in O(n) time. Therefore, the whole algorithm is linear.
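A minimal Python sketch that combines this answer's hash-map buckets with the pigeonhole observation from the other answer; computing bucket keys relative to min(A) and clamping the maximum into the last bucket are assumptions of this sketch:

def find_close_pair(a):
    n = len(a) - 1                            # the array has n + 1 elements
    lo, hi = min(a), max(a)
    if lo == hi:
        return 0, 1                           # all values equal: any pair works
    c = (hi - lo) / n
    buckets = {}                              # bucket key -> index of one element seen there
    for i, x in enumerate(a):
        key = min(int((x - lo) / c), n - 1)   # clamp so that x == max(A) falls into the last bucket
        if key in buckets:
            return buckets[key], i            # |a[buckets[key]] - a[i]| <= c
        buckets[key] = i
    # unreachable: by the pigeonhole principle, n + 1 elements cannot occupy n distinct buckets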