I need to make a case for the amortized running time of the successor function in a balanced BST.
I know that if the sequence starts from the smallest node and goes all the way to the "greatest" node, then the average running time is O(1), because T(n)/n = O(n)/n = O(1).
But I am not certain what happens when the sequence is not the longest one.
I would gladly receive help in that regard.
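For concreteness, here is a minimal Python sketch of the successor function I mean (assuming each node carries a parent pointer; the names are illustrative, not from any particular library):

class Node:
    def __init__(self, key, parent=None):
        self.key, self.parent = key, parent
        self.left = self.right = None

def successor(node):
    # Next node in in-order sequence, or None if node is the maximum.
    if node.right:                     # go down: leftmost node of the right subtree
        node = node.right
        while node.left:
            node = node.left
        return node
    while node.parent and node is node.parent.right:
        node = node.parent             # climb until we arrive from a left child
    return node.parent

Starting at the minimum and calling successor n-1 times walks each edge at most twice in total, which is where the O(n)/n = O(1) average above comes from.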
Related
http://www.geeksforgeeks.org/group-multiple-occurrence-of-array-elements-ordered-by-first-occurrence/
Please check this question.
How would the BST method for this problem work?
They have mentioned that the total time complexity will be O(N log N).
How does a tree give O(log N) time per element during the traversal?
Please help.
Search, delete, and insert running times all depend on the height of the tree, i.e. they are O(h) for a BST. A degenerate tree, which looks almost like a linked list, can produce a running time of O(N).
On the other hand, consider a self-balancing tree such as an AVL tree: the running time for lookup is O(log N) because, as in binary search, we halve the search space at each step, since the left and right subtrees of any node have almost identical heights.
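As a minimal sketch (assuming nodes with key/left/right attributes, not any particular library), lookup just walks a single root-to-leaf path, which is why its cost is O(h):

def search(node, key):
    # Follows one root-to-leaf path, so the cost is O(h), where h is
    # the height: O(N) for a degenerate tree, O(log N) for an AVL tree.
    while node is not None and node.key != key:
        node = node.left if key < node.key else node.right
    return node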
In what situation does searching for a term using a binary search tree require time linear in the size of the term vocabulary (say M)? How can we ensure a worst-case time complexity of O(log M)?
A complete binary tree is one in which every level, except possibly the last, is completely filled. The worst-case search performance is the height of the tree, which in this case would be O(log M), assuming M vocabulary terms in the tree.
One way to ensure this performance would be to use a self-balancing tree, e.g. a red-black tree.
Since binary search is a divide-and-conquer algorithm, we can ensure O(log M) if the tree is balanced, with a roughly equal number of terms under the subtrees of any node. O(log M) basically means that time goes up linearly while M goes up exponentially. If it takes 1 second to search a balanced binary tree of 10 nodes, it'd take 2 seconds to search an equally balanced tree with 100 nodes, 3 seconds for 1000 nodes, and so on.
But if the binary search tree is extremely unbalanced, to the point where it looks a lot like a linked list, we would have to go through every node, requiring time linear in M.
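As a concrete (hypothetical) illustration of that degenerate case, inserting terms in sorted order into a plain BST with no rebalancing produces exactly that linked-list shape:

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # Plain BST insert with no rebalancing.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

root = None
for term in ["ant", "bee", "cat", "dog", "elk"]:   # sorted input
    root = insert(root, term)
# Every left pointer is None: the "tree" is a right-leaning chain,
# so searching for "elk" visits all 5 nodes -- O(M), not O(log M).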
In my interview yesterday, I was asked the time complexity of constructing a binary tree from given inorder and preorder/postorder traversals.
I came up with the skewed-tree case, which needs O(N^2), and if we can somehow guarantee a balanced binary tree, then we can do it in O(N log N).
But, in order to give a unique reply, I came up with the idea that it can be done in O(N) time. The reasoning I gave was:
Put all the nodes of the inorder traversal into a hash table, one by one, in O(N).
Searching the hash table for a particular node can be done in amortized O(1).
The total time complexity can theoretically be reduced to O(N). (Actually, I haven't implemented it yet.)
So, was I correct in my reply? The results haven't been announced yet.
It can be done in O(N) time and O(N) space (even for a skewed binary tree), but instead of storing the elements of the inorder traversal, store their indices in the hash table. Then the following algorithm should work:
Function buildTree (in_index1, in_index2, pre_index1, pre_index2)
    if in_index1 > in_index2 then return null             // empty subtree
    root_index = hashEntry[pre_list[pre_index1]]           // root's position in the inorder list
    root = createNode(pre_list[pre_index1])
    left_size = root_index - in_index1
    root->leftChild = buildTree(in_index1, root_index - 1, pre_index1 + 1, pre_index1 + left_size)
    root->rightChild = buildTree(root_index + 1, in_index2, pre_index1 + left_size + 1, pre_index2)
    return root
Note: the hash table lookup gives the root's position in the inorder list in O(1), which is what avoids a linear scan at every level and brings the total down to O(N).
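For what it's worth, here is a runnable Python sketch of the same algorithm (assuming distinct values and consistent inorder/preorder lists; the names are mine):

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def build_tree(inorder, preorder):
    pos = {val: i for i, val in enumerate(inorder)}   # value -> inorder index, built in O(N)

    def build(in_lo, in_hi, pre_lo):
        if in_lo > in_hi:
            return None                               # empty subtree
        root = Node(preorder[pre_lo])
        root_index = pos[root.key]                    # amortized O(1) lookup
        left_size = root_index - in_lo
        root.left = build(in_lo, root_index - 1, pre_lo + 1)
        root.right = build(root_index + 1, in_hi, pre_lo + left_size + 1)
        return root

    return build(0, len(inorder) - 1, 0)

Each node is created once with O(1) extra work, so the whole construction is O(N).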
I am reading the basics of splay trees. The amortized cost of an operation is O(log n) over n operations. The rough idea is that when you access a node, you splay it, i.e. you rotate it up to the root so that it is quickly accessed next time; also, if the node was deep, splaying improves the balance of the tree.
I don't understand how the tree can perform amortized O(log n) for this sample input:
Say a tree of n nodes is already built, and my next n operations are n reads. I access a deep node, say at depth n. This takes O(n). True, after this access the tree will become more balanced. But say every time I access the currently deepest node; this will never cost less than O(log n). Then how can we ever compensate for the first costly O(n) operation and bring the amortized cost of each read down to O(log n)?
Thanks.
Assuming your analysis is correct and the operations are O(log(n)) per access and O(n) the first time...
If you always access the bottommost element (using some kind of worst-case oracle), a sequence of a accesses will take O(a*log(n) + n). And thus the amortized cost per operation is O((a*log(n) + n)/a)=O(log(n) + n/a) or just O(log(n)) as the number of accesses grows large.
This is the definition of asymptotic average-case performance/time/space, also called "amortized performance/time/space". You are accidentally thinking that a single O(n) step means all steps are at least O(n). One such step is only a constant amount of work in the long run; the O(...) is hiding what's really going on, which is taking the limit of [total amount of work]/[queries] = [average ("amortized") work per query].
"This will never be less than O(log n)."
It has to be, in order to get O(log n) average performance. For intuition, the following website may be good: http://users.informatik.uni-halle.de/~jopsi/dinf504/chap4.shtml, specifically the image http://users.informatik.uni-halle.de/~jopsi/dinf504/splay_example.gif -- it seems that while performing the O(n) operations, you move the path you searched, scrunching it towards the top of the tree. You probably only have a finite number of such O(n) operations to perform before the entire tree is balanced.
Here's another way to think about it:
Consider an unbalanced binary search tree. You can spend O(n) time balancing it. Assuming you don't add elements to it*, it takes O(log(n)) amortized time per query to fetch an element. The balancing setup cost is included in the amortized cost because it is effectively a constant which, as demonstrated in the equations in the answer, disappears (is dwarfed) by the infinite amount of work you are doing. (*if you do add elements to it, you need a self-balancing binary search tree, one of which is a splay tree)
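A minimal sketch of that one-time O(n) balancing step (my own helper names, assuming plain key/left/right nodes): an in-order walk lists the nodes in sorted order, and a median-first rebuild gives height O(log n):

def rebalance(root):
    # One-time O(n) rebuild: the in-order walk yields the nodes in
    # sorted order, then the median-first reconstruction re-links them
    # into a tree of height O(log n).
    nodes = []
    def flatten(node):
        if node:
            flatten(node.left); nodes.append(node); flatten(node.right)
    flatten(root)

    def rebuild(lo, hi):
        if lo > hi:
            return None
        mid = (lo + hi) // 2
        node = nodes[mid]
        node.left = rebuild(lo, mid - 1)
        node.right = rebuild(mid + 1, hi)
        return node

    return rebuild(0, len(nodes) - 1)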
I came across the solution given at http://discuss.joelonsoftware.com/default.asp?interview.11.780597.8, which uses Morris inorder traversal to find the median in O(n) time.
But is it possible to achieve the same in O(log n) time? The same question has been asked here: http://www.careercup.com/question?id=192816
If you also maintain the count of left and right descendants of each node, you can do it in O(log n) time by searching for the median position. In fact, you can find the kth largest element in O(log n) time.
Of course, this assumes that the tree is balanced. Maintaining the count does not change the insert/delete complexity.
If the tree is not balanced, then you have Omega(n) worst case complexity.
See: Order Statistic Tree.
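For intuition, here is a minimal sketch of that search (assuming each node stores size, the number of nodes in its subtree, kept up to date on insert/delete; the kth largest is symmetric):

def kth_smallest(node, k):
    # k is 1-based; each step discards one whole subtree, so the cost
    # is O(h) -- O(log n) in a balanced tree.
    while node is not None:
        left_size = node.left.size if node.left else 0
        if k == left_size + 1:
            return node
        if k <= left_size:
            node = node.left
        else:
            k -= left_size + 1
            node = node.right
    return None                      # k out of range

# The median is then kth_smallest(root, (root.size + 1) // 2).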
By the way, big-O and little-o are very different (your title says little-o).
Unless you guarantee some sort of balanced tree, it's not possible.
Consider a tree that's completely degenerate -- e.g., every left pointer is NULL (nil, whatever), so each node only has a right child (i.e., for all practical purposes the "tree" is really a singly linked list).
In this case, just accessing the median node (at all) takes linear time -- even if you started out knowing that node N was the median, it would still take N steps to get to that node.
We can find the median by using rabbit and turtle pointers. The rabbit moves twice as fast as the turtle through the in-order traversal of the BST. This way, when the rabbit reaches the end of the traversal, the turtle is at the median of the BST.
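Here is a minimal sketch of that idea (my own code, using two Python generators as the pointers; it assumes a non-empty tree and returns the lower median when the node count is even):

def inorder(node):
    # Lazy in-order traversal; each generator acts as one "pointer".
    if node:
        yield from inorder(node.left)
        yield node
        yield from inorder(node.right)

def median(root):
    turtle, rabbit = inorder(root), inorder(root)
    mid = next(turtle)          # both pointers start at the smallest node
    next(rabbit)
    while True:                 # rabbit takes two steps per turtle step
        if next(rabbit, None) is None or next(rabbit, None) is None:
            return mid          # rabbit fell off the end: turtle is at the median
        mid = next(turtle)

Note that this still walks half of the in-order sequence, so it is O(n), like the Morris-traversal solution; only the count-augmented tree above achieves O(log n).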