What will be the time complexity of displaying data in alphabetical order using a skip list?
And what will the time complexity be for a skip list implemented with quad nodes?
Let's assume that your input contains N elements. First you have to construct the skip list. The complexity of a single insert operation is O(log N) on average, so the complexity of inserting N elements is O(N log N). Once the skip list is constructed, its elements are already sorted, so enumerating them only requires visiting each element once, which is O(N).
It is worth noting that the skip list is based on randomness, so the O(log N) complexity of a single insert operation is not guaranteed. The worst-case complexity of one insert is O(N), which means that in the worst case inserting N elements into the skip list costs O(N^2).
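To make the two phases concrete, here is a minimal skip-list sketch in Python. The node layout, the promotion probability, and the `skiplist_sort` helper are illustrative choices, not from the question: inserting n items costs O(log n) expected time each, and the final enumeration is a single O(n) walk along the bottom level.

```python
import random

class _Node:
    __slots__ = ("key", "forward")
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * level  # one next-pointer per level

class SkipList:
    MAX_LEVEL = 16
    P = 0.5  # probability of promoting a node one level up

    def __init__(self):
        self.head = _Node(None, self.MAX_LEVEL)  # sentinel, holds no key
        self.level = 1

    def _random_level(self):
        lvl = 1
        while lvl < self.MAX_LEVEL and random.random() < self.P:
            lvl += 1
        return lvl

    def insert(self, key):
        # O(log n) expected: descend level by level, remembering the
        # rightmost node visited on each level so we can splice in.
        update = [self.head] * self.MAX_LEVEL
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] is not None and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = _Node(key, lvl)
        for i in range(lvl):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def __iter__(self):
        # O(n): walk the bottom level, which links every element in order.
        node = self.head.forward[0]
        while node is not None:
            yield node.key
            node = node.forward[0]

def skiplist_sort(items):
    sl = SkipList()
    for x in items:   # n inserts, O(log n) expected each
        sl.insert(x)
    return list(sl)   # one O(n) enumeration
```

Calling `skiplist_sort(["banana", "apple", "cherry"])` yields the keys in alphabetical order, since iteration simply follows the sorted bottom level.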
Related
I have an AVL tree implementation where the insertion method runs in O(log n) time and the method that returns an in-order list representation runs in O(n^2) time. Suppose I have a list that needs to be sorted. Using a for-loop, I can iterate through the list and insert each element into the AVL tree, which takes O(n log n) in total. So what is the performance of the entire sorting algorithm (i.e. iterate through the list, insert each element, then use an in-order traversal to return a sorted list)?
You correctly say that adding n elements to the tree takes O(n log n) time. A simple in-order traversal of a BST can be performed in O(n) time, so it is possible to get a sorted list of the elements in O(n log n + n) = O(n log n). If your algorithm for generating the sorted list from the tree is quadratic (i.e. in O(n^2) but not in O(n)), then the worst-case time complexity of the procedure you describe is O(n log n + n^2) = O(n^2), which is not optimal.
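This tree-sort procedure can be sketched with a plain (unbalanced) BST standing in for the AVL tree; the `Node`, `insert`, and `in_order` names are illustrative. The point is that the traversal is a single O(n) pass, so the total cost is dominated by the n inserts.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # Plain BST insert: O(log n) on average, O(n) worst case for one key.
    # (An AVL tree would guarantee O(log n) by rebalancing.)
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def in_order(root, out):
    # O(n): each node is visited exactly once.
    if root is not None:
        in_order(root.left, out)
        out.append(root.key)
        in_order(root.right, out)

def tree_sort(items):
    root = None
    for x in items:          # n inserts: O(n log n) average with a plain BST
        root = insert(root, x)
    out = []
    in_order(root, out)      # a single linear pass, not O(n^2)
    return out
```

With a self-balancing tree the same two-phase structure gives a guaranteed O(n log n) sort.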
I know that when inserting the numbers 1, 2, 3, ..., n into an initially empty min-heap in the order 1, 2, 3, ..., n, you just append them one by one.
But I can't quite work out how to calculate the time complexity of two other cases: inserting them in reverse order (n, n-1, n-2, ..., 2, 1), or in the interleaved order (1, n+1, 2, n+2, 3, n+3, ..., n-1, 2n-1, n, 2n). I know that in the reverse case you have to sift each inserted number up along the height of the heap (which is log n), but I am not quite sure about the remaining parts...
As you say, when you insert the numbers 1..n in order into a min-heap, insertion is O(1) per item, because all you have to do is append each number to the array.
When you insert in reverse order, every item is inserted at the bottom row and has to be sifted up through the heap to the root. Every insertion moves the item up through log(n) rows, so insertion is O(log n) per item.
The average, when you insert items in random order, as discussed at some length in "Argument for O(1) average-case complexity of heap insertion" and the articles it links to, is something like 1.6.
So there is a very strong argument that the average complexity of binary heap insertion is O(1).
In your particular case, the insertions alternate between O(1) and O(log n), so per item you have O((1 + log n)/2) = O(log n), which gives O(n log n) to insert all of the items.
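A quick way to see the difference between the two insertion orders is to count how many levels each new item is sifted up. This sketch uses a hypothetical `heap_push` helper (not from the discussion) and inserts 1..n in ascending and then descending order:

```python
def heap_push(heap, item):
    """Push item onto a binary min-heap stored in a list, returning how
    many levels the new item was sifted up toward the root."""
    heap.append(item)
    i, moves = len(heap) - 1, 0
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:  # heap property restored
            break
        heap[parent], heap[i] = heap[i], heap[parent]
        i, moves = parent, moves + 1
    return moves

n = 1024

# Ascending order: each new item is already >= its parent, so it never moves.
asc = []
asc_moves = sum(heap_push(asc, k) for k in range(1, n + 1))

# Descending order: each new item is smaller than everything already in the
# heap, so it is sifted all the way up to the root every time.
desc = []
desc_moves = sum(heap_push(desc, k) for k in range(n, 0, -1))
```

Ascending insertion performs zero sift-up moves in total, while descending insertion performs on the order of n log n moves, matching the O(1) vs O(log n) per-item analysis above.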
What would be the worst-case time complexity of building a binary search tree from N given arbitrary elements?
I think there is a difference between being given all N elements up front and having the elements arrive one by one while building up a BST of N elements in total.
In the former case it is O(n log n), and in the second it is O(n^2). Am I right?
If the binary search tree (BST) is not perfectly balanced, then the worst-case time complexity is O(n^2). Generally a BST is built by repeated insertion, so the worst case is O(n^2). But if you can sort the input (in O(n log n)), the tree can be built in O(n), giving an overall complexity of O(n log n).
If the BST is self-balancing, then the worst-case time complexity is O(n log n) even with repeated insertion.
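The O(n) build from sorted input mentioned above is the classic "middle element becomes the root" construction. This sketch (with illustrative names) builds a height-balanced BST from a sorted list without any rebalancing:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def build_balanced(keys, lo=0, hi=None):
    """Build a height-balanced BST from already-sorted keys in O(n):
    the middle element becomes the root and each half becomes a subtree.
    Index arithmetic (rather than list slicing) keeps the build linear."""
    if hi is None:
        hi = len(keys)
    if lo >= hi:
        return None
    mid = (lo + hi) // 2
    node = Node(keys[mid])
    node.left = build_balanced(keys, lo, mid)
    node.right = build_balanced(keys, mid + 1, hi)
    return node

def height(node):
    """Number of levels in the tree (0 for an empty tree)."""
    if node is None:
        return 0
    return 1 + max(height(node.left), height(node.right))
```

For 1023 sorted keys this produces a perfect tree of height 10, whereas inserting the same sorted keys one by one into a plain BST would degenerate into a height-1023 linked list.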
Consider the scenario where the data to be inserted is always in order, i.e. (1, 5, 12, 20, ...) with A[i] >= A[i-1], or (1000, 900, 20, 1, -2, ...) with A[i] <= A[i-1].
To support such a dataset, is it more efficient to use a binary search tree or an array?
(Side note: I am just trying to run some naive analysis for a timed hash map of type (K, T, V) and the time is always in order. I am debating using Map<K, BST<T,V>> vs Map<K, Array<T,V>>.)
As I understand it, the following worst-case costs apply:
          Array      BST
Space     O(n)       O(n)
Search    O(log n)   O(n)
Max/Min   O(1)       O(1) *
Insert    O(1) **    O(n)
Delete    O(n)       O(n)
*: Max/Min pointers
**: Amortized time complexity
Q: To be more clear about the question: which of these two data structures should I use for such a scenario? Please feel free to discuss other data structures such as self-balancing BSTs, etc.
EDIT:
Please note I didn't consider the complexity of a balanced binary search tree (red-black tree, etc.). As mentioned, this is a naive analysis using a plain binary search tree.
Deletion has been updated to O(n) (I didn't consider the time to search for the node).
Max/Min for a skewed BST costs O(n), but it is also possible to store pointers to the max and min, making the overall time complexity O(1).
See the table below, which will help you choose. Note that I am assuming two things:
1) Data always arrives in sorted order - you mentioned this, i.e. if 1000 is the last value inserted, new data will always be greater than 1000. If data does not arrive in sorted order, insertion can take O(log n), but deletion does not change.
2) Your "array" is actually similar to java.util.ArrayList; in short, its length is mutable. (It is actually unfair to compare a mutable and an immutable data structure.) If it is a plain fixed-size array instead, deletion takes amortized O(log n) {O(log n) to search and O(1) to delete, amortized because you may need to create a new array}, and insertion takes amortized O(1) {you may need to create a new array}.
          ArrayList   BST
Space     O(n)        O(n)
Search    O(log n)    O(log n) {optimized from O(n)}
Max/Min   O(1)        O(log n) {instead of O(1) - you need to traverse to a leaf}
Insert    O(1)        O(log n) {optimized from O(n)}
Delete    O(log n)    O(log n)
So, based on this, ArrayList seems better.
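Under assumption 1 (keys always arrive in order), the ArrayList column can be realized with a plain Python list plus binary search. The `OrderedLog` class below is an illustrative sketch, not code from the question: insert is an amortized O(1) append, search is O(log n) via `bisect`, and min/max are just the first and last slots.

```python
import bisect

class OrderedLog:
    """Append-only (key, value) store where keys arrive in sorted order,
    e.g. timestamps. Illustrative sketch for the table above."""

    def __init__(self):
        self._keys = []
        self._values = []

    def insert(self, key, value):
        # Amortized O(1): keys arrive in order, so this is a plain append.
        assert not self._keys or key >= self._keys[-1], "keys must arrive in order"
        self._keys.append(key)
        self._values.append(value)

    def search(self, key):
        # O(log n) binary search over the sorted key list.
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._values[i]
        return None

    def min(self):
        # O(1): smallest key is the first slot.
        return self._keys[0]

    def max(self):
        # O(1): largest key is the last slot.
        return self._keys[-1]
```

Something like this could back the `Map<K, Array<T, V>>` option from the side note, as long as timestamps per key really are monotone.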
May I ask why the average-case time complexity of skip-list insertion is O(log n), why the height of a skip list with n elements is O(log n) with high probability, and why the average search time within each layer is O(1)?
I can help with the O(log n) part.
Basically...
[Skip list searching] is quite reminiscent of binary search in an array, and is perhaps the best way to intuitively understand why the maximum number of nodes visited in this list is in O(log n).
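The "with high probability" part of the height bound comes from the coin-flip level generator: a node reaches level k with probability 2^-(k-1), so the chance that any of n nodes exceeds c*log2(n) levels is at most n * n^-c = n^(1-c). A small simulation (illustrative, with a capped maximum level) makes this concrete:

```python
import random

def random_level(p=0.5, max_level=32):
    """Coin-flip level generator used by skip lists: keep flipping while
    the coin comes up heads, so a node reaches level k with probability
    p**(k-1). Capped at max_level as real implementations do."""
    lvl = 1
    while lvl < max_level and random.random() < p:
        lvl += 1
    return lvl

# The tallest node among n nodes determines the skip list's height.
n = 10_000
height = max(random_level() for _ in range(n))
# With p = 1/2, P(height > c * log2(n)) <= n ** (1 - c): for n = 10,000
# the height is overwhelmingly likely to land within a small constant
# factor of log2(n) ~ 13.3, which is what "O(log n) w.h.p." means.
```

Since a search visits O(1) nodes per level on average, an O(log n) height directly gives the O(log n) expected search and insert costs asked about above.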