Finding max element in first N elements of dynamic array [closed] - algorithm

I'm looking for an efficient algorithm or data structure to find the largest element, by its second component, among the first N elements of a multiset into which I'll make many insertions and deletions, so I can't use a segment tree. Any ideas?
Note: I have a multiset of pairs.

You can use any balanced binary search tree implementation you are familiar with; arguably the best known are the AVL tree and the red-black tree.
A binary search tree is usually described as storing a key and value pair in each tree node, with the keys ordered from left to right. Insert, delete and find work in O(log(n)) time because the tree is balanced; the balance is typically maintained by tree rotations.
In order to be able to find the maximum value over a range of elements, you have to store and maintain additional information in each tree node, namely maxValue over the node's subtree and the size of that subtree. Define a recursive function for a node that finds the maximum value among the first N nodes of its subtree. If N is equal to size, you already have the answer in maxValue of the current node. Otherwise, call the function on the left/right child, depending on how many of the first N elements lie in their subtrees.
F(node, N) =
    if N == size[node] : maxValue[node]
    else if N <= size[leftChild[node]] :
        F(leftChild[node], N)
    else if N == size[leftChild[node]] + 1 :
        MAX(maxValue[leftChild[node]], value[node])
    else :
        MAX(maxValue[leftChild[node]],
            value[node],
            F(rightChild[node], N - size[leftChild[node]] - 1))
If you are familiar with segment trees, you will not encounter any problems with this implementation.
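For concreteness, here is a rough C++ sketch of that query, assuming a node type that already carries size and maxValue for its subtree (keeping those fields correct during inserts, deletes and rotations is omitted; all names are illustrative):

#include <algorithm>
#include <climits>

struct Node {
    int key;         // ordering key (first component of the pair)
    int value;       // second component, the one we maximise over
    int size;        // number of nodes in this subtree
    int maxValue;    // maximum "value" in this subtree
    Node *left, *right;
};

int sizeOf(const Node* t) { return t ? t->size : 0; }
int maxOf(const Node* t)  { return t ? t->maxValue : INT_MIN; }

// Maximum "value" among the first n elements (in key order) of t's subtree.
// Assumes 1 <= n <= sizeOf(t); size and maxValue are maintained elsewhere.
int prefixMax(const Node* t, int n) {
    if (n == t->size) return t->maxValue;
    int leftSize = sizeOf(t->left);
    if (n <= leftSize) return prefixMax(t->left, n);
    if (n == leftSize + 1) return std::max(maxOf(t->left), t->value);
    return std::max({maxOf(t->left), t->value,
                     prefixMax(t->right, n - leftSize - 1)});
}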
Alternatively, I may suggest you use a treap. This is a randomised binary search tree; because of this randomised nature the tree stays balanced (with high probability), providing O(log(n)) time complexity for the basic operations. A treap has two basic operations, split and merge; all other operations are implemented via them. An advantage of a treap is that you don't have to deal with rotations.
EDIT: There is no way around maintaining maxValue in each node explicitly; updating it whenever a subtree changes keeps every operation within O(log(n)).
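As a rough illustration of the treap approach (not a complete implementation; insertion by key, which is a split by key followed by two merges, is left out), split-by-count and merge could look like this, with size and maxValue pulled up in update():

#include <algorithm>
#include <climits>
#include <cstdlib>

struct TreapNode {
    int key, value;                    // ordered by key, maximum taken over value
    int priority, size, maxValue;
    TreapNode *left = nullptr, *right = nullptr;
    TreapNode(int k, int v)
        : key(k), value(v), priority(std::rand()), size(1), maxValue(v) {}
};

int cnt(const TreapNode* t) { return t ? t->size : 0; }
int mx(const TreapNode* t)  { return t ? t->maxValue : INT_MIN; }

void update(TreapNode* t) {
    if (!t) return;
    t->size = 1 + cnt(t->left) + cnt(t->right);
    t->maxValue = std::max({t->value, mx(t->left), mx(t->right)});
}

// Split so that l receives the first n elements (in key order) and r the rest.
void splitByCount(TreapNode* t, int n, TreapNode*& l, TreapNode*& r) {
    if (!t) { l = r = nullptr; return; }
    if (cnt(t->left) < n) {
        splitByCount(t->right, n - cnt(t->left) - 1, t->right, r);
        l = t;
    } else {
        splitByCount(t->left, n, l, t->left);
        r = t;
    }
    update(t);
}

// Merge two treaps where every key in l precedes every key in r.
TreapNode* merge(TreapNode* l, TreapNode* r) {
    if (!l || !r) return l ? l : r;
    if (l->priority > r->priority) {
        l->right = merge(l->right, r);
        update(l);
        return l;
    }
    r->left = merge(l, r->left);
    update(r);
    return r;
}

// Maximum "value" among the first n elements: split, read, merge back.
int prefixMaxTreap(TreapNode*& root, int n) {
    TreapNode *l, *r;
    splitByCount(root, n, l, r);
    int answer = mx(l);
    root = merge(l, r);
    return answer;
}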

Related

Why is a complete binary tree most suited for heap implementation? [closed]

I could not figure out why a complete binary tree is the most suited for implementing a heap. Why can't we use a full binary tree? Why is a complete binary tree the most suited for a heap implementation?
Full binary trees for heaps?
A full binary tree is not necessarily well-balanced. For instance, this is a full binary tree:
          1
         / \
        2   3
       / \
      4   5
     / \
    6   7
   / \
  8   9
 / \
10  11
In general, a full binary tree where every right child is also a leaf has a height that is O(n), more precisely (n−1)/2. This is problematic for heaps, which rely on the tree being well balanced to keep the insert/delete operations within a time complexity of O(log n).
Secondly, full binary trees always have an odd number of nodes (except when they are empty). This already makes them impractical, as obviously heaps should be able to have even sizes too.
Other alternative
However, binary heaps do not have to be complete binary trees. That is only required when their implementation is the well-known array-based one. One could also implement a binary heap with an AVL tree, which is not necessarily a complete binary tree, but which still keeps the tree balanced and gives the same time complexities for the heap operations. But since the overhead of pointer management is larger than that of working with indices in an array, the array representation leads to faster operations.
Why complete?
The choice for a complete binary tree comes into play when the implementation is array-based and not an explicit node-pointer representation. When you fill an array with values in level-order, and don't allow for gaps in the array (unused slots), then it follows that the tree is complete. Although you could imagine an array that allows for gaps, this would be an inferior choice, as it wastes space, with no gain to compensate for it.
First of all, with the array representation it is not possible to create a heap structure without it being tightly packed: every item in the array has a position in the binary tree, and that position comes from the array index.
Also, it has several advantages like the following:
Some heap operations have a time complexity of O(log n), where n is the number of elements; the log n factor comes from the height of the tree, so keeping the height at a minimum keeps the time required for these operations at a minimum.
All the items of a complete binary tree are stored contiguously in an array, so random access to any node by its index is possible by keeping the heap a complete binary tree.
The completeness ensures that there is a well-defined and efficient way to determine which element replaces the root when the maximum is removed (the last element of the array); giving up the complete structure would mean losing this advantage, which is a large part of why a heap is useful in the first place.
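As a small illustration of the array-based layout (0-based indexing assumed, names illustrative), the parent/child relations are pure index arithmetic, and an insertion simply appends to the array and sifts up:

#include <vector>
#include <utility>

// Index arithmetic for a 0-based array-backed binary max-heap.
int parent(int i)     { return (i - 1) / 2; }
int leftChild(int i)  { return 2 * i + 1; }
int rightChild(int i) { return 2 * i + 2; }

// Insert: append at the next free slot (keeping the tree complete), then sift up.
void heapInsert(std::vector<int>& heap, int value) {
    heap.push_back(value);
    int i = static_cast<int>(heap.size()) - 1;
    while (i > 0 && heap[parent(i)] < heap[i]) {
        std::swap(heap[parent(i)], heap[i]);
        i = parent(i);
    }
}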

Sorting in O(n log n) time using only minimum, successor, and insert explanation

Found this question in The Algorithm Design Manual, and the solution to the question is
Sort2()
    initialize-tree(t)
    While (not EOF)
        read(x);
        insert(x,t);
    y = Minimum(t)
    While (y != NULL) do
        print(y → item)
        y = Successor(y,t)
and it's explained as "The second problem allows us to use the minimum and successor operations after constructing the tree. We can start from the minimum element, and then repeatedly find the successor to traverse the elements in sorted order."
I don't think I am following Sort2() here. If y is initialized to the minimum node, is it not true that there is a possibility it won't have any successor node? In the case that y only has a parent node, won't this code simply print out the minimum value in the tree, y, and then terminate?
The idea of that algorithm is similar to heap sort:
Arrange all elements in a tree. You need n insertions that take O(log n) time each.
Traverse the tree. This can be done in n steps that take amortised O(1) time each (over the whole traversal every tree edge is crossed at most twice), so O(n) in total.
The point is that a tree is not a sequence, but it can be arranged in a way that allows iterating it in order, which makes it equivalent to a sequence.
Also, just in case that caused your confusion, the minimum node is not the root node but the leftmost node in the tree!
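To see why the traversal does not stop after the minimum, here is a rough C++ sketch of minimum, successor and the Sort2 loop, assuming nodes carry parent pointers (so Successor does not need the tree handle); error handling and balancing are omitted:

#include <iostream>
#include <vector>

struct Node {
    int item;
    Node *left = nullptr, *right = nullptr, *parent = nullptr;
    explicit Node(int v) : item(v) {}
};

Node* minimum(Node* t) {                 // leftmost node of t's subtree
    while (t && t->left) t = t->left;
    return t;
}

Node* successor(Node* x) {               // next key in sorted order, or nullptr
    if (x->right) return minimum(x->right);
    Node* p = x->parent;
    while (p && x == p->right) { x = p; p = p->parent; }   // climb until we come up from a left child
    return p;
}

Node* insert(Node* root, int v) {        // plain (unbalanced) BST insertion
    Node* n = new Node(v);
    if (!root) return n;
    Node* cur = root;
    while (true) {
        if (v < cur->item) {
            if (!cur->left) { cur->left = n; break; }
            cur = cur->left;
        } else {
            if (!cur->right) { cur->right = n; break; }
            cur = cur->right;
        }
    }
    n->parent = cur;
    return root;
}

// Sort2: build the tree, then walk minimum -> successor -> ... -> nullptr.
void sort2(const std::vector<int>& input) {
    Node* t = nullptr;
    for (int x : input) t = insert(t, x);
    for (Node* y = minimum(t); y != nullptr; y = successor(y))
        std::cout << y->item << '\n';    // keys come out in sorted order
}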

Get k-th largest values [closed]

I have a problem: I must add a lot of different values and in the end get only the k-th largest. How can I implement that efficiently, and which algorithm should I use?
Algorithm:
Create a binary minimum heap out of the first K values (heapify them).
For each one of the remaining N-K values, if it is larger than the heap's root (the smallest of the K values kept so far):
Replace the root with it, and sift it down in order to restore the heap.
Extract all the (K) values from the heap into a list.
Complexity:
Building the heap of the first K values: O(K)
Processing the remaining N-K values: O((N-K)×log(K))
Extracting the K values at the end: O(K×log(K))
If N-K ≥ K, then the overall complexity is O((N-K)×log(K)).
If N-K < K, then the overall complexity is O(K×log(K)).
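A minimal C++ sketch of this approach, using std::priority_queue as the min-heap of size K (the function name and types are illustrative):

#include <functional>
#include <queue>
#include <vector>

// Keep only the K largest values seen so far in a min-heap of size K.
std::vector<int> kLargest(const std::vector<int>& values, std::size_t k) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap;  // min-heap
    for (int v : values) {
        if (heap.size() < k) {
            heap.push(v);                    // fill the heap with the first K values
        } else if (v > heap.top()) {
            heap.pop();                      // drop the smallest of the K kept so far
            heap.push(v);
        }
    }
    std::vector<int> result;                 // extract the K values (smallest first)
    while (!heap.empty()) { result.push_back(heap.top()); heap.pop(); }
    return result;
}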
(Based on comments that you do not want to store all the numbers you have seen...)
Keep a running, sorted list of the k largest values seen so far. As each new number arrives, check whether it is larger than the least element in the list; if it is, remove the least element and insert the new number into its sorted position. Your initial list (before you've seen any numbers) would consist of k entries of negative infinity.
Alternatively, first build a max-heap from all the elements, which takes O(n) time.
Then extract the maximum k-1 times, which takes O(k log n) time; the element left at the root is the k-th largest.
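A quick sketch of this heapify-then-pop variant using the standard heap algorithms (assuming 1 <= k <= n):

#include <algorithm>
#include <vector>

// k-th largest by heapifying all values (O(n)) and popping the maximum k-1 times (O(k log n)).
int kthLargest(std::vector<int> values, std::size_t k) {     // copy taken on purpose
    std::make_heap(values.begin(), values.end());            // max-heap over all n values
    for (std::size_t i = 0; i + 1 < k; ++i)
        std::pop_heap(values.begin(), values.end() - i);     // move current max to the back
    return values.front();                                    // after k-1 pops, the root is the k-th largest
}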

Different Dictionary Implementations [closed]

I am preparing for an exam in algorithm analysis, and after learning C# and implementing a Dictionary in different ways I am confused about the advantages/disadvantages.
So here are my questions:
What is a reason to implement a Dictionary using an unordered array instead of an always-sorted array?
Reasons to implement a Dictionary using an always-sorted array instead of an unordered array?
Reasons to implement a Dictionary using a binary search tree instead of an always-sorted array?
If you use an unordered array you can just tack items onto the end, or copy everything into a bigger array and tack items onto the end of that when the original fills up. That makes insertion O(1) (or O(n) when a copy is needed), but every lookup is O(n).
With an ordered array you COULD gain the ability to search it more quickly, either through a binary search or other clever searches, but insertion becomes more expensive because you must shift elements around in the array every time you insert; in the worst case that is O(n) per element, unless you only ever append items that are already in sorted order.
With a binary search tree you can easily find whatever node you're looking for, based on whatever key you're ordering the tree on, in O(log n) time, though that bound only holds for a balanced tree. With an unbalanced binary search tree (basically a linked list) you could get worst-case performance of O(n). With a balanced tree an insertion can be more expensive because it may require reorganising (rebalancing) part of the tree, but that rebalancing is itself O(log n) in the worst case, and most balanced binary search trees do most insertions more cheaply than that.
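To make the sorted-array trade-off concrete, here is a small sketch (key/value types chosen just for illustration) where lookup is O(log n) via binary search but insertion is O(n) because of the shifting:

#include <algorithm>
#include <string>
#include <vector>

struct Entry { std::string key; int value; };

bool keyLess(const Entry& e, const std::string& k) { return e.key < k; }

// O(log n) lookup via binary search on the sorted array.
const Entry* find(const std::vector<Entry>& dict, const std::string& k) {
    auto it = std::lower_bound(dict.begin(), dict.end(), k, keyLess);
    return (it != dict.end() && it->key == k) ? &*it : nullptr;
}

// O(n) insertion: finding the slot is O(log n), but shifting the tail is O(n).
void insert(std::vector<Entry>& dict, const std::string& k, int v) {
    auto it = std::lower_bound(dict.begin(), dict.end(), k, keyLess);
    if (it != dict.end() && it->key == k) it->value = v;   // overwrite an existing key
    else dict.insert(it, Entry{k, v});
}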

Printing BST keys in the given range [closed]

I am trying to understand the runtime of printing BST keys in the given range.
I tried to understand it from this example, but I could not.
I think I understand where the O(log n) is coming from: that is from descending the BST recursively, which takes O(log n) for each side. But I am not sure about:
Where the k is coming from. Is it just the constant time it takes to print? If so, why is the runtime not O(log n) + O(k), and then you would ignore the k?
Where the O(n) from the in-order traversal is, because it is not in this runtime.
How the runtime will change if we have several values in the range on each side of the tree. For example, what if the range was from 4?
An easier way to understand the solution is to consider the following algorithm:
Searching for the smallest key greater than or equal to k1 in the BST - O(lg n)
Performing an in-order traversal of the BST nodes starting from that key, and printing their keys, until we reach a key greater than k2. Since an in-order traversal of the complete BST takes O(n) time, and there are k keys between k1 and k2, this part takes O(k) time.
The given algorithm is doing the same thing: searching for a key between k1 and k2 takes O(lg n) time, whereas printing is done only for the k keys within the range [k1, k2], which is O(k). If all BST keys lie within k1 and k2, the runtime will be O(lg n) + O(n) = O(n), because all n keys need to be printed out.
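A short sketch of such a range print on a plain BST node; the recursion is pruned so that only O(lg n + k) nodes are visited on a balanced tree:

#include <iostream>

struct Node {
    int key;
    Node *left = nullptr, *right = nullptr;
};

// Print all keys in [k1, k2] in sorted order.
void printRange(const Node* t, int k1, int k2) {
    if (!t) return;
    if (t->key > k1) printRange(t->left, k1, k2);    // left subtree matters only if it may hold keys >= k1
    if (k1 <= t->key && t->key <= k2) std::cout << t->key << '\n';
    if (t->key < k2) printRange(t->right, k1, k2);   // right subtree matters only if it may hold keys <= k2
}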
