I have created a data structure that implements a maximum binary heap. I'm trying to find 2 sequences of n numbers for which inserting all n elements takes O(n) time and O(n log n) time, respectively.
Is this possible?
Let me try to restate what you are asking; please correct me if this is wrong.
So a binary heap has O(log n) time complexity for insertion. The process of insertion into a max-heap is as follows:
the tree is a complete binary tree, i.e. all levels are full except possibly the last one.
insert the new node at the leftmost open spot on the last level.
if the node is larger than its parent, a swap is performed.
the process is repeated until the node is at the appropriate level (a sketch of this sift-up step follows below).
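Here is a minimal sketch of that sift-up insertion for an array-backed max-heap in Python (the function name is my own, not from any library):

def max_heap_insert(heap, value):
    # Place the new value at the leftmost open spot on the last level,
    # which is simply the end of the array.
    heap.append(value)
    i = len(heap) - 1
    # Sift up: while the new value is larger than its parent, swap them.
    while i > 0:
        parent = (i - 1) // 2
        if heap[i] > heap[parent]:
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
        else:
            break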
So for your question,
you want a sequence of n numbers for which the n insertions take O(n) total time. This means each insertion takes O(1), i.e. constant time, so we need a sequence where no sift-up (heapify) work is ever needed. A sequence like the following obviates the need for any swaps:
[10, 8, 9, 4, 5, 6, 7 ]
For the second one, you want O(n log n), which means each insertion takes O(log n), the worst case for binary-heap insertion. A strictly increasing sequence forces this, because in a max-heap every new element is larger than everything inserted before it:
[ 1, 2, 3, 4, 5, 6, 7]
Each element from the 2nd onward has to be compared with its parent and swapped all the way up to the root. A small demo counting the swaps for both sequences is below.
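As a sanity check (not a proof), here is a quick way to count the sift-up swaps each sequence causes in an array-backed max-heap:

def count_sift_up_swaps(sequence):
    heap, swaps = [], 0
    for value in sequence:
        heap.append(value)
        i = len(heap) - 1
        # Sift the new value up, counting every swap.
        while i > 0 and heap[i] > heap[(i - 1) // 2]:
            parent = (i - 1) // 2
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
            swaps += 1
    return swaps

print(count_sift_up_swaps([10, 8, 9, 4, 5, 6, 7]))  # 0: every insert is O(1)
print(count_sift_up_swaps([1, 2, 3, 4, 5, 6, 7]))   # 10: every insert after the first climbs to the root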
Related
Given an already sorted array of n distinct elements where only the last element is out of order, would insertion sort be the fastest algorithm to use here?
Ex: [1, 3, 5, 6, 7, 9, 2]
If it was an array, yes, insertion sort.
Worst case complexity: O(n)
Worst case scenario: Unsorted element is the smallest element.
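For the array case, here is a quick sketch of insertion sort on that example; only the final pass does any shifting, so the whole run is O(n):

def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot to the right until key fits.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([1, 3, 5, 6, 7, 9, 2]))  # [1, 2, 3, 5, 6, 7, 9]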
If it was a linked list of any kind where the cost of insertion is constant time, then a binary search would be the fastest, most efficient way.
Worst case complexity: O(log(n))
Suppose we have a min-heap, min_heap = [1, 3, 5, 9, 10, 13].
And the size of the heap is bounded: it has a fixed capacity K and can't grow beyond that.
What happens when we insert an element greater than every other element in the min-heap? (e.g. we insert 15 into our heap in this case)
What will be the efficiency? O(K), where K is the size of the heap?
It depends on how the "can't grow more than K" policy is implemented.
The most useful way to do that is to keep the best K elements at all times.
If so, inserting an element which is worse than the K best, naturally, does nothing.
The time for any insertion, successful or not, will be O(log K), as usual.
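A minimal sketch of that policy in Python, using heapq to keep the K largest values in a min-heap of size K (the function name is mine):

import heapq

def bounded_insert(heap, value, k):
    if len(heap) < k:
        heapq.heappush(heap, value)        # heap not full yet: O(log k)
    elif value > heap[0]:
        heapq.heapreplace(heap, value)     # better than the current worst: pop the min, push value, O(log k)
    # else: value is worse than the k best, so nothing happens

heap = []
for v in [1, 3, 5, 9, 10, 13]:
    bounded_insert(heap, v, 6)
bounded_insert(heap, 15, 6)   # 15 replaces the current minimum, 1
print(sorted(heap))           # [3, 5, 9, 10, 13, 15]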
If we perform n arbitrary insert and delete operations on an initially empty min-heap (with the location of each deletion in the min-heap given), why is the amortized cost O(1) for insert and O(log n) for delete? The choices were:
a) insert O(log n), delete O(1)
b) insert O(log n), delete O(log n)
c) insert O(1), delete O(1)
d) insert O(1), delete O(log n)
Could anyone clarify this for me?
Based on your question and responses to comments, I'm going to assume a binary heap.
First, the worst case for insertion is O(log n) and the worst case for removal of the smallest item is O(log n). This follows from the tree structure of the heap. That is, for a heap of n items, there are log(n) levels in the tree.
Insertion involves (logically) adding the item as the lowest right-most node in the tree and then "bubbling" it up to the required level. If the new item is smaller than the root, then it has to bubble all the way to the top--all log(n) levels. So if you insert the numbers 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 into a min-heap, you'll hit the worst case for every insertion.
Removal of the smallest element involves replacing the lowest item (the root) with the last item and then "sifting" the item down to its proper position. Again, this can take up to log(n) operations.
That's the worst case. The average case is much different.
Remember that in a binary heap, half of the nodes are leaves--they have no children. So if you're inserting items in random order, half the time the item you're inserting will belong on the lowest level and there is no "bubble up" to do. So half the time your insert operation is O(1). Of the other half, half of those will belong on the second level up. And so on. The only time you actually do log(n) operations on insert is when the item you're inserting is smaller than the existing root item. It's quite possible, then, that the observed runtime behavior is that insertion is O(1). In fact that will be the behavior if you insert a sorted array into a min-heap. That is, if you were to insert the values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 in that order.
When removing the smallest item from a min-heap, you take the last item from the heap and sift it down from the top. The "half the time" rule comes into play again, but this time it's working against you. That last item you took from the heap probably belongs down there on the lowest level. So you have to sift it all the way back down, which takes log(n) operations. Half the time you'll have to do all log(n) operations. Half of the remaining times you'll need to do all but one of them, etc. And in fact the minimum number of levels you have to sift down will depend on the depth of the tree. For example, if your heap has more than three items then you know that removing the smallest item will require at least one sift-down operation because the next-lowest item is always on the second level of the tree.
It turns out, then, that in the average case insertion into a binary heap takes much less than O(log n) time. It's likely closer to O(1). And removal from a binary heap is much closer to the worst case of O(log n).
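A rough experiment that matches this intuition: insert n random numbers into a hand-rolled min-heap and count the sift-up swaps, then remove the minimum n times and count the sift-down swaps (illustrative only, not a proof):

import random

def average_swap_counts(n):
    heap, up_swaps = [], 0
    # Insert n random values, counting sift-up swaps.
    for _ in range(n):
        heap.append(random.random())
        i = len(heap) - 1
        while i > 0 and heap[i] < heap[(i - 1) // 2]:
            p = (i - 1) // 2
            heap[i], heap[p] = heap[p], heap[i]
            i = p
            up_swaps += 1
    down_swaps = 0
    # Remove the minimum n times, counting sift-down swaps.
    while heap:
        heap[0] = heap[-1]
        heap.pop()
        i, size = 0, len(heap)
        while True:
            left, right, smallest = 2 * i + 1, 2 * i + 2, i
            if left < size and heap[left] < heap[smallest]:
                smallest = left
            if right < size and heap[right] < heap[smallest]:
                smallest = right
            if smallest == i:
                break
            heap[i], heap[smallest] = heap[smallest], heap[i]
            i = smallest
            down_swaps += 1
    return up_swaps / n, down_swaps / n

# The first number stays small (roughly constant); the second is close to log2(n).
print(average_swap_counts(100000))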
This is a homework assignment.
The goal is to present an algorithm in pseudocode that will search an array of numbers (doesn't specify if integers or >0) and check if the ratio of any two numbers equals a given x. Time complexity must be under O(nlogn).
My idea was to mergesort the array (O(n log n) time) and then, if |x| > 1, start checking every number in descending order (using a binary search for the matching partner). The check should take O(log n) time for each number, and with a worst case of n checks that gives a total of O(n log n). If I am not missing anything, this should give us a worst case of O(n log n) + O(n log n) = O(n log n), within the parameters of the assignment.
I realize that it doesn't really matter where I start checking the ratios after sorting, but the time cost is amortized by 1/2.
Is my logic correct? Is there a faster algorithm?
An example in case it isn't clear:
Given an array { 4, 9, 2, 1, 8, 6 }
If we want to search for a ratio of 2:
Mergesort { 9, 8, 6, 4, 2, 1 }
Since the given ratio is >1 we will search from left to right.
2a. First number is 9. Checking 9/4 > 2. Checking 9/6 < 2. Next number.
2b. Second number is 8. Checking 8/4 = 2. DONE.
The analysis you have presented is correct and is a perfectly good way to solve this problem. Sorting does work in time O(n log n), and 2n binary searches also take O(n log n) time. That said, I don't think you want to use the term "amortized" here, since that refers to a different type of analysis.
As a hint for how to speed up your solution a bit, the general idea of your solution is to make it possible to efficiently query, for any number, whether that number exists in the array. That way, you can just loop over all numbers and look for anything that would make the ratio work. However, if you use an auxiliary data structure outside the array that supports fast access, you can possibly whittle down your runtime at the cost of increasing the memory usage. Try thinking about what data structures support very fast access (say, O(1) lookups) and see if you can use any of them here.
Hope this helps!
To solve this problem, O(n log n) is enough.
Step 1: sort the array. That costs O(n log n).
Step 2: check whether the ratio exists. This step only needs O(n).
You just need two pointers: one points to the first element (the smallest one), the other points to the last element (the biggest one).
Calculate the ratio.
If the ratio is bigger than the specified one, move the second pointer to its previous element.
If the ratio is smaller than the specified one, move the first pointer to its next element.
Repeat the above steps until:
you find the exact ratio, or
the two pointers meet.
The complexity of your algorithm is O(n²), because after sorting the array, you iterate over each element (up to n times) and in each iteration you execute up to n - 1 divisions.
Instead, after sorting the array, iterate over each element, and in each iteration divide the element by the ratio, then see if the result is contained in the array:
division: O(1)
search in sorted list: O(log n)
repeat for each element: n times
Results in time complexity O(n log n)
In your example:
9/2 = 4.5 (not found)
8/2 = 4 (found)
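A minimal sketch of that approach in Python, using bisect for the binary search (it assumes the divisions come out exact, e.g. integer inputs with an integer ratio):

import bisect

def has_ratio(numbers, x):
    a = sorted(numbers)                       # O(n log n)
    for value in a:                           # n iterations
        target = value / x                    # O(1) division
        i = bisect.bisect_left(a, target)     # O(log n) search in the sorted list
        if i < len(a) and a[i] == target and target != value:
            return True
    return False

print(has_ratio([4, 9, 2, 1, 8, 6], 2))   # True: a pair with ratio 2 exists (e.g. 8/4)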
(1) Build a hashmap of this array. Time Cost: O(n)
(2) For every element a[i], search a[i]*x in HashMap. Time Cost: O(n).
Total Cost: O(n)
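A minimal sketch of that idea; a Python set plays the role of the hash map here, since only membership tests are needed (again assuming exact arithmetic):

def has_ratio_hash(numbers, x):
    seen = set(numbers)                 # (1) build the hash structure: O(n)
    for a in numbers:                   # (2) look up a * x for every element: O(n) total
        if a * x in seen and a * x != a:
            return True
    return False

print(has_ratio_hash([4, 9, 2, 1, 8, 6], 2))   # True: 4 * 2 = 8 is in the array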
Using an algorithm Tree-Insert(T, v) that inserts a new value v into a binary search tree T, the following algorithm grows a binary search tree by repeatedly inserting each value in a given section of an array into the tree:
Tree-Grow(A, first, last, T)
    for i ← first to last
        do Tree-Insert(T, A[i])
If the tree is initially empty, and the length of the array section (i.e., last-first+1) is n, what are the best-case and worst-case asymptotic running times of the above algorithm, respectively?
When n = 7, give a best-case instance (as an array containing digits 1 to 7, in certain order), and a worst-case instance (in the same form) of the algorithm.
If the array is sorted and all the values are distinct, find a way to modify Tree-Grow, so that it will always build the shortest tree.
What are the best-case and worst-case asymptotic running times of the modified algorithm, respectively?
Please tag homework questions with the homework tag. In order to do well on your final exam, I suggest you actually learn this stuff, but I'm not here to judge you.
1) It takes O(n) to iterate from first to last. In the best case, it takes O(lg n) to insert into a binary search tree, therefore the algorithm that you have shown takes O(n lg n) in the best case.
The worst case of inserting into a binary tree is when the tree is really long, but not very bushy; similar to a linked list. In that case, it would take O(n) to insert, therefore it would take O(n^2) in the worst case.
2) Best Case: [4, 2, 6, 1, 3, 5, 7], Worst Case: [1, 2, 3, 4, 5, 6, 7]
3) Use the element at index n/2 as the root, then recursively do the same for the left half and the right half of the array (a sketch follows after this list).
4) O(n lg n) in the best and worst case.
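A minimal sketch of that modification in Python (the Node/Tree classes and names are mine; tree_insert is a standard unbalanced BST insert standing in for Tree-Insert):

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

class Tree:
    def __init__(self):
        self.root = None

def tree_insert(T, v):
    # Standard (unbalanced) BST insertion.
    node, parent = T.root, None
    while node is not None:
        parent = node
        node = node.left if v < node.value else node.right
    if parent is None:
        T.root = Node(v)
    elif v < parent.value:
        parent.left = Node(v)
    else:
        parent.right = Node(v)

def tree_grow(A, first, last, T):
    # Modified Tree-Grow: insert the middle element first, then recurse
    # on the two halves, so the tree stays as short as possible.
    if first > last:
        return
    mid = (first + last) // 2
    tree_insert(T, A[mid])
    tree_grow(A, first, mid - 1, T)
    tree_grow(A, mid + 1, last, T)

T = Tree()
tree_grow([1, 2, 3, 4, 5, 6, 7], 0, 6, T)
print(T.root.value, T.root.left.value, T.root.right.value)  # 4 2 6, matching the best-case instance above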
I hope this helps.