A binary search tree is given containing the prices of n items (one price per item); no two prices are the same. Suppose I have m dollars: find a pair of items whose total cost is exactly m. Write an O(n) algorithm.
Can someone tell me how to finish this in O(n)?
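One common O(n) approach (a sketch, not an answer from this thread) is to flatten the tree with an in-order traversal, which yields the prices in sorted order, and then run the classic two-pointer scan. The `Node` class and field names are assumptions.

```python
class Node:
    def __init__(self, price, left=None, right=None):
        self.price = price
        self.left = left
        self.right = right

def inorder(node):
    # Yield prices in ascending order.
    if node:
        yield from inorder(node.left)
        yield node.price
        yield from inorder(node.right)

def find_pair(root, m):
    """Return a pair of two distinct items whose prices sum to m, or None."""
    prices = list(inorder(root))        # O(n) time and O(n) extra space
    lo, hi = 0, len(prices) - 1
    while lo < hi:                      # lo < hi guarantees two distinct items
        s = prices[lo] + prices[hi]
        if s == m:
            return prices[lo], prices[hi]
        elif s < m:
            lo += 1
        else:
            hi -= 1
    return None
```

The two-pointer pass is O(n), so the whole thing is O(n) time at the cost of O(n) extra space; a pointer-free O(1)-space variant would need two simultaneous tree iterators instead of the flattened list.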
Suppose we are given m sorted arrays containing n elements in total. Our goal is to choose one element from each sorted array such that the difference between the maximum and the minimum of the chosen elements is minimized.
I think we can solve the above problem in O(m log n). I read this post, but I can't use that reference to solve the above problem.
Given an array of integers, I want to design a comparison-based algorithm
that pairs the largest element with the smallest one, the second largest with the second smallest, and so on. Obviously this is easy if I sort the array, but I want to do it in O(n) time. How can I possibly solve this problem?
Well, I can prove that no such algorithm exists.
Proof by contradiction: suppose there were such an algorithm.
Then we could get an array of (k-th min, k-th max) pairs.
From those pairs we could recover the sorted array by taking all the mins in order, followed by all the maxes in reverse order,
so we could sort the original array in O(n) steps.
That would give us a comparison-based sorting algorithm that sorts in O(n).
Yet it can be proven that any comparison-based sorting algorithm must take at least n log n steps (many proofs online, e.g. https://www.geeksforgeeks.org/lower-bound-on-comparison-based-sorting-algorithms/).
Hence we have a contradiction, so no such algorithm exists.
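The reduction step in the argument above can be sketched directly: if some algorithm handed us the list of (k-th smallest, k-th largest) pairs, a single O(n) pass would recover the fully sorted array.

```python
def sorted_from_pairs(pairs, n):
    """Rebuild sorted order from [(1st min, 1st max), (2nd min, 2nd max), ...]."""
    mins = [p[0] for p in pairs]          # ascending by construction
    maxs = [p[1] for p in pairs]          # descending by construction
    if n % 2 == 0:
        return mins + maxs[::-1]
    # For odd n the middle element is paired with itself; drop the duplicate.
    return mins + maxs[::-1][1:]
```

Since this postprocessing is O(n), a comparison-based O(n) pairing algorithm would imply comparison-based O(n) sorting, contradicting the n log n lower bound.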
I am studying the time complexity of binary search, ternary search, and k-ary search over N elements, and have worked out their respective asymptotic worst-case run-times. However, I started to wonder what would happen if I divide the N elements into N ranges (i.e., an N-ary search over N elements). Would that amount to a linear search over a sorted array, with a run-time of O(N)? This is a little confusing. Please help me out!
What you say is right.
For a k-ary search we have:
Do k-1 checks in boundaries to isolate one of the k ranges.
Jump into the range obtained from above.
Hence the time complexity is essentially O((k-1) * log_k(N)), where log_k(N) means 'log(N) to base k'. Over integer k >= 2, this is minimized at k = 2.
If k = N, the time complexity will be: O((N-1) * log_N(N)) = O(N-1) = O(N), which is the same algorithmically and complexity-wise as linear search.
Translated to the algorithm above, it is:
Do N-1 checks in boundaries (each of the first N-1 elements) to isolate one of the N ranges. This is the same as a linear search in the first N-1 elements.
Jump into the range obtained from above. This is the same as checking the last element (in constant time).
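The two steps above can be sketched as follows (my own illustration, not from the answer). Each level does up to k-1 boundary probes to isolate one of k subranges; with k equal to the array length, the probe loop is exactly a linear scan.

```python
def k_ary_search(a, target, k):
    """Index of target in sorted list a, or -1; k >= 2 is the fan-out."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        size = hi - lo + 1
        if size <= k:
            # Range no bigger than the fan-out: check each element directly.
            for i in range(lo, hi + 1):
                if a[i] == target:
                    return i
            return -1
        width = size // k
        # Default: target lies past the last boundary, in the final subrange.
        new_lo, new_hi = lo + (k - 1) * width, hi
        for j in range(1, k):                 # up to k-1 boundary checks
            b = lo + j * width
            if a[b] == target:
                return b
            if a[b] > target:
                # Target sits between the previous boundary and this one.
                new_lo, new_hi = lo + (j - 1) * width, b - 1
                break
        lo, hi = new_lo, new_hi
    return -1
```

Running this with k = 2 behaves like binary search, and with k = len(a) it degenerates into the O(N) linear scan described above.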
Suppose we have a balanced binary search tree T holding n numbers. We are given two
numbers L and H and wish to sum up all the numbers in T that lie between L and H. Suppose
there are m such numbers in T. Can someone explain how to work out the time taken to compute the sum?
I'll leave you to work out the full details, but here's a start. The algorithm will go:
Find the smallest number in the tree that's greater than L. You can do that in log time.
Walk the tree, each time moving to the successor (the next larger number), adding each one to a running total.
Stop when you reach a number that's at least H.
I've assumed that "lie between" means "strictly between", but you might want weak inequalities in steps 1 and 3.
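Here is a sketch of that walk, assuming a plain BST with `key`/`left`/`right` fields (my naming). Instead of explicit successor steps it uses a pruned recursion that skips any subtree that cannot contain keys strictly between L and H, which gives the same O(log n + m) behaviour on a balanced tree.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def range_sum(node, L, H):
    """Sum of all keys strictly between L and H."""
    if node is None:
        return 0
    if node.key <= L:            # everything in the left subtree is <= L too
        return range_sum(node.right, L, H)
    if node.key >= H:            # everything in the right subtree is >= H too
        return range_sum(node.left, L, H)
    return node.key + range_sum(node.left, L, H) + range_sum(node.right, L, H)
```

For weak inequalities, change `<=`/`>=` to `<`/`>` in the two pruning tests.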
I am solving a problem but I got stuck on this part.
There are 3 types of query: add an element (an integer), remove an element, and get the sum of the n largest elements (n can be any integer). How can I do this efficiently? My current solution: add and remove an element via binary search in O(log n); getSum naively in O(n).
A segment tree is commonly used to find the sum over a given range. Building that on top of a binary search tree gives you the data structure you are looking for, with O(log N) add, remove, and range-sum operations. By querying the sum over the range where the k largest elements sit (roughly positions N-k to N), you get the sum of the k largest elements in O(log N). The result is a mutable, ordered segment tree rather than the standard immutable (static), unordered one.
Basically, you add to each node two variables holding the size and the sum of its subtree, and use that information to compute the answer via O(log N) additions and/or subtractions.
If k is fixed, you can use the same approach that allows for O(1) find-min/max in heaps to allow for O(1) find the k-largest elements sum simply by updating a variable holding the value during each O(log N) add/remove.
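A sketch of the augmentation described above: each node carries the size and sum of its subtree, so "sum of the k largest" is answered by one root-to-leaf descent. The insert here is unbalanced for brevity; a production version would use an AVL or red-black tree to guarantee O(log N) height.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None
        self.size = 1           # number of nodes in this subtree
        self.total = key        # sum of keys in this subtree

def insert(node, key):
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    node.size += 1
    node.total += key
    return node

def _size(n):  return n.size if n else 0
def _total(n): return n.total if n else 0

def sum_k_largest(node, k):
    """Sum of the k largest keys in the subtree rooted at node."""
    if node is None or k <= 0:
        return 0
    r = _size(node.right)
    if k <= r:                    # all k answers live in the right subtree
        return sum_k_largest(node.right, k)
    # Take the whole right subtree plus this node, then recurse left.
    return _total(node.right) + node.key + sum_k_largest(node.left, k - r - 1)
```

The query touches one node per level, so on a balanced tree it is O(log N), matching the analysis above.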
A lot depends on the relative frequency of the queries, but if we assume a typical situation where the sum query is much more frequent than the add/remove requests (and adds are more frequent than removes), one solution is to store tuples of (number, running sum).
So the first element will be (a1, a1), the second element in your list will be (a2, a1+a2) and so on. (Note that when you insert a new element in the k-th position, you still don't need to do the whole sum, just add the new number to the preceding element's sum.)
Removals will be quite expensive though but that's the trade-off for an O(1) sum query.
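The tuple scheme above can be sketched with a plain sorted Python list (my illustration): element i stores (a_i, a_1 + ... + a_i). An insert only needs the preceding prefix sum to seed its own, but every later prefix sum must then be shifted, which is exactly what makes removals and mid-list inserts expensive.

```python
import bisect

def insert(items, value):
    """items: sorted list of (value, prefix_sum) tuples; values distinct."""
    k = bisect.bisect_left(items, (value,))
    prev = items[k - 1][1] if k > 0 else 0
    items.insert(k, (value, prev + value))
    for i in range(k + 1, len(items)):         # fix up later prefix sums: O(n)
        items[i] = (items[i][0], items[i - 1][1] + items[i][0])

def sum_largest(items, n):
    """O(1): sum of the n largest = grand total minus the prefix before them."""
    total = items[-1][1] if items else 0
    cut = len(items) - n
    return total - (items[cut - 1][1] if cut > 0 else 0)
```

The O(1) query is the payoff; the O(n) fix-up loop in `insert` (and the matching cost on removal) is the trade-off the answer mentions.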