Make balance bracket with highest score - algorithm

Question:
Given an array A of integers and a score S = 0. For each position i in the array, you can do one of the following:
Place a "(": the score becomes S += A[i]
Place a ")": the score becomes S -= A[i]
Skip it
What is the highest score you can get such that the brackets are balanced?
Limits:
|A[i]| <= 10^9
Size of array A: <= 10^5
P/S:
I have tried many ways, but my best attempt is a brute force that takes O(3^n). Is there a way to do this problem in O(n log n) or less?

You can do this in O(n log n) time with a max-heap.
First, remove the asymmetry in the operations. Rather than having open and closed brackets, assume we start off with a running sum of -sum(A), i.e. all closed brackets. Now, for every element in A, we can add it to our running sum either zero, one or two times, corresponding to leaving the closed bracket, removing the closed bracket (a skip), or adding an open bracket, respectively. The balance constraint now says that:
after processing the first k elements, we have made at least k additions, for every k;
we make length(A) total additions in all;
we have added the final element to our sum either zero or one times.
Suppose that after processing the first k elements, we have made k additions, and that we have the maximum score possible of all such configurations. We can extend this to a maximum score configuration of the first k+1 elements with k+1 additions, greedily. We have a new choice going forward of adding the k+1-th element to our sum up to two times, but can only add it at most once now. Simply choose the largest seen element that has not yet been added to our sum two times, and add it to our sum: this must also be a maximum-score configuration, or we can show the old configuration wasn't maximum either.
Python code (all values are negated because Python only has a min-heap):

import heapq
from typing import List

def solve(nums: List[int]) -> int:
    """Given an array of integers, return the maximum sum achievable.
    We must add k elements from nums and subtract k elements from nums,
    left to right and all distinct, so that at no point have we subtracted
    more elements than we have added.
    """
    max_heap = []
    running_sum = 0
    # Balance will be 0 after all loop iterations.
    for value in nums:
        running_sum -= value              # Assume value is subtracted
        heapq.heappush(max_heap, -value)  # Option to not subtract value
        heapq.heappush(max_heap, -value)  # Option to add value
        # Either un-subtract or add the largest previous free element
        running_sum -= heapq.heappop(max_heap)
    return running_sum
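For example, solve([4, -3]) returns 7: open a bracket at 4 (S += 4) and close it at -3 (S -= -3), giving 4 + 3 = 7.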

You can do this in O(n^2) time by using a two-dimensional array highest_score, where highest_score[i][b] is the highest score achievable after position i with b open brackets yet to be closed. Each element highest_score[i][b] depends only on highest_score[i-1][b-1], highest_score[i-1][b], and highest_score[i-1][b+1] (and of course A[i]), so each row highest_score[i] can be computed in O(n) time from the previous row highest_score[i-1], and the final answer is highest_score[n][0].
(Note: that uses O(n^2) space, but since each row of highest_score depends only on the previous row, you can actually do it in O(n) space by reusing rows. The asymptotic runtime complexity will be the same either way.)
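A minimal sketch of this DP in Python (the rolling-row variant; the -inf sentinel for unreachable states is my own detail):

def solve_dp(A):
    NEG = float('-inf')
    n = len(A)
    prev = [NEG] * (n + 2)   # prev[b]: best score with b brackets still open
    prev[0] = 0
    for i, a in enumerate(A):
        cur = [NEG] * (n + 2)
        for b in range(i + 2):  # at most i+1 brackets can be open after i+1 items
            best = prev[b]                        # skip A[i]
            if b >= 1 and prev[b-1] != NEG:
                best = max(best, prev[b-1] + a)   # place "(": S += A[i]
            if prev[b+1] != NEG:
                best = max(best, prev[b+1] - a)   # place ")": S -= A[i]
            cur[b] = best
        prev = cur
    return prev[0]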

Related

How to find 2 special elements in the array in O(n)

Let a1,...,an be a sequence of real numbers. Let m be the minimum of the sequence, and let M be the maximum of the sequence.
I proved that there exist two elements in the sequence, x and y, such that |x-y| <= (M-m)/n.
Now, is there an algorithm that finds two such elements in O(n) time?
I thought about sorting the sequence, but since I don't know anything about M, I cannot use radix/bucket sort or any other linear-time algorithm that I'm familiar with.
I'd appreciate any idea.
Thanks in advance.
1. First find out n, M, m. If not already given, they can be determined in O(n).
2. Then create storage for n+1 elements; we will use it as n+1 buckets of width w = (M-m)/n. The buckets cover the range of values equally: bucket 1 covers [m; m+w[, bucket 2 covers [m+w; m+2w[, ..., bucket n covers [m+(n-1)w; m+nw[ = [M-w; M[, and the (n+1)-th bucket covers [M; M+w[.
3. Now go once through all the values and sort them into the buckets according to the assigned intervals. There should be at most 1 element per bucket. If a bucket is already filled, the two elements are closer together than the width of the half-open interval, i.e. we have found elements x, y with |x-y| < w = (M-m)/n.
4. If no two such elements are found in step 3, then n of the n+1 buckets are filled with one element each, and all those elements are sorted. Go once more through the buckets and compare the contents of neighbouring buckets only, to check whether two elements fulfil the condition. Due to the bucket width, the condition cannot hold for elements in non-adjoining buckets: for those, the distance is always |x-y| > w.
(The last inequality in step 4 is also the reason why the intervals are half-open rather than closed, and why we need n+1 buckets instead of n. An alternative would be to use n buckets and make the last bucket a special case covering [M; M+w]. But O(n+1) = O(n), and using n+1 buckets is preferable to special-casing the last one.)
The running time is O(n) for step 1, 0 for step 2 (we do not actually do anything there), O(n) for step 3, and O(n) for step 4, since there is only 1 element per bucket. Altogether O(n).
This task shows that either sorting of elements which are not close together, or coarse sorting that does not resolve fine distances, can be done in O(n) instead of O(n log n). It has useful applications: numbers on computers are discrete and have finite precision. I have successfully used this sorting method for signal processing / fast sorting in real-time production code.
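For concreteness, here is a sketch of steps 1-4 in Python (the function name and the None fallback are mine; with the stronger (M-m)/n threshold, a qualifying pair is not guaranteed for every input):

def close_pair(a):
    n = len(a)
    m, M = min(a), max(a)              # step 1: O(n)
    if m == M:
        return a[0], a[1]              # all values equal
    w = (M - m) / n                    # bucket width
    buckets = [None] * (n + 1)         # step 2: n+1 buckets
    for x in a:                        # step 3: distribute the values
        i = min(int((x - m) / w), n)   # M falls into the (n+1)-th bucket
        if buckets[i] is not None:
            return buckets[i], x       # two values in one bucket: |x-y| < w
        buckets[i] = x
    prev = None                        # step 4: compare neighbouring buckets
    for x in buckets:
        if x is None:
            continue
        if prev is not None and abs(x - prev) <= w:
            return prev, x
        prev = x
    return None                        # no pair within (M-m)/n exists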
About @Damien's remark: the threshold (M-m)/(n-1) is provably achievable for every such sequence. I assumed in the answer so far that the sequence we are looking at is of a special kind for which the stronger condition holds; or at least that, for all sequences where the stronger condition holds, we would find such elements in O(n).
If this was instead a small mistake by the OP (who said to have proven the stronger condition) and we should find two elements x, y with |x-y| <= (M-m)/(n-1), we can simplify:
-- 3. Do steps 1 to 3 as above, but with n buckets and the bucket width set to w = (M-m)/(n-1). Bucket n now covers [M; M+w[.
For step 4 we would do the following alternative:
4. (alternative): All n buckets are filled with one element each. The element in bucket n has to be M, which lies at the left boundary of its interval. The distance from this element y = M to any element x in the (n-1)-th bucket is |M-x| <= w = (M-m)/(n-1), so we have found x and y which fulfil the condition, q.e.d.
First note that the real threshold should be (M-m)/(n-1).
The first step is to calculate the min m and max M of the elements, in O(n).
You calculate the value mid = (m + M)/2.
You concentrate the values less than mid at the beginning of the array, and the values greater than mid at the end.
You select the part with the larger number of elements and iterate until very few numbers are left.
If both parts have the same number of elements, you can select either of them. If the remaining part has many more than n/2 elements, then in order to maintain O(n) complexity you can keep only n/2 + 1 of them, as the goal is not to find the smallest difference, but only one difference that is small enough.
As indicated in a comment by @btilly, this solution can fail in some cases, for example with the input [0, 2.1, 2.9, 5]. To handle that, one needs to calculate the max value of the left part and the min value of the right part, and to test whether the answer is not right_min - left_max. This doesn't change the O(n) complexity, even if the solution becomes less elegant.
Complexity of the search procedure: O(n) + O(n/2) + O(n/4) + ... + O(2) = O(2n) = O(n).
Damien is correct in his comment that the correct result is that there must be x, y such that |x-y| <= (M-m)/(n-1). If you have the sequence [0, 1, 2, 3, 4], you have 5 elements, but no two elements are closer than (M-m)/n = (4-0)/5 = 4/5.
With the right threshold, the solution is easy: find M and m by scanning through the input once, then bucket the input into n-1 buckets of size (M-m)/(n-1), putting values that fall on the boundary between two buckets into both buckets. At least one bucket must contain two values, by the pigeonhole principle.

Searching for time complexity of my recursive algorithm

So I have an assignment from my school, and I only wondered what time complexity my algorithm has (not needed for the answer per se; the algorithm only needs to run in O(n) in the worst case).
The question is: Given n (n ≥ 3) distinct elements, design a divide and conquer algorithm to compute the first three smallest elements. Your algorithm should return a triple (x, y, z) such that x < y < z < the rest n − 3 input elements, and run in linear time in the worst case.
And my solution is as follows:
Solution: Since n must be ≥ 3, the given array cannot have fewer than 3 elements.
If n == 3, simply compare the elements to each other (a maximum of 3 comparisons needed):
Compare the first 2 elements to each other and label them "smallest element" and "2nd smallest element"; we'll call them "x" (smallest) and "y" (2nd smallest) from here on out. Compare "x" (smallest) to the 3rd element.
If x > 3rd element, we know that the 3rd element is the smallest in this array, therefore: "y" becomes the new "z", "x" becomes the new "y", and the 3rd element becomes the new "x". You're now left with the desired output. Return the triple (x < y < z). (2 comparisons used)
Else if x < 3rd element, make the 3rd element "z", and compare y with the 3rd element (now "z").
If y < z, do nothing; you already have a triple with the desired outcome (x < y < z).
If y > z, swap the elements. You now have a triple with the desired outcome (x < y < z).
Now that the first case (n == 3) has been handled, let's handle what should happen when n > 3.
If n > 3, split the array recursively into parts of n/2 (if n is an odd number, split the array into parts of n/2 + 1 and n/2). Keep doing the first step until the split arrays have a size of ≥ 2.
Compare the elements in each sub-array to each other and assign a lowest and a 2nd lowest value to them ("x" and "y").
When merging two sub-arrays, compare the lowest element of one sub-array to the other sub-array's lowest value.
If a1(lowest) < a2(lowest), make a1(lowest) "x", a2(lowest) the new "y", and a2(2nd lowest) the new "z", then compare a1(2nd lowest) with the highest value held so far ("z" in this case).
If a1(2nd lowest) > "z", do nothing ("x", "y" and "z" are already the three lowest values in the sub-array).
Else if a1(2nd lowest) < "z", swap the elements, making a1(2nd lowest) the new "z", then compare the new "z" with "y".
If "z" < "y", swap the elements.
No more comparisons are needed, as "x" is the lowest element from a1, which means the three elements "x", "y" and "z" are now the lowest possible from this subsection of the array.
Repeat from the merge step until reaching the highest layer of the call, and you now have a triple that satisfies the condition (x < y < z).
Sorry about the wall of text (and pseudocode). I've read about time complexity, and I understand the simple cases (a statement takes constant time, a for-loop takes time proportional to its length, etc.), but I have a hard time working out the time complexity of my own algorithm. Thanks in advance.
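For reference, a compact sketch of this kind of recursion in Python (keeping up to three candidates per half rather than two; the function name is mine). The merge step costs O(1), so T(n) = 2T(n/2) + O(1) = O(n):

def three_smallest(a, lo=0, hi=None):
    # Return the (up to) three smallest elements of a[lo:hi], sorted.
    if hi is None:
        hi = len(a)
    if hi - lo <= 3:
        return sorted(a[lo:hi])            # base case: at most 3 comparisons
    mid = (lo + hi) // 2                   # divide
    left = three_smallest(a, lo, mid)
    right = three_smallest(a, mid, hi)
    # conquer: the three smallest overall are among these <= 6 candidates
    return sorted(left + right)[:3]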

Sample number with equal probability which is not part of a set

I have a number n and a set of numbers S ⊆ [1..n] of size s (which is substantially smaller than n). I want to sample a number k ∈ [1..n] with equal probability, but the number is not allowed to be in the set S.
I am trying to solve the problem in at worst O(log n + s). I am not sure whether it's possible.
A naive approach is creating an array of the numbers from 1 to n, excluding all numbers in S, and then picking one array element. This runs in O(n) and is not an option.
Another approach may be just generating random numbers in [1..n] and rejecting those contained in S. This has no theoretical bound, as a number in the set could be drawn again and again. But on average this might be a practical solution if s is substantially smaller than n.
Say s is sorted. Generate a random number between 1 and n-s, call it k. We've chosen the k'th element of {1,...,n} - s. Now we need to find it.
Use binary search on s to find the count of elements of s that are <= k. This takes O(log |s|). Add this count to k. In doing so, we may have passed or arrived at additional elements of s. We can adjust for this by incrementing our answer for each such element that we pass, which we find by checking the next larger element of s from the point we found in our binary search.
E.g., n = 100, s = {1,4,5,22}, and our random number is 3. So our approach should return the third element of [2,3,6,7,...,21,23,24,...,100], which is 6. Binary search finds that 1 element is at most 3, so we increment to 4. Now we compare with the next larger element of s, which is 4, so we increment to 5. Repeating this finds 5 in s, so we increment to 6. We check s once more, see that 6 isn't in it, and stop.
E.g., n = 100, s = {1,4,5,22}, and our random number is 4. So our approach should return the fourth element of [2,3,6,7,...,21,23,24,...,100], which is 7. Binary search finds that 2 elements are at most 4, so we increment to 6. Now we compare with the next larger element of s, which is 5, so we increment to 7. We check s once more, see that the next element is > 7, and stop.
If we assume that "s is substantially smaller than n" means |s| <= log(n), then we will increment at most log(n) times, and in any case at most s times.
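A minimal sketch of the sorted case in Python (the function name is mine; s is a sorted list of forbidden values in 1..n):

import bisect
import random

def sample_not_in(n, s):
    k = random.randint(1, n - len(s))   # choose the k'th allowed value
    i = bisect.bisect_right(s, k)       # O(log |s|): elements of s <= k
    k += i
    while i < len(s) and s[i] <= k:     # step over further elements of s
        k += 1
        i += 1
    return k

With n = 100 and s = [1, 4, 5, 22], a draw of 3 follows exactly the trace above and returns 6.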
If s is not sorted, then we can do the following. Create an array of bits of size s. Generate k. Scan s and do two things: 1) count the number of elements < k; call this r. 2) At the same time, set the i'th bit to 1 if k+i is in s (0-indexed, so if k is in s then the first bit is set).
Now, increment k a number of times equal to r plus the number of set bits in the array with an index <= the number of times incremented.
E.g., n = 100, s = {1,4,5,22}, and our random number is 4. So our approach should return the fourth element of [2,3,6,7,...,21,23,24,...,100], which is 7. We scan s and 1) note that 1 element is below 4 (r = 1), and 2) set our array to [1, 1, 0, 0]. We increment once for r = 1 and an additional two times for the two set bits, ending up at 7.
This is O(s) time, O(s) space.
This is an O(1) solution with O(s) initial setup that works by mapping each non-allowed number > s to an allowed number <= s.
Let S be the set of non-allowed values, S(i), where i = [1 .. s] and s = |S|.
Here's a two part algorithm. The first part constructs a hash table based only on S in O(s) time, the second part finds the random value k ∈ {1..n}, k ∉ S in O(1) time, assuming we can generate a uniform random number in a contiguous range in constant time. The hash table can be reused for new random values and also for new n (assuming S ⊂ { 1 .. n } still holds of course).
To construct the hash H: first set j = 1, then iterate over S(i), the elements of S (they do not need to be sorted). If S(i) > s, first increment j while j ∈ S, then add the key-value pair (S(i), j) to the hash table and increment j once more.
To find a random value k, first generate a uniform random value in the range s + 1 to n, inclusive. If k is a key in H, then k = H(k). I.e., we do at most one hash lookup to ensure k is not in S.
Python code to generate the hash:
def substitute(S):
    H = dict()
    j = 1
    for s in S:
        if s > len(S):
            while j in S:
                j += 1
            H[s] = j
            j += 1
    return H
For the actual implementation to be O(s), one might need to convert S into something like a frozenset to ensure the membership test is O(1), and also hoist the loop-invariant len(S) out of the loop. Assuming the j in S test and the insertion into the hash (H[s] = j) are constant time, this has complexity O(s).
The generation of a random value is simply:

import random

def myrand(n, s, H):
    k = random.randint(s + 1, n)
    return H[k] if k in H else k
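For example, with n = 100 and S = {1, 4, 5, 22} (so s = 4), iterating S in the order 1, 4, 5, 22 builds H = {5: 2, 22: 3}: the values 1 and 4 are not > s, 5 is mapped to the first free j = 2, and 22 to j = 3. myrand then draws k uniformly from 5..100 and remaps 5 to 2 and 22 to 3, which yields exactly the 96 allowed values.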
If one is only interested in a single random value per S, then the algorithm can be optimized to improve the common case, while the worst case remains the same. This still requires that S be stored in a hash table that allows a constant-time "element of" test.
def rand_not_in(n, S):
    k = random.randint(len(S) + 1, n)
    if k not in S:
        return k
    j = 1
    for s in S:
        if s > len(S):
            while j in S:
                j += 1
            if s == k:
                return j
            j += 1
Optimizations are: Only generate the mapping if the random value is in S. Don't save the mapping to a hash table. Short-circuit the mapping generation when the random value is found.
Actually, the rejection method seems like the practical approach.
Generate a number in 1...n and check whether it is forbidden; regenerate until the generated number is not forbidden.
The probability of a single rejection is p = s/n.
Thus the expected number of random number generations is 1 + p + p^2 + ..., which is 1/(1-p), which in turn equals n/(n-s).
Now, if s is much less than n, or even as large as s = n/2, this expected number is at most 2.
It would take s almost equal to n to make the method infeasible in practice.
Multiply the expected time by log s if you use a tree set to check whether the number is in the set, or by just 1 (expected value, again) if it is a hash set. So the average time is O(1) or O(log s), depending on the set implementation. There is also O(s) memory for storing the set, but unless the set is given in some special way, implicitly and concisely, I don't see how that can be avoided.
(Edit: As per the comments, you do this only once for a given set.
If, additionally, we are out of luck and the set is given as a plain array or list rather than some fancier data structure, we get O(s) expected time with this approach, which still fits the O(log n + s) requirement.)
If attacks against the unbounded algorithm are a concern (and only if they truly are), the method can include a fall-back algorithm for the cases when a certain fixed number of iterations didn't provide the answer.
Similarly to how IntroSort is QuickSort but falls back to HeapSort if the recursion depth gets too high (which is almost certainly a result of an attack resulting in quadratic QuickSort behavior).
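The basic rejection loop is just a few lines; a sketch, assuming forbidden is a set or frozenset so membership tests are O(1):

import random

def rejection_sample(n, forbidden):
    while True:
        k = random.randint(1, n)
        if k not in forbidden:
            return k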
Find all numbers that are in the forbidden set and less than or equal to n-s. Call this array A.
Find all numbers that are not in the forbidden set and greater than n-s. Call this array B. This can be done in O(s) if the set is sorted.
Note that the lengths of A and B are equal, and create the mapping map[A[i]] = B[i].
Generate a number t up to n-s. If map[t] exists, return it; otherwise return t.
This works out to O(s) insertions into the map plus 1 lookup, which is O(s) on average (hash map) or O(s log s) (tree map).

How to find pair with kth largest sum?

Given two sorted arrays of numbers, we want to find the pair with the kth largest possible sum. (A pair is one element from the first array and one element from the second array). For example, with arrays
[2, 3, 5, 8, 13]
[4, 8, 12, 16]
The pairs with the largest sums are
13 + 16 = 29
13 + 12 = 25
8 + 16 = 24
13 + 8 = 21
5 + 16 = 21
8 + 12 = 20
So the pair with the 4th largest sum is (13, 8). How to find the pair with the kth largest possible sum?
Also, what is the fastest algorithm? The arrays are already sorted and have sizes M and N.
I am already aware of the O(k log k) solution using a max-heap given here.
It is also one of the favorite Google interview questions, and they demand an O(k) solution.
I've also read somewhere that there exists an O(k) solution, which I am unable to figure out.
Can someone explain the correct solution with pseudocode?
P.S.
Please DON'T post this link as an answer/comment. It DOESN'T contain the answer.
I start with a simple but not quite linear-time algorithm. We choose some value between array1[0]+array2[0] and array1[N-1]+array2[N-1]. Then we determine how many pair sums are greater than this value and how many are less. This may be done by iterating over the arrays with two pointers: the pointer into the first array is incremented when the sum is too large, and the pointer into the second array is decremented when the sum is too small. Repeating this procedure for different values and using binary search (or one-sided binary search), we can find the Kth largest sum in O(N log R) time, where N is the size of the larger array and R is the number of possible values between array1[N-1]+array2[N-1] and array1[0]+array2[0]. This algorithm has linear time complexity only when the array elements are integers bounded by a small constant.
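The counting step might look like this in Python (a sketch; both arrays sorted ascending, names are mine):

def count_pairs_greater(a, b, v):
    # Count pairs (i, j) with a[i] + b[j] > v in O(len(a) + len(b)).
    count = 0
    j = len(b)                        # first index with a[i] + b[j] > v
    for i in range(len(a)):           # a[i] increases ...
        while j > 0 and a[i] + b[j-1] > v:
            j -= 1                    # ... so the threshold only moves left
        count += len(b) - j
    return count

For the arrays in the question and v = 20, this returns 5 (the sums 29, 25, 24, 21 and 21 exceed 20).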
The previous algorithm may be improved if we stop the binary search as soon as the number of pair sums in the binary search range decreases from O(N^2) to O(N). Then we fill an auxiliary array with these pair sums (this may be done with a slightly modified two-pointer algorithm), and then use the quickselect algorithm to find the Kth largest sum in this auxiliary array. All this does not improve the worst-case complexity because we still need O(log R) binary search steps. What if we keep the quickselect part of this algorithm but (to get a proper value range) use something better than binary search?
We could estimate the value range with the following trick: take every second element from each array and try to find the pair sum of rank k/4 for these half-arrays (using the same algorithm recursively). Obviously this gives some approximation for the needed value range, and in fact a slightly improved variant of this trick gives a range containing only O(N) elements. This is proven in the following paper: "Selection in X + Y and matrices with sorted rows and columns" by A. Mirzaian and E. Arjomandi. The paper contains a detailed explanation of the algorithm, a proof, a complexity analysis, and pseudo-code for all parts of the algorithm except quickselect. If linear worst-case complexity is required, quickselect may be augmented with the median-of-medians algorithm.
This algorithm has complexity O(N). If one of the arrays is shorter than the other (M < N), we can assume that the shorter array is extended to size N with some very small elements, so that all calculations in the algorithm use the size of the larger array. We don't actually need to extract pairs with these "added" elements and feed them to quickselect, which makes the algorithm a little faster but does not improve the asymptotic complexity.
If k < N, we can ignore all array elements with index greater than k; in this case the complexity is O(k). If N < k < N(N-1), we just have better complexity than requested in the OP. If k > N(N-1), we'd better solve the opposite problem: the k'th smallest sum.
I uploaded a simple C++11 implementation to ideone. The code is not optimized and not thoroughly tested. I tried to make it as close as possible to the pseudo-code in the linked paper. This implementation uses std::nth_element, which gives linear complexity only on average (not worst-case).
A completely different approach to finding the K'th sum in linear time is based on a priority queue (PQ). One variation is to insert the largest pair into the PQ, then repeatedly remove the top of the PQ and instead insert up to two pairs (one with a decremented index in one array, the other with a decremented index in the other array), taking measures to prevent inserting duplicate pairs. The other variation is to insert all possible pairs containing the largest element of the first array, then repeatedly remove the top of the PQ and instead insert the pair with a decremented index in the first array and the same index in the second array; in this case there is no need to bother about duplicates.
The OP mentions the O(K log K) solution where the PQ is implemented as a max-heap. But in some cases (when the array elements are evenly distributed integers with limited range, and linear complexity is needed only on average, not in the worst case) we could use an O(1)-time priority queue, for example as described in this paper: "A Complexity O(1) Priority Queue for Event Driven Molecular Dynamics Simulations" by Gerald Paul. This allows O(K) expected time complexity.
The advantage of this approach is the possibility of producing the first K elements in sorted order. The disadvantages are a limited choice of array element types, a more complex and slower algorithm, and worse asymptotic complexity: O(K) > O(N).
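A sketch of the second PQ variation with an ordinary binary heap (Python; names are mine, a and b sorted ascending, sums negated for the min-heap):

import heapq

def kth_largest_sum(a, b, k):
    # Seed the heap with every pair that uses the largest element of a.
    m = len(a)
    heap = [(-(a[m-1] + y), m-1, j) for j, y in enumerate(b)]
    heapq.heapify(heap)
    for _ in range(k - 1):
        _, i, j = heapq.heappop(heap)
        if i > 0:
            # Re-insert the pair with the index into a decremented;
            # each (i, j) is reachable only once, so no duplicates occur.
            heapq.heappush(heap, (-(a[i-1] + b[j]), i-1, j))
    return -heap[0][0]

On the arrays from the question, kth_largest_sum([2,3,5,8,13], [4,8,12,16], 4) returns 21. This is O((N + K) log N) with a binary heap; the O(1) priority queue above is what brings it down to O(K) expected.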
EDIT: This does not work. I leave the answer, since apparently I am not the only one who could have this kind of idea; see the discussion below.
A counter-example is x = (2, 3, 6), y = (1, 4, 5) and k=3, where the algorithm gives 7 (3+4) instead of 8 (3+5).
Let x and y be the two arrays, sorted in decreasing order; we want to construct the K-th largest sum.
The variables are: i, the index in the first array (element x[i]); j, the index in the second array (element y[j]); and k, the "order" of the sum (k in 1..K), in the sense that S(k) = x[i] + y[j] will be the k-th greatest sum satisfying your conditions (this is the loop invariant).
Start from (i, j) equal to (0, 0): clearly, S(1) = x[0]+y[0].
for k from 1 to K-1, do:
if x[i+1]+ y[j] > x[i] + y[j+1], then i := i+1 (and j does not change) ; else j:=j+1
To see why it works, suppose you have S(k) = x[i] + y[j]. Then S(k+1) is the greatest sum which is lower than (or equal to) S(k) and such that at least one index (i or j) changes. It is not difficult to see that exactly one of i or j should change.
If i changes, the greatest sum you can construct which is lower than S(k) is obtained by setting i = i+1, because x is decreasing and all the x[i'] + y[j] with i' < i are greater than S(k). The same holds for j, showing that S(k+1) is either x[i+1] + y[j] or x[i] + y[j+1].
Therefore, at the end of the loop you have found the K-th greatest sum.
tl;dr: If you look ahead and look behind at each iteration, you can start with the end (which is highest) and work back in O(K) time.
Although the insight underlying this approach is, I believe, sound, the code below is not quite correct at present (see comments).
Let's see: first of all, the arrays are sorted. So, if the arrays are a and b with lengths M and N, and as you have arranged them, the largest items are in slots M and N respectively, the largest pair will always be a[M]+b[N].
Now, what's the second largest pair? It will have perhaps one of {a[M], b[N]} (it can't have both, because that's just the largest pair again), and at least one of {a[M-1], b[N-1]}. But we also know that if we chose a[M-1] + b[N-1], we could make one of the operands larger by choosing the higher number from the same list, so the second largest pair will have exactly one number from the last column and one from the penultimate column.
Consider the following two arrays: a = [1, 2, 53]; b = [66, 67, 68]. Our highest pair is 53+68. If we lose the smaller of those two, our pair is 68+2; if we lose the larger, it's 53+67. So, we have to look ahead to decide what our next pair will be. The simplest lookahead strategy is simply to calculate the sums of both possible pairs. That will always cost two additions and two comparisons for each transition (three, because we need to deal with the case where the sums are equal); let's call that cost Q.
At first, I was tempted to repeat that K-1 times. BUT there's a hitch: the next largest pair might actually be the other pair we can validly make from {a[M], b[N]} and {a[M-1], b[N-1]}. So, we also need to look behind.
So, let's code (python, should be 2/3 compatible):
def kth(a,b,k):
    M = len(a)
    N = len(b)
    if k > M*N:
        raise ValueError("There are only %s possible pairs; you asked for the %sth largest, which is impossible" % (M*N, k))
    (ia,ib) = M-1,N-1 #0 based arrays
    # we need this for lookback
    nottakenindices = (0,0) # could be any value
    nottakensum = float('-inf')
    for i in range(k-1):
        optionone = a[ia]+b[ib-1]
        optiontwo = a[ia-1]+b[ib]
        biggest = max((optionone,optiontwo))
        #first deal with look behind
        if nottakensum > biggest:
            if optionone == biggest:
                newnottakenindices = (ia,ib-1)
            else:
                newnottakenindices = (ia-1,ib)
            ia,ib = nottakenindices
            nottakensum = biggest
            nottakenindices = newnottakenindices
        #deal with case where indices hit 0
        elif ia <= 0 and ib <= 0:
            ia = ib = 0
        elif ia <= 0:
            ib -= 1
            ia = 0
            nottakensum = float('-inf')
        elif ib <= 0:
            ia -= 1
            ib = 0
            nottakensum = float('-inf')
        #lookahead cases
        elif optionone > optiontwo:
            #then choose the first option as our next pair
            nottakensum,nottakenindices = optiontwo,(ia-1,ib)
            ib -= 1
        elif optionone < optiontwo: # choose the second
            nottakensum,nottakenindices = optionone,(ia,ib-1)
            ia -= 1
        #next two cases apply if options are equal
        elif a[ia] > b[ib]: # drop the smallest
            nottakensum,nottakenindices = optiontwo,(ia-1,ib)
            ib -= 1
        else: # might be equal or not - we can choose arbitrarily if equal
            nottakensum,nottakenindices = optionone,(ia,ib-1)
            ia -= 1
    #+2 - one for zero-based, one for skipping the 1st largest
    data = (i+2,a[ia],b[ib],a[ia]+b[ib],ia,ib)
    narrative = "%sth largest pair is %s+%s=%s, with indices (%s,%s)" % data
    print (narrative) #this will work in both versions of python
    if ia <= 0 and ib <= 0:
        raise ValueError("Both arrays exhausted before Kth (%sth) pair reached" % data[0])
    return data, narrative
For those without python, here's an ideone: http://ideone.com/tfm2MA
At worst, we have 5 comparisons in each iteration, and K-1 iterations, which means that this is an O(K) algorithm.
Now, it might be possible to exploit information about differences between values to optimise this a little bit, but this accomplishes the goal.
Here's a reference implementation (not O(K), but will always work, unless there's a corner case with pairs that have equal sums):
import itertools

def refkth(a,b,k):
    # all pairs, decorated with their indices, sorted by sum, largest first
    pairs = sorted(itertools.product(enumerate(a), enumerate(b)),
                   key=lambda p: p[0][1] + p[1][1], reverse=True)
    (rightia, righta), (rightib, rightb) = pairs[k-1]
    data = (k, righta, rightb, righta+rightb, rightia, rightib)
    narrative = "%sth largest pair is %s+%s=%s, with indices (%s,%s)" % data
    print (narrative) #this will work in both versions of python
    return data, narrative
This calculates the Cartesian product of the two arrays (i.e. all possible pairs), sorts them by sum (largest first), and takes the kth element. The enumerate function decorates each item with its index.
The max-heap algorithm in the other question is simple, fast and correct. Don't knock it. It's really well explained too. https://stackoverflow.com/a/5212618/284795
Maybe there isn't any O(k) algorithm. That's okay; O(k log k) is almost as fast.
If the last two solutions were at (a1, b1) and (a2, b2), then it seems to me there are only four candidate solutions: (a1-1, b1), (a1, b1-1), (a2-1, b2), (a2, b2-1). This intuition could be wrong. Surely there are at most four candidates for each coordinate, and the next highest is among the 16 pairs (a in {a1, a2, a1-1, a2-1}, b in {b1, b2, b1-1, b2-1}). That's O(k).
(No it's not, still not sure whether that's possible.)
[2, 3, 5, 8, 13]
[4, 8, 12, 16]
Merge the 2 arrays and note down the indexes in the sorted array. Here is what the index array looks like (starting from 1, not 0):
[1, 2, 4, 6, 8]
[3, 5, 7, 9]
Now start from the end and make tuples: sum the elements in each tuple and pick the kth largest sum.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public static List<List<Integer>> optimization(int[] nums1, int[] nums2, int k) {
    // 2 * O(n log(n))
    Arrays.sort(nums1);
    Arrays.sort(nums2);
    List<List<Integer>> results = new ArrayList<>(k);
    int endIndex = 0;
    // Find the first number whose square is at least k
    for (int i = 1; i <= k; i++) {
        if (i * i >= k) {
            endIndex = i;
            break;
        }
    }
    // The following iteration provides at most endIndex^2 elements, and both arrays are
    // in ascending order, so the k smallest pairs can be found in this iteration. To
    // flatten the nested loop, refer to
    // https://stackoverflow.com/questions/7457879/algorithm-to-optimize-nested-loops
    for (int i = 0; i < endIndex * endIndex; i++) {
        int m = i / endIndex;
        int n = i % endIndex;
        List<Integer> item = new ArrayList<>(2);
        item.add(nums1[m]);
        item.add(nums2[n]);
        results.add(item);
    }
    results.sort(Comparator.comparing(pair -> pair.get(0) + pair.get(1)));
    return results.stream().limit(k).collect(Collectors.toList());
}
Key to eliminating O(n^2):
Avoid the Cartesian product (or 'cross join'-like operation) of both arrays, which means flattening the nested loop.
Downsize the iteration over the 2 arrays.
So:
Sort both arrays (Arrays.sort offers O(n log(n)) performance according to the Java doc).
Limit the iteration range to a size just big enough to support searching for the k smallest pairs.

a "divide and conquer" algorithm assignment

Now I have N different integers, and I need to find an interval that has the most numbers whose values are between the endpoints of the interval, in O(N log N) time. I call it a "divide and conquer" problem because it is in my final exam's "divide and conquer" category. I have been thinking about it for 2 weeks and have done a lot of experiments; none of them are right (compared to a brute-force algorithm). Could someone help me?
examples:
8,1,3,4,7. The answer is 1-7.
2,6,5,4,9,8. The answer is 2-9 or 2-8.
I think the word "interval" doesn't express my meaning. I mean to find a subsequence of the array that has the most numbers whose values are between the endpoints of the subsequence. E.g. 1: "1,3,4,7" has two such numbers (3 and 4); e.g. 2: both "2,6,5,4,9" and "2,6,5,4,9,8" have three such numbers (6, 5 and 4).
Here is my code (O(n^2)). @Vaughn Cato: I use this to compare against your code.
#! /usr/bin/env python
#coding=utf-8
import itertools

def n2(numbers):
    a = [0]*len(numbers)
    ans = -1
    l = 0
    r = 0
    for j in range(1,len(numbers)):
        t = 0
        for i in range(j-1,-1,-1):
            if numbers[i]<numbers[j]:
                x = t - a[i]
                if x>ans:
                    ans = x
                    l = i
                    r = j
                t += 1
            else:
                a[i] += 1
    return (numbers[l],numbers[r],ans)

def countBetween(numbers,left,right):
    cnt = 0
    for i in range(left+1,right):
        if numbers[left]<numbers[i]<numbers[right]:
            cnt += 1
    return cnt

for numbers in itertools.permutations(range(5)):
    ans1 = n2(numbers)
    ans2 = longestInterval(numbers)
    if ans1[2] != ans2[2]:
        print ans1, ans2, numbers
NOTE: This doesn't actually work, but it might give you some ideas.
Think of it this way:
Let X be the array of numbers.
Let s be the index of the start of the subsequence.
Let e be the index of the end of the subsequence.
If you pick an arbitrary partition index p, then the longest subsequence either goes across this partition or falls entirely to the left or right of it. If the longest subsequence goes across the partition, then s < p <= e. To find s, find the index with the most numbers between s and p which are greater than X[s]. To find e, find the index with the most numbers between p and e which are less than X[e].
You can recursively check the left and right sides to see if you can find a longer subsequence.
Finding which index has the most greater numbers to its right, or the most lesser numbers to its left, can be done in linear time if you have the indices of X sorted by value:
To find the start index, begin with the first index in your sorted list of indices and call it the best so far. If the next index is greater than the best so far, then any future index will need to be even farther to the left than our current best to become the new best, so we subtract one from our best index (but remember what the best index really was). If the next index is to the left of our best index, make it the best index. Keep repeating this process for each of the indices in order.
You can do a similar procedure to find the best index for the end on the right side.
The only remaining trick is to maintain the sorted list of indices for whatever range we are working on. This can be done by sorting the entire set of numbers initially and finding their indices, then at each level of the recursion we can split the sorted indices into two sublists in linear time.
Here is a python implementation of the idea:
# Find the index from the given indices that has the most numbers to the
# right of it which are greater in value. The indices are sorted by
# the value of the numbers at that index. We don't even need to know
# what the numbers are.
def longestLowerSequence(indices):
    best_index = indices[0]
    target_index = best_index
    for i in range(0,len(indices)):
        if indices[i] < target_index:
            best_index = indices[i]
            target_index = best_index
        else:
            target_index -= 1
    return best_index

# Find the index from the given indices that has the most numbers to the
# left of it which are less in value.
def longestUpperSequence(indices):
    n = len(indices)
    best_index = indices[n-1]
    target_index = best_index
    for i in range(0,n):
        if indices[n-1-i] > target_index:
            best_index = indices[n-1-i]
            target_index = best_index
        else:
            target_index += 1
    return best_index

# Return the pair of indices which has the most values between it.
def longestRangeFromSortedIndices(numbers,indices,begin,end):
    assert end > begin
    if end-begin <= 2:
        return (indices[begin],indices[end-1])
    assert type(indices) is list
    partition = (begin+end)/2
    left_indices = filter(lambda index: index < partition,indices)
    right_indices = filter(lambda index: index >= partition,indices)
    assert len(left_indices) > 0
    assert len(right_indices) > 0
    left = longestLowerSequence(left_indices)
    right = longestUpperSequence(right_indices)
    left_range = longestRangeFromSortedIndices(numbers,indices,begin,partition)
    right_range = longestRangeFromSortedIndices(numbers,indices,partition,end)
    best_size = countBetween(numbers,left,right)
    best_range = (left,right)
    left_size = countBetween(numbers,left_range[0],left_range[1])
    right_size = countBetween(numbers,right_range[0],right_range[1])
    if left_size > best_size:
        best_size = left_size
        best_range = left_range
    if right_size > best_size:
        best_size = right_size
        best_range = right_range
    return best_range

def sortedIndices(numbers):
    return sorted(range(len(numbers)),key=lambda i: numbers[i])

def longestInterval(numbers):
    indices = sortedIndices(numbers)
    longest_range = longestRangeFromSortedIndices(numbers,indices,0,len(numbers))
    return (numbers[longest_range[0]],numbers[longest_range[1]])
I believe this is a variant of the maximum subarray problem.
It can be solved using divide and conquer as follows:
1. Divide the integer array into two equal halves.
2. Compute the results R1 and R2 on the two halves respectively (R1, R2 are the lengths of the maximum intervals for each half; the start and end points are stored as well).
3. Obtain the minimum integer MIN from the first half and the maximum integer MAX from the second half, and compute result R3 as the distance from MIN to MAX in the original array (MIN and MAX are the start and end points respectively).
4. Return the largest of R1, R2 and R3 as the result of the entire problem.
Why this works:
The largest interval comes from one of three cases: 1) the first half, 2) the second half, 3) across the two halves. Thus, computing the largest of the three yields the optimal result.
Time complexity:
Solving the recurrence
T(n) = 2T(n/2) + O(n)
gives T(n) = O(n log n). Note: as the recurrence indicates, we solve two subproblems of half size (2T(n/2)) and find the minimum and maximum integers in the two halves in linear time (O(n)).
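A rough sketch of this recursion in Python (the names and tuple layout are mine; that the cross case alone suffices is this answer's claim, taken as given here):

def max_interval(nums, lo=0, hi=None):
    # Return (count, i, j): the best interval inside nums[lo:hi], where
    # count is the number of values strictly between nums[i] and nums[j].
    if hi is None:
        hi = len(nums)
    if hi - lo < 2:
        return (0, lo, hi - 1)
    mid = (lo + hi) // 2
    r1 = max_interval(nums, lo, mid)     # case 1: first half
    r2 = max_interval(nums, mid, hi)     # case 2: second half
    # case 3: MIN from the first half to MAX from the second half
    i = min(range(lo, mid), key=lambda p: nums[p])
    j = max(range(mid, hi), key=lambda p: nums[p])
    between = sum(1 for p in range(i + 1, j) if nums[i] < nums[p] < nums[j])
    r3 = (between, i, j)
    return max(r1, r2, r3)               # compare by count first

For example, max_interval([8, 1, 3, 4, 7]) reports the interval from 1 to 7, with two values (3 and 4) between its endpoints.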
