Find kth smallest element in matrix - C++11

I have a problem/assignment about finding the k-th smallest element in a matrix.
I have a matrix A (N x N), with A[i][j] = i*i + j*j and N = 10^6. The upper time limit is 1 sec, using C++11 in Visual Studio 2010.
I have a solution where I sort the matrix elements and pick the k-th one, but it takes O(N^2 * log(N^2) + K) time and there is not enough memory.
Any suggestions?

Related

Maximum Score a Mathematician can score

For an array A of n integers, a mathematician can perform the following move on the array:
1. Choose an index i (0 <= i < length(A)) and add A[i] to the score.
2. Discard either the left partition (i.e. A[0...i-1]) or the right partition (i.e. A[i+1...length(A)-1]); the discarded partition may be empty. The selected partition becomes the new value of A and is used for subsequent operations.
Starting from an initial score of 0, the mathematician wishes to find the maximum score achievable after K moves.
Example:
A = [4,6,-10,-1,10,-20], K = 4
Maximum score is 19.
Explanation:
- Select A[4] (0-based indexing) and keep the left subarray. Now the score is 10 and A = [4,6,-10,-1].
- Select A[0] and keep the right subarray. Now the score is 10+4=14 and A = [6,-10,-1].
- Select A[0] and keep the right subarray. Now the score is 14+6=20 and A = [-10,-1].
- Select A[1] and keep the right subarray. Now the score is 20-1=19 and A = [].
So, after K=4 moves, the maximum score is 19.
I tried a dynamic programming solution with the following subproblem and recurrence relation:
- opt(i,j,k) = the maximum score possible using elements from index i to j in k moves
- opt(i,j,k) = max over l in [i, j] of ( a[l] + max(opt(i,l-1,k-1), opt(l+1,j,k-1)) )
The complexity of the above DP solution is O(n^3 * k).
Can you help me with a better solution?
Let M be the set of the K largest values in A. It's obvious that the maximum achievable score is the sum of all the elements of M. Note that it's always possible to get such a score: the mathematician can first find M and then go through the array, repeatedly selecting the leftmost value in A that belongs to M and discarding the left part of the array. This proves that the sum of M is the answer.
You can use Quickselect to find M in O(n) time on average. If you want to avoid the O(n^2) worst case, you can instead find M with a min-heap of size K that stores the K largest numbers seen so far as you iterate over A. This leads to O(n * log(K)) time complexity.
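For concreteness, here is a minimal C++ sketch of the min-heap variant (the function name and signature are my own):

#include <functional>
#include <queue>
#include <vector>

// Sum of the K largest values of A, found with a min-heap of size K.
// O(n * log(K)) time, O(K) extra space.
long long maxScore(const std::vector<int>& A, int K) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> heap; // min-heap
    for (int x : A) {
        heap.push(x);
        if ((int)heap.size() > K)
            heap.pop(); // evict the smallest of the K+1 candidates
    }
    long long sum = 0;
    while (!heap.empty()) {
        sum += heap.top();
        heap.pop();
    }
    return sum;
}

For the example above, maxScore({4, 6, -10, -1, 10, -20}, 4) returns 10 + 6 + 4 - 1 = 19.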

Least cost increasing subsequence

Say we have an array A that contains N integers. We want to find a minimum-cost increasing subsequence (not necessarily strictly increasing) that starts at position 1 and ends at position N. The total cost of a subsequence is the total cost of the transitions between consecutive elements of the subsequence; the cost of transitioning from position j to position i, where i >= j, is given by the matrix COST[i][j]. It is guaranteed that some increasing subsequence from position 1 to position N exists. Values in the array may be very large.
For example:
N = 5
A = [0,3,2,3,3]
Cost =
[[0,INF,INF,INF,INF],
[3,0,INF,INF,INF],
[3,INF,0,INF,INF],
[5,2,2,0,INF],
[6,0,3,1,0]]
The least-cost increasing subsequence is (A[1], A[2], A[5]), i.e. (0,3,3).
The cost is COST[2][1] + COST[5][2] = 3 + 0 = 3.
So far I have modified the traditional O(n^2) DP solution for LIS: initialize dp[1] to 0 and every other dp[i] to infinity, then for each position i loop over all previous positions j that can extend the subsequence and keep the minimum of dp[j] + COST[i][j].
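Here is a minimal C++ sketch of that O(n^2) DP, using 0-based indexing internally (the function name is mine):

#include <algorithm>
#include <limits>
#include <vector>

// dp[i] = least cost of an increasing subsequence starting at position 0
// and ending at position i. The answer is dp[N-1].
// O(N^2) time, O(N) extra space.
long long leastCost(const std::vector<long long>& A,
                    const std::vector<std::vector<long long>>& COST) {
    const long long INF = std::numeric_limits<long long>::max() / 2;
    int n = A.size();
    std::vector<long long> dp(n, INF);
    dp[0] = 0;
    for (int i = 1; i < n; ++i)
        for (int j = 0; j < i; ++j)
            if (A[j] <= A[i] && dp[j] < INF)            // j can extend to i
                dp[i] = std::min(dp[i], dp[j] + COST[i][j]);
    return dp[n - 1];
}

On the example above (shifted to 0-based indices) this returns COST[1][0] + COST[4][1] = 3 + 0 = 3.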
Now I want to improve this solution to O(n log n). I know the regular LIS problem can be solved with an array and binary search, but I have been unable to adapt that approach to this problem.

Find kth number in sum array

Given an array A with N elements, I need to find a pair (i, j) with i != j such that, if we write down the sums A[i]+A[j] for all pairs (i, j) and sort them, the answer is the sum at the K-th position.
Example: let N=4, A=[1,2,3,4] and K=3; then the answer is 5, as the sorted sum array is [3,4,5,5,6,7].
I can't enumerate all pairs (i, j), since N can be up to 100000. Please help me solve this problem.
I mean something like this:

int len = N * (N - 1) / 2;          // number of pairs with i < j
std::vector<int> sum(len);
int count = 0;
for (int i = 0; i < N; i++) {
    for (int j = i + 1; j < N; j++) {
        sum[count] = A[i] + A[j];
        count++;
    }
}
// Then just sort and find the kth element.
We can't go with this approach
A solution based on the fact that K <= 50: take the K + 1 smallest elements of the array, in sorted order. Now we can just try all their pairwise combinations. Proof of correctness: assume that a pair (i, j) with j > K + 1 is the answer. But there are K pairs with the same or smaller sum: (1, 2), (1, 3), ..., (1, K + 1). Thus, it cannot be the K-th pair.
It is possible to achieve O(N + K^2) time complexity by choosing the K + 1 smallest numbers with a quickselect algorithm (it is possible to do even better, but it is not required). You can also just sort the array and get an O(N * log N + K^2 * log K) complexity.
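A minimal C++ sketch of this idea, with 1-based k as in the question (the function name is mine, and it assumes k does not exceed the number of available pairs):

#include <algorithm>
#include <vector>

// kth (1-based) smallest pairwise sum; only the k+1 smallest elements
// can participate in the first k sums.
int kthPairSum(std::vector<int> A, int k) {
    int m = std::min((int)A.size(), k + 1);
    // Move the m smallest elements to the front (quickselect), O(N).
    std::nth_element(A.begin(), A.begin() + (m - 1), A.end());
    std::vector<int> sums;
    for (int i = 0; i < m; i++)
        for (int j = i + 1; j < m; j++)
            sums.push_back(A[i] + A[j]);
    std::sort(sums.begin(), sums.end()); // O(k^2 * log k)
    return sums[k - 1];
}

For A = [1,2,3,4] and k = 3 this returns 5, matching the example.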
I assume that you got this question from http://www.careercup.com/question?id=7457663.
If k is close to 0 then the accepted answer to How to find kth largest number in pairwise sums like setA + setB? can be adapted quite easily to this problem and be quite efficient. You need O(n log(n)) to sort the array, O(n) to set up a priority queue, and then O(k log(k)) to iterate through the elements. The reversed solution is also efficient if k is near n*n - n.
If k is close to n*n/2 then that won't be very good. But you can adapt the pivot approach of http://en.wikipedia.org/wiki/Quickselect to this problem. First, in time O(n log(n)), you can sort the array. In time O(n) you can set up a data structure representing the various contiguous ranges of columns. Then you'll need to select pivots O(log(n)) times. (Remember, log(n*n) = O(log(n)).) For each pivot, you can do a binary search in each column to figure out where the pivot splits it, in time O(log(n)) per column, for a total cost of O(n log(n)) over all columns.
The resulting algorithm will be O(n log(n) * log(n)) = O(n log^2(n)).
Update: I do not have time to do the finger exercise of supplying code. But I can outline some of the classes you might have in an implementation.
The implementation will be a bit verbose, but that is sometimes the cost of a good general-purpose algorithm.
ArrayRangeWithAddend. This represents a range of an array with one value added to every element. It has an array (a reference or pointer, so the underlying data can be shared between objects), a start and an end for the range, and a shiftValue to add to every element in the range.
It should have a constructor, a method giving its size, a method partition(n) that splits it into the sub-range less than n, the count of elements equal to n, and the sub-range greater than n, and a method value(i) giving the i-th value.
ArrayRangeCollection. This is a collection of ArrayRangeWithAddend objects. It should have methods giving its size and picking a random element, and a method partition(n) that splits it into an ArrayRangeCollection below n, the count of elements equal to n, and an ArrayRangeCollection above n. In the partition method it is good not to include ArrayRangeWithAddend objects of size 0.
Now your main program can sort the array, and create an ArrayRangeCollection covering all pairs of sums that you are interested in. Then the random and partition method can be used to implement the standard quickselect algorithm that you will find in the link I provided.
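Since the answer stops short of code, here is a minimal sketch of just the ArrayRangeWithAddend piece, under the assumption that the shared array is sorted (member names follow the outline above):

#include <algorithm>
#include <vector>

// A contiguous range [start, end) of a sorted array, with shiftValue
// conceptually added to every element.
struct ArrayRangeWithAddend {
    const std::vector<long long>* data; // shared, sorted ascending
    int start, end;
    long long shiftValue;

    int size() const { return end - start; }
    long long value(int i) const { return (*data)[start + i] + shiftValue; }

    // Split around pivot n: the sub-range of values < n, the count of
    // values == n, and the sub-range of values > n. Two binary searches.
    void partition(long long n, ArrayRangeWithAddend& less,
                   int& equal, ArrayRangeWithAddend& greater) const {
        std::vector<long long>::const_iterator lo =
            std::lower_bound(data->begin() + start, data->begin() + end, n - shiftValue);
        std::vector<long long>::const_iterator hi =
            std::upper_bound(data->begin() + start, data->begin() + end, n - shiftValue);
        less    = {data, start, (int)(lo - data->begin()), shiftValue};
        equal   = (int)(hi - lo);
        greater = {data, (int)(hi - data->begin()), end, shiftValue};
    }
};

An ArrayRangeCollection would then hold one such range per row of pair sums (shiftValue = A[i] for row i) and drive the standard quickselect loop.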
Here is how to do it (in C++-style pseudo-code). I have now confirmed that it works correctly.

// A is the original array, such as A = {1, 2, 3, 4}
// k (an integer, 0-based) is the position in the 'sum' array to find
int N = A.size();
// first we find i
int i = -1;
int nl = N;
int k2 = k;
while (k2 >= 0) {
    i++;
    nl--;       // nl = number of sums remaining in row i
    k2 -= nl;
}
// then we find j
int j = k2 + nl + i + 1;
// now compute the sum at index position k
int kSum = A[i] + A[j];
EDIT:
I have now tested that this works. I had to fix some parts... basically, the k input argument should use 0-based indexing. (The OP seems to use 1-based indexing.)
EDIT 2:
I'll try to explain my theory then. I began with the concept that the sum array should be visualised as a 2D jagged array (diminishing in width as the height increases), with the coordinates (as mentioned in the OP) being i and j. So for an array such as [1,2,3,4,5] the sum array would be conceived as this:
3,4,5,6,
5,6,7,
7,8,
9.
The top row contains all values where i equals 0; the second row is where i equals 1; and so on. To find the value of j we do the same, but in the column direction.
... Sorry I cannot explain this any better!

Optimizing a DP on Intervals/Points

The problem is quite easy to solve naively in O(n^3) time. It is something like this:
There are N unique points on a number line. You want to cover every single point on the number line with some set of intervals. You can place an interval anywhere; it costs B + M*X to create an interval, where B is the initial cost of creating an interval, X is half the length of the interval, and M is the cost per unit of length. You want to find the minimum cost to cover every single point.
Sample data:
Points = {0, 7, 100}
B = 20
M = 5
So the optimal solution is 57.50, because you can build an interval [0,7] at cost 20 + 3.5*5 = 37.50 and an interval [100,100] at cost 20 + 0*5 = 20, which adds up to 57.50.
I have an O(n^3) solution, where DP[i][j] is the minimum cost to cover the points from i to j, so the answer is DP[1][N]. For every pair (i,j) I just iterate over k in {i, ..., j-1} and combine DP[i][k] + DP[k+1][j].
However, this solution is O(n^3) (structured a bit like matrix-chain multiplication, I think), so it's too slow for N > 2000. Is there any way to optimize it?
Here's a quadratic solution:
1. Sort all the points by coordinate. Call the points p.
2. Keep an array A such that A[k] is the minimum cost to cover the first k points. Set A[0] to zero and all other elements to infinity.
3. For each k from 0 to n-1 and for each l from k+1 to n, set A[l] = min(A[l], A[k] + B + M*(p[l-1] - p[k])/2).
You should be able to convince yourself that, at the end, A[n] is the minimum cost to cover all n points. (We considered all possible minimal covering intervals and we did so from "left to right" in a certain sense.)
You can speed this up so that it runs in O(n log n) time; replace step 3 with the following:
Set A[1] = B. For each k from 2 to n, set A[k] = A[k-1] + min(M/2 * (p[k-1] - p[k-2]), B).
The idea here is that we either extend the previous interval to cover the next point or we end the previous interval at p[k-2] and begin a new one at p[k-1]. And the only thing we need to know to make that decision is the distance between the two points.
Notice also that, when computing A[k], I only needed the value of A[k-1]. In particular, you don't need to store the whole array A; only its most recent element.
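A compact C++ sketch of the O(n log n) version (the function name is mine):

#include <algorithm>
#include <vector>

// Minimum cost to cover all points, where an interval costs
// B + M * (half its length). O(n log n), dominated by the sort.
double minCoverCost(std::vector<double> points, double B, double M) {
    std::sort(points.begin(), points.end());
    double cost = B; // a (possibly zero-length) interval for the first point
    for (int k = 1; k < (int)points.size(); ++k) {
        // Either extend the previous interval across the gap,
        // or end it and start a fresh one at points[k].
        cost += std::min(M / 2 * (points[k] - points[k - 1]), B);
    }
    return cost;
}

For the sample data, minCoverCost({0, 7, 100}, 20, 5) returns 20 + 17.5 + 20 = 57.5.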

Sum-subset with a fixed subset size

The sum-subset problem states:
Given a set of integers, is there a non-empty subset whose sum is zero?
This problem is NP-complete in general. I'm curious if the complexity of this slight variant is known:
Given a set of integers, is there a subset of size k whose sum is zero?
For example, if k = 1, you can do a binary search to find the answer in O(log n). If k = 2, then you can get it down to O(n log n) (e.g. see Find a pair of elements from an array whose sum equals a given number). If k = 3, then you can do O(n^2) (e.g. see Finding three elements in an array whose sum is closest to a given number).
Is there a known bound that can be placed on this problem as a function of k?
As motivation, I was thinking about this question How do you partition an array into 2 parts such that the two parts have equal average? and trying to determine if it is actually NP-complete. The answer lies in whether or not there is a formula as described above.
Barring a general solution, I'd be very interested in knowing an optimal bound for k=4.
For k=4: space complexity O(n), time complexity O(n^2 * log(n)).
Sort the array. Starting from 2 smallest and 2 largest elements, calculate all lesser sums of 2 elements (a[i] + a[j]) in the non-decreasing order and all greater sums of 2 elements (a[k] + a[l]) in the non-increasing order. Increase lesser sum if total sum is less than zero, decrease greater one if total sum is greater than zero, stop when total sum is zero (success) or a[i] + a[j] > a[k] + a[l] (failure).
The trick is to iterate through all the indexes i and j in such a way that (a[i] + a[j]) never decreases; for k and l, (a[k] + a[l]) should never increase. A priority queue helps to do this (a code sketch follows these steps):
1. Put key = (a[i] + a[j]), value = (i = 0, j = 1) into the priority queue.
2. Pop (sum, i, j) from the priority queue.
3. Use sum in the above algorithm.
4. Put (a[i+1] + a[j]), (i+1, j) and (a[i] + a[j+1]), (i, j+1) into the priority queue, but only if these pairs were not already used. To keep track of used pairs, maintain an array of the maximal used j for each i. It is enough to use only values of j that are greater than i.
5. Continue from step 2.
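Here is a minimal C++ sketch of the ascending enumerator described by these steps (class and member names are mine); the descending enumerator for (k, l) is the mirror image, with a max-heap and indices moving left:

#include <functional>
#include <queue>
#include <tuple>
#include <vector>

typedef std::tuple<long long, int, int> Entry; // (sum, i, j)

// Enumerates pairs (i, j), i < j, of a sorted array a in non-decreasing
// order of a[i] + a[j]. Each pop() is O(log n).
struct AscendingPairSums {
    const std::vector<long long>& a;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;
    std::vector<int> maxJ; // largest j already queued for each i

    explicit AscendingPairSums(const std::vector<long long>& sorted)
        : a(sorted), maxJ(sorted.size(), -1) {
        if (a.size() >= 2) push(0, 1);        // step 1
    }

    void push(int i, int j) {                 // step 4's "not already used" check
        if (i < j && j < (int)a.size() && j > maxJ[i]) {
            pq.push(Entry(a[i] + a[j], i, j));
            maxJ[i] = j;
        }
    }

    Entry pop() {                             // steps 2 and 4
        Entry top = pq.top();
        pq.pop();
        push(std::get<1>(top), std::get<2>(top) + 1); // (i, j+1)
        push(std::get<1>(top) + 1, std::get<2>(top)); // (i+1, j)
        return top;
    }
};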
For k>4
If space complexity is limited to O(n), I cannot find anything better than brute force over k-4 of the values combined with the above algorithm for the remaining 4 values, for a time complexity of O(n^(k-2) * log(n)).
For very large k integer linear programming may give some improvement.
Update
If n is very large (on the same order as the maximum integer value), it is possible to implement an O(1) priority queue, improving the complexities to O(n^2) and O(n^(k-2)).
If n >= k * INT_MAX, a different algorithm with O(n) space complexity is possible: precalculate a bitset of all possible sums of k/2 values, and use it to check the sums of the other k/2 values. Time complexity is O(n^(ceil(k/2))).
The problem of determining whether 0 in W + X + Y + Z = {w + x + y + z | w in W, x in X, y in Y, z in Z} is basically the same except for not having annoying degenerate cases (i.e., the problems are inter-reducible with minimal resources).
This problem (and thus the original for k = 4) has an O(n^2 log n)-time, O(n)-space algorithm. The O(n log n)-time algorithm for k = 2 (to determine whether 0 is in A + B) accesses A in sorted order and B in reverse sorted order. Thus all we need is an O(n)-space iterator for A = W + X, which can be reused symmetrically for B = Y + Z. Let W = {w_1, ..., w_n} in sorted order. For all x in X, insert a key-value item (w_1 + x, (1, x)) into a priority queue. Repeatedly remove the min element (w_i + x, (i, x)) and insert (w_(i+1) + x, (i+1, x)).
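A sketch of that O(n)-space iterator in C++ (naming is mine, and it stores an index into X rather than the value x; the reverse iterator for B = Y + Z is symmetric):

#include <functional>
#include <queue>
#include <utility>
#include <vector>

typedef std::pair<long long, std::pair<int, int>> Item; // (W[i] + X[j], (i, j))

// Yields the sums of W + X in non-decreasing order using O(n) space:
// one queue entry per element of X, advancing its index into W on pop.
struct SumIterator {
    const std::vector<long long> &W, &X; // both sorted ascending
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;

    SumIterator(const std::vector<long long>& w, const std::vector<long long>& x)
        : W(w), X(x) {
        for (int j = 0; j < (int)X.size(); ++j)
            pq.push(Item(W[0] + X[j], std::make_pair(0, j)));
    }

    long long next() { // returns the smallest remaining sum
        Item top = pq.top();
        pq.pop();
        int i = top.second.first, j = top.second.second;
        if (i + 1 < (int)W.size())
            pq.push(Item(W[i + 1] + X[j], std::make_pair(i + 1, j)));
        return top.first;
    }
};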
Question that is very similar:
Is this variant of the subset sum problem easier to solve?
It's still NP-complete.
If it were not, subset-sum would also be in P: it could be decided as F(1) | F(2) | ... | F(n), where F(k) answers your fixed-size variant. That takes O(F(1) + F(2) + ... + F(n)) time, which would still be polynomial; this contradicts subset-sum being NP-complete.
Note that if you have certain bounds on the inputs you can achieve polynomial time.
Also note that the brute-force runtime can be calculated with binomial coefficients.
A solution for k=4 in O(n^2 * log(n)) (a sketch follows these steps):
Step 1: Calculate all pairwise sums and sort the list. There are n(n-1)/2 sums, so the complexity is O(n^2 * log(n)). Keep track of which two individual elements make up each sum.
Step 2: For each element in the above list, search for its complement and make sure the two sums don't share any individual elements. There are n^2 searches, each with complexity O(log(n)).
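A C++ sketch of those two steps (names are mine; note that the linear scan over tied sums can degrade the bound on inputs with many equal pairwise sums):

#include <algorithm>
#include <vector>

struct PairSum { long long sum; int i, j; };

static bool bySum(const PairSum& x, const PairSum& y) { return x.sum < y.sum; }

// k = 4: is there a subset of 4 distinct positions summing to zero?
// O(n^2 log n) time and O(n^2) space in the typical case.
bool fourSumZero(const std::vector<long long>& a) {
    int n = a.size();
    std::vector<PairSum> ps;
    for (int i = 0; i < n; ++i)                  // step 1: all pairwise sums
        for (int j = i + 1; j < n; ++j)
            ps.push_back(PairSum{a[i] + a[j], i, j});
    std::sort(ps.begin(), ps.end(), bySum);
    for (const PairSum& p : ps) {                // step 2: find complements
        PairSum key = {-p.sum, 0, 0};
        std::vector<PairSum>::const_iterator it =
            std::lower_bound(ps.begin(), ps.end(), key, bySum);
        // Scan the run of equal sums for a pair with disjoint indices.
        for (; it != ps.end() && it->sum == -p.sum; ++it)
            if (it->i != p.i && it->i != p.j && it->j != p.i && it->j != p.j)
                return true;
    }
    return false;
}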
EDIT: The space complexity of the original algorithm is O(n^2). The space complexity can be reduced to O(1) by simulating a virtual 2D matrix (or O(n), if you count the space needed to store a sorted version of the array).
First, about the 2D matrix: sort the numbers and create a matrix X of pairwise sums. Every row and every column of this matrix is sorted. To search for a value in the matrix, search along the diagonal. If the value lies between X[i,i] and X[i+1,i+1], you can basically halve the search space, recursing into the two submatrices X[i:N, 0:i] and X[0:i, i:N]. The resulting search algorithm is O(log^2 n) (I AM NOT VERY SURE. CAN SOMEBODY CHECK IT?).
Now, instead of using a real matrix, use a virtual matrix where X[i,j] is calculated as needed instead of being precomputed.
Resulting time complexity: O((n log n)^2).
PS: In the following link, it says the complexity of searching a 2D sorted matrix is O(n). If that is true (i.e., O(log^2 n) is incorrect), then the final complexity is O(n^3).
To build on awesomo's answer... if we can assume that the numbers are sorted, we can do better than O(n^k) for a given k: simply take all O(n^(k-1)) subsets of size (k-1), then do a binary search in what remains for a number that, when added to the first (k-1), gives the target. This is O(n^(k-1) * log n), so the complexity is certainly below O(n^k).
In fact, since we know the complexity is O(n^2) for k=3, we can do even better for k > 3: choose all (k-3)-subsets, of which there are O(n^(k-3)), and then solve the problem in O(n^2) on the remaining elements. This is O(n^(k-1)) for k >= 3.
However, maybe you can do even better? I'll think about this one.
EDIT: I was initially going to write a lot more proposing a different take on this problem, but I've decided to post an abridged version. I encourage other posters to see whether they believe this idea has any merit. The analysis is tough, but it might just be crazy enough to work.
We can use the fact that we have a fixed k, and that sums of odd and even numbers behave in certain ways, to define a recursive algorithm to solve this problem.
First, modify the problem so that you have both even and odd numbers in the list (this can be accomplished by dividing everything by two if all numbers are even, or by subtracting 1 from every number and k from the target sum if all are odd, repeating as necessary).
Next, use the fact that even target sums can be reached only by using an even number of odd numbers, and odd target sums can be reached using only an odd number of odd numbers. Generate appropriate subsets of the odd numbers, and call the algorithm recursively using the even numbers, the sum minus the sum of the subset of odd numbers being examined, and k minus the size of the subset of odd numbers. When k = 1, do binary search. If ever k > n (not sure this can happen), return false.
If you have very few odd numbers, this could allow you to very quickly pick up terms that must be part of a winning subset, or discard ones that cannot be. You can transform problems with lots of even numbers into equivalent problems with lots of odd numbers by using the subtraction trick. The worst case must therefore be when the counts of even and odd numbers are very similar... and that's where I am right now. A uselessly loose upper bound on this is many orders of magnitude worse than brute force, but I feel like this is probably at least as good as brute force. Thoughts are welcome!
EDIT2: An example of the above, for illustration.
{1, 2, 2, 6, 7, 7, 20}, k = 3, sum = 20.
Subset {} (of the odds):
  {2, 2, 6, 20}, k = 3, sum = 20
  = {1, 1, 3, 10}, k = 3, sum = 10   (divide everything by two)
  Subset {}:
    {10}, k = 3, sum = 10
    Failure
  Subset {1, 1}:
    {10}, k = 1, sum = 8
    Failure
  Subset {1, 3}:
    {10}, k = 1, sum = 6
    Failure
Subset {1, 7}:
  {2, 2, 6, 20}, k = 1, sum = 12
  Failure
Subset {7, 7}:
  {2, 2, 6, 20}, k = 1, sum = 6
  Success
The time complexity is trivially O(n^k) (the number of k-sized subsets that can be chosen from n elements).
Since k is a given constant, a (possibly quite high-order) polynomial upper bounds the complexity as a function of n.
