I am solving a problem where we are given a square binary grid. We have to find the number of rectangles in which the number of 1's is at most k, in the following time complexity: O(N^2 log(N) k).
I am not able to think of any approach that achieves this time complexity. Is it even possible, and if so, how can we approach it?
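For what it's worth, the one building block I do have is the standard 2-D prefix-sum table, which counts the 1's in any axis-aligned rectangle in O(1) after O(N^2) preprocessing. A rough Python sketch (function names are mine), in case it helps frame answers; it is not the O(N^2 log(N) k) solution itself:

    # Build pre[i][j] = number of 1's in grid[0..i-1][0..j-1] (O(N^2) preprocessing).
    def build_prefix(grid):
        n = len(grid)
        pre = [[0] * (n + 1) for _ in range(n + 1)]
        for i in range(n):
            for j in range(n):
                pre[i + 1][j + 1] = grid[i][j] + pre[i][j + 1] + pre[i + 1][j] - pre[i][j]
        return pre

    # Count the 1's in the rectangle spanning rows r1..r2 and columns c1..c2 (inclusive), in O(1).
    def count_ones(pre, r1, c1, r2, c2):
        return pre[r2 + 1][c2 + 1] - pre[r1][c2 + 1] - pre[r2 + 1][c1] + pre[r1][c1]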
Related
In the Closest Pair algorithm, it is said that presorting the points according to their x and y coordinates can decrease the time complexity from O(n log^2 n) to O(n log n), but how can that happen? I think the presort also requires O(n log n) time rather than O(n), so the recurrence is still T(n) = 2T(n/2) + O(n log n).
Can anyone show in detail how the presort brings the work per recursive step down to O(n)? Or do I have some misunderstanding about it?
Not sure what you're referring to as "presort", but the algorithm is O(n log(n)), following these steps:
1. First, sort according to the x coordinate.
2. Recursively, divide into two similarly-sized sets, split at an x value x_m:
   a. solve each of the left and right subsets of x_m recursively;
   b. for each of the points on the left, find the closest points in a bounded rectangle containing points to the right (see details in the link above); same for the points on the right;
   c. return the minimum of the smallest distances found in a. and b.
Step 1 is O(n log(n)). Step 2 is given by T(n) = 2T(n/2) + Θ(n), which solves to O(n log(n)) as well.
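A rough Python sketch of this recursion (my own function names; points are assumed to be distinct (x, y) tuples, and each half keeps a y-sorted copy so the per-level work is O(n) rather than a fresh O(n log n) sort):

    import math

    def _dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def closest_pair(points):
        # Step 1: one O(n log n) sort by x up front; also keep the points sorted by y.
        px = sorted(points)                      # sorted by x
        py = sorted(points, key=lambda p: p[1])  # the same points sorted by y
        return _rec(px, py)

    def _rec(px, py):
        n = len(px)
        if n <= 3:
            # Brute force the tiny cases.
            return min(_dist(px[i], px[j]) for i in range(n) for j in range(i + 1, n))
        # Step 2: split around the dividing value x_m.
        mid = n // 2
        xm = px[mid][0]
        left = set(px[:mid])
        lpy = [p for p in py if p in left]       # left half, still y-sorted: O(n), no re-sort
        rpy = [p for p in py if p not in left]
        # 2a: solve each half recursively.
        d = min(_rec(px[:mid], lpy), _rec(px[mid:], rpy))
        # 2b: check pairs straddling x_m inside a strip of width 2d, scanned in y order;
        #     only a constant number of following points can be within distance d.
        strip = [p for p in py if abs(p[0] - xm) < d]
        for i, p in enumerate(strip):
            for q in strip[i + 1:i + 8]:
                if q[1] - p[1] >= d:
                    break
                d = min(d, _dist(p, q))
        # 2c: return the minimum of the distances found in 2a and 2b.
        return d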
I have a question that asks to express the complexity of the subset sum problem in terms of N only.
In case you're not familiar with it, the problem is: given weights, each with cost 1, how would you find the optimal selection to achieve a given maximum weight?
So O(NW) is the time and space cost, where the space goes to the 2-D matrix used by the dynamic programming solution. This problem is a special case of the knapsack problem.
I'm not sure how to approach this. The only thing I could think of was to take the sum of all weights and use that as a general worst-case bound. Thanks
If the weight is not bounded, so that the complexity must depend solely on N, there is at least an O(2^N) approach: try all possible subsets of the N elements and compute their sums.
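A minimal sketch of that idea in Python (names are mine; here "optimal" is taken to mean the largest subset sum not exceeding the limit W). Summing each subset from scratch makes it O(N 2^N) as written; the point is only that the bound depends on N alone:

    # Try all 2^N subsets, encoded as bitmasks, and keep the best feasible sum.
    def best_subset_sum(weights, W):
        n = len(weights)
        best = 0
        for mask in range(1 << n):
            s = sum(w for i, w in enumerate(weights) if (mask >> i) & 1)
            if s <= W:
                best = max(best, s)
        return best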
If you are willing to use exponential space rather than polynomial space, you can solve the problem in O(n 2^(n/2)) time and O(2^(n/2)) space. Split your set of n weights into two sets A and B of roughly equal size and compute the sums of all subsets of each set. Then hash every subset sum of A, and look up W - x for every subset sum x of B; a hit in the hash table means you have found a subset of A and a subset of B whose sums together equal W.
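A sketch of that meet-in-the-middle idea in Python (function names are mine), using a set as the hash table and returning whether some subset sums exactly to W:

    def all_subset_sums(weights):
        # All subset sums of one half, built incrementally (2^len(weights) values).
        sums = [0]
        for w in weights:
            sums += [s + w for s in sums]
        return sums

    def subset_sum_exists(weights, W):
        # Split into two halves A and B of roughly equal size.
        half = len(weights) // 2
        A, B = weights[:half], weights[half:]
        # Hash every subset sum of A, then for each subset sum x of B look up W - x:
        # a hit means a subset of A plus a subset of B sums to exactly W.
        sums_a = set(all_subset_sums(A))
        return any((W - x) in sums_a for x in all_subset_sums(B))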
Suppose you have an N × N matrix where each row has exactly one nonzero element and each column has exactly one nonzero element (the nonzero elements can be either positive or negative). We want to find the maximum-sum submatrix. How efficiently can we do so?
The matrix has dimension N × N and only N non-zero elements. N is so large that I can't use an O(N^3) algorithm. Does anyone know how to solve this in O(N^2), O(N log N), or some similar time complexity?
Thanks!
If you want to find the maximum sum subrectangle, you can do it in O(n^2 log n) time using the algorithm described here: maximum sum subrectangle in a sparse matrix. This beats Kadane's algorithm, which is O(n^3).
I have an m x n matrix which is sparse with N non-zero entries.
A modified version of Kadane's 2-d algorithm can find the maximum sum subrectangle in O(m N log n) time, which beats the traditional Kadane's algorithm, at O(m^2 n), for sufficiently sparse matrices.
Now I want to know if the optimal solution can be updated quickly if one entry in the matrix is changed.
By "quickly" I mean something like O(m log n) time or better.
It is possible that the matrix does not need to be sparse for a solution to work out; however, a solution for the case N = O(min(m, n)) would be fine.
Input: an n×n matrix of positive/negative numbers and an integer k.
Output: the submatrix with at least k elements that maximizes the sum of its elements divided by its number of elements.
Is there any algorithm better than O(n^4) for this problem?
An FFT-based divide-and-conquer approach to this problem:
https://github.com/thearn/maximum-submatrix-sum
It's not as efficient as Kadane's algorithm (O(N^3) for Kadane's vs. O(N^3 log N) here), but it does give a different take on constructing the solution.
There is an O(n^3) 2-D Kadane algorithm for finding the maximum-sum submatrix (i.e. subrectangle) in an n×n matrix (you can find posts on SO about it, or read about it online). Once you understand how that algorithm works, it is clear that you can get an O(n^3) time solution for your problem if you can find a maximum-average subinterval of length at least m in a 1-D array of n numbers in O(n) time. This is indeed possible; see the paper cs.slu.edu/~goldwasser/publications/DensityPreprint.pdf
Thus there is an O(n^3) time solution for your problem.
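For reference, a sketch of the standard O(n^3) 2-D Kadane skeleton referred to above (in Python, my own naming): fix a pair of rows, collapse the columns between them into a 1-D array of column sums, and run a 1-D routine on that array. For the average-with-at-least-k-elements variant, the 1-D Kadane step is what you would replace with the O(n) maximum-average-subinterval routine from the linked paper:

    def max_sum_submatrix(mat):
        # O(n^3): for every pair of rows (top, bottom), maintain col[j] = sum of
        # mat[top..bottom][j] and run 1-D Kadane on that array of column sums.
        n = len(mat)
        best = mat[0][0]
        for top in range(n):
            col = [0] * n
            for bottom in range(top, n):
                for j in range(n):
                    col[j] += mat[bottom][j]
                # 1-D Kadane over col. For the max-average variant, swap this loop
                # for the O(n) max-average subinterval routine, with the minimum
                # interval length chosen so that (bottom - top + 1) * length >= k.
                cur = col[0]
                best = max(best, cur)
                for j in range(1, n):
                    cur = max(col[j], cur + col[j])
                    best = max(best, cur)
        return best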