Given an array A of size N and an integer P, find the subarray B = A[i...j] such that i <= j, and compute the bitwise AND of the subarray elements, say K = B[i] & B[i+1] & ... & B[j].
Output the minimum value of |K-P| among all possible values of K.
Here is a quasilinear approach, assuming the elements of the array have a constant number of bits.
The rows of the matrix K[i,j] = A[i] & A[i+1] & ... & A[j] are monotonically non-increasing in j (ignore the lower triangle of the matrix). That means the absolute difference between K[i,:] and the search parameter P is unimodal, and a minimum (not necessarily the minimum, as the same minimum may occur several times, but then they will do so in a row) can be found in O(log n) time with ternary search (assuming access to elements of K can be arranged in constant time). Repeat this for every row and output the position of the lowest minimum, bringing it up to O(n log n).
Performing the row-minimum search in a time less than the size of row requires implicit access to the elements of the matrix K, which could be accomplished by creating b prefix-sum arrays, one for each bit of the elements of A. A range-AND can then be found by calculating all b single-bit range-sums and comparing them with the length of the range, each comparison giving a single bit of the range-AND. This takes O(nb) preprocessing and gives O(b) (so constant, by the assumption I made at the beginning) access to arbitrary elements of K.
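For concreteness, here is a minimal sketch of that preprocessing in Python (the function names are mine, not part of the original answer):

def build_bit_prefix_sums(A, b):
    # prefix[q][i] = number of elements among A[0..i-1] with bit q set
    prefix = [[0] * (len(A) + 1) for _ in range(b)]
    for q in range(b):
        for i, x in enumerate(A):
            prefix[q][i + 1] = prefix[q][i] + ((x >> q) & 1)
    return prefix

def range_and(prefix, b, i, j):
    # Bit q of A[i] & ... & A[j] is set iff all j-i+1 elements have bit q set.
    length = j - i + 1
    return sum(1 << q for q in range(b)
               if prefix[q][j + 1] - prefix[q][i] == length)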
I had hoped that the matrix of absolute differences would be a Monge matrix, allowing the SMAWK algorithm to be used, but that does not seem to be the case and I could not find a way to push it towards that property.
Are you familiar with the Find subarray with given sum problem? The solution I'm proposing uses the same method as in the efficient solution in the link. It is highly recommended to read it before continuing.
First let's notice that the longer a subarray is, the smaller its K will be, since the & operator between two numbers can only produce a number no larger than either.
So if I have a subarray from i to j and I want to make its K smaller, I'll add more elements (now the subarray is from i to j + 1); if I want to make K larger, I'll remove elements (i + 1 to j).
If we review the solution to Find subarray with given sum, we see that we can easily transform it to our problem: the given sum is K and summing is like using the & operator, except that more elements means a smaller K, so we flip the comparison of the sums.
That problem only tells you whether a solution exists, but if you simply maintain the minimal difference you have found so far, you can solve your problem as well.
Edit
This solution is correct only if all the numbers are positive, as mentioned in the comments; if not all the numbers are positive, the solution is slightly different.
Notice that if not all of the numbers in a subarray are negative, its K will be nonnegative, so in order to find a negative P we can consider only the all-negative subarrays in the algorithm, then use the algorithm as shown above.
Here is another quasilinear algorithm, mixing yonlif's Find subarray with given sum solution with Harold's idea to compute K[i,j]; it therefore needs no preprocessing, which is memory-hungry. I use a counter to keep track of bits and compute at most 2N values of K, each costing at most O(log N). Since log N is generally smaller than the word size (B), it's faster than a linear O(NB) algorithm.
Counting the bits of N numbers can be done with only ~log N words:
So you can compute A[i] & A[i+1] & ... & A[i+N-1] with only log N operations.
Here is the way to manage the counter: if
the counter is C0, C1, ..., Cp, and
Ck has bits Ck0, Ck1, ..., Ckm,
then Cpq ... C1q, C0q (the q-th bit of each counter word, read from Cp down to C0) is the binary representation of the number of bits equal to 1 among the q-th bits of {A[i], A[i+1], ..., A[j-1]}.
The bit-level implementation (in python); all bits are managed in parallel.
def add(counter, x):
    # Add the bits of x into the per-bit counters (ripple-carry).
    k = 0
    while x:
        x, counter[k] = x & counter[k], x ^ counter[k]
        k += 1

def sub(counter, x):
    # Remove the bits of x from the per-bit counters (ripple-borrow).
    k = 0
    while x:
        x, counter[k] = x & ~counter[k], x ^ counter[k]
        k += 1

def val(counter, count):  # return A[i] & ... & A[j-1] if count = j-i
    # Bit q of the AND is 1 iff the count of q-th bits equals the window length.
    k = 0
    res = -1
    while count:
        if count % 2 > 0:
            res &= counter[k]
        else:
            res &= ~counter[k]
        count //= 2
        k += 1
    return res
And the algorithm:

import numpy as np

def solve(A, P):
    counter = np.zeros(32, np.int64)  # 32 bit-counters: handles up to ~4G elements
    n = A.size
    i = j = 0
    K = P  # trigger filling the buffer
    mini = np.int64(2**63 - 1)
    while i < n:
        if K < P or j == n:  # dump buffer
            sub(counter, A[i])
            i += 1
        else:  # fill buffer
            add(counter, A[j])
            j += 1
        if j > i:
            K = val(counter, j - i)  # j - i is the current window length
            X = np.abs(K - P)
            if mini > X:
                mini = X
        else:
            K = P  # reset K
    return mini
val, sub and add are O(log N), so the whole process is O(N log N).
Test:
n = 10**5
A = np.random.randint(0, 10**8, n, dtype=np.int64)
P = np.random.randint(0, 10**8, dtype=np.int64)
%time solve(A,P)
Wall time: 0.8 s
Out: 452613036735
A numba-compiled version (decorate the 4 functions with @numba.jit) is 200x faster (5 ms).
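For instance, the decorated version of add would look like this (a sketch, assuming numba is installed; sub, val and solve get the same decorator):

import numba

@numba.jit(nopython=True)
def add(counter, x):
    k = 0
    while x:
        x, counter[k] = x & counter[k], x ^ counter[k]
        k += 1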
Yonlif's answer is wrong.
In the Find subarray with given sum solution we have a loop where we do subtraction.
while (curr_sum > sum && start < i-1)
curr_sum = curr_sum - arr[start++];
Since bitwise AND has no inverse operation, we cannot rewrite this line, so we cannot use this solution directly.
One would say that we can recalculate the sum every time we increase the lower bound of a sliding window (which would lead us to O(n^2) time complexity), but this solution would not work (I'll provide the code and a counterexample at the end).
Here is a brute-force solution that works in O(n^3):
unsigned int getSum(const vector<int>& vec, int from, int to) {
unsigned int sum = -1;
for (auto k = from; k <= to; k++)
sum &= (unsigned int)vec[k];
return sum;
}
void updateMin(unsigned int& minDiff, int sum, int target) {
minDiff = std::min(minDiff, (unsigned int)std::abs((int)sum - target));
}
// Brute force solution: O(n^3)
int maxSubArray(const std::vector<int>& vec, int target) {
auto minDiff = UINT_MAX;
for (auto i = 0; i < vec.size(); i++)
for (auto j = i; j < vec.size(); j++)
updateMin(minDiff, getSum(vec, i, j), target);
return minDiff;
}
Here is an O(n^2) solution in C++ (thanks to B.M's answer). The idea is to update the current sum instead of calling getSum for every pair of indices. You should also look at B.M's answer, as it contains conditions for an early break. Here is the C++ version:
int maxSubArray(const std::vector<int>& vec, int target) {
auto minDiff = UINT_MAX;
for (auto i = 0; i < vec.size(); i++) {
unsigned int sum = -1;
for (auto j = i; j < vec.size(); j++) {
sum &= (unsigned int)vec[j];
updateMin(minDiff, sum, target);
}
}
return minDiff;
}
Here is the NOT working solution with a sliding window: this is the idea from Yonlif's answer, with recomputation of the sum in O(n^2):
int maxSubArray(const std::vector<int>& vec, int target) {
auto minDiff = UINT_MAX;
unsigned int sum = -1;
auto left = 0, right = 0;
while (right < vec.size()) {
if (sum > target)
sum &= (unsigned int)vec[right++];
else
sum = getSum(vec, ++left, right);
updateMin(minDiff, sum, target);
}
right--;
while (left < vec.size()) {
sum = getSum(vec, left++, right);
updateMin(minDiff, sum, target);
}
return minDiff;
}
The problem with this solution is that we skip some sequences which can actually be the best ones.
Input: vector = [26,77,21,6], target = 5.
Output should be zero, as 77 & 21 = 5, but the sliding window approach is not capable of finding that one, as it will first consider the window [0..3] and then increase the lower bound, without the possibility of considering the window [1..2].
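As a quick sanity check of the counterexample, a throwaway brute-force scan (in Python, unlike the C++ above) confirms the optimum is 0:

from functools import reduce
from operator import and_

A, P = [26, 77, 21, 6], 5
best = min(abs(reduce(and_, A[i:j + 1]) - P)
           for i in range(len(A)) for j in range(i, len(A)))
print(best)  # 0, achieved by [77, 21] since 77 & 21 == 5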
If someone has a linear or log-linear solution that works, it would be nice to post it.
Here is a solution that I wrote; it has time complexity of the order O(n^2).
The code snippet below is written in Java.
class Solution {
    public int solve(int[] arr, int p) {
        int maxk = Integer.MIN_VALUE;
        int mink = Integer.MAX_VALUE;
        int size = arr.length;
        for (int i = 0; i < size; i++) {
            int temp = arr[i];
            for (int j = i; j < size; j++) {
                temp &= arr[j];
                if (temp <= p) {
                    if (temp > maxk)
                        maxk = temp;
                } else {
                    if (temp < mink)
                        mink = temp;
                }
            }
        }
        int min1 = Math.abs(mink - p);
        int min2 = Math.abs(maxk - p);
        return (min1 < min2) ? min1 : min2;
    }
}
It is a simple brute-force approach in which two numbers, say x and y, are found such that x <= p and y >= p, where x and y are values of K = arr[i] & arr[i+1] & ... & arr[j] (with i <= j) for different choices of i and j.
The answer is then just the minimum of |x-p| and |y-p|.
This is a Python implementation of the O(n) solution based on the broad idea from Yonlif's answer. There were doubts about whether this solution could work since no implementation was provided, so here's an explicit writeup.
Some caveats:
The code technically runs in O(n*B), where n is the number of integers and B is the number of unique bit positions set in any of the integers. With constant-width integers that's linear, but otherwise it's not generally linear in actual input size. You can get a true linear solution for exponentially large inputs with more bookkeeping.
Negative numbers in the array aren't handled, since their bit representation isn't specified in the question. See the comments on Yonlif's answer for hints on how to handle fixed-width two's complement signed integers.
The contentious part of the sliding window solution seems to be how to 'undo' bitwise &. The trick is to store the counts of set-bits in each bit-position of elements in your sliding window, not just the bitwise &. This means adding or removing an element from the window turns into adding or removing 1 from the bit-counters for each set-bit in the element.
On top of testing this code for correctness, it isn't too hard to prove that a sliding window approach can solve this problem. The bitwise & function on subarrays is weakly-monotonic with respect to subarray inclusion. Therefore the basic approach of increasing the right pointer when the &-value is too large, and increasing the left pointer when the &-value is too small, will cause our sliding window to equal an optimal sliding window at some point.
Here's a small example run on Dejan's testcase from another answer:
A = [26, 77, 21, 6], Target = 5
Active sliding window surrounded by []
[26], 77, 21, 6
left = 0, right = 0, AND = 26
----------------------------------------
[26, 77], 21, 6
left = 0, right = 1, AND = 8
----------------------------------------
[26, 77, 21], 6
left = 0, right = 2, AND = 0
----------------------------------------
26, [77, 21], 6
left = 1, right = 2, AND = 5
----------------------------------------
26, 77, [21], 6
left = 2, right = 2, AND = 21
----------------------------------------
26, 77, [21, 6]
left = 2, right = 3, AND = 4
----------------------------------------
26, 77, 21, [6]
left = 3, right = 3, AND = 6
So the code will correctly output 0, as the value of 5 was found for [77, 21]
Python code:
from collections import Counter
from typing import Dict, List

def find_bitwise_and(nums: List[int], target: int) -> int:
    """Find smallest difference between a subarray-& and target.

    Given a list of nonnegative integers and a nonnegative target,
    returns the minimum value of abs(target - BITWISE_AND(B))
    over all nonempty subarrays B.

    Runs in linear time on fixed-width integers.
    """

    def get_set_bits(x: int) -> List[int]:
        """Return indices of set bits in x"""
        return [i for i, bit in enumerate(reversed(bin(x)[2:]))
                if bit == '1']

    def counts_to_bitwise_and(window_length: int,
                              bit_counts: Dict[int, int]) -> int:
        """Given bit counts for a window of an array, return
        bitwise AND of the window's elements."""
        return sum((1 << key) for key, count in bit_counts.items()
                   if count == window_length)

    current_AND_value = nums[0]
    best_diff = abs(current_AND_value - target)
    window_bit_counts = Counter(get_set_bits(nums[0]))
    left_idx = right_idx = 0

    while right_idx < len(nums):
        # Expand the window to decrease the & value
        if current_AND_value > target or left_idx > right_idx:
            right_idx += 1
            if right_idx >= len(nums):
                break
            window_bit_counts += Counter(get_set_bits(nums[right_idx]))
        # Shrink the window to increase the & value
        else:
            window_bit_counts -= Counter(get_set_bits(nums[left_idx]))
            left_idx += 1

        current_AND_value = counts_to_bitwise_and(right_idx - left_idx + 1,
                                                  window_bit_counts)
        # Only consider nonempty windows
        if left_idx <= right_idx:
            best_diff = min(best_diff, abs(current_AND_value - target))

    return best_diff
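For example, on Dejan's testcase above:

print(find_bitwise_and([26, 77, 21, 6], 5))  # prints 0, from the window [77, 21]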
Related
I'm trying to improve my intuition around the following two sub-array problems.
Problem one
Return the length of the shortest, non-empty, contiguous sub-array of A with sum at least
K. If there is no non-empty sub-array with sum at least K, return -1
I've come across an O(N) solution online.
public int shortestSubarray(int[] A, int K) {
int N = A.length;
long[] P = new long[N+1];
for (int i = 0; i < N; ++i)
P[i+1] = P[i] + (long) A[i];
// Want smallest y-x with P[y] - P[x] >= K
int ans = N+1; // N+1 is impossible
Deque<Integer> monoq = new LinkedList(); //opt(y) candidates, as indices of P
for (int y = 0; y < P.length; ++y) {
// Want opt(y) = largest x with P[x] <= P[y] - K;
while (!monoq.isEmpty() && P[y] <= P[monoq.getLast()])
monoq.removeLast();
while (!monoq.isEmpty() && P[y] >= P[monoq.getFirst()] + K)
ans = Math.min(ans, y - monoq.removeFirst());
monoq.addLast(y);
}
return ans < N+1 ? ans : -1;
}
It seems to be maintaining a sliding window with a deque. It looks like a variant of Kadane's algorithm.
Problem two
Given an array of N integers (positive and negative), find the number of
contiguous sub-arrays whose sum is greater than or equal to K (also, positive or
negative)"
The best solution I've seen to this problem is O(nlogn) as described in the following answer.
tree = an empty search tree
result = 0
// This sum corresponds to an empty prefix.
prefixSum = 0
tree.add(prefixSum)
// Iterate over the input array from left to right.
for elem <- array:
prefixSum += elem
// Add the number of subarrays that have this element as the last one
// and their sum is not less than K.
result += tree.getNumberOfLessOrEqual(prefixSum - K)
// Add the current prefix sum to the tree.
tree.add(prefixSum)
print result
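If it helps to see it concretely, here is a sketch of the same counting idea in Python, using the third-party sortedcontainers library in place of the balanced search tree (any order-statistics tree would do):

from sortedcontainers import SortedList

def count_subarrays_with_sum_at_least(array, K):
    tree = SortedList([0])  # prefix sum of the empty prefix
    result = prefix_sum = 0
    for elem in array:
        prefix_sum += elem
        # Subarrays ending here with sum >= K correspond to
        # earlier prefix sums <= prefix_sum - K.
        result += tree.bisect_right(prefix_sum - K)
        tree.add(prefix_sum)
    return result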
My questions
Is my intuition that algorithm one is a variant of Kadane's algorithm correct?
If so, is there a variant of this algorithm (or another O(n) solution) that can be used to solve problem two?
Why can problem two only be solved in O(n log n) time when they look so similar?
I am working on a code challenge problem -- "find lucky triples". A "lucky triple" is defined as a combination (lst[i], lst[j], lst[k]) in a list lst, with i < j < k, where lst[i] divides lst[j] and lst[j] divides lst[k].
My task is to find the number of lucky triples in a given list. The brute-force way is to use three loops, but it takes too much time to solve the problem. I wrote this one and the system responded "time exceeded". The problem looks simple and easy, but the array is unsorted, so general methods like binary search do not work. I have been stuck on the problem for one day and hope someone can give me a hint. I am seeking a way to solve the problem faster; at least the time complexity should be lower than O(N^3).
A simple dynamic-programming-like algorithm will do this in quadratic time and linear space. You just have to maintain a counter c[i] for each item in the list, representing the number of previous integers that divide L[i].
Then, as you go through the list and test each integer L[k] with all previous item L[j], if L[j] divides L[k], you just add c[j] (which could be 0) to your global counter of triples, because that also implies that there exist exactly c[j] items L[i] such that L[i] divides L[j] and i < j.
int c[n] = {0}   // c[k] = number of indices j < k such that L[j] divides L[k]
int nbTriples = 0
for k = 0 to n-1
    for j = 0 to k-1
        if (L[k] % L[j] == 0)
            c[k]++
            nbTriples += c[j]
return nbTriples
There may be some better algorithm that uses fancy discrete maths to do it faster, but if O(n^2) is ok, this will do just fine.
In regard to your comment:
Why DP? We have something that can clearly be modeled as having a left to right order (DP orange flag), and it feels like reusing previously computed values could be interesting, because the brute force algorithm does the exact same computations a lot of times.
How to get from that to a solution? Run a simple example (hint: it had better treat the input from left to right). At step i, compute what you can compute from this particular point (ignoring everything on the right of i), and try to pinpoint what you compute over and over again for different i's: this is what you want to cache. Here, when you see a potential triple at step k (L[k] % L[j] == 0), you have to consider what happens on L[j]: "does it have some divisors on its left too? Each of these would give us a new triple. Let's see... But wait! We already computed that on step j! Let's cache this value!" And this is when you jump on your seat.
Full working solution in Python (wrapped in a function, debug prints removed):

def count_lucky_triples(l):
    # c[i] = number of indices j < i such that l[j] divides l[i]
    c = [0] * len(l)
    count = 0
    for i in range(len(l)):
        for j in range(i):
            if l[i] % l[j] == 0:
                c[i] += 1
                count += c[j]
    return count
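For instance, with the function name given to the cleaned-up version above:

print(count_lucky_triples([1, 1, 1]))           # 1
print(count_lucky_triples([1, 1, 1, 1]))        # 4
print(count_lucky_triples([1, 2, 3, 4, 5, 6]))  # 3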
Read up on the Sieve of Eratosthenes, a common technique for finding prime numbers, which could be adapted to find your 'lucky triples'. Essentially, you would need to iterate your list in increasing value order, and for each value, multiply it by an increasing factor until it is larger than the largest list element, and each time one of these multiples equals another value in the list, the multiple is divisible by the base number. If the list is sorted when given to you, then the i < j < k requirement would also be satisfied.
e.g. Given the list [3, 4, 8, 15, 16, 20, 40]:
Start at 3, which has multiples [6, 9, 12, 15, 18 ... 39] within the range of the list. Of those multiples, only 15 is contained in the list, so record under 15 that it has a factor 3.
Proceed to 4, which has multiples [8, 12, 16, 20, 24, 28, 32, 36, 40]. Mark those as having a factor 4.
Continue through the list. When you reach an element that has an existing known factor, then if you find any multiples of that number in the list, you have a triple. In this case, 8 (already marked as having the factor 4) has the multiples 16 and 40 in the list, so you know that 16 is divisible by 8, which is divisible by 4. Whereas 15 has no multiples in the list, so there is no value that can form a triple with 3 and 15.
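Here is a rough sketch of that marking strategy in Python (the naming is mine; it assumes the list is sorted with distinct positive values, as in the example above):

def count_triples_sieve(lst):
    index = {v: i for i, v in enumerate(lst)}
    divisor_count = {v: 0 for v in lst}  # list elements known to divide v
    count = 0
    for i, v in enumerate(lst):
        m = 2 * v
        while m <= lst[-1]:
            if m in index:
                # (v, m) is a divisible pair; every known divisor of v
                # extends it to a triple (x, v, m).
                count += divisor_count[v]
                divisor_count[m] += 1
            m += v
    return count

print(count_triples_sieve([3, 4, 8, 15, 16, 20, 40]))  # 3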
A precomputation step to the problem can help reduce time complexity.
Precomputation Step:
For every element (i), iterate over the rest of the array to find the elements (j) such that lst[j] % lst[i] == 0:
for (i = 0; i < n; i++)
{
    for (j = i + 1; j < n; j++)
    {
        if (a[j] % a[i] == 0)
            // mark those j's. You decide how to store this data
    }
}
This Precomputation Step will take O(n^2) time.
In the final step, use the details of the precomputation step to help find the triplets.
Form a graph: for each index, store an array of the indices of its multiples that occur ahead of it. Then, for each index, add up the multiple-counts of those indices, read from the graph. This has a complexity of O(n^2).
For example, for a list {1,2,3,4,5,6} there will be an array of the multiples. The graph will look like
{ 0:[1,2,3,4,5], 1:[3,5], 2: [5], 3:[],4:[], 5:[]}
So the triples will be {0 -> 1 -> 3}, {0 -> 1 -> 5} and {0 -> 2 -> 5}, i.e., 3.
package com.welldyne.mx.dao.core;
import java.util.LinkedList;
import java.util.List;
public class LuckyTriplets {
public static void main(String[] args) {
int[] integers = new int[2000];
for (int i = 1; i < 2001; i++) {
integers[i - 1] = i;
}
long start = System.currentTimeMillis();
int n = findLuckyTriplets(integers);
long end = System.currentTimeMillis();
System.out.println((end - start) + " ms");
System.out.println(n);
}
private static int findLuckyTriplets(int[] integers) {
List<Integer>[] indexMultiples = new LinkedList[integers.length];
for (int i = 0; i < integers.length; i++) {
indexMultiples[i] = getMultiples(integers, i);
}
int luckyTriplets = 0;
for (int i = 0; i < integers.length - 1; i++) {
luckyTriplets += getLuckyTripletsFromMultiplesMap(indexMultiples, i);
}
return luckyTriplets;
}
private static int getLuckyTripletsFromMultiplesMap(List<Integer>[] indexMultiples, int n) {
int sum = 0;
for (int i = 0; i < indexMultiples[n].size(); i++) {
sum += indexMultiples[(indexMultiples[n].get(i))].size();
}
return sum;
}
private static List<Integer> getMultiples(int[] integers, int n) {
List<Integer> multiples = new LinkedList<>();
for (int i = n + 1; i < integers.length; i++) {
if (isMultiple(integers[n], integers[i])) {
multiples.add(i);
}
}
return multiples;
}
/*
* if b is the multiple of a
*/
private static boolean isMultiple(int a, int b) {
return b % a == 0;
}
}
I just wanted to share my solution, which passed. Basically, the problem can be condensed to a tree problem. You need to pay attention to the wording of the question: it only treats numbers as different on the basis of index, not value, so {1,1,1} will have only 1 triple, but {1,1,1,1} will have 4. The constraint is {l_i, l_j, l_k} such that l_i divides l_j, l_j divides l_k, and i < j < k.
def solution(l):
    count = 0
    data = l
    tree_list = []
    for p, element in enumerate(data):
        if element == 0:
            tree_list.append([])
        else:
            temp = []
            for el in data[p + 1:]:
                if el % element == 0:
                    temp.append(el)
            tree_list.append(temp)
    for p, element_list in enumerate(tree_list):
        data[p] = 0
        temp = data[:]
        for element in element_list:
            pos_element = temp.index(element)
            count += len(tree_list[pos_element])
            temp[pos_element] = 0
    return count
Given positive integers from 1 to N, where N can go up to 10^9. Some K integers from these given integers are missing. K can be at most 10^5. I need to find the minimum sum that can't be formed from the remaining N-K elements, in an efficient way.
Example: say we have N=5, meaning we have {1,2,3,4,5}; let K=2 with missing elements {3,5}. The remaining array is then {1,2,4}, and the minimum sum that can't be formed from these remaining elements is 8, because:
1=1
2=2
3=1+2
4=4
5=1+4
6=2+4
7=1+2+4
So how to find this un-summable minimum?
I know how to find this if I can store all the remaining elements, using this approach:
We can use something similar to Sieve of Eratosthenes, used to find primes. Same idea, but with different rules for a different purpose.
Store the numbers from 0 to the sum of all the numbers, and cross off 0.
Then take numbers, one at a time, without replacement.
When we take the number Y, then cross off every number that is Y plus some previously-crossed off number.
When we have done this for every number that is remaining, the smallest un-crossed-off number is our answer.
However, its space requirement is high. Can there be a better and faster way to do this?
Here's an O(sort(K))-time algorithm.
Let 1 ≤ x_1 ≤ x_2 ≤ … ≤ x_m be the integers not missing from the set. For all i from 0 to m, let y_i = x_1 + x_2 + … + x_i be the partial sum of the first i terms. If it exists, let j be the least index such that y_j + 1 < x_{j+1}; otherwise, let j = m. It is possible to show via induction that the minimum sum that cannot be made is y_j + 1 (the hypothesis is that, for all i from 0 to j, the numbers x_1, x_2, …, x_i can make all of the sums from 0 to y_i and no others).
To handle the fact that the missing numbers are specified, there is an optimization that handles several consecutive numbers in constant time. I'll leave it as an exercise.
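Here is one possible sketch of that optimization in Python (the naming is mine; it assumes the missing values are distinct, and processes each run of consecutive present numbers in one arithmetic-series step):

def min_unmakable_sum(N, missing):
    reachable = 0  # invariant: every sum in [0, reachable] can be made
    bounds = [0] + sorted(missing) + [N + 1]
    for a, b in zip(bounds, bounds[1:]):
        lo, hi = a + 1, b - 1  # run of present numbers, possibly empty
        if lo > hi:
            continue
        if lo > reachable + 1:  # gap: reachable + 1 cannot be made
            break
        reachable += (lo + hi) * (hi - lo + 1) // 2  # absorb the whole run
    return reachable + 1

print(min_unmakable_sum(5, [3, 5]))  # 8, matching the example above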
Let X be a bitvector, initialized to zero, where bit k stands for the value k+1 (so the rightmost bit stands for 1, as in the walkthrough below). For each number Ni you set X = (X | X << Ni) | (1 << (Ni-1)), i.e. you can make Ni itself, and you can increase any value you could make previously by Ni.
This will set a '1' for every value you can make.
Running time is linear in N, and bitvector operations are fast.
process 1: X = 00000001
process 2: X = (00000001 | 00000001 << 2) | (00000010) = 00000111
process 4: X = (00000111 | 00000111 << 4) | (00001000) = 01111111
First number you can't make is 8.
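A minimal sketch of this in Python, using its arbitrary-precision integers as the bitvector (with the slightly different convention that bit v stands for the value v, and bit 0 for the empty sum):

def first_unmakable(nums):
    X = 1  # bit 0 set: the empty sum 0 is makable
    for n in nums:
        X |= X << n  # extend every makable sum by n
    v = 0
    while (X >> v) & 1:  # lowest clear bit = smallest unmakable sum
        v += 1
    return v

print(first_unmakable([1, 2, 4]))  # 8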
Here is my O(K lg K) approach. I didn't test it very much because of laziness (and possible overflow), sorry about that. If it works for you, I can explain the idea:
#include <algorithm>
#include <cstdlib>
#include <iostream>
using namespace std;

const int MAXK = 100003;

int n, k;
int a[MAXK];

long long sum(long long a, long long b) { // sum of the integers from a to b
    return max(0ll, b * (b + 1) / 2 - a * (a - 1) / 2);
}

void answer(long long ans) {
    cout << ans << endl;
    exit(0);
}

int main()
{
    cin >> n >> k;
    for (int i = 1; i <= k; ++i) {
        cin >> a[i];
    }
    a[0] = 0;
    a[k+1] = n+1;
    sort(a, a+k+2);

    long long ans = 0;
    for (int i = 1; i <= k+1; ++i) {
        // interval of existing numbers [lo, hi]
        int lo = a[i-1] + 1;
        int hi = a[i] - 1;
        if (lo <= hi && lo > ans + 1)
            break;
        ans += sum(lo, hi);
    }
    answer(ans + 1);
}
EDIT: well, thank God, @DavidEisenstat in his answer wrote the description of the approach I used, so I don't have to write it. Basically, what he mentions as an exercise is not adding the "existing numbers" one by one, but all at the same time. Before this, you just need to check whether any of them breaks the invariant, which can be done using binary search. Hope it helped.
EDIT2: as @DavidEisenstat pointed out in the comments, the binary search is not needed, since only the first number in every interval of existing numbers can break the invariant. I modified the code accordingly.
Given:
array of integers
values K and M
Question:
Find the maximum sum we can obtain from all K-element subsets of the given array such that the sum is less than the value M.
Is there a non-dynamic-programming solution available to this problem?
Or is it that only a dp[i][j][k] formulation can solve this type of problem?
Can you please explain the algorithm?
Many people have commented correctly that the answer below from years ago, which uses dynamic programming, incorrectly encodes solutions allowing an element of the array to appear in a "subset" multiple times. Luckily there is still hope for a DP based approach.
Let dp[i][j][k] = true if there exists a size k subset of the first i elements of the input array summing up to j
Our base case is dp[0][0][0] = true
Now, either the size k subset of the first i + 1 elements uses a[i + 1], or it does not, giving the recurrence
dp[i + 1][j][k] = dp[i][j - a[i + 1]][k - 1] OR dp[i][j][k]
Put everything together:
given A[1...N]
initialize dp[0...N][0...M][0...K] to false
dp[0][0][0] = true

for i = 0 to N - 1:
    for j = 0 to M:
        for k = 0 to K:
            if dp[i][j][k]:
                dp[i + 1][j][k] = true
            if j >= A[i + 1] and k >= 1 and dp[i][j - A[i + 1]][k - 1]:
                dp[i + 1][j][k] = true

max_sum = 0
for j = 0 to M:
    if dp[N][j][K]:
        max_sum = j
return max_sum
giving O(NMK) time and space complexity.
Stepping back, we've made one assumption here implicitly which is that A[1...i] are all non-negative. With negative numbers, initializing the second dimension 0...M is not correct. Consider a size K subset made up of a size K - 1 subset with sum exceeding M and one other sufficiently negative element of A[] such that overall sum no longer exceeds M. Similarly, our size K - 1 subset could sum to some extremely negative number and then with a sufficiently positive element of A[] sum to M. In order for our algorithm to still work in both cases we would need to increase the second dimension from M to the difference between the sum of all positive elements in A[] and the sum of all negative elements (the sum of the absolute values of all elements in A[]).
As for whether a non dynamic programming solution exists, certainly there is the naive exponential time brute force solution and variations that optimize the constant factor in the exponent.
Beyond that? Well your problem is closely related to subset sum and the literature for the big name NP complete problems is rather extensive. And as a general principle algorithms can come in all shapes and sizes -- it's not impossible for me to imagine doing say, randomization, approximation, (just choose the error parameter to be sufficiently small!) plain old reductions to other NP complete problems (convert your problem into a giant boolean circuit and run a SAT solver). Yes these are different algorithms. Are they faster than a dynamic programming solution? Some of them, probably. Are they as simple to understand or implement, without say training beyond standard introduction to algorithms material? Probably not.
This is a variant of the knapsack / subset-sum problem, where, in terms of time (at the cost of space requirements that grow exponentially with the size of the input numbers), dynamic programming is the most efficient method that CORRECTLY solves this problem. See Is this variant of the subset sum problem easier to solve? for a similar question to yours.
However, since your problem is not exactly the same, I'll provide an explanation anyways. Let dp[i][j] = true, if there is a subset of length i that sums to j and false if there isn't. The idea is that dp[][] will encode the sums of all possible subsets for every possible length. We can then simply find the largest j <= M such that dp[K][j] is true. Our base case dp[0][0] = true because we can always make a subset that sums to 0 by picking one of size 0.
The recurrence is also fairly straightforward. Suppose we've calculated the values of dp[][] using the first n values of the array. To find all possible subsets of the first n+1 values of the array, we can simply take the n+1_th value and add it to all the subsets we've seen before. More concretely, we have the following code:
initialize dp[0..K][0..M] to false
dp[0][0] = true

for i = 0 to N:
    for s = 0 to K - 1:
        for j = M to 0:
            if dp[s][j] && A[i] + j < M:
                dp[s + 1][j + A[i]] = true

for j = M to 0:
    if dp[K][j]:
        print j
        break
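For reference, here is a runnable Python translation of this table-filling (the function name is mine; it assumes nonnegative elements and M >= 1, and keeps the strict sum < M from the pseudocode):

def max_k_subset_sum_below(A, K, M):
    # dp[s][j] = True iff some s-element subset of A sums to exactly j (j < M)
    dp = [[False] * M for _ in range(K + 1)]
    dp[0][0] = True
    for a in A:
        for s in range(K - 1, -1, -1):  # descending so each item is used once
            for j in range(M - 1 - a, -1, -1):
                if dp[s][j]:
                    dp[s + 1][j + a] = True
    for j in range(M - 1, -1, -1):
        if dp[K][j]:
            return j
    return None  # no K-element subset sums to less than M

print(max_k_subset_sum_below([1, 3, 5, 6], 2, 10))  # 9 (3 + 6)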
We're looking for a subset of K elements for which the sum of the elements is a maximum, but less than M.
We can place bounds [X, Y] on the largest element in the subset as follows.
First we sort the N integers, values[0] ... values[N-1], so that values[0] is the smallest.
The lower bound X is the largest integer for which
values[X] + values[X-1] + .... + values[X-(K-1)] < M.
(If X is N-1, then we've found the answer.)
The upper bound Y is the largest integer less than N for which
values[0] + values[1] + ... + values[K-2] + values[Y] < M.
With this observation, we can now bound the second-highest term for each value of the highest term Z, where
X <= Z <= Y.
We can use exactly the same method, since the form of the problem is exactly the same. The reduced problem is finding a subset of K-1 elements, taken from values[0] ... values[Z-1], for which the sum of the elements is a maximum, but less than M - values[Z].
Once we've bound that value in the same way, we can put bounds on the third-largest value for each pair of the two highest values. And so on.
This gives us a tree structure to search, hopefully with much fewer combinations to search than N choose K.
Felix is correct that this is a special case of the knapsack problem. His dynamic programming algorithm takes O(K*M) size and O(K*K*M) amount of time. I believe his use of the variable N really should be K.
There are two books devoted to the knapsack problem. The latest one, by Kellerer, Pferschy and Pisinger [2004, Springer-Verlag, ISBN 3-540-40286-1] gives an improved dynamic programming algorithm on their page 76, Figure 4.2 that takes O(K+M) space and O(KM) time, which is huge reduction compared to the dynamic programming algorithm given by Felix. Note that there is a typo on the book's last line of the algorithm where it should be c-bar := c-bar - w_(r(c-bar)).
My C# implementation is below. I cannot say that I have extensively tested it, and I welcome feedback on this. I used BitArray to implement the concept of the sets given in the algorithm in the book. In my code, c is the capacity (which in the original post was called M), and I used w instead of A as the array that holds the weights.
An example of its use is:
int[] optimal_indexes_for_ssp = new SubsetSumProblem(12, new List<int> { 1, 3, 5, 6 }).SolveSubsetSumProblem();
where the array optimal_indexes_for_ssp contains [0,2,3] corresponding to the elements 1, 5, 6.
using System;
using System.Collections.Generic;
using System.Collections;
using System.Linq;
public class SubsetSumProblem
{
private int[] w;
private int c;
public SubsetSumProblem(int c, IEnumerable<int> w)
{
if (c < 0) throw new ArgumentOutOfRangeException("Capacity for subset sum problem must be at least 0, but input was: " + c.ToString());
int n = w.Count();
this.w = new int[n];
this.c = c;
IEnumerator<int> pwi = w.GetEnumerator();
pwi.MoveNext();
for (int i = 0; i < n; i++, pwi.MoveNext())
this.w[i] = pwi.Current;
}
public int[] SolveSubsetSumProblem()
{
int n = w.Length;
int[] r = new int[c+1];
BitArray R = new BitArray(c+1);
R[0] = true;
BitArray Rp = new BitArray(c+1);
for (int d =0; d<=c ; d++) r[d] = 0;
for (int j = 0; j < n; j++)
{
Rp.SetAll(false);
for (int k = 0; k <= c; k++)
if (R[k] && k + w[j] <= c) Rp[k + w[j]] = true;
for (int k = w[j]; k <= c; k++) // since Rp[k]=false for k<w[j]
if (Rp[k])
{
if (!R[k]) r[k] = j;
R[k] = true;
}
}
int capacity_used= 0;
for(int d=c; d>=0; d--)
if (R[d])
{
capacity_used = d;
break;
}
List<int> result = new List<int>();
while (capacity_used > 0)
{
result.Add(r[capacity_used]);
capacity_used -= w[r[capacity_used]];
}
if (capacity_used < 0) throw new Exception("Subset sum program has an internal logic error");
return result.ToArray();
}
}
Bentley's Programming Pearls (2nd ed.), in the chapter about the maximum subarray problem, describes its two-dimensional version:
...we are given an n × n array of reals, and we must find the maximum sum contained in any rectangular subarray. What is the complexity of this problem?
Bentley mentions that, as of the book's publication date (2000), the problem of finding an optimal solution was open.
Is it still so? Which is the best known solution? Any pointer to recent literature?
The 1D solution to this problem (the maximal sub-array) is Theta(n) using an algorithm called "Kadane's Algorithm" (there are other algorithms I'm sure, but I have personal experience with this one). The 2D solution to this problem (the maximal sub-rectangle) is known to be O(n^3) using an implementation of Kadane's Algorithm (again I'm sure there's others, but I've used this before).
Although we know that the 2D solution can be found in O(n^3), no one has been able to prove whether or not n^3 is a lower bound. For many algorithms, going up a dimension increases the running time by a predictable factor, but with this particular problem the complexity doesn't scale so cleanly, and since no one has either beaten n^3 or matched it with a lower bound, the problem is still open.
In reference to a similar case: we know that two matrices can be multiplied together in n^3 time. There is also an algorithm out there that can do it in about n^2.8 (Strassen's), I believe. However, there is no math indicating that we can't get it lower than n^2.8, so it's still an "open" problem.
// Program to find maximum sum subarray in a given 2D array
#include <stdio.h>
#include <string.h>
#include <limits.h>
#define ROW 4
#define COL 5
// Implementation of Kadane's algorithm for 1D array. The function returns the
// maximum sum and stores starting and ending indexes of the maximum sum subarray
// at addresses pointed by start and finish pointers respectively.
int kadane(int* arr, int* start, int* finish, int n)
{
// initialize sum and maxSum
int sum = 0, maxSum = INT_MIN, i;
// Just some initial value to check for all negative values case
*finish = -1;
// local variable
int local_start = 0;
for (i = 0; i < n; ++i)
{
sum += arr[i];
if (sum < 0)
{
sum = 0;
local_start = i+1;
}
else if (sum > maxSum)
{
maxSum = sum;
*start = local_start;
*finish = i;
}
}
// There is at-least one non-negative number
if (*finish != -1)
return maxSum;
// Special Case: When all numbers in arr[] are negative
maxSum = arr[0];
*start = *finish = 0;
// Find the maximum element in array
for (i = 1; i < n; i++)
{
if (arr[i] > maxSum)
{
maxSum = arr[i];
*start = *finish = i;
}
}
return maxSum;
}
// The main function that finds maximum sum rectangle in M[][]
void findMaxSum(int M[][COL])
{
// Variables to store the final output
int maxSum = INT_MIN, finalLeft, finalRight, finalTop, finalBottom;
int left, right, i;
int temp[ROW], sum, start, finish;
// Set the left column
for (left = 0; left < COL; ++left)
{
// Initialize all elements of temp as 0
memset(temp, 0, sizeof(temp));
// Set the right column for the left column set by outer loop
for (right = left; right < COL; ++right)
{
// Calculate sum between current left and right for every row 'i'
for (i = 0; i < ROW; ++i)
temp[i] += M[i][right];
// Find the maximum sum subarray in temp[]. The kadane() function
// also sets values of start and finish. So 'sum' is sum of
// rectangle between (start, left) and (finish, right) which is the
// maximum sum with boundary columns strictly as left and right.
sum = kadane(temp, &start, &finish, ROW);
// Compare sum with maximum sum so far. If sum is more, then update
// maxSum and other output values
if (sum > maxSum)
{
maxSum = sum;
finalLeft = left;
finalRight = right;
finalTop = start;
finalBottom = finish;
}
}
}
// Print final values
printf("(Top, Left) (%d, %d)\n", finalTop, finalLeft);
printf("(Bottom, Right) (%d, %d)\n", finalBottom, finalRight);
printf("Max sum is: %d\n", maxSum);
}
// Driver program to test above functions
int main()
{
int M[ROW][COL] = {{1, 2, -1, -4, -20},
{-8, -3, 4, 2, 1},
{3, 8, 10, 1, 3},
{-4, -1, 1, 7, -6}
};
findMaxSum(M);
return 0;
}
// I found this program, hope it will help you
FYI, the new edition of the book has an answer, but it is so vague, I don't know what it would entail.
In any case, I would use divide and conquer + dynamic programming to solve this. Let's define MaxSum(x, y) as the maximum sum of any subarray inside the rectangle bounded by the top-left most corner of the N X N array, with height y and width x. (so the answer to the question would be in MaxSum(n-1, n-1))
MaxSum(x, y) is the max between:
1) MaxSum(x, y-1)
2) MaxSum(x-1, y)
3) Array[x, y] (the number in this N X N array for this specific location)
4) MaxEnding(x, y-1) + SUM of all elements from Array[MaxEndingXStart(x, y-1), y] to Array[x, y]
5) MaxEnding(x-1, y) + SUM of all elements from Array[x, MaxEndingYStart(x-1, y)] to Array[x, y]
MaxEnding(x, y-1) is the maximum sum of any subarray that INCLUDES the # in Array[x, y-1].
Likewise, MaxEnding(x-1, y) is the maximum sum of any subarray that INCLUDES the # in Array[x-1, y].
MaxEndingXStart(x, y-1) is the STARTING x coordinate of the subarray that has the maximum sum of any subarray that INCLUDEs the # in Array[x, y-1].
MaxEndingYStart (x-1, y) is the STARTING y coordinate of the subarray that has the maximum sum of any subarray that INCLUDES the # in Array[x-1, y].
The two sums in #4 and #5 above can be computed easily by keeping, for each row, a running sum of the elements encountered as you go through each column; subtracting two such sums gives the sum of a specific section.
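A tiny sketch of that bookkeeping (hypothetical helper names; prefix[y][x] holds the sum of the first x elements of row y):

def build_row_prefix(array):
    # prefix[y][x] = array[y][0] + ... + array[y][x-1]
    prefix = []
    for row in array:
        acc = [0]
        for v in row:
            acc.append(acc[-1] + v)
        prefix.append(acc)
    return prefix

def row_range_sum(prefix, y, x1, x2):
    # sum of array[y][x1..x2] in O(1)
    return prefix[y][x2 + 1] - prefix[y][x1]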
To implement this, you would need to use a bottom-up approach, since you need to compute MaxSum(x, y-1), MaxSum(x-1, y), MaxEnding(x, y-1), and MaxEnding(x-1, y), so you can do lookups when you compute MaxEnding(x, y).
//first do some preprocessing and store Max(0, i) for all i from 0 to n-1.
//and store Max(i, 0) for all i from 0 to n-1.
for(int i =1; i < n; i++){
for(int j=1; j < n; j++) {
//LOGIC HERE
}
}