While working on an image processing task I have come across the following problem: There are n points in the unit square with coordinates $x_i$ and $y_i$, each assigned with a positive or negative weight $w_i$. Find a rectangle such that the sum of all weights of those points lying within the rectangle is positive and maximal.
By defining a proper grid, the problem can be rephrased as finding a submatrix in an n-by-n matrix A whose sum of elements is maximal. This is also known as the "maximal subrectangle problem" and has been discussed on SO before. While a brute force approach has a run-time of O(n^5), there is a kind of tricky solution with a run-time of O(n^3). It utilizes a solution for the corresponding one-dimensional problem, called "maximal subarray problem", with an O(n) run-time.
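For reference, here is a minimal Python sketch of that standard O(n^3) approach (my own illustration, not tuned code): fix a pair of rows, collapse the columns between them into a one-dimensional array, and run Kadane's algorithm on it.

import numpy as np

def max_sum_submatrix(A):
    # O(n^3) sketch: for each pair of rows, reduce to the 1-D maximal
    # subarray problem on the column sums and solve it with Kadane's algorithm.
    n_rows, n_cols = A.shape
    best = -np.inf
    for top in range(n_rows):
        col_sums = np.zeros(n_cols)
        for bottom in range(top, n_rows):
            col_sums += A[bottom, :]
            cur = 0.0
            for v in col_sums:          # Kadane's 1-D scan
                cur = max(v, cur + v)
                best = max(best, cur)
    return best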
I have implemented both algorithms in R and can solve 100s of points in a few seconds. But with thousands of points it will be much too slow, probably even when outsourcing the loops to some Fortran or C code.
Now look at the matrix A. Assuming (without loss of generality) that all points have pairwise distinct x- and y-coordinates, A has a special form: in each row and each column of A there is exactly one non-zero element. For matrices with this special property I assume there should be an algorithm performing the task in O(n^2) time, or even better.
Here is an example with the optimal rectangle added:
set.seed(723)
N <- 50; w <- rnorm(N)
x <- runif(N); y <- runif(N)
clr <- ifelse (w >= 0, "blue", "red")
plot(x, y, pch = 20, col = clr, xlim = c(0, 1), ylim = c(0, 1))
rect(0.075, 0.45, 0.31, 0.95, border="gray")
You see that there can be red, i.e. negative, points in the optimal rectangle. It also shows that it will not suffice to solve the one-dimensional cases for the x- and y-coordinates separately.
I will translate the standard solution into Fortran, but I would surely like to have a more efficient algorithm at hand.
These guys (found from the wiki page) claim to have a simpler sub-cubic solution for the 2-dimensional case. It may be the one you're already aware of.
See the accepted answer for "Maximum sum subrectangle in a sparse matrix". For an nxn matrix with m non-zero elements, the solution there takes O(nm log n) time. So, for you, since you have exactly n non-zero elements, this would give O(n^2 log n) time. Probably you'll be able to handle cases with n being 50 times larger or more, vs. the standard O(n^3) solution.
The best I can do is O(n^2 log n).
If we look at the (n+1 choose 2) calls made by Kadane's 2D algorithm to Kadane's 1D algorithm on an input of your type, all but O(n) successive pairs are on 1D arrays that differ only in one element. I'm going to present a divide-and-conquer variant of Kadane's 1D algorithm; by caching the outcomes of each recursive call, only the O(log n) calls that involve the changed array element have to be recomputed, reducing the (amortized) running time of the inner loop from Theta(n) to Theta(log n).
def maxsubarray(arr, a, b):
    # this function returns a 4-tuple
    # element 0 is the max over intervals of the form [i, j)
    # element 1 is the max over intervals of the form [i, b)
    # element 2 is the max over intervals of the form [a, j)
    # element 3 is the max over intervals of the form [a, b), i.e., sum(arr[a:b])
    n = b - a
    if n == 0:
        return (0, 0, 0, 0)
    elif n == 1:
        x = arr[a]
        y = max(x, 0)
        return (y, y, y, x)
    else:
        m = a + n // 2
        l = maxsubarray(arr, a, m)
        r = maxsubarray(arr, m, b)
        return (max(l[0], r[0], l[1] + r[2]),
                max(r[1], l[1] + r[3]),
                max(l[2], l[3] + r[2]),
                l[3] + r[3])
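For instance (my own quick check, not part of the original answer), the array [2, -3, 4, -1, 2] has maximal subarray sum 5:

arr = [2, -3, 4, -1, 2]
print(maxsubarray(arr, 0, len(arr)))   # (5, 5, 3, 4): best subarray is [4, -1, 2]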
This is a question that came up in the context of sorting points with integer coordinates into clockwise order, but this question is not about how to do that sorting.
This question is about the observation that 2-d vectors have a natural cyclic ordering. Unsigned integers with usual overflow behavior (or signed integers using twos-complement) also have a natural cyclic ordering. Can you easily map from the first ordering to the second?
So, the exact question is whether there is a map from pairs of twos-complement signed 32-bit integers to unsigned (or twos-complement signed) 64-bit integers such that any list of vectors that is in clockwise order maps to integers that are in decreasing (modulo overflow) order?
Some technical cases that people will likely ask about:
Yes, vectors that are multiples of each other should map to the same thing
No, I don't care which vector (if any) maps to 0
No, the images of antipodal vectors don't have to differ by 2^63 (although that is a nice-to-have)
The obvious answer is that since there are only around 0.6*2^64 distinct slopes, the answer is yes, such a map exists, but I'm looking for one that is easily computable. I understand that "easily" is subjective, but I'm really looking for something reasonably efficient and not terrible to implement. So, in particular, no counting every lattice point between the ray and the positive x-axis (unless you know a clever way to do that without enumerating them all).
An important thing to note is that it can be done by mapping to 65-bit integers. Simply project the vector out to where it hits the box bounded by x,y=+/-2^62 and round toward negative infinity. You need 63 bits to represent that integer and two more to encode which side of the box you hit. The implementation needs a little care to make sure you don't overflow, but only has one branch and two divides and is otherwise quite cheap. It doesn't work if you project out to 2^61 because you don't get enough resolution to separate some slopes.
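Here is a rough Python sketch of that projection idea as I read it (my own code, using four explicit branches rather than the single branch and two divides mentioned above; treat it as an illustration of the construction, not the exact implementation):

B = 1 << 62   # half-width of the box; 2^61 would not give enough resolution

def box_key(x, y):
    # Map a nonzero integer vector (x, y) with |x|, |y| < 2^31 to an integer
    # that increases with counterclockwise angle: project onto the square
    # |X| = |Y| = B and record (side, offset along that side), rounding
    # toward negative infinity via Python's floor division.
    assert (x, y) != (0, 0)
    if x > 0 and -x < y <= x:            # right side, angles (-45°, 45°]
        side, t = 0, (y * B) // x
    elif y > 0 and -y <= x < y:          # top side, angles (45°, 135°]
        side, t = 1, (-x * B) // y
    elif x < 0 and x <= y < -x:          # left side, angles (135°, 225°]
        side, t = 2, (y * B) // x
    else:                                # bottom side, angles (225°, 315°]
        side, t = 3, (-x * B) // y
    # t lies in (-B, B]; pack (side, t) into one integer (fits in ~65 bits).
    return side * (2 * B + 1) + (t + B)

Multiples of a vector give the same key, and because two distinct slopes with 31-bit coordinates differ by more than 2^-62, their projections onto the box differ by more than 1 and survive the rounding.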
Also, before you suggest "just use atan2", compute atan2(1073741821,2147483643) and atan2(1073741820,2147483641)
EDIT: Expansion on the "atan2" comment:
Given two values x_1 and x_2 that are coprime and just less than 2^31 (I used 2^31-5 and 2^31-7 in my example), we can use the extended Euclidean algorithm to find y_1 and y_2 such that y_1/x_1-y_2/x_2 = 1/(x_1*x_2) ~= 2^-62. Since the derivative of arctan is bounded by 1, the difference of the outputs of atan2 on these values is not going to be bigger than that. So, there are lots of pairs of vectors that won't be distinguishable by atan2 as vanilla IEEE 754 doubles.
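To see this concretely (my own check; the exact behaviour depends on the platform's atan2, but on a typical IEEE 754 double implementation the two results coincide):

from math import atan2

# The two slopes differ by about 2^-62, far below the ~2^-53 resolution of a
# double near atan(0.5), so both calls are expected to return the same value.
print(atan2(1073741821, 2147483643) == atan2(1073741820, 2147483641))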
If you have 80-bit extended registers and you are sure you can retain residency in those registers throughout the computation (and don't get kicked out by a context switch or just plain running out of extended registers), then you're fine. But, I really don't like the correctness of my code relying on staying resident in extended registers.
Here's one possible approach, inspired by a comment in your question. (For the tl;dr version, skip down to the definition of point_to_line at the bottom of this answer: that gives a mapping for the first quadrant only. Extension to the whole plane is left as a not-too-difficult exercise.)
Your question says:
in particular, no counting every lattice point between the ray and the positive x-axis (unless you know a clever way to do that without enumerating them all).
There is an algorithm to do that counting without enumerating the points; its efficiency is akin to that of the Euclidean algorithm for finding greatest common divisors. I'm not sure to what extent it counts as either "easily computable" or "clever".
Suppose that we're given a point (p, q) with integer coordinates and both p and q positive (so that the point lies in the first quadrant). We might as well also assume that q < p, so that the point (p, q) lies between the x-axis y = 0 and the diagonal line y = x: if we can solve the problem for the half of the first quadrant that lies below the diagonal, we can make use of symmetry to solve it generally.
Write M for the bound on the size of p and q, so that in your example we want M = 2^31.
Then the number of lattice points strictly inside the triangle bounded by:
the x-axis y = 0
the ray y = (q/p)x that starts at the origin and passes through (p, q), and
the vertical line x = M
is the sum as x ranges over integers in (0, M) of ⌈qx/p⌉ - 1.
For convenience, I'll drop the -1 and include 0 in the range of the sum; both those changes are trivial to compensate for. And now the core functionality we need is the ability to evaluate the sum of ⌈qx/p⌉ as x ranges over the integers in an interval [0, M). While we're at it, we might also want to be able to compute a closely-related sum: the sum of ⌊qx/p⌋ over that same range of x (and it'll turn out that it makes sense to evaluate both of these together).
For testing purposes, here are slow, naive-but-obviously-correct versions of the functions we're interested in, here written in Python:
def floor_sum_slow(p, q, M):
    """
    Sum of floor(q * x / p) for 0 <= x < M.
    Assumes p positive, q and M nonnegative.
    """
    return sum(q * x // p for x in range(M))

def ceil_sum_slow(p, q, M):
    """
    Sum of ceil(q * x / p) for 0 <= x < M.
    Assumes p positive, q and M nonnegative.
    """
    return sum((q * x + p - 1) // p for x in range(M))
And an example use:
>>> floor_sum_slow(51, 43, 2**28) # takes several seconds to complete
30377220771239253
>>> ceil_sum_slow(140552068, 161600507, 2**28)
41424305916577422
These sums can be evaluated much faster. The first key observation is that if q >= p, then we can apply the Euclidean "division algorithm" and write q = ap + r for some integers a and r. The sum then simplifies: the ap part contributes a factor of a * M * (M - 1) // 2, and we're reduced from computing floor_sum(p, q, M) to computing floor_sum(p, r, M). Similarly, the computation of ceil_sum(p, q, M) reduces to the computation of ceil_sum(p, q % p, M).
The second key observation is that we can express floor_sum(p, q, M) in terms of ceil_sum(q, p, N), where N is the ceiling of (q/p)M. To do this, we consider the rectangle [0, M) x (0, (q/p)M), and divide that rectangle into two triangles using the line y = (q/p)x. The number of lattice points within the rectangle that lie on or below the line is floor_sum(p, q, M), while the number of lattice points within the rectangle that lie above the line is ceil_sum(q, p, N). Since the total number of lattice points in the rectangle is (N - 1)M, we can deduce the value of floor_sum(p, q, M) from that of ceil_sum(q, p, N), and vice versa.
Combining those two ideas, and working through the details, we end up with a pair of mutually recursive functions that look like this:
def floor_sum(p, q, M):
    """
    Sum of floor(q * x / p) for 0 <= x < M.
    Assumes p positive, q and M nonnegative.
    """
    a = q // p
    r = q % p
    if r == 0:
        return a * M * (M - 1) // 2
    else:
        N = (M * r + p - 1) // p
        return a * M * (M - 1) // 2 + (N - 1) * M - ceil_sum(r, p, N)

def ceil_sum(p, q, M):
    """
    Sum of ceil(q * x / p) for 0 <= x < M.
    Assumes p positive, q and M nonnegative.
    """
    a = q // p
    r = q % p
    if r == 0:
        return a * M * (M - 1) // 2
    else:
        N = (M * r + p - 1) // p
        return a * M * (M - 1) // 2 + N * (M - 1) - floor_sum(r, p, N)
Performing the same calculation as before, we get exactly the same results, but this time the result is instant:
>>> floor_sum(51, 43, 2**28)
30377220771239253
>>> ceil_sum(140552068, 161600507, 2**28)
41424305916577422
A bit of experimentation should convince you that the floor_sum and floor_sum_slow functions give the same result in all cases, and similarly for ceil_sum and ceil_sum_slow.
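For example, a quick randomized check along those lines (my own, not from the original answer):

from random import randrange

for _ in range(1000):
    p, q, M = randrange(1, 50), randrange(50), randrange(50)
    assert floor_sum(p, q, M) == floor_sum_slow(p, q, M)
    assert ceil_sum(p, q, M) == ceil_sum_slow(p, q, M)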
Here's a function that uses floor_sum and ceil_sum to give an appropriate mapping for the first quadrant. I failed to resist the temptation to make it a full bijection, enumerating points in the order that they appear on each ray, but you can fix that by simply replacing the + gcd(p, q) term with + 1 in both branches.
from math import gcd

def point_to_line(p, q, M):
    """
    Bijection from [0, M) x [0, M) to [0, M^2), preserving
    the 'angle' ordering.
    """
    if p == q == 0:
        return 0
    elif q <= p:
        return ceil_sum(p, q, M) + gcd(p, q)
    else:
        return M * (M - 1) - floor_sum(q, p, M) + gcd(p, q)
Extending to the whole plane should be straightforward, though just a little bit messy due to the asymmetry between the negative range and the positive range in the two's complement representation.
Here's a visual demonstration for the case M = 7, printed using this code:
M = 7
for q in reversed(range(M)):
    for p in range(M):
        print(" {:02d}".format(point_to_line(p, q, M)), end="")
    print()
Results:
48 42 39 36 32 28 27
47 41 37 33 29 26 21
46 40 35 30 25 20 18
45 38 31 24 19 16 15
44 34 23 17 14 12 11
43 22 13 10 09 08 07
00 01 02 03 04 05 06
This doesn't meet your requirement for an "easy" function, nor for a "reasonably efficient" one. But in principle it would work, and it might give some idea of how difficult the problem is. To keep things simple, let's consider just the case where 0 < y ≤ x, because the full problem can be solved by splitting the full 2D plane into eight octants and mapping each to its own range of integers in essentially the same way.
A point (x1, y1) is "anticlockwise" of (x2, y2) if and only if the slope y1/x1 is greater than the slope y2/x2. To map the slopes to integers in an order-preserving way, we can consider the sequence of all distinct fractions whose numerators and denominators are within range (i.e. up to 2^31), in ascending numerical order. Note that each fraction's numerical value is between 0 and 1, since we are just considering one octant of the plane.
This sequence of fractions is finite, so each fraction has an index at which it occurs in the sequence; so to map a point (x, y) to an integer, first reduce the fraction y/x to its simplest form (e.g. using Euclid's algorithm to find the GCD to divide by), then compute that fraction's index in the sequence.
It turns out this sequence is called a Farey sequence; specifically, it's the Farey sequence of order 2^31. Unfortunately, computing the index of a given fraction in this sequence turns out to be neither easy nor reasonably efficient. According to the paper
Computing Order Statistics in the Farey Sequence by Corina E. Pǎtraşcu and Mihai Pǎtraşcu, there is a somewhat complicated algorithm to compute the rank (i.e. index) of a fraction in O(n) time, where n in your case is 2^31, and there is unlikely to be an algorithm in time polynomial in log n because the algorithm can be used to factorise integers.
All of that said, there might be a much easier solution to your problem, because I've started from the assumption of wanting to map these fractions to integers as densely as possible (i.e. no "unused" integers in the target range), whereas in your question you wrote that the number of distinct fractions is about 60% of the available range of size 2^64. Intuitively, that amount of leeway doesn't seem like a lot to me, so I think the problem is probably quite difficult and you may need to settle for a solution that uses a larger output range, or a smaller input range. At the very least, by writing this answer I might save somebody else the effort of investigating whether this approach is feasible.
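Just to make the notion of "rank in the Farey sequence" concrete, here is a brute-force illustration (my own code, O(n^2) and hopeless for n = 2^31, shown only to pin down what would need to be computed efficiently):

from math import gcd

def farey_index(p, q, n):
    # Number of fractions a/b in the Farey sequence of order n (reduced
    # fractions in [0, 1] with denominator <= n) that are <= p/q.
    return sum(1
               for b in range(1, n + 1)
               for a in range(b + 1)
               if gcd(a, b) == 1 and a * q <= p * b)

print(farey_index(1, 2, 3))   # 3: the fractions 0/1, 1/3, 1/2 of F_3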
Just some random ideas / observations:
(edit: added two more and marked the first one as wrong as pointed out in the comments)
Divide into 16 22.5° segments instead of 8 45° segments
If I understand the problem correctly, the lines spread out "more" towards 45°, "wasting" resolution that you need for smaller angles. (Incorrect, see below)
In the mapping to 62 bit integers, there must be gaps. Identify enough low density areas to map down to 61 bits? Perhaps plot for a smaller problem to potentially see a pattern?
As the range for x and y is limited, for a given x0, all (legal) x < x0 with y > q must have a smaller angle. Could this help to break down the problem in some way? Perhaps cutting a triangle where points can easily be enumerated out of the problem for each quadrant?
I am trying to write a python algorithm to do the following.
Given a set of positive integers S, find the subset with the smallest sum, greater or equal to k.
For example:
S = [50, 103, 85, 21, 30]
k = 140
subset = [103, 50] (with sum = 153)
The numbers in the initial set are all integers, and k can be arbitrarily large. Usually there will be about 100 numbers in the set.
Of course there's the brute force solution of going through all possible subsets, but that runs in O(2^n), which is infeasible. I have been told that this problem is NP-complete, but that there should be a dynamic programming approach that allows it to run in pseudo-polynomial time, like the knapsack problem; but so far, every attempt at using DP has led me back to solutions that are O(2^n).
Is there a way to apply DP to this problem? If so, how? I find DP hard to understand, so I might have missed something.
Any help is much appreciated.
Well, seeing that the numbers are not integers but reals, the best I can think of is O(2^(n/2) log(2^(n/2))).
It might look worse at first glance, but notice that 2^(n/2) == sqrt(2^n).
So to achieve such complexity we will use a technique known as meet in the middle (a Python sketch follows the steps below):
Split the set into two parts of sizes n/2 and n - n/2.
Use brute force to generate all subsets of each part (including the empty one) and store their sums in arrays, let's call them A and B.
Sort array B.
Now for each element a in A, if B[-1] + a >= k, use binary search to find the smallest element b in B that satisfies a + b >= k.
Out of all such a + b pairs found, choose the smallest.
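A minimal Python sketch of those steps (my own code, assuming we only need the value of the best sum; returning the subset itself would need a little extra bookkeeping):

from bisect import bisect_left
from itertools import combinations

def min_sum_at_least(S, k):
    half = len(S) // 2
    left, right = S[:half], S[half:]

    def subset_sums(part):
        # all 2^len(part) subset sums (duplicates don't matter here)
        return {sum(c) for r in range(len(part) + 1)
                       for c in combinations(part, r)}

    A = subset_sums(left)
    B = sorted(subset_sums(right))
    best = None
    for a in A:
        i = bisect_left(B, k - a)        # smallest b in B with a + b >= k
        if i < len(B):
            cand = a + B[i]
            if best is None or cand < best:
                best = cand
    return best                          # None if no subset reaches k

print(min_sum_at_least([50, 103, 85, 21, 30], 140))   # 153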
The OP changed the question a little; now the values are integers, so here is a dynamic programming solution:
Not much to say, it's the classical knapsack.
For each i in [1, n] we have 2 options for set item i:
1. Include it in the subset: the state changes from (i, w) to (i+1, w + S[i]).
2. Skip it: the state changes from (i, w) to (i+1, w).
Every time we reach some w that's >= k, we update the answer.
Pseudo-code:
visited = Set()   // some set/hashtable object to store visited states
S = [...]         // set of integers from input
int ats = -1;

void solve(int i, int w)   // there are at most n*k different states, so complexity is O(n*k)
{
    if (w >= k)
    {
        if (ats == -1) ats = w;
        else ats = min(ats, w);
        return;
    }
    if (i > n) return;
    if (visited.count(i, w)) return;   // we already visited this state, can skip
    visited.insert(i, w);
    solve(i + 1, w + S[i]);   // take item
    solve(i + 1, w);          // skip item
}

solve(1, 0);
print(ats);
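Since the question asked for Python, here is a compact bottom-up version of the same idea (my own sketch; it keeps every reachable sum below k plus the best sum at or above k, so it runs in O(n*k)):

def min_sum_at_least_dp(S, k):
    reachable = {0}            # subset sums < k built so far
    best = None                # smallest subset sum >= k seen so far
    for v in S:
        new = set()
        for w in reachable:
            s = w + v
            if s >= k:
                if best is None or s < best:
                    best = s
            else:
                new.add(s)
        reachable |= new
    return best                # None if k is unreachable

print(min_sum_at_least_dp([50, 103, 85, 21, 30], 140))   # 153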
I am trying to find an algorithm to solve the following equation:
∑ max(a_i, x) = y
in which the a_i are constants and x is the variable.
I can find an algorithm with O(n log n) time complexity as follows:
First of all, sort the a_i in O(n log n) time and arrange the intervals
(−∞, a_0), (a_0, a_1), …, (a_i, a_{i+1}), …, (a_{n−1}, a_n), (a_n, ∞).
Then, for each interval, assume x belongs to this interval and solve the (now linear) equation. This gives a candidate x̂; test whether x̂ actually lies in that interval. If it does, assign x̂ to x and return it; otherwise, try the next interval until the solution is found.
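A small Python sketch of this O(n log n) approach, for concreteness (my own code; it returns None when no solution exists):

def solve_equation(a, y):
    a = sorted(a)
    n = len(a)
    total = sum(a)
    if y < total:
        return None                    # sum(max(a_i, x)) >= sum(a_i) for every x
    if y == total:
        return a[0]                    # any x <= min(a_i) works; pick one
    suffix = total
    for i in range(1, n + 1):          # assume exactly i of the a_j satisfy a_j <= x
        suffix -= a[i - 1]             # suffix = sum of a_j for j >= i
        x = (y - suffix) / i           # solve i*x + suffix = y
        if x >= a[i - 1] and (i == n or x <= a[i]):
            return x

print(solve_equation([1, 3], 10))      # 5.0, since max(1, 5) + max(3, 5) = 10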
The above method is an O(n log n) algorithm due to the sort. Since the task is just to solve a single equation, I expect an algorithm with O(n) time complexity to exist. Is there any reference for this problem?
First of all, this only has a solution if the sum of all a_i is smaller than y. You should check this first, because the algorithm below depends on this property.
Assume that we have chosen some pivot p from all a_i and want to calculate the x that corresponds to the interval [p, q), where q is the next larger a_i. With n denoting the number of a_i that are smaller than or equal to p, this is:
x = (y − sum of the a_i greater than p) / n
If you move p to the next larger a_i, x changes as follows:
x' = (y − sum of the a_i greater than p') / (n + 1),
where p' is the new pivot and n is the old number of a_i that are smaller than or equal to p. Under the assumption that the sum of all a_i is smaller than y, this clearly leads to a decrease of x. Similarly, if we choose a smaller p, x is increased.
Coming back to the first equation, we can observe the following: if x is smaller than p, we should choose a smaller p. If x is greater than the smallest of the larger a_i, we should choose a larger p. In every other case, we have found the right x.
This can be utilized in a quickselect procedure. #MvG's comment brought me onto this track, and all credit for the quickselect idea goes to him. Here is some pseudocode (modified version from Wikipedia):
findX(list, y)
    left := 0
    right := length(list) - 1
    sumGreater := 0    // the sum of all a_i greater than the current interval
    numSmaller := 0    // the number of all a_i smaller than the current interval
    minGreater := inf  // the minimum of all a_i greater than the current interval
    loop
        if left = right
            return (y - sumGreater) / (numSmaller + 1)
        pivotIndex := medianOfMedians(list, left, right)
        // the partition function will also sum the elements larger than the pivot,
        // count the elements smaller than the pivot, and find the minimum of the
        // larger elements
        (pivotIndex, partialSumGreater, partialNumSmaller, partialMinGreater)
            := partition(list, left, right, pivotIndex)
        x := (y - sumGreater - partialSumGreater) / (numSmaller + partialNumSmaller + 1)
        if (x >= list[pivotIndex] && x < min(partialMinGreater, minGreater))
            return x
        else if x < list[pivotIndex]
            right := pivotIndex - 1
            minGreater := list[pivotIndex]
            sumGreater += partialSumGreater + list[pivotIndex]
        else
            left := pivotIndex + 1
            numSmaller += partialNumSmaller + 1
The key idea is that the partitioning function gathers some additional statistics. This does not change the time complexity of the partitioning function because it requires O(n) additional operations, leaving a total time complexity of O(n) for the partitioning function. The medianOfMedians function is also linear in time. The remaining operations in the loop are constant time. Assuming that the median of medians yields good pivots, the total time of the entire algorithm is approximately O(n + n/2 + n/4 + n/8 ...) = O(n).
Since comments might get deleted, I'm turning my own comments into a coherent answer. Contrary to the original question, I'm using indices 1 through n, avoiding the a_0 originally used. So this is consistent one-based indexing using inclusive indices.
Assume for the moment that the b_i are the coefficients from your input, but in sorted order, so b_i ≤ b_{i+1}. As you essentially already wrote, if b_i ≤ x ≤ b_{i+1} then the result is i·x + b_{i+1} + ⋯ + b_n, since the first i terms will use the x and the other terms will use the b_j. Solving for x you get x = (y − b_{i+1} − ⋯ − b_n) / i, and putting that back into your inequality you have i·b_i ≤ y − b_{i+1} − ⋯ − b_n ≤ i·b_{i+1}. Concentrating on one of the inequalities, you want the largest i such that
i·b_i ≤ y − b_{i+1} − ⋯ − b_n    (subsequently called "the inequality")
But in order to make this work on unsorted ai, you'd need something similar to the median of medians. That is an algorithm which achieves O(n) guaranteed worst-case behavior for the problem of selecting a median, where the typical quickselect would take O(n²) in the worst case although it usually does quite well in practice.
Actually your problem is not that different from quickselect. You can pick a pivot coefficient, and split the remainder into larger and smaller values. Then you evaluate the inequality for the pivot element. If it is satisfied, you recurse into the list of larger elements, otherwise you recurse into the list of smaller elements, until at some point you have two adjacent elements, one which satisfies the inequality and one which does not.
This is O(n²) in the worst case, since you might need O(n) recursive calls, each of them taking O(n) time to process its input; just like the O(n²) quickselect itself is suboptimal. The median-of-medians technique shows that the selection problem can indeed be solved in O(n). So we either need to find a similar solution here, or reformulate this problem in terms of finding the median, or write some algorithm which makes use of the median in a reasonable way.
Actually Nico Schertler found a way to achieve that last option: Take the algorithm I outlined above, but choose the pivot element to be the median. That way you can guarantee that each recursive call will process at most half as much input as the previous call. Since the median of medians itself is O(n) this can be done without exceeding the O(n) bound for each recursive call.
So in pseudocode it's like this (using inclusive indices throughout):
# f: Process whole problem with coefficients a_1 through a_n
f(y, a, n) := begin
    if y < (sum of a_i for i from 1 through n):   # O(n)
        throw Error "Cannot satisfy equation"     # Or omit check and risk division by zero
    return g(a, 1, n, y)                          # O(n)
end

# g: Recursively process part of the problem, namely a_l through a_r
# Precondition: we know inequality holds for i = l - 1 and fails for i = r + 1
# a: the array as provided to f; will get modified in place
# l: left index (inclusive)
# r: right index (inclusive)
# y: (original y) - (sum of a_j for j from r + 1 through n)
g(a, l, r, y) := begin                            # process a_l through a_r    O(r-l)
    if r < l:                                     # inequality holds in r but fails in l    O(1)
        return y / r                              # compute x for the case of i = r    O(1)
    m = median(a, l, r)                           # computed using median of medians    O(r-l)
    i = floor((l + r) / 2)                        # index of median, with same tie breaks    O(1)
    partition(a, l, r, m)                         # so a_l…a_(i-1) ≤ a_i = m ≤ a_(i+1)…a_r    O(r-l)
    rhs = y - (sum of a_j for j from i + 1 to r)  # O((r-l)/2)
    if i * a_i ≤ rhs:                             # condition holds, check larger i
        return g(a, i + 1, r, y)                  # recurse in right half of list    O((r-l)/2)
    else:                                         # condition fails, check smaller i
        return g(a, l, i - 1, rhs - m)            # recurse in left half of list    O((r-l)/2)
end
The sum-subset problem states:
Given a set of integers, is there a non-empty subset whose sum is zero?
This problem is NP-complete in general. I'm curious if the complexity of this slight variant is known:
Given a set of integers, is there a subset of size k whose sum is zero?
For example, if k = 1, you can do a binary search to find the answer in O(log n). If k = 2, then you can get it down to O(n log n) (e.g. see Find a pair of elements from an array whose sum equals a given number). If k = 3, then you can do O(n^2) (e.g. see Finding three elements in an array whose sum is closest to a given number).
Is there a known bound that can be placed on this problem as a function of k?
As motivation, I was thinking about this question How do you partition an array into 2 parts such that the two parts have equal average? and trying to determine if it is actually NP-complete. The answer lies in whether or not there is a formula as described above.
Barring a general solution, I'd be very interested in knowing an optimal bound for k=4.
For k=4, space complexity O(n), time complexity O(n^2 * log(n))
Sort the array. Starting from 2 smallest and 2 largest elements, calculate all lesser sums of 2 elements (a[i] + a[j]) in the non-decreasing order and all greater sums of 2 elements (a[k] + a[l]) in the non-increasing order. Increase lesser sum if total sum is less than zero, decrease greater one if total sum is greater than zero, stop when total sum is zero (success) or a[i] + a[j] > a[k] + a[l] (failure).
The trick is to iterate through all the indexes i and j in such a way, that (a[i] + a[j]) will never decrease. And for k and l, (a[k] + a[l]) should never increase. A priority queue helps to do this:
Put key=(a[i] + a[j]), value=(i = 0, j = 1) to priority queue.
Pop (sum, i, j) from priority queue.
Use sum in the above algorithm.
Put (a[i+1] + a[j]), i+1, j and (a[i] + a[j+1]), i, j+1 into the priority queue, but only if these elements were not already used. To keep track of used elements, maintain an array of the maximal used 'j' for each 'i'. It is enough to use only values of 'j' that are greater than 'i'.
Continue from step 2.
For k>4
If space complexity is limited to O(n), I cannot find anything better than using brute force for k-4 values and the above algorithm for the remaining 4 values. Time complexity O(n^(k-2) * log(n)).
For very large k integer linear programming may give some improvement.
Update
If n is very large (on the same order as the maximum integer value), it is possible to implement an O(1) priority queue, improving the complexities to O(n^2) and O(n^(k-2)).
If n >= k * INT_MAX, a different algorithm with O(n) space complexity is possible: precalculate a bitset of all possible sums of k/2 values, and use it to check the sums of the other k/2 values. Time complexity is O(n^(ceil(k/2))).
The problem of determining whether 0 in W + X + Y + Z = {w + x + y + z | w in W, x in X, y in Y, z in Z} is basically the same except for not having annoying degenerate cases (i.e., the problems are inter-reducible with minimal resources).
This problem (and thus the original for k = 4) has an O(n^2 log n)-time, O(n)-space algorithm. The O(n log n)-time algorithm for k = 2 (to determine whether 0 in A + B) accesses A in sorted order and B in reverse sorted order. Thus all we need is an O(n)-space iterator for A = W + X, which can be reused symmetrically for B = Y + Z. Let W = {w1, ..., wn} in sorted order. For all x in X, insert a key-value item (w1 + x, (1, x)) into a priority queue. Repeatedly remove the min element (wi + x, (i, x)) and insert (wi+1 + x, (i+1, x)).
Question that is very similar:
Is this variant of the subset sum problem easier to solve?
It's still NP-complete.
If it were not, subset-sum would also be in P, since a general instance could be decided as F(1) or F(2) or ... or F(n), where F(k) is your fixed-size variant. The total running time would be the sum of the n individual running times, which would still be polynomial; that contradicts subset-sum being NP-complete.
Note that if you have certain bounds on the inputs you can achieve polynomial time.
Also note that the brute-force runtime can be calculated with binomial coefficients.
The solution for k=4 in O(n^2 log(n)):
Step 1: Calculate the pairwise sums and sort the list. There are n(n-1)/2 sums, so the complexity is O(n^2 log(n)). Keep the identities of the individuals which make up each sum.
Step 2: For each element in the above list, search for the complement and make sure they don't share "the individuals". There are n^2 searches, each with complexity O(log(n)).
EDIT: The space complexity of the original algorithm is O(n^2). The space complexity can be reduced to O(1) by simulating a virtual 2D matrix (O(n), if you consider space to store sorted version of the array).
First about the 2D matrix: sort the numbers and create a matrix X using pairwise sums. Now the matrix is arranged in such a way that all the rows and columns are sorted. To search for a value in this matrix, search the numbers on the diagonal. If the number is in between X[i,i] and X[i+1,i+1], you can basically halve the search space into the two matrices X[i:N, 0:i] and X[0:i, i:N]. The resulting search algorithm is O(log^2 n) (I am not very sure; can somebody check it?).
Now, instead of using a real matrix, use a virtual matrix where X[i,j] are calculated as needed instead of pre-computing them.
Resulting time complexity: O( (nlogn)^2 ).
PS: In the following link, it says the complexity of 2D sorted matrix search is O(n). If that is true (i.e. O(log^2 n) is incorrect), then the final complexity is O(n^3).
To build on awesomo's answer... if we can assume that numbers are sorted, we can do better than O(n^k) for given k; simply take all O(n^(k-1)) subsets of size (k-1), then do a binary search in what remains for a number that, when added to the first (k-1), gives the target. This is O(n^(k-1) log n). This means the complexity is certainly less than that.
In fact, if we know that the complexity is O(n^2) for k=3, we can do even better for k > 3: choose all (k-3)-subsets, of which there are O(n^(k-3)), and then solve the problem in O(n^2) on the remaining elements. This is O(n^(k-1)) for k >= 3.
However, maybe you can do even better? I'll think about this one.
EDIT: I was initially going to add a lot proposing a different take on this problem, but I've decided to post an abridged version. I encourage other posters to see whether they believe this idea has any merit. The analysis is tough, but it might just be crazy enough to work.
We can use the fact that we have a fixed k, and that sums of odd and even numbers behave in certain ways, to define a recursive algorithm to solve this problem.
First, modify the problem so that you have both even and odd numbers in the list (this can be accomplished by dividing by two if all are even, or by subtracting 1 from numbers and k from the target sum if all are odd, and repeating as necessary).
Next, use the fact that even target sums can be reached only by using an even number of odd numbers, and odd target sums can be reached using only an odd number of odd numbers. Generate appropriate subsets of the odd numbers, and call the algorithm recursively using the even numbers, the sum minus the sum of the subset of odd numbers being examined, and k minus the size of the subset of odd numbers. When k = 1, do binary search. If ever k > n (not sure this can happen), return false.
If you have very few odd numbers, this could allow you to very quickly pick up terms that must be part of a winning subset, or discard ones that cannot. You can transform problems with lots of even numbers to equivalent problems with lots of odd numbers by using the subtraction trick. The worst case must therefore be when the numbers of even and odd numbers are very similar... and that's where I am right now. A uselessly loose upper bound on this is many orders of magnitudes worse than brute-force, but I feel like this is probably at least as good as brute-force. Thoughts are welcome!
EDIT2: An example of the above, for illustration.
{1, 2, 2, 6, 7, 7, 20}, k = 3, sum = 20.
Subset {}:
    {2, 2, 6, 20}, k = 3, sum = 20
    = {1, 1, 3, 10}, k = 3, sum = 10
    Subset {}:
        {10}, k = 3, sum = 10
        Failure
    Subset {1, 1}:
        {10}, k = 1, sum = 8
        Failure
    Subset {1, 3}:
        {10}, k = 1, sum = 6
        Failure
Subset {1, 7}:
    {2, 2, 6, 20}, k = 1, sum = 12
    Failure
Subset {7, 7}:
    {2, 2, 6, 20}, k = 1, sum = 6
    Success
The time complexity is trivially O(n^k) (number of k-sized subsets from n elements).
Since k is a given constant, a (possibly quite high-order) polynomial upper bounds the complexity as a function of n.
I have 2 sets of integers, A and B, not necessarily of the same size. For my needs, I take the distance between each 2 elements a and b (integers) to be just abs(a-b).
I am defining the distance between the two sets as follows:
If the sets are of the same size, minimize the sum of distances of all pairs [a,b] (a from A and b from B), minimization over all possible 'pairs partitions' (there are n! possible partitions).
If the sets are not of the same size, let's say A of size m and B of size n, with m < n, then minimize the distance from (1) over all subsets of B which are of size m.
My question is whether the following algorithm (just an intuitive guess) gives the right answer, according to the definition written above.
Construct a matrix D of size m X n, with D(i,j) = abs(A(i)-B(j))
Find the smallest element of D, accumulate it, and delete the row and the column of that element. Accumulate the next smallest entry, and keep accumulating until all rows and columns are deleted.
for example, if A={0,1,4} and B={3,4}, then D is (with the elements above and to the left):
3 4
0 3 4
1 2 3
4 1 0
And the distance is 0 + 2 = 2, coming from pairing 4 with 4 and 3 with 1.
Note that this problem is referred to sometimes as the skis and skiers problem, where you have n skis and m skiers of varying lengths and heights. The goal is to match skis with skiers so that the sum of the differences between heights and ski lengths is minimized.
To solve the problem you could use minimum weight bipartite matching, which requires O(n^3) time.
Even better, you can achieve O(n^2) time with O(n) extra memory using the simple dynamic programming algorithm below.
Optimally, you can solve the problem in linear time if the points are already sorted using the algorithm described in this paper.
O(n^2) dynamic programming algorithm:
if (size(A) > size(B))
    swap(A, B);                       // make A the smaller set; it gets matched entirely
sort(A);
sort(B);
opt = array(size(B));
nopt = array(size(B));
opt[0] = abs(A[0] - B[0]);
for (i = 1; i < size(B); i++)         // base case: best match of A[0] within B[0..i]
    opt[i] = min(opt[i - 1], abs(A[0] - B[i]));
for (i = 1; i < size(A); i++) {
    fill(nopt, infinity);
    for (j = 1; j < size(B); j++)
        nopt[j] = min(nopt[j - 1], opt[j - 1] + abs(A[i] - B[j]));
    swap(opt, nopt);                  // swap after the inner loop completes
}
return opt[size(B) - 1];
After each iteration i of the outer for loop above, opt[j] contains the optimal solution matching {A[0],..., A[i]} using the elements {B[0],..., B[j]}.
The correctness of this algorithm relies on the fact that in any optimal matching if a1 is matched with b1, a2 is matched with b2, and a1 < a2, then b1 <= b2.
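A small Python rendering of this DP for checking (my own sketch, matching the smaller set entirely into the larger one):

def set_distance(A, B):
    if len(A) > len(B):
        A, B = B, A                    # A is now the smaller set
    A, B = sorted(A), sorted(B)
    INF = float("inf")
    # opt[j] = best matching of A[0] into {B[0], ..., B[j]}
    opt = [abs(A[0] - B[0])]
    for j in range(1, len(B)):
        opt.append(min(opt[-1], abs(A[0] - B[j])))
    for i in range(1, len(A)):
        nopt = [INF] * len(B)
        for j in range(1, len(B)):
            nopt[j] = min(nopt[j - 1], opt[j - 1] + abs(A[i] - B[j]))
        opt = nopt
    return opt[-1]

print(set_distance([0, 1, 4], [3, 4]))   # 2: pair 4 with 4 and 3 with 1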
In order to get the optimum, solve the assignment problem on D.
The assignment problem finds a perfect matching in a bipartite graph such that the total edge weight is minimized, which maps perfectly to your problem. It is also in P.
EDIT to explain how OP's problem maps onto assignment.
For simplicity of explanation, extend the smaller set with special elements e_k.
Let A be the set of workers, and B be the set of tasks (the contents are just labels).
Let the cost be the distance between an element in A and B (i.e. an entry of D). The distance between e_k and anything is 0.
Then, we want to find a perfect matching of A and B (i.e. every worker is matched with a task), such that the cost is minimized. This is the assignment problem.
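For completeness, a small SciPy-based sketch (my own; it skips the explicit e_k padding because, as far as I know, linear_sum_assignment accepts a rectangular cost matrix and then simply matches every element of the smaller set):

import numpy as np
from scipy.optimize import linear_sum_assignment

def set_distance_assignment(A, B):
    D = np.abs(np.subtract.outer(A, B))      # D[i, j] = |A[i] - B[j]|
    rows, cols = linear_sum_assignment(D)    # Hungarian-style optimal assignment
    return D[rows, cols].sum()

print(set_distance_assignment([0, 1, 4], [3, 4]))   # 2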
No, it's not optimal. For example:
With A = {3, 7} and B = {0, 4}, the greedy rule picks the pairs (3, 4) and (7, 0), giving distance 1 + 7 = 8, but you should choose (3, 0) and (7, 4), in which case the distance is 3 + 3 = 6.
Your answer gives a good approximation to the minimum, but not necessarily the true minimum. You are following a "greedy" approach, which is generally much easier and gives good results, but cannot guarantee the best answer.