Max coverage disjoint intervals - algorithm

Assume you have k <= 10^5 intervals [a_i, b_i] within [1, 10^18] (some of them may overlap), and you need to choose a set of mutually disjoint intervals such that their union is maximal. Not the maximum number of disjoint intervals; the union must cover the most.
Trying all 2^k possible subsets is infeasible.
Greedy approaches, ordering by a_i (the interval covering algorithm) or by b_i (the maximum number of disjoint intervals algorithm), didn't work.
I can't figure out whether there is a dynamic programming solution.
Given the size of the input, I think the solution should be O(k log k) or O(k).
Examples
1. [1,4], [3,5], [5,9], [7,18]
   Sol: [1,4] ∪ [7,18]
2. [1,2], [2,6], [3,4], [5,7]
   Sol: [1,2] ∪ [3,4] ∪ [5,7]
3. [2,30], [25,39], [30,40]
   Sol: [2,30]

The problem can be solved in O(k log(k)).
First sort the intervals by their upper bounds (the b_is). Let I(1), I(2), ..., I(k) be the list of sorted intervals. That is,
b_1 <= b_2 <= ... <= b_k
Denote by w(i) the length of interval I(i). That is,
w(i) = b_i - a_i
Denote by f(i) the total length of the optimal solution among those whose last interval is I(i). That is, the solution corresponding to f(i) is a set which:
1. contains the interval I(i)
2. doesn't contain any interval whose upper bound is above b_i
3. has the maximum cover among the sets of (non-overlapping) intervals satisfying 1 and 2
Now we are going to compute f(1), f(2), ..., f(k) and return the maximum value of them all. Clearly, the optimal solution corresponds to one of the f(i)s and therefore the maximal f(i) is the optimal solution.
To compute each f(i) we use dynamic programming. We do this by relying on the following recurrence relation:
f(i) = w(i) + max{f(j) | b_j < a_i}
I'll demonstrate the computation with your first input example:
I(1)=[1, 4], w(1)=3
I(2)=[3, 5], w(2)=2
I(3)=[5, 9], w(3)=4
I(4)=[7, 18], w(4)=11
We compute f(i) for i=1, 2, 3, 4:
f(1) = w(1) + max{None} = 3
f(1) intervals: {I(1)}
f(2) = w(2) + max{None} = 2
f(2) intervals: {I(2)}
f(3) = w(3) + max{f(1)} = 4 + 3 = 7
f(3) intervals = {I(1), I(3)}
f(4) = w(4) + max{f(1), f(2)} = 11 + f(1) = 11 + 3 = 14
f(4) intervals = {I(1), I(4)}
The maximum f(i) is f(4) which corresponds to the set of intervals {I(1), I(4)}, the optimal solution.
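Here is a minimal Python sketch of this DP (the function name is mine, not from the answer). Since the intervals are processed in order of upper bound, a running prefix maximum of the f values plus a binary search for the last compatible interval gives the claimed O(k log k):
    import bisect

    def max_disjoint_cover(intervals):
        intervals = sorted(intervals, key=lambda iv: iv[1])   # sort by upper bound b_i
        bs = [b for _, b in intervals]                        # sorted upper bounds
        prefix_best = []                                      # prefix_best[i] = max(f(1), ..., f(i+1))
        answer = 0
        for a, b in intervals:
            j = bisect.bisect_left(bs, a)                     # number of intervals with b_j < a_i
            best_before = prefix_best[j - 1] if j > 0 else 0
            f = (b - a) + best_before                         # f(i) = w(i) + max{f(j) | b_j < a_i}
            answer = max(answer, f)
            prefix_best.append(max(prefix_best[-1], f) if prefix_best else f)
        return answer

    print(max_disjoint_cover([(1, 4), (3, 5), (5, 9), (7, 18)]))   # prints 14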

There seems to be an O(k * log(k)) solution. It can be achieved with a segment tree data structure.
We may at first populate an endPos array of segment endings and sort it. Then memorise, for each segment, its corresponding endPos index: let endPosIdx be an array such that endPosIdx[j] stores the index in endPos where the j-th segment ends.
Next we will introduce a segment tree. It will process the following requests:
1. getMax(i) - get maximum value on the range [0, i].
2. update(i, value) - update maximum at i-th position with value.
Here i is an index into the endPos array. Calling getMax(i) we ask what maximum cover we can achieve if none of the chosen segments ends after endPos[i]. Calling update(i, value) we say that there now exists a cover with length value ending at endPos[i].
Sort all segments in increasing order by their starting position a_j and process them in that order. The gist is to find the largest cover if we certainly take the current segment into the resulting set. The current cover equals the sum of the length of the current segment and the maximum cover over segments ending before the current one starts. Let j be the index of the current segment (they are sorted by start position). Let i then be the maximal index such that endPos[i] ≤ a_j (i may be found from j by binary search). Then we can find
cover_j = length_j + getMax(i)
Next we update the segment tree by calling update(endPosIdx[j], cover_j) and proceed to the next segment.
After processing all the segments, the solution can be found by calling getMax(size(endPos)).
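As a rough Python sketch of this answer, a Fenwick (binary indexed) tree for prefix maxima can stand in for a full segment tree; that substitution works here because values are only ever raised and every query is a prefix query. The getMax/update names mirror the description above, and the ≤ comparison is kept as written:
    import bisect

    class PrefixMaxTree:
        def __init__(self, n):
            self.t = [0] * (n + 1)
        def update(self, i, value):            # record a cover of length `value` ending at endPos[i]
            i += 1
            while i < len(self.t):
                self.t[i] = max(self.t[i], value)
                i += i & (-i)
        def getMax(self, i):                   # maximum cover among positions 0..i
            best = 0
            i += 1
            while i > 0:
                best = max(best, self.t[i])
                i -= i & (-i)
            return best

    def max_cover(segments):
        endPos = sorted(b for _, b in segments)
        tree = PrefixMaxTree(len(endPos))
        best = 0
        for a, b in sorted(segments):                      # increasing start position
            i = bisect.bisect_right(endPos, a) - 1         # largest i with endPos[i] <= a
            cover = (b - a) + (tree.getMax(i) if i >= 0 else 0)
            tree.update(bisect.bisect_left(endPos, b), cover)
            best = max(best, cover)
        return best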

Related

Segments with most points algorithm analysis

We define x_1, x_2, ..., x_n to be a sequence of points (numbers) and [s_i, t_i], for 1 ≤ i ≤ n, to be a set of n segments. Point x_j is inside segment i if s_i ≤ x_j ≤ t_i. I want to find the segment with the most points.
Now to solve this, I am thinking we can sort x and the intervals based on s. Keep a separate array, T, such that T[i] = the number of points in segment i. Initialize all the values in this array to 0. Then, for each x, check all the intervals that fit the constraint and increment T[i] accordingly.
This in the worst case scenario can take O(n^2). But I feel like I have a lot of redundancy here. How do I make this more efficient?
Just to clarify: if your problem is one-dimensional, the points in X (x_1 to x_n) are numbers, and the segments are intervals.
You can easily solve this by sorting X and using the resulting indices. You can effectively calculate the number of points within a segment [s, t] by finding the two corresponding indices i and j. Find (using binary-search or whatever is most efficient) i such that x_i < s <= x_(i+1), and j such that x_j <= t < x_(j+1). Note the inequalities (in case s or t might be in X). The number of points within [s, t] is equal to j-i.
If it is possible that s < x_1 or t > x_n, simply append a point to both ends of X (a minimum and a maximum).
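A small Python sketch of this counting step (the names are mine); Python's bisect does the index search, which also makes the sentinel points unnecessary:
    import bisect

    def points_in_segment(xs_sorted, s, t):
        i = bisect.bisect_left(xs_sorted, s)    # number of points strictly below s
        j = bisect.bisect_right(xs_sorted, t)   # number of points <= t
        return j - i                            # points inside [s, t]

    def best_segment(points, segments):
        xs = sorted(points)
        return max(segments, key=lambda seg: points_in_segment(xs, seg[0], seg[1]))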
This has complexity O(n log n), limited by the sorting algorithm. If you can use something like counting sort that uses the values as indices into an array (or keys into a multiset), then you can improve on that by doing some more work.
Let S be the set of points containing every s and every t for all the segments [s, t]. The idea is to build an indexing array for X (kind of like for a counting sort).
First, build the array A such that A[x in X] = 1 and A[x not in X] = 0. Then, go through it again to build the array A_less such that A_less[i] equals the sum of all A[j] with j < i.
For example, if A = [1, 0, 0, 1, 0, 1, 0], then A_less = [0, 1, 1, 1, 2, 2, 3]. You can build this array using a simple counter.
You can now refer directly to this array to get the number of points whose values are less than a given value. In the previous example, there are clearly three points in X, with values 0, 3, and 5. By referring to A_less, you can know that there are A_less[4] = 2 points with values less than or equal to 4.
Similarly, build A_less_equal such that A_less_equal[i] equals the sum of all A[j] with j <= i. Using the same example, A_less_equal = [1, 1, 1, 2, 2, 3, 3].
Now, for any segment [s, t], you can get the number of points it contains by computing A_less_equal[t] - A_less[s]. All of that has complexity O(n).
If your points are not integers (or at least, not easily usable as indices), then you can still use the same idea, replacing the arrays with sorted sets, the keys of which are every value in X or S (you need to add the values in S to be able to look them up at the end).
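A sketch of those arrays in Python (the names are mine; it assumes the values are small non-negative integers usable as indices):
    def build_count_arrays(X, max_value):
        A = [0] * (max_value + 1)
        for x in X:
            A[x] = 1                       # mark values present in X
        A_less = [0] * (max_value + 1)
        A_less_equal = [0] * (max_value + 1)
        running = 0
        for v in range(max_value + 1):
            A_less[v] = running            # points with value < v
            running += A[v]
            A_less_equal[v] = running      # points with value <= v
        return A_less, A_less_equal

    # Points inside a segment [s, t]: A_less_equal[t] - A_less[s]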

Maximum of sums of unsorted array and each of a number of sorted arrays

Given an unsorted array
A = a_1 ... a_n
And a set of sorted Arrays
B_i = b_i_1 ... b_i_n # for i from 1 to $large_number
I would like to find the maximums from the (not yet calculated) sum arrays
C_i = (a_1 + b_i_1) ... (a_n + b_i_n)
for each i.
Is there a trick to do better than just calculating all the C_i and finding their maximums in O($large_number * n)?
Can we do better when we know that the B arrays are just shifts from an endless sequence,
e.g.
S = 0 1 4 9 16 ...
B_i = S[i:i+n]
(The above sequence has the maybe advantageous property that S_i - S_(i-1) > S_(i-1) - S_(i-2).)
There are $large_number * n data values in your first problem, so there can't be any such trick.
You can prove this with an adversary argument. Suppose you have an algorithm that solves your problem without looking at all n * $large_number entries of b. I'm going to pick a fixed a, namely (-10, -20, -30, ..., -10n). For the first $large_number * n - 1 times the algorithm looks at an entry b_(i,j), I'll answer that it's 10j, for a sum of zero. The last time it looks at an entry, I'll answer that it's 10j+1, for a sum of 1.
If $large_number is Omega(n), your second problem requires you to look at n * $large_number entries of S, so it also can't have any such trick.
However, if you specify S, there may be something. And if $large_number <= n/2 (or whatever it is), then, all of the entries of S must be sorted, so you only have to look at the last B.
If we don't know anything about the sequence, I don't think it's possible to do better than O($large_number * n).
However, if it's just shifts of an endless sequence, we can do it in O($large_number + n):
We calculate B_0 in O(n).
Then B_1 = (B_0 - S[0]) + S[n].
And in general: B_i = (B_(i-1) - S[i-1]) + S[i-1+n].
So we can calculate all the other entries and the max in O($large_number).
This is for a general sequence - if we have some info about it, it might be possible to do better.
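A minimal Python version of that sliding-window update (the names are mine; it assumes, as in this answer, that we want the shift i maximising SUM(A) + SUM(B_i), and that S has at least num_shifts + n - 1 entries):
    def best_shift_sum(S, n, num_shifts, sum_A):
        window = sum(S[:n])                       # SUM(B_0)
        best = window
        for i in range(1, num_shifts):
            window += S[i - 1 + n] - S[i - 1]     # B_i = (B_(i-1) - S[i-1]) + S[i-1+n]
            best = max(best, window)
        return best + sum_A                       # SUM(A) is the same for every shift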
we know that the B arrays are just shifts from an endless sequence,
e.g.
S = 0 1 4 9 16 ...
B_i = S[i:i+n]
You can easily calculate S[i:i+n] as (sum of squares from 1 to i+n) - (sum of squares from 1 to i-1)
See https://math.stackexchange.com/questions/183316/how-to-get-to-the-formula-for-the-sum-of-squares-of-first-n-numbers
With the provided example, S_1 = 0, S_2 = 1, S_3 = 4, ...
Let f(n) = SUM of S_i for i = 1 to n = (n-1)(n)(2n-1)/6
B_i = f(i+n) - f(i-1)
You then add SUM(A) to each sum.
Another approach is to calculate the difference between B_i and B_(i-1):
That would be: S[i:i+n] - S[i-1:i+n-1] = S(i+n) - S(i-1)
That way, you can just calculate the difference of the sums of each array with the previous one. In my understanding, since C_i = SUM(B_i) + SUM(A), SUM(A) becomes a constant that is irrelevant in finding the maximum.

Generate a random integer from 0 to N-1 which is not in the list

You are given N and an int K[].
The task at hand is to generate a uniformly random number between 0 and N-1 which doesn't exist in K.
N is strictly an integer >= 0.
And K.length is < N-1. And 0 <= K[i] <= N-1. Also assume K is sorted and each element of K is unique.
You are given a function uniformRand(int M) which generates uniform random number in the range 0 to M-1 And assume this functions's complexity is O(1).
Example:
N = 7
K = {0, 1, 5}
the function should return any random number { 2, 3, 4, 6 } with equal
probability.
I could get an O(N) solution for this: first generate a random number between 0 and N - K.length, and map the thus generated random number to a number not in K. The second step will take the complexity to O(N). Can it be done better, in maybe O(log N)?
You can use the fact that all the numbers in K[] are between 0 and N-1 and they are distinct.
For your example case, you generate a random number from 0 to 3. Say you get a random number r. Now you conduct binary search on the array K[].
Initialize i = K.length/2.
Find K[i] - i. This will give you the number of numbers missing from the array in the range 0 to i.
For example K[2] = 5. So 3 elements are missing from K[0] to K[2] (2,3,4)
Hence you can decide whether you have to conduct the remaining search in the first part of array K or the next part. This is because you know r.
This search will give you a complexity of log(K.length)
EDIT: For example,
N = 7
K = {0, 1, 4} // modified the array to clarify the algorithm steps.
the function should return any random number { 2, 3, 5, 6 } with equal probability.
Random number generated between 0 and N-K.length = random{0-3}. Say we get 3. Hence we require the 4th missing number in array K.
Conduct binary search on array K[].
Initial i = K.length/2 = 1.
Now we see K[1] - 1 = 0. Hence no number is missing up to i = 1. Hence we search on the latter part of the array.
Now i = 2. K[2] - 2 = 4 - 2 = 2. Hence there are 2 missing numbers up to index i = 2. But we need the 4th missing element. So we again have to search in the latter part of the array.
Now we reach an empty array. What should we do now? If we reach an empty array between say K[j] & K[j+1] then it simply means that all elements between K[j] and K[j+1] are missing from the array K.
Hence all elements above K[2] are missing from the array, namely 5 and 6. We need the 4th element out of which we have already discarded 2 elements. Hence we will choose the second element which is 6.
Binary search.
The basic algorithm:
(not quite the same as the other answer - the number is only generated at the end)
Start in the middle of K.
By looking at the current value and its index, we can determine the number of pickable numbers (numbers not in K) to the left.
Similarly, by including N, we can determine the number of pickable numbers to the right.
Now randomly go either left or right, weighted based on the count of pickable numbers on each side.
Repeat in the chosen subarray until the subarray is empty.
Then generate a random number in the range consisting of the numbers before and after the subarray in the array.
The running time would be O(log |K|), and, since |K| < N-1, O(log N).
The exact mathematics for number counts and weights can be derived from the example below.
Extension with K containing a bigger range:
Now let's say (for enrichment purposes) K can also contain values N or larger.
Then, instead of starting with the entire K, we start with a subarray up to position min(N, |K|), and start in the middle of that.
It's easy to see that the N-th position in K (if one exists) will be >= N, so this chosen range includes any possible number we can generate.
From here, we need to do a binary search for N (which would give us a point where all values to the left are < N, even if N could not be found) (the above algorithm doesn't deal with K containing values greater than N).
Then we just run the algorithm as above with the subarray ending at the last value < N.
The running time would be O(log N), or, more specifically, O(log min(N, |K|)).
Example:
N = 10
K = {0, 1, 4, 5, 8}
So we start in the middle - 4.
Given that we're at index 2, we know there are 2 elements to the left, and the value is 4, so there are 4 - 2 = 2 pickable values to the left.
Similarly, there are 10 - (4+1) - 2 = 3 pickable values to the right.
So now we go left with probability 2/(2+3) and right with probability 3/(2+3).
Let's say we went right, and our next middle value is 5.
We are at the first position in this subarray, and the previous value is 4, so we have 5 - (4+1) = 0 pickable values to the left.
And there are 10 - (5+1) - 1 = 3 pickable values to the right.
We can't go left (0 probability). If we go right, our next middle value would be 8.
There would be 2 pickable values to the left, and 1 to the right.
If we go left, we'd have an empty subarray.
So then we'd generate a number between 5 and 8, which would be 6 or 7 with equal probability.
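A Python sketch of this weighted walk (the names are mine; random.randrange stands in for both the weighted left/right choice and the final draw):
    import random

    def rand_not_in(N, K):
        # left_val and right_val bracket the remaining value range (both exclusive);
        # K[lo:hi] is the subarray of forbidden values still inside that range.
        lo, hi = 0, len(K)
        left_val, right_val = -1, N
        while lo < hi:
            mid = (lo + hi) // 2
            v = K[mid]
            left_count = (v - left_val - 1) - (mid - lo)          # pickable values below v
            right_count = (right_val - v - 1) - (hi - mid - 1)    # pickable values above v
            if random.randrange(left_count + right_count) < left_count:
                hi, right_val = mid, v                            # go left
            else:
                lo, left_val = mid + 1, v                         # go right
        return random.randrange(left_val + 1, right_val)          # uniform over the remaining gap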
This can be solved by basically solving this:
Find the rth smallest number not in the given array, K, subject to
conditions in the question.
For that consider the implicit array D, defined by
D[i] = K[i] - i for 0 <= i < L, where L is length of K
We also set D[-1] = 0 and D[L] = N
We also define K[-1] = -1.
Note, we don't actually need to construct D. Also note that D is sorted (and all elements non-negative), as the numbers in K[] are unique and increasing.
Now we make the following claim:
CLAIM: To find the rth smallest number not in K[], we need to find the rightmost occurrence of r' in D (which occurs at a position we call j), where r' is the largest number in D which is < r. Such an r' exists, because D[-1] = 0. Once we find such an r' (and j), the number we are looking for is r - r' + K[j].
Proof: Basically, the definition of r' and j tells us that there are exactly r' numbers missing from 0 to K[j], and at least r numbers missing from 0 to K[j+1]. Thus all the numbers from K[j]+1 to K[j+1]-1 are missing (and these missing numbers are at least r-r' in number), and the number we seek is among them, given by K[j] + r - r'.
Algorithm:
In order to find (r',j) all we need to do is a (modified) binary search for r in D, where we keep moving to the left even if we find r in the array.
This is an O(log |K|) algorithm.
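A Python sketch of this, using an equivalent formulation of the same binary search (the names are mine; random.randrange stands in for the given uniformRand):
    import random

    def rth_missing(K, r):
        # Find the r-th smallest number (1-based) in 0..N-1 that is not in K.
        # D[i] = K[i] - i counts the numbers missing below K[i]; binary search on it.
        lo, hi = 0, len(K)
        while lo < hi:
            mid = (lo + hi) // 2
            if K[mid] - mid >= r:
                hi = mid
            else:
                lo = mid + 1
        # exactly lo elements of K lie below the answer
        return r - 1 + lo

    def rand_not_in_K(N, K):
        r = random.randrange(N - len(K)) + 1
        return rth_missing(K, r)

    # rand_not_in_K(7, [0, 1, 5]) returns 2, 3, 4 or 6 with equal probability.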
If you are running this many times, it probably pays to speed up your generation operation: O(log N) time just isn't acceptable.
Make an empty array G. Starting at zero, count upwards while progressing through the values of K. If a value isn't in K add it to G. If it is in K don't add it and progress your K pointer. (This relies on K being sorted.)
Now you have an array G which has only acceptable numbers.
Use your random number generator to choose a value from G.
This requires O(N) preparatory work and each generation happens in O(1) time. After N look-ups the amortized time of all operations is O(1).
A Python mock-up:
import random

class PRNG:
    def __init__(self, K, N):
        # Precompute G, the list of values in 0..N-1 that are not in K (K is sorted).
        self.G = []
        kptr = 0
        for i in range(N):
            if kptr < len(K) and K[kptr] == i:
                kptr += 1              # i is in K, skip it
            else:
                self.G.append(i)

    def getRand(self):
        # O(1) draw from the precomputed acceptable values.
        rn = random.randint(0, len(self.G) - 1)
        return self.G[rn]

prng = PRNG([0, 1, 5], 7)
for i in range(20):
    print(prng.getRand())

Optimizing a DP on Intervals/Points

Well the problem is quite easy to solve naively in O(n^3) time. The problem is something like:
There are N unique points on a number line. You want to cover every single point with some set of intervals. You can place an interval anywhere, and it costs B + M*X to create an interval, where B is the initial cost of creating an interval, X is half the length of the interval, and M is the cost per length of interval. You want to find the minimum cost to cover every single point.
Sample data:
Points = {0, 7, 100}
B = 20
M = 5
So the optimal solution would be 57.50, because you can build an interval [0,7] at cost 20 + 3.5×5 and build an interval at [100,100] at cost 20 + 0×5, which adds up to 57.50.
I have an O(n^3) solution, where the DP is minimum cost to cover points from [left, right]. So the answer would be in DP[1][N]. For every pair (i,j) I just iterate over k = {i...j-1} and compute DP[i][k] + DP[k + 1][j].
However, this solution is O(n^3) (kind of like matrix multiplication I think) so it's too slow on N > 2000. Any way to optimize this?
Here's a quadratic solution:
Sort all the points by coordinate. Call the points p.
We'll keep an array A such that A[k] is the minimum cost to cover the first k points. Set A[0] to zero and all other elements to infinity.
For each k from 0 to n-1 and for each l from k+1 to n, set A[l] = min(A[l], A[k] + B + M*(p[l-1] - p[k])/2);
You should be able to convince yourself that, at the end, A[n] is the minimum cost to cover all n points. (We considered all possible minimal covering intervals and we did so from "left to right" in a certain sense.)
You can speed this up so that it runs in O(n log n) time; replace step 3 with the following:
Set A[1] = B. For each k from 2 to n, set A[k] = A[k-1] + min(M/2 * (p[k-1] - p[k-2]), B).
The idea here is that we either extend the previous interval to cover the next point or we end the previous interval at p[k-2] and begin a new one at p[k-1]. And the only thing we need to know to make that decision is the distance between the two points.
Notice also that, when computing A[k], I only needed the value of A[k-1]. In particular, you don't need to store the whole array A; only its most recent element.
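A short Python sketch of that linear pass (the function name is mine; sorting the points first makes the whole thing O(n log n)):
    def min_cover_cost(points, B, M):
        p = sorted(points)
        cost = B                                     # an interval must start at the first point
        for k in range(1, len(p)):
            # either extend the previous interval to reach p[k], or start a new one at p[k]
            cost += min(M / 2 * (p[k] - p[k - 1]), B)
        return cost

    print(min_cover_cost([0, 7, 100], 20, 5))        # prints 57.5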

Sum-subset with a fixed subset size

The sum-subset problem states:
Given a set of integers, is there a non-empty subset whose sum is zero?
This problem is NP-complete in general. I'm curious if the complexity of this slight variant is known:
Given a set of integers, is there a subset of size k whose sum is zero?
For example, if k = 1, you can do a binary search to find the answer in O(log n). If k = 2, then you can get it down to O(n log n) (e.g. see Find a pair of elements from an array whose sum equals a given number). If k = 3, then you can do O(n^2) (e.g. see Finding three elements in an array whose sum is closest to a given number).
Is there a known bound that can be placed on this problem as a function of k?
As motivation, I was thinking about this question How do you partition an array into 2 parts such that the two parts have equal average? and trying to determine if it is actually NP-complete. The answer lies in whether or not there is a formula as described above.
Barring a general solution, I'd be very interested in knowing an optimal bound for k=4.
For k=4: space complexity O(n), time complexity O(n^2 * log(n))
Sort the array. Starting from 2 smallest and 2 largest elements, calculate all lesser sums of 2 elements (a[i] + a[j]) in the non-decreasing order and all greater sums of 2 elements (a[k] + a[l]) in the non-increasing order. Increase lesser sum if total sum is less than zero, decrease greater one if total sum is greater than zero, stop when total sum is zero (success) or a[i] + a[j] > a[k] + a[l] (failure).
The trick is to iterate through all the indexes i and j in such a way, that (a[i] + a[j]) will never decrease. And for k and l, (a[k] + a[l]) should never increase. A priority queue helps to do this:
1. Put key=(a[i] + a[j]), value=(i = 0, j = 1) into the priority queue.
2. Pop (sum, i, j) from the priority queue.
3. Use sum in the above algorithm.
4. Put (a[i+1] + a[j]), i+1, j and (a[i] + a[j+1]), i, j+1 into the priority queue, but only if these elements were not already used. To keep track of used elements, maintain an array of the maximal used 'j' for each 'i'. It is enough to use only values of 'j' that are greater than 'i'.
5. Continue from step 2.
For k>4
If space complexity is limited to O(n), I cannot find anything better than using brute force for the k-4 values and the above algorithm for the remaining 4 values. Time complexity O(n^(k-2) * log(n)).
For very large k integer linear programming may give some improvement.
Update
If n is very large (on the same order as the maximum integer value), it is possible to implement an O(1) priority queue, improving the complexities to O(n^2) and O(n^(k-2)).
If n >= k * INT_MAX, a different algorithm with O(n) space complexity is possible. Precalculate a bitset for all possible sums of k/2 values, and use it to check sums of the other k/2 values. Time complexity is O(n^(ceil(k/2))).
The problem of determining whether 0 in W + X + Y + Z = {w + x + y + z | w in W, x in X, y in Y, z in Z} is basically the same except for not having annoying degenerate cases (i.e., the problems are inter-reducible with minimal resources).
This problem (and thus the original for k = 4) has an O(n^2 log n)-time, O(n)-space algorithm. The O(n log n)-time algorithm for k = 2 (to determine whether 0 in A + B) accesses A in sorted order and B in reverse sorted order. Thus all we need is an O(n)-space iterator for A = W + X, which can be reused symmetrically for B = Y + Z. Let W = {w1, ..., wn} in sorted order. For all x in X, insert a key-value item (w1 + x, (1, x)) into a priority queue. Repeatedly remove the min element (wi + x, (i, x)) and insert (wi+1 + x, (i+1, x)).
Question that is very similar:
Is this variant of the subset sum problem easier to solve?
It's still NP-complete.
If it were not, subset-sum would also be in P, as it could be represented as F(1) | F(2) | ... | F(n), where F is your function. This would take O(F(1) + F(2) + ... + F(n)) time, which would still be polynomial; that is incorrect, as we know subset-sum is NP-complete.
Note that if you have certain bounds on the inputs you can achieve polynomial time.
Also note that the brute-force runtime can be calculated with binomial coefficients.
The solution for k=4 in O(n^2 log(n)):
Step 1: Calculate the pairwise sums and sort the list. There are n(n-1)/2 sums, so the complexity is O(n^2 log(n)). Keep the identities of the individuals which make up each sum.
Step 2: For each element in the above list, search for the complement and make sure they don't share "the individuals". There are n^2 searches, each with complexity O(log(n)).
EDIT: The space complexity of the original algorithm is O(n^2). The space complexity can be reduced to O(1) by simulating a virtual 2D matrix (O(n), if you consider the space to store the sorted version of the array).
First, about the 2D matrix: sort the numbers and create a matrix X using pairwise sums. Now the matrix is such that all the rows and columns are sorted. To search for a value in this matrix, search the numbers on the diagonal. If the number is in between X[i,i] and X[i+1,i+1], you can basically halve the search space into the two matrices X[i:N, 0:i] and X[0:i, i:N]. The resulting search algorithm is O(log^2 n) (I AM NOT VERY SURE. CAN SOMEBODY CHECK IT?).
Now, instead of using a real matrix, use a virtual matrix where X[i,j] are calculated as needed instead of pre-computing them.
Resulting time complexity: O((n log n)^2).
PS: In the following link, it says the complexity of 2D sorted matrix search is O(n). If that is true (i.e. O(log^2 n) is incorrect), then the final complexity is O(n^3).
To build on awesomo's answer... if we can assume that numbers are sorted, we can do better than O(n^k) for given k; simply take all O(n^(k-1)) subsets of size (k-1), then do a binary search in what remains for a number that, when added to the first (k-1), gives the target. This is O(n^(k-1) log n). This means the complexity is certainly less than that.
In fact, if we know that the complexity is O(n^2) for k=3, we can do even better for k > 3: choose all (k-3)-subsets, of which there are O(n^(k-3)), and then solve the problem in O(n^2) on the remaining elements. This is O(n^(k-1)) for k >= 3.
However, maybe you can do even better? I'll think about this one.
EDIT: I was initially going to add a lot more, proposing a different take on this problem, but I've decided to post an abridged version. I encourage other posters to see whether they believe this idea has any merit. The analysis is tough, but it might just be crazy enough to work.
We can use the fact that we have a fixed k, and that sums of odd and even numbers behave in certain ways, to define a recursive algorithm to solve this problem.
First, modify the problem so that you have both even and odd numbers in the list (this can be accomplished by dividing by two if all are even, or by subtracting 1 from numbers and k from the target sum if all are odd, and repeating as necessary).
Next, use the fact that even target sums can be reached only by using an even number of odd numbers, and odd target sums can be reached using only an odd number of odd numbers. Generate appropriate subsets of the odd numbers, and call the algorithm recursively using the even numbers, the sum minus the sum of the subset of odd numbers being examined, and k minus the size of the subset of odd numbers. When k = 1, do binary search. If ever k > n (not sure this can happen), return false.
If you have very few odd numbers, this could allow you to very quickly pick up terms that must be part of a winning subset, or discard ones that cannot be. You can transform problems with lots of even numbers into equivalent problems with lots of odd numbers by using the subtraction trick. The worst case must therefore be when the numbers of even and odd numbers are very similar... and that's where I am right now. A uselessly loose upper bound on this is many orders of magnitude worse than brute-force, but I feel like this is probably at least as good as brute-force. Thoughts are welcome!
EDIT2: An example of the above, for illustration.
{1, 2, 2, 6, 7, 7, 20}, k = 3, sum = 20.
Subset {}:
    {2, 2, 6, 20}, k = 3, sum = 20
    = {1, 1, 3, 10}, k = 3, sum = 10
    Subset {}:
        {10}, k = 3, sum = 10
        Failure
    Subset {1, 1}:
        {10}, k = 1, sum = 8
        Failure
    Subset {1, 3}:
        {10}, k = 1, sum = 6
        Failure
Subset {1, 7}:
    {2, 2, 6, 20}, k = 1, sum = 12
    Failure
Subset {7, 7}:
    {2, 2, 6, 20}, k = 1, sum = 6
    Success
The time complexity is trivially O(n^k) (number of k-sized subsets from n elements).
Since k is a given constant, a (possibly quite high-order) polynomial upper bounds the complexity as a function of n.

Resources