Count combinations for 0-1 knapsack - algorithm

I wonder what is the most efficient (time and memory) way to count the number of subsets with a sum less than or equal to some limit. For example, for the set {1, 2, 4} and a limit of 3 that number would be 4 (the subsets are {}, {1}, {2}, {1, 2}). I tried encoding subsets as a bit vector (mask) and finding the answer in the following way (pseudocode):
solve(mask, sum, limit)
    if visited[mask]
        return
    if sum <= limit
        count = count + 1
    visited[mask] = true
    for i in 0..n-1
        if the i-th bit of mask is set
            solve(mask without the i-th bit, sum - array[i], limit)

solve(2^n - 1, knapsack sum, knapsack limit)
Arrays are zero-based, count can be a global variable, and visited is an array of length 2^n. I understand that the problem has exponential complexity, but is there a better approach or an improvement to my idea? The algorithm runs fast for n ≤ 24, but my approach is pretty brute-force and I was wondering whether some clever way exists to find the answer for, say, n = 30.

The most space-efficient approach is a recursive traversal of all subsets that just keeps a count. This is O(2^n) time and O(n) memory, where n is the size of the overall set.
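For reference, a minimal Python sketch of that counting traversal (the function name and structure are illustrative, not from the original answer):

def count_at_most(arr, limit):
    def go(i, remaining):
        if remaining < 0:
            return 0          # this branch already exceeded the limit
        if i == len(arr):
            return 1          # a complete subset within the limit
        # either skip arr[i] or take it
        return go(i + 1, remaining) + go(i + 1, remaining - arr[i])
    return go(0, limit)

print(count_at_most([1, 2, 4], 3))  # -> 4, matching the example in the question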
Any known solution can take exponential time in the worst case, because your problem is a variation of subset-sum, which is known to be NP-complete. But a pretty efficient DP solution is as follows, in pseudocode with comments (bound below is the limit from your question).
# Calculate the lowest possible sum and turn all elements positive.
# This turns the limit problem into one with only non-negative elements:
# taking a flipped element corresponds to leaving out the original negative one.
lowest_sum = 0
for element in elements:
    if element < 0:
        lowest_sum += element
        element = -element

# Sort and calculate trailing sums.  This allows us to break off early
# for lots of partial sums that are guaranteed to stay below our bound.
elements = sort elements from largest to smallest
total = sum(elements)
trailing_sums = []
for element in elements:
    total -= element
    push total onto trailing_sums     # trailing_sums[i] = sum of elements after i

# Now do the DP.
answer = 0
ways_to_reach_sum = {lowest_sum: 1}   # missing keys count as 0
n = length(elements)
for i in range(0, n):
    new_ways_to_reach_sum = {}
    for (sum, count) in ways_to_reach_sum:
        if sum + elements[i] + trailing_sums[i] <= bound:
            # Even taking every remaining element stays within the bound,
            # so all 2^(n-i) subsets of elements[i..n-1] extend these sums validly.
            answer += count * 2**(n - i)
        else:
            # Ways that do not use this element.
            new_ways_to_reach_sum[sum] += count
            # Ways that do use this element, if it still fits.
            if sum + elements[i] <= bound:
                new_ways_to_reach_sum[sum + elements[i]] += count
    # And finish processing this element.
    ways_to_reach_sum = new_ways_to_reach_sum

# Whatever is still being tracked is itself within the bound.
for (sum, count) in ways_to_reach_sum:
    if sum <= bound:
        answer += count
# And now answer has our answer!
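To make the pseudocode concrete, here is a small runnable Python version of the same idea (a sketch; limit plays the role of bound above and the rest of the names are mine), checked against the example from the question:

from collections import defaultdict

def count_subsets_within_limit(elements, limit):
    # Flip negative elements positive; the empty subset then starts at lowest_sum.
    lowest_sum = sum(e for e in elements if e < 0)
    elements = sorted((abs(e) for e in elements), reverse=True)
    n = len(elements)
    trailing_sums = []                 # trailing_sums[i] = sum of elements[i+1:]
    total = sum(elements)
    for e in elements:
        total -= e
        trailing_sums.append(total)
    answer = 0
    ways = {lowest_sum: 1}
    for i in range(n):
        new_ways = defaultdict(int)
        for s, count in ways.items():
            if s + elements[i] + trailing_sums[i] <= limit:
                # every subset of elements[i:] keeps us within the limit
                answer += count * 2 ** (n - i)
            else:
                new_ways[s] += count                      # skip element i
                if s + elements[i] <= limit:
                    new_ways[s + elements[i]] += count    # take element i
        ways = new_ways
    answer += sum(count for s, count in ways.items() if s <= limit)
    return answer

print(count_subsets_within_limit([1, 2, 4], 3))  # -> 4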

Related

Coin change (dynamic programming)

I have a question about the coin change problem where we not only have to print the number of ways to make change for $n with the given coin denominations, e.g. {1,5,10,25}, but also print the ways themselves.
For example, if the target is $50 and the coins are {1,5,10,25}, then the ways to actually use the coins to reach the target are
2 × $25
1 × $25 + 2 × $10 + 1 × $5
etc.
What is the best time complexity we could get to solve this problem?
I tried to modify the dynamic programming solution for the coin change problem where we only need the number of ways but not the actual ways.
I am having trouble figuring out the time complexity.
I do use memoization so that I don't have to solve the same subproblem again for a given coin and sum value, but we still need to iterate through all the solutions and print them. So the time complexity is definitely more than O(ns), where n is the number of coins and s is the target.
Is it exponential? Any help will be much appreciated.
Printing Combinations
def coin_change_solutions(coins, S):
    # create an S x N table for memoization
    N = len(coins)
    sols = [[[] for n in range(N + 1)] for s in range(S + 1)]
    for n in range(0, N + 1):
        sols[0][n].append([])

    # fill table using bottom-up dynamic programming
    for s in range(1, S + 1):
        for n in range(1, N + 1):
            without_last = sols[s][n - 1]
            if coins[n - 1] <= s:
                with_last = [list(sol) + [coins[n - 1]] for sol in sols[s - coins[n - 1]][n]]
            else:
                with_last = []
            sols[s][n] = without_last + with_last

    return sols[S][N]

print(coin_change_solutions([1, 2], 4))
# => [[1, 1, 1, 1], [1, 1, 2], [2, 2]]
without: we don't need to use the last coin to make the sum. All the coin combinations for this case are found directly by looking up sols[s][n - 1]; we take all of those combinations as without_last.
with: we do need to use the last coin, so that coin must be in our solution. The remaining coins are found via sols[s - coins[n - 1]][n]. Reading this entry gives us many possible choices for what the remaining coins should be. For each possible choice, sol, we append the last coin, coins[n - 1]:
# For example, suppose target is s = 4
# We're finding solutions that use the last coin.
# Suppose the last coin has a value of 2:
#
# find possible combinations that add up to 4 - 2 = 2:
# ===> [[1,1], [2]]
# then for each combination, add the last coin
# so that the combination adds up to 4:
# ===> [[1,1,2], [2,2]]
The final list of combinations is found by taking the combinations for the first case and the second case and concatenating the two lists.
without_last = [[1,1,1,1]]
with_last = [[1,1,2], [2,2]]
without_last + with_last = [[1,1,1,1], [1,1,2], [2,2]]
Time Complexity
In the worst case we have a coin set with all coins from 1 to n: coins = [1,2,3,...,n]. The number of possible coin-sum combinations, num_solutions, is equal to the number of integer partitions of s, p(s).
It can be shown that the number of integer partitions p(s) grows exponentially (roughly as e^(c·√s) for a constant c).
Hence num_solutions = p(s), which is enormous (and bounded above by 2^s). Any solution must do at least this much work simply to print out all of these possible solutions, hence the problem is exponential in nature.
We have two loops: one loop for s and the other loop for n.
For each s and n, we compute sols[s][n]:
without: We look at the O(2^s) combinations in sols[s][n - 1]. Copying each combination over takes O(n) time, so overall this case takes O(n×2^s) time.
with: We look at all O(2^s) combinations in sols[s - coins[n - 1]][n]. For each combination list sol, we create a copy of that list in O(n) time and then append the last coin. Overall this case also takes O(n×2^s).
Hence the time complexity is O(s×n)×O(n2^s + n2^s) = O(s×n^2×2^s).
Space Complexity
The space complexity is O(s×n^2×2^s), because we have an s×n table with each entry storing O(2^s) possible combinations (e.g. [[1, 1, 1, 1], [1, 1, 2], [2, 2]]), each combination (e.g. [1,1,1,1]) taking O(n) space.
What I tend to do is solve the problem recursively and then build a memoization solution from there.
Starting with a recursive solution the approach is simple: at each step you either pick a coin and subtract it from the target, or you skip that coin.
While you pick a coin you add it to a vector or list; when you backtrack you pop the one you added before. The code looks something like:
#include <iostream>
#include <vector>
using namespace std;

void print(vector<int>& coinsUsed)
{
    for (auto c : coinsUsed)
    {
        cout << c << ",";
    }
    cout << endl;
}

int helper(vector<int>& coins, int target, int index, vector<int>& coinsUsed)
{
    if (index >= coins.size() || target < 0) return 0;
    if (target == 0)
    {
        print(coinsUsed);
        return 1;
    }

    // take coins[index] (we may take it again, so index stays the same)
    coinsUsed.push_back(coins[index]);
    int with = helper(coins, target - coins[index], index, coinsUsed);
    coinsUsed.pop_back();

    // skip coins[index]
    int without = helper(coins, target, index + 1, coinsUsed);

    return with + without;
}

int coinChange(vector<int>& coins, int target)
{
    vector<int> coinsUsed;
    return helper(coins, target, 0, coinsUsed);
}
You can call it like:
vector<int> coins = {1,5,10,25};
cout << "Total Ways:" << coinChange(coins, 10);
So this gives you the total number of ways, and the coins used along the way to reach the target are printed from coinsUsed. You can now memoize this as you please by caching on the passed-in values.
The time complexity of the recursive solution is exponential.
link to the running program: http://coliru.stacked-crooked.com/a/5ef0ed76b7a496fe
Let d_i be a denomination, the value of a coin in cents. In your example d_i = {1, 5, 10, 25}.
Let k be the number of denominations (coins), here k = 4.
We will use a 2D array numberOfCoins[1..k][0..n] to determine the minimum number of coins required to make change. The recurrence is
numberOfCoins[i][j] = min(numberOfCoins[i − 1][j], numberOfCoins[i][j − d_i] + 1)
and the optimal solution is read from numberOfCoins[k][n].
The recurrence expresses the fact that to build an optimal solution we either do not use d_i, so we rely only on the first i − 1 denominations (this is why i is decremented below):
numberOfCoins[i][j] = numberOfCoins[i − 1][j] // eq1
or we use d_i, so we add +1 to the number of coins needed and we decrement by d_i (the value of the coin we just used):
numberOfCoins[i][j] = numberOfCoins[i][j − d_i] + 1 // eq2
The time complexity is O(kn) but in cases where k is small, as is the case in your example, we have O(4n) = O(n).
We will use another 2D array, coinUsed, with the same dimensions as numberOfCoins, to mark which coins were used. Each entry either records that we did not use the coin by setting "^" in coinUsed[i][j] (this corresponds to eq1), or records that the coin was used by setting "<" in that position (corresponding to eq2).
Both arrays can be built as the algorithm runs. We only add a constant number of extra instructions to the inner loop, so the time complexity of building both arrays is still O(kn).
To print the solution we need to iterate, in the worst case, over k + n + 1 entries, for example when the optimal solution uses all 1-cent coins. Printing happens after building, so the overall time complexity is O(kn) + O(k + n + 1) = O(kn), which is O(n) when k is a small constant as in your example.
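As an illustration, here is a small Python sketch of the two tables and the walk-back described above (numberOfCoins and coinUsed follow the text; the function name and remaining details are mine):

def min_coins_with_reconstruction(denominations, n):
    k = len(denominations)
    INF = float("inf")
    # numberOfCoins[i][j]: fewest coins to make j using the first i denominations
    numberOfCoins = [[0] + [INF] * n for _ in range(k + 1)]
    coinUsed = [[None] * (n + 1) for _ in range(k + 1)]
    for i in range(1, k + 1):
        d = denominations[i - 1]
        for j in range(1, n + 1):
            numberOfCoins[i][j] = numberOfCoins[i - 1][j]          # eq1: skip d
            coinUsed[i][j] = "^"
            if d <= j and numberOfCoins[i][j - d] + 1 < numberOfCoins[i][j]:
                numberOfCoins[i][j] = numberOfCoins[i][j - d] + 1  # eq2: use d
                coinUsed[i][j] = "<"
    # walk back through coinUsed to print one optimal set of coins
    coins, i, j = [], k, n
    while j > 0 and i > 0:
        if coinUsed[i][j] == "<":
            coins.append(denominations[i - 1])
            j -= denominations[i - 1]
        else:
            i -= 1
    return numberOfCoins[k][n], coins

print(min_coins_with_reconstruction([1, 5, 10, 25], 50))  # -> (2, [25, 25])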

Find longest sequences with sufficient average score

I have a long list of scores between 0 and 1. How do I efficiently find all contiguous sublists longer than x elements such that the average score in each sublist is not less than y?
E.g., how do I find all contiguous sublists longer than 300 elements such that the average score of these sublists is not less than 0.8?
I'm mainly interested in the LONGEST sublists that fulfill these criteria, not actually all sublists. So I'm looking for all longest sublists.
If you want only the longest such substrings, this can be solved in O(n log n) time by transforming the problem slightly and then binary-searching over maximum solution lengths.
Let the input list of scores be x[1], ..., x[n]. Let's transform this list by subtracting y from each element, to form the list z[1], ..., z[n], whose elements may be positive or negative. Notice that any sublist x[i .. j] has average score at least y if and only if the sum of elements in the corresponding sublist in z (i.e., z[i] + z[i+1] + ... + z[j]) is at least 0. So, if we had a way to compute the maximum sum T of any sublist in z[] efficiently (spoiler: we do), this would, as a side effect, tell us if there is any sublist in x[] that has average score at least y: if T >= 0 then there is at least 1 such sublist, while if T < 0 then there is no sublist in x[] (not even a single-element sublist) that has average score at least y. But this doesn't yet give us all the information we need to answer your original question, since nothing forces the maximum-sum sublist in z to have maximum length: it could well be that a longer sublist exists that has lower overall average, while still having average at least y.
This can be addressed by generalising the problem of finding the sublist with maximum sum: instead of asking for a sublist with maximum sum overall, we will now ask for a sublist having maximum sum among all sublists having length at least some given k. I'll now describe an algorithm that, given a list of numbers z[1], ..., z[n], each of which can be positive or negative, and any positive integer k, will compute the maximum sum of any sublist of z[] having length at least k, as well as the location of a particular sublist that achieves this sum, and has longest possible length among all sublists having this sum. It's a slight generalisation of Kadane's algorithm.
FindMaxSumLongerThan(z[], k):
    v = 0              # Sum of the rightmost k numbers in the current sublist
    For i from 1 to k:
        v = v + z[i]
    best = v
    bestStart = 1
    bestEnd = k

    # Now for each i, with k+1 <= i <= n, find the biggest sum ending at position i.
    tail = -1          # Will contain the maximum sum among all sublists ending at i-k
    tailLen = 0        # The length of the longest sublist having the above sum
    For i from k+1 to n:
        If tail >= 0:
            tail = tail + z[i-k]
            tailLen = tailLen + 1
        Else:
            tail = z[i-k]
            tailLen = 1
        If tail >= 0:
            nonnegTail = tail
            nonnegTailLen = tailLen
        Else:
            nonnegTail = 0
            nonnegTailLen = 0
        v = v + z[i] - z[i-k]      # Slide the window right 1 position
        If v + nonnegTail > best:
            best = v + nonnegTail
            bestStart = i - k - nonnegTailLen + 1
            bestEnd = i
The above algorithm takes O(n) time and O(1) space, returning the maximum sum in best and the beginning and ending positions of some sublist that achieves that sum in bestStart and bestEnd, respectively.
How is the above useful? For a given input list x[], suppose we first transform x[] into z[] by subtracting y from each element as described above; this will be the z[] passed into every call to FindMaxSumLongerThan(). We can view the value of best that results from calling the function with z[] and a given minimum sublist length k as a mathematical function of k: best(k). Since FindMaxSumLongerThan() finds the maximum sum of any sublist of z[] having length at least k, best(k) is a nonincreasing function of k. (Say we set k=5 and found that the maximum sum of any sublist is 42; then we are guaranteed to find a total of at least 42 if we try again with k=4 or k=3.) That means we can binary search on k to find the largest k such that best(k) >= 0: that k will then be the longest sublist of x[] that has average value at least y. The resulting bestStart and bestEnd will identify a particular sublist having this property; it's easy to modify the algorithm to find all (at most n -- one per rightmost position) of these sublists without increasing the time complexity.
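Here is a runnable Python sketch of the above: the windowed Kadane variant plus the binary search over k. All function and variable names are mine, and indices are 0-based rather than 1-based.

def find_max_sum_at_least_k(z, k):
    """Return (best, start, end) over all sublists of z with length >= k (0-based, inclusive)."""
    n = len(z)
    v = sum(z[:k])                     # sum of the current length-k window
    best, best_start, best_end = v, 0, k - 1
    tail, tail_len = -1, 0             # best sum of a sublist ending just left of the window
    for i in range(k, n):
        if tail >= 0:
            tail, tail_len = tail + z[i - k], tail_len + 1
        else:
            tail, tail_len = z[i - k], 1
        nonneg_tail, nonneg_tail_len = (tail, tail_len) if tail >= 0 else (0, 0)
        v += z[i] - z[i - k]           # slide the window right one position
        if v + nonneg_tail > best:
            best = v + nonneg_tail
            best_start, best_end = i - k + 1 - nonneg_tail_len, i
    return best, best_start, best_end

def longest_sublist_with_average_at_least(x, y, min_len=1):
    """Binary-search the largest k with best(k) >= 0; returns (start, end) or None."""
    z = [v - y for v in x]
    lo, hi, found = min_len, len(x), None
    while lo <= hi:
        k = (lo + hi) // 2
        best, s, e = find_max_sum_at_least_k(z, k)
        if best >= 0:
            found, lo = (s, e), k + 1
        else:
            hi = k - 1
    return found

# e.g. longest_sublist_with_average_at_least(scores, 0.8, 300)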
I think the general solution is always O(N^2). I will demonstrate the code in Python along with some optimizations you can implement to increase the performance by several orders of magnitude.
Let's generate some data:
from random import random
scores_list = [random() for i in range(10000)]
scores_len = len(scores_list)
Let's say these are our target values:
# Your average
avg = 0.55
# Your minimum length
min_len = 10
Here is a naive brute force solution
res = []
for i in range(scores_len - min_len):
    for j in range(i + min_len, scores_len):
        l = scores_list[i:j]
        if sum(l) / (j - i) >= avg:
            res.append(l)
That will run very slowly because it has to perform 10000^2 (10^8) operations.
Here is how we can do better. It is still quadratic in the worst case, but there are some tricks which allow it to run much, much faster:
res = []
i = 0
while i < scores_len - min_len:
    j = i + min_len
    di = scores_len
    dj = 0
    current_sum = sum(scores_list[i:j])
    while j < scores_len:
        current_sum += sum(scores_list[j-dj:j])
        current_avg = current_sum / (j - i)
        if current_avg >= avg:
            res.append(scores_list[i:j])
            dj = 1
            di = 1
        else:
            dj = max(1, int((avg * (j - i) - current_sum) / (1 - avg)))
            di = min(di, max(1, int(((j - i) * avg - current_sum) / avg)))
        j += dj
    i += di
For a uniform distribution (which we have here) and for the given target values it performs fewer than 10^6 operations (~7×10^5), which is about two orders of magnitude less than the brute-force solution.
So basically, if there are only a few qualifying sublists it performs very well, and if there are a lot of them it degrades to roughly the same cost as the brute-force approach.

Minimum sum that cant be obtained from a set

Given a set S of positive integers (whose elements need not be distinct), I need to find the minimal non-negative sum that can't be obtained from any subset of the given set.
Example : if S = {1, 1, 3, 7}, we can get 0 as (S' = {}), 1 as (S' = {1}), 2 as (S' = {1, 1}), 3 as (S' = {3}), 4 as (S' = {1, 3}), 5 as (S' = {1, 1, 3}), but we can't get 6.
Now we are given an array A of N positive integers and M queries. Each query consists of two integers Li and Ri: we need to find this sum for the elements {A[Li], A[Li+1], ..., A[Ri-1], A[Ri]}.
I know how to find it by a brute-force approach in O(2^n), but with 1 ≤ N, M ≤ 100,000 that can't be done.
So is there any efficient approach to do it?
Concept
Suppose we had an array of bool, numbers, representing which sums have been found so far (by way of summing subsets).
For each number n we encounter in the ordered (increasing values) subset of S, we do the following:
For each existing True value at position i in numbers, we set numbers[i + n] to True
We set numbers[n] to True
With this sort of a sieve, we would mark all the found numbers as True, and iterating through the array when the algorithm finishes would find us the minimum unobtainable sum.
Refinement
Obviously, we can't have a solution like this because the array would have to be infinite in order to work for all sets of numbers.
The concept can be improved by making a few observations. With an input of 1, 1, 3, the array becomes (in sequence; the numbers listed are the positions holding true values):
after the first 1:  1
after the second 1: 1, 2
after 3:            1, 2, 3, 4, 5
An important observation can be made:
(3) Each next number is added to all of the previously found sums (and is also usable on its own). This implies that if there were no gaps before a number was processed, there will be no gaps after it has been processed.
For the next input of 7 we can assert that:
(4) Since the input set is ordered, there will be no number less than 7
(5) If there is no number less than 7, then 6 cannot be obtained
We can come to a conclusion that:
(6) the first gap represents the minimum unobtainable number.
Algorithm
Because of (3) and (6), we don't actually need the numbers array; we only need a single value, max, representing the largest value such that every sum from 0 to max has been found so far.
This way, if the next number n is greater than max + 1, then a gap would have been made, and max + 1 is the minimum unobtainable number.
Otherwise, max becomes max + n. If we've run through the entire S, the result is max + 1.
Actual code (C#, easily converted to C):
static int Calculate(int[] S)
{
    int max = 0;
    for (int i = 0; i < S.Length; i++)
    {
        if (S[i] <= max + 1)
            max = max + S[i];
        else
            return max + 1;
    }
    return max + 1;
}
It should run pretty fast, since it's obviously linear time, O(n). Since the input to the function must be sorted, with quicksort the whole thing becomes O(n log n). I managed to get results for M = N = 100,000 on 8 cores in just under 5 minutes.
With an upper limit of 10^9 on the numbers, a radix sort could be used to get approximately O(n) sorting time, but that would still be way over 2 seconds because of the sheer number of sorts required.
But we can use the statistics of how likely a 1 is to appear to eliminate subsets before sorting. At the start of a query, check whether 1 exists in that subset; if not, the query's result is 1 because 1 cannot be obtained.
Statistically, if we draw 10^5 random numbers from a range of 10^9 values, there is a 99.9% chance of not getting a single 1.
So before each sort, check whether that subset contains a 1; if not, its result is 1.
With this modification, the code runs in 2 milliseconds on my machine. Here's that code: http://pastebin.com/rF6VddTx
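For illustration, here is a small Python sketch of the per-query procedure described above (not the linked C# code; the function and variable names are mine):

def answer_queries(A, queries):
    results = []
    for (L, R) in queries:          # 1-based, inclusive, as in the question
        sub = A[L - 1:R]
        if 1 not in sub:            # shortcut: without a 1 we can never reach 1
            results.append(1)
            continue
        sub.sort()
        reachable_max = 0           # every sum 0..reachable_max is obtainable
        for v in sub:
            if v <= reachable_max + 1:
                reachable_max += v
            else:
                break
        results.append(reachable_max + 1)
    return results

print(answer_queries([1, 1, 3, 7], [(1, 4), (3, 4)]))  # -> [6, 1]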
This is a variation of the subset-sum problem, which is NP-Complete, but there is a pseudo-polynomial Dynamic Programming solution you can adopt here, based on the recursive formula:
f(S, i) = f(S − arr[i], i − 1) OR f(S, i − 1)
f(S, i) = false   when S < 0
f(S, i) = false   when i < 0 and S > 0
f(0, i) = true
The recursive formula is basically an exhaustive search, each sum can be achieved if you can get it with element i OR without element i.
The dynamic programming is achieved by building a SUM+1 x n+1 table (where SUM is the sum of all elements, and n is the number of elements), and building it bottom-up.
Something like:
table <- (SUM+1) x (n+1) table of booleans
// init: a sum of 0 is always achievable; a positive sum is not achievable with 0 elements
for each j from 0 to n:
    table[0][j] = true
for each i from 1 to SUM:
    table[i][0] = false
// fill the table:
for each i from 1 to SUM:
    for each j from 1 to n:
        if i < arr[j]:
            table[i][j] = table[i][j-1]
        else:
            table[i][j] = table[i-arr[j]][j-1] OR table[i][j-1]
Once you have the table, you need the smallest i such that table[i][n] = false (equivalently, the smallest i with table[i][j] = false for every j).
The complexity of this solution is O(n·SUM), where SUM is the sum of all elements, but note that the algorithm can stop as soon as the required number is found, without filling in the remaining rows, which are not needed for the solution.
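As a concrete sketch, here is a runnable Python version of that table (names are mine; one extra row is kept so the answer can be SUM+1 when every smaller sum is reachable):

def min_unobtainable_sum(arr):
    total = sum(arr)
    n = len(arr)
    # table[s][j] = True if some subset of the first j elements sums to s
    table = [[False] * (n + 1) for _ in range(total + 2)]
    for j in range(n + 1):
        table[0][j] = True
    for s in range(1, total + 2):
        for j in range(1, n + 1):
            if s < arr[j - 1]:
                table[s][j] = table[s][j - 1]
            else:
                table[s][j] = table[s - arr[j - 1]][j - 1] or table[s][j - 1]
    # smallest s that no subset of all n elements can reach
    for s in range(total + 2):
        if not table[s][n]:
            return s

print(min_unobtainable_sum([1, 1, 3, 7]))  # -> 6, as in the question's example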

Find subset with elements that are furthest apart from each other

I have an interview question that I can't seem to figure out. Given an array of size N, find the subset of size k such that the elements in the subset are the furthest apart from each other. In other words, maximize the minimum pairwise distance between the elements.
Example:
Array = [1,2,6,10]
k = 3
answer = [1,6,10]
The brute-force way requires finding all subsets of size k, which is exponential in runtime.
One idea I had was to take values evenly spaced from the array. What I mean by this is
Take the 1st and last element
find the difference between them (in this case 10-1) and divide that by k ((10-1)/3=3)
move 2 pointers inward from both ends, picking out elements that are +/- 3 from your previous pick. So in this case, you start from 1 and 10 and find the closest elements to 4 and 7. That would be 6.
This is based on the intuition that the elements should be as evenly spread as possible. I have no idea how to prove it works/doesn't work. If anyone knows how or has a better algorithm please do share. Thanks!
This can be solved in polynomial time using DP.
The first step is, as you mentioned, to sort the list A. Let X[i, j] be the solution for selecting j elements from the first i elements of A, with A[i] as the last selected element.
Now, X[i+1, j+1] = max over k ≤ i of min(X[k, j], A[i+1] − A[k]).
I will leave the initialization step, and recording the choices so the subset itself can be reconstructed, for you to work out.
In your example (1, 2, 6, 10) it works the following way (rows are j, the number of selected elements; columns are the last selected element):

        1    2    6    10
  1     -    -    -    -
  2     -    1    5    9
  3     -    -    1    4
  4     -    -    -    1
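Here is a rough Python sketch of that DP, including subset reconstruction (0-based indices; all names are mine, and the initialization, where a single element gets an "infinite" gap like the dashes above, is one possible convention):

def max_min_gap_subset(arr, k):
    a = sorted(arr)
    n = len(a)
    assert 1 <= k <= n
    INF = float("inf")
    # best[i][j]: best achievable minimum gap using j elements ending at a[i]
    best = [[-INF] * (k + 1) for _ in range(n)]
    choice = [[None] * (k + 1) for _ in range(n)]
    for i in range(n):
        best[i][1] = INF                    # a single element imposes no constraint
    for j in range(2, k + 1):
        for i in range(j - 1, n):
            for p in range(j - 2, i):
                cand = min(best[p][j - 1], a[i] - a[p])
                if cand > best[i][j]:
                    best[i][j] = cand
                    choice[i][j] = p
    # pick the best endpoint and walk back to recover the subset
    i = max(range(k - 1, n), key=lambda t: best[t][k])
    subset, j = [], k
    while i is not None:
        subset.append(a[i])
        i, j = choice[i][j], j - 1
    return list(reversed(subset))

print(max_min_gap_subset([1, 2, 6, 10], 3))  # -> [1, 6, 10]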
The basic idea is right, I think. You should start by sorting the array, then take the first and the last elements, then determine the rest.
I cannot think of a polynomial algorithm to solve this, so I would suggest one of the two options.
One is to use a search algorithm, branch-and-bound style, since you have a nice heuristic at hand: the upper bound for any solution is the minimum size of the gap between the elements picked so far, so the first guess (evenly spaced cells, as you suggested) can give you a good baseline, which will help prune most of the branches right away. This will work fine for smaller values of k, although the worst case performance is O(N^k).
The other option is to start with the same baseline, calculate its minimum pairwise distance, and then try to improve it. Say you have a subset with a minimum distance of 10; now try to get one with 11. This can be done easily by a greedy algorithm: pick the first item in the sorted sequence, then repeatedly pick the next item whose distance from the previously picked item is greater than or equal to the distance you want. If you succeed in picking k items, try increasing the distance further; if you fail, there is no such subset.
The latter solution can be faster when the array is large and k is relatively large as well, but the elements in the array are relatively small. If they are bound by some value M, this algorithm will take O(N*M) time, or, with a small improvement, O(N*log(M)), where N is the size of the array.
As Evgeny Kluev suggests in his answer, there is also a good upper bound on the maximum pairwise distance, which can be used in either one of these algorithms. So the complexity of the latter is actually O(N*log(M/k)).
You can do this in O(n*(log n) + n*log(M)), where M is max(A) - min(A).
The idea is to use binary search to find the maximum separation possible.
First, sort the array. Then, we just need a helper function that takes in a distance d, and greedily builds the longest subarray possible with consecutive elements separated by at least d. We can do this in O(n) time.
If the generated array has length at least k, then the maximum separation possible is >=d. Otherwise, it's strictly less than d. This means we can use binary search to find the maximum value. With some cleverness, you can shrink the 'low' and 'high' bounds of the binary search, but it's already so fast that sorting would become the bottleneck.
Python code:
from typing import List

def maximize_distance(nums: List[int], k: int) -> List[int]:
    """Given an array of numbers and size k, uses binary search
    to find a subset of size k with maximum min-pairwise-distance"""
    assert len(nums) >= k
    if k == 1:
        return [nums[0]]

    nums.sort()

    def longest_separated_array(desired_distance: int) -> List[int]:
        """Given a distance, returns a subarray of nums
        of length k with pairwise differences at least that distance (if
        one exists)."""
        answer = [nums[0]]
        for x in nums[1:]:
            if x - answer[-1] >= desired_distance:
                answer.append(x)
                if len(answer) == k:
                    break
        return answer

    low, high = 0, (nums[-1] - nums[0])
    while low < high:
        mid = (low + high + 1) // 2
        if len(longest_separated_array(mid)) == k:
            low = mid
        else:
            high = mid - 1

    return longest_separated_array(low)
I suppose your set is ordered. If not, my answer would change slightly.
Let's suppose you have an array X = (X1, X2, ..., Xn). Define the "energy" of an element as its distance to its nearest neighbour:
Energy(Xi) = min(|X(i-1) - Xi|, |X(i+1) - Xi|), 1 < i < n
Then repeatedly remove the lowest-energy element until only k elements remain:
j <- 1
while j <= n - k do
    X.Exclude(the Xi with minimal Energy(Xi), 1 < i < n)
    j <- j + 1
end while
$length = length($array);
sort($array);                             // sorts the list in ascending order
$differences = ($array << 1) - $array;    // gets the difference between each value and the next largest value
sort($differences);                       // sorts the list in ascending order
$max = ($array[$length-1] - $array[0]) / $M;   // this is the theoretical max of how large the result can be
$result = array();
$count = 0;
for ($i = 0; $i < $length - 1; $i++) {
    $count += $differences[$i];
    if ($length - $i == $M - 1 || $count >= $max) {
        // if there are either no more elements that can be taken, or we have gone
        // above or equal to the theoretical max, add a point
        $result.push_back($count);
        $count = 0;
        $M--;
    }
}
return min($result)
For the non-code people: sort the list, find the differences between each two sequential elements, sort that list of differences in ascending order, then loop through it summing up sequential values until you either pass the theoretical max or there aren't enough elements remaining; then add that running sum to a new array and continue until you hit the end of the array. Finally return the minimum of the newly created array.
This is just a quick draft though. At a quick glance any operation here can be done in linear time (radix sort for the sorts).
For example, with 1, 4, 7, 100, and 200 and M=3, we get:
$differences = 3, 3, 93, 100
$max = (200-1)/3 ~ 67
then we loop:
$count = 3, 3+3=6, 6+93=99 > 67 so we push 99
$count = 100 > 67 so we push 100
min(99,100) = 99
It is a simple exercise to convert this to the set solution that I leave to the reader (P.S. after all the times reading that in a book, I've always wanted to say it :P)

Efficient iteration over sorted partial sums

I have a list of N positive numbers sorted in ascending order, L[0] to L[N-1].
I want to iterate over subsets of M distinct list elements (without replacement, order not important), 1 <= M <= N, sorted according to their partial sum. M is not fixed, the final result should consider all possible subsets.
I only want the K smallest subsets efficiently (ideally polynomial in K). The obvious algorithm of enumerating all subsets with M <= K is O(K!).
I can reduce the problem to subsets of fixed size M, by placing K iterators (1 <= M <= K) in a min-heap and having the master iterator operate on the heap root.
Essentially I need the Python function call:
sorted(itertools.combinations(L, M), key=sum)[:K]
... but efficient (N ~ 200, K ~ 30), should run in less than 1sec.
Example:
L = [1, 2, 5, 10, 11]
K = 8
answer = [(1,), (2,), (1,2), (5,), (1,5), (2,5), (1,2,5), (10,)]
Answer:
As David's answer shows, the important trick is that for a subset S to be outputted, all subsets of S must have been previously outputted, in particular the subsets where only 1 element has been removed. Thus, every time you output a subset, you can add all 1-element extensions of this subset for consideration (a maximum of K), and still be sure that the next outputted subset will be in the list of all considered subsets up to this point.
Fully working, more efficient Python function:
def sorted_subsets(L, K):
    candidates = [(L[i], (i,)) for i in range(min(len(L), K))]
    for j in range(K):
        new = candidates.pop(0)
        yield tuple(L[i] for i in new[1])
        new_candidates = [(L[i] + new[0], (i,) + new[1]) for i in range(new[1][0])]
        candidates = sorted(candidates + new_candidates)[:K - j - 1]
UPDATE, found an O(K log K) algorithm.
This is similar to the trick above, but instead of adding all 1-element extensions with the elements added greater than the max of the subset, you consider only 2 extensions: one that adds max(S)+1, and the other one that shifts max(S) to max(S) + 1 (that would eventually generate all 1-element extensions to the right).
import heapq

def sorted_subsets_faster(L, K):
    candidates = [(L[0], (0,))]
    for j in range(K):
        new = heapq.heappop(candidates)
        yield tuple(L[i] for i in new[1])
        i = new[1][-1]
        if i + 1 < len(L):
            heapq.heappush(candidates, (new[0] + L[i+1], new[1] + (i+1,)))
            heapq.heappush(candidates, (new[0] - L[i] + L[i+1], new[1][:-1] + (i+1,)))
From my benchmarks, it is faster for ALL values of K.
Also, it is not necessary to supply in advance the value of K, we can just iterate and stop whenever, without changing the efficiency of the algorithm. Also note that the number of candidates is bounded by K+1.
It might be possible to improve even further by using a priority deque (min-max heap) instead of a priority queue, but frankly I'm satisfied with this solution. I'd be interested in a linear algorithm though, or a proof that it's impossible.
Here's some rough Python-ish pseudo-code:
final = []
L = L[:K]                      # Anything after the first K is too big already
sorted_candidates = L[]        # the singleton candidate subsets, already in sorted order
while len( final ) < K:
    final.append( sorted_candidates[0] )   # We keep it sorted so the first option
                                           # is always the smallest sum not
                                           # already included

    # If you just added a subset of size A, make a bunch of subsets of size A+1
    expansion = [sorted_candidates[0].add( x )
                 for x in L and x not already included in sorted_candidates[0]]

    # We're done with the first element, so remove it
    sorted_candidates = sorted_candidates[1:]

    # Now go through and build a new set of sorted candidates by getting the
    # smallest possible ones from sorted_candidates and expansion
    new_candidates = []
    for i in range(K - len( final )):
        if sum( expansion[0] ) < sum( sorted_candidates[0] ):
            new_candidates.append( expansion[0] )
            expansion = expansion[1:]
        else:
            new_candidates.append( sorted_candidates[0] )
            sorted_candidates = sorted_candidates[1:]
    sorted_candidates = new_candidates
We'll assume that you will do things like removing the first element of an array in an efficient way, so the only real work in the loop is in building expansion and in rebuilding sorted_candidates. Both of these have fewer than K steps, so as an upper bound, you're looking at a loop that is O(K) and that is run K times, so O(K^2) for the algorithm.
