Is there a difference in the number of comparisons made by merge sort in different cases?
For example,
Case 1: if I divide the array into 2 parts of the same size: T(n) = T(n/2) + T(n/2) + n/2 + n/2 - 1 = T(n/2) + T(n/2) + n - 1
Case 2: T(n) = T(n/4) + T(3n/4) + (n/4) + (3n/4) - 1 = T(n/4) + T(3n/4) + n - 1
Since merging 2 sorted sub-arrays (say of lengths m and n) takes up to m+n-1 comparisons in the worst case, I think the answer is yes, but I am not sure.
And what about dividing the array into $k$ sub-arrays at each level?
Is there an optimal way of dividing that gives the lowest number of comparisons?
I hope this is not a silly question, thanks!
You get the best possible worst-case performance from dividing the array into equal-size parts. Consider the opposite in the extreme case: letting one part be size 1 and the other n-1. That gives you linear recursion depth, and quadratic time.
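As a quick sketch of that claim (my own addition, not part of the original answer), using the worst-case merge cost of m + n - 1 comparisons from the question, the recurrence for the extreme 1/(n-1) split is

$$T(n) = T(n-1) + (n-1) \;\Rightarrow\; T(n) = \sum_{i=1}^{n-1} i = \frac{n(n-1)}{2} = \Theta(n^2),$$

taking T(1) = 0.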
You get n log n (plus/minus some constant) k-way comparisons if you split into k subarrays of size as close to n/k as possible, where log is the base-k logarithm. Note, however, that logarithms of different bases differ only by a constant factor, so you always get O(n log n) as long as k is a constant.
Update: If you do a k-way split, you need a k-way merge, and there are different ways to code that. The most natural way is perhaps to repeat n times: find which of the k subarrays has the smallest not-yet-picked element and pick that. This way, what you get is a total of n log_k n find-the-minimum-of-k-elements operations. For each find-the-minimum, you need to make k−1 compare operations between pairs of elements, so the total number of compares is (k−1)n log_k n. However, it's possible to organize k−1 compares so that you get more information than just which one is the minimum (otherwise selection sort would be optimal)! Is it possible to find a way to do the merge in a way that gets the number of compares down to the optimal n log_2 n that you get with k=2? Maybe, but it would probably be a very complicated merge procedure.
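For concreteness, here is a rough sketch (my own code, not from the answer) of that "most natural" k-way merge: it repeatedly scans the k fronts and picks the minimum, so each element output costs at most k - 1 element comparisons.

def naive_k_way_merge(runs):
    # runs: a list of k sorted lists; pos[r] is the next unpicked index in run r
    pos = [0] * len(runs)
    out = []
    total = sum(len(r) for r in runs)
    for _ in range(total):
        best = None
        for r, run in enumerate(runs):
            if pos[r] < len(run) and (best is None or run[pos[r]] < runs[best][pos[best]]):
                best = r      # at most k - 1 element comparisons per picked element
        out.append(runs[best][pos[best]])
        pos[best] += 1
    return out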
I was curious about k-way merge sort, so I did an experiment, sorting 2^16 = 65536 shuffled numbers with:
Ordinary two-way merge sort.
k-way merge sorts (splitting into k equal-size parts instead of 2) and using a heap for the k-way merge. From k=2 up to k=65536.
Timsort (Python's builtin sorting algorithm), which is an optimized merge sort.
The tracked numbers of comparisons for each one:
two_way: 965,599 True
k_way(2): 965,599 True
k_way(4): 1,180,226 True
k_way(16): 1,194,042 True
k_way(256): 1,132,726 True
k_way(65536): 1,071,758 True
timsort: 963,281 True
Code:
from random import shuffle

def two_way(a):
    # ordinary top-down merge sort, splitting into two halves
    def merge(a, b):
        c = []
        i = j = 0
        m, n = len(a), len(b)
        while i < m and j < n:
            if b[j] < a[i]:
                c.append(b[j])
                j += 1
            else:
                c.append(a[i])
                i += 1
        c += a[i:] or b[j:]
        return c
    def sort(a):
        if i := len(a) // 2:
            return merge(sort(a[:i]), sort(a[i:]))
        return a[:]
    return sort(a)

def k_way(a, k):
    # split into k equal-size parts and k-way merge them with a heap
    from heapq import merge
    def sort(a):
        if m := len(a) // k:
            return list(merge(*(sort(a[j*m:(j+1)*m]) for j in range(k))))
        return a[:]
    return sort(a)

class Int:
    # wrapper that counts every < comparison made on the values
    def __init__(self, value):
        self.value = value
    def __lt__(self, other):
        global comparisons
        comparisons += 1
        return self.value < other.value
    def __repr__(self):
        return str(self.value)

def is_sorted(a):
    return all(x < y for x, y in zip(a, a[1:]))

def report(label, result):
    print(f'{label:13} {comparisons:9,}', is_sorted(result))

a = list(map(Int, range(2**16)))
shuffle(a)

comparisons = 0
report('two_way:', two_way(a[:]))

for e in 1, 2, 4, 8, 16:
    k = 2**e
    comparisons = 0
    report(f'k_way({k}):', k_way(a[:], k))

comparisons = 0
report('timsort:', sorted(a))
I'm looking for an algorithm to generate or iterate through all permutations of a list of objects such that:
They are generated from fewest to most positional changes from the original. So first all the permutations with a single pair of elements swapped, then all the permutations with only two pairs of elements swapped, etc.
The list generated is complete, so for n objects in a list there should be n! total, unique permutations.
Ideally (but not necessarily) there should be a way of specifying (and generating) a particular permutation without having to generate the full list first and then reference the index.
The speed of the algorithm is not particularly important.
I've looked through all the permutation algorithms that I can find, and none so far have met criteria 1 and 2, let alone 3.
I have an idea how I could write this algorithm myself using recursion, and filtering for duplicates to only get unique permutations. However, if there is any existing algorithm I'd much rather use something proven.
This code answers your requirement #3, which is to compute the permutation at index N directly.
This code relies on the following principle:
The first permutation is the identity; then the next (n choose 2) permutations just swap two elements; then the next (n choose 3) * subfactorial(3) permutations derange 3 elements; then the next (n choose 4) * subfactorial(4) permutations derange 4 elements; etc. To find the Nth permutation, first figure out how many elements it deranges by finding the largest K such that the sum over k = 0..K of (n choose k) * subfactorial(k) is ≤ N.
This number K is found by function number_of_derangements_for_permutation_at_index in the code.
Then, the relevant subset of indices which must be deranged is computed efficiently using more_itertools.nth_combination.
However, I didn't have a function nth_derangement to find the relevant derangement of the deranged subset of indices. Hence the last step of the algorithm, which computes this derangement, could be optimised if there exists an efficient function to find the nth derangement of a sequence.
As a result, this last step takes time roughly proportional to idx_r, where idx_r is the index of the derangement, a number between 0 and subfactorial(k), and k is the number of elements deranged by the returned permutation (the scan over permutations can take up to factorial(k) steps).
from sympy import subfactorial
from math import comb
from itertools import count, accumulate, pairwise, permutations
from more_itertools import nth_combination, nth

def number_of_derangements_for_permutation_at_index(n, idx):
    #n = len(seq)
    for k, (low_acc, high_acc) in enumerate(pairwise(accumulate((comb(n,k) * subfactorial(k) for k in count(2)), initial=1)), start=2):
        if low_acc <= idx < high_acc:
            return k, low_acc

def is_derangement(seq, perm):
    return all(i != j for i,j in zip(seq, perm))

def lift_permutation(seq, deranged, permutation):
    result = list(seq)
    for i,j in zip(deranged, permutation):
        result[i] = seq[j]
    return result

# THIS FUNCTION NOT EFFICIENT
def nth_derangement(seq, idx):
    return nth((p for p in permutations(seq) if is_derangement(seq, p)),
               idx)

def nth_permutation(seq, idx):
    if idx == 0:
        return list(seq)
    n = len(seq)
    k, acc = number_of_derangements_for_permutation_at_index(n, idx)
    idx_q, idx_r = divmod(idx - acc, subfactorial(k))
    deranged = nth_combination(range(n), k, idx_q)
    derangement = nth_derangement(deranged, idx_r) # TODO: FIND EFFICIENT VERSION
    return lift_permutation(seq, deranged, derangement)
Testing for correctness on small data:
print( [''.join(nth_permutation('abcd', i)) for i in range(24)] )
# ['abcd',
# 'bacd', 'cbad', 'dbca', 'acbd', 'adcb', 'abdc',
# 'bcad', 'cabd', 'bdca', 'dacb', 'cbda', 'dbac', 'acdb', 'adbc',
# 'badc', 'bcda', 'bdac', 'cadb', 'cdab', 'cdba', 'dabc', 'dcab', 'dcba']
Testing for speed on medium data:
from math import factorial
seq = 'abcdefghij'
n = len(seq) # 10
N = factorial(n) // 2 # 1814400
perm = ''.join(nth_permutation(seq, N))
print(perm)
# fcjdibaehg
Imagine a graph with n! nodes labeled with every permutation of n elements. If we add edges to this graph such that nodes which can be obtained by swapping one pair of elements are connected, an answer to your problem is obtained by doing a breadth-first search from whatever node you like.
You can actually generate the graph, or just let it be implied and deduce at each stage which nodes should be adjacent (and of course keep track of the ones you've already visited, to avoid revisiting them).
I concede this probably doesn't help with point 3, but it may be a viable strategy for answering points 1 and 2.
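For what it's worth, here is a small sketch of that BFS (my own code, not from the answer); it yields permutations in non-decreasing order of the number of pairwise swaps away from the identity:

from collections import deque
from itertools import combinations

def permutations_by_bfs(n):
    start = tuple(range(n))
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        perm = queue.popleft()
        order.append(perm)
        # neighbours differ from perm by swapping one pair of positions
        for i, j in combinations(range(n), 2):
            nxt = list(perm)
            nxt[i], nxt[j] = nxt[j], nxt[i]
            nxt = tuple(nxt)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order   # contains all n! permutations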
To solve 1 & 2, you could first generate all possible permutations, keeping track of how many swaps occurred during generation for each list. Then sort them by number of swaps; using a counting sort on the swap count, the sorting step is O(n!).
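One way to make that concrete (my own sketch, not from the answer): the minimum number of pairwise swaps needed to reach a permutation equals n minus its number of cycles, so that value can serve as the sort key instead of tracking swaps during generation.

from itertools import permutations

def swaps_needed(perm):
    # minimum number of transpositions = n - (number of cycles of the permutation)
    n = len(perm)
    seen = [False] * n
    cycles = 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return n - cycles

def permutations_by_swap_count(n):
    # sorted() is used for brevity; a counting sort on the key (0..n-1) avoids the log factor
    return sorted(permutations(range(n)), key=swaps_needed)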
I am learning the Knuth shuffle, and it says that the random number should be chosen between 0 and i, not between 0 and N-1, and I cannot understand why.
I saw similar questions asked before but cannot find a clear explanation.
Thanks for your attention.
The "true" Fisher-Yates shuffle looks like this:
for i from 0 to n - 1:
    pick index j uniformly from i to n - 1   # [i, n-1]
    swap array[i] and array[j]
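In Python, that loop might read as follows (my own rendering, not part of the answer; note that random.randint is inclusive on both ends):

import random

def fisher_yates(arr):
    n = len(arr)
    for i in range(n):
        j = random.randint(i, n - 1)   # uniform over [i, n-1]
        arr[i], arr[j] = arr[j], arr[i]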
You're proposing this variation:
for i from 0 to n - 1:
    pick index j uniformly from 0 to n - 1   # [0, n-1]
    swap array[i] and array[j]
There are a couple of ways to see why this modification will not produce a uniformly random permutation of the elements.
The first is a clever mathematical argument. The loop in the incorrect version of Fisher-Yates has n iterations. On each iteration, we pick one of n indices to swap with. This means that there are n choices for the index on the first iteration, n choices on the second, ..., and n choices on the last. Overall, that means that the number of different ways the algorithm can execute is n × n × ... × n = n^n.
However, the number of different permutations of n elements is n!. In order for the broken shuffle to produce each of the n! permutations with equal probability, each permutation must be reachable by the same number of paths through the broken implementation (say, k ways). That would mean that k · n! = n^n, meaning that n^n must be a multiple of n!. However, for any n > 2 this isn't the case (for example, 3^3 = 27 is not a multiple of 3! = 6), and so the distribution cannot be uniform.
You can see this empirically. Here's some Python code that uses the broken shuffle 1,000,000 times on a three-element array and looks at the resulting frequencies of the different permutations:
from random import randint
from collections import defaultdict

def incorrect_shuffle(arr):
    for i in range(len(arr)):
        j = randint(0, len(arr) - 1)
        arr[i], arr[j] = arr[j], arr[i]

freq = defaultdict(int)
for i in range(1000000):
    sequence = ['a', 'b', 'c']
    incorrect_shuffle(sequence)
    freq["".join(sequence)] += 1

print(freq)
When I ran this, I got the following frequencies:
abc: 148455
acb: 185328
bac: 184830
bca: 185235
cab: 148017
cba: 148135
Notice that the permutations acb, bac, and bca are all much more likely than the other three.
For more about the sorts of distributions that arise from the broken shuffle, check out this earlier Stack Overflow question.
Is there an efficient algorithm to generate a permutation from a provided index? The permutations do not need to have any specific ordering; it just needs to return every permutation exactly once, one per possible index. The set I wish to permute is all integers from 0~255.
If I understand the question correctly, the problem is as follows: You are given two integers n and k, and you want to find the kth permutation of n integers. You don't care about it being the kth lexicographical permutation, but it's just easier to be lexicographical so let's stick with that.
This is not too bad to compute. The base permutation is 1,2,3,4...n. This is the k=0 case. Consider what happens if you were to swap the 1 and 2: by moving the 1, you are passing up every single permutation where 1 goes first, and there are (n-1)! of those (since you could have permuted 2,3,4..n if you fixed the 1 in place). Thus, the algorithm is as follows:
for i from 1 to n:
    j = k / (n-i)!   // integer division, so rounded down
    k -= j * (n-i)!
    place down the jth unplaced number
This will iteratively produce the kth lexicographic permutation, since it repeatedly solves a sub-problem with a smaller set of numbers to place, reducing k along the way.
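A small runnable version of this pseudocode (my own sketch; the names are mine), producing the kth lexicographic permutation of 1..n with a 0-based k:

from math import factorial

def kth_permutation(n, k):
    unplaced = list(range(1, n + 1))
    result = []
    for i in range(1, n + 1):
        block = factorial(n - i)    # number of permutations per choice of the next number
        j = k // block
        k -= j * block
        result.append(unplaced.pop(j))
    return result

For example, kth_permutation(4, 0) gives [1, 2, 3, 4] and kth_permutation(4, 23) gives [4, 3, 2, 1].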
There is an implementation in Python in the module more-itertools: nth_permutation.
Here is an implementation, adapted from the code of more_itertools.nth_permutation:
from sympy import factorial

def nth_permutation(iterable, index):
    pool = list(iterable)
    n = len(pool)
    c = factorial(n)
    index = index % c
    result = [0] * n
    q = index
    for d in range(1, n + 1):
        q, i = divmod(q, d)
        if 0 <= n - d < n:
            result[n - d] = i
        if q == 0:
            break
    return tuple(map(pool.pop, result))
print( nth_permutation(range(6), 360) )
# (3, 0, 1, 2, 4, 5)
Consider a binary sequence b of length N. Initially, all the bits are set to 0. We define a flip operation with 2 arguments, flip(L,R), such that:
All bits with indices between L and R are "flipped", meaning a bit with value 1 becomes a bit with value 0 and vice-versa. More exactly, for all i in range [L,R]: b[i] = !b[i].
Nothing happens to bits outside the specified range.
You are asked to determine the number of possible different sequences that can be obtained using exactly K flip operations modulo an arbitrary given number, let's call it MOD.
More specifically, each test contains on the first line a number T, the number of queries to be given. Then there are T queries, each one being of the form N, K, MOD with the meaning from above.
1 ≤ N, K ≤ 300 000
T ≤ 250
2 ≤ MOD ≤ 1 000 000 007
Sum of all N-s in a test is ≤ 600 000
time limit: 2 seconds
memory limit: 65536 kbytes
Example :
Input :
1
2 1 1000
Output :
3
Explanation :
There is a single query. The initial sequence is 00. We can do the following operations :
flip(1,1) ⇒ 10
flip(2,2) ⇒ 01
flip(1,2) ⇒ 11
So there are 3 possible sequences that can be generated using exactly 1 flip.
Some quick observations that I've made, although I'm not sure they are totally correct :
If K is big enough, that is if we have a big enough number of flips at our disposal, we should be able to obtain 2^n sequences.
If K=1, then the result we're looking for is N(N+1)/2. It's also C(n,1)+C(n,2), where C is the binomial coefficient.
I'm currently trying a brute-force approach to see if I can spot a rule of some kind. I think the answer is a sum of some binomial coefficients, but I'm not sure.
I've also come across a somewhat simpler variant of this problem, where the flip operation only flips a single specified bit. In that case, the result is
C(n,k) + C(n,k-2) + C(n,k-4) + ... + C(n, 1 or 0). Of course, there's the special case where k > n, but it's not a huge difference. Anyway, it's pretty easy to understand why that happens. I guess it's worth noting.
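For reference, here is that single-bit-flip formula in code (a sketch, following my reading of the formula above): with exactly k single-bit flips, the reachable strings are those whose number of 1s has the same parity as k and is at most min(k, n).

from math import comb

def single_bit_flip_count(n, k):
    return sum(comb(n, j) for j in range(k % 2, min(k, n) + 1, 2))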
Here are a few ideas:
We may assume that no flip operation occurs twice (otherwise, the pair cancels out and we can pretend neither happened). This does affect the number of operations, but I'll talk about that later.
We may assume that no two segments intersect. Indeed, if L1 < L2 < R1 < R2, we can just do the (L1, L2 - 1) and (R1 + 1, R2) flips instead. The case when one segment is inside the other is handled similarly.
We may also assume that no two segments touch each other. Otherwise, we can glue them together and reduce the number of operations.
These observations give the following formula for the number of different sequences one can obtain by flipping exactly k segments without "redundant" flips: C(n + 1, 2 * k) (we choose the 2 * k segment endpoints among the n + 1 boundary positions; they are always distinct, with the left end of each segment taken exclusive).
If we were allowed to perform no more than K flips, the answer would be the sum for k = 0...K of C(n + 1, 2 * k).
Intuitively, it seems that it's possible to transform a sequence of no more than K flips into a sequence of exactly K flips (for instance, we can flip the same segment two more times and add 2 operations; we can also split a segment of more than two elements into two segments and add one operation).
Running the brute-force search suggests (I know that it's not a real proof, but it looks correct combined with the observations mentioned above) that the answer is this sum minus 1 if n or k is equal to 1, and exactly the sum otherwise.
That is, the result is C(n + 1, 0) + C(n + 1, 2) + ... + C(n + 1, 2 * K) - d, where d = 1 if n = 1 or k = 1 and 0 otherwise. (As a sanity check against the example above: n = 2, K = 1 gives C(3, 0) + C(3, 2) - 1 = 3.)
Here is the code I used to look for patterns by running a brute-force search and to verify that the formula is correct for small n and k:
reachable = set()
was = set()

def other(c):
    """
    returns '1' if c == '0' and '0' otherwise
    """
    return '0' if c == '1' else '1'

def flipped(s, l, r):
    """
    Flips the [l, r] segment of the string s and returns the result
    """
    res = s[:l]
    for i in range(l, r + 1):
        res += other(s[i])
    res += s[r + 1:]
    return res

def go(xs, k):
    """
    Exhaustive search. was is used to speed up the search to avoid checking the
    same string with the same number of remaining operations twice.
    """
    p = (xs, k)
    if p in was:
        return
    was.add(p)
    if k == 0:
        reachable.add(xs)
        return
    for l in range(len(xs)):
        for r in range(l, len(xs)):
            go(flipped(xs, l, r), k - 1)

def calc_naive(n, k):
    """
    Counts the number of reachable sequences by running an exhaustive search
    """
    xs = '0' * n
    global reachable
    global was
    was = set()
    reachable = set()
    go(xs, k)
    return len(reachable)

def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

def cnk(n, k):
    if k > n:
        return 0
    return fact(n) // fact(k) // fact(n - k)

def solve(n, k):
    """
    Uses the formula shown above to compute the answer
    """
    res = 0
    for i in range(k + 1):
        res += cnk(n + 1, 2 * i)
    if k == 1 or n == 1:
        res -= 1
    return res

if __name__ == '__main__':
    # Checks that the formula gives the right answer for small values of n and k
    for n in range(1, 11):
        for k in range(1, 11):
            assert calc_naive(n, k) == solve(n, k)
This solution is much better than the exhaustive search. For instance, it can run in O(N * K) time per test case if we compute the coefficients using Pascal's triangle. Unfortunately, it is not fast enough. I know how to solve it more efficiently for prime MOD (using Lucas' theorem), but I do not have a solution in the general case.
Multiplicative modular inverses can't solve this problem immediately as k! or (n - k)! may not have an inverse modulo MOD.
Note: I assumed that C(n, m) is defined for all non-negative n and m and is equal to 0 if n < m.
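For reference, the Lucas'-theorem route for prime MOD mentioned above could look roughly like this (my own sketch, not the answer's code):

from math import comb

def comb_mod_prime(n, m, p):
    # Lucas' theorem: C(n, m) mod p is the product of C(n_i, m_i) mod p
    # over the base-p digits n_i, m_i of n and m
    res = 1
    while m > 0:
        n_i, m_i = n % p, m % p
        if m_i > n_i:
            return 0
        res = res * comb(n_i, m_i) % p
        n //= p
        m //= p
    return res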
I think I know how to solve it for an arbitrary MOD now.
Let's factorize the MOD into prime factors p1^a1 * p2^a2 * ... * pn^an. Now we can solve this problem for each prime factor independently and combine the results using the Chinese remainder theorem.
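A minimal sketch of that combination step (my own code, not the answer's; it assumes the moduli are the pairwise-coprime prime powers p_i^a_i and uses Python 3.8+'s pow(x, -1, m) for the modular inverse):

from math import prod

def crt(residues, moduli):
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M
    return x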
Let's fix a prime p. Let's assume that p^a|MOD (that is, we need to get the result modulo p^a). We can precompute all p-free parts of the factorial and the maximum power of p that divides the factorial for all 0 <= n <= N in linear time using something like this:
powers = [0] * (N + 1)
p_free = [i for i in range(N + 1)]
p_free[0] = 1
cur_p = p
while cur_p <= N:          # iterate over all powers of p that are <= N
    i = cur_p
    while i <= N:
        powers[i] += 1
        p_free[i] //= p
        i += cur_p
    cur_p *= p
Now the p-free part of the factorial is the product of p_free[i] for all i <= n and the power of p that divides n! is the prefix sum of the powers.
Now we can divide two factorials: the p-free part is coprime with p^a so it always has an inverse. The powers of p are just subtracted.
We're almost there. One more observation: we can precompute the inverses of the p-free parts in linear time. Let's compute the inverse of the p-free part of N! using Euclid's algorithm. Now we can iterate over all i from N down to 0: the inverse of the p-free part of i! is the inverse for (i + 1)! times p_free[i + 1] (it's easy to prove if we rewrite the inverse of the p-free part as a product, using the fact that elements coprime with p^a form an abelian group under multiplication).
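Putting these pieces together, here is a rough sketch of computing C(n, m) mod p^a (the table and helper names are mine; for brevity it uses pow(x, -1, mod) for the inverses instead of the linear-time trick just described):

def build_factorial_tables(N, p, a, p_free, powers):
    # pf_fact[i] = p-free part of i! (mod p^a), pw_fact[i] = exponent of p in i!
    mod = p ** a
    pf_fact = [1] * (N + 1)
    pw_fact = [0] * (N + 1)
    for i in range(1, N + 1):
        pf_fact[i] = pf_fact[i - 1] * (p_free[i] % mod) % mod
        pw_fact[i] = pw_fact[i - 1] + powers[i]
    return pf_fact, pw_fact

def comb_mod_prime_power(n, m, p, a, pf_fact, pw_fact):
    if m < 0 or m > n:
        return 0
    mod = p ** a
    e = pw_fact[n] - pw_fact[m] - pw_fact[n - m]   # exponent of p dividing C(n, m)
    if e >= a:
        return 0
    res = pf_fact[n] * pow(pf_fact[m], -1, mod) % mod
    res = res * pow(pf_fact[n - m], -1, mod) % mod
    return res * pow(p, e, mod) % mod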
This algorithm runs in O(N * number_of_prime_factors + the time to solve the system using the Chinese remainder theorem + sqrt(MOD)) time per test case. Now it looks good enough.
You're on a good path with binomial-coefficients already. There are several factors to consider:
Think of your number as a binary-string of length n. Now we can create another array counting the number of times a bit will be flipped:
[0, 1, 0, 0, 1] number
[a, b, c, d, e] number of flips.
But even numbers of flips all lead to the same result, and so do all odd numbers of flips. So basically the relevant part of the distribution can be represented mod 2.
Logical next question: how many different combinations of even and odd values are available? We'll take care of the ordering later on; for now just assume the flipping-array is ordered descending for simplicity. We start off with k as the only flipping-number in the array. Now we want to add a flip. Since the whole flipping-array is used mod 2, we need to remove two from the value of k to achieve this and insert them into the array separately. E.g.:
[5, 0, 0, 0] mod 2 [1, 0, 0, 0]
[3, 1, 1, 0] [1, 1, 1, 0]
[4, 1, 0, 0] [0, 1, 0, 0]
As the last example shows (remember we're operating modulo 2 in the final result), moving a single 1 doesn't change the number of flips in the final outcome. Thus we always have to flip an even number of bits in the flipping-array. If k is even, so is the number of flipped bits, and vice versa, no matter what the value of n is.
So now the question is of course how many different ways of filling the array are available? For simplicity we'll start with mod 2 right away.
Obviously we start with 1 flipped bit if k is odd, otherwise with 0. And we always add 2 flipped bits. We can continue with this until we either have flipped all n bits (or at least as many as we can flip)
v = (k % 2 == n % 2) ? n : n - 1
or we can't spread k further over the array.
v = k
Putting this together:
noOfAvailableFlips:
    if k < n:
        return k
    else:
        return (k % 2 == n % 2) ? n : n - 1
So far so good: there are always roughly v / 2 flipping-arrays (mod 2) that differ in the number of flipped bits. Now we come to the next part: permuting these arrays. This is just a simple permutation function (permutation with repetition, to be precise):
flipArrayNo(flippedbits):
    return factorial(n) / (factorial(flippedbits) * factorial(n - flippedbits))
Putting it all together:
solutionsByFlipping(n, k):
    res = 0
    for i in [k % 2, noOfAvailableFlips(), step=2]:
        res += flipArrayNo(i)
    return res
This also shows that for sufficiently large numbers we can't obtain 2^n sequences, for the simple reason that we cannot arrange the operations as we please. The number of flips that actually affect the outcome will always be either even or odd, depending on k. There's no way around this. The best result one can get is 2^(n-1) sequences.
For completeness, here's a dynamic program. It can deal easily with an arbitrary modulus since it is based on sums, but unfortunately I haven't found a way to speed it up beyond O(n * k).
Let a[n][k] be the number of binary strings of length n with k non-adjacent blocks of contiguous 1s that end in 1. Let b[n][k] be the number of binary strings of length n with k non-adjacent blocks of contiguous 1s that end in 0.
Then:
# we can append 1 to any arrangement of k non-adjacent blocks of contiguous 1's
# that ends in 1, or to any arrangement of (k-1) non-adjacent blocks of contiguous
# 1's that ends in 0:
a[n][k] = a[n - 1][k] + b[n - 1][k - 1]
# we can append 0 to any arrangement of k non-adjacent blocks of contiguous 1's
# that ends in either 0 or 1:
b[n][k] = b[n - 1][k] + a[n - 1][k]
# complete answer would be sum (a[n][i] + b[n][i]) for i = 0 to k
I wonder if the following observations might be useful: (1) a[n][k] and b[n][k] are zero when n < 2*k - 1, and (2) on the flip side, for values of k greater than ⌊(n + 1) / 2⌋ the overall answer seems to be identical.
Python code (full matrices are defined for simplicity, but I think only one row of each would actually be needed, space-wise, for a bottom-up method):
a = [[0] * 11 for i in range(0,11)]
b = [([1] + [0] * 10) for i in range(0,11)]

def f(n,k):
    return fa(n,k) + fb(n,k)

def fa(n,k):
    global a
    if a[n][k] or n == 0 or k == 0:
        return a[n][k]
    elif n == 2*k - 1:
        a[n][k] = 1
        return 1
    else:
        a[n][k] = fb(n-1,k-1) + fa(n-1,k)
        return a[n][k]

def fb(n,k):
    global b
    if b[n][k] or n == 0 or n == 2*k - 1:
        return b[n][k]
    else:
        b[n][k] = fb(n-1,k) + fa(n-1,k)
        return b[n][k]

def g(n,k):
    return sum([f(n,i) for i in range(0,k+1)])

# example
print(g(10,10))

for i in range(0,11):
    print(a[i])

print()

for i in range(0,11):
    print(b[i])
I have a list of N positive numbers sorted in ascending order, L[0] to L[N-1].
I want to iterate over subsets of M distinct list elements (without replacement, order not important), 1 <= M <= N, sorted according to their partial sum. M is not fixed, the final result should consider all possible subsets.
I only want the K smallest subsets efficiently (ideally polynomial in K). The obvious algorithm of enumerating all subsets with M <= K is O(K!).
I can reduce the problem to subsets of fixed size M, by placing K iterators (1 <= M <= K) in a min-heap and having the master iterator operate on the heap root.
Essentially I need the Python function call:
sorted(itertools.combinations(L, M), key=sum)[:K]
... but efficient (N ~ 200, K ~ 30), should run in less than 1sec.
Example:
L = [1, 2, 5, 10, 11]
K = 8
answer = [(1,), (2,), (1,2), (5,), (1,5), (2,5), (1,2,5), (10,)]
Answer:
As David's answer shows, the important trick is that for a subset S to be outputted, all subsets of S must have been previously outputted, in particular the subsets where only 1 element has been removed. Thus, every time you output a subset, you can add all 1-element extensions of this subset for consideration (a maximum of K), and still be sure that the next outputted subset will be in the list of all considered subsets up to this point.
Fully working, more efficient Python function:
def sorted_subsets(L, K):
    candidates = [(L[i], (i,)) for i in xrange(min(len(L), K))]
    for j in xrange(K):
        new = candidates.pop(0)
        yield tuple(L[i] for i in new[1])
        new_candidates = [(L[i] + new[0], (i,) + new[1]) for i in xrange(new[1][0])]
        candidates = sorted(candidates + new_candidates)[:K-j-1]
UPDATE, found an O(K log K) algorithm.
This is similar to the trick above, but instead of adding all 1-element extensions with the elements added greater than the max of the subset, you consider only 2 extensions: one that adds max(S)+1, and the other one that shifts max(S) to max(S) + 1 (that would eventually generate all 1-element extensions to the right).
import heapq

def sorted_subsets_faster(L, K):
    candidates = [(L[0], (0,))]
    for j in xrange(K):
        new = heapq.heappop(candidates)
        yield tuple(L[i] for i in new[1])
        i = new[1][-1]
        if i+1 < len(L):
            heapq.heappush(candidates, (new[0] + L[i+1], new[1] + (i+1,)))
            heapq.heappush(candidates, (new[0] - L[i] + L[i+1], new[1][:-1] + (i+1,)))
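For reference, a quick check against the example from the question (my own test; on Python 3, replace xrange with range):

L = [1, 2, 5, 10, 11]
print(list(sorted_subsets_faster(L, 8)))
# [(1,), (2,), (1, 2), (5,), (1, 5), (2, 5), (1, 2, 5), (10,)]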
From my benchmarks, it is faster for ALL values of K.
Also, it is not necessary to supply in advance the value of K, we can just iterate and stop whenever, without changing the efficiency of the algorithm. Also note that the number of candidates is bounded by K+1.
It might be possible to improve even further by using a priority deque (min-max heap) instead of a priority queue, but frankly I'm satisfied with this solution. I'd be interested in a linear algorithm though, or a proof that it's impossible.
Here's some rough Python-ish pseudo-code:
final = []
L = L[:K]  # Anything after the first K is too big already
sorted_candidates = [ [x] for x in L ]  # start from the single-element subsets
while len( final ) < K:
    final.append( sorted_candidates[0] )  # We keep it sorted so the first option
                                          # is always the smallest sum not
                                          # already included

    # If you just added a subset of size A, make a bunch of subsets of size A+1
    expansion = [sorted_candidates[0].add( x )
                 for x in L and x not already included in sorted_candidates[0]]

    # We're done with the first element, so remove it
    sorted_candidates = sorted_candidates[1:]

    # Now go through and build a new set of sorted candidates by getting the
    # smallest possible ones from sorted_candidates and expansion
    new_candidates = []
    for i in range(K - len( final )):
        if sum( expansion[0] ) < sum( sorted_candidates[0] ):
            new_candidates.append( expansion[0] )
            expansion = expansion[1:]
        else:
            new_candidates.append( sorted_candidates[0] )
            sorted_candidates = sorted_candidates[1:]
    sorted_candidates = new_candidates
We'll assume that you will do things like removing the first element of an array in an efficient way, so the only real work in the loop is in building expansion and in rebuilding sorted_candidates. Both of these have fewer than K steps, so as an upper bound, you're looking at a loop that is O(K) and that is run K times, so O(K^2) for the algorithm.