Dynamic Programming Solution for a Variant of Coin Exchange

I am practicing Dynamic Programming. I am focusing on the following variant of the coin exchange problem:
Let S = [1, 2, 6, 12, 24, 48, 60] be a constant set of integer coin denominations. Let n be a positive integer amount of money attainable via coins in S. Consider two persons A and B. In how many different ways can I split n among persons A and B so that each person gets the same amount of coins (disregarding the actual amount of money each gets)?
Example
n = 6 can be split between the two persons in 4 different ways:
Person A gets {2, 2} and person B gets {1, 1}.
Person A gets {2, 1} and person B gets {2, 1}.
Person A gets {1, 1} and person B gets {2, 2}.
Person A gets {1, 1, 1} and person B gets {1, 1, 1}.
Notice that each way is non-redundant per person, i.e. we do not count both {2, 1} and {1, 2} as two different ways.
Previous research
I have studied very similar DP problems, such as the coin exchange problem and the partition problem. In fact, there are questions on this site referring to almost the same problem:
Dynamic Programming for a variant of the coin exchange - Here, OP studies the recursion relationship, but seems confused introducing the parity constraint.
Coin Change :Dynamic Programming - Here, OP seems to pursue the reconstruction of the solution.
Coin change(Dynamic programming) - Here, OP seems to also pursue the reconstruction of the solution.
https://cs.stackexchange.com/questions/87230/dynamic-programming-for-a-variant-of-the-coin-exchange-problem - Here, OP seems to ask about a similar problem, yet parity, i.e. splitting into two persons, becomes the main issue.
I am interested mostly in the recursion relation that could help me solve this problem. Defining it will allow me to easily apply either a memoization or a tabulation approach to design an algorithm for this problem.
For example, this recursion:
def f(n, coins):
    if n < 0:
        return 0
    if n == 0:
        return 1
    return sum(f(n - coin, coins) for coin in coins)
is tempting, yet it does not work: f(6, [1, 2, 6]) returns 14, because it counts ordered sequences of coins (compositions) rather than distinct combinations, and it ignores the constraint that both persons receive the same number of coins.
Here's an example of a run for S' = {1, 2, 6} and n = 6, in order to help me clarify the pattern (there might be errors):

This is what you can try:
Let C(n, k, S) be the number of distinct representations of an amount n using some k coins from S.
Then C(n, k, S) = sum(C(n - s_i, k - 1, S[i:])), where the summation is over every s_i from S. S[i:] means all the elements of S from the i-th element to the end; we need this to prevent repeated combinations.
The initial conditions are C(0, 0, _) = 1, and C(n, k, _) = 0 if n < 0, or k < 0, or (n > 0 and k < 1).
The number you want to calculate:
R = sum(C(i, k, S) * C(n - i, k, S)) for i = 1..n-1, k = 1..min(i, n-i)/Smin, where Smin is the smallest coin denomination in S.
The value min(i, n-i)/Smin is the maximum number of coins possible when partitioning the given sum. For example, if the sum n = 20, i = 8 (the 1st person gets $8, the 2nd gets $12) and the minimum coin denomination is $2, the maximum possible number of coins is 8/2 = 4: you can't make $8 with more than 4 coins.
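A minimal memoized sketch of this recurrence (the names C and split_count are mine; C(n, k, i) stands for C(n, k, S[i:]) with S fixed):

```python
from functools import lru_cache

S = (1, 2, 6)  # the S' from the question's small example

@lru_cache(maxsize=None)
def C(n, k, i):
    """Number of distinct multisets of exactly k coins from S[i:] summing to n."""
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    # Fix the smallest-index coin used; restricting the rest to the suffix
    # S[j:] is what prevents a combination from being counted twice.
    return sum(C(n - S[j], k - 1, j) for j in range(i, len(S)))

def split_count(n):
    """R from above: ordered splits (a, n - a) with the same coin count k."""
    smin = min(S)
    return sum(C(a, k, 0) * C(n - a, k, 0)
               for a in range(1, n)
               for k in range(1, min(a, n - a) // smin + 1))
```

With the example's S' = (1, 2, 6), split_count(6) reproduces the 4 ways listed in the question.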

Here is a table implementation and a little elaboration on algrid's beautiful answer. This produces an answer for f(500, [1, 2, 6, 12, 24, 48, 60]) in about 2 seconds.
The simple declaration of C(n, k, S) = sum(C(n - s_i, k - 1, S[i:])) means adding all the ways to get to the current sum n using k coins. Then, if we split n into all the ways it can be partitioned in two, we can just add all the ways each of those parts can be made from the same number, k, of coins.
The beauty of fixing the subset of coins we choose from to a diminishing list is that any arbitrary combination of coins will only be counted once: it is counted in the calculation where the leftmost coin in the combination is the first coin in our diminishing subset (assuming we order them in the same way). For example, the arbitrary combination [6, 24, 48], taken from [1, 2, 6, 12, 24, 48, 60], is only counted in the summation for the subset [6, 12, 24, 48, 60], since the next subset, [12, 24, 48, 60], does not include 6, and every combination counted under the previous subset, [2, 6, 12, 24, 48, 60], contains at least one 2 coin.
Python code:
import time

def f(n, coins):
    t0 = time.time()
    min_coins = min(coins)
    # m[_n][k][i] = number of ways to make _n with k coins drawn from coins[:i+1]
    m = [[[0] * len(coins) for k in range(n // min_coins + 1)] for _n in range(n + 1)]
    # Initialize base case
    for i in range(len(coins)):
        m[0][0][i] = 1
    for i in range(len(coins)):
        for _i in range(i + 1):
            for _n in range(coins[_i], n + 1):
                for k in range(1, _n // min_coins + 1):
                    m[_n][k][i] += m[_n - coins[_i]][k - 1][_i]
    result = 0
    for a in range(1, n + 1):
        b = n - a
        for k in range(1, n // min_coins + 1):
            result = result + m[a][k][len(coins) - 1] * m[b][k][len(coins) - 1]
    total_time = time.time() - t0
    return (result, total_time)

print(f(500, [1, 2, 6, 12, 24, 48, 60]))

Related

Find all unique sets of k non-negative integers summing to n

The accepted answer to this question provides an implementation of an algorithm that, given two numbers k and n, can generate all combinations (excluding permutations) of k positive integers which sum to n.
I'm looking for a very similar algorithm which essentially calculates the same thing, except that the requirement that every part be > 0 is dropped, i.e. for k = 4, n = 4, the output should be
[0, 0, 0, 4], [0, 0, 1, 3], ... (in any order).
I have tried modifying the code snippet I linked but I have so far not had any success whatsoever. How can I efficiently implement this? (pseudo-code would be sufficient)
def partitions(Sum, K, lst, Minn=0):
    '''Enumerate nondecreasing K-tuples of integers >= Minn summing to Sum'''
    if K == 0:
        if Sum == 0:
            print(lst)
        return
    for i in range(Minn, Sum + 1):
        partitions(Sum - i, K - 1, lst + [i], i)

partitions(6, 3, [])
[0, 0, 6]
[0, 1, 5]
[0, 2, 4]
[0, 3, 3]
[1, 1, 4]
[1, 2, 3]
[2, 2, 2]
This code is quite close to the linked answer's idea; the only changes are that the lower limit is 0 and that, correspondingly, the stop value n - size + 1 is no longer needed.
You could use the code provided in the other thread as is.
Then you want to get all of the sets for set sizes 1 to k, and if your current set size i is less than k, pad with k - i zeros, i.e.
fun nonZeroSums(k, n):
    for i in 1 to k:
        [pad with k - i 0's] concat sum_to_n(i, n)
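A self-contained sketch of that padding idea (both function names are mine; positive_partitions plays the role of the linked sum_to_n):

```python
def positive_partitions(n, k, minn=1):
    """Yield all nondecreasing k-tuples of integers >= minn summing to n."""
    if k == 0:
        if n == 0:
            yield ()
        return
    for i in range(minn, n + 1):
        for rest in positive_partitions(n - i, k - 1, i):
            yield (i,) + rest

def nonneg_partitions(n, k):
    """Pad the shorter positive partitions with leading zeros up to length k."""
    for size in range(k + 1):
        for p in positive_partitions(n, size):
            yield (0,) * (k - size) + p
```

For n = 4, k = 4 this yields (0, 0, 0, 4), (0, 0, 1, 3), (0, 0, 2, 2), (0, 1, 1, 2) and (1, 1, 1, 1).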

Number of permutations of 1 to n that increase up to some index i and decrease afterwards

for a given N how many permutations of [1, 2, 3, ..., N] satisfy the following property.
Let P1, P2, ..., PN denote the permutation. The property we want to satisfy is that there exists an i between 2 and N-1 (inclusive) such that
Pj > Pj+1 for all i ≤ j ≤ N - 1;
Pj > Pj-1 for all 2 ≤ j ≤ i.
For example, for N = 3, the permutations [1, 3, 2] and [2, 3, 1] satisfy the property.
Is there any direct formula or algorithm to count or generate these permutations programmatically?
There are 2^(n-1) - 2 such permutations. If n is the largest element, then the permutation is uniquely determined by the nonempty, proper subset of {1, 2, ..., n-1} which lies to the left of n in the permutation. This answer is consistent with the excellent answer of @גלעדברקן in view of the well-known fact that the elements in each row of Pascal's triangle sum to a power of two (hence the part of the row between the two ones is two less than a power of two).
Here is a Python enumeration which generates all n! permutations and checks them for validity:
import itertools

def validPerm(p):
    n = max(p)
    i = p.index(n)
    if i == 0 or i == n - 1:
        return False
    else:
        before = p[:i]
        after = p[i+1:]
        return before == sorted(before) and after == sorted(after, reverse=True)

def validPerms(n):
    nums = list(range(1, n + 1))
    valids = []
    for p in itertools.permutations(nums):
        lp = list(p)
        if validPerm(lp):
            valids.append(lp)
    return valids
For example,
>>> validPerms(4)
[[1, 2, 4, 3], [1, 3, 4, 2], [1, 4, 3, 2], [2, 3, 4, 1], [2, 4, 3, 1], [3, 4, 2, 1]]
which gives the expected number of 6.
On further edit: The above code was to verify the formula for nondegenerate unimodal permutations (to coin a phrase, since "unimodal permutations" is used in the literature for the 2^(n-1) permutations with exactly one peak, but the 2 which either begin or end with n are arguably in some sense degenerate). From an enumeration point of view you would want to do something more efficient. The following is a Python implementation of the idea behind the answer of @גלעדברקן:
def validPerms(n):
    valids = []
    nums = list(range(1, n))  # 1, 2, ..., n-1
    snums = set(nums)
    for i in range(1, n - 1):
        for first in itertools.combinations(nums, i):
            # first is already sorted
            rest = sorted(snums - set(first), reverse=True)
            valids.append(list(first) + [n] + rest)
    return valids
It is functionally equivalent to the above code, but substantially more efficient.
Let's look at an example:
{1,2,3,4,5,6}
Clearly, positioning 6 at index i forces the elements to its right to be sorted descending and those to its left ascending. For example, for i = 3:
{1,2,6,5,4,3}
{1,3,6,5,4,2}
{1,4,6,5,3,2}
...
So for each positioning of N between 2 and n-1, we have (n - 1) choose (position - 1) arrangements. This leads to the answer:
sum [(n - 1) choose (i - 1)], for i = 2...(n - 1)
There are 2^(n-1) - 2 such permutations: start from 2^(n-1), the sum of the whole row (n-1 choose 0) + ... + (n-1 choose n-1) of Pascal's triangle, and subtract 2, because i must satisfy 2 ≤ i ≤ n-1 and the two excluded terms are (n-1 choose 0) = (n-1 choose n-1) = 1.
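The count can be checked numerically with a brute-force enumerator (count_peaked is my name; it re-derives the 2^(n-1) - 2 figure independently of the code above):

```python
import math
from itertools import permutations

def count_peaked(n):
    """Brute-force count of permutations of 1..n that strictly increase up
    to an internal position of n and strictly decrease after it."""
    count = 0
    for p in permutations(range(1, n + 1)):
        i = p.index(n)
        if 0 < i < n - 1 \
                and list(p[:i]) == sorted(p[:i]) \
                and list(p[i+1:]) == sorted(p[i+1:], reverse=True):
            count += 1
    return count

# Check the closed form and the binomial sum against brute force.
for n in range(3, 8):
    assert count_peaked(n) == 2 ** (n - 1) - 2
    assert count_peaked(n) == sum(math.comb(n - 1, i - 1) for i in range(2, n))
```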

Algorithm to generate Diagonal Latin Square matrix

I need, for a given N, to create an N*N matrix which has no repetitions in rows, columns, or the major and minor diagonals, and whose values are 1, 2, 3, ..., N.
For N = 4 one of matrices is the following:
1 2 3 4
3 4 1 2
4 3 2 1
2 1 4 3
Problem overview
The math structure you described is a Diagonal Latin Square. Constructing one is more a mathematical problem than an algorithmic or programming one.
To correctly understand what it is and how to create one, you should read the following articles:
Latin squares definition
Magic squares definition
Diagonal Latin square construction <-- p.2 is answer to your question with proof and with other interesting properties
Short answer
One of the possible ways to construct Diagonal Latin Square:
Let N be the order of the required matrix L.
If there exist numbers A and B in the range [0; N-1] which satisfy these properties:
A is relatively prime to N
B is relatively prime to N
(A + B) is relatively prime to N
(A - B) is relatively prime to N
then you can create the required matrix with the following rule:
L[i][j] = (A * i + B * j) mod N
(add 1 to each entry if you need the values 1..N instead of 0..N-1).
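A sketch of this construction with a validity check (function names are mine). Note that suitable A and B exist only for some N, e.g. N = 5 or 7 with A = 1, B = 2; they never exist for even N, since A and B would both have to be odd, making A + B even:

```python
from math import gcd

def diagonal_latin_square(n, a, b):
    """Build L[i][j] = (a*i + b*j) mod n, shifted to the values 1..n.
    Valid whenever a, b, a+b and a-b are all relatively prime to n."""
    for x in (a, b, a + b, a - b):
        assert gcd(abs(x), n) == 1, "A/B do not satisfy the coprimality conditions"
    return [[(a * i + b * j) % n + 1 for j in range(n)] for i in range(n)]

def is_diagonal_latin(square):
    """Check rows, columns and both main diagonals against the values 1..n."""
    n = len(square)
    full = set(range(1, n + 1))
    return (all(set(row) == full for row in square)
            and all({square[i][j] for i in range(n)} == full for j in range(n))
            and {square[i][i] for i in range(n)} == full
            and {square[i][n - 1 - i] for i in range(n)} == full)
```

For example, diagonal_latin_square(5, 1, 2) passes is_diagonal_latin; an even order such as N = 4 needs a different construction, e.g. a brute-force search.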
It would be nice to do this mathematically, but I'll propose the simplest algorithm that I can think of - brute force.
At a high level
we can represent a matrix as an array of arrays
for a given N, construct S a set of arrays, which contains every combination of [1..N]. There will be N! of these.
using a recursive selection process (e.g. a search tree), search through orderings of these arrays, backtracking whenever one of the 'uniqueness' rules is broken
For example, in your N = 4 problem, I'd construct
S = [
[1,2,3,4], [1,2,4,3]
[1,3,2,4], [1,3,4,2]
[1,4,2,3], [1,4,3,2]
[2,1,3,4], [2,1,4,3]
[2,3,1,4], [2,3,4,1]
[2,4,1,3], [2,4,3,1]
[3,1,2,4], [3,1,4,2]
// etc
]
R = new int[4][4]
Then the algorithm is something like
1. If R is 'full', you're done.
2. Check whether the next row from S fits into R:
if yes, insert it into R, reset the iterator on S, and go to 1;
if no, increment the iterator on S.
3. If there are more rows to check in S, go to 2.
4. Else you've iterated across S and none of the rows fit, so remove the most recent row added to R and go to 1. In other words, explore another branch.
To improve the efficiency of this algorithm, implement a better data structure. Rather than a flat array of all combinations, use a prefix tree / Trie of some sort to both reduce the storage size of the 'options' and reduce the search area within each iteration.
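The row-based backtracking described above can be sketched as follows (a brute-force sketch with names of my choosing; no trie optimization is applied):

```python
from itertools import permutations

def latin_by_rows(n):
    """Backtrack over candidate rows: keep a row only if it breaks no
    column or diagonal uniqueness rule against the rows placed so far."""
    rows = [list(p) for p in permutations(range(1, n + 1))]

    def ok(square, row):
        i = len(square)  # index the new row would occupy
        for j, v in enumerate(row):
            if any(square[k][j] == v for k in range(i)):
                return False                                  # column clash
        if any(square[k][k] == row[i] for k in range(i)):
            return False                                      # main diagonal clash
        if any(square[k][n - 1 - k] == row[n - 1 - i] for k in range(i)):
            return False                                      # anti-diagonal clash
        return True

    def search(square):
        if len(square) == n:
            return square
        for row in rows:
            if ok(square, row):
                result = search(square + [row])
                if result:
                    return result
        return None  # no row fits: caller removes its row and tries the next

    return search([])
```

This is only practical for small N (the candidate list alone has N! rows), which is exactly why the pruning and trie ideas above matter.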
Here's a method which is fast for N <= 9 (Python):
import random

def generate(n):
    a = [[0] * n for _ in range(n)]

    def rec(i, j):
        if i == n - 1 and j == n:
            return True
        if j == n:
            return rec(i + 1, 0)
        candidate = set(range(1, n + 1))
        for k in range(i):
            candidate.discard(a[k][j])              # column constraint
        for k in range(j):
            candidate.discard(a[i][k])              # row constraint
        if i == j:
            for k in range(i):
                candidate.discard(a[k][k])          # main diagonal
        if i + j == n - 1:
            for k in range(i):
                candidate.discard(a[k][n - 1 - k])  # anti-diagonal
        candidate_list = list(candidate)
        random.shuffle(candidate_list)
        for e in candidate_list:
            a[i][j] = e
            if rec(i, j + 1):
                return True
        a[i][j] = 0
        return False

    rec(0, 0)
    return a

for row in generate(9):
    print(row)
Output:
[8, 5, 4, 7, 1, 6, 2, 9, 3]
[2, 7, 5, 8, 4, 1, 3, 6, 9]
[9, 1, 2, 3, 6, 4, 8, 7, 5]
[3, 9, 7, 6, 2, 5, 1, 4, 8]
[5, 8, 3, 1, 9, 7, 6, 2, 4]
[4, 6, 9, 2, 8, 3, 5, 1, 7]
[6, 3, 1, 5, 7, 9, 4, 8, 2]
[1, 4, 8, 9, 3, 2, 7, 5, 6]
[7, 2, 6, 4, 5, 8, 9, 3, 1]

N-fold partition of an array with equal sum in each partition

Given an array of integers a, two numbers N and M, return N group of integers from a such that each group sums to M.
For example, say:
a = [1,2,3,4,5]
N = 2
M = 5
Then the algorithm could return [2, 3], [1, 4] or [5], [2, 3] or possibly others.
What algorithms could I use here?
Edit:
I wasn't aware that this problem is NP complete. So maybe it would help if I provided more details on my specific scenario:
So I'm trying to create a "match-up" application. Given the number of teams N and the number of players per team M, the application listens for client requests. Each client request will give a number of players that the client represents. So if I need 2 teams of 5 players, then if 5 clients send requests, each representing 1, 2, 3, 4, 5 players respectively, then my application should generate a match-up between clients [1, 4] and clients [2, 3]. It could also generate a match-up between [1, 4] and [5]; I don't really care.
One implication is that any client representing more than M or less than 0 players is invalid. Hope this could simplify the problem.
This appears to be a variation of the subset sum problem. As this problem is NP-complete, there will be no efficient algorithm without further constraints.
Note that it is already hard to find a single subset of the original set whose elements sum to M.
People give up too easily on NP-complete problems. Just because a problem is NP-complete doesn't mean that there aren't more and less efficient algorithms in the general case. That is, you can't guarantee that for all inputs there is an answer that can be computed faster than a brute-force search, but for many problems you can certainly have methods that are faster than the full search for most inputs.
For this problem there are certainly 'perverse' sets of numbers that will result in worst-case search times, because there may be, say, a large vector of integers but only one solution, and you have to end up trying a very large number of combinations.
But for non-perverse sets there are probably many solutions, and an efficient way of 'tripping over' a good partitioning will run much faster than in NP time.
How you solve this will depend a lot on what you expect to be the more common parameters. It also makes a difference if the integers are all positive, or if negatives are allowed.
In this case I'll assume that:
N is small relative to the length of the vector
All integers are positive.
Integers cannot be re-used.
Algorithm:
Sort the vector, v.
Eliminate elements bigger than M. They can't be part of any solution.
Add up all remaining numbers in v, divide by N. If the result is smaller than M, there is no solution.
Create a new array w, the same size as v. For each i, w[i] is the sum of all the numbers in v[i+1..end].
So if v was 5 4 3 2 1, w would be 10, 6, 3, 1, 0.
While you have not found enough sets:
Choose the largest number, x. If it is equal to M, emit a solution set with just x, remove it from the vector, and remove the first element from w.
Still not enough sets? (Likely.) Then, while you have not found enough sets:
A solution theory is ([a,b,c], R), where [a,b,c] is a partial set of elements of v and R is a remainder: R = M - sum([a,b,c]). Extending a theory means adding a number to the partial set and subtracting that number from R. As you extend the theories, if R == 0, that is a possible solution.
Recursively create theories like so: loop over the elements of v, creating a theory ([v[i]], R) for each v[i], and recursively extend each theory from the remaining part of v. Binary search into v to find the first element equal to or smaller than R, call it v[j]. Starting with v[j], extend each theory with elements of v from j onwards, stopping at the first index k where R > w[k].
The numbers from v[j] to v[k] are the only numbers that can be used to extend a theory and still get R to 0: numbers before v[j] would make R negative, and beyond v[k] there isn't enough left in the array to bring R down to 0 even if you used all of it.
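A hedged sketch of this theory-extension search (names are mine; v is sorted descending, and suffix sums play the role of w, indexed so that suffix[i] = sum of v[i:]):

```python
def find_subset(v, M):
    """Find one subset of positive integers v summing to M, or None.
    Prunes theories whose remaining suffix cannot cover R, and skips
    elements that would overshoot R."""
    v = sorted(v, reverse=True)
    n = len(v)
    # suffix[i] = sum of v[i:], used to abandon hopeless theories early
    suffix = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + v[i]

    def extend(i, R, chosen):
        if R == 0:
            return chosen
        if i == n or suffix[i] < R:
            return None                 # can't reach R with what's left
        for j in range(i, n):
            if v[j] > R:
                continue                # this element overshoots the remainder
            found = extend(j + 1, R - v[j], chosen + [v[j]])
            if found is not None:
                return found
        return None

    return extend(0, M, [])
```

A linear skip stands in for the binary search into v; with v sorted descending, a bisect on the reversed list would find the first element <= R directly.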
Here is my own Python solution that uses dynamic programming (the standard subset-sum table).
def get_subset(lst, s):
    '''Given a list of integers `lst` and an integer `s`, returns
    a subset of lst that sums to s, as well as lst minus that subset
    '''
    q = {}
    for i in range(len(lst)):
        for j in range(1, s + 1):
            if lst[i] == j:
                q[(i, j)] = (True, [j])
            elif i >= 1 and q[(i - 1, j)][0]:
                q[(i, j)] = (True, q[(i - 1, j)][1])
            elif i >= 1 and j >= lst[i] and q[(i - 1, j - lst[i])][0]:
                q[(i, j)] = (True, q[(i - 1, j - lst[i])][1] + [lst[i]])
            else:
                q[(i, j)] = (False, [])
        if q[(i, s)][0]:
            for k in q[(i, s)][1]:
                lst.remove(k)
            return q[(i, s)][1], lst
    return None, lst

def get_n_subset(n, lst, s):
    '''Returns n subsets of lst, each of which sums to s'''
    solutions = []
    for i in range(n):
        sol, lst = get_subset(lst, s)
        solutions.append(sol)
    return solutions, lst

# print(get_n_subset(7, [1, 2, 3, 4, 5, 7, 8, 4, 1, 2, 3, 1, 1, 1, 2], 5))
# [stdout]: ([[2, 3], [1, 4], [5], [4, 1], [2, 3], [1, 1, 1, 2], None], [7, 8])

Transform a set of large integers into a set of small ones

How do we recode a set of strictly increasing (or strictly decreasing) positive integers P, to decrease the number of positive integers that can occur between the integers in our set?
Why would we want to do this: Say we want to randomly sample P but 1.) P is too large to enumerate, and 2.) members of P are related in a nonrandom way, but in a way that is too complicated to sample by. However, we know a member of P when we see it. Say we know P[0] and P[n] but can't entertain the idea of enumerating all of P or understanding precisely how members of P are related. Likewise, the number of all possible integers occurring between P[0] and P[n] are many times greater than the size of P, making the chance of randomly drawing a member of P very small.
Example: Let P[0] = 2101010101 & P[n] = 505050505. Now, maybe we're only interested in integers between P[0] and P[n] that have a specific quality (e.g. all integers in P[x] sum to Q or less, each member of P has 7 or less as the largest integer). So, not all positive integers P[n] <= X <= P[0] belong to P. The P I'm interested in is discussed in the comments below.
What I've tried: If P is a strictly decreasing set and we know P[0] and P[n], then we can treat each member as if it were subtracted from P[0]. Doing so decreases each number, perhaps greatly and maintains each member as a unique integer. For the P I'm interested in (below), one can treat each decreased value of P as being divided by a common denominator (9,11,99), which decreases the number of possible integers between members of P. I've found that used in conjunction, these approaches decrease the set of all P[0] <= X <= P[n] by a few orders of magnitude, making the chance of randomly drawing a member of P from all positive integers P[n] <= X <= P[0] still very small.
Note: As should be clear, we have to know something about P. If we don't, that basically means we have no clue of what we're looking for. When we randomly sample integers between P[0] and P[n] (recoded or not) we need to be able to say "Yup, that belongs to P.", if indeed it does.
A good answer could greatly increase the practical application of a computing algorithm I have developed. An example of the kind of P I'm interested in is given in comment 2. I am adamant about giving due credit.
While the original question is asking about a very generic scenario concerning integer encodings, I would suggest that it is unlikely that there exists an approach that works in complete generality. For example, if the P[i] are more or less random (from an information-theoretic standpoint), I would be surprised if anything should work.
So, instead, let us turn our attention to the OP's actual problem of generating partitions of an integer N containing exactly K parts. When encoding with combinatorial objects as integers, it behooves us to preserve as much of the combinatorial structure as possible.
For this, we turn to the classic text Combinatorial Algorithms by Nijenhuis and Wilf, specifically Chapter 13. In fact, in this chapter, they demonstrate a framework to enumerate and sample from a number of combinatorial families -- including partitions of N where the largest part is equal to K. Using the well-known duality between partitions with K parts and partitions where the largest part is K (take the transpose of the Ferrers diagram), we find that we only need to make a change to the decoding process.
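The duality mentioned here is just transposition of the Ferrers diagram; a small helper makes it concrete (conjugate is my name for it):

```python
def conjugate(partition):
    """Transpose the Ferrers diagram of a partition (given as a
    nonincreasing list): column j of the diagram has as many cells
    as there are parts greater than j."""
    if not partition:
        return []
    return [sum(1 for p in partition if p > j) for j in range(partition[0])]
```

A partition with K parts maps to one whose largest part is K, and vice versa; applying conjugate twice returns the original partition.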
Anyways, here's some source code:
import sys
import random
import time

if len(sys.argv) < 4:
    sys.stderr.write("Usage: {0} N K iter\n".format(sys.argv[0]))
    sys.stderr.write("\tN = number to be partitioned\n")
    sys.stderr.write("\tK = number of parts\n")
    sys.stderr.write("\titer = number of iterations (if iter=0, enumerate all partitions)\n")
    sys.exit(1)

N = int(sys.argv[1])
K = int(sys.argv[2])
iters = int(sys.argv[3])

if N < K:
    sys.stderr.write("Error: N<K ({0}<{1})\n".format(N, K))
    sys.exit(1)

# B[n][k] = number of partitions of n with largest part equal to k
B = [[0 for j in range(K + 1)] for i in range(N + 1)]

def calc_B(n, k):
    for j in range(1, k + 1):
        for m in range(j, n + 1):
            if j == 1:
                B[m][j] = 1
            elif m - j > 0:
                B[m][j] = B[m - 1][j - 1] + B[m - j][j]
            else:
                B[m][j] = B[m - 1][j - 1]

def generate(n, k, r=None):
    path = []
    append = path.append
    # Invalid input
    if n < k or n == 0 or k == 0:
        return []
    # Pick a random number between 1 and B[n][k] if r is not specified
    if r is None:
        r = random.randrange(1, B[n][k] + 1)
    # Construct path from r
    while r > 0:
        if n == 1 and k == 1:
            append('N')
            r = 0  # finish loop
        elif r <= B[n - k][k] and B[n - k][k] > 0:  # East/West move
            append('E')
            n = n - k
        else:  # Northeast/Southwest move
            append('N')
            r -= B[n - k][k]
            n = n - 1
            k = k - 1
    # Decode path into partition
    partition = []
    l = 0
    d = 0
    append = partition.append
    for i in reversed(path):
        if i == 'N':
            if d > 0:  # apply East moves all at once
                for j in range(l):
                    partition[j] += d
                d = 0  # reset East moves
            append(1)  # apply North move
            l += 1
        else:
            d += 1  # accumulate East moves
    if d > 0:  # apply any remaining East moves
        for j in range(l):
            partition[j] += d
    return partition

t = time.time()
sys.stderr.write("Generating B table... ")
calc_B(N, K)
sys.stderr.write("Done ({0} seconds)\n".format(time.time() - t))

bmax = B[N][K]
Bits = 0
sys.stderr.write("B[{0}][{1}]: {2}\t".format(N, K, bmax))
while bmax > 1:
    bmax //= 2
    Bits += 1
sys.stderr.write("Bits: {0}\n".format(Bits))

if iters == 0:  # enumerate all partitions
    for i in range(1, B[N][K] + 1):
        print(i, generate(N, K, i), sep="\t")
else:  # generate random partitions
    t = time.time()
    for i in range(1, iters + 1):
        Q = generate(N, K)
        print(Q)
        if i % 1000 == 0:
            sys.stderr.write("{0} written ({1:.3f} seconds)\r".format(i, time.time() - t))
    sys.stderr.write("{0} written ({1:.3f} seconds total) ({2:.3f} iterations per second)\n".format(
        i, time.time() - t, float(i) / (time.time() - t) if time.time() - t else 0))
And here are some examples of the performance (on a MacBook Pro 8.3, 2 GHz i7, 4 GB, Mac OS X 10.6.3, Python 2.6.1):
mhum$ python part.py 20 5 10
Generating B table... Done (6.7e-05 seconds)
B[20][5]: 84 Bits: 6
[7, 6, 5, 1, 1]
[6, 6, 5, 2, 1]
[5, 5, 4, 3, 3]
[7, 4, 3, 3, 3]
[7, 5, 5, 2, 1]
[8, 6, 4, 1, 1]
[5, 4, 4, 4, 3]
[6, 5, 4, 3, 2]
[8, 6, 4, 1, 1]
[10, 4, 2, 2, 2]
10 written (0.000 seconds total) (37174.721 iterations per second)
mhum$ python part.py 20 5 1000000 > /dev/null
Generating B table... Done (5.9e-05 seconds)
B[20][5]: 84 Bits: 6
100000 written (2.013 seconds total) (49665.478 iterations per second)
mhum$ python part.py 200 25 100000 > /dev/null
Generating B table... Done (0.002296 seconds)
B[200][25]: 147151784574 Bits: 37
100000 written (8.342 seconds total) (11987.843 iterations per second)
mhum$ python part.py 3000 200 100000 > /dev/null
Generating B table... Done (0.313318 seconds)
B[3000][200]: 3297770929953648704695235165404132029244952980206369173 Bits: 181
100000 written (59.448 seconds total) (1682.135 iterations per second)
mhum$ python part.py 5000 2000 100000 > /dev/null
Generating B table... Done (4.829086 seconds)
B[5000][2000]: 496025142797537184410324290349759736884515893324969819660 Bits: 188
100000 written (255.328 seconds total) (391.653 iterations per second)
mhum$ python part-final2.py 20 3 0
Generating B table... Done (0.0 seconds)
B[20][3]: 33 Bits: 5
1 [7, 7, 6]
2 [8, 6, 6]
3 [8, 7, 5]
4 [9, 6, 5]
5 [10, 5, 5]
6 [8, 8, 4]
7 [9, 7, 4]
8 [10, 6, 4]
9 [11, 5, 4]
10 [12, 4, 4]
11 [9, 8, 3]
12 [10, 7, 3]
13 [11, 6, 3]
14 [12, 5, 3]
15 [13, 4, 3]
16 [14, 3, 3]
17 [9, 9, 2]
18 [10, 8, 2]
19 [11, 7, 2]
20 [12, 6, 2]
21 [13, 5, 2]
22 [14, 4, 2]
23 [15, 3, 2]
24 [16, 2, 2]
25 [10, 9, 1]
26 [11, 8, 1]
27 [12, 7, 1]
28 [13, 6, 1]
29 [14, 5, 1]
30 [15, 4, 1]
31 [16, 3, 1]
32 [17, 2, 1]
33 [18, 1, 1]
I'll leave it to the OP to verify that this code indeed generates partitions according to the desired (uniform) distribution.
EDIT: Added an example of the enumeration functionality.
Below is a script that accomplishes what I've asked, as far as recoding integers that represent integer partitions of N with K parts. A better recoding method is needed for this approach to be practical for K > 4. This is definitely not a best or preferred approach. However, it's conceptually simple and easily argued as fundamentally unbiased. It's also very fast for small K. The script runs fine in Sage notebook and does not call Sage functions. It is NOT a script for random sampling. Random sampling per se is not the problem.
The method:
1.) Treat integer partitions as if their summands are concatenated together and padded with zeros according to size of largest summand in first lexical partition, e.g. [17,1,1,1] -> 17010101 & [5,5,5,5] -> 05050505
2.) Treat the resulting integers as if they are subtracted from the largest integer (i.e. the int representing the first lexical partition). e.g. 17010101 - 5050505 = 11959596
3.) Treat each resulting decreased integer as divided by a common denominator, e.g. 11959596/99 = 120804
So, if we wanted to choose a random partition we would:
1.) Choose a number between 0 and 120,804 (instead of a number between 5,050,505 and 17,010,101)
2.) Multiply the number by 99 and substract from 17010101
3.) Split the resulting integer according to how we treated each integer as being padded with 0's
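The three steps can be sketched directly (function names are mine). With fill = 2 the divisor 99 works because 100 ≡ 1 (mod 99), so codes of partitions with equal totals always differ by a multiple of 99; the numbers below reproduce the table shown further down:

```python
def encode(partition, fill):
    """Step 1: concatenate summands, each zero-padded to `fill` digits."""
    return int("".join(str(p).zfill(fill) for p in partition))

def recode(partition, first_partition, fill, divisor=99):
    """Steps 2 and 3: subtract from the first lexical partition's code,
    then divide by a common denominator of all such differences."""
    diff = encode(first_partition, fill) - encode(partition, fill)
    assert diff % divisor == 0
    return diff // divisor

def decode(small, first_partition, fill, divisor=99):
    """Invert the recoding back to a partition."""
    num = encode(first_partition, fill) - small * divisor
    digits = str(num).zfill(fill * len(first_partition))
    return [int(digits[i:i + fill]) for i in range(0, len(digits), fill)]
```

For example, with first lexical partition [17, 1, 1, 1] and fill = 2, the partition [16, 2, 1, 1] encodes to 16020101 and recodes to 10000, and decode(10000, ...) recovers [16, 2, 1, 1].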
Pros and cons: As stated in the body of the question, this particular recoding method doesn't do enough to greatly improve the chance of randomly selecting an integer representing a member of P. For small numbers of parts, e.g. K < 5, and substantially larger totals, e.g. N > 100, a function that implements this concept can be very fast because the approach avoids time-consuming recursion (snake eating its tail) that slows other random partition functions or makes other functions impractical for dealing with large N.
At small K, the probability of drawing a member of P can be reasonable when considering how fast the rest of the process is. Coupled with quick random draws, decoding, and evaluation, this function can find uniform random partitions for combinations of N & K (e.g. N = 20000, K = 4) that are untenable with other algorithms. A better way to recode integers is greatly needed to make this a generally powerful approach.
import random
import sys
First, some generally useful and straightforward functions
def first_partition(N, K):
    part = [N - K + 1]
    ones = [1] * (K - 1)
    part.extend(ones)
    return part

def last_partition(N, K):
    most_even = [N // K] * K
    _remainder = N % K
    j = 0
    while _remainder > 0:
        most_even[j] += 1
        _remainder -= 1
        j += 1
    return most_even

def first_part_nmax(N, K, Nmax):
    part = [Nmax]
    N -= Nmax
    K -= 1
    while N > 0:
        Nmax = min(Nmax, N - K + 1)
        part.append(Nmax)
        N -= Nmax
        K -= 1
    return part

#print(first_partition(20, 4))
#print(last_partition(20, 4))
#print(first_part_nmax(20, 4, 12))
#sys.exit()

def portion(alist, indices):
    return [alist[i:j] for i, j in zip([0] + indices, indices + [None])]
def next_restricted_part(part, N, K):  # find the next partition matching N & K without recursion
    if part == last_partition(N, K):
        return first_partition(N, K)
    for i in enumerate(reversed(part)):
        if i[1] - part[-1] > 1:
            if i[0] == (K - 1):
                return first_part_nmax(N, K, (i[1] - 1))
            else:
                parts = portion(part, [K - i[0] - 1])  # split the partition
                h1 = parts[0]
                h2 = parts[1]
                next = first_part_nmax(sum(h2), len(h2), (h2[0] - 1))
                return h1 + next

""" I don't know a math software that has this function, and Nijenhuis and Wilf (1978)
don't give it (i.e. NEXPAR is not restricted by K). Apparently, folks often get the
next restricted partition using recursion, which is unnecessary """
def int_to_list(i):  # convert an int to a list of digits, without padding with 0's
    return [int(x) for x in str(i)]

def int_to_list_fill(i, fill):  # convert an int to a list of digits, padded with 0's
    return [x for x in str(i).zfill(fill)]

def list_to_int(l):  # join a list of digits into an integer string
    return "".join(str(x) for x in l)

def part_to_int(part, fill):  # convert a partition of K parts to an int,
    # padding each summand with the respective number of 0's
    p_list = []
    for p in part:
        if len(int_to_list(p)) != fill:
            l = int_to_list_fill(p, fill)
            p = list_to_int(l)
        p_list.append(p)
    _int = list_to_int(p_list)
    return _int

def int_to_part(num, fill, K):  # convert an int back to a partition of K parts,
    # padded with the respective number of 0's.
    # This function isn't called by the script, but I thought I'd include
    # it anyway because it would be used to recover the respective partition
    _list = int_to_list(num)
    if len(_list) != fill * K:
        ct = fill * K - len(_list)
        while ct > 0:
            _list.insert(0, 0)
            ct -= 1
    new_list1 = []
    new_list2 = []
    for i in _list:
        new_list1.append(i)
        if len(new_list1) == fill:
            new_list2.append(new_list1)
            new_list1 = []
    part = []
    for i in new_list2:
        j = int(list_to_int(i))
        part.append(j)
    return part
Finally, we get to the total N and the number of parts K. The following will print partitions satisfying N & K in lexical order, with their associated recoded integers:
N = 20
K = 4

print('#, partition, coded, _diff, smaller_diff')
first_part = first_partition(N, K)  # first lexical partition for N & K
fill = len(int_to_list(max(first_part)))
# pad with zeros to 1.) ensure a strictly decreasing relationship within P,
# 2.) keep track of (encode/decode) partition summand values
first_num = part_to_int(first_part, fill)
last_part = last_partition(N, K)
last_num = part_to_int(last_part, fill)
print('1', first_part, first_num, '', 0, ' ', 0)
part = list(first_part)
ct = 1
while ct < 10:
    part = next_restricted_part(part, N, K)
    _num = part_to_int(part, fill)
    _diff = int(first_num) - int(_num)
    smaller_diff = _diff // 99
    ct += 1
    print(ct, part, _num, '', _diff, ' ', smaller_diff)
OUTPUT:
ct, partition, coded, _diff, smaller_diff
1 [17, 1, 1, 1] 17010101 0 0
2 [16, 2, 1, 1] 16020101 990000 10000
3 [15, 3, 1, 1] 15030101 1980000 20000
4 [15, 2, 2, 1] 15020201 1989900 20100
5 [14, 4, 1, 1] 14040101 2970000 30000
6 [14, 3, 2, 1] 14030201 2979900 30100
7 [14, 2, 2, 2] 14020202 2989899 30201
8 [13, 5, 1, 1] 13050101 3960000 40000
9 [13, 4, 2, 1] 13040201 3969900 40100
10 [13, 3, 3, 1] 13030301 3979800 40200
In short, integers in the last column could be a lot smaller.
Why a random sampling strategy based on this idea is fundamentally unbiased:
Each integer partition of N having K parts corresponds to one and only one recoded integer. That is, we don't pick a number at random, decode it, and then try to rearrange the elements to form a proper partition of N&K. Consequently, each integer (whether corresponding to partitions of N&K or not) has the same chance of being drawn. The goal is to inherently reduce the number of integers not corresponding to partitions of N with K parts, and so, to make the process of random sampling faster.
