Consider a square 3 by 3 grid of non-negative integers. For each row i the sum of the integers is set to be r_i. Similarly for each column j the sum of integers in that column is set to be c_j. An instance of the problem is therefore described by 6 non-negative integers.
Is there an efficient algorithm to count how many different
assignments of integers to the grid there are given the row and column
sum constraints?
Clearly one could enumerate all possible matrices of non-negative integers with values up to sum r_i and check the constraints for each, but that would be insanely slow.
Example
Say the row constraints are 1 2 3 and the column constraints are 3 2 1. The possible integer grids are:
┌─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┐
│0 0 1│0 0 1│0 0 1│0 1 0│0 1 0│0 1 0│0 1 0│1 0 0│1 0 0│1 0 0│1 0 0│1 0 0│
│0 2 0│1 1 0│2 0 0│0 1 1│1 0 1│1 1 0│2 0 0│0 1 1│0 2 0│1 0 1│1 1 0│2 0 0│
│3 0 0│2 1 0│1 2 0│3 0 0│2 1 0│2 0 1│1 1 1│2 1 0│2 0 1│1 2 0│1 1 1│0 2 1│
└─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┘
In practice my main interest is in cases where the total sum of the grid is at most 100, but a more general solution would be very interesting.
Update: my answer is wrong for this particular problem, where N is fixed (i.e. a constant 3). In that case the problem is polynomial. Sorry for the misleading information.
TL;DR: I think it's at least NP-hard. There is no polynomial algorithm, but maybe there are some heuristic speedups.
For an N-by-N grid you have N equations for row sums, N equations for column sums, and N^2 non-negativity constraints:
For N > 2 this system has more than one possible solution in general, because there are N^2 unknown variables x_ij and just 2N equations, and N^2 > 2N for N > 2.
You can eliminate 2N - 1 variables to be left with just one equation in K = N^2 - (2N - 1) variables summing to S. Then you'll have to deal with an integer-partition-like problem to find all possible combinations of K terms that yield S. That problem is NP-complete, and the number of combinations depends not only on the number of terms K but also on the magnitude of S.
This problem reminded me of the simplex method. My first thought was to find just one solution using something like it and then traverse the edges of the polytope to find all the possible solutions, and I was hoping there's an efficient algorithm for that. But no: the integer simplex method, which is related to integer linear programming, is NP-hard :(
I hope there are some heuristics for related problems you can use to speed up a naive brute-force solution.
I don't know of a matching algorithm, but I don't think it would be that difficult to work one out. Given any one solution, you can derive another solution by selecting four corners of a rectangular region of your grid, increasing two diagonal corners by some value and decreasing the other two by that same value. The range for that value will be constrained by the lowest value of each diagonal pair. If you determine the size of all such ranges, you should be able to multiply them together to determine the total possible solutions.
Assuming you described your grid like a familiar spreadsheet alphabetically for columns, and numerically for rows, you could describe all possible regions in the following list:
A1:B2, A1:B3, A1:C2, A1:C3, B1:C2, B1:C3, A2:B3, A2:C3, B2:C3
For each region, we tabulate a range based on the lowest value from each diagonal corner pair. You can incrementally reduce either pair until a member reaches zero because there's no upper bound for the other pair.
Selecting the first solution of your example, we can derive all other possible solutions using this technique.
A B C
┌─────┐
1 │0 0 1│ sum=1
2 │0 2 0│ sum=2
3 │3 0 0│ sum=3
└─────┘
3 2 1 = sums
A1:B2 - 1 solution  (0,0,0,2)
A1:C2 - 1 solution  (0,1,0,0)
A1:B3 - 1 solution  (0,0,3,0)
A1:C3 - 2 solutions (0,1,3,0), (1,0,2,1)
B1:C2 - 2 solutions (0,1,2,0), (1,0,1,1)
B1:C3 - 1 solution  (0,1,0,0)
A2:B3 - 3 solutions (0,2,3,0), (1,1,2,1), (2,0,1,2)
A2:C3 - 1 solution  (0,0,3,0)
B2:C3 - 1 solution  (2,0,0,0)
Multiply all the solution counts together (the regions with only 1 solution contribute a factor of 1) and you get 2*2*3 = 12 solutions.
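For what it's worth, here is a minimal Python sketch of the rectangle-move idea; rather than multiplying the range sizes, it simply enumerates by breadth-first search every grid reachable from a known seed solution, and for the example seed it finds all 12 grids:
from collections import deque

def count_solutions(seed):
    # Count grids reachable from one known solution via the rectangle moves
    # described above: +1 on two diagonal corners, -1 on the other two.
    n = len(seed)
    start = tuple(tuple(row) for row in seed)
    seen = {start}
    queue = deque([start])
    while queue:
        grid = queue.popleft()
        for r1 in range(n):
            for r2 in range(n):
                if r1 == r2: continue
                for c1 in range(n):
                    for c2 in range(n):
                        if c1 == c2: continue
                        # the two corners being decreased must stay non-negative
                        if grid[r1][c2] == 0 or grid[r2][c1] == 0: continue
                        g = [list(row) for row in grid]
                        g[r1][c1] += 1; g[r2][c2] += 1
                        g[r1][c2] -= 1; g[r2][c1] -= 1
                        t = tuple(tuple(row) for row in g)
                        if t not in seen:
                            seen.add(t)
                            queue.append(t)
    return len(seen)

print(count_solutions([[0, 0, 1], [0, 2, 0], [3, 0, 0]]))   # 12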
Maybe a simple 4-nested-loop solution is fast enough, if the total sum is small?
function solve(rowsum, colsum) {
  var count = 0;
  // a b c = first row, d e f = second row, g h i = third row
  for (var a = 0; a <= rowsum[0] && a <= colsum[0]; a++) {
    for (var b = 0; b <= rowsum[0] - a && b <= colsum[1]; b++) {
      var c = rowsum[0] - a - b;
      for (var d = 0; d <= rowsum[1] && d <= colsum[0] - a; d++) {
        var g = colsum[0] - a - d;
        for (var e = 0; e <= rowsum[1] - d && e <= colsum[1] - b; e++) {
          var f = rowsum[1] - d - e;
          var h = colsum[1] - b - e;
          var i = rowsum[2] - g - h;
          if (i >= 0 && i == colsum[2] - c - f) ++count;
        }
      }
    }
  }
  return count;
}
document.write(solve([1,2,3],[3,2,1]) + "<br>");
document.write(solve([22,33,44],[30,40,29]) + "<br>");
It won't help with the problem being #P-hard (if you allow matrices to be of any size -- see the reference in the comment below), but there is a solution which doesn't amount to enumerating all the matrices but rather a smaller set of objects called semi-standard Young tableaux. Depending on your input, it could go faster, though it is still of exponential complexity. Since it's an entire chapter in several algebraic combinatorics books or in Knuth's AOCP 3, I won't go into details here, only pointing to the relevant Wikipedia pages.
The idea is that, using the Robinson–Schensted–Knuth correspondence, each of these matrices is in bijection with a pair of tableaux of the same shape, where one tableau is filled with integers counted by the row sums, the other by the column sums. The number of tableaux of shape U filled with numbers counted by V is called the Kostka number K(U,V). As a consequence, you end up with a formula such as
#Mat(RowSum, ColSum) = \sum_shape K(shape, RowSum)*K(shape, ColSum)
Of course if RowSum == ColSum == Sum:
#Mat(Sum, Sum) = \sum_shape K(shape, Sum)^2
Here is your example in the SageMath system:
sage: sum(SemistandardTableaux(p, [3,2,1]).cardinality()^2 for p in Partitions(6))
12
Here are some larger examples:
sage: sums = [6,5,4,3,2,1]
sage: %time sum(SemistandardTableaux(p, sums).cardinality()^2 for p in Partitions(sum(sums)))
CPU times: user 228 ms, sys: 4.77 ms, total: 233 ms
Wall time: 224 ms
8264346
sage: sums = [7,6,5,4,3,2,1]
sage: %time sum(SemistandardTableaux(p, sums).cardinality()^2 for p in Partitions(sum(sums)))
CPU times: user 1.95 s, sys: 205 µs, total: 1.95 s
Wall time: 1.94 s
13150070522
sage: sums = [5,4,4,4,4,3,2,1]
sage: %time sum(SemistandardTableaux(p, sums).cardinality()^2 for p in Partitions(sum(sums)))
CPU times: user 1.62 s, sys: 221 µs, total: 1.62 s
Wall time: 1.61 s
1769107201498
It's clear that you won't get that fast enumerating matrices.
As requested by גלעד ברקן, here is a solution with different row and column sums:
sage: rsums = [5,4,3,2,1]; colsums = [5,4,3,3]
sage: %time sum(SemistandardTableaux(p, rsums).cardinality() * SemistandardTableaux(p, colsums).cardinality() for p in Partitions(sum(rsums)))
CPU times: user 88.3 ms, sys: 8.04 ms, total: 96.3 ms
Wall time: 92.4 ms
10233
I've tried to optimize the slow option. I took the code that generates all combinations and changed it to only compute the total count. This is the fastest I could get:
private static int count(int[] rowSums, int[] colSums)
{
    int count = 0;
    int[] row0 = new int[3];
    int sum = rowSums[0];
    for (int r0 = 0; r0 <= sum; r0++)
        for (int r1 = 0, max1 = sum - r0; r1 <= max1; r1++)
        {
            row0[0] = r0;
            row0[1] = r1;
            row0[2] = sum - r0 - r1;
            count += getCombinations(rowSums[1], row0, colSums);
        }
    return count;
}

private static int getCombinations(int sum, int[] row0, int[] colSums)
{
    int count = 0;
    int max1 = Math.Min(colSums[1] - row0[1], sum);
    int max2 = Math.Min(colSums[2] - row0[2], sum);
    for (int r0 = 0, max0 = Math.Min(colSums[0] - row0[0], sum); r0 <= max0; r0++)
        for (int r1 = 0; r1 <= max1; r1++)
        {
            int r01 = r0 + r1;
            if (r01 <= sum)
                if ((r01 + max2) >= sum)
                    count++;
        }
    return count;
}
Stopwatch w2 = Stopwatch.StartNew();
int res = count(new int[] { 1, 2, 3 }, new int[] { 3, 2, 1 });//12
int res1 = count(new int[] { 22, 33, 44 }, new int[] { 30, 40, 29 });//117276
int res2 = count(new int[] { 98, 99, 100}, new int[] { 100, 99, 98});//12743775
int res3 = count(new int[] { 198, 199, 200 }, new int[] { 200, 199, 198 });//201975050
w2.Stop();
Console.WriteLine("w2:" + w2.ElapsedMilliseconds);//322 - 370 on my computer
Aside from my other answer using the Robinson–Schensted–Knuth bijection, here is
another solution which doesn't need advanced combinatorics, just some programming
tricks, and solves the problem for arbitrarily large matrices. The first idea
for this kind of problem is to use recursion, avoiding recomputation thanks to memoization
or, better, dynamic programming. Specifically, once you have chosen a candidate
for the first row, you subtract this first row from the column sums and you are
left with the same problem, only with one less row. To avoid recomputing
things you store the results. You can do this
either in a big table (memoization)
or, in a more clever way, by storing all the solutions for matrices with n rows
and deducing the number of solutions for matrices with n+1 rows (dynamic programming).
Here is a recursive method using memoization in Python:
# Generator for the rows of sum s which are smaller than maxrow
def choose_one_row(s, maxrow):
    if not maxrow:
        if s == 0: yield []
        else: return
    else:
        for i in range(0, maxrow[0]+1):
            for res in choose_one_row(s-i, maxrow[1:]):
                yield [i]+res

memo = dict()
def nmat(rsum, colsum):
    # sanity check: sum by row and column must match
    if sum(rsum) != sum(colsum): return 0
    # base case: rsum is empty
    if not rsum: return 1
    # convert to immutable tuples for memoization
    rsum = tuple(rsum)
    colsum = tuple(colsum)
    # return if already computed
    try:
        return memo[rsum, colsum]
    except KeyError:
        pass
    # apply the recursive formula
    res = 0
    for row in choose_one_row(rsum[0], colsum):
        res += nmat(rsum[1:], tuple(a - b for a, b in zip(colsum, row)))
    # memoize the result
    memo[(tuple(rsum), tuple(colsum))] = res
    return res
Then after that:
sage: nmat([3,2,1], [3,2,1])
12
sage: %time nmat([6,5,4,3,2,1], [6,5,4,3,2,1])
CPU times: user 1.49 s, sys: 7.16 ms, total: 1.5 s
Wall time: 1.48 s
8264346
Related
I found this problem, which seeks to use dynamic programming to minimize the absolute difference in height when matching n boys with m girls.
If I understand it correctly we will be sorting the first j boys and k girls by height (ascending? or descending?) where j<=k. Why is j <=k?
I do not understand how I can use the recurrence mentioned in the link:
(j,k−1) and (j−1,k−1)
to find the optimal matching for the values (j,k), based on whether you pair up boy j with girl k or not.
I'm clearly misunderstanding some things here but my goal is to write pseudo code for this solution. Here are my steps:
1. Sort heights Array[Boys] and Array[Girls]
2. To pair Optimally for the least absolute difference in height, simply pair in order so Array[Pairs][1] = Array[Boys][1] + Array[Girls][1]
3. Return which boy was paired with which girl
Please help me to implement the solution proposed in the link.
As stated in the answer you provided, there is always a better matching available if there is a cross edge between two matched pairs, once the heights of all the boys and all the girls are sorted in ascending order.
So a Dynamic programming solution with complexity of O(n*m) is possible.
So we have a state represented by two indexes, let's call them i and j, where i refers to the boys and j to the girls. At each state (i, j) we can either move to the state (i, j+1), i.e. the i-th boy does not choose the j-th girl, or move to the state (i+1, j+1), i.e. the i-th boy is paired with the j-th girl, and we take the minimum of these two choices at every level.
This can be easily implemented using a DP solution.
Recurrence:
DP[i][j] = minimum(
DP[i+1][j+1] + abs(heightOfBoy[i] - heightofGirl[j]),
DP[i][j+1]
);
Below is the code in c++ for recursive DP Solution :
#include <bits/stdc++.h>
#define INF 1e9
using namespace std;

int n, m, htB[100] = {10,10,12,13,16}, htG[100] = {6,7,9,10,11,12,17}, dp[100][100];

int solve(int idx1, int idx2){
    if(idx1 == n) return 0;
    if(idx2 == m) return INF;
    if(dp[idx1][idx2] != -1) return dp[idx1][idx2];
    int v1, v2;
    // include current
    v1 = solve(idx1 + 1, idx2 + 1) + abs(htB[idx1] - htG[idx2]);
    // do not include current
    v2 = solve(idx1, idx2 + 1);
    return dp[idx1][idx2] = min(v1, v2);
}

int main(){
    n = 5, m = 7;
    sort(htB, htB+n); sort(htG, htG+m);
    for(int i = 0; i < 100; i++) for(int j = 0; j < 100; j++) dp[i][j] = -1;
    cout << solve(0, 0) << endl;
    return 0;
}
Output : 4
Link to solution on Ideone : http://ideone.com/K5FZ9x
The DP table output of the above solution :
4 4 4 1000000000 1000000000 1000000000 1000000000
-1 3 3 3 1000000000 1000000000 1000000000
-1 -1 3 3 3 1000000000 1000000000
-1 -1 -1 2 2 2 1000000000
-1 -1 -1 -1 1 1 1
The answer is stored in the DP[0][0] state.
You could turn that problem into a bipartite graph where the edge weight between a girl and a boy is the absolute difference between their heights, abs(hG - hB). Then you can use a minimum-weight bipartite matching (assignment) algorithm to find the matching with the smallest total difference. See this for background on bipartite matching: http://www.geeksforgeeks.org/maximum-bipartite-matching/
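A minimal sketch of that idea, assuming SciPy is available: scipy.optimize.linear_sum_assignment solves the minimum-cost assignment on the height-difference matrix, and for the heights used in the DP answer above it should also report 4:
import numpy as np
from scipy.optimize import linear_sum_assignment

boys = [10, 10, 12, 13, 16]
girls = [6, 7, 9, 10, 11, 12, 17]

# cost[i][j] = abs(height of boy i - height of girl j)
cost = np.abs(np.subtract.outer(boys, girls))

# minimum-cost assignment of every boy to a distinct girl
rows, cols = linear_sum_assignment(cost)
print(cost[rows, cols].sum())   # 4, same as the DP answer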
Hello all, I have a problem computing the rank of a binary matrix, i.e. a matrix containing only 0s and 1s. The rank of a binary matrix is based on row reduction using the boolean XOR operation. Here is the XOR operation:
1 xor 1 = 0
1 xor 0 = 1
0 xor 0 = 0
0 xor 1 = 1
Given a binary matrix as
A =
1 1 0 0 0 0
1 0 0 0 0 1
0 1 0 0 0 1
We can see that the third row equals the first row XOR the second row. Hence, the rank of matrix A is only 2, instead of the 3 returned by MATLAB's rank function.
I have one way to compute the exact rank of a binary matrix using this code:
B=gf(A)
rank(B)
It returns 2. However, when I compute with a large matrix, for example 400 by 400, it does not return the rank (it never stops). Could you suggest a good way to find the rank of a large binary matrix? Thanks all.
UPDATE: this is the computation time using tic toc:
N=50:  Elapsed time is 0.646823 seconds.
N=100: Elapsed time is 3.123573 seconds.
N=150: Elapsed time is 7.438541 seconds.
N=200: Elapsed time is 11.349964 seconds.
N=400: Elapsed time is 66.815286 seconds.
Note that the rank check is only one condition in my algorithm. However, it takes a very long time, and that affects my whole method.
Based on the suggestion of R., I will use Gaussian elimination to find the rank. This is my code. However, it still calls the rank function (which costs some computation time). Could you help me modify it so it does not use the rank function?
function rankA = GaussEliRank(A)
    mat = A;
    [m n] = size(A);              % read the size of the original matrix A
    for i = 1 : n
        j = find(mat(i:m, i), 1); % finds the FIRST 1 in i-th column starting at i
        if isempty(j)
            mat = mat( sum(mat,2)>0 ,:);
            rankA = rank(mat);    %% Here
            return;
        else
            j = j + i - 1;        % we need to add i-1 since j starts at i
            temp = mat(j, :);     % swap rows
            mat(j, :) = mat(i, :);
            mat(i, :) = temp;
            % add i-th row to all rows that contain 1 in i-th column
            % starting at j+1 - remember up to j are zeros
            for k = find(mat( (j+1):m, i ))'
                mat(j + k, :) = bitxor(mat(j + k, :), mat(i, :));
            end
        end
    end
    % remove all-zero rows if there are some
    mat = mat( sum(mat,2)>0 ,:);
    if any(sum( mat(:,1:n) ,2)==0) % no solution because matrix A contains
        error('No solution.');     % all-zero row, but with nonzero RHS
    end
    rankA = rank(mat);             %% Here
end
Check the matrix A here; the correct answer for the rank of A is 393.
Once you get the matrix into row echelon form with Gaussian elimination, the rank is the number of nonzero rows. You should be able to replace the code after the loop with something like rankA=sum(sum(mat,2)>0);.
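If MATLAB isn't a hard requirement, here is a minimal Python/NumPy sketch of the same idea (row reduction with XOR; the rank is the number of pivots found), which may help cross-check results:
import numpy as np

def gf2_rank(A):
    # rank of a 0/1 matrix over GF(2) via row reduction with XOR
    M = np.array(A, dtype=np.uint8) % 2
    rows, cols = M.shape
    rank = 0
    for col in range(cols):
        # find a pivot row at or below 'rank' with a 1 in this column
        pivots = np.nonzero(M[rank:, col])[0]
        if pivots.size == 0:
            continue
        pivot = rank + pivots[0]
        M[[rank, pivot]] = M[[pivot, rank]]   # swap the pivot row into place
        for r in range(rows):                 # clear this column in all other rows
            if r != rank and M[r, col]:
                M[r, :] ^= M[rank, :]
        rank += 1
        if rank == rows:
            break
    return rank

A = [[1, 1, 0, 0, 0, 0],
     [1, 0, 0, 0, 0, 1],
     [0, 1, 0, 0, 0, 1]]
print(gf2_rank(A))   # 2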
Given: set A = {a0, a1, ..., aN-1} (1 ≤ N ≤ 100), with 2 ≤ ai ≤ 500.
Asked: Find the sum of all least common multiples (LCM) of all subsets of A of size at least 2.
The LCM of a set B = {b0, b1, ..., bk-1} is defined as the minimum integer Bmin such that bi | Bmin, for all 0 ≤ i < k.
Example:
Let N = 3 and A = {2, 6, 7}, then:
LCM({2, 6}) = 6
LCM({2, 7}) = 14
LCM({6, 7}) = 42
LCM({2, 6, 7}) = 42
----------------------- +
answer 104
The naive approach would be to simply calculate the LCM for all O(2^N) subsets, which is not feasible for reasonably large N.
Solution sketch:
The problem is obtained from a competition*, which also provided a solution sketch. This is where my problem comes in: I do not understand the hinted approach.
The solution reads (modulo some small fixed grammar issues):
The solution is a bit tricky. If we observe carefully we see that the integers are between 2 and 500. So, if we prime factorize the numbers, we get the following maximum powers:
prime   max power
2       8
3       5
5       3
7       3
11      2
13      2
17      2
19      2
Other than this, all primes have power 1. So, we can easily calculate all possible states, using these integers, leaving 9 * 6 * 4 * 4 * 3 * 3 * 3 * 3 states, which is nearly 70000. For other integers we can make a dp like the following: dp[70000][i], where i can be 0 to 100. However, as dp[i] is dependent on dp[i-1], so dp[70000][2] is enough. This leaves the complexity to n * 70000 which is feasible.
I have the following concrete questions:
What is meant by these states?
Does dp stand for dynamic programming and if so, what recurrence relation is being solved?
How is dp[i] computed from dp[i-1]?
Why do the big primes not contribute to the number of states? Each of them occurs either 0 or 1 times. Should the number of states not be multiplied by 2 for each of these primes (leading to a non-feasible state space again)?
*The original problem description can be found from this source (problem F). This question is a simplified version of that description.
Discussion
After reading the actual contest description (page 10 or 11) and the solution sketch, I have to conclude the author of the solution sketch is quite imprecise in their writing.
The high level problem is to calculate an expected lifetime if components are chosen randomly by fair coin toss. This is what's leading to computing the LCM of all subsets -- all subsets effectively represent the sample space. You could end up with any possible set of components. The failure time for the device is based on the LCM of the set. The expected lifetime is therefore the average of the LCM of all sets.
Note that this ought to include the LCM of sets with only one item (in which case we'd take the LCM to be the element itself). The solution sketch seems to gloss over this, perhaps because they handled it in a less elegant manner.
What is meant by these states?
The sketch author only uses the word state twice, but apparently manages to switch meanings. In the first use of the word state it appears they're talking about a possible selection of components. In the second use they're likely talking about possible failure times. They could be muddling this terminology because their dynamic programming solution initializes values from one use of the word and the recurrence relation stems from the other.
Does dp stand for dynamic programming?
I would say either it does or it's a coincidence as the solution sketch seems to heavily imply dynamic programming.
If so, what recurrence relation is being solved? How is dp[i] computed from dp[i-1]?
All I can think is that in their solution, state i represents a time to failure, T(i), and dp[i] is the number of times this time to failure has been counted. The resulting answer would be the sum of all dp[i] * T(i).
dp[i][0] would then be the failure times counted for only the first component. dp[i][1] would then be the failure times counted for the first and second component. dp[i][2] would be for the first, second, and third. Etc..
Initialize dp[i][0] with zeroes except for dp[T(c)][0] (where c is the first component considered) which should be 1 (since this component's failure time has been counted once so far).
To populate dp[i][n] from dp[i][n-1] for each component c:
For each i, copy dp[i][n-1] into dp[i][n].
Add 1 to dp[T(c)][n].
For each i, add dp[i][n-1] to dp[LCM(T(i), T(c))][n].
What is this doing? Suppose you knew that you had a time to failure of j, but you added a component with a time to failure of k. Regardless of what components you had before, your new time to fail is LCM(j, k). This follows from the fact that for two sets A and B, LCM(A union B) = LCM(LCM(A), LCM(B)).
Similarly, if we're considering a time to failure of T(i) and our new component's time to failure of T(c), the resultant time to failure is LCM(T(i), T(c)). Note that we recorded this time to failure for dp[i][n-1] configurations, so we should record that many new times to failure once the new component is introduced.
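A small Python sketch of that recurrence, keeping the counts in a dict keyed by failure time rather than in the fixed ~70000-state table (math.lcm needs Python 3.9+); for the question's example it reproduces 104 once the singletons are dropped:
from math import lcm

def sum_of_subset_lcms(components):
    # counts[v] = number of subsets seen so far whose LCM (failure time) is v
    counts = {}
    for c in components:
        new_counts = dict(counts)                 # subsets that skip c
        new_counts[c] = new_counts.get(c, 0) + 1  # the singleton {c}
        for value, n in counts.items():           # extend each old subset with c
            v = lcm(value, c)
            new_counts[v] = new_counts.get(v, 0) + n
        counts = new_counts
    return sum(value * n for value, n in counts.items())

# all nonempty subsets of {2, 6, 7} sum to 119; subtracting the singletons
# (2 + 6 + 7) leaves 104, the answer stated in the question
print(sum_of_subset_lcms([2, 6, 7]) - (2 + 6 + 7))   # 104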
Why do the big primes not contribute to the number of states?
Each of them occurs either 0 or 1 times. Should the number of states not be multiplied by 2 for each of these primes (leading to a non-feasible state space again)?
You're right, of course. However, the solution sketch states that numbers with large primes are handled in another (unspecified) fashion.
What would happen if we did include them? The number of states we would need to represent would explode into an impractical number. Hence the author accounts for such numbers differently. Note that if a number less than or equal to 500 includes a prime larger than 19 the other factors multiply to 21 or less. This makes such numbers amenable for brute forcing, no tables necessary.
The first part of the editorial seems useful, but the second part is rather vague (and perhaps unhelpful; I'd rather finish this answer than figure it out).
Let's suppose for the moment that the input consists of pairwise distinct primes, e.g., 2, 3, 5, and 7. Then the answer (for summing all sets, where the LCM of 0 integers is 1) is
(1 + 2) (1 + 3) (1 + 5) (1 + 7),
because the LCM of a subset is exactly equal to the product here, so just multiply it out.
Let's relax the restriction that the primes be pairwise distinct. If we have an input like 2, 2, 3, 3, 3, and 5, then the multiplication looks like
(1 + (2^2 - 1) 2) (1 + (2^3 - 1) 3) (1 + (2^1 - 1) 5),
because 2 appears with multiplicity 2, and 3 appears with multiplicity 3, and 5 appears with multiplicity 1. With respect to, e.g., just the set of 3s, there are 2^3 - 1 ways to choose a subset that includes a 3, and 1 way to choose the empty set.
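A quick brute-force sanity check of that product in Python (again math.lcm, Python 3.9+); both lines should print 924 for the input 2, 2, 3, 3, 3, 5:
from math import lcm
from itertools import combinations

nums = [2, 2, 3, 3, 3, 5]
brute = sum(lcm(*subset) if subset else 1
            for r in range(len(nums) + 1)
            for subset in combinations(nums, r))
print(brute)                                                               # 924
print((1 + (2**2 - 1) * 2) * (1 + (2**3 - 1) * 3) * (1 + (2**1 - 1) * 5))  # 924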
Call a prime small if it's 19 or less and large otherwise. Note that integers 500 or less are divisible by at most one large prime (with multiplicity). The small primes are more problematic. What we're going to do is to compute, for each possible small portion of the prime factorization of the LCM (i.e., one of the ~70,000 states), the sum of LCMs for the problem derived by discarding the integers that could not divide such an LCM and leaving only the large prime factor (or 1) for the other integers.
For example, if the input is 2, 30, 41, 46, and 51, and the state is 2, then we retain 2 as 1, discard 30 (= 2 * 3 * 5; 3 and 5 are small), retain 41 as 41 (41 is large), retain 46 as 23 (= 2 * 23; 23 is large), and discard 51 (= 3 * 17; 3 and 17 are small). Now, we compute the sum of LCMs using the previously described technique. Use inclusion-exclusion to get rid of the subsets whose LCM's small portion properly divides the state instead of being exactly equal to it. Maybe I'll work a complete example later.
What is meant by these states?
I think here, states refer to whether a number is in the set B = {b0, b1, ..., bk-1} of possible LCMs of set A.
Does dp stand for dynamic programming and if so, what recurrence relation is being solved?
dp in the solution sketch stands for dynamic programming, I believe.
How is dp[i] computed from dp[i-1]?
We can figure out the states of the next group of LCMs from the previous states, so we only need an array of size 2 and can toggle back and forth.
Why do the big primes not contribute to the number of states? Each of them occurs either 0 or 1 times. Should the number of states not be multiplied by 2 for each of these primes (leading to a non-feasible state space again)?
We can use prime factorization and represent a number by its exponents only.
Here is one example.
6 = (2^1)(3^1)(5^0) -> state "1 1 0" to represent 6
18 = (2^1)(3^2)(5^0) -> state "1 2 0" to represent 18
Here is how we can get the LCM of 6 and 18 using prime factorization:
LCM(6,18) = (2^max(1,1)) (3^max(1,2)) (5^max(0,0)) = (2^1)(3^2)(5^0) = 18
Since 2^9 > 500, 3^6 > 500, 5^4 > 500, 7^4 > 500, 11^3 > 500, 13^3 > 500, 17^3 > 500, 19^3 > 500,
we can use only the exponents of the primes 2, 3, 5, 7, 11, 13, 17, 19 to represent the LCMs in the set B = {b0, b1, ..., bk-1}
for the given set A = {a0, a1, ..., aN-1} (1 ≤ N ≤ 100), with 2 ≤ ai ≤ 500.
9 * 6 * 4 * 4 * 3 * 3 * 3 * 3 <= 70000, so we only need two copies of dp[9][6][4][4][3][3][3][3] to keep track of all the LCM states. So, dp[70000][2] is enough.
I put together a small C++ program to illustrate how we can get the sum of LCMs of the given set A = {a0, a1, ..., aN-1} (1 ≤ N ≤ 100), with 2 ≤ ai ≤ 500. As in the solution sketch, we need to loop through at most about 70000 possible LCMs.
#include <iostream>
#include <cstring>
using namespace std;

int gcd(int a, int b) {
    int remainder = 0;
    do {
        remainder = a % b;
        a = b;
        b = remainder;
    } while (b != 0);
    return a;
}

int lcm(int a, int b) {
    if (a == 0 || b == 0) {
        return 0;
    }
    return (a * b) / gcd(a, b);
}

int sum_of_lcm(int A[], int N) {
    // get the max LCM from the array
    int max = A[0];
    for (int i = 1; i < N; i++) {
        max = lcm(max, A[i]);
    }
    max++;
    //
    int dp[max][2];
    memset(dp, 0, sizeof(dp));
    int pri = 0;
    int cur = 1;
    // loop through n x 70000
    for (int i = 0; i < N; i++) {
        for (int v = 1; v < max; v++) {
            int x = A[i];
            if (dp[v][pri] > 0) {
                x = lcm(A[i], v);
                dp[v][cur] = (dp[v][cur] == 0) ? dp[v][pri] : dp[v][cur];
                if ( x % A[i] != 0 ) {
                    dp[x][cur] += dp[v][pri] + dp[A[i]][pri];
                } else {
                    dp[x][cur] += ( x==v ) ? ( dp[v][pri] + dp[v][pri] ) : ( dp[v][pri] );
                }
            }
        }
        dp[A[i]][cur]++;
        pri = cur;
        cur = (pri + 1) % 2;
    }
    for (int i = 0; i < N; i++) {
        dp[A[i]][pri] -= 1;
    }
    long total = 0;
    for (int j = 0; j < max; j++) {
        if (dp[j][pri] > 0) {
            total += dp[j][pri] * j;
        }
    }
    cout << "total:" << total << endl;
    return total;
}

int test() {
    int a[] = {2, 6, 7};
    int n = sizeof(a)/sizeof(a[0]);
    int total = sum_of_lcm(a, n);
    return 0;
}
Output
total:104
The states are one more than the powers of primes. You have numbers up to 2^8, so the power of 2 is in [0..8], which is 9 states. Similarly for the other states.
"dp" could well stand for dynamic programming, I'm not sure.
The recurrence relation is the heart of the problem, so you will learn more by solving it yourself. Start with some small, simple examples.
For the large primes, try solving a reduced problem without using them (or their equivalents) and then add them back in to see their effect on the final result.
This is more of a puzzle than a coding problem. I need to find how many binary numbers can be generated satisfying certain constraints. The inputs are
(integer) Len - Number of digits in the binary number
(integer) x
(integer) y
The binary number has to be such that taking any x adjacent digits from the binary number should contain at least y 1's.
For example -
Len = 6, x = 3, y = 2
0 1 1 0 1 1 - Length is 6; take any 3 adjacent digits from this and there will be 2 1's.
I had this C# coding question posed to me in an interview and I cannot figure out any algorithm to solve this. Not looking for code (although it's welcome), any sort of help, pointers are appreciated
This problem can be solved using dynamic programming. The main idea is to group the binary numbers according to the last x-1 bits and the length of each binary number. If appending a bit sequence to one number yields a number satisfying the constraint, then appending the same bit sequence to any number in the same group results in a number satisfying the constraint also.
For example, x = 4, y = 2: both 01011 and 10011 have the same last 3 bits (011). Appending a 0 to each of them, resulting in 010110 and 100110, both still satisfy the constraint.
Here is pseudo code:
mask = (1<<(x-1)) - 1
count[0][0] = 1
for(i = 0; i < Len; ++i) {
    for(j = 0; j < 1<<i && j < 1<<(x-1); ++j) {
        if(i < x-1 || count1Bit(j*2+1) >= y)
            count[i+1][(j*2+1)&mask] += count[i][j];
        if(i < x-1 || count1Bit(j*2) >= y)
            count[i+1][(j*2)&mask] += count[i][j];
    }
}
answer = 0
for(j = 0; j < 1<<(x-1); ++j)
    answer += count[Len][j];
This algorithm assumes that Len >= x. The time complexity is O(Len*2^x).
EDIT
The count1Bit(j) function counts the number of 1 in the binary representation of j.
The only inputs to this algorithm are Len, x, and y. It starts from an empty binary string [length 0, group 0], and iteratively tries to append 0 and 1 until the length equals Len. It also does the grouping and counts the number of binary strings satisfying the 1-bits constraint in each group. The output of this algorithm is answer, which is the number of binary strings (numbers) satisfying the constraints.
For a binary string in group [length i, group j], appending 0 to it results in a binary string in group [length i+1, group (j*2)%(2^(x-1))]; appending 1 to it results in a binary string in group [length i+1, group (j*2+1)%(2^(x-1))].
Let count[i,j] be the number of binary strings in group [length i, group j] satisfying the 1-bits constraint. If there are at least y 1 in the binary representation of j*2, then appending 0 to each of these count[i,j] binary strings yields a binary string in group [length i+1, group (j*2)%(2^(x-1))] which also satisfies the 1-bit constraint. Therefore, we can add count[i,j] into count[i+1,(j*2)%(2^(x-1))]. The case of appending 1 is similar.
The condition i<x-1 in the above algorithm is to keep the binary strings growing when length is less than x-1.
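Here is a minimal Python version of this DP, with the groups kept in a dict keyed by the last x-1 bits; for Len = 6, x = 3, y = 2 it prints 13, matching the enumerations in the other answers:
def count_constrained(Len, x, y):
    mask = (1 << (x - 1)) - 1
    counts = {0: 1}                      # state = the last min(i, x-1) bits so far
    for i in range(Len):
        nxt = {}
        for state, c in counts.items():
            for bit in (0, 1):
                window = state * 2 + bit
                # once at least x bits exist, every new window needs >= y ones
                if i >= x - 1 and bin(window).count("1") < y:
                    continue
                ns = window & mask
                nxt[ns] = nxt.get(ns, 0) + c
        counts = nxt
    return sum(counts.values())

print(count_constrained(6, 3, 2))   # 13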
Using the example of LEN = 6, X = 3 and Y = 2...
Build an exhaustive bit pattern generator for X bits. A simple binary counter can do this. For example, if X = 3
then a counter from 0 to 7 will generate all possible bit patterns of length 3.
The patterns are:
000
001
010
011
100
101
110
111
Verify the adjacency requirement as the patterns are built. Reject any patterns that do not qualify.
Basically this boils down to rejecting any pattern containing fewer than 2 '1' bits (Y = 2). The list prunes down to:
011
101
110
111
For each member of the pruned list, add a '1' bit and retest the first X bits. Keep the new pattern if it passes the
adjacency test. Do the same with a '0' bit. For example this step proceeds as:
1011 <== Keep
1101 <== Keep
1110 <== Keep
1111 <== Keep
0011 <== Reject
0101 <== Reject
0110 <== Keep
0111 <== Keep
Which leaves:
1011
1101
1110
1111
0110
0111
Now repeat this process until the pruned set is empty or the member lengths become LEN bits long. In the end
the only patterns left are:
111011
111101
111110
111111
110110
110111
101101
101110
101111
011011
011101
011110
011111
Count them up and you are done.
Note that you only need to test the first X bits on each iteration because all the subsequent patterns were verified in prior steps.
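A short Python sketch of this grow-and-prune procedure (start from the surviving X-bit patterns, repeatedly prepend a bit and retest only the first X bits); for LEN = 6, X = 3, Y = 2 it ends with the 13 patterns listed above:
from itertools import product

def count_patterns(length, x, y):
    # all x-bit windows with at least y ones
    current = {"".join(bits) for bits in product("01", repeat=x)
               if bits.count("1") >= y}
    # prepend one bit at a time, re-testing only the first x bits
    for _ in range(length - x):
        current = {bit + s for s in current for bit in "01"
                   if (bit + s)[:x].count("1") >= y}
    return len(current)

print(count_patterns(6, 3, 2))   # 13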
Considering that the input values are variable and I wanted to see the actual output, I used a recursive algorithm to determine all combinations of 0 and 1 for a given length:
private static void BinaryNumberWithOnes(int n, int dump, int ones, string s = "")
{
    if (n == 0)
    {
        if (BinaryWithoutDumpCountContainsnumberOfOnes(s, dump, ones))
            Console.WriteLine(s);
        return;
    }
    BinaryNumberWithOnes(n - 1, dump, ones, s + "0");
    BinaryNumberWithOnes(n - 1, dump, ones, s + "1");
}
and BinaryWithoutDumpCountContainsnumberOfOnes to determine if the binary number meets the criteria
private static bool BinaryWithoutDumpCountContainsnumberOfOnes(string binaryNumber, int dump, int ones)
{
    int current = 0;
    int count = binaryNumber.Length;
    while (current + dump < count)
    {
        var fail = binaryNumber.Remove(current, dump).Replace("0", "").Length < ones;
        if (fail)
        {
            return false;
        }
        current++;
    }
    return true;
}
Calling BinaryNumberWithOnes(6, 3, 2) will output all binary numbers that match
010011
011011
011111
100011
100101
100111
101011
101101
101111
110011
110101
110110
110111
111011
111101
111110
111111
Sounds like a nested for loop would do the trick. Pseudocode (not tested).
value = '0101010111110101010111' // change this line to the value you want to check
for (i = 0; i <= (Len-x); i++) {    // slide a window of width x over the value
    kount = 0
    for (j = i; j < (i+x); j++) {   // count '1' bits in this window of x bits
        kount += value[j]           // add 0 or 1
    }
    if kount < y then return fail   // this window violates the constraint
}
return success
The naive approach would be a tree-recursive algorithm.
Our recursive method would slowly build the number up, e.g. it would start at xxxxxx and return the sum of a call with 1xxxxx and a call with 0xxxxx, which themselves return the sum of calls with 10xxxx, 11xxxx and 00xxxx, 01xxxx, etc. The exception: if the x/y conditions are NOT satisfied for the partial string it is about to build, it does NOT go down that path; and if you are at a terminal condition (a number of the correct length has been built) you return 1. (Note that since we're building the string up from left to right, you don't have to check x/y for the entire string, just the window that includes the newly added digit!)
By returning a sum over all calls, all of the returned 1s pool together and are returned by the initial call, equalling the number of constructed strings.
No idea what the big-O time complexity is for this one; it could be as bad as O(2^n) * O(checking the x/y conditions), but it will prune lots of branches off the tree in most cases.
UPDATE: One insight I had is that all branches of the recursive tree can be 'merged' if they have identical last x digits so far, because then the same checks would be applied to all digits hereafter so you may as well double them up and save a lot of work. This now requires building the tree explicitly instead of implicitly via recursive calls, and maybe some kind of hashing scheme to detect when branches have identical x endings, but for large length it would provide a huge speedup.
My approach is to start by getting all the binary numbers with the minimum number of 1's, which is easy enough: you just take every unique permutation of a binary number of length x with y 1's, and cycle each unique permutation "Len" times. By flipping the 0 bits of these seeds in every combination possible, we are guaranteed to iterate over all of the binary numbers that fit the criteria.
from itertools import permutations, cycle, combinations

def uniq(x):
    d = {}
    for i in x:
        d[i] = 1
    return d.keys()

def findn(l, x, y):
    window = []
    for i in xrange(y):
        window.append(1)
    for i in xrange(x-y):
        window.append(0)
    perms = uniq(permutations(window))
    seeds = []
    for p in perms:
        pr = cycle(p)
        seeds.append([pr.next() for i in xrange(l)])  ### a seed is a binary number fitting the criteria with minimum 1 bits
    bin_numbers = []
    for seed in seeds:
        if seed in bin_numbers: continue
        indexes = [i for i, x in enumerate(seed) if x == 0]  ### get indexes of 0 "bits"
        exit = False
        for i in xrange(len(indexes)+1):
            if exit: break
            for combo in combinations(indexes, i):  ### combinatorically flipping the zero bits in the seed
                new_num = seed[:]
                for index in combo: new_num[index] += 1
                if new_num in bin_numbers:
                    ### if our new binary number has been seen before
                    ### we can break out since we are doing a depth first traversal
                    exit = True
                    break
                else:
                    bin_numbers.append(new_num)
    print len(bin_numbers)

findn(6,3,2)
Growth of this approach is definitely exponential, but I thought I'd share my approach in case it helps someone else get to a lower complexity solution...
Set up a condition and introduce a simple helper variable.
L = 6, x = 3, y = 2; introduce d = x - y = 1.
Condition: if the window formed by the hypothetical next value together with the previous x - 1 values contains more than d zeros, the next value must be 1; otherwise add two branches, one with 1 and one with 0 as the concrete value.
Start: check(Condition) => both 0 and 1 are allowed, since the zero-count check is not yet violated.
Empty => add 0 and 1
Step 1:Check(Condition)
0 (number of next value if 0 and previous x - 1 zeros > d(=1)) -> add 1 to sequence
1 -> add both 0,1 in two different branches
Step 2: check(Condition)
01 -> add 1
10 -> add 1
11 -> add 0,1 in two different branches
Step 3:
011 -> add 0,1 in two branches
101 -> add 1 (the next value if 0 and prev x-1 seq would be 010, so we prune and set only 1)
110 -> add 1
111 -> add 0,1
Step 4:
0110 -> obviously 1
0111 -> both 0,1
1011 -> both 0,1
1101 -> 1
1110 -> 1
1111 -> 0,1
Step 5:
01101 -> 1
01110 -> 1
01111 -> 0,1
10110 -> 1
10111 -> 0,1
11011 -> 0,1
11101 -> 1
11110 -> 1
11111 -> 0,1
Step 6 (Finish):
011011
011101
011110
011111
101101
101110
101111
110110
110111
111011
111101
111110
111111
Now count. I've tested L = 6, x = 4 and y = 2 too, but consider checking the algorithm for special cases and extended cases.
Note: I'm pretty sure some algorithm based on disposition theory would be a massive improvement over mine.
So in a series of Len binary digits, you are looking for an x-long segment that contains y 1's.
See the execution: http://ideone.com/xuaWaK
Here's my Algorithm in Java:
import java.util.*;
import java.lang.*;

class Main
{
    public static ArrayList<String> solve (String input, int x, int y)
    {
        int s = 0;
        ArrayList<String> matches = new ArrayList<String>();
        String segment = null;
        for (int i = 0; i <= (input.length() - x); i++)
        {
            s = 0;
            segment = input.substring(i, (i + x));
            System.out.print(" i: " + i + " ");
            for (char c : segment.toCharArray())
            {
                System.out.print("*");
                if (c == '1')
                {
                    s = s + 1;
                }
            }
            if (s == y)
            {
                matches.add(segment);
            }
            System.out.println();
        }
        return matches;
    }

    public static void main (String [] args)
    {
        String input = "011010101001101110110110101010111011010101000110010";
        int x = 6;
        int y = 4;
        ArrayList<String> matches = null;
        matches = solve (input, x, y);
        for (String match : matches)
        {
            System.out.println(" > " + match);
        }
        System.out.println(" Number of matches is " + matches.size());
    }
}
The number of patterns of length X that contain at least Y 1 bits is countable. For the case x == y we know there is exactly one pattern of the 2^x possible patterns that meets the criteria. For smaller y we need to sum up the number of patterns which have excess 1 bits and the number of patterns that have exactly y bits.
choose(n, k) = n! / (k! (n - k)!)
numPatterns(x, y) {
    total = 0
    for (int j = x; j >= y; j--)
        total += choose(x, j)
    return total
}
For example :
X = 4, Y = 4 : 1 pattern
X = 4, Y = 3 : 1 + 4 = 5 patterns
X = 4, Y = 2 : 1 + 4 + 6 = 11 patterns
X = 4, Y = 1 : 1 + 4 + 6 + 4 = 15 patterns
X = 4, Y = 0 : 1 + 4 + 6 + 4 + 1 = 16
(all possible patterns have at least 0 1 bits)
So let M be the number of X length patterns that meet the Y criteria. Now, that X length pattern is a subset of N bits. There are (N - x + 1) "window" positions for the sub pattern, and 2^N total patterns possible. If we start with any of our M patterns, we know that appending a 1 to the right and shifting to the next window will result in one of our known M patterns. The question is, how many of the M patterns can we add a 0 to, shift right, and still have a valid pattern in M?
Since we are adding a zero, we have to be either shifting away from a zero, or we have to already be in an M where we have an excess of 1 bits. To flip that around, we can ask how many of the M patterns have exactly Y bits and start with a 1. Which is the same as "how many patterns of length X-1 have Y-1 bits", which we know how to answer:
shiftablePatternCount = M - choose(X-1, Y-1)
So starting with M possibilities, we are going to increase by shiftablePatternCount when we slide to the right. All patterns in the new window are in the set of M, with some patterns now duplicated. We are going to shift (N - X) times to fill up the N bits, each time increasing the count by shiftablePatternCount, so the full answer should be:
totalCountOfMatchingPatterns = M + (N - X)*shiftablePatternCount
edit - realized a mistake. I need to count the duplicates of the shiftable patterns that are generated. I think that's doable. (draft still)
I am not sure about my answer, but here is my view; just take a look at it.
Len=4,
x=3,
y=2.
I just took out two patterns, because a pattern must contain at least y 1's.
X 1 1 X
1 X 1 X
X - represent don't care
Now the count for the 1st expression is 2 * 1 * 1 * 2 = 4,
and for the 2nd expression 1 * 2 * 1 * 2 = 4,
but 2 patterns are common to both, so subtract 2; there will be a total of 6 patterns which satisfy the condition.
I happen to be using an algorithm similar to your problem; while trying to find a way to improve it, I found your question. So I will share:
static int GetCount(int length, int oneBits){
    int result = 0;
    double count = Math.Pow(2, length);
    for (int i = 1; i <= count - 1; i++)
    {
        string str = Convert.ToString(i, 2).PadLeft(length, '0');
        if (str.ToCharArray().Count(c => c == '1') == oneBits)
        {
            result++;
        }
    }
    return result;
}
Not very efficient I think, but an elegant solution.
I need to write an algorithm for a given problem: You have infinite pennies, nickels, dimes, and quarters. Write a class method that will output all combinations of coins such that the total is 99 cents.
It seems like a permutation (nPr) problem. Any algorithm for it?
Regards,
Priyank
I think this problem is most easily answered using recursion with a table of denominations
{5000, 2000, ... 1} // $50's down to one penny
You would start with:
WaysToMakeChange(10000, 0) // i.e. $100... the highest denomination index is 0 ($50)
WaysToMakeChange(amount, maxdenomindex) would calculate the number of ways using 0 or more coins of the max denomination.
The recurrence is something like:
WaysToMakeChange(amount - usedbymaxdenom, maxdenomindex - 1)
I programmed this and it can be optimized in many ways:
1) multithreading
2) Caching. This is very important. Because of the way the algorithm works, WaysToMakeChange(m, n) will be called many times with the same arguments:
For example, changing $100 can be done by:
1 $50 + 0 $20's + 0 $10's + ways to make $50 with highest denomination $5 (i.e. WaysToMakeChange(5000, index for $5))
0 $50 + 2 $20's + 1 $10 + ways to make $50 with highest denomination $5 (i.e. WaysToMakeChange(5000, index for $5))
Clearly WaysToMakeChange(5000, index for $5) can be cached so that the subsequent call does not need to recompute it.
3) Shortcircuiting the lowest recursion.
Suppose static const int denom[] = {5000, 2000, 1000, 500, 200, 100, 50, 25, 10, 5, 1};
The first test for WaysToMakeChange(int total, int coinIndex) should be something like:
if( denom[_countof(denom)-1] == 1 && coinIndex == _countof(denom) - 2){
    return total / denom[_countof(denom)-2] + 1;
}
What does this mean? Well, if your lowest denomination is 1, then you only have to recurse as far as the second-lowest denomination (say a nickel); from there, there are 1 + total/second-lowest-denom ways left. For example:
49c -> 9 nickels + 4 pennies, 8 nickels + 9 pennies, ..., 0 nickels + 49 pennies = 1 + 49/5 = 10 ways.
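A minimal memoized sketch of this recursion in Python (the WaysToMakeChange name and the quarter/dime/nickel/penny table just mirror the description above, and the shortcut here is taken at the penny level rather than the nickel level); for 99 cents it should print 213:
from functools import lru_cache

denom = (25, 10, 5, 1)   # quarters, dimes, nickels, pennies

@lru_cache(maxsize=None)
def WaysToMakeChange(amount, maxdenomindex):
    if amount == 0:
        return 1
    if maxdenomindex == len(denom) - 1:
        # lowest denomination is 1, so there is exactly one way to finish
        return 1
    d = denom[maxdenomindex]
    # use 0, 1, 2, ... coins of the current max denomination, then recurse on the next one
    return sum(WaysToMakeChange(amount - used * d, maxdenomindex + 1)
               for used in range(amount // d + 1))

print(WaysToMakeChange(99, 0))   # 213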
The easiest way is probably to spend a few moments thinking about the problem. There is a relatively nice, recursive, algorithm that lends itself neatly to either memoization or reworking into a dynamic programming solution.
This problem is a classic dynamic programming problem. You can read about it here:
http://www.algorithmist.com/index.php/Coin_Change
the python code is:
def count(n, m):
    if n == 0:
        return 1
    if n < 0:
        return 0
    if m <= 0 and n >= 1:
        return 0
    return count(n, m - 1) + count(n - S[m - 1], m)
Here S is a sorted list of denominations and S[m - 1] is the value of the m-th denomination; m is how many denominations are still allowed. For this question, S = [1, 5, 10, 25] and count(99, len(S)) gives the answer.
This problem looks like a Diophantine equation, i.e. for a*x + b*y + ... = n, find a solution where all letters are non-negative integers. The simplest, but not the most elegant, solution would be an iterative one (displayed in Python; note that I skip the variable l because it resembles the number 1):
dioph_combinations = list()
for i in range(0, 100, 25):
    for j in range(0, 100 - i, 10):
        for k in range(0, 100 - i - j, 5):
            for m in range(0, 100 - i - j - k, 1):
                if i + j + k + m == 99:
                    dioph_combinations.append( (i//25, j//10, k//5, m) )
The resulting list dioph_combinations will contain the possible combinations.