I am trying to replace the sub and super diagonals of a matrix in Octave.
This is the code I am using:
A=[-3 -2 -1 0 1 2 3;0.1 0.2 0.2 0.5 0.6 -0.1 0]'
P=zeros(4,4)
for (k=1:7)
j=A(k,1)
diag(P,j)=A(k,2)
end
This is the error I got: diag(0,_): subscripts must be either integers 1 to (2^63)-1 or logicals
But all the little parts are okay. diag(P,-3) works fine, but when I ask to replace in the loop it refuses!
What can I do about it? Is diag(P,j)=e not the right way to replace the super- and sub-diagonals?
The reason you're getting an error is that diag(P,j) is not a reference to the diagonal of P; it is a function call that returns the values on that diagonal. So what you're doing is assigning the value A(k,2) to the return value of the function, and since that return value is never assigned to a variable, it is lost and nothing changes.
To fix your loop, you would need to provide indices into P and assign to those. One way is to use logical indexing to tell MATLAB which values in P to change. For example,
P = zeros(4)
M = logical(diag([1,1,1], -1))
P(M) = 3
gives us
P =
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
M =
0 0 0 0
1 0 0 0
0 1 0 0
0 0 1 0
P =
0 0 0 0
3 0 0 0
0 3 0 0
0 0 3 0
The unfortunate part is that we can't specify both which diagonal we want to create and the size of the resulting matrix, so we have to calculate the number of elements on the diagonal before creating it.
A=[-3 -2 -1 0 1 2 3;0.1 0.2 0.2 0.5 0.6 -0.1 0].'
n=4; % Number of rows/columns in P...
% If we want a non-square matrix, we'll have to do more math
P=zeros(n);
for k=1:2*n-1 % Remove hardcoded values to make the code more general.
j=A(k,1);
diag_length = n-abs(j);
M=diag(true(1,diag_length),j); % Create logical array with true on jth diagonal
P(M)=A(k,2);
end
The result is:
P =
0.5000 0.6000 -0.1000 0
0.2000 0.5000 0.6000 -0.1000
0.2000 0.2000 0.5000 0.6000
0.1000 0.2000 0.2000 0.5000
Another approach is to use spdiags. One of the uses of spdiags takes the columns of one matrix and uses them to build the diagonals of the output matrix. You pass the indices of the diagonals to set, and the matrix of values for each of the diagonals, along with the matrix size.
If we only pass one value for each diagonal, spdiags will only set one value, so we'll have to duplicate the input vector n times. (spdiags will happily throw away values, but won't fill them in.)
A=[-3 -2 -1 0 1 2 3;0.1 0.2 0.2 0.5 0.6 -0.1 0].'
n = 4;
diag_idx = A(:,1).'; % indices of diagonals
diag_val = A(:,2).'; % corresponding values
diag_val = repmat(diag_val, n, 1); % duplicate values n times
P = spdiags(diag_val, diag_idx, n, n);
P = full(P);
That last line is because spdiags creates a sparse matrix. full turns it into a regular matrix. The final value of P is what you'd expect:
P =
0.5000 0.6000 -0.1000 0
0.2000 0.5000 0.6000 -0.1000
0.2000 0.2000 0.5000 0.6000
0.1000 0.2000 0.2000 0.5000
Of course, if you're into one-liners, you can combine all of these commands together.
P = full(spdiags(repmat(A(:,2).', n, 1), A(:,1).', n, n));
I have a list of items, a, b, c,..., each of which has a weight and a value.
The 'ordinary' Knapsack algorithm will find the selection of items that maximises the value of the selected items, whilst ensuring that the weight is below a given constraint.
The problem I have is slightly different. I wish to minimise the value (easy enough by using the reciprocal of the value), whilst ensuring that the weight is at least the value of the given constraint, not less than or equal to the constraint.
I have tried re-routing the idea through the ordinary Knapsack algorithm, but this can't be done. I was hoping there is another combinatorial algorithm that I am not aware of that does this.
In the German Wikipedia article it is formalized as:
finite set of objects U
w: weight-function
v: value-function
w: U -> R
v: U -> R
B in R # constraint rhs
Find a subset K of U subject to:
sum( w(u) | u in K ) <= B
such that:
max sum( v(u) | u in K )
So there is no restriction like nonnegativity.
Just use negative weights, negative values and a negative B.
The basic concept is:
sum( w(u) | u in K ) <= B
<->
-sum( w(u) | u in K ) >= -B
So in your case:
classic constraint:   x0 + x1 <=  B   |   3 + 7 <=  12   Y   |   3 + 10 <=  12   N
becomes:             -x0 - x1 <= -B   |  -3 - 7 <= -12   N   |  -3 - 10 <= -12   Y
So for a given implementation it depends on the software if this is allowed. In terms of the optimization-problem, there is no problem. The integer-programming formulation for your case is as natural as the classic one (and bounded).
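As a tiny brute-force sketch of this "weight at least B, minimize value" variant in plain Python (my own check for small instances, not part of the demo below): feeding a classic "sum of weights <= bound" solver the negated weights and the negated bound selects exactly the same subsets.
from itertools import combinations

def min_value_at_least(weights, values, bound):
    # enumerate all subsets, keep those whose total weight reaches the bound,
    # and return the smallest total value among them (None if infeasible)
    best = None
    items = range(len(weights))
    for r in range(len(weights) + 1):
        for subset in combinations(items, r):
            if sum(weights[i] for i in subset) >= bound:
                cand = sum(values[i] for i in subset)
                if best is None or cand < best:
                    best = cand
    return best
For the instance printed further down, min_value_at_least([37, 43, 12, 8, 9], [11, 5, 15, 0, 16], 50) returns 5, the same objective the solver reports.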
Python Demo based on Integer-Programming
Code
import numpy as np
import scipy.sparse as sp
from cylp.cy import CyClpSimplex
np.random.seed(1)
""" INSTANCE """
weight = np.random.randint(50, size = 5)
value = np.random.randint(50, size = 5)
capacity = 50
""" SOLVE """
n = weight.shape[0]
model = CyClpSimplex()
x = model.addVariable('x', n, isInt=True)
model.objective = value # MODIFICATION: default = minimize!
model += sp.eye(n) * x >= np.zeros(n) # could be improved
model += sp.eye(n) * x <= np.ones(n) # """
model += np.matrix(-weight) * x <= -capacity # MODIFICATION
cbcModel = model.getCbcModel()
cbcModel.logLevel = True
status = cbcModel.solve()
x_sol = np.array(cbcModel.primalVariableSolution['x'].round()).astype(int) # assumes existence
print("INSTANCE")
print(" weights: ", weight)
print(" values: ", value)
print(" capacity: ", capacity)
print("Solution")
print(x_sol)
print("sum weight: ", x_sol.dot(weight))
print("value: ", x_sol.dot(value))
Small remarks
This code is just a demo using a somewhat low-level library; there are other tools available which might be better suited (e.g. on Windows: pulp)
it's the classic integer-programming formulation from the wiki, modified as mentioned above
it will scale very well, as the underlying solver is pretty good
as written, it solves the 0-1 knapsack (only the variable bounds would need to be changed)
Small look at the core-code:
# create model
model = CyClpSimplex()
# create one variable for each how-often-do-i-pick-this-item decision
# variable needs to be integer (or binary for 0-1 knapsack)
x = model.addVariable('x', n, isInt=True)
# the objective value of our IP: a linear-function
# cylp only needs the coefficients of this function: c0*x0 + c1*x1 + c2*x2...
# we only need our value vector
model.objective = value # MODIFICATION: default = minimize!
# WARNING: typically one should always use variable-bounds
# (cylp problems...)
# workaround: express bounds lower_bound <= var <= upper_bound as two constraints
# a constraint is an affine-expression
# sp.eye creates a sparse-diagonal with 1's
# example: sp.eye(3) * x >= 5
# 1 0 0 -> 1 * x0 + 0 * x1 + 0 * x2 >= 5
# 0 1 0 -> 0 * x0 + 1 * x1 + 0 * x2 >= 5
# 0 0 1 -> 0 * x0 + 0 * x1 + 1 * x2 >= 5
model += sp.eye(n) * x >= np.zeros(n) # could be improved
model += sp.eye(n) * x <= np.ones(n) # """
# cylp somewhat outdated: need numpy's matrix class
# apart from that it's just the weight-constraint as defined at wiki
# same affine-expression as above (but only a row-vector-like matrix)
model += np.matrix(-weight) * x <= -capacity # MODIFICATION
# internal type conversion needed to treat it as an IP (or else it would be an LP)
cbcModel = model.getCbcModel()
cbcModel.logLevel = True
status = cbcModel.solve()
# type-casting
x_sol = np.array(cbcModel.primalVariableSolution['x'].round()).astype(int)
Output
Welcome to the CBC MILP Solver
Version: 2.9.9
Build Date: Jan 15 2018
command line - ICbcModel -solve -quit (default strategy 1)
Continuous objective value is 4.88372 - 0.00 seconds
Cgl0004I processed model has 1 rows, 4 columns (4 integer (4 of which binary)) and 4 elements
Cutoff increment increased from 1e-05 to 0.9999
Cbc0038I Initial state - 0 integers unsatisfied sum - 0
Cbc0038I Solution found of 5
Cbc0038I Before mini branch and bound, 4 integers at bound fixed and 0 continuous
Cbc0038I Mini branch and bound did not improve solution (0.00 seconds)
Cbc0038I After 0.00 seconds - Feasibility pump exiting with objective of 5 - took 0.00 seconds
Cbc0012I Integer solution of 5 found by feasibility pump after 0 iterations and 0 nodes (0.00 seconds)
Cbc0001I Search completed - best objective 5, took 0 iterations and 0 nodes (0.00 seconds)
Cbc0035I Maximum depth 0, 0 variables fixed on reduced cost
Cuts at root node changed objective from 5 to 5
Probing was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Clique was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 0 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Result - Optimal solution found
Objective value: 5.00000000
Enumerated nodes: 0
Total iterations: 0
Time (CPU seconds): 0.00
Time (Wallclock seconds): 0.00
Total time (CPU seconds): 0.00 (Wallclock seconds): 0.00
INSTANCE
weights: [37 43 12 8 9]
values: [11 5 15 0 16]
capacity: 50
Solution
[0 1 0 1 0]
sum weight: 51
value: 5
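For reference, here is a rough sketch of the same model in PuLP (the alternative mentioned in the remarks above); this is my own illustration rather than part of the original demo, and it assumes PuLP's bundled CBC solver is available:
import numpy as np
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

np.random.seed(1)
weight = np.random.randint(50, size=5)
val = np.random.randint(50, size=5)
capacity = 50

prob = LpProblem("min_value_knapsack", LpMinimize)
x = [LpVariable("x%d" % i, cat="Binary") for i in range(len(weight))]
prob += lpSum(int(val[i]) * x[i] for i in range(len(val)))                     # minimize total value
prob += lpSum(int(weight[i]) * x[i] for i in range(len(weight))) >= capacity   # weight at least capacity
prob.solve()
print([int(v.varValue) for v in x], value(prob.objective))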
I am interested in writing a function generate(n,m) that exhaustively generates strings of length n(n-1)/2 consisting solely of +/- characters. These strings will then be transformed into an n × n symmetric (-1,0,1)-matrix in the following way:
toTriangle["+--+-+-++-"]
{{1, -1, -1, 1}, {-1, 1, -1}, {1, 1}, {-1}}
toMatrix[%, 0] // MatrixForm
matrixForm =
| 0  1 -1 -1  1 |
| 1  0 -1  1 -1 |
|-1 -1  0  1  1 |
|-1  1  1  0 -1 |
| 1 -1  1 -1  0 |
Thus the given string represents the upper-right triangle of the matrix, which is then reflected to generate the rest of it.
Question: How can I generate all +/- strings such that the resulting matrix has precisely m -1's per row?
For example, generate(5,3) will give all strings of length 5(5-1)/2 = 10 such that each row contains precisely three -1's.
I'd appreciate any help with constructing such an algorithm.
This is the logic to generate every matrix for a given n and m. It's a bit convoluted, so I'm not sure how much faster than brute force an implementation would be; I assume the difference will become more pronounced for larger values.
(The following will generate an output of zeros and ones for convenience, where zero represents a plus and a one represents a minus.)
A square matrix where each row has m ones translates to a triangular matrix where these folded row/columns have m ones:
x 0 1 0 1     x 0 1 0 1     0 1 0 1
0 x 1 1 0       x 1 1 0     1 1 0
1 1 x 0 0         x 0 0     0 0
0 1 0 x 1           x 1     1
1 0 0 1 x             x
Each of these groups overlaps with all the other groups; choosing values for the first k groups means that the vertical part of group k+1 is already determined.
We start by putting the number of ones required per row on the diagonal; e.g. for (5,2) that is:
2 . . . .
2 . . .
2 . .
2 .
2
Then we generate every bit pattern with m ones for the first group; there are (n-1 choose m) of these, and they can be efficiently generated, e.g. with Gosper's hack.
(4,2) -> 0011 0101 0110 1001 1010 1100
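As an illustration, here is a minimal sketch of Gosper's hack in Python (my own, working on integers as bitmaps; the answer's proof-of-concept code further down uses arrays instead):
def bit_patterns(n_bits, m_ones):
    # yield every n_bits-wide integer with exactly m_ones bits set,
    # in increasing order (Gosper's hack)
    if m_ones == 0:
        yield 0
        return
    v = (1 << m_ones) - 1               # smallest such pattern: the low bits set
    while v < (1 << n_bits):
        yield v
        t = v | (v - 1)                 # fill the trailing zeros below the lowest set bit
        v = (t + 1) | (((~t & -~t) - 1) >> (v & -v).bit_length())
For n_bits=4, m_ones=2 this yields 3, 5, 6, 9, 10, 12, i.e. the patterns 0011, 0101, 0110, 1001, 1010, 1100 listed above.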
For each of these, we fill them into the matrix, and subtract them from the numbers of required ones:
X 0 0 1 1
2 . . .
2 . .
1 .
1
and then recurse with the smaller triangle:
2 . . .
2 . .
1 .
1
If we come to a point where some of the numbers of required ones on the diagonal are zero, e.g.:
2 . . .
1 . .
0 .
1
then we can already put a zero in this column, and generate the possible bit patterns for fewer columns; in the example that would be (2,2) instead of (3,2), so there's only one possible bit pattern: 11. Then we distribute the bit pattern over the columns that have a non-zero required count under them:
2 . 0 .     X 1 0 1
1 . .       0 . .
0 .         0 .
1           0
However, not all possible bit patterns will lead to valid solutions; take this example:
2 . . . .   X 0 0 1 1
2 . . .     2 . . .     2 . . .   X 0 1 1
2 . .       2 . .       2 . .     2 . .     2 . .
2 .         1 .         1 .       0 .       0 .
2           1           1         0         0
where we end up with a row that requires another 2 ones while both columns can no longer take any ones. The way to spot this situation is by looking at the list of required ones per column that is created by each option in the penultimate step:
pattern required
0 1 1 -> 2 0 0
1 0 1 -> 1 1 0
1 1 0 -> 1 0 1
If the first value in the list is x, then there must be at least x non-zero values after it; which is false for the first of the three options.
(There is room for optimization here: in a count list like 1,1,0,6,0,2,1,1 there are only 2 non-zero values before the 6, which means that the 6 will be decremented at most 2 times, so its minimum value when it becomes the first element will be 4; however, there are only 3 non-zero values after it, so at this stage you already know this list will not lead to any valid solutions. Checking this would add to the code complexity, so I'm not sure whether that would lead to an improvement in execution speed.)
So the complete algorithm for (n,m) starts with:
Create an n-sized list with all values set to m (count of ones required per group).
Generate all bit patterns of size n-1 with m ones; for each of these:
Subtract the pattern from a copy of the count list (without the first element).
Recurse with the pattern and the copy of the count list.
and the recursive steps after that are:
Receive the sequence so far, and a count list.
The length of the count list is n, and its first element is m.
Let k be the number of non-zero values in the count list (without the first element).
Generate all bit patterns of size k with m ones; for each of these:
Create a 0-filled list sized n-1.
Distribute the bit pattern over it, skipping the columns with a zero count.
Add the value list to the sequence so far.
Subtract the value list from a copy of the count list (without the first element).
If the first value in the copy of the count list is greater than the number of non-zeros after it, skip this pattern.
At the deepest recursion level, store the sequence, or else:
Recurse with the sequence so far, and the copy of the count list.
Here's a code snippet as a proof of concept; in a serious language, and using integers instead of arrays for the bitmaps, this should be much faster:
function generate(n, m) { // seed the count list, enumerate the first-group bit patterns, then recurse
// if ((n % 2) && (m % 2)) return; // to catch (3,1)
var counts = [], pattern = [];
for (var i = 0; i < n - 1; i++) {
counts.push(m);
pattern.push(i < m ? 1 : 0);
}
do {
var c_copy = counts.slice();
for (var i = 0; i < n - 1; i++) c_copy[i] -= pattern[i];
recurse(pattern, c_copy);
}
while (revLexi(pattern));
}
function recurse(sequence, counts) { // extend the sequence by one group, given the remaining per-column counts
var n = counts.length, m = counts.shift(), k = 0;
for (var i = 0; i < n - 1; i++) if (counts[i]) ++k;
var pattern = [];
for (var i = 0; i < k; i++) pattern.push(i < m ? 1 : 0);
do {
var values = [], pos = 0;
for (var i = 0; i < n - 1; i++) {
if (counts[i]) values.push(pattern[pos++]);
else values.push(0);
}
var s_copy = sequence.concat(values);
var c_copy = counts.slice();
var nonzero = 0;
for (var i = 0; i < n - 1; i++) {
c_copy[i] -= values[i];
if (i && c_copy[i]) ++nonzero;
}
if (c_copy[0] > nonzero) continue;
if (n == 2) {
for (var i = 0; i < s_copy.length; i++) {
document.write(["+ ", "− "][s_copy[i]]);
}
document.write("<br>");
}
else recurse(s_copy, c_copy);
}
while (revLexi(pattern));
}
function revLexi(seq) { // reverse lexicographical because I had this lying around
var max = true, pos = seq.length, set = 1;
while (pos-- && (max || !seq[pos])) if (seq[pos]) ++set; else max = false;
if (pos < 0) return false;
seq[pos] = 0;
while (++pos < seq.length) seq[pos] = set-- > 0 ? 1 : 0;
return true;
}
generate(5, 2);
Here are the numbers of results and recursions for values of n up to 10, so you can compare them to check correctness. When n and m are both odd, there are no valid results; this is calculated correctly, except in the case of (3,1). It is of course easy to catch these cases and return immediately.
(n,m) results number of recursions
(4,0) (4,3) 1 2 2
(4,1) (4,2) 3 6 7
(5,0) (5,4) 1 3 3
(5,1) (5,3) 0 12 20
(5,2) 12 36
(6,0) (6,5) 1 4 4
(6,1) (6,4) 15 48 76
(6,2) (6,3) 70 226 269
(7,0) (7,6) 1 5 5
(7,1) (7,5) 0 99 257
(7,2) (7,4) 465 1,627 2,313
(7,3) 0 3,413
(8,0) (8,7) 1 6 6
(8,1) (8,6) 105 422 1,041
(8,2) (8,5) 3,507 13,180 23,302
(8,3) (8,4) 19,355 77,466 93,441
(9,0) (9,8) 1 7 7
(9,1) (9,7) 0 948 4,192
(9,2) (9,6) 30,016 119,896 270,707
(9,3) (9,5) 0 1,427,457 2,405,396
(9,4) 1,024,380 4,851,650
(10,0) (10,9) 1 8 8
(10,1) (10,8) 945 4440 18930
(10,2) (10,7) 286,884 1,210,612 3,574,257
(10,3) (10,6) 11,180,820 47,559,340 88,725,087
(10,4) (10,5) 66,462,606 313,129,003 383,079,169
I doubt that you really want all variants for large n,m values; their number is tremendously large.
This problem is equivalent to generating m-regular graphs (note that if we replace all 1's by zeros and all -1's by 1's, we get the adjacency matrix of a graph; a regular graph is one in which every vertex has degree m).
Here we can see that the number of (18,4) regular graphs is about 10^9 and rises quickly with n/m. The article contains a link to the program genreg, intended for generating such graphs. The FTP links to the code and executable don't work for me; perhaps they are too old.
Upd: here is another link to the source (though from 1996 rather than the paper's 1999).
A simple approach to generating one instance of a regular graph is described here.
For small n/m values you can also try brute force: fill the first row with m ones (there are C(n,m) variants), and for each variant fill the free places in the second row, and so on, as sketched below.
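A minimal backtracking sketch of that brute force in Python (my own illustration; it yields symmetric 0/1 adjacency matrices with m ones per row, which map back to the question's format by turning each 1 into -1 and each off-diagonal 0 into +1):
from itertools import combinations

def regular_matrices(n, m):
    # symmetric 0/1 matrices with zero diagonal and exactly m ones per row
    mat = [[0] * n for _ in range(n)]

    def fill(row):
        if row == n:
            yield [r[:] for r in mat]
            return
        need = m - sum(mat[row][:row])       # ones already forced by symmetry
        if need < 0:
            return
        for cols in combinations(range(row + 1, n), need):
            for c in cols:
                mat[row][c] = mat[c][row] = 1
            # prune: no later row may already exceed m ones
            if all(sum(mat[r]) <= m for r in range(row + 1, n)):
                yield from fill(row + 1)
            for c in cols:
                mat[row][c] = mat[c][row] = 0

    yield from fill(0)
As a check, sum(1 for _ in regular_matrices(5, 2)) gives 12, matching the (5,2) count in the table of the other answer.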
Written in Wolfram Mathematica.
generate[n_, m_] := Module[{},
x = Table[StringJoin["i", ToString[i], "j", ToString[j]],
{j, 1, n}, {i, 2, n}];
y = Transpose[x];
MapThread[(x[[#, ;; #2]] = y[[#, ;; #2]]) &,
{-Range[n - 1], Reverse@Range[n - 1]}];
Clear @@ Names["i*"];
z = ToExpression[x];
Clear[s];
s = Reduce[Join[Total@# == m & /@ z,
0 <= # <= 1 & /@ Union[Flatten@z]],
Union@Flatten[z], Integers];
Clear[t, u, v];
Array[(t[#] =
Partition[Flatten[z] /.
ToRules[s[[#]]], n - 1] /.
{1 -> -1, 0 -> 1}) &, Length[s]];
Array[Function[a,
(u[a] = StringJoin[Flatten[MapThread[
Take[#, 1 - #2] &,
{t[a], Reverse[Range[n]]}]] /.
{1 -> "+", -1 -> "-"}])], Length[s]];
Array[Function[a,
(v[a] = MapThread[Insert[#, 0, #2] &,
{t[a], Range[n]}])], Length[s]]]
Timing[generate[9, 4];]
Length[s]
{202.208, Null}
1024380
The program takes 202 seconds to generate 1,024,380 solutions. E.g. the last one
u[1024380]
----++++---++++-+-+++++-++++--------
v[1024380]
0 -1 -1 -1 -1 1 1 1 1
-1 0 -1 -1 -1 1 1 1 1
-1 -1 0 -1 1 -1 1 1 1
-1 -1 -1 0 1 1 -1 1 1
-1 -1 1 1 0 1 1 -1 -1
1 1 -1 1 1 0 -1 -1 -1
1 1 1 -1 1 -1 0 -1 -1
1 1 1 1 -1 -1 -1 0 -1
1 1 1 1 -1 -1 -1 -1 0
and the first ten strings
u /@ Range[10]
++++----+++----+-+-----+----++++++++
++++----+++----+-+------+--+-+++++++
++++----+++----+-+-------+-++-++++++
++++----+++----+--+---+-----++++++++
++++----+++----+---+--+----+-+++++++
++++----+++----+----+-+----++-++++++
++++----+++----+--+-----+-+--+++++++
++++----+++----+--+------++-+-++++++
++++----+++----+---+---+--+--+++++++
I'm performing input-output calculations in Octave. I have several matrices/vectors in the formula:
F = f' * (I-A)^-1 * Y
All vectors probably contain zeroes. I would like to omit them from the calculation and just return 0 instead. Any help would be greatly appreciated!
Miranda
What do you mean when you say "omit them"?
If you want to remove zeros from a vector you can do this:
octave:1> x=[1,2,0,3,4,0,5];
octave:2> x(find(x==0))=[]
x =
1 2 3 4 5
The logic is: x==0 will test each element of x (in this case the test is if it equals zero) and will return a vector of 0's and 1's (0 if the test is false for that element and 1 otherwise)
So:
octave:1> x=[1,2,0,3,4,0,5];
octave:2> x==0
ans =
0 0 1 0 0 1 0
The find() function returns the indices of the non-zero elements of its argument, hence:
octave:3> find(x==0)
ans =
3 6
And then you are just indexing and removing when you do something like:
octave:5> x([3, 6]) = []
x =
1 2 3 4 5
But instead you do it with the output of the find() function (which is the vector [3,6] in this case)
You can do the same for matrices:
octave:7> A = [1,2,0;4,5,0]
A =
1 2 0
4 5 0
octave:8> A(find(A==0))=[]
A =
1
4
2
5
Then use the reshape() function to turn it back into a matrix.
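For comparison, here is the same mask-then-reshape idea in NumPy (a Python sketch of my own, not Octave; note that NumPy flattens row by row, whereas Octave indexes column by column):
import numpy as np

A = np.array([[1, 2, 0],
              [4, 5, 0]])
kept = A[A != 0]          # boolean mask drops the zeros and flattens
B = kept.reshape(2, -1)   # back into a matrix: [[1, 2], [4, 5]]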
I want to know in how many ways we can represent a number x as a sum of numbers from a given set of numbers {a1, a2, a3, ...}. Each number can be used more than once.
For example, if x=4 and a1=1,a2=2, then the ways of representing x=4 are:
1+1+1+1
1+1+2
1+2+1
2+1+1
2+2
Thus the number of ways =5.
I want to know if there exists a formula or some other fast method to do so. I can't brute force through it. I want to write code for it.
Note: x can be as large as 10^18. There can be up to 15 numbers a1, a2, a3, ..., and each of them is at most 15.
Calculating the number of combinations can be done in O(log x), disregarding the time it takes to perform matrix multiplication on arbitrarily sized integers.
The number of combinations can be formulated as a recurrence. Let S(n) be the number of ways to make the number n by adding numbers from a set. The recurrence is
S(n) = a_1*S(n-1) + a_2*S(n-2) + ... + a_15*S(n-15),
where a_i is the number of times i occurs in the set. Also, S(0) = 1 and S(n) = 0 for n < 0. This kind of recurrence can be formulated in terms of a matrix A of size 15*15 (or smaller if the largest number in the set is smaller). Then, if you have a column vector V containing
S(n-14) S(n-13) ... S(n-1) S(n),
then the result of the matrix multiplication A*V will be
S(n-13) S(n-12) ... S(n) S(n+1).
The A matrix is defined as follows:
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
a_15 a_14 a_13 a_12 a_11 a_10 a_9 a_8 a_7 a_6 a_5 a_4 a_3 a_2 a_1
where a_i is as defined above. The proof that the multiplication of this matrix with a vector of S(n-14) ... S(n) works can be seen immediately by performing the multiplication manually; the last element in the resulting vector will be equal to the right-hand side of the recurrence with n+1. Informally, the ones in the matrix shift the elements of the column vector one row up, and the last row of the matrix calculates the newest term.
To calculate an arbitrary term S(n) of the recurrence, compute A^n * V, where V is equal to
S(-14) S(-13) ... S(-1) S(0) = 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1.
In order to get the runtime down to O(log x), one can use exponentiation by squaring to calculate A^n.
In fact, it is sufficient to ignore the column vector altogether; the lower-right element of A^n contains the desired value S(n).
In case the above explanation was hard to follow, I have provided a C program that calculates the number of combinations in the way I described above. Beware that it will overflow a 64-bit integer very quickly. You'll be able to get much further with a high-precision floating point type using GMP, though you won't get an exact answer.
Unfortunately, I can't see a fast way to get an exact answer for numbers such as x=10^18, since the answer can be much larger than 10^x.
#include <stdio.h>
typedef unsigned long long ull;
/* highest number in set */
#define N 15
/* perform the matrix multiplication out=a*b */
void matrixmul(ull out[N][N],ull a[N][N],ull b[N][N]) {
ull temp[N][N];
int i,j,k;
for(i=0;i<N;i++) for(j=0;j<N;j++) temp[i][j]=0;
for(k=0;k<N;k++) for(i=0;i<N;i++) for(j=0;j<N;j++)
temp[i][j]+=a[i][k]*b[k][j];
for(i=0;i<N;i++) for(j=0;j<N;j++) out[i][j]=temp[i][j];
}
/* take the in matrix to the pow-th power, return to out */
void matrixpow(ull out[N][N],ull in[N][N],ull pow) {
ull sq[N][N],temp[N][N];
int i,j;
for(i=0;i<N;i++) for(j=0;j<N;j++) temp[i][j]=i==j;
for(i=0;i<N;i++) for(j=0;j<N;j++) sq[i][j]=in[i][j];
while(pow>0) {
if(pow&1) matrixmul(temp,temp,sq);
matrixmul(sq,sq,sq);
pow>>=1;
}
for(i=0;i<N;i++) for(j=0;j<N;j++) out[i][j]=temp[i][j];
}
void solve(ull n,int *a) {
ull m[N][N];
int i,j;
for(i=0;i<N;i++) for(j=0;j<N;j++) m[i][j]=0;
/* create matrix from a[] array above */
for(i=2;i<=N;i++) m[i-2][i-1]=1;
for(i=1;i<=N;i++) m[N-1][N-i]=a[i-1];
matrixpow(m,m,n);
printf("S(%llu): %llu\n",n,m[N-1][N-1]);
}
int main() {
int a[]={1,1,0,0,0,0,0,1,0,0,0,0,0,0,0};
int b[]={1,1,1,1,1,0,0,0,0,0,0,0,0,0,0};
solve(13,a);
solve(80,a);
solve(15,b);
solve(66,b);
return 0;
}
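For comparison, here is a short Python sketch of the same companion-matrix exponentiation (my own code, not part of the answer); Python's built-in integers keep the result exact for moderate x, though for x near 10^18 the result itself is astronomically large, as noted above:
def count_compositions(x, values):
    # number of ordered sums equal to x using elements of `values` (with repetition)
    N = max(values)
    a = [values.count(i) for i in range(1, N + 1)]   # a[i-1] = a_i

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
                for i in range(N)]

    # companion matrix: ones just above the diagonal, last row = a_N ... a_1
    M = [[1 if j == i + 1 else 0 for j in range(N)] for i in range(N - 1)]
    M.append([a[N - 1 - j] for j in range(N)])

    # exponentiation by squaring: R = M^x
    R = [[int(i == j) for j in range(N)] for i in range(N)]
    P, e = M, x
    while e:
        if e & 1:
            R = matmul(R, P)
        P = matmul(P, P)
        e >>= 1
    return R[N - 1][N - 1]        # lower-right element = S(x)
count_compositions(4, [1, 2]) returns 5, matching the example in the question.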
If you want to find all possible ways of representing a number N from a given set of numbers then you should follow a dynamic programming solution as already proposed.
But if you just want to know the number of ways, then you are dealing with the restricted partition function problem.
The restricted partition function p(n, d_m) ≡ p(n, {d_1, d_2, ..., d_m}) is the number of partitions of n into positive integers {d_1, d_2, ..., d_m}, each not greater than n.
You should also check the Wikipedia article on the partition function, where no restrictions apply.
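For the unordered count this answer refers to, here is a standard coin-change style DP sketch in Python (my own illustration):
def restricted_partitions(n, parts):
    # number of ways to write n as an unordered sum of values from `parts`
    ways = [0] * (n + 1)
    ways[0] = 1
    for p in parts:                     # processing one part at a time ignores order
        for total in range(p, n + 1):
            ways[total] += ways[total - p]
    return ways[n]
Note the difference from the question's example: restricted_partitions(4, [1, 2]) gives 3 (1+1+1+1, 1+1+2, 2+2), whereas counting ordered sums gives 5.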
PS. If negative numbers are also allowed then there are probably (countably) infinitely many ways to represent your sum.
1+1+1+1-1+1
1+1+1+1-1+1-1+1
etc...
PS2. This is more a math question than a programming one
Since the order of the summands matters, the following holds:
S( n, {a_1, ..., a_k} ) = sum[ S( n - a_i, {a_1, ..., a_k} ) for i in 1, ..., k ].
That is enough for a dynamic programming solution. If the values S(i, set) are computed for i from 0 to n, the complexity is O(n*k).
Edit: just an idea. Look at one summation as a sequence (s_1, s_2, ..., s_m). The sum of a prefix of the sequence reaches n/2 at some point; let that happen at index j:
s_1 + s_2 + ... + s_{j-1} < n / 2,
s_1 + s_2 + ... + s_j = S >= n / 2.
There are at most k different sums S, and for each S there are at most k possible last elements s_j. All possibilities (S, s_j) split the sequence sum into 3 parts.
s_1 + s_2 + ... + s_{j-1} = L,
s_j,
s_{j+1} + ... + s_m = R.
It holds that n/2 >= L and R > n/2 - max{a_i}. With that, the formula above takes a more complicated form:
S( n, set ) = sum[ S( n-L-s_j, set )*S( R, set ) for all combinations of (S,s_j) ].
I'm not sure, but I think that with each step a range of S(x, set) values will need to be 'created', and that range will grow linearly by a factor of max{a_i}.
Edit 2: samples for @Andrew. It is easy to implement the first method and it works for 'small' x. Here is Python code:
def S(x, ai_s):
    s = [0] * (x + 1)
    s[0] = 1                  # one way to make 0: the empty sum
    for i in range(1, x + 1):
        s[i] = sum(s[i - ai] for ai in ai_s if i - ai >= 0)
    return s[x]

print(S(13, [1, 2, 8]))
print(S(15, [1, 2, 3, 4, 5]))
This implementation has a memory problem for large x (> 10^5 in Python). Since only the last max(a_i) values are needed, it is possible to implement it with a circular buffer, as sketched below.
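A sketch of that circular-buffer variant (my own code, same recurrence, O(max(a_i)) memory):
def S_ring(x, ai_s):
    w = max(ai_s)
    ring = [0] * w            # ring[i % w] holds S(i) for the last w indices
    ring[0] = 1               # S(0) = 1
    for i in range(1, x + 1):
        ring[i % w] = sum(ring[(i - ai) % w] for ai in ai_s if i - ai >= 0)
    return ring[x % w]
S_ring(13, [1, 2, 8]) agrees with S(13, [1, 2, 8]) above.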
These values grow very fast, e.g. S(100000, [1,2,8] ) is ~ 10^21503.