Compute mean of columns for groups of rows in Octave - matrix

I have a matrix, for example:
1 2
3 4
4 5
And I also have a rule of grouping the rows, which is defined as a vector of group IDs like this:
1
2
1
Which means that the first and the third rows belong to the same group (ID 1) and the second row belongs to another group (ID 2). So I would like to compute the mean value for each group. Here is the result for my example:
2.5 3.5
3 4
More formally, there is a matrix A of size (m, n), a number of groups k, and a vector v of size (m, 1) whose values are integers in the range from 1 to k. The result is a matrix R of size (k, n), where row r contains the column means over the rows of A belonging to group r.
Here is my solution (which does what I need) using for-loop in Octave:
R = zeros(k, n);
for r = 1:k
R(r, :) = mean(A((v == r), :), 1);
end
I wonder whether it could be vectorized. So, what I need is to replace the for-loop with a vectorized solution, which is going to be much more efficient than the iterative one.
Here is one of my many attempts (which do not work) to solve the problem in a vectorized way:
R = mean(A((v == 1:k), :));

As long as your data is floating point, you can just do it manually: compute the sum yourself and then divide, making use of accumdim. Like so:
octave:1> A = [1 2; 3 4; 4 5];
octave:2> subs = [1; 2; 1];
octave:3> accumdim (subs, A) ./ accumdim (subs, ones (rows (subs), 1))
ans =
2.5000 3.5000
3.0000 4.0000
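An alternative that also runs unchanged in MATLAB (which has no accumdim) is accumarray. A minimal sketch of mine, assuming A, v, m, n and k are defined as in the question: build a (group, column) subscript pair for every element of A and let accumarray apply mean to each bucket.
subs = [repmat(v, n, 1), kron((1:n)', ones(m, 1))]; % (group, column) pair per element of A(:)
R = accumarray(subs, A(:), [k, n], @mean) % mean of each group within each column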

You can consider it as a matrix multiplication problem. For instance, for your example this corresponds to
A = [1 2; 3 4; 4 5];
B = [0.5,0,0.5;0,1,0];
C = B*A
The main issue is to construct B from your list of indices in an efficient manner. My suggestion is to use the implicit expansion of ==.
A = [1 2; 3 4; 4 5]; % Input data
idx = [1;2;1]; % Input Grouping
k = 2; % number of groups, ( = max(idx) )
m = 3; % Number of "observations"
Btmp = (idx == 1:k)'; % Mark locations
B = Btmp ./sum(Btmp,2); % Normalise
C = B*A
C =
2.5000 3.5000
3.0000 4.0000
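If m is large and k comparatively small, B above is mostly zeros, so it may be worth building it as a sparse matrix. A sketch of the same idea (my variant, not part of the answer above), assuming every group occurs at least once in idx:
Bs = sparse(1:m, idx(:).', 1, m, k).'; % k-by-m indicator: one 1 per observation
counts = full(sum(Bs, 2)); % group sizes
C = spdiags(1 ./ counts, 0, k, k) * Bs * A % row r of C is the mean of group r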

Related

Matlab's bsxfun() - what explains the performance differences when expanding along different dimensions?

In my line of work (econometrics/statistics), I frequently have to multiply matrices of different sizes and then perform additional operations on the resulting matrix. I have always relied on bsxfun() to vectorize the code, which I generally find to be more efficient than repmat(). But what I don't understand is why sometimes the performance of bsxfun() can be very different when expanding the matrices along different dimensions.
Consider this specific example:
x = ones(j, k, m);
beta = rand(k, m, s);
exp_xBeta = zeros(j, m, s);
for im = 1 : m
for is = 1 : s
xBeta = x(:, :, im) * beta(:, im, is);
exp_xBeta(:, im, is) = exp(xBeta);
end
end
y = mean(exp_xBeta, 3);
Context:
We have data from m markets and within each market we want to calculate the expectation of exp(X * beta) where X is a j x k matrix, and beta is a k x 1 random vector. We compute this expectation by Monte Carlo integration - make s draws of beta, compute exp(X * beta) for each draw, and then take the mean. Typically we get data with m > k > j, and we use a very large s. In this example I simply let X be a matrix of ones.
I did 3 versions of vectorization using bsxfun(), they differ by how X and beta are shaped:
Vectorization 1
x1 = x; % size [ j k m 1 ]
beta1 = permute(beta, [4 1 2 3]); % size [ 1 k m s ]
tic
xBeta = bsxfun(@times, x1, beta1);
exp_xBeta = exp(sum(xBeta, 2));
y1 = permute(mean(exp_xBeta, 4), [1 3 2 4]); % size [ j m ]
time1 = toc;
Vectorization 2
x2 = permute(x, [4 1 2 3]); % size [ 1 j k m ]
beta2 = permute(beta, [3 4 1 2]); % size [ s 1 k m ]
tic
xBeta = bsxfun(@times, x2, beta2);
exp_xBeta = exp(sum(xBeta, 3));
y2 = permute(mean(exp_xBeta, 1), [2 4 1 3]); % size [ j m ]
time2 = toc;
Vectorization 3
x3 = permute(x, [2 1 3 4]); % size [ k j m 1 ]
beta3 = permute(beta, [1 4 2 3]); % size [ k 1 m s ]
tic
xBeta = bsxfun(@times, x3, beta3);
exp_xBeta = exp(sum(xBeta, 1));
y3 = permute(mean(exp_xBeta, 4), [2 3 1 4]); % size [ j m ]
time3 = toc;
And this is how they performed (typically we get data with m > k > j, and we used a very large s):
j = 5, k = 15, m = 100, s = 2000:
For-loop version took 0.7286 seconds.
Vectorized version 1 took 0.0735 seconds.
Vectorized version 2 took 0.0369 seconds.
Vectorized version 3 took 0.0503 seconds.
j = 10, k = 15, m = 150, s = 5000:
For-loop version took 2.7815 seconds.
Vectorized version 1 took 0.3565 seconds.
Vectorized version 2 took 0.2657 seconds.
Vectorized version 3 took 0.3433 seconds.
j = 15, k = 35, m = 150, s = 5000:
For-loop version took 3.4881 seconds.
Vectorized version 1 took 1.0687 seconds.
Vectorized version 2 took 0.8465 seconds.
Vectorized version 3 took 0.9414 seconds.
Why is version 2 consistently always the fastest? Initially, I thought the performance advantage was because s was set to dimension 1, which Matlab might be able to compute faster since it stores data in column-major order. But Matlab's profiler told me that the time taken to compute that mean was rather insignificant and was more or less the same among all 3 versions. Matlab spent most of the time evaluating the line with bsxfun(), and that's also where the run-time difference was the biggest among the 3 versions.
Any thoughts on why version 1 is always the slowest and version 2 is always the fastest?
I've updated my test code here:
Code
EDIT: an earlier version of this post was incorrect; beta should be of size (k, m, s).
bsxfun is of course one of the good tools to vectorize things, but if you can somehow introduce matrix multiplication, that would be the best way to go about it, as matrix multiplications are really fast in MATLAB.
It seems you can use matrix multiplication here to get exp_xBeta, like so -
[m1,n1,r1] = size(x);
n2 = size(beta,2);
exp_xBeta_matmult = reshape(exp(reshape(permute(x,[1 3 2]),[],n1)*beta),m1,r1,n2)
Or directly get y as shown below -
y_matmult = reshape(mean(exp(reshape(permute(x,[1 3 2]),[],n1)*beta),2),m1,r1)
Explanation
To explain it in a bit more detail, we have the sizes as -
x : (j, k, m)
beta : (k, s)
Our end goal is to use the "eliminate" the k's from x and beta using matrix-multiplication. So, we can "push" the k in x to the end with permute and reshape to a 2D keeping k as the rows, i.e. ( j * m , k ) and then perform matrix-multiplication with beta ( k , s ) to give us ( j * m , s ). The product can then be reshaped to a 3D array ( j , m , s ) and perform elementwise exponential which would be exp_xBeta.
Now, if the final goal is y, which is getting the mean along the third dimension of exp_xBeta, it would be equivalent to calculating the mean along the rows of the matrix-multiplication product (j * m, s ) and then reshaping to ( j , m ) to get us y directly.
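A quick way to convince yourself of the equivalence is to compare the matrix-multiplication route against a plain loop on small random data (my check, taking beta of size (k, s) as assumed in this answer):
j = 4; k = 6; m = 5; s = 7; % small sizes for a quick check
x = rand(j, k, m);
beta = rand(k, s);
[m1, n1, r1] = size(x);
y_matmult = reshape(mean(exp(reshape(permute(x, [1 3 2]), [], n1) * beta), 2), m1, r1);
y_loop = zeros(j, m); % reference loop
for im = 1:m
y_loop(:, im) = mean(exp(x(:, :, im) * beta), 2);
end
max(abs(y_loop(:) - y_matmult(:))) % should be of order eps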
I did some more experiments this morning. It seems that it has to do with the fact that Matlab stores data in column major order after all.
In doing these experiments, I also added vectorization version 4, which does the same thing but orders the dimensions slightly differently than versions 1-3.
To recap, here are how x and beta are ordered in all 4 versions:
Vectorization 1:
x : (j, k, m, 1)
beta : (1, k, m, s)
Vectorization 2:
x : (1, j, k, m)
beta : (s, 1, k, m)
Vectorization 3:
x : (k, j, m, 1)
beta : (k, 1, m, s)
Vectorization 4:
x : (1, k, j, m)
beta : (s, k, 1, m)
code : bsxfun_test.m
The two most costly operations in this code are:
(a) xBeta = bsxfun(@times, x, beta);
(b) exp_xBeta = exp(sum(xBeta, dimK));
where dimK is the dimension of k.
In (a), bsxfun() has to expand x along the dimension of s and beta along the dimension of j. When s is much larger than other dimensions, we should see some performance advantage in vectorizations 2 and 4, since they assign s as the first dimension.
j = 100; k = 100; m = 100; s = 1000;
Vectorized version 1 took 2.4719 seconds.
Vectorized version 2 took 2.1419 seconds.
Vectorized version 3 took 2.5071 seconds.
Vectorized version 4 took 2.0825 seconds.
If instead s is trivial and k is huge, then vectorization 3 should be the fastest since it puts k in dimension 1:
j = 10; k = 10000; m = 100; s = 1;
Vectorized version 1 took 0.0329 seconds.
Vectorized version 2 took 0.1442 seconds.
Vectorized version 3 took 0.0253 seconds.
Vectorized version 4 took 0.1415 seconds.
If we swap the values of k and j in the last example, vectorization 1 becomes the fastest since j is assigned to dimension 1:
j = 10000; k = 10; m = 100; s = 1;
Vectorized version 1 took 0.0316 seconds.
Vectorized version 2 took 0.1402 seconds.
Vectorized version 3 took 0.0385 seconds.
Vectorized version 4 took 0.1608 seconds.
But in general when k and j are close, j > k does not necessarily imply vectorization 1 is faster than vectorization 3, since the operations performed in (a) and (b) are different.
In practice, I often have to run computation with s >>>> m > k > j. In such cases, it seems that ordering them in vectorization 2 or 4 gives the best results:
j = 10; k = 30; m = 100; s = 5000;
Vectorized version 1 took 0.4621 seconds.
Vectorized version 2 took 0.3373 seconds.
Vectorized version 3 took 0.3713 seconds.
Vectorized version 4 took 0.3533 seconds.
j = 15; k = 50; m = 150; s = 5000;
Vectorized version 1 took 1.5416 seconds.
Vectorized version 2 took 1.2143 seconds.
Vectorized version 3 took 1.2842 seconds.
Vectorized version 4 took 1.2684 seconds.
Takeaway: if bsxfun() has to expand along a dimension of size much bigger than other dimensions, assign that dimension to dimension 1!
Refer to this other question and answer.
If you are going to process matrices of different dimensions using bsxfun, make sure that the biggest dimension of the matrices is kept as the first dimension.
Here is my small example test:
%// Inputs
%// Taking one very big and one small vector, so that the difference could be seen clearly
a = rand(1000000,1);
b = rand(1,5);
%//---------------- testing with inbuilt function
%// preferred orientation [1]
t1 = timeit(@() bsxfun(@times, a, b))
%// not preferred [2]
t2 = timeit(@() bsxfun(@times, b.', a.'))
%//---------------- testing with anonymous function
%// preferred orientation [1]
t3 = timeit(@() bsxfun(@(x,y) x*y, a, b))
%// not preferred [2]
t4 = timeit(@() bsxfun(@(x,y) x*y, b.', a.'))
[1] Preferred orientation - larger dimension as first dimension
[2] Not preferred - smaller dimension as first dimension
Small note: the output given by all four methods is the same, even though their dimensions may differ.
Results:
t1 =
0.0461
t2 =
0.0491
t3 =
0.0740
t4 =
7.5249
>> t4/t3
ans =
101.6878
Method 3 is roughly 100 times faster than Method 4.
To conclude: although the difference between the preferred and unfavored orientations is minimal for the built-in function, it becomes huge for the anonymous function. So it might be best practice to use the bigger dimension as dimension 1.

Vectorized search for permutations (with repetitions) that contain given subpermutations (with repetitions)

This question can be viewed as a continuation/extension/generalization of a previous question of mine from here.
Some definitions: I have a set of integers S = {1,2,...,s}, say s = 20, and two matrices N and M whose rows are finite sequences of numbers from S (i.e. permutations with possible repetitions), of order n and m respectively, where 1 <= n <= m. Let us think of N as a collection of candidate sub-sequences for the sequences from M.
Example: [2 3 4 3] is a sub-sequence of [1 2 2 3 5 4 1 3] that occurs with multiplicity 2 (= the number of different ways one can find the sub-sequence in the main sequence), whereas [3 2 2 3] is not a sub-sequence of it. In particular, a valid sub-sequence by definition must preserve the order of the indices.
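For concreteness, the multiplicity in this example can be checked by brute force over ordered index choices (a throwaway check of mine, feasible only for short sequences):
seq = [1 2 2 3 5 4 1 3];
sub = [2 3 4 3];
cols = nchoosek(1:numel(seq), numel(sub)); % all index choices, in increasing order
cnt = sum(all(bsxfun(@eq, seq(cols), sub), 2)) % multiplicity: 2 here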
Problem statement:
(P1) For each row of M, obtain the number of sub-sequences of it, with multiplicity and without multiplicity, that occur in N as rows (it can be zero if none are contained in N);
(P2) For each row of N, find out how many times, with multiplicity and without multiplicity, it is contained in M as a sub-sequence (again, this number can be zero);
Example: Let N = [1 2 2; 2 3 4] and M = [1 1 2 2 3; 1 2 2 3 4; 1 2 3 5 6]. Then (P1) returns [2; 3; 0] for 'with multiplicities' and [1; 2; 0] for 'without multiplicities'. (P2) returns [3; 2] for 'with multiplicities' and [2; 1] without multiplicities.
Order of magnitude: M could typically have up to 30-40 columns and a few thousand rows, although I currently have M with only a few hundred rows and ~10 columns. N could be approaching the size of M or could be much smaller.
What I have so far: not much, to be honest. I believe I might be able to slightly modify my not-very-well-vectorized solution from my previous question to tackle permutations with repetitions, but I am still thinking about it and will update as soon as I have something working. But given my (lack of) experience so far, it would in all likelihood be very suboptimal :(
Thanks!
Introduction: Owing to the repetitions in the input data in each row, the combination-finding process doesn't have the sort of "uniqueness" among elements that was exploited in your previous problem; hence the loops used here. Also, note that the without-multiplicity codes don't use nchoosek, and as such I feel more optimistic about their performance.
Notations :
p1wim -> P1 with multiplicity
p2wim -> P2 with multiplicity
p1wom -> P1 without multiplicity
p2wom -> P2 without multiplicity
Codes :
I. Code for P1, 2 with multiplicity
permN = permute(N,[3 2 1]);
p1wim(size(M,1),1)=0;
p2wim(size(N,1),1)=0;
for k1 = 1:size(M,1)
d1 = nchoosek(M(k1,:),size(N,2)); % ordered picks whose length matches N's columns (3 in the example)
t1 = all(bsxfun(@eq,d1,permN),2);
p1wim(k1) = sum(t1(:));
p2wim = p2wim + squeeze(sum(t1,1));
end
II. Code for P1, 2 without multiplicity
eqmat = bsxfun(@eq,M,permute(N,[3 4 2 1])); %// equality matrix
[m,n,p,q] = size(eqmat); %// get sizes
inds = zeros(size(M,1),p,q); %// pre-allocate for indices array
vec1 = [1:m]'; %// setup constants to loop
vec2 = [0:q-1]*m*n*p;
vec3 = permute([0:p-1]*m*n,[1 3 2]);
for iter = 1:p
[~,ind1] = max(eqmat(:,:,iter,:),[],2);
inds(:,iter,:) = reshape(ind1,m,1,q);
ind2 = squeeze(ind1);
ind3 = bsxfun(@plus,vec1,(ind2-1)*m); %// setup forward moving equalities
ind4 = bsxfun(@plus,ind3,vec2);
ind5 = bsxfun(@plus,ind4,vec3);
eqmat(ind5(:)) = 0;
end
p1wom = sum(all(diff(inds,[],2)>0,2),3);
p2wom = squeeze(sum(all(diff(inds,[],2)>0,2),1));
As usual, I would encourage you to use gpuArrays too with your favorite parfor.
This approach uses only one loop over the rows of M (P1) or N (P2). The code makes use of linear indexing and the very powerful bsxfun function. Note that if the number of columns is large you may experience problems because of nchoosek.
[mr mc] = size(M);
[nr nc] = size(N);
%// P1
combs = nchoosek(1:mc, nc)-1;
P1mu = NaN(mr,1);
P1nm = NaN(mr,1);
for r = 1:mr
aux = M(r+mr*combs);
P1mu(r) = sum(ismember(aux, N, 'rows'));
P1nm(r) = sum(ismember(unique(aux, 'rows'), N, 'rows'));
end
%// P2. Multiplicity defined to span across different rows
rr = reshape(repmat(1:mr, size(combs,1), 1),[],1);
P2mu = NaN(nr,1);
P2nm = NaN(nr,1);
for r = 1:nr
aux = M(bsxfun(@plus, rr, mr*repmat(combs, mr, 1)));
P2mu(r) = sum(all(bsxfun(@eq, N(r,:), aux), 2));
P2nm(r) = sum(all(bsxfun(@eq, N(r,:), unique(aux, 'rows')), 2));
end
%// P2. Multiplicity defined restricted to within one row
rr = reshape(repmat(1:mr, size(combs,1), 1),[],1);
P2mur = NaN(nr,1);
P2nmr = NaN(nr,1);
for r = 1:nr
aux = M(bsxfun(@plus, rr, mr*repmat(combs, mr, 1)));
P2mur(r) = sum(all(bsxfun(@eq, N(r,:), aux), 2));
aux2 = unique([aux rr], 'rows'); %// concat rr to differentiate rows...
aux2 = aux2(:,1:end-1); %// ...and now remove it
P2nmr(r) = sum(all(bsxfun(@eq, N(r,:), aux2), 2));
end
Results for your example data:
P1mu =
2
3
0
P1nm =
1
2
0
P2mu =
3
2
P2nm =
1
1
P2mur =
3
2
P2nmr =
2
1
Some optimizations to the code would be possible. Not sure they are worth the effort:
Replace repmat by another bsxfun (using a 3rd dimension). That may save some memory
Transpose the original matrices and work down columns, instead of along rows. That may be faster.

optimization of pairwise L2 distance computations

I need help optimizing this loop. Matrix_1 is an (n x 2) integer matrix and Matrix_2 is (m x 2); m and n vary.
index_j = 1;
for index_k = 1:size(Matrix_1,1)
for index_l = 1:size(Matrix_2,1)
M2_Index_Dist(index_j,:) = [index_l, sqrt(bsxfun(@plus,sum(Matrix_1(index_k,:).^2,2),sum(Matrix_2(index_l,:).^2,2)')-2*(Matrix_1(index_k,:)*Matrix_2(index_l,:)'))];
index_j = index_j + 1;
end
end
I need M2_Index_Dist to provide a ((n*m) x 2) matrix with the index of matrix_2 in the first column and the distance in the second column.
Output example:
M2_Index_Dist = [ 1, 5.465
2, 56.52
3, 6.21
1, 35.3
2, 56.52
3, 0
1, 43.5
2, 9.3
3, 236.1
1, 8.2
2, 56.52
3, 5.582]
Here's how to apply bsxfun with your formula (||A-B|| = sqrt(||A||^2 + ||B||^2 - 2*A*B)):
d = real(sqrt(bsxfun(@plus, dot(Matrix_1,Matrix_1,2), ...
bsxfun(@minus, dot(Matrix_2,Matrix_2,2).', 2 * Matrix_1*Matrix_2.')))).';
You can avoid the final transpose if you change your interpretation of the matrix.
Note: There shouldn't be any complex values to handle with real but it's there in case of very small differences that may lead to tiny negative numbers.
Edit: It may be faster without dot:
d = sqrt(bsxfun(@plus, sum(Matrix_1.*Matrix_1,2), ...
bsxfun(@minus, sum(Matrix_2.*Matrix_2,2)', 2 * Matrix_1*Matrix_2.'))).';
Or with just one call to bsxfun:
d = sqrt(bsxfun(@plus, sum(Matrix_1.*Matrix_1,2), sum(Matrix_2.*Matrix_2,2)') ...
- 2 * Matrix_1*Matrix_2.').';
Note: this last order of operations gives results identical to yours, rather than agreeing only to within an error of ~1e-14.
Edit 2: To replicate M2_Index_Dist:
II = ndgrid(1:size(Matrix_2,1),1:size(Matrix_1,1));
M2_Index_Dist = [II(:) d(:)];
If I understand correctly, this does what you want:
ind = repmat((1:size(Matrix_2,1)).',size(Matrix_1,1),1); %// first column: index
d = pdist2(Matrix_2,Matrix_1); %// compute distance between each pair of rows
d = d(:); %// second column: distance
result = [ind d]; %// build result from first column and second column
As you see, this code calls pdist2 to compute the distance between every pair of rows of your matrices. By default this function uses Euclidean distance.
If you don't have pdist2 (which is part of the Statistics Toolbox), you can replace line 2 above with bsxfun:
d = squeeze(sqrt(sum(bsxfun(@minus,Matrix_2,permute(Matrix_1, [3 2 1])).^2,2)));
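If you have both available, a quick sanity check that the bsxfun replacement matches pdist2 (my snippet, with small random data):
Matrix_1 = rand(4, 2); Matrix_2 = rand(3, 2);
d1 = pdist2(Matrix_2, Matrix_1); % Statistics Toolbox version
d2 = squeeze(sqrt(sum(bsxfun(@minus, Matrix_2, permute(Matrix_1, [3 2 1])).^2, 2)));
max(abs(d1(:) - d2(:))) % should be of order eps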

Enumerate matrix combinations with fixed row and column sums

I'm attempting to find an algorithm (not a matlab command) to enumerate all possible NxM matrices with the constraints of having only non-negative integers in each cell and fixed sums for each row and column (these are the parameters of the algorithm).
Example:
Enumerate all 2x3 matrices with row totals 2, 1 and column totals 0, 1, 2:
| 0 0 2 | = 2
| 0 1 0 | = 1
0 1 2
| 0 1 1 | = 2
| 0 0 1 | = 1
0 1 2
This is a rather simple example, but as N and M increase, as well as the sums, there can be a lot of possibilities.
Edit 1
I might have a valid arrangement to start the algorithm:
matrix = new Matrix(N, M) // NxM matrix filled with 0s
FOR i FROM 0 TO matrix.rows().count()
FOR j FROM 0 TO matrix.columns().count()
a = target_row_sum[i] - matrix.rows[i].sum()
b = target_column_sum[j] - matrix.columns[j].sum()
matrix[i, j] = min(a, b)
END FOR
END FOR
target_row_sum[i] being the expected sum on row i.
In the example above it gives the 2nd arrangement.
Edit 2:
(based on j_random_hacker's last statement)
Let M be any matrix satisfying the given conditions (fixed row and column sums, non-negative cell values).
Let (a, b, c, d) be 4 cell values in M where (a, b) and (c, d) are on the same row, and (a, c) and (b, d) are on the same column.
Let Xa be the row number of the cell containing a and Ya be its column number.
Example:
| 1 a b |
| 1 2 3 |
| 1 c d |
-> Xa = 0, Ya = 1
-> Xb = 0, Yb = 2
-> Xc = 2, Yc = 1
-> Xd = 2, Yd = 2
Here is an algorithm to get all the combinations satisfying the initial conditions while varying only a, b, c and d:
// A matrix array containing a single element, M
// It will be filled with all possible combinations
matrices = [M]
I = min(a, d)
J = min(b, c)
FOR i FROM 1 TO I
tmp_matrix = M
tmp_matrix[Xa, Ya] = a - i
tmp_matrix[Xb, Yb] = b + i
tmp_matrix[Xc, Yc] = c - i
tmp_matrix[Xd, Yd] = d + i
matrices.add(tmp_matrix)
END FOR
FOR j FROM 1 TO J
tmp_matrix = M
tmp_matrix[Xa, Ya] = a + j
tmp_matrix[Xb, Yb] = b - j
tmp_matrix[Xc, Yc] = c + j
tmp_matrix[Xd, Yd] = d - j
matrices.add(tmp_matrix)
END FOR
It should then be possible to find every possible combination of matrix values:
Apply the algorithm to the first matrix for every possible group of 4 cells;
Recursively apply the algorithm to each sub-matrix obtained in the previous iteration, for every possible group of 4 cells except any group already used in a parent execution;
The recursive depth should be (N*(N-1)/2)*(M*(M-1)/2), each execution resulting in ((N*(N-1)/2)*(M*(M-1)/2) - depth)*(I+J+1) sub-matrices. But this creates a LOT of duplicate matrices, so this could probably be optimized.
Do you need this to calculate Fisher's exact test? Because that requires what you're doing, and based on that page, it seems there will in general be a vast number of solutions, so you probably can't do better than a brute-force recursive enumeration if you want every solution. OTOH it seems Monte Carlo approximations are successfully used by some software instead of full-blown enumerations.
I asked a similar question, which might be helpful. Although that question deals with preserving frequencies of letters in each row and column rather than sums, some results can be translated across. E.g. if you find any submatrix (pair of not-necessarily-adjacent rows and pair of not-necessarily-adjacent columns) with numbers
xy
yx
Then you can rearrange these to
yx
xy
without changing any row or column sums. However:
mhum's answer proves that there will in general be valid matrices that cannot be reached by any sequence of such 2x2 swaps. This can be seen by taking his 3x3 matrices and mapping A -> 1, B -> 2, C -> 4 and noticing that, because no element appears more than once in a row or column, frequency preservation in the original matrix is equivalent to sum preservation in the new matrix. However...
someone's answer links to a mathematical proof that it actually will work for matrices whose entries are just 0 or 1.
More generally, if you have any submatrix
ab
cd
where the (not necessarily unique) minimum is d, then you can replace this with any of the d+1 matrices
ef
gh
where h = d-i, g = c+i, f = b+i and e = a-i, for any integer 0 <= i <= d.
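A minimal numeric illustration of this move (my example, in MATLAB): take a matrix where the bottom-right entry of the chosen submatrix is a minimum and walk through all d+1 replacements.
M = [2 1; 2 1]; % row sums 3, 3; column sums 4, 2; d = M(2,2) = 1 is a minimum
for i = 0:M(2,2)
T = [M(1,1)-i, M(1,2)+i; M(2,1)+i, M(2,2)-i];
disp(T) % every T has the same row and column sums as M
end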
For an NxM matrix you have N*M unknowns and N+M equations. Put random numbers in the top-left (N-1)x(M-1) sub-matrix, except for the (N-1, M-1) element. Now you can find the closed form for the remaining N+M elements trivially.
More details: there are a total of T = N*M elements.
There are R = (N-1)*(M-1) - 1 randomly filled-out elements.
Remaining number of unknowns: T - R = N*M - (N-1)*(M-1) + 1 = N+M.
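A minimal sketch of the completion step for the 2x3 example above (my choice of free values; for simplicity I fix the whole top-left block, one entry more than strictly needed, and the construction is only valid when all completed entries come out non-negative):
rowsum = [2; 1]; colsum = [0 1 2]; % targets from the question's example
F = [0 1]; % freely chosen top-left (N-1)x(M-1) block
r1 = [F, rowsum(1) - sum(F)]; % complete row 1 from its row sum
r2 = colsum - r1; % complete row 2 from the column sums
X = [r1; r2] % gives [0 1 1; 0 0 1], the second arrangement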

Code a linear programming exercise by hand

I have been doing linear programming problems in my class by graphing them but I would like to know how to write a program for a particular problem to solve it for me. If there are too many variables or constraints I could never do this by graphing.
Example problem, maximize 5x + 3y with constraints:
5x - 2y >= 0
x + y <= 7
x <= 5
x >= 0
y >= 0
I graphed this and got a visible region with 3 corners. x=5 y=2 is the optimal point.
How do I turn this into code? I know of the simplex method. And very importantly, will all LP problems be coded in the same structure? Would brute force work?
There are quite a number of Simplex Implementations that you will find if you search.
In addition to the one mentioned in the comment (Numerical Recipes in C),
you can also find:
Google's own Simplex-Solver
Then there's COIN-OR
GNU has its own GLPK
If you want a C++ implementation, this one in Google Code is actually accessible.
There are many implementations in R, including the boot package. (In R, you can see the implementation of a function by typing its name without the parentheses.)
To address your other two questions:
Will all LPs be coded the same way? Yes, a generic LP solver can be written to load and solve any LP. (There are industry-standard formats for describing LPs, such as .mps and .lp.)
Would brute force work? Keep in mind that many companies and big organizations spend a long time fine-tuning their solvers. There are LPs that have interesting properties which many solvers will try to exploit. Also, certain computations can be solved in parallel. Brute force is exponential in the problem size, so at some large number of variables/constraints it won't work.
Hope that helps.
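For a problem as small as the example, brute force over vertices is easy to sketch: intersect every pair of constraint boundaries, keep the feasible points, and take the best objective value. A throwaway MATLAB sketch of mine (the >= constraints are negated into A*x <= b form):
A = [-5 2; % 5x - 2y >= 0 -> -5x + 2y <= 0
1 1; % x + y <= 7
1 0; % x <= 5
-1 0; % x >= 0
0 -1]; % y >= 0
b = [0; 7; 5; 0; 0];
c = [5; 3]; % maximize 5x + 3y
best = -inf; xbest = [];
pairs = nchoosek(1:size(A,1), 2);
for p = pairs'
Ap = A(p, :);
if abs(det(Ap)) > 1e-12 % skip parallel boundaries
v = Ap \ b(p); % intersection of the two boundary lines
if all(A*v <= b + 1e-9) && c'*v > best % feasible and better?
best = c'*v; xbest = v;
end
end
end
xbest, best % expect x = [5; 2], objective 31
This only works because the number of constraints is tiny; the number of candidate vertices grows combinatorially, which is exactly why brute force fails at scale.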
I wrote this in MATLAB yesterday; it could easily be transcribed to C++ if you use the Eigen library or write your own matrix class using a std::vector of std::vectors.
function [x, fval] = mySimplex(fun, A, B, lb, ub)
% Example parameters to show that the function actually works
% sample set 1 (works for this data set)
% fun = [8 10 7];
% A = [1 3 2; 1 5 1];
% B = [10; 8];
% lb = [0; 0; 0];
% ub = [inf; inf; inf];
% sample set 2 (works for this data set; uncomment to override the inputs)
% fun = [7 8 10];
% A = [2 3 2; 1 1 2];
% B = [1000; 800];
% lb = [0; 0; 0];
% ub = [inf; inf; inf];
% generate a new slack variable for every row of A
numSlackVars = size(A,1); % need a new slack variable for every row of A
% Set up tableau to store algorithm data
tableau = [A; -fun];
tableau = [tableau, eye(numSlackVars + 1)];
lastCol = [B;0];
tableau = [tableau, lastCol];
% for convenience's sake, assign the following:
numRows = size(tableau,1);
numCols = size(tableau,2);
% do simplex algorithm
% step 0: find num of negative entries in bottom row of tableau
numNeg = 0; % the number of negative entries in bottom row
for i=1:numCols
if(tableau(numRows,i) < 0)
numNeg = numNeg + 1;
end
end
% Remark: the number of negatives is exactly the number of iterations needed in the
% simplex algorithm
for iterations = 1:numNeg
% step 1: find minimum value in last row
minVal = inf; % start above any possible entry
minCol = 1; % start by assuming min value is the first element
for i=1:numCols
if(tableau(numRows, i) < minVal)
minVal = tableau(numRows, i);
minCol = i; % update the index corresponding to the min element
end
end
% step 2: Find corresponding ratio vector in pivot column
vectorRatio = zeros(numRows -1, 1);
for i=1:(numRows-1) % the ratio vector has numRows - 1 entries
vectorRatio(i, 1) = tableau(i, numCols) ./ tableau(i, minCol);
end
% step 3: Determine pivot element by finding minimum element in vector
% ratio
minVal = inf; % reset for the ratio search
minRatio = 1; % holds the index of the minimum ratio
for i=1:numRows-1
if(vectorRatio(i,1) < minVal)
minVal = vectorRatio(i,1);
minRatio = i;
end
end
% step 4: assign pivot element
pivotElement = tableau(minRatio, minCol);
% step 5: perform pivot operation on tableau around the pivot element
tableau(minRatio, :) = tableau(minRatio, :) * (1/pivotElement);
% step 6: perform pivot operation on rows (not including last row)
for i=1:size(vectorRatio,1)+1 % all rows, including the objective row
if(i ~= minRatio) % skip the pivot row here
tableau(i, :) = -tableau(i,minCol)*tableau(minRatio, :) + tableau(i,:);
end
end
end
% Now we can interpret the algo tableau
numVars = size(A,2); % the number of cols of A is the number of variables
x = zeros(numVars, 1); % preallocate the solution vector
% Check for basicity
for col=1:numVars
count_zero = 0;
count_one = 0;
for row = 1:size(tableau,1)
if(abs(tableau(row,col)) < 1e-2)
count_zero = count_zero + 1;
elseif(abs(tableau(row,col) - 1) < 1e-2)
count_one = count_one + 1;
stored_row = row; % remember this row for later use
end
end
if(count_zero == (size(tableau,1) -1) && count_one == 1) % this is the case where it is basic
x(col,1) = tableau(stored_row, numCols);
else
x(col,1) = 0; % this is the case where it is not basic
end
end
% find function optimal value at optimal solution
fval = x(1,1) * fun(1,1); % start the accumulation with the first term
for i=2:numVars
fval = fval + x(i,1) * fun(1,i);
end
end
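A possible call, assuming the sample data inside the function body is left commented out (this is sample set 1 from the comments above):
fun = [8 10 7];
A = [1 3 2; 1 5 1];
B = [10; 8];
[x, fval] = mySimplex(fun, A, B, [0; 0; 0], [inf; inf; inf])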
