Here is my little script for simulating Levy motion:
clear all;
clc; close all;
t = 0; T = 1000; I = T-t;
dT = T/I; t = 0:dT:T; tau = T/I;
alpha = 1.5;
sigma = dT^(1/alpha);
mu = 0; beta = 0;
N = 1000;
X = zeros(N, I);
for k=1:N
L = zeros(1,I);
for i = 1:I-1
L( (i + 1) * tau ) = L(i*tau) + stable2( alpha, beta, sigma, mu, 1);
end
X(k,1:length(L)) = L;
end
q = 0.1:0.1:0.9;
quant = qlines2(X, q, t(1:length(X)), tau);
hold all
for i = 1:length(quant)
plot( t, quant(i) * t.^(1/alpha), ':k' );
end
Here stable2 returns a stable random variable with the given parameters (you may replace it with normrnd(mu, sigma) for this case; it's not crucial), and qlines2 returns the quantiles needed for plotting.
But I don't want to talk about math here. My problem is that this implementation is pretty slow, and I would like to speed it up. Unfortunately, computer science is not my main field: I have heard of methods like memoization and vectorization, and that there are a lot of other techniques, but I don't know how to use them.
For example, I'm pretty sure I should replace this filthy double for-loop somehow, but I'm not sure what to do instead.
EDIT: Maybe I should use (and learn...) another language (Python, C, any functional one)? I always thought that MATLAB/Octave was designed for numerical computation, but if I should switch, which language would be best?
The crucial bit is, as you said, the for loops; MATLAB does not like those, so vectorization is indeed the keyword (together with preallocating the space).
I just altered your for-loop section somewhat so that you do not have to reset L over and over again; instead we save all the Ls in a bigger matrix (I also eliminated the length(L) call).
L = zeros(N,I);
for k=1:N
for i = 1:I-1
L(k,(i + 1) * tau ) = L(k,i*tau) + normrnd(mu, sigma);
end
X(k,1:I) = L(k,1:I);
end
Now you can already see that X(k,1:I) = L(k,1:I); in the loop is obsolete, which also means that we can switch the order of the loops. This is crucial, because the i-steps are recursive (each depends on the previous step), so we cannot vectorize that loop; we can only vectorize the k-loop.
Your original code needed 9.3 seconds on my machine; the new code still needs about the same time:
L = zeros(N,I);
for i = 1:I-1
for k=1:N
L(k,(i + 1) * tau ) = L(k,i*tau) + normrnd(mu, sigma);
end
end
X = L;
But now we can apply the vectorization: instead of looping through all rows (the loop over k), we can eliminate that loop and process all rows at "once".
L = zeros(N,I);
for i = 1:I-1
L(:,(i + 1) * tau ) = L(:,i*tau) + normrnd(mu, sigma); %<- this is not yet what you want, see comment below
end
X = L;
This code needs only 0.045 seconds on my machine. I hope you still get the same output, because I have no idea what you are calculating, but I also hope you can see how to go about vectorizing code.
PS: I just noticed that in the last example we now use the same random number for the whole column, which is obviously not what you want. Instead you should generate a whole vector of random numbers, e.g.:
L = zeros(N,I);
for i = 1:I-1
L(:,(i + 1) * tau ) = L(:,i*tau) + normrnd(mu, sigma,N,1);
end
X = L;
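If you are content with the normrnd stand-in for stable2, you can even drop the remaining loop entirely: since the increments are independent, generate them all at once and accumulate with cumsum. A minimal sketch of the idea (assuming tau == 1, as with the parameters above; I have not benchmarked it against the loop version):
dW = normrnd(mu, sigma, N, I-1);    % all N*(I-1) increments in one call
L  = [zeros(N,1), cumsum(dW, 2)];   % each row is one sample path starting at 0
X  = L;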
PPS: Great question!
I have a function in MATLAB which performs the Gram-Schmidt Orthogonalisation with a very important weighting applied to the inner-products (I don't think MATLAB's built in function supports this).
This function works well as far as I can tell, however, it is too slow on large matrices.
What would be the best way to improve this?
I have tried converting to a MEX file but I lose parallelisation with the compiler I'm using and so it is then slower.
I was thinking of running it on a GPU as the element-wise multiplications are highly parallelised. (But I'd prefer the implementation to be easily portable)
Can anyone vectorise this code or make it faster? I am not sure how to do it elegantly ...
I know the stackoverflow minds here are amazing, consider this a challenge :)
Function
function [Q, R] = Gram_Schmidt(A, w)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
v = zeros(n, 1);
for j = 1:n
v = A(:,j);
for i = 1:j-1
R(i,j) = sum( v .* conj( Q(:,i) ) .* w ) / ...
sum( Q(:,i) .* conj( Q(:,i) ) .* w );
v = v - R(i,j) * Q(:,i);
end
R(j,j) = norm(v);
Q(:,j) = v / R(j,j);
end
end
where A is an m x n matrix of complex numbers and w is an m x 1 vector of real numbers.
Bottle-neck
This is the expression for R(i,j), which is the slowest part of the function (I'm not 100% sure the notation is correct):
R(i,j) = <v, Q_i>_w / <Q_i, Q_i>_w,   where <u, v>_w = sum_k( u(k) * conj(v(k)) * w(k) )
and w is a non-negative weight function.
The weighted inner-product is mentioned on several Wikipedia pages, this is one on the weight function and this is one on orthogonal functions.
Reproduction
You can produce results using the following script:
A = complex( rand(360000,100), rand(360000,100));
w = rand(360000, 1);
[Q, R] = Gram_Schmidt(A, w);
where A and w are the inputs.
Speed and Computation
If you run the above script under the profiler, you will see that almost all of the time is spent evaluating the R(i,j) expression.
Testing Result
You can test the results by comparing a function with the one above using the following script:
A = complex( rand( 100, 10), rand( 100, 10));
w = rand( 100, 1);
[Q , R ] = Gram_Schmidt( A, w);
[Q2, R2] = Gram_Schmidt2( A, w);
zeros1 = norm( Q - Q2 );
zeros2 = norm( R - R2 );
where Gram_Schmidt is the function described earlier and Gram_Schmidt2 is an alternative function. The results zeros1 and zeros2 should then be very close to zero.
Note:
I tried speeding up the calculation of R(i,j) with the following but to no avail ...
R(i,j) = ( w' * ( v .* conj( Q(:,i) ) ) ) / ...
( w' * ( Q(:,i) .* conj( Q(:,i) ) ) );
1)
My first attempt at vectorization:
function [Q, R] = Gram_Schmidt1(A, w)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
for j = 1:n
v = A(:,j);
QQ = Q(:,1:j-1);
QQ = bsxfun(@rdivide, bsxfun(@times, w, conj(QQ)), w.' * abs(QQ).^2);
for i = 1:j-1
R(i,j) = (v.' * QQ(:,i));
v = v - R(i,j) * Q(:,i);
end
R(j,j) = norm(v);
Q(:,j) = v / R(j,j);
end
end
Unfortunately, it turned out to be slower than the original function.
2)
Then I realized that the columns of this intermediate matrix QQ are built incrementally, and that previous ones are not modified. So here is my second attempt:
function [Q, R] = Gram_Schmidt2(A, w)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
QQ = complex(zeros(m, n-1));
for j = 1:n
if j>1
qj = Q(:,j-1);
QQ(:,j-1) = (conj(qj) .* w) ./ (w.' * (qj.*conj(qj)));
end
v = A(:,j);
for i = 1:j-1
R(i,j) = (v.' * QQ(:,i));
v = v - R(i,j) * Q(:,i);
end
R(j,j) = norm(v);
Q(:,j) = v / R(j,j);
end
end
Technically no major vectorization was done; I've only precomputed intermediate results, and moved the computation outside the inner loop.
Based on a quick benchmark, this new version is definitely faster:
% some random data
>> M = 10000; N = 100;
>> A = complex(rand(M,N), rand(M,N));
>> w = rand(M,1);
% time
>> timeit(#() Gram_Schmidt(A,w), 2) % original version
ans =
1.2444
>> timeit(#() Gram_Schmidt1(A,w), 2) % first attempt (vectorized)
ans =
2.0990
>> timeit(#() Gram_Schmidt2(A,w), 2) % final version
ans =
0.4698
% check results
>> [Q,R] = Gram_Schmidt(A,w);
>> [Q2,R2] = Gram_Schmidt2(A,w);
>> norm(Q-Q2)
ans =
4.2796e-14
>> norm(R-R2)
ans =
1.7782e-12
EDIT:
Following the comments, we can rewrite the second solution to get rid of the if-statement, by moving that part to the end of the outer loop (i.e., immediately after computing the new column Q(:,j), we compute and store the corresponding QQ(:,j)).
The function is identical in output, and timing is not that different either; the code is just a bit shorter!
function [Q, R] = Gram_Schmidt3(A, w)
[m, n] = size(A);
Q = zeros(m, n, 'like',A);
R = zeros(n, n, 'like',A);
QQ = zeros(m, n, 'like',A);
for j = 1:n
v = A(:,j);
for i = 1:j-1
R(i,j) = (v.' * QQ(:,i));
v = v - R(i,j) * Q(:,i);
end
R(j,j) = norm(v);
Q(:,j) = v / R(j,j);
QQ(:,j) = (conj(Q(:,j)) .* w) ./ (w.' * (Q(:,j).*conj(Q(:,j))));
end
end
Note that I used the zeros(..., 'like',A) syntax (new in recent MATLAB versions). This allows us to run the function unmodified on the GPU (assuming you have the Parallel Computing Toolbox):
% CPU
[Q3,R3] = Gram_Schmidt3(A, w);
vs.
% GPU
AA = gpuArray(A);
[Q3,R3] = Gram_Schmidt3(AA, w);
Unfortunately in my case, it wasn't any faster. In fact it was many times slower to run on the GPU than on the CPU, but it was worth a shot :)
There is a long discussion here, but, to jump to the answer: you have weighted the numerator and denominator of the R calculation by a vector w. The weighting occurs in the inner loop and consists of a triple dot product, A dot Q dot w in the numerator and Q dot Q dot w in the denominator. If you make one change, I think the code will run significantly faster. Write num = (A dot sqrt(w)) dot (Q dot sqrt(w)) and write den = (Q dot sqrt(w)) dot (Q dot sqrt(w)). That moves the (A dot sqrt(w)) and (Q dot sqrt(w)) product calculations out of the inner loop.
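In MATLAB terms, the grouping I mean looks roughly like this (a sketch only, with illustrative variable names; sw = sqrt(w) is computed once outside the loops, and the weighted copy of each Q column can be cached as it is produced):
sw  = sqrt(w);                          % once, outside both loops
vsw = v .* sw;                          % weighted copy of the current column
qsw = Q(:,i) .* sw;                     % weighted copy of the i-th basis vector (cacheable)
R(i,j) = (qsw' * vsw) / (qsw' * qsw);   % same value as the original triple products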
I would like to give a description of the formulation of Gram-Schmidt orthogonalization that, hopefully, in addition to giving an alternative computational solution, gives further insight into the advantage of GSO.
The "goals" of GSO are twofold. First, to enable the solution of an equation like Ax = y, where A has far more rows than columns. This situation occurs frequently when measuring data, in that it is easy to measure more data points than the number of states. The approach to the first goal is to rewrite A as QR such that the columns of Q are orthogonal and normalized, and R is a triangular matrix. The algorithm you provided, I believe, achieves the first goal: Q represents the basis space of the A matrix, and R represents the amplitude of each basis vector required to generate each column of A.
The second goal of GSO is to rank the basis vectors in order of significance. This is the step that you have not done. And while including this step may increase the solution time, the results will identify which elements of x are important, according to the data contained in the measurements represented by A.
But, I think, with this implementation, the solution is faster than the approach you presented.
A_ij = sum_k Q_ik * R_kj, where the columns Q_j are orthonormal and R is upper triangular (R_ij = 0 for i > j). The Q_j are the orthogonal basis vectors for A, and R_ij is the participation of each Q_j in creating a column of A. So,
A_j1 = Q_j1 * R_1,1
A_j2 = Q_j1 * R_1,2 + Q_j2 * R_2,2
A_j3 = Q_j1 * R_1,3 + Q_j2 * R_2,3 + Q_j3 * R_3,3
By inspection, you can write
A_j1 = ( A_j1 / | A_j1 | ) * | A_j1 | = Q_j1 * R_1,1
Then you project Q_j1 onto every other column of A to get the R_1,j elements:
R_1,2 = Q_j1 dot Aj2
R_1,3 = Q_j1 dot Aj3
...
R_1,j(j>1) = A_j dot Q_j1
Then you subtract the projection onto Q_j1 from the columns of A (this would set the first column to zero, so you can skip the first column):
for j = 2,n
A_j = A_j - R_1,j * Q_j1
end
Now one column of A has been removed, the first orthonormal basis vector, Q_j1, has been determined, the contribution of the first basis vector to each column, R_1,j, has been determined, and the contribution of the first basis vector has been subtracted from each column. Repeat this process for the remaining columns of A to obtain the remaining columns of Q and rows of R.
for i = 1,n
R_ii = |A_i| A_i is the ith column of A, |A_i| is magnitude of A_i
Q_i = A_i / R_ii Q_i is the ith column of Q
for j = i, n
R_ij = | A_j dot Q_i |
A_j = A_j - R_ij * Q_i
end
end
You are trying to weight the rows of A with w. Here is one approach: I would normalize w and incorporate the effect into R. You "removed" the effects of w by multiplying and dividing by w. An alternative to "removing" the effect is to normalize the amplitude of w to one.
w = w / | w |
for i = 1,n
R_ii = |A_i inner product w| # A_i inner product w = A_i .* w
Q_i = A_i / R_ii
for j = i, n
R_ij = | (A_i inner product w) dot Q_i | # A dot B = A' * B
A_j = A_j - R_ij * Q_i
end
end
Another approach to implementing w is to normalize w and then premultiply every column of A by w. That cleanly weights the rows of A, and reduces the number of multiplications.
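A minimal sketch of that idea (whether this particular scaling gives the weighting you actually need is for you to verify):
wn = w / norm(w);                  % normalize the weight vector once
Aw = bsxfun(@times, A, wn);        % scale the rows of A once, outside all loops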
Using the following may help in speeding up your code
A inner product B = A .* B
A dot w = A' w
(A B)' = B'A'
A' conj(A) = |A|^2
The above can be vectorized easily in MATLAB, pretty much as written.
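As a quick numerical sanity check of these identities (my own illustration on random data, not part of any of the functions above):
m  = 5;
v  = complex(rand(m,1), rand(m,1));
q  = complex(rand(m,1), rand(m,1));
w  = rand(m,1);
sw = sqrt(w);
r1 = sum(v .* conj(q) .* w)        % the original triple product
r2 = (conj(q) .* w).' * v          % w grouped with conj(q)
r3 = (q .* sw)' * (v .* sw)        % sqrt(w) split between both vectors
% r1, r2 and r3 agree to machine precision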
But you are missing the second portion, the ranking of A, which tells you which states (elements of x in A x = y) are significantly represented in the data.
The ranking procedure is easy to describe, but I'll let you work out the programming details (a rough sketch follows below). The above procedure essentially assumes the columns of A are in order of significance: the first column is subtracted off all the remaining columns, then the 2nd column is subtracted off the remaining columns, etc. The first row of R represents the contribution of the first column of Q to each column of A. If you sum the absolute values of the first row of R, you get a measure of the contribution of the first column of Q to the matrix A. So, you just evaluate each column of A as the first (or next) column of Q, and determine the ranking score of the contribution of that Q column to the remaining columns of A. Then select the A column that has the highest rank as the next Q column. Coding this essentially comes down to pre-estimating the next row of R, for every remaining column of A, in order to determine which ranked R magnitude has the largest amplitude. Having an index vector that represents the original column order of A will be beneficial. By ranking the basis vectors, you end up with the "principal" basis vectors that represent A, which are typically much smaller in number than the number of columns of A.
Also, if you rank the columns, it is not necessary to calculate every column of R. When you know which columns of A don't contain any useful information, there's no real benefit to keeping those columns.
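To make the ranking procedure a bit more concrete, here is a rough, unweighted sketch of column-pivoted GSO (my own illustration of the steps described above, ignoring w for brevity; the scoring loop is deliberately naive and not optimized):
function [Q, R, order] = Gram_Schmidt_ranked_sketch(A)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
order = 1:n;                               % tracks the original column order of A
for i = 1:n
    % score every remaining column as a candidate for the next basis vector
    best = i; bestScore = -inf;
    for c = i:n
        qc = A(:,c) / norm(A(:,c));        % candidate basis vector
        score = sum(abs(qc' * A(:,i:n)));  % magnitude of its would-be row of R
        if score > bestScore, bestScore = score; best = c; end
    end
    A(:,[i best]) = A(:,[best i]);         % bring the winning column forward
    order([i best]) = order([best i]);
    % standard (modified) Gram-Schmidt step on the chosen column
    R(i,i) = norm(A(:,i));
    Q(:,i) = A(:,i) / R(i,i);
    R(i,i+1:n) = Q(:,i)' * A(:,i+1:n);
    A(:,i+1:n) = A(:,i+1:n) - Q(:,i) * R(i,i+1:n);
end
end
The order vector records which original columns of A were selected, in decreasing order of significance.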
In structural dynamics, one approach to reducing the number of degrees of freedom is to calculate the eigenvalues, assuming you have representative values for the mass and stiffness matrices. If you think about it, the above approach can be used to "calculate" the M and K (and C) matrices from measured response, and also to identify the "measurement response shapes" that are significantly represented in the data. These are different from, and potentially more important than, the mode shapes. So you can solve very difficult problems, i.e., estimation of state matrices and of the number of degrees of freedom represented, from measured response, with the above approach. If you read up on N4SID, the author did something similar, except he used SVD instead of GSO. I don't like the technical description of N4SID; there is too much focus on vector-projection notation, which is simply a dot product.
There may be one or two errors in the above information, I'm writing this off the top of my head, before rushing off to work. So, check the algorithm / equations as you implement... Good Luck
Coming back to your question of how to optimize the algorithm when you weight with w: here is a basic GSO algorithm, without the sorting, written to be compatible with your function.
Note: the code below is in Octave, not MATLAB; there are some minor differences.
function [Q, R] = Gram_Schmidt_2(A, w)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
# Outer loop identifies the basis vectors
for j = 1:n
aCol = A(:,j);
# Subtract off the basis vector
for i = 1:(j-1)
R(i,j) = ctranspose(Q(:,i)) * aCol;
aCol = aCol - R(i,j) * Q(:,i);
end
amp_A_col = norm(aCol);
R(j,j) = amp_A_col;
Q(:,j) = aCol / amp_A_col;
end
end
To get your algorithm, only one line needs to change. But you lose a lot of speed, because "ctranspose(Q(:,i)) * aCol" is a single vector (dot-product) operation, while "sum( aCol .* conj( Q(:,i) ) .* w )" is an element-wise operation that creates temporary arrays.
function [Q, R] = Gram_Schmidt_2(A, w)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
# Outer loop identifies the basis vectors
for j = 1:n
aCol = A(:,j);
# Subtract off the basis vector
for i = 1:(j-1)
# R(i,j) = ctranspose(Q(:,i)) * aCol;
R(i,j) = sum( aCol .* conj( Q(:,i) ) .* w ) / ...
         sum( Q(:,i) .* conj( Q(:,i) ) .* w );
aCol = aCol - R(i,j) * Q(:,i);
end
amp_A_col = norm(aCol);
R(j,j) = amp_A_col;
Q(:,j) = aCol / amp_A_col;
end
end
You can change it back to a vector operation by weighting aCol and Q by the sqrt of w.
function [Q, R] = Gram_Schmidt_3(A, w)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
Q_sw = complex(zeros(m, n));
sw = w .^ 0.5;
for j = 1:n
aCol = A(:,j);
aCol_sw = aCol .* sw;
# Subtract off the basis vector
for i = 1:(j-1)
# R(i,j) = ctranspose(Q(:,i)) * aCol;
numTerm = ctranspose( Q_sw(:,i) ) * aCol_sw;
denTerm = ctranspose( Q_sw(:,i) ) * Q_sw(:,i);
R(i,j) = numTerm / denTerm;
aCol_sw = aCol_sw - R(i,j) * Q_sw(:,i);
end
aCol = aCol_sw ./ sw;
amp_A_col = norm(aCol);
R(j,j) = amp_A_col;
Q(:,j) = aCol / amp_A_col;
Q_sw(:,j) = Q(:,j) .* sw;
end
end
As pointed out by JacobD, the above function does not run faster. Possibly it takes time to create the additional arrays. Another grouping strategy for the triple product is to group w with conj(Q). Hope this is faster...
function [Q, R] = Gram_Schmidt_4(A, w)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
for j = 1:n
aCol = A(:,j);
for i = 1:(j-1)
cqw = conj(Q(:,i)) .* w;
R(i,j) = ( transpose( aCol ) * cqw) ...
/ (transpose( Q(:,i) ) * cqw);
aCol = aCol - R(i,j) * Q(:,i);
end
amp_A_col = norm(aCol);
R(j,j) = amp_A_col;
Q(:,j) = aCol / amp_A_col;
end
end
Here's a driver function to time different versions.
function Gram_Schmidt_tester_2
nSamples = 360000;
nMeas = 100;
nMeas = 15;    % overrides the value above; a smaller problem is used for the timings below
A = complex( rand(nSamples,nMeas), rand(nSamples,nMeas));
w = rand(nSamples, 1);
profile on;
[Q1, R1] = Gram_Schmidt_basic(A);
profile off;
data1 = profile ("info");
tData1=data1.FunctionTable(1).TotalTime;
approx_zero1 = A - Q1 * R1;
max_value1 = max(max(abs(approx_zero1)));
profile on;
[Q2, R2] = Gram_Schmidt_w_Orig(A, w);
profile off;
data2 = profile ("info");
tData2=data2.FunctionTable(1).TotalTime;
approx_zero2 = A - Q2 * R2;
max_value2 = max(max(abs(approx_zero2)));
sw=w.^0.5;
profile on;
[Q3, R3] = Gram_Schmidt_sqrt_w(A, w);
profile off;
data3 = profile ("info");
tData3=data3.FunctionTable(1).TotalTime;
approx_zero3 = A - Q3 * R3;
max_value3 = max(max(abs(approx_zero3)));
profile on;
[Q4, R4] = Gram_Schmidt_4(A, w);
profile off;
data4 = profile ("info");
tData4=data4.FunctionTable(1).TotalTime;
approx_zero4 = A - Q4 * R4;
max_value4 = max(max(abs(approx_zero4)));
profile on;
[Q5, R5] = Gram_Schmidt_5(A, w);
profile off;
data5 = profile ("info");
tData5=data5.FunctionTable(1).TotalTime;
approx_zero5 = A - Q5 * R5;
max_value5 = max(max(abs(approx_zero5)));
profile on;
[Q2a, R2a] = Gram_Schmidt2a(A, w);
profile off;
data2a = profile ("info");
tData2a=data2a.FunctionTable(1).TotalTime;
approx_zero2a = A - Q2a * R2a;
max_value2a = max(max(abs(approx_zero2a)));
profshow (data1, 6);
profshow (data2, 6);
profshow (data3, 6);
profshow (data4, 6);
profshow (data5, 6);
profshow (data2a, 6);
sprintf('Time for %s is %5.3f sec with %d samples and %d meas, max value is %g', ...
        data1.FunctionTable(1).FunctionName, ...
        data1.FunctionTable(1).TotalTime, ...
        nSamples, nMeas, max_value1)
sprintf('Time for %s is %5.3f sec with %d samples and %d meas, max value is %g', ...
        data2.FunctionTable(1).FunctionName, ...
        data2.FunctionTable(1).TotalTime, ...
        nSamples, nMeas, max_value2)
sprintf('Time for %s is %5.3f sec with %d samples and %d meas, max value is %g', ...
        data3.FunctionTable(1).FunctionName, ...
        data3.FunctionTable(1).TotalTime, ...
        nSamples, nMeas, max_value3)
sprintf('Time for %s is %5.3f sec with %d samples and %d meas, max value is %g', ...
        data4.FunctionTable(1).FunctionName, ...
        data4.FunctionTable(1).TotalTime, ...
        nSamples, nMeas, max_value4)
sprintf('Time for %s is %5.3f sec with %d samples and %d meas, max value is %g', ...
        data5.FunctionTable(1).FunctionName, ...
        data5.FunctionTable(1).TotalTime, ...
        nSamples, nMeas, max_value5)
sprintf('Time for %s is %5.3f sec with %d samples and %d meas, max value is %g', ...
        data2a.FunctionTable(1).FunctionName, ...
        data2a.FunctionTable(1).TotalTime, ...
        nSamples, nMeas, max_value2a)
end
On my old home laptop, in Octave, the results are
ans = Time for Gram_Schmidt_basic is 0.889 sec with 360000 samples and 15 meas, max value is 1.57009e-16
ans = Time for Gram_Schmidt_w_Orig is 0.952 sec with 360000 samples and 15 meas, max value is 6.36717e-16
ans = Time for Gram_Schmidt_sqrt_w is 0.390 sec with 360000 samples and 15 meas, max value is 6.47366e-16
ans = Time for Gram_Schmidt_4 is 0.452 sec with 360000 samples and 15 meas, max value is 6.47366e-16
ans = Time for Gram_Schmidt_5 is 2.636 sec with 360000 samples and 15 meas, max value is 6.47366e-16
ans = Time for Gram_Schmidt2a is 0.905 sec with 360000 samples and 15 meas, max value is 6.68443e-16
These results indicate that the fastest algorithm is the sqrt_w algorithm above at 0.39 sec, followed by the grouping of conj(Q) with w (above) at 0.452 sec, then version 2 of Amro's solution at 0.905 sec, then the original algorithm in the question at 0.952 sec, and finally a version 5, which interchanges rows/columns to see whether row-oriented storage helps (code not included), at 2.636 sec. These results indicate that the sqrt(w) split between A and Q is the fastest solution, but they are not consistent with JacobD's comment about sqrt(w) not being faster.
It is possible to vectorize this so only one loop is necessary. The important fundamental change from the original algorithm is that, if you swap the inner and outer loops, you can vectorize the projection of the reference vector onto all remaining vectors. Working off @Amro's solution, I found that an inner loop is actually faster than the matrix subtraction; I do not understand why this would be. Timing this against @Amro's solution, it is about 45% faster.
function [Q, R] = Gram_Schmidt5(A, w)
Q = A;
n_dimensions = size(A, 2);
R = zeros(n_dimensions);
R(1, 1) = norm(Q(:, 1));
Q(:, 1) = Q(:, 1) ./ R(1, 1);
for i = 2 : n_dimensions
Qw = (Q(:, i - 1) .* w)' * Q(:, (i - 1) : end);
R(i - 1, i : end) = Qw(2:end) / Qw(1);
%% Surprisingly this loop beats the matrix multiply
for j = i : n_dimensions
Q(:, j) = Q(:, j) - Q(:, i - 1) * R(i - 1, j);
end
%% This multiply is slower than above
% Q(:, i : end) = ...
% Q(:, i : end) - ...
% Q(:, i - 1) * R(i - 1, i : end);
R(i, i) = norm(Q(:,i));
Q(:, i) = Q(:, i) ./ R(i, i);
end
end
I'm working on a function with three nested for loops that is way too slow for its intended use. The bottleneck is clearly the looping part: almost 100% of the execution time is spent in the innermost loop.
The function takes a 2d matrix called rM as input and returns a 3d matrix called ec:
rows = size(rM, 1);
cols = size(rM, 2);
%preallocate.
ec = zeros(rows+1, cols, numRiskLevels);
ec(1, :, :) = 100;
for risk = minRisk:stepRisk:maxRisk;
for c = 1:cols,
for r = 2:rows+1,
ec(r, c, risk) = ec(r-1, c, risk) * (1 + risk * rM(r-1, c));
end
end
end
Any help on speeding up the for loops would be appreciated...
The problem is that the inner loop is the slowest part, and it is also near-impossible to vectorize, as every iteration directly depends on the previous one.
The outer two are possible:
clc;
rM = rand(50);
rows = size(rM, 1);
cols = size(rM, 2);
minRisk = 1;
stepRisk = 1;
maxRisk = 100;
numRiskLevels = maxRisk/stepRisk;
%preallocate.
ec = zeros(rows+1, cols, numRiskLevels);
ec(1, :, :) = 100;
riskArray = (minRisk:stepRisk:maxRisk)';
tic
for r = 2:rows+1
tmp = riskArray * rM(r-1, :);
tmp = permute(tmp, [3 2 1]);
ec(r, :, :) = ec(r-1, :, :) .* (1 + tmp);
end
toc
%preallocate.
ec2 = zeros(rows+1, cols, numRiskLevels);
ec2(1, :, :) = 100;
tic
for risk = minRisk:stepRisk:maxRisk;
for c = 1:cols
for r = 2:rows+1
ec2(r, c, risk) = ec2(r-1, c, risk) * (1 + risk * rM(r-1, c));
end
end
end
toc
all(all(all(ec == ec2)))
But to my surprise, the vectorized code is indeed slower. (Maybe someone can improve the code, so I figured I'd leave it here for you.)
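One further idea, just a sketch I have not benchmarked (and it assumes stepRisk == 1, so that the risk value can be used directly as the third index, as in the code above): since each step simply multiplies the previous value by (1 + risk * rM(r-1, c)), the recursion along r is a running product, so cumprod can remove that loop as well:
growth = 1 + bsxfun(@times, rM, permute(riskArray, [3 2 1]));   % rows x cols x numRiskLevels
ec3 = 100 * cat(1, ones(1, cols, numRiskLevels), cumprod(growth, 1));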
I have just tried to vectorize the outer loop, and actually noticed a significant speed increase. Of course it is hard to judge the speed of a script without knowing (the size of) the inputs but I would say this is a good starting point:
% Here you can change the input parameters
riskVec = 1:3:120;
rM = rand(50);
%preallocate and calculate non vectorized solution
ec2 = zeros(size(rM,2)+1, size(rM,1), max(riskVec));
ec2(1, :, :) = 100;
tic
for risk = riskVec
for c = 1:size(rM,2)
for r = 2:size(rM,1)+1
ec2(r, c, risk) = ec2(r-1, c, risk) * (1 + risk * rM(r-1, c));
end
end
end
t1=toc;
%preallocate and calculate vectorized solution
ec = zeros(size(rM,2)+1, size(rM,1), max(riskVec));
ec(1, :, :) = 100;
tic
for c = 1:size(rM,2)
for r = 2:size(rM,1)+1
ec(r, c, riskVec) = ec(r-1, c, riskVec) .* reshape(1 + riskVec * rM(r-1, c),[1 1 length(riskVec)]);
end
end
t2=toc;
% Check whether the vectorization is done correctly and show the timing results
if isequal(ec, ec2)
t1
t2
end
The given output is:
t1 =
0.1288
t2 =
0.0408
So for this riskVec and rM it is about 3 times as fast as the non-vectorized solution.
In the MATLAB programs I use I often have to average within a matrix (interpolation). The most straightforward way is to add the matrix and a shifted copy of it (avg). However, you can do the same operation using matrix multiplication (avg2). I noticed a considerable speed increase when using matrix multiplication on large matrices.
Could anyone explain why MATLAB is able to process this multiplication faster than adding the same matrix? Also, what are the possible downsides of using avg2() with respect to avg()?
The difference in runtime was a factor of ~6 for this case (n = 500).
function [] = speed()
%Speed test for averaging a matrix
n = 500;
A = rand(n,n);
tic
for i=1:100
avg(A);
end
toc
tic
for i=1:100
avg2(A);
end
toc
end
function B = avg(A,k)
if nargin<2, k = 1; end
if size(A,1)==1, A = A'; end
if k<2, B = (A(2:end,:)+A(1:end-1,:))/2; else B = avg(A,k-1); end
if size(A,2)==1, B = B'; end
end
function B = avg2(A,k)
if nargin<2, k = 1; end
if size(A,1)==1, A = A'; end
if k<2,
m = size(A,1);
e = ones(m,1);
S = spdiags(e*[1 1],-1:0,m,m-1)'/2;
B = S*A; else B = avg2(A,k-1); end
if size(A,2)==1, B = B'; end
end
I'm afraid I can't give you an answer about the inner workings of the functions you are using. However, as they seem overly complicated, I felt I should make you aware of an easier (and a bit faster) way of doing this averaging.
You can instead use conv2 with a kernel of [0.5;0.5]. I have extended your code below:
function [A, T1, T2, T3] = speed()
%Speed test for averaging a matrix
n = 900;
A = rand(n,n);
tic
for i=1:100
T1 = avg(A);
end
toc
tic
for i=1:100
T2 = avg2(A);
end
toc
tic
for i=1:100
T3 = conv2(A,[1;1]/2,'valid');
end
toc
if sum(sum(abs(T3-T2))) > 0
warning('Method 3 not equal the other methods')
end
end
function B = avg(A,k)
if nargin<2, k = 1; end
if size(A,1)==1, A = A'; end
if k<2, B = (A(2:end,:)+A(1:end-1,:))/2; else B = avg(A,k-1); end
if size(A,2)==1, B = B'; end
end
function B = avg2(A,k)
if nargin<2, k = 1; end
if size(A,1)==1, A = A'; end
if k<2,
m = size(A,1);
e = ones(m,1);
S = spdiags(e*[1 1],-1:0,m,m-1)'/2;
B = S*A; else B = avg2(A,k-1); end
if size(A,2)==1, B = B'; end
end
Results:
Elapsed time is 10.201399 seconds.
Elapsed time is 1.088003 seconds.
Elapsed time is 1.040471 seconds.
Apologies if you already knew this.