How to improve the Matlab code for faster processing time?

I developed some code for my master's project that simulates 90 years of daily data using 1000 different data sets. The code works fine and gives the output I want, but the processing time is very high: it took about 8 hours to finish the simulation. Here is the code that I used:
tic
%% Import the csv files with the selected columns
files=dir('*_scen_*.csv');
for i=1:length(files)
    LHR=importcsv(files(i).name);
    %% Definable variables
    % Define these values
    TAW=-216;    % total available water
    RAW=-129;    % readily available water
    KC=1.0;      % crop coefficient
    IRL=15;      % intense rain level
    RC=(80/100); % percentage of recharge
    RO=(1-RC);   % percentage of runoff
    % The very first row of Soil Moisture Deficit
    for j=1
        SMD(j,i)=(LHR.RAIN(j)-LHR.PET(j));
        if SMD(j,i)>0
            SMD(j,i)=0;
        elseif SMD(j,i)<RAW
            SMD(j,i)=(LHR.RAIN(j)-(LHR.PET(j)*((TAW-SMD(j-1))/(TAW-RAW))));
        end
    end
    % For the following SMD calculation
    for k=2:(length(LHR.RAIN))
        SMD(k,i)=SMD(k-1,i)+(LHR.RAIN(k)-LHR.PET(k));
        % The SMD conditions
        if SMD(k,i)>0
            SMD(k,i)=0;
        elseif SMD(k,i)<RAW
            SMD(k,i)=SMD(k-1,i)+(LHR.RAIN(k)-(LHR.PET(k)*((TAW-SMD(k-1,i))/(TAW-RAW))));
        end
    end
    % Convert negative SMD to positive
    SMD=abs(SMD);
    %% Evapotranspiration calculation
    for l=1:(length(SMD))
        if SMD(l,i)<abs(RAW)
            AET(l,i)=LHR.PET(l);
        elseif SMD(l,i)>abs(RAW)
            AET(l,i)=KC*LHR.PET(l)*((abs(TAW)-(SMD(l,i)))/(abs(TAW)-abs(RAW)));
        end
    end
    for m=2:(length(SMD))
        if SMD(m,i)<abs(RAW)
            AET(m,i)=LHR.PET(m);
        elseif SMD(m,i)>abs(RAW)
            AET(m,i)=KC*LHR.PET(m)*((abs(TAW)-(SMD(m-1,i)))/(abs(TAW)-abs(RAW)));
        end
    end
    %% HER calculation
    for n=1:length(SMD)
        if SMD(n,i)<(LHR.RAIN(n)-AET(n,i))
            HER(n,i)=(LHR.RAIN(n)-AET(n,i)-SMD(n,i));
        elseif SMD(n,i)>(LHR.RAIN(n)-AET(n,i))
            HER(n,i)=0;
        end
    end
    %% Calculation of recharge and runoff
    for o=1:(length(HER))
        if (HER(o,i)+(abs(TAW)-SMD(o,i)))<abs(TAW)
            RUNOFF(o,i)=0;
        elseif (HER(o,i)+(abs(TAW)-SMD(o,i)))>abs(TAW)
            if HER(o,i)>IRL
                RUNOFF(o,i)=RO*HER(o,i);
            elseif HER(o,i)<IRL
                RUNOFF(o,i)=0;
            end
        end
        if (HER(o,i)+(abs(TAW)-SMD(o,i)))<abs(TAW)
            RECHARGE(o,i)=0;
        elseif (HER(o,i)+(abs(TAW)-SMD(o,i)))>abs(TAW)
            if HER(o,i)>IRL
                RECHARGE(o,i)=RC*HER(o,i);
            elseif HER(o,i)<IRL
                RECHARGE(o,i)=HER(o,i);
            end
        end
    end
    %% rainfall
    for p=1:length(LHR.RAIN)
        RAINFALL(p,i)=LHR.RAIN(p);
        PET(p,i)=LHR.PET(p);
    end
end
clear i j k l m n o p
toc
Is there any scope for improvement in this code that might reduce the processing time? Sorry if the code looks unprofessional; I am still at the beginner stage with MATLAB programming.

If there is one thing Matlab is good at, it's matrix and vector computations. With your loop-heavy code, especially for big datasets, you are completely missing out on this advantage.
I didn't look into the details, but it seems like all your loops do element-wise computations and logical operations. You could replace these with matrix calculations.
For example, let's consider your first loop:
for k=2:(length(LHR.RAIN))
    SMD(k,i)=SMD(k-1,i)+(LHR.RAIN(k)-LHR.PET(k));
    % …
end
Could be replaced by something like (untested): SMD(2:end,i)=SMD(1:end-1,i)+LHR.RAIN(2:end)-LHR.PET(2:end). One caveat: your loop reads the SMD value it just updated on the previous iteration, so a recurrence like this cannot be fully vectorized by a single shifted assignment.
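If the update really were a pure running sum with no conditional resets, though, the whole recurrence collapses into one cumsum call. A minimal sketch, untested against the full model:
% Pure running sum of daily RAIN-PET, seeded with the first-row value;
% this ignores the >0 and <RAW resets in the original loop.
d = LHR.RAIN(2:end) - LHR.PET(2:end);
SMD(2:end,i) = SMD(1,i) + cumsum(d);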
And logical operations like this:
for k=2:(length(LHR.RAIN))
    % ...
    % The SMD conditions
    if SMD(k,i)>0
        SMD(k,i)=0;
    end
    % ...
end
Can be replaced by this:
SMD(SMD>0)=0;
Etc.
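For instance, your AET loop has no recurrence at all (each row depends only on the same row of SMD), so it vectorizes completely with a logical mask. An untested sketch; note the original loop leaves AET unset when SMD equals abs(RAW), while this version treats that case like the elseif branch:
% Vectorized AET: rows below the threshold copy PET, the rest are scaled
mask = SMD(:,i) < abs(RAW);
AET(mask,i)  = LHR.PET(mask);
AET(~mask,i) = KC*LHR.PET(~mask).*((abs(TAW)-SMD(~mask,i))/(abs(TAW)-abs(RAW)));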

One thing that speeds up scripts (a lot!) is to preallocate your matrices before entering a loop. For example, for your SMD, AET, RECHARGE, etc. matrices, you should use something like
SMD=NaN(nrow,ncol);
where nrow and ncol are the dimensions of the final matrix (if known, of course).
Then do your loop.
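Applied to the code in the question, that could look like the following sketch; nDays is a hypothetical stand-in for the row count, which is known once the first file is read:
files  = dir('*_scen_*.csv');
nFiles = length(files);
nDays  = 32872;                 % hypothetical: ~90 years of daily rows
SMD      = NaN(nDays,nFiles);   % preallocate every output once,
AET      = NaN(nDays,nFiles);   % instead of growing them inside the loop
HER      = NaN(nDays,nFiles);
RUNOFF   = NaN(nDays,nFiles);
RECHARGE = NaN(nDays,nFiles);
RAINFALL = NaN(nDays,nFiles);
PET      = NaN(nDays,nFiles);
for i = 1:nFiles
    % ... same per-file processing as before, now writing into
    % preallocated columns ...
end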

Related

Speed up a for loop with matrix multiplication?

One part of my program contains this piece of code:
size2=2500;
gran=3;
A=ones(size2,size2);
for k=1:gran:(size2-gran)
    for j=1:gran:(size2-gran)
        X=rand*2*pi-pi;
        for h=1:gran
            for l=1:gran
                A(k+l-1,j+h-1) = A(k+l-1,j+h-1)*exp(+1i*X); % phase in the gran x gran square
            end
        end
    end
end
My PC runs this code in 0.60 seconds, but I would like to know if it is possible to speed up this process.
A faster way would be to write this as a matrix multiplication, but in order to build X I think I would have to create a for loop.
Is there any way to improve the speed of this code?
Your for loops can be replaced by a matrix of random phases with a gran x gran repeat. You can create the random variable as a matrix the size of A and expand each phase into a gran x gran block with kron:
X = rand(ceil(size2/gran))*2*pi-pi;
X = kron(X,ones(gran));
X = X(1:size2,1:size2); % trim in case size2 is not a multiple of gran
then
A = A.*exp(1i*X);
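As a tiny sanity check of the block expansion, each element of X becomes a constant block (shown here with 2x2 blocks for brevity):
X = [1 2; 3 4];
kron(X, ones(2))
% ans =
%      1     1     2     2
%      1     1     2     2
%      3     3     4     4
%      3     3     4     4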

Matrix equation not properly updating in time

As a simple example to illustrate my point, I am trying to solve the equation f(t+1) = f(t) + f(t)*Tr(f^2) starting at t=0, where Tr is the trace of a matrix (the sum of its diagonal elements). Below I provide a basic code. My code compiles with no errors but does not update the solution the way I want. My expected result, which I calculated by hand, is also below (it is very easy to check by hand via matrix multiplication).
In my sample code below I have two variables that store the solution: g holds f(t=0), which I initialize, and then I store f(t+1) as f.
complex,dimension(3,3) :: f,g
integer :: k,l,m,p,q
Assume g=f(t=0) is defined as below
do l=1,3 ! matrix index loops
    do k=1,3 ! matrix index loops
        if (k == l) then
            g(k,l) = cmplx(0.2,0)
        else if (k /= l) then
            g(k,l) = cmplx(0,0)
        end if
    end do
end do
I have checked this result is indeed what I want it to be, so I know f at t=0 is defined properly.
Now I try to use this matrix at t=0 and find the matrix for all time, governed by the equation f(t+1) = f(t)+f(t)*Tr(f^2), but this is where I am not correctly implementing the code I want.
do m=1,3 ! loop for 3 time iterations
    do p=1,3 ! loops for dummy indices for matrix trace
        do q=1,3
            g(1,1) = g(1,1) + g(1,1)*g(p,q)*g(p,q) ! compute trace here
            f(1,1) = g(1,1)
            !f(2,2) = g(2,2) + g(2,2)*g(p,q)*g(p,q)
            !f(3,3) = g(3,3) + g(3,3)*g(p,q)*g(p,q)
            !assume all other matrix elements are zero except diagonal
        end do
    end do
end do
Printing this result is done by
print*, "calculated f where m=", m
do k=1,3
print*, (f(k,l), l=1,3)
end do
This is when I realize my code is not being implemented correctly.
When I print f(k,l), I expect for t=1 a result of 0.224 times the identity matrix, and I do get this. However, for t=2 the output is not right. So my code updates correctly for the first time iteration, but not after that.
I am looking for a solution as to how to properly implement the equation I want to obtain the result I am expecting.
I'll answer a couple things you seem to be having trouble with. First, the trace. The trace of a 3x3 matrix is A(1,1)+A(2,2)+A(3,3). The first and second indexes are the same, so we use one loop variable. To compute the trace of an NxN matrix A:
trace = 0.
do i=1,N
    trace = trace + A(i,i)
enddo
I think you're trying to loop over p and q to compute your trace, which is incorrect: that sum also picks up off-diagonal terms like A(2,3).
Second, to compute the update, I recommend you compute the updated f into fNew, and then your code would look something like:
do m=1,3 ! time
    ! -- Compute f^2 (with loops not shown)
    f2 = ...
    ! -- Compute trace of f2 (with loop not shown)
    trace = ...
    ! -- Compute new f
    do j=1,3
        do i=1,3
            fNew(i,j) = f(i,j) + trace*f(i,j)
        enddo
    enddo
    ! -- Now update f, perhaps recording fNew-f for some residual
    ! -- The LHS and RHS are both arrays of dimension (3,3),
    ! -- so fortran will automatically perform an array operation
    f = fNew
enddo
This method has two advantages. First, your code actually looks like the math you're trying to do, which makes it easy to follow; this matters for realistic problems that are not so simple. Second, if fNew(i,j) depended on, say, f(i+1,j), you would not be overwriting values at the current time level while they are still needed.

Is it possible to remove the following !$OMP CRITICAL regions

I have a Fortran code that shows some very unsatisfactory performance due to some !$OMP CRITICAL regions. This question is really about how the critical regions can be avoided: can they be removed altogether? In those critical regions I am updating counters and reading/writing values to an array.
i=0
j=MAX/2
total = 0
!$OMP PARALLEL PRIVATE(x,N)
MAIN_LOOP: do
    !$OMP CRITICAL
    total = total + 1
    x = array(i)
    i = i + 1
    if (i > MAX) i=1 ! if the counter is past the end, start from the beginning
    !$OMP END CRITICAL
    if (total > MAX_TOTAL) exit
    ! do some calculations here and get the value of the integer (N)
    ! store (N) copies of x back in the original array with some offset
    !$OMP CRITICAL
    do p=1,N
        array(j)=x
        j=j+1
        if (j>MAX) j=1
    end do
    !$OMP END CRITICAL
end do MAIN_LOOP
!$OMP END PARALLEL
One simple thing that came to my mind is to eliminate the counter on total by using explicit dynamic loop scheduling.
!$OMP PARALLEL DO SCHEDULE(DYNAMIC)
MAIN_LOOP: do total = 1,MAX_TOTAL
    ! do the calculation here
end do MAIN_LOOP
!$OMP END PARALLEL DO
I was also thinking of allocating a different portion of the array to each thread and using the thread ID for the offsetting. This time each thread would have its own counter, stored in an array count_i(ID), with something of this sort:
! this time the size of the array is NUM_OMP_THREADS*MAX
ID=omp_get_thread_num()
x=array(ID + sum(count_i)) ! get the offset by summing up all values
count_i(ID)=count_i(ID)+1
if (count_i(ID) > MAX) count_i(ID) = 1
This, however, will mess up the order and will not do the same as the original method. Moreover, some empty space will be present, since the different threads will not be able to fill the entire range 1:MAX.
I would appreciate your help and ideas.
Your use of critical sections is a bit strange here. The motivation for using critical sections must be to avoid having an entry in the array clobbered before it can be read. Your code does accomplish this, but only accidentally, by acting as barriers. Try replacing the critical stuff with OMP barriers, and you should still get the right result and the same horrible speed.
Since you always write to the array half its length away from where you read it, you can avoid critical sections by dividing the operation into one step that reads from the first half and writes to the second half, and vice versa. (Edit: after the question was edited, this is no longer true, so the approach below won't work.)
nhalf = size(array)/2
!$omp parallel do
do i = 1, nhalf
    array(i+nhalf) = f(array(i))
end do
!$omp parallel do
do i = 1, nhalf
    array(i) = f(array(i+nhalf))
end do
Here f(x) represents whatever calculation you want to do to the array values. It doesn't have to be a function if you don't want it to. If it isn't clear, this code first loops through the entries in the first half of the array in parallel. The first task may go through i=1,1+nproc,1+2*nproc, etc. while the second task goes through i=2,2+nproc,2+2*nproc, and so on. This can be done in parallel without any locking because there is no overlap between the part of the array that is read from and written to in this loop. The second loop only starts once every task has finished the first loop, so there is no clobbering between the loops.
Unlike in your code, here there is one i per thread, so no locking is needed to update it (the loop variable is automatically private).
This assumes that you only want to make one pass through the array. Otherwise you can just loop over these two loops:
do iouter = 1, (max_total+size(array)-1)/size(array)
    nleft = max_total-(iouter-1)*size(array)
    nhalf = size(array)/2
    !$omp parallel do
    do i = 1, min(nhalf,nleft)
        array(i+nhalf) = f(array(i))
    end do
    !$omp parallel do
    do i = 1, min(nhalf,nleft-nhalf)
        array(i) = f(array(i+nhalf))
    end do
end do
Edit: Your new example is confusing. I'm not sure what it's supposed to do. Depending on the value of N, the array values may end up being clobbered before they can be used. Is this intentional? It's hard to answer your question when it's not clear what you're trying to do. :/
I thought about this for a while and my feeling is that there is no good answer to this specific issue.
Indeed, your code seems, at first glance, like a good approach to the problem such as stated (although I personally find the problem itself a bit strange). However, there are problems in your implementation:
What happens if, for some reason, one of the threads gets delayed in processing its iteration? Just imagine that the thread owning the very first index takes a while to process it (delayed by some third-party process taking CPU time on the core where the thread was pinned/scheduled, for example) and is the last to finish. Then it will write its values back to array in a completely different order than the sequential algorithm would have done. Is that something you can accept in your algorithm?
Even without this sort of "extreme" delay, can you accept that the order in which the i indexes were distributed among threads differs from the order in which the j indexes are subsequently updated? If the thread owning i+1 finishes right before the one owning i, it will use index j instead of index j+n as it should have.
Again, I'm not sure I understand all the subtleties of your algorithm and how resilient it is to mis-ordering of iterations, but if ordering matters, then the approach is wrong. In that case, I guess a proper parallelisation could be something like this (put in a subroutine to make it compilable):
subroutine loop(array, maxi, max_iteration)
    implicit none
    integer, intent(in) :: maxi, max_iteration
    real, intent(inout) :: array(maxi)
    real :: x
    integer :: iteration, i, j, n, p

    i = 0
    j = maxi/2
    !$omp parallel do ordered private(x, n, p) schedule(static,1)
    do iteration = 1,max_iteration
        !$omp ordered
        x = array(wrap_around(i, maxi))
        !$omp end ordered
        ! do some calculations here and get the value of the integer (n)
        !$omp ordered
        do p = 1,n
            array(wrap_around(j, maxi)) = x
        end do
        !$omp end ordered
    end do
    !$omp end parallel do

contains

    integer function wrap_around(i, maxi)
        implicit none
        integer, intent(in) :: maxi
        integer, intent(inout) :: i
        i = i+1
        if (i > maxi) i = 1
        wrap_around = i
    end function wrap_around
end subroutine loop
I hope this works. However, unless the central part of the loop where n is retrieved does some serious computation, this won't be any faster than the sequential version.

Optimize/ Vectorize Mahalanobis distance calculations in MATLAB

I have the following piece of Matlab code, which calculates Mahalanobis distances between a vector and a matrix over many iterations. I am trying to find a faster way to do this through vectorization, but without success.
S.data=0+(20-0).*rand(15000,3);
S.a=0+(20-0).*rand(2500,3);
S.resultat=ones(length(S.data),length(S.a))*nan;
S.b=ones(length(S.a),3,length(S.a))*nan;
for i=1:length(S.data)
    for j=1:length(S.a)
        S.a2=S.a;
        S.a2(j,:)=S.data(i,:);
        S.b(:,:,j)=S.a2;
        if j==length(S.a)
            for k=1:length(S.a)
                S.resultat(i,k)=mahal(S.a(k,:),S.b(:,:,k));
            end
        end
    end
end
I have now modified the code to avoid one of the loops, but it is still very slow. If someone has an idea, I would be very grateful!
S.data=0+(20-0).*rand(15000,3);
S.a=0+(20-0).*rand(2500,3);
S.resultat=ones(length(S.data),length(S.a))*nan;
for i=1:length(S.data)
    for j=1:length(S.a)
        S.a2=S.a;
        S.a2(j,:)=S.data(i,:);
        S.resultat(i,j)=mahal(S.a(j,:),S.a2);
    end
end
Introduction and solution code
You can replace the innermost loop that uses mahal with something that is a bit vectorized: it uses some pre-calculated values (with the help of bsxfun) inside a loop-shortened, hacked version of mahal.
Basically you have a 2D array, let's call it A for easy reference, and a 3D array, let's call it B. Let the output be stored in a variable out. The innermost code snippet can then be extracted and rewritten with these assumed variable names.
Original loopy code
for k=1:size(A,1)
    out(k)=mahal(A(k,:),B(:,:,k));
end
So, what I did was hack into mahal.m and look for portions that could be vectorized when the inputs are 2D and 3D. Now, mahal uses qr internally, which cannot be vectorized across slices. Thus, we end up with a hacked code.
Hacked code
%// Pre-calculate values that would otherwise be recomputed inside the loop
meanB = mean(B,1);                            %// mean of B along dim-1
B_meanB = bsxfun(@minus,B,meanB);             %// B minus mean values of B
A_B_meanB = A' - reshape(meanB,size(B,2),[]); %// each column: A(k,:)' minus mean of B(:,:,k)
%// QR calculations in a for-loop until the output is obtained
for k = 1:size(A,1)
    [~,R] = qr(B_meanB(:,:,k),0);
    out2(k) = sum((R'\A_B_meanB(:,k)).^2)*(size(A,1)-1);
end
Now, to extend this hacked solution to the problem code, one can introduce a few more tweaks to pre-calculate more of the values used in those nested loops.
Final solution code
A = S.a;            %// Get data from S
[rx,cx] = size(A);  %// Get size parameters
Atr = A';           %// Pre-calculate transpose of A
%// Pre-calculate replicated B and the indices to be modified at each iteration
B_rep = repmat(S.a,1,1,rx);
B_idx = bsxfun(@plus,[(0:cx-1)*rx + 1]',[0:rx-1]*(rx*cx+1));
out = zeros(size(S.data,1),rx); %// initialize output array
for i=1:length(S.data)
    B = B_rep;
    B(B_idx) = repmat(S.data(i,:)',1,rx);
    meanB = mean(B,1);                      %// mean of B along dim-1
    B_meanB = bsxfun(@minus,B,meanB);       %// B minus mean values of B
    A_B_meanB = Atr - reshape(meanB,cx,[]); %// A' minus meanB
    for jj = 1:rx
        [~,R] = qr(B_meanB(:,:,jj),0);
        out(i,jj) = sum((R'\A_B_meanB(:,jj)).^2)*(rx-1);
    end
end
S.resultat = out;
S.resultat = out;
Benchmarking
Here's the benchmarking code to compare the proposed solution against the code listed in the problem -
%// Random inputs
S.data=0+(20-0).*rand(1500,3); %// sizes reduced 10x for a quicker runtime test
S.a=0+(20-0).*rand(250,3);
S.resultat=ones(length(S.data),length(S.a))*nan;
disp('----------------------------- With original code')
tic
S.b=ones(length(S.a),3,length(S.a))*nan;
for i=1:length(S.data)
    for j=1:length(S.a)
        S.a2=S.a;
        S.a2(j,:)=S.data(i,:);
        S.b(:,:,j)=S.a2;
        if j==length(S.a)
            for k=1:length(S.a)
                S.resultat(i,k)=mahal(S.a(k,:),S.b(:,:,k));
            end
        end
    end
end
toc, clear i j k
S.resultat=ones(length(S.data),length(S.a))*nan;
disp('----------------------------- With proposed solution code')
tic
[ ... Proposed solution code ...]
toc
Runtimes -
----------------------------- With original code
Elapsed time is 17.734394 seconds.
----------------------------- With proposed solution code
Elapsed time is 6.602860 seconds.
Thus, we might get around 2.7x speedup with the proposed approach and some tweaks!

Efficient replacement for ppval

I have a loop in which I use ppval to evaluate a set of values from a piecewise polynomial spline. The interpolation is easily the most time-consuming part of the loop and I am looking for a way to improve the function's efficiency.
More specifically, I'm using a finite difference scheme to calculate transient temperature distributions in friction welds. To do this I need to recalculate the material properties (as a function of temperature and position) at each time step. The rate limiting factor is the interpolation of these values. I could use an alternate finite difference scheme (less restrictive in the time domain) but would rather stick with what I have if at all possible.
I've included a MWE below:
x=0:.1:10;
y=sin(x);
pp=spline(x,y);
tic
for n=1:10000
    x_int=10*rand(1000,1);
    y_int=ppval(pp,x_int);
end
toc
plot(x,y,x_int,y_int,'*') % plot for sanity of data
Elapsed time is 1.265442 seconds.
Edit - I should probably mention that I would be more than happy with a simple linear interpolation between values, but the interp1 function is slower than ppval:
x=0:.1:10;
y=sin(x);
tic
for n=1:10000
    x_int=10*rand(1000,1);
    y_int=interp1(x,y,x_int,'linear');
end
toc
plot(x,y,x_int,y_int,'*') % plot for sanity of data
Elapsed time is 1.957256 seconds.
This is slow because you're running into the single most annoying limitation of MATLAB's JIT. It's the cause of many many many oh so many questions in the MATLAB tag here on SO:
MATLAB's JIT accelerator cannot accelerate loops that call non-builtin functions.
Both ppval and interp1 are not built in (check with type ppval or edit interp1). Their implementation is not particularly slow, they just aren't fast when placed in a loop.
Now I have the impression it's getting better in more recent versions of MATLAB, but there are still quite massive differences between "inlined" and "non-inlined" loops. Why their JIT doesn't automate this task by simply recursing into non-builtins, I really have no idea.
Anyway, to fix this, you should copy-paste the essence of what happens in ppval into the loop body:
% Example data
x = 0:.1:10;
y = sin(x);
pp = spline(x,y);

% Your original version
tic
for n = 1:10000
    x_int = 10*rand(1000,1);
    y_int = ppval(pp, x_int);
end
toc

% "Inlined" version
tic
br = pp.breaks.';
cf = pp.coefs;
for n = 1:10000
    x_int = 10*rand(1000,1);
    [~, inds] = histc(x_int, [-inf; br(2:end-1); +inf]);
    x_shf = x_int - br(inds);
    zero  = ones(size(x_shf));
    one   = x_shf;
    two   = one .* x_shf;
    three = two .* x_shf;
    y_int = sum([three two one zero] .* cf(inds,:), 2);
end
toc
toc
Results on my crappy machine:
Elapsed time is 2.764317 seconds. % ppval
Elapsed time is 1.695324 seconds. % "inlined" version
The difference is actually less than I expected, but I think that's mostly due to the sum() call. In my own use of ppval I usually only need to evaluate a single site per iteration, which you can do without histc (with simple vectorized code), and with a matrix/vector multiplication x*y (BLAS) instead of sum(x.*y) (fast, but not BLAS-fast).
Oh well, running in about 60% of the original time is not bad :)
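For what it's worth, here is a minimal sketch of that single-site case, reusing br and cf from the inlined version above; the interval search and the clamp are my own assumptions, not code taken from ppval:
xi  = 3.7;                               % one query site (hypothetical)
ind = find(br <= xi, 1, 'last');         % interval containing xi
ind = min(ind, numel(br)-1);             % clamp xi == br(end) to the last piece
dx  = xi - br(ind);                      % local coordinate within that piece
yi  = [dx^3, dx^2, dx, 1] * cf(ind,:).'; % one BLAS dot product, no sum()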
It is a bit surprising that interp1 is slower than ppval, but a quick look at its source code suggests why: it has to check for many special cases, and it has to loop over all the points since it cannot be sure that the step size is constant.
I didn't check the timing, but I guess you can speed up the linear interpolation a lot if you can guarantee that the steps in x of your table are constant, and that the values to be interpolated are strictly within the given range, so that you do not have to do any checking. In that case, linear interpolation can be converted to a simple lookup problem, like so:
% data to be interpolated, on a grid with constant step
x = 0:0.5:10;
y = sin(x);
x_int = 0:0.1:9.9;
% make sure it is interpolation, not extrapolation
assert(all(x(1) <= x_int & x_int < x(end)));
% compute the mapping; this can be precomputed for a constant grid
slope = (length(x) - 1) / (x(end) - x(1));
offset = 1 - slope*x(1);
% map x_int to the interval 1..length(x)
xmapped = offset + slope * x_int;
ind = floor(xmapped);
frac = xmapped - ind;
% interpolate by taking a weighted sum of neighbouring points
y_int = y(ind) .* (1 - frac) + y(ind+1) .* frac;
% make a plot to check correctness
plot(x, y, 'o-', x_int, y_int, '.')
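To compare against the question's timing loop, the lookup can be dropped into the same harness. A hypothetical benchmark; x_int is drawn as a row vector so the orientations match the code above:
tic
for n = 1:10000
    x_int = 10*rand(1,1000);   % rand < 1, so x_int stays below x(end) = 10
    xmapped = offset + slope * x_int;
    ind = floor(xmapped);
    frac = xmapped - ind;
    y_int = y(ind) .* (1 - frac) + y(ind+1) .* frac;
end
toc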
