Suppose we have two methods or algorithms, with the code for each as follows:
Method 1:
for i=0 to 100
print i
end for
Method 2:
int x=0
w = x/2
print w
What is the best approach to compare the computation time of method 1 and method 2?
I tried to use Matlab code:
t= cputime;
Method 1
e = cputime-t
But I'm not sure whether this is the correct way to compare the performance of these methods.
Use the timeit function. It ships with MATLAB from R2013b onward, and it is available on the File Exchange if you are using an older version. It correctly "warms up" the function before timing it, runs your function in a loop internally, and reports the median time over many runs.
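For example (a minimal sketch; method1 and method2 here are hypothetical zero-argument wrappers around the two pieces of code being compared):
f1 = @() method1();   % hypothetical wrapper around method 1
f2 = @() method2();   % hypothetical wrapper around method 2
t1 = timeit(f1);      % median time per call, in seconds
t2 = timeit(f2);
fprintf('method 1: %.6f s, method 2: %.6f s\n', t1, t2);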
Otherwise the conventional method is to use tic and toc around a loop containing your function.
Please read this for complete info.
n = 1e4; %Higher number results in more accurate comparison
tic;
for t=1:n
{method1}
end
toc;
tic;
for t=1:n
{method2}
end
toc;
This prints the elapsed time for each method.
MATLAB has a timeit function which is helpful for comparing the performance of one implementation with another. I couldn't find something similar in Octave, so I wrote this benchmark function, which runs a function f N times and then returns the total time taken. Is this a reasonable way to compare different implementations, or am I missing something critical like "warmup"?
function elapsed_time_in_seconds = benchmark(f, N)
% benchmark runs the function 'f' N times and returns the elapsed time in seconds.
timeid = tic;
for i=1:N
output = f();
end
elapsed_time_in_seconds = toc(timeid);
end
MATLAB's timeit does the following (you can read the whole function, it's an M-file):
Obtain a rough estimate t_rough of the time for calling the function f.
Use the estimate to determine N such that N*t_rough is about 0.001 s.
Determine M such that M*N*t_rough is no more than 15 s, but M must be between 3 and 11.
Loop M times:
Call f() N times and record the total time.
Determine the median of the M times, divided by N.
The purpose of the two loops, M and N, is as follows. Calling f() N times ensures that the time measured by tic/toc is large enough to be reliable; this loop avoids attempting to time something so short that it cannot be timed. Repeating the measurement M times and keeping the median makes the measurement robust against delays caused by other things happening on your system, which can artificially inflate the recorded time.
The function subtracts the overhead of calling a function through its handle (determined experimentally by timing the call of an empty function), as well as the tic/toc call time (also determined experimentally). It does not subtract the cost of the inner loop, presumably because in MATLAB it is optimized by the JIT and its cost is negligible.
There are some further refinements. The function that determines t_rough first warms up tic and toc by calling each one twice, then uses a while loop to ensure it calls f() for at least 0.001 s. In this loop, if the first iteration takes at least 3 s, that time is simply taken as the rough estimate. If the first iteration takes less time, the first time count is discarded (warmup), and the median of all subsequent calls is used as the rough estimate.
There's also a lot of effort put into calling the function f() with the right number of output arguments.
The code has a lot of comments explaining the reason behind all these steps, it is worth reading.
As a minimum, I would augment your benchmark function as follows:
function elapsed_time_in_seconds = benchmark(f, N, M)
% benchmark runs the function 'f' N*M times and returns the elapsed time in seconds.
tic; [~] = toc; tic; [~] = toc; % warmup
output = f(); % warmup
t = zeros(M, 1);
for k=1:M
timeid = tic;
for i=1:N
output = f();
end
t(k) = toc(timeid) / N;
end
elapsed_time_in_seconds = median(t);
end
If you use the function to directly compare various alternatives, keeping N and M constant, then the overheads of tic, toc, function calls and loops are irrelevant.
This function does assume that f has one output argument, which is not necessarily the case. You could just call f() instead of output = f(), which will work for functions with or without output arguments. But if the function needs to have a certain number of outputs to work correctly, or to trigger computations that you want to time, then you'd have to adjust the function to call it with the right number of output arguments.
You could come up with some heuristic to determine M from N, which would make it a little easier to use this function.
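As a hypothetical usage example (the two implementations below are placeholders; keep N and M identical for both so the overheads cancel out):
% Hypothetical usage: compare two placeholder implementations with identical N and M.
f_a = @() sort(rand(1e4, 1));              % implementation A (placeholder)
f_b = @() sort(rand(1e4, 1), 'descend');   % implementation B (placeholder)
N = 100;  M = 7;
t_a = benchmark(f_a, N, M);
t_b = benchmark(f_b, N, M);
fprintf('A: %g s per call, B: %g s per call\n', t_a, t_b);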
I have the following piece of Matlab code, which calculates Mahalanobis distances between a vector and a matrix over many iterations. I am trying to find a faster, vectorized way to do this, but without success.
S.data=0+(20-0).*rand(15000,3);
S.a=0+(20-0).*rand(2500,3);
S.resultat=ones(length(S.data),length(S.a))*nan;
S.b=ones(length(S.a),3,length(S.a))*nan;
for i=1:length(S.data)
for j=1:length(S.a)
S.a2=S.a;
S.a2(j,:)=S.data(i,:);
S.b(:,:,j)=S.a2;
if j==length(S.a)
for k=1:length(S.a);
S.resultat(i,k)=mahal(S.a(k,:),S.b(:,:,k));
end
end
end
end
I have now modified the code to avoid one of the loops, but it is still very slow. If someone has an idea, I would be very grateful!
S.data=0+(20-0).*rand(15000,3);
S.a=0+(20-0).*rand(2500,3);
S.resultat=ones(length(S.data),length(S.a))*nan;
for i=1:length(S.data)
for j=1:length(S.a)
S.a2=S.a;
S.a2(j,:)=S.data(i,:);
S.resultat(i,j)=mahal(S.a(j,:),S.a2);
end
end
Introduction and solution code
You can replace the innermost loop that uses mahal with something that is partially vectorized: it uses some pre-calculated values (obtained with the help of bsxfun) inside a loop-shortened, hacked version of mahal.
Basically you have a 2D array, let's call it A for easy reference, and a 3D array, let's call it B, with the output stored in a variable out. Based on these variable names, the innermost code snippet can be extracted as follows.
Original loopy code
for k=1:size(A,1)
out(k)=mahal(A(k,:),B(:,:,k));
end
So, what I did was to look inside mahal.m for portions that could be vectorized when the inputs are 2D and 3D. Now, mahal uses qr internally, which cannot be vectorized across slices, so we end up with a hacked, partially vectorized version.
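For reference, here is a sketch of what mahal(Y,X) essentially computes, based on the standard squared Mahalanobis distance and the QR factorization that mahal.m uses internally (X and Y below are illustrative placeholder data, not from the question):
X = rand(100, 3);                       % illustrative reference sample
Y = rand(5, 3);                         % illustrative query points
m = mean(X, 1);                         % sample mean of the reference data
C = bsxfun(@minus, X, m);               % centered reference sample
[~, R] = qr(C, 0);                      % R'*R = C'*C = (n-1)*cov(X)
ri = R' \ bsxfun(@minus, Y, m)';        % "whitened" deviations of the query rows
d = sum(ri.^2, 1)' * (size(X,1) - 1);   % squared Mahalanobis distances
% d should match mahal(Y, X) up to rounding error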
Hacked code
% Pre-calculate certain values to avoid recomputing them inside the loop
meanB = mean(B,1);                              % mean of B along dim-1
B_meanB = bsxfun(@minus,B,meanB);               % B minus its mean values
A_B_meanB = A' - reshape(meanB,size(B,2),[]);   % A' minus the per-slice means of B
% QR calculations in a for-loop until the output is obtained
for k = 1:size(A,1)
[~,R] = qr(B_meanB(:,:,k),0);
out2(k) = sum((R'\A_B_meanB(:,k)).^2)*(size(A,1)-1);
end
Now, to extend this hacked solution to the code in the problem, one can introduce a few more tweaks to pre-calculate more of the values used inside those nested loops.
Final solution code
A = S.a;                          % Get data from S
[rx,cx] = size(A);                % Get size parameters
Atr = A';                         % Pre-calculate transpose of A
% Pre-calculate the replicated B and the linear indices modified at each iteration
B_rep = repmat(S.a,1,1,rx);
B_idx = bsxfun(@plus,[(0:cx-1)*rx + 1]',[0:rx-1]*(rx*cx+1));
out = zeros(size(S.data,1),rx);   % initialize output array
for i=1:length(S.data)
B = B_rep;
B(B_idx) = repmat(S.data(i,:)',1,rx);  % put row i of S.data into row j of slice j
meanB = mean(B,1);                     % mean of B along dim-1
B_meanB = bsxfun(@minus,B,meanB);      % B minus its mean values
A_B_meanB = Atr - reshape(meanB,3,[]); % A' minus the per-slice means of B
for jj = 1:rx
[~,R] = qr(B_meanB(:,:,jj),0);
out(i,jj) = sum((R'\A_B_meanB(:,jj)).^2)*(rx-1);
end
end
S.resultat = out;
Benchmarking
Here's the benchmarking code to compare the proposed solution against the code listed in the problem -
%// Random inputs
S.data=0+(20-0).*rand(1500,3); %(size 10x reduced for a quicker runtime test)
S.a=0+(20-0).*rand(250,3);
S.resultat=ones(length(S.data),length(S.a))*nan;
disp('----------------------------- With original code')
tic
S.b=ones(length(S.a),3,length(S.a))*nan;
for i=1:length(S.data)
for j=1:length(S.a)
S.a2=S.a;
S.a2(j,:)=S.data(i,:);
S.b(:,:,j)=S.a2;
if j==length(S.a)
for k=1:length(S.a);
S.resultat(i,k)=mahal(S.a(k,:),S.b(:,:,k));
end
end
end
end
toc, clear i j k
S.resultat=ones(length(S.data),length(S.a))*nan;
disp('----------------------------- With proposed solution code')
tic
[ ... Proposed solution code ...]
toc
Runtimes -
----------------------------- With original code
Elapsed time is 17.734394 seconds.
----------------------------- With proposed solution code
Elapsed time is 6.602860 seconds.
Thus, we might get around 2.7x speedup with the proposed approach and some tweaks!
I have a loop in which I use ppval to evaluate a set of values from a piecewise polynomial spline. The interpolation is easily the most time-consuming part of the loop, and I am looking for a way to improve the function's efficiency.
More specifically, I'm using a finite difference scheme to calculate transient temperature distributions in friction welds. To do this I need to recalculate the material properties (as a function of temperature and position) at each time step. The rate limiting factor is the interpolation of these values. I could use an alternate finite difference scheme (less restrictive in the time domain) but would rather stick with what I have if at all possible.
I've included a MWE below:
x=0:.1:10;
y=sin(x);
pp=spline(x,y);
tic
for n=1:10000
x_int=10*rand(1000,1);
y_int=ppval(pp,x_int);
end
toc
plot(x,y,x_int,y_int,'*') % plot for sanity of data
Elapsed time is 1.265442 seconds.
Edit - I should probably mention that I would be more than happy with a simple linear interpolation between values, but the interp1 function is slower than ppval:
x=0:.1:10;
y=sin(x);
tic
for n=1:10000
x_int=10*rand(1000,1);
y_int=interp1(x,y,x_int,'linear');
end
toc
plot(x,y,x_int,y_int,'*') % plot for sanity of data
Elapsed time is 1.957256 seconds.
This is slow, because you're running into the single most annoying limitation of JIT. It's the cause of many many many oh so many questions in the MATLAB tag here on SO:
MATLAB's JIT accelerator cannot accelerate loops that call non-builtin functions.
Both ppval and interp1 are not built in (check with type ppval or edit interp1). Their implementation is not particularly slow; they just aren't fast when placed in a loop.
Now I have the impression it's getting better in more recent versions of MATLAB, but there are still quite massive differences between "inlined" and "non-inlined" loops. Why their JIT doesn't automate this task by simply recursing into non-builtins, I really have no idea.
Anyway, to fix this, you should copy-paste the essence of what happens in ppval into the loop body:
% Example data
x = 0:.1:10;
y = sin(x);
pp = spline(x,y);
% Your original version
tic
for n = 1:10000
x_int = 10*rand(1000,1);
y_int = ppval(pp, x_int);
end
toc
% "inlined" version
tic
br = pp.breaks.';   % spline break points (column vector)
cf = pp.coefs;      % per-interval polynomial coefficients, highest power first
for n = 1:10000
x_int = 10*rand(1000,1);
[~, inds] = histc(x_int, [-inf; br(2:end-1); +inf]);  % interval index of each point
x_shf = x_int - br(inds);                             % offset within that interval
zero = ones(size(x_shf));
one = x_shf;
two = one .* x_shf;
three = two .* x_shf;
y_int = sum( [three two one zero] .* cf(inds,:), 2);  % evaluate the cubic per point
end
end
toc
Results on my crappy machine:
Elapsed time is 2.764317 seconds. % ppval
Elapsed time is 1.695324 seconds. % "inlined" version
The difference is actually less than I expected, but I think that's mostly due to the sum() -- in cases like this I usually only need to evaluate a single site per iteration, which you can do without histc (with simple vectorized code) and with a matrix/vector multiplication x*y (BLAS) instead of sum(x.*y) (fast, but not BLAS-fast).
Oh well, running in ~60% of the original time is not bad :)
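As a rough illustration of that single-site idea (a hedged sketch, not from the original answer; it reuses br and cf from the inlined version above and assumes the query point lies within the breaks):
xq  = 3.74;                                % a single query point (illustrative)
ind = find(br <= xq, 1, 'last');           % locate the interval containing xq
ind = min(ind, numel(br) - 1);             % clamp to the last polynomial piece
dx  = xq - br(ind);                        % offset within that interval
yq  = [dx^3, dx^2, dx, 1] * cf(ind,:).';   % matrix/vector product instead of sum
abs(yq - ppval(pp, xq))                    % should be ~0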
It is a bit surprising that interp1 is slower than ppval, but having a quick look at its source code, it seems that it has to check for many special cases and has to loop over all the points, since it cannot be sure whether the step size is constant.
I didn't check the timing, but I guess you can speed up the linear interpolation a lot if you can guarantee that the steps in x of your table are constant, and that the values to be interpolated are strictly within the given range, so that you do not have to do any checking. In that case, linear interpolation can be converted into a simple lookup problem, like so:
%data to be interpolated, on grid with constant step
x = 0:0.5:10;
y = sin(x);
x_int = 0:0.1:9.9;
%make sure it is interpolation, not extrapolation
assert(all(x(1) <= x_int & x_int < x(end)));
% compute mapping, this can be precomputed for constant grid
slope = (length(x) - 1) / (x(end) - x(1));
offset = 1 - slope*x(1);
%map x_int to the interval 1..length(x)
xmapped = offset + slope * x_int;
ind = floor(xmapped);
frac = xmapped - ind;
%interpolate by taking weighted sum of neighbouring points
y_int = y(ind) .* (1 - frac) + y(ind+1) .* frac;
% make plot to check correctness
plot(x, y, 'o-', x_int, y_int, '.')
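To sanity-check the lookup version (an illustrative snippet added here, not part of the original answer), the result should match interp1 to within rounding error:
% Compare against interp1; the maximum difference should be on the order of eps
y_ref = interp1(x, y, x_int, 'linear');
max(abs(y_int - y_ref))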
I am trying to evaluate the integral of exp(p(x)) for a quartic polynomial p(x), using the series expansion of this integral in terms of the gamma function.
I can find the area for the following polynomial as follows:
pn =
-0.0250 0.0667 0.2500 -0.6000 0
First, using numerical quadrature with quad (adaptive Simpson's rule):
fn=@(x) exp(polyval(pn,x));
area=quad(fn,-10,10);
fprintf('area evaluated by Simpsons rule : %f \n',area)
and the result is area evaluated by Simpsons rule : 11.483072
Then, with the following code that evaluates the summation form of the integral using the gamma function:
a=pn(1);b=pn(2);c=pn(3);d=pn(4);f=pn(5);
area=0;
result=0;
for n=0:40;
for m=0:40;
for p=0:40;
if(rem(n+p,2)==0)
result=result+ (b^n * c^m * d^p) / ( factorial(n)*factorial(m)*factorial(p) ) *...
gamma( (3*n+2*m+p+1)/4 ) / (-a)^( (3*n+2*m+p+1)/4 );
end
end
end
end
result=result*1/2*exp(f)
and this returns 11.4831, more or less the same result as the quad function. Now my question is whether it is possible to get rid of this nested loop, as I will construct the cumulative distribution function so that I can draw samples from this distribution using the inverse-CDF transform. (For constructing the CDF I will use gammainc, i.e. the incomplete gamma function, instead of gamma.)
I will need to sample from such densities that may have different polynomial coefficients and speed is of concern to me. I can already sample from such densities using Monte Carlo methods but I would like to see whether or not it is possible for me to use exact sampling from the density in order to speed up.
Thank you very much in advance.
There are several things one might do. The simplest is to avoid calling factorial. Instead one can use the relation that
factorial(n) = gamma(n+1)
Since gamma seems to be actually faster than a call to factorial, you can save a bit there. Even better, you can use gammaln, which avoids overflow for large arguments:
>> timeit(@() factorial(40))
ans =
4.28681157826087e-05
>> timeit(@() gamma(41))
ans =
2.06671024634146e-05
>> timeit(@() gammaln(41))
ans =
2.17632543333333e-05
Even better, one can do all 4 calls in a single call to gammaln. For example, think about what this does:
gammaln([(3*n+2*m+p+1)/4,n+1,m+1,p+1])*[1 -1 -1 -1]'
Note that this call has no problem with overflows either, in case your numbers get large enough. And since gammaln is vectorized, that one call is fast. It costs little more time to compute 4 values than it does to compute one.
>> timeit(@() gammaln([15 20 40 30]))
ans =
2.73937416896552e-05
>> timeit(@() gammaln(40))
ans =
2.46521943333333e-05
Admittedly, if you use gammaln, you will need a call to exp at the end to recover the final result. You could also do it with a single call to gamma. Perhaps like this:
g = gamma([(3*n+2*m+p+1)/4,n+1,m+1,p+1]);
g = g(1)/(g(2)*g(3)*g(4));
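For illustration (a hedged sketch, not from the original answer), a single term of the sum can be accumulated in log space with one gammaln call and one exp; the numeric values below are placeholders, and the sketch assumes b, c and d are positive so the logarithms are real (otherwise the sign of each term would have to be tracked separately):
% Illustrative values for one term; in the real code n, m, p come from the loops
% and a, b, c, d from pn. Assumes b, c, d > 0.
n = 2; m = 3; p = 4;
a = -0.025; b = 0.0667; c = 0.25; d = 0.6;
logterm = n*log(b) + m*log(c) + p*log(d) ...
    + gammaln([(3*n+2*m+p+1)/4, n+1, m+1, p+1]) * [1 -1 -1 -1]' ...
    - ((3*n+2*m+p+1)/4) * log(-a);
term = exp(logterm)   % equals b^n*c^m*d^p/(n!*m!*p!) * gamma(q)/(-a)^q with q=(3n+2m+p+1)/4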
Next, you can be more creative in the inner loop on p. Rather than a full loop, coupled with a test to ignore the combinations you don't need, why not just do this?
for p=mod(n,2):2:40
That statement will select only those values of p that would have been used anyway, so now you can drop the if statement completely.
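As a quick sanity check (an illustrative snippet, not part of the original answer), the stepped range produces exactly the p values the if statement would have kept, e.g. for n = 3:
n = 3;
p_all  = 0:40;
p_keep = p_all(rem(n + p_all, 2) == 0);   % p values kept by the original if test
p_fast = mod(n,2):2:40;                   % p values from the stepped loop range
isequal(p_keep, p_fast)                   % returns logical 1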
All of the above will give you what I'll guess is about a 5x speed increase in your loops. But it still has a set of nested loops. With some effort, you might be able to improve that too.
For example, rather than computing all of those factorials (or gamma functions) many times, do it ONCE. This should work:
a=pn(1);b=pn(2);c=pn(3);d=pn(4);f=pn(5);
area=0;
result=0;
nlim = 40;
facts = factorial(0:nlim);
gammas = gamma((0:(6*nlim+1))/4);
for n=0:nlim
for m=0:nlim
for p=mod(n,2):2:nlim
result = result + (b.^n * c.^m * d.^p) ...
.*gammas(3*n+2*m+p+1 + 1) ...
./ (facts(n+1).*facts(m+1).*facts(p+1)) ...
./ (-a)^( (3*n+2*m+p+1)/4 );
end
end
end
result=result*1/2*exp(f)
In my test on my machine, I find that your triply nested loops required 4.3 seconds to run. My version above produces the same result, yet required only 0.028418 seconds, a speedup of roughly 150 to 1, despite the triply nested loops.
Well, without even making changes to your code, you could install an excellent package from Tom Minka at Microsoft called lightspeed, which replaces some built-in MATLAB functions with much faster versions. I know there's a replacement for gammaln().
You'll get nontrivial speed improvements, though I'm not sure how much, and it's straightforward to install.
Optimizing my MATLAB code, I stumbled upon a weird problem regarding anonymous functions.
As in this thread, I realized that anonymous functions sometimes run really slowly.
But with minimal changes to the function, it runs as fast as subfunctions or nested functions.
I used this (simple) test file to reproduce the behaviour with Matlab R2010b under Windows 7 64-bit:
clear all; close all; clc;
% functions
fn1 = @(x) x^2;
fn2 = @(x) double(x^2);
% variables
x = linspace(-100,100,100000);
N = length(x);
%% anonymous function
y = zeros(1,N);
t = tic;
for i=1:N
y(i) = fn1(x(i));
end
tm.anonymous_1 = toc(t);
%% anonymous function (modified)
y = zeros(1,N);
t = tic;
for i=1:N
y(i) = fn2(x(i));
end
tm.anonymous_2 = toc(t);
%% print
tm
The results I got were:
tm =
anonymous_1: 1.0605
anonymous_2: 0.1217
As you can see the first approach is about 10 times slower.
I have no idea what triggers this speedup/slowdown.
I tried different things, getting nearly the same (fast) timings:
fn2 = @(x) 1 * x^2;
fn2 = @(x) 0 + x^2;
fn2 = @(x) abs(x^2);
fn2 = @(x) x*x;
Before I start profiling all of my functions,
I would like to know if anyone has an explanation for this behaviour?
P.S.: I know that "vectorized" approaches are much faster, but in my case a solver will be evaluating the function for each variable time step, so that is not an option.
It appears that in the case of 'fn2' the Matlab optimizer is able to inline the function, whereas it is unable to do so in the case of 'fn1'.
This probably has to do with what Matlab knows about the scalarity or complexity or structure of the argument and return value. It probably figures out that 'i' (the argument at the call site) is necessarily scalar, real and non-structured. Given a scalar argument it then tries to figure out the behaviour of the function. In the case of 'fn2', Matlab's optimizer statically determines that it can always fit all possible results of 'double()' into the target variable 'y(i)'. For some reason known only to the designers of the optimizer, Matlab is unable to come to the same conclusion for 'fn1'. Maybe there are some non-obvious corner cases, or '^' lacks some piece of metadata that the optimizer depends on. Anyway, the result is that in the case of 'fn1' Matlab apparently re-evaluates the function at every iteration.
Anyway, statically optimizing dynamic languages is something of a black art in compiler design.
I believe that making the return type of a function independent of its argument's types makes it easier for Matlab to optimize. By the way, y = fn1(x); and y = fn2(x); show roughly the same run-time ratio, so it's not an effect of the arguments being scalar or complex.