Many scientific computing languages make a distinction between absolute time (wall clock) and CPU time (processor cycles). For example, in Matlab we have:
>> tic; pause(1); toc
Elapsed time is 1.009068 seconds.
>> start = cputime; pause(1); elapsed = cputime - start
elapsed =
0
and in Mathematica we have:
In[1]:= AbsoluteTiming[Pause[1]]
Out[1]= {1.0010572, Null}
In[2]:= Timing[Pause[1]]
Out[2]= {0., Null}
This distinction is useful when benchmarking code run on computation servers, where there may be high variance in the absolute timing results depending on what other processes are running concurrently.
The Julia standard library provides support for timing expressions through tic(), toc(), @time, and a few other functions/macros, all based on time_ns(), a function that measures absolute time.
julia> @time sleep(1)
elapsed time: 1.017056895 seconds (135788 bytes allocated)
My question: Is there a simple way to get the elapsed CPU time for an expression evaluation in Julia?
(Side note: doing some digging, it appears that Julia timing is based on the uv_hrtime() function from libuv. It seems to me that using uv_getrusage from the same library might give a way to access elapsed CPU time in Julia, but I'm no expert. Has anybody tried using anything like this?)
I couldn't find any existing solutions, so I've put together a package with some simple CPU timing functionality here: https://github.com/schmrlng/CPUTime.jl. The package is completely untested on parallel code and may have other bugs, but if anybody else would like to try it out, calling
Pkg.clone("https://github.com/schmrlng/CPUTime.jl.git")
from the julia> prompt should install the package.
Julia does have the commands tic() and toc(), which work just like tic and toc in Matlab:
julia> tic(); 7^1000000000; toc()
elapsed time: 0.046563597 seconds
0.046563597
In Octave, cputime() returns the CPU time used by the current Octave session, which gives one way to measure the processing time of a piece of code:
t = cputime;
% ... your code ...
% ... computing ...
% ... when you are done:
printf('Total cpu time: %f seconds\n', cputime - t);
The Octave documentation also describes the etime() function, which returns the difference (in seconds) between two timestamps taken from clock(), like this:
t0 = clock ();
# many computations later...
elapsed_time = etime (clock (), t0);
What's the difference between these two approaches? Does etime use my system's wall clock? And what's a good way to measure the processing time of a piece of code?
Your first code snippet with cputime measures the time your code was executed on a CPU. The second snippet measures the wall clock time (by the way: use tic and toc to measure this). The wall clock includes time when the CPU was executing other threads or waiting for IO.
It is up to you what you want to measure. I normally use wall clock time with tic/toc if I want to benchmark my code.
In single-threaded applications, the wall clock time is greater than or equal to the CPU time (because it includes time the CPU spent processing other threads). In multithreaded applications running on multiple cores, the CPU time can be greater than the wall clock time.
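As a minimal Octave sketch of the difference (using only the built-ins mentioned above), note that sleeping advances the wall clock while consuming almost no CPU time:
t0 = clock ();
c0 = cputime;
pause (2);                                                    % sleep: uses almost no CPU
printf ('wall clock: %f seconds\n', etime (clock (), t0));    % roughly 2.0
printf ('cpu time:   %f seconds\n', cputime - c0);            % close to 0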
I tried MATLAB's convolution functions conv2 and convn with gpuArray.
For example, convn(gpuArray.rand(100,100,10,'single'),gpuArray.rand(5,'single')) and compared it to the CPU version convn(rand(100,100,10),rand(5)). Unfortunately the GPU version is much slower than the CPU version, especially noticeable when I put the function into a loop (which will be relevant for me). Does anyone know an alternative for fast convolution using MATLAB and the GPU for relatively small filtering kernels, from 5x5 to 14x14?
GPU performance is limited by the data array sizes, [100x100x10] and [5x5] in your test case.
The actual performance also depends on the GPU and CPU models. For your data size (test case 2 of the code below), I get a 2.75x performance improvement on a Tesla M2090 GPU over a Xeon E5-2609 CPU.
Here is the MATLAB test code:
m = 1000;
n = 100;
k = 5;
% warm-up call on the GPU, so initialization is not included in the timing
gc = convn(gpuArray.rand(m,m,10,'single'), gpuArray.rand(k,'single'));
tic;
for i = 1:n
    gc = convn(gpuArray.rand(m,m,10,'single'), gpuArray.rand(k,'single'));
end
toc
% the same loop on the CPU
c = convn(rand(m,m,10,'single'), rand(k,'single'));
tic;
for i = 1:n
    c = convn(rand(m,m,10,'single'), rand(k,'single'));
end
toc
When m=1000; n=100; k=5; I got a very good performance improvement (11.6x) on the GPU:
Elapsed time is 2.367453 seconds.
Elapsed time is 27.502952 seconds.
But when m=100; n=1000; k=5; I got only 2.75x:
Elapsed time is 1.206053 seconds.
Elapsed time is 3.330559 seconds.
When m=100; n=1000; k=14; it gets better (4.84x):
Elapsed time is 2.804957 seconds.
Elapsed time is 13.585698 seconds.
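A side note on methodology: GPU calls are asynchronous, so plain tic/toc can mis-time them. If your release has timeit and gputimeit (built in from R2013b; availability is an assumption here), a sketch like the following handles device synchronization for you:
m = 100; k = 5;
Ag = gpuArray.rand(m, m, 10, 'single');
Kg = gpuArray.rand(k, 'single');
tGpu = gputimeit(@() convn(Ag, Kg));    % waits for the GPU to finish each run
A = rand(m, m, 10, 'single');
K = rand(k, 'single');
tCpu = timeit(@() convn(A, K));
fprintf('speedup: %.2fx\n', tCpu / tGpu);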
In trying to choose which indexing method to recommend, I tried to measure the performance. However, the measurements confused me a lot. I ran this multiple times in different orders, and the measurements remained consistent.
Here is how I measured the performance:
for N = [10000 15000 100000 150000]
    x = round(rand(N,1)*5)-2;
    idx1 = x~=0;
    idx2 = abs(x)>0;
    tic
    for t = 1:5000
        idx1 = x~=0;
    end
    toc
    tic
    for t = 1:5000
        idx2 = abs(x)>0;
    end
    toc
end
And this is the result:
Elapsed time is 0.203504 seconds.
Elapsed time is 0.230439 seconds.
Elapsed time is 0.319840 seconds.
Elapsed time is 0.352562 seconds.
Elapsed time is 2.118108 seconds. % This is the strange part
Elapsed time is 0.434818 seconds.
Elapsed time is 0.508882 seconds.
Elapsed time is 0.550144 seconds.
I checked, and this also happens for values around 100000; even at 50000 the strange measurements occur.
So my question is: Does anyone else experience this for a certain range, and what causes this? (Is it a bug?)
I think this is something to do with the JIT compiler (results below are from 2011b). Depending on the system, the version of Matlab, the size of the variables, and exactly what is in the loop(s), it is not always faster to use the JIT. This is related to the "warm-up" effect, where sometimes, if you run an m-file more than once in a session, it gets quicker after the first run, because the accelerator only has to compile some parts of the code once.
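For reference, the switch used below is the undocumented feature function (this works in older releases such as 2011b; whether it still applies to newer ones is an assumption):
feature('accel', 'on')     % JIT/accelerator on (the default)
feature('accel', 'off')    % JIT/accelerator off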
JIT on (feature accel on)
Elapsed time is 0.176765 seconds.
Elapsed time is 0.185301 seconds.
Elapsed time is 0.252631 seconds.
Elapsed time is 0.284415 seconds.
Elapsed time is 1.782446 seconds.
Elapsed time is 0.693508 seconds.
Elapsed time is 0.855005 seconds.
Elapsed time is 1.004955 seconds.
JIT off (feature accel off)
Elapsed time is 0.143924 seconds.
Elapsed time is 0.184360 seconds.
Elapsed time is 0.206405 seconds.
Elapsed time is 0.306424 seconds.
Elapsed time is 1.416654 seconds.
Elapsed time is 2.718846 seconds.
Elapsed time is 2.110420 seconds.
Elapsed time is 4.027782 seconds.
Edited to add: it is kind of interesting to see what happens if you use integers instead of doubles.
JIT on, same code but with x converted using int8
Elapsed time is 0.202201 seconds.
Elapsed time is 0.192103 seconds.
Elapsed time is 0.294974 seconds.
Elapsed time is 0.296191 seconds.
Elapsed time is 2.001245 seconds.
Elapsed time is 2.038713 seconds.
Elapsed time is 0.870500 seconds.
Elapsed time is 0.898301 seconds.
JIT off, using int8
Elapsed time is 0.198611 seconds.
Elapsed time is 0.187589 seconds.
Elapsed time is 0.282775 seconds.
Elapsed time is 0.282938 seconds.
Elapsed time is 1.837561 seconds.
Elapsed time is 1.846766 seconds.
Elapsed time is 2.746034 seconds.
Elapsed time is 2.760067 seconds.
This may be due to some automatic optimization MATLAB applies in its Basic Linear Algebra Subroutines (BLAS).
Just like yours, my configuration (OSX 10.8.4, R2012a with default settings) takes longer to compute idx1 = x~=0 for an x of 10e5 elements than for one of 11e5 elements. The left panel of the figure plots processing time (y-axis) against vector size (x-axis) for different numbers of active cores. Processing time drops for N > 103000, but there is no such drop for the one-core configuration: MATLAB does not optimize the execution of ~= when only one core is active (no parallelization is possible). MATLAB enables these optimization routines only when two conditions are met: multiple cores and a vector of sufficient size.
The right panel displays the results when feature('accel') is set to 'off' (doc). There, only one core is effectively active (the 1-core and 4-core curves are identical), and therefore no optimization is possible.
Finally, the function I used for activating/deactivating the cores is maxNumCompThreads. According to Loren Shure, maxNumCompThreads controls both the JIT and the BLAS. Since feature('JIT','on'/'off') did not play a role in the performance, the BLAS is the last remaining candidate.
I will leave the final sentence to Loren: "The main message here is that you should not generally need to use this function [maxNumCompThreads] at all! Why? Because we'd like to make MATLAB do the best job possible for you."
accel = {'on'; 'off'};
figure('Color','w');
N = 100000:1000:105000;
for ind_accel = 2:-1:1
    feature('accel', accel{ind_accel});
    tElapsed = zeros(4, length(N));
    for ind_core = 1:4
        maxNumCompThreads(ind_core);    % limit the number of computational threads
        n_core = maxNumCompThreads;     % the thread count actually in effect
        for ii = 1:length(N)
            fprintf('cores asked: %d (true: %d) - N: %d\n', ind_core, n_core, N(ii));
            x = round(rand(N(ii),1)*5) - 2;
            idx1 = x~=0;                % warm-up run before timing
            tStart = tic;
            for t = 1:5000
                idx1 = x~=0;
            end
            tElapsed(ind_core,ii) = toc(tStart);
        end
    end
    h2 = subplot(1,2,ind_accel);
    plot(N, tElapsed, '-o', 'MarkerSize', 10);
    legend({'1','2','3','4'});
    xlabel('Vector size', 'FontSize', 14);
    ylabel('Processing time', 'FontSize', 14);
    set(gca, 'FontSize', 14, 'YLim', [0.2 0.7]);
    title(['accel ' accel{ind_accel}]);
end
I am solving the equation A*x=B, where A is a matrix, B is a vector, and x is the unknown answer vector.
Hardware specs:
Intel i7 3630QM (4 cores),
nVidia GeForce GT 640M (384 CUDA cores)
Here's an example:
>> A=rand(5000);
>> B=rand(5000,1);
>> Agpu=gpuArray(A);
>> Bgpu=gpuArray(B);
>> tic;A\B;toc;
Elapsed time is 1.382281 seconds.
>> tic;Agpu\Bgpu;toc;
Elapsed time is 4.775395 seconds.
Somehow the GPU is much slower... Why? It is also slower for FFT, INV, and LU calculations, which should be related to matrix division.
However, the GPU is much faster at matrix multiplication (on the same data):
>> tic;A*B;toc;
Elapsed time is 0.014700 seconds.
>> tic;Agpu*Bgpu;toc;
Elapsed time is 0.000505 seconds.
The main question is: why is GPU A\B (mldivide) so slow compared to the CPU?
UPDATED
Here are some more results when A, B (on the CPU) and AA, BB (on the GPU) are rand(5000):
>> tic;fft(A);toc;
Elapsed time is *0.117189* seconds.
>> tic;fft(AA);toc;
Elapsed time is 1.062969 seconds.
>> tic;fft(AA);toc;
Elapsed time is 0.542242 seconds.
>> tic;fft(AA);toc;
Elapsed time is *0.229773* seconds.
The starred times are the stable ones. The GPU is still almost twice as slow. By the way, why is the GPU even slower on the first two attempts? Is it compiled twice at first?
In addition:
>> tic;sin(A);toc;
Elapsed time is *0.121008* seconds.
>> tic;sin(AA);toc;
Elapsed time is 0.020448 seconds.
>> tic;sin(AA);toc;
Elapsed time is 0.157209 seconds.
>> tic;sin(AA);toc;
Elapsed time is *0.000419* seconds.
After the first two calls, the GPU is incredibly fast at sin calculations.
So, still: why is the GPU so slow for matrix division, fft, and similar calculations, while it is so fast for matrix multiplication and trigonometry? The question should not really arise at all... the GPU ought to be faster in all of these calculations, because MATLAB provides overloaded GPU versions of these functions (mldivide, fft).
Could somebody help me solve these issues, please? :)
Please read up on how MATLAB calculates these solutions. It will help you understand why the GPU is slower.
I'll try to say it in a few words.
A*x = b becomes L*(U*x) = b, where L*U = A and y = U*x:
1. MATLAB factors A into L*U. (As far as I know, this process cannot be done fully in parallel; some steps can be parallelized, due to their nature.)
2. MATLAB solves L*y = b and finds y. (This process cannot be done in parallel, as each step requires data from the previous one.)
3. MATLAB solves U*x = y and finds x. (This process cannot be done in parallel, as each step requires data from the previous one.)
So the GPU's clock is slower than the CPU's, and since these processes cannot be done in parallel, the CPU is faster. And no, unless you come up with a better method (good luck!), the GPU will always be slower except in some very specific cases.
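A rough MATLAB sketch of the solve path described above (this mirrors the three steps, not MATLAB's exact internals; mldivide actually dispatches among several solvers and uses a pivoted LU):
A = rand(5000);
b = rand(5000,1);
[L, U, P] = lu(A);    % step 1, factorization: P*A = L*U (P handles row pivoting)
y = L \ (P*b);        % step 2, forward substitution: inherently sequential
x = U \ y;            % step 3, back substitution: inherently sequential
norm(A*x - b)         % the residual should be tiny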
Part 1 of the explanation is in the answer from user2230360, but your question is twofold, so I'll add a bit about the multiplication.
As noted already, the LU factorization is not very easily parallelized, even if some steps can be. Matrix multiplication, however, is very parallelizable. If you're working with these things you should be able to do matrix multiplication by hand, and then you will know that the elements of the matrix C in A*B=C can be calculated in any order you want; hence the possibility of parallel computation. That is probably why you're seeing lightning-fast multiplication but slow solving of linear systems. One can't be parallelized as much as the other.
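A toy sketch of that independence (illustrative only; real BLAS kernels are far more sophisticated): each entry of C depends on just one row of A and one column of B, so all entries could be computed concurrently.
A = rand(4,3);
B = rand(3,5);
C = zeros(4,5);
for i = 1:4
    for j = 1:5
        C(i,j) = A(i,:) * B(:,j);    % independent of every other entry of C
    end
end
norm(C - A*B)    % ~0: matches the built-in product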
I seem to be getting different performance results when using eigs. On the same matrix, calling
[c, v] = eigs(A, 2, 'sm');
sometimes takes 30 seconds and sometimes 2 seconds.
I need to know whether there's a speedup from some caching on subsequent calls to eigs on the same matrix, since I need to report the times...
If there is such caching, it doesn't appear to be a generic feature. I ran this test from the command line:
A = randn(10000);
B = randn(10000);
C = B;                                % identical copy of B, to probe for result reuse
tic; [c1,v1] = eigs(A,2,'sm'); toc;
tic; [c2,v2] = eigs(A,2,'sm'); toc;   % repeated call on the same matrix
tic; [c3,v3] = eigs(B,2,'sm'); toc;
tic; [c4,v4] = eigs(C,2,'sm'); toc    % same values as B
and got this result
Elapsed time is 32.373128 seconds.
Elapsed time is 28.412905 seconds.
Elapsed time is 32.752616 seconds.
Elapsed time is 29.024055 seconds.
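If the goal is reporting defensible numbers, timeit (built in from R2013b; availability in your release is an assumption) runs the function several times and takes a robust estimate, which smooths out this kind of run-to-run noise:
f = @() eigs(A, 2, 'sm');
t = timeit(f, 2);    % time f, requesting 2 outputs as in the calls above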
I'm surprised, because usually MATLAB tries to outsmart you and will store results for reuse.
Under some circumstances, a large enough matrix might push things into virtual memory, or not, depending upon whether there is a large enough block of contiguous RAM available. Or, you may be doing something on the side.
You can verify what is happening by watching a process monitor as you do the test. Are there suddenly large amounts of disk accesses? If so, then virtual memory is being touched. Is there a different, unrelated process active that is hogging the CPU?