Faster way to initialize arrays via empty matrix multiplication? (Matlab) - performance

I've stumbled upon the weird way (in my view) that MATLAB deals with empty matrices. For example, if two empty matrices are multiplied, the result is:
zeros(3,0)*zeros(0,3)
ans =
0 0 0
0 0 0
0 0 0
Now, this already took me by surprise; however, a quick search led me to the link above, which explains the somewhat twisted logic of why this happens.
However, nothing prepared me for the following observation. I asked myself: how efficient is this type of multiplication compared with just using the zeros(n) function, say for the purpose of initialization? I used timeit to answer this:
f=@() zeros(1000)
timeit(f)
ans =
0.0033
vs:
g=@() zeros(1000,0)*zeros(0,1000)
timeit(g)
ans =
9.2048e-06
Both produce the same outcome, a 1000x1000 matrix of zeros of class double, but the empty matrix multiplication version is ~350 times faster! (A similar result occurs using tic and toc in a loop.)
How can this be? Are timeit or tic,toc bluffing, or have I found a faster way to initialize matrices?
(This was done with MATLAB 2012a, on a Win7-64 machine, Intel i5-650 3.2GHz...)
EDIT:
After reading your feedback, I have looked more carefully into this peculiarity, and tested code that examines the run time vs. the matrix size n on 2 different computers (same MATLAB version though, 2012a). This is what I get:
The code to generate this used timeit as before, but a loop with tic and toc would look the same. So, for small sizes, zeros(n) is comparable. However, around n=400 there is a jump in performance for the empty matrix multiplication. The code I used to generate that plot was:
n=unique(round(logspace(0,4,200)));
for k=1:length(n)
f=@() zeros(n(k));
t1(k)=timeit(f);
g=@() zeros(n(k),0)*zeros(0,n(k));
t2(k)=timeit(g);
end
loglog(n,t1,'b',n,t2,'r');
legend('zeros(n)','zeros(n,0)*zeros(0,n)',2);
xlabel('matrix size (n)'); ylabel('time [sec]');
Do any of you experience this too?
EDIT #2:
Incidentally, empty matrix multiplication is not needed to get this effect. One can simply do:
z(n,n)=0;
where n > some threshold matrix size seen in the previous graph, and get exactly the same efficiency profile as with empty matrix multiplication (again using timeit).
Here's an example where it improves efficiency of a code:
n = 1e4;
clear z1
tic
z1 = zeros( n );
for cc = 1 : n
z1(:,cc)=cc;
end
toc % Elapsed time is 0.445780 seconds.
%%
clear z0
tic
z0 = zeros(n,0)*zeros(0,n);
for cc = 1 : n
z0(:,cc)=cc;
end
toc % Elapsed time is 0.297953 seconds.
However, using z(n,n)=0; instead yields similar results to the zeros(n) case.
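For completeness, here is the same write benchmark with the growth trick; a minimal sketch using the n from above (this is the variant the previous sentence refers to):
clear z2
tic
z2(n,n) = 0; % grow-by-assignment allocation
for cc = 1 : n
z2(:,cc) = cc; % same column-writing workload as before
end
toc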

This is strange: I am seeing f being faster and g being slower than what you are seeing, but both of them are identical for me. Perhaps a different version of MATLAB?
>> g = @() zeros(1000, 0) * zeros(0, 1000);
>> f = @() zeros(1000)
f =
@()zeros(1000)
>> timeit(f)
ans =
8.5019e-04
>> timeit(f)
ans =
8.4627e-04
>> timeit(g)
ans =
8.4627e-04
EDIT: Can you add + 1 at the end of f and g, and see what times you are getting?
EDIT Jan 6, 2013 7:42 EST
I am using a machine remotely, so sorry about the low quality graphs (had to generate them blind).
Machine config:
i7 920. 2.653 GHz. Linux. 12 GB RAM. 8MB cache.
It looks like even the machine I have access to shows this behavior, except at a larger size (somewhere between 1979 and 2073). There is no reason I can think of right now for the empty matrix multiplication to be faster at larger sizes.
I will be investigating a little bit more before coming back.
EDIT Jan 11, 2013
After @EitanT's post, I wanted to do a little bit more digging. I wrote some C++ code to see how MATLAB may be creating a zeros matrix. Here is the code that I used.
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <chrono>

// Minimal wall-clock timer standing in for the unspecified timer helper
// used in the original post.
namespace timer {
std::chrono::high_resolution_clock::time_point t0;
void start() { t0 = std::chrono::high_resolution_clock::now(); }
double stop() {
return std::chrono::duration<double>(
std::chrono::high_resolution_clock::now() - t0).count();
}
}

int main()
{
for (int i = 1975; i <= 2100; i += 25) {
timer::start(); // malloc + explicit zeroing loop
double *foo = (double *)malloc(i * i * sizeof(double));
for (int k = 0; k < i * i; k++) foo[k] = 0;
double mftime = timer::stop();
free(foo);

timer::start(); // malloc + memset
double *bar = (double *)malloc(i * i * sizeof(double));
memset(bar, 0, i * i * sizeof(double));
double mmtime = timer::stop();
free(bar);

timer::start(); // calloc
double *baz = (double *)calloc(i * i, sizeof(double));
double catime = timer::stop();
free(baz);

printf("%d, %lf, %lf, %lf\n", i, mftime, mmtime, catime);
}
return 0;
}
Here are the results.
$ ./test
1975, 0.013812, 0.013578, 0.003321
2000, 0.014144, 0.013879, 0.003408
2025, 0.014396, 0.014219, 0.003490
2050, 0.014732, 0.013784, 0.000043
2075, 0.015022, 0.014122, 0.000045
2100, 0.014606, 0.014480, 0.000045
As you can see, calloc (4th column) seems to be the fastest method. It also gets significantly faster between 2025 and 2050 (I'd assume the threshold is around 2048?).
Now I went back to MATLAB to check the same thing. Here are the results.
>> test
1975, 0.003296, 0.003297
2000, 0.003377, 0.003385
2025, 0.003465, 0.003464
2050, 0.015987, 0.000019
2075, 0.016373, 0.000019
2100, 0.016762, 0.000020
It looks like both f() and g() use calloc at smaller sizes (<2048?). At larger sizes, f() (zeros(m, n)) starts to use malloc + memset, while g() (zeros(m, 0) * zeros(0, n)) keeps using calloc.
So the divergence is explained by the following:
zeros(..) begins to use a different (slower?) scheme at larger sizes.
calloc also behaves somewhat unexpectedly, leading to an improvement in performance.
This is the behavior on Linux. Can someone run the same experiment on a different machine (and perhaps a different OS) and see if it holds?

The results might be a bit misleading. When you multiply two empty matrices, the resulting matrix is not immediately "allocated" and "initialized", rather this is postponed until you first use it (sort of like a lazy evaluation).
The same applies when indexing out of bounds to grow a variable, which in the case of numeric arrays fills out any missing entries with zeros (I discuss afterwards the non-numeric case). Of course growing the matrix this way does not overwrite existing elements.
So while it may seem faster, you are just delaying the allocation time until you actually first use the matrix. In the end you'll have similar timings as if you did the allocation from the start.
Example to show this behavior, compared to a few other alternatives:
N = 1000;
clear z
tic, z = zeros(N,N); toc
tic, z = z + 1; toc
assert(isequal(z,ones(N)))
clear z
tic, z = zeros(N,0)*zeros(0,N); toc
tic, z = z + 1; toc
assert(isequal(z,ones(N)))
clear z
tic, z(N,N) = 0; toc
tic, z = z + 1; toc
assert(isequal(z,ones(N)))
clear z
tic, z = full(spalloc(N,N,0)); toc
tic, z = z + 1; toc
assert(isequal(z,ones(N)))
clear z
tic, z(1:N,1:N) = 0; toc
tic, z = z + 1; toc
assert(isequal(z,ones(N)))
clear z
val = 0;
tic, z = val(ones(N)); toc
tic, z = z + 1; toc
assert(isequal(z,ones(N)))
clear z
tic, z = repmat(0, [N N]); toc
tic, z = z + 1; toc
assert(isequal(z,ones(N)))
The result shows that if you sum the elapsed time for both instructions in each case, you end up with similar total timings:
// zeros(N,N)
Elapsed time is 0.004525 seconds.
Elapsed time is 0.000792 seconds.
// zeros(N,0)*zeros(0,N)
Elapsed time is 0.000052 seconds.
Elapsed time is 0.004365 seconds.
// z(N,N) = 0
Elapsed time is 0.000053 seconds.
Elapsed time is 0.004119 seconds.
The other timings were:
// full(spalloc(N,N,0))
Elapsed time is 0.001463 seconds.
Elapsed time is 0.003751 seconds.
// z(1:N,1:N) = 0
Elapsed time is 0.006820 seconds.
Elapsed time is 0.000647 seconds.
// val(ones(N))
Elapsed time is 0.034880 seconds.
Elapsed time is 0.000911 seconds.
// repmat(0, [N N])
Elapsed time is 0.001320 seconds.
Elapsed time is 0.003749 seconds.
These measurements are in the low milliseconds and might not be very accurate, so you might want to run these commands in a loop a few thousand times and take the average. Also, sometimes running saved M-functions is faster than running scripts or commands at the prompt, as certain optimizations only happen that way...
Either way, allocation is usually done once, so who cares if it takes an extra 30ms :)
A similar behavior can be seen with cell arrays or arrays of structures. Consider the following example:
N = 1000;
tic, a = cell(N,N); toc
tic, b = repmat({[]}, [N,N]); toc
tic, c{N,N} = []; toc
which gives:
Elapsed time is 0.001245 seconds.
Elapsed time is 0.040698 seconds.
Elapsed time is 0.004846 seconds.
Note that even though they are all equal, they occupy different amounts of memory:
>> assert(isequal(a,b,c))
>> whos a b c
Name      Size                 Bytes  Class  Attributes
a         1000x1000          8000000  cell
b         1000x1000        112000000  cell
c         1000x1000          8000104  cell
In fact the situation is a bit more complicated here, since MATLAB is probably sharing the same empty matrix for all the cells, rather than creating multiple copies.
The cell array a is in fact an array of uninitialized cells (an array of NULL pointers), while b is a cell array where each cell is an empty array [] (internally and because of data sharing, only the first cell b{1} points to [] while all the rest have a reference to the first cell). The final array c is similar to a (uninitialized cells), but with the last one containing an empty numeric matrix [].
I looked around the list of exported C functions from the libmx.dll (using Dependency Walker tool), and I found a few interesting things.
There are undocumented functions for creating uninitialized arrays: mxCreateUninitDoubleMatrix, mxCreateUninitNumericArray, and mxCreateUninitNumericMatrix. In fact, there is a submission on the File Exchange that makes use of these functions to provide a faster alternative to the zeros function.
There exists an undocumented function called mxFastZeros. Googling online, I can see you cross-posted this question on MATLAB Answers as well, with some excellent answers over there. James Tursa (author of the UNINIT submission from before) gave an example of how to use this undocumented function.
libmx.dll is linked against the tbbmalloc.dll shared library. This is the Intel TBB scalable memory allocator. This library provides equivalent memory allocation functions (malloc, calloc, free) optimized for parallel applications. Remember that many MATLAB functions are automatically multithreaded, so I wouldn't be surprised if zeros(..) is multithreaded and uses Intel's memory allocator once the matrix size is large enough (here is a recent comment by Loren Shure that confirms this fact).
Regarding the last point about the memory allocator, you could write a benchmark in C/C++ similar to what @PavanYalamanchili did, and compare the various allocators available. Something like this. Remember that MEX-files have a slightly higher memory management overhead, since MATLAB automatically frees any memory that was allocated in MEX-files using the mxCalloc, mxMalloc, or mxRealloc functions. For what it's worth, it used to be possible to change the internal memory manager in older versions.
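One cheap way to probe the multithreading hypothesis from within MATLAB is to compare timings with the thread count pinned to 1; a sketch (my addition, not from the original answer, and note that maxNumCompThreads is marked as deprecated):
nOld = maxNumCompThreads(1); % force single-threaded computation, remember old setting
t1 = timeit(@() zeros(4096));
maxNumCompThreads(nOld); % restore the previous thread count
t2 = timeit(@() zeros(4096));
fprintf('1 thread: %g s, %d threads: %g s\n', t1, nOld, t2);
If zeros(..) is indeed multithreaded above the size threshold, the two timings should diverge for large sizes.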
EDIT:
Here is a more thorough benchmark to compare the discussed alternatives. It specifically shows that once you stress the use of the entire allocated matrix, all three methods are on equal footing, and the difference is negligible.
function compare_zeros_init()
iter = 100;
for N = 512.*(1:8)
% ZEROS(N,N)
t = zeros(iter,3);
for i=1:iter
clear z
tic, z = zeros(N,N); t(i,1) = toc;
tic, z(:) = 9; t(i,2) = toc;
tic, z = z + 1; t(i,3) = toc;
end
fprintf('N = %4d, ZEROS = %.9f\n', N, mean(sum(t,2)))
% z(N,N)=0
t = zeros(iter,3);
for i=1:iter
clear z
tic, z(N,N) = 0; t(i,1) = toc;
tic, z(:) = 9; t(i,2) = toc;
tic, z = z + 1; t(i,3) = toc;
end
fprintf('N = %4d, GROW = %.9f\n', N, mean(sum(t,2)))
% ZEROS(N,0)*ZEROS(0,N)
t = zeros(iter,3);
for i=1:iter
clear z
tic, z = zeros(N,0)*zeros(0,N); t(i,1) = toc;
tic, z(:) = 9; t(i,2) = toc;
tic, z = z + 1; t(i,3) = toc;
end
fprintf('N = %4d, MULT = %.9f\n\n', N, mean(sum(t,2)))
end
end
Below are the timings, averaged over 100 iterations, for increasing matrix sizes. I performed the tests in R2013a.
>> compare_zeros_init
N = 512, ZEROS = 0.001560168
N = 512, GROW = 0.001479991
N = 512, MULT = 0.001457031
N = 1024, ZEROS = 0.005744873
N = 1024, GROW = 0.005352638
N = 1024, MULT = 0.005359236
N = 1536, ZEROS = 0.011950846
N = 1536, GROW = 0.009051589
N = 1536, MULT = 0.008418878
N = 2048, ZEROS = 0.012154002
N = 2048, GROW = 0.010996315
N = 2048, MULT = 0.011002169
N = 2560, ZEROS = 0.017940950
N = 2560, GROW = 0.017641046
N = 2560, MULT = 0.017640323
N = 3072, ZEROS = 0.025657999
N = 3072, GROW = 0.025836506
N = 3072, MULT = 0.051533432
N = 3584, ZEROS = 0.074739924
N = 3584, GROW = 0.070486857
N = 3584, MULT = 0.072822335
N = 4096, ZEROS = 0.098791732
N = 4096, GROW = 0.095849788
N = 4096, MULT = 0.102148452

After doing some research, I found this article on "Undocumented Matlab", in which Mr. Yair Altman had already come to the conclusion that MathWorks' way of preallocating matrices using zeros(M, N) is indeed not the most efficient one.
He timed x = zeros(M,N) vs. clear x, x(M,N) = 0 and found that the latter is ~500 times faster. According to his explanation, the second method simply creates an M-by-N matrix whose elements are automatically initialized to 0. The first method, however, creates x (with x having automatic zero elements) and then assigns a zero to every element in x again, which is a redundant operation that takes more time.
In the case of empty matrix multiplication, such as what you've shown in your question, MATLAB expects the product to be an M×N matrix, and therefore it allocates an M×N matrix. Consequently, the output matrix is automatically initialized to zeroes. Since the original matrices are empty, no further calculations are performed, and hence the elements in the output matrix remain unchanged and equal to zero.

Interesting question; apparently there are several ways to 'beat' the built-in zeros function. My only guess as to why this is happening would be that it could be more memory efficient (after all, zeros(LargeNumber) will sooner cause MATLAB to hit the memory limit than form a devastating speed bottleneck in most code), or more robust somehow.
Here is another fast allocation method using a sparse matrix; I have added the regular zeros function as a benchmark:
tic; x=zeros(1000,1000); toc
Elapsed time is 0.002863 seconds.
tic; clear x; x(1000,1000)=0; toc
Elapsed time is 0.000282 seconds.
tic; x=full(spalloc(1000,1000,0)); toc
Elapsed time is 0.000273 seconds.
tic; x=spalloc(1000,1000,1000000); toc %Is this the same for practical purposes?
Elapsed time is 0.000281 seconds.
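Regarding the question in that last comment: not quite; spalloc returns a sparse matrix, not a full one, so it is not a drop-in replacement for zeros. A quick check (my sketch, not part of the original answer):
x = spalloc(1000,1000,1000000);
issparse(x) % returns 1 (true), unlike the zeros-based variants
nnz(x) % returns 0: no nonzeros yet, only room reserved for 1000000 of them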

Related

Possible to speed up this gpuArray calculation with arrayfun() (or otherwise)?

I have a complex matrix A, and would like to modify it Nt times according to A = exp( -1i*(A + abs(A).^2) ). The size of A is typically 1000x1000, and the number of times to run would be around 10000.
I am looking to reduce the time taken to carry out these operations. For 1000 iterations on the CPU, I measure around 6.4 seconds. Following the Matlab documentation, I was able to move this to the GPU, which reduced the time taken to 0.07 seconds (an incredible x91 improvement!). So far so good.
However, I also now read this link in the docs, which describes how we can sometimes find even further improvement for element-wise calculations if we use arrayfun() as well. If I try to follow the tutorial, the time taken is actually worse, clocking in at 0.47 seconds. My tests are shown below:
Nt = 1000; % Number of times to run each method
test_functionFcn = @test_function;
A = rand( 500, 600, 'double' ) + rand( 500, 600, 'double' )*1i; % Define an initial complex matrix
gpu_A = gpuArray(A); % Transfer matrix to a GPU array
%%%%%%%%%%%%%%%%%%%% Run the calculation Nt times on CPU only %%%%%%%%%%%%%%%%%%%%
cpu_data_out = A;
tic
for k = 1:Nt
cpu_data_out = test_function( cpu_data_out );
end
tcpu = toc;
%%%%%%%%%%%%%%%%% Run the calculation Nt times on GPU directly %%%%%%%%%%%%%%%%%%%%
gpu_data_out = gpu_A;
tic
for k = 1:Nt
gpu_data_out = test_function(gpu_data_out);
end
tgpu = toc;
%%%%%%%%%%%%%% Run the calculation Nt times on GPU using arrayfun() %%%%%%%%%%%%%%
gpuarrayfun_data_out = gpu_A;
tic
for k = 1:Nt
gpuarrayfun_data_out = arrayfun( test_functionFcn, gpuarrayfun_data_out );
end
tgpu_arrayfun = toc;
%%% Print results %%%
fprintf( 'Time taken using only CPU: %g\n', tcpu );
fprintf( 'Time taken using gpuArray directly: %g\n', tgpu );
fprintf( 'Time taken using GPU + arrayfun(): %g\n', tgpu_arrayfun );
%%% Function to operate on matrices %%%
function y = test_function(x)
y = exp(-1i*(x + abs(x).^2));
end
and the results are:
Time taken using only CPU: 6.38785
Time taken using gpuArray directly: 0.0680587
Time taken using GPU + arrayfun(): 0.474612
My questions are:
Am I using arrayfun() correctly in this situation, and is it expected that arrayfun() should be worse?
If so, and it is really just expected to be slower than the direct gpuArray method, is there any easy (i.e. non-MEX) way to speed up such a calculation? (I see they also mention using pagefun, for example.)
Thanks in advance for any advice.
(The graphics card is an Nvidia Quadro M4000, and I am running MATLAB R2017a.)
Edit
After reading @Edric's answer, I think it is important to show a little more of the wider code. One thing I didn't mention in the OP is that in my actual main code, inside the k=1:Nt loop, there is an additional operation: a matrix multiplication with the transpose of a sparse, tridiagonal matrix. Here is a more fleshed-out MWE of what is really going on:
Nt = 1000; % Number of times to run each method
N_rows = 500;
N_cols = 600;
test_functionFcn = @test_function;
A = rand( N_rows, N_cols, 'double' ) + rand( N_rows, N_cols, 'double' )*1i; % Define an initial complex matrix
%%% Generate a sparse, tridiagonal, square transformation matrix %%%%%%%%
mm = 10*ones(N_cols,1); % Subdiagonal elements
dd = 20*ones(N_cols,1); % Main diagonal elements
pp = 30*ones(N_cols,1); % Superdiagonal elements
M = spdiags([mm dd pp],-1:1,N_cols,N_cols);
M(1,1) = 6; % Set a couple of other entries
M(2,1) = 3;
%%%%%%%%%%%%%%%%%%%% Run the calculation Nt times on CPU only %%%%%%%%%%%%
cpu_data_out = A;
for k = 1:Nt
cpu_data_out = test_function( cpu_data_out );
cpu_data_out = cpu_data_out*M.';
end
%%% Function to operate on matrices %%%
function y = test_function(x)
y = exp(-1i*(x + abs(x).^2));
end
I'm very sorry for not including that in the OP - I did not realise at the time that it might be relevant to the solution. Does this change things? Are there still gains to be made with arrayfun() on the GPU, or is this now not suitable for converting to arrayfun()?
A few points here. Firstly, (and most importantly), to time code on the GPU, you need to use either gputimeit, or you need to inject a call to wait(gpuDevice) before calling toc. That's because work is launched asynchronously on the GPU, and you only get accurate timings by waiting for it to finish. With those minor modifications, on my GPU, I see 0.09 seconds for the gpuArray method, and 0.18 seconds for the arrayfun version.
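As a concrete sketch of that timing fix, using the variable names from the question (my illustration, not code from the original answer):
dev = gpuDevice();
gpu_data_out = gpu_A;
tic
for k = 1:Nt
gpu_data_out = test_function(gpu_data_out);
end
wait(dev); % block until all queued GPU work has finished
tgpu = toc;
% or, for a single application, let gputimeit handle the synchronization:
t = gputimeit(@() test_function(gpu_A));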
Running a loop of GPU operations is generally inefficient, so the main gain you can get here is by pushing the loop inside the arrayfun function body so that the loop runs directly on the GPU. Like this:
%%% Function to operate on matrices %%%
function x = test_function(x,Nt)
for ii = 1:Nt
x = exp(-1i*(x + abs(x).^2));
end
end
You'll need to invoke it like A = arrayfun(@test_function, A, Nt). On my GPU, this brings the arrayfun time down to 0.05 seconds, so about twice as fast as the plain gpuArray version.
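For clarity, a minimal invocation sketch with the names from the question (scalar inputs such as Nt are expanded automatically by the gpuArray arrayfun):
gpu_out = arrayfun(@test_function, gpu_A, Nt); % the whole Nt-iteration loop runs on the GPU
wait(gpuDevice); % synchronize before timing or inspecting the result
result = gather(gpu_out);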

Efficiency of diag() - MATLAB

Motivation:
In writing out a matrix operation that was to be performed over tens of thousands of vectors, I kept coming across this warning:
Requested 200000x200000 (298.0GB) array exceeds maximum array size
preference. Creation of arrays greater than this limit may take a long
time and cause MATLAB to become unresponsive. See array size limit or
preference panel for more information.
The reason for this was my use of diag() to get the values down the diagonal of a matrix product. Because MATLAB is generally optimized for vector/matrix operations, when I first write code I usually go for the vectorized form. In this case, however, MATLAB has to build the entire matrix in order to get the diagonal, which causes the memory and speed issues.
Experiment:
I decided to test the use of diag() vs. a for loop to see if at any point it becomes more efficient to use diag():
num = 200000; % Matrix dimension
x = ones(num, 1);
y = 2 * ones(num, 1);
% z = diag(x*y'); % Expression to solve
% Loop approach
tic
z = zeros(num,1);
for i = 1 : num
z(i) = x(i)*y(i);
end
loopTime = toc % time for the loop approach
% Dividing the too-large matrix into process-able chunks
fraction = [10, 20, 50, 100, 500, 1000, 5000, 10000, 20000];
chunkTime = zeros(size(fraction));
for k = 1 : length(fraction)
f = fraction(k);
% Operation to time
tic
z = zeros(num,1);
for i = 1 : f % process each of the f chunks
first = (i-1) * (num / f);
last = first + (num / f);
z(first + 1 : last) = diag(x(first + 1 : last) * y(first + 1 : last)');
end
chunkTime(k) = toc;
end
% Plot results
figure;
hold on
plot(log10(fraction), log10(chunkTime));
plot(log10(fraction), repmat(log10(loopTime), 1, length(fraction)));
plot(log10(fraction), log10(chunkTime), 'g*'); % Plot points along time
legend('Partitioned Running Time', 'Loop Running Time');
xlabel('Log_{10}(Fractional Size)'), ylabel('Log_{10}(Running Time)'), title('Running Time Comparison');
This is the result of the test:
(NOTE: The red line represents the loop time as a threshold--it's not to say that the total loop time is constant regardless of the number of loops)
From the graph it is clear that only once the operation is broken down into roughly 200x200 square chunks does using diag become faster than performing the same operation with loops.
Question:
Can someone explain why I'm seeing these results? Also, I would think that with MATLAB's ever-more optimized design, there would be built-in handling of these massive matrices within a diag() function call. For example, it could just perform the i = j indexed operations. Is there a particular reason why this might be prohibitive?
I also haven't really thought of memory implications for diag using the partition method, although it's clear that as the partition size decreases, memory requirements drop.
Test of speed of diag vs. a loop.
Initialization:
n = 10000;
M = randn(n, n); %create a random matrix.
Test speed of diag:
tic;
d = diag(M);
toc;
Test speed of loop:
tic;
d = zeros(n, 1);
for i=1:n
d(i) = M(i,i);
end;
toc;
This would test diag. Your code is not a clean test of diag...
Comment on where there might be confusion
diag only extracts the diagonal of a matrix. If x and y are vectors and you do d = diag(x * y'), MATLAB first constructs the n-by-n matrix x*y' and calls diag on that. This is why you get the "Requested ... array exceeds maximum array size" error: the MATLAB interpreter does not optimize this away; it does not realize that you only want the diagonal and construct just a vector rather than the full matrix x*y'.
Not sure if you're asking this, but the fastest way to calculate d = diag(x*y'), where x and y are n-by-1 vectors, would simply be: d = x.*y
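A quick sanity check of that identity on a small example (my sketch, not from the original answer):
n = 5;
x = randn(n,1); y = randn(n,1);
d1 = diag(x*y'); % builds the full n-by-n outer product first
d2 = x.*y; % element-wise product, no intermediate matrix
assert(max(abs(d1 - d2)) < 1e-12)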

Compute double sum in matlab efficiently?

I am looking for an optimal way to program this summation ratio. As input I have two vectors, v_mn and x_mn, each with (M*N)x1 elements.
The ratio is of the form (the equation image is missing here; the formula below is reconstructed from the loop implementations in the answers):
$r_{mn} = \dfrac{v_{mn}\, x_{mn}}{1 + \sum_{m' \neq m} \sum_{n' \neq n} v_{mn'}\, x_{m'n'}}$
The vector x_mn is a 0-1 vector, so when x_mn=1 the ratio is r as given above, and when x_mn=0 the ratio is 0.
The vector v_mn contains real numbers.
I implemented the denominator like this, but it takes a lot of time.
function r_ij = denominator(v_mn, M, N, i, j)
%here x_ij=1, to get r_ij.
S = [];
for m = 1:M
for n = 1:N
if (m ~= i)
if (n ~= j)
S = [S v_mn(i, n)];
else
S = [S 0];
end
else
S = [S 0];
end
end
end
r_ij = 1+S;
end
Can you suggest a good way to do this in MATLAB? You can ignore the ratio and give me just the denominator, which is the more complicated part.
EDIT: I am sorry, I did not write it very well. The i and j are some numbers between 1..M and 1..N, respectively. As you can see, the ratio r takes many values (M*N values). So I calculated only the value for i and j; more precisely, I supposed x_ij=1. Also, I convert the vectors v_mn into a matrix, which is why I use double indexing.
If you reshape your data, your summation is just a repeated matrix/vector multiplication.
Here's an implementation for a single m and n, along with a simple speed/equality test:
clc
%# some arbitrary test parameters
M = 250;
N = 1000;
v = rand(M,N); %# (you call it v_mn)
x = rand(M,N); %# (you call it x_mn)
m0 = randi(M,1); %# m of interest
n0 = randi(N,1); %# n of interest
%# "Naive" version
tic
S1 = 0;
for mm = 1:M %# (you call this m')
if mm == m0, continue; end
for nn = 1:N %# (you call this n')
if nn == n0, continue; end
S1 = S1 + v(m0,nn) * x(mm,nn);
end
end
r1 = v(m0,n0)*x(m0,n0) / (1+S1);
toc
%# MATLAB version: use matrix multiplication!
tic
minds = [1:m0-1 m0+1:M]; %# all m' ~= m0 (row indices)
ninds = [1:n0-1 n0+1:N]; %# all n' ~= n0 (column indices)
S2 = sum( x(minds, ninds) * v(m0, ninds).' );
r2 = v(m0,n0)*x(m0,n0) / (1+S2);
toc
%# Test if values are equal
abs(r1-r2) < 1e-12
Outputs on my machine:
Elapsed time is 0.327004 seconds. %# loop-version
Elapsed time is 0.002455 seconds. %# version with matrix multiplication
ans =
1 %# and yes, both are equal
So the speedup is ~133×
Now that's for a single value of m and n. To do this for all values of m and n, you can use an (optimized) double loop around it:
r = zeros(M,N);
for m0 = 1:M
xx = x([1:m0-1 m0+1:M], :);
vv = v(m0,:).';
for n0 = 1:N
ninds = [1:n0-1 n0+1:N];
denom = 1 + sum( xx(:,ninds) * vv(ninds) );
r(m0,n0) = v(m0,n0)*x(m0,n0)/denom;
end
end
which completes in ~15 seconds on my PC for M = 250, N= 1000 (R2010a).
EDIT: actually, with a little more thought, I was able to reduce it all down to this:
denom = zeros(M,N);
for mm = 1:M
xx = x([1:mm-1 mm+1:M],:);
denom(mm,:) = sum( xx*v(mm,:).' ) - sum( bsxfun(@times, xx, v(mm,:)) );
end
denom = denom + 1;
r_mn = x.*v./denom;
which completes in less than 1 second for M = 250 and N = 1000 :)
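To convince yourself the reduced version matches the original formula, here is a sketch that checks one random (m,n) entry against the naive double sum (it assumes v, x, M, N and r_mn from the snippets above are in scope):
m0 = randi(M); n0 = randi(N);
S1 = 0;
for mm = [1:m0-1 m0+1:M] %# all m' ~= m0
for nn = [1:n0-1 n0+1:N] %# all n' ~= n0
S1 = S1 + v(m0,nn) * x(mm,nn);
end
end
assert(abs(r_mn(m0,n0) - v(m0,n0)*x(m0,n0)/(1+S1)) < 1e-10)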
For a start you need to preallocate your S matrix, since it changes size every loop. Put
S = zeros(M*N, 1);
at the start of your function. This will also allow you to do away with your else conditional statements, i.e. they will reduce to this:
if (m ~= i)
if (n ~= j)
S((m-1)*N + n) = v_mn(i, n); % linear index of entry (m,n)
Otherwise, since you have to visit every element, I'm afraid it may not get much faster.
If you desperately need more speed you can look into doing some MEX coding, which is code written in C/C++ that is compiled and run from within MATLAB.
http://www.mathworks.com.au/help/matlab/matlab_external/introducing-mex-files.html
Rather than jumping straight into vectorization of the double loop, you may want to modify the above to make sure that it does what you want. In this code, there is no summing of the data; instead a vector S is being resized at each iteration. As well, the signature could include the matrices V and X so that the multiplication occurs as in the formula (rather than just relying on the value of X being zero or one, let us pass that matrix in).
The function could look more like the following (I've replaced the i,j inputs with m,n to be more like the equation):
function result = denominator(V,X,m,n)
% use the size of V to determine M and N
[M,N] = size(V);
% initialize the summed value to one (to account for one at the end)
result = 1;
% outer loop
for i=1:M
% ignore the case where m==i
if i~=m
for j=1:N
% ignore the case where n==j
if j~=n
result = result + V(m,j)*X(i,j);
end
end
end
end
Note how the first if is outside of the inner for loop since it does not depend on j. Try the above and see what happens!
You can vectorize your calculations from within MATLAB to speed them up. Every time you use an operation like ".^" or ".*", or any matrix operation for that matter, MATLAB executes it as an optimized (and often multithreaded) built-in primitive, which is much, much faster than iterating over each item.
In this case, look at what you are doing in terms of matrices. First, in your loop you are only dealing with the mth row of $V_{nm}$, which we can use as a vector on its own.
If you look at your formula carefully, you can see that you almost get there if you just write this row vector as a column vector and multiply the matrix $X_{nm}$ by it from the left, using standard matrix multiplication. The resulting vector contains the sums over all n. To get the final result, just sum up this vector.
function result = denominator_vectorized(V,X,m,n)
% get the part of V with the first index m
Vm = V(m,:)';
% remove the parts of X you don't want to iterate over. Note that, since I
% am inside the function, I am only editing the value of X within the scope
% of this function.
X(m,:) = 0;
X(:,n) = 0;
%do the matrix multiplication and the summation at once
result = 1 + sum(X*Vm); % add the sum to the 1 in the denominator
To show you how this optimizes your operation, I will compare it to the code proposed by another commenter:
function result = denominator(V,X,m,n)
% use the size of V to determine M and N
[M,N] = size(V);
% initialize the summed value to one (to account for one at the end)
result = 1;
% outer loop
for i=1:M
% ignore the case where m==i
if i~=m
for j=1:N
% ignore the case where n==j
if j~=n
result = result + V(m,j)*X(i,j);
end
end
end
end
The test:
V=rand(10000,10000);
X=rand(10000,10000);
disp('looped version')
tic
denominator(V,X,1,1)
toc
disp('matrix operation')
tic
denominator_vectorized(V,X,1,1)
toc
The result:
looped version
ans =
2.5197e+07
Elapsed time is 4.648021 seconds.
matrix operation
ans =
2.5197e+07
Elapsed time is 0.563072 seconds.
That is almost ten times the speed of the loop iteration. So, always look out for possible matrix operations in your code. If you have the Parallel Computing Toolbox and a CUDA-enabled graphics card installed, MATLAB will even perform these operations on your graphics card without any further effort on your part!
EDIT: That last bit is not entirely true. You still need to take a few steps to run operations on CUDA hardware, but they aren't many. See the MATLAB documentation.
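As a sketch of those few steps (my illustration; it assumes the Parallel Computing Toolbox, a supported GPU, and the denominator_vectorized function above, and you may want smaller matrices than 10000x10000 to fit in GPU memory):
Vg = gpuArray(V); % move the inputs to GPU memory
Xg = gpuArray(X);
resultGpu = denominator_vectorized(Vg, Xg, 1, 1); % the same code now runs on the GPU
result = gather(resultGpu); % copy the scalar result back to the host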

Speeding up MATLAB code for FDR estimation

I have 2 input variables:
a vector of p-values (p) with N elements (unsorted)
and an N x M matrix with p-values obtained by random permutations (pr), with M iterations. N is quite large, 10K to 100K or more. M is, let's say, 100.
I'm estimating the False Discovery Rate (FDR) for each element of p representing how many p-values from random permutations will pass if the current p-value (from p) will be the threshold.
I wrote the function with ARRAYFUN, but it takes a lot of time for large N (2 min for N=20K), comparable to a for-loop.
function pfdr = fdr_from_random_permutations(p, pr)
%# ... skipping arguments checks
pfdr = arrayfun( @(x) mean(sum(pr<=x))./sum(p<=x), p);
Any ideas how to make it faster?
Comments about statistical issues here are also welcome.
The test data can be generated as p = rand(N,1); pr = rand(N,M);.
Well, the trick was indeed sorting the vectors. I give credit to @EgonGeerardyn for that. Also, there is no need to use mean; you can just divide everything by M afterwards. When p is sorted, finding the number of values that are less than the current x is just a running index. pr is a more interesting case - I used a running index called place to discover how many elements are less than x.
Edit(2): Here is the fastest version I came up with:
function Speedup2()
N = 10000/4 ;
M = 100/4 ;
p = rand(N,1); pr = rand(N,M);
tic
pfdr = arrayfun( @(x) mean(sum(pr<=x))./sum(p<=x), p);
toc
tic
out = zeros(numel(p),1);
[p,sortIndex] = sort(p);
pr = sort(pr(:));
pr(end+1) = Inf;
place = 1;
N = numel(pr);
for i=1:numel(p)
x = p(i);
while pr(place)<=x
place = place+1;
end
exp1a = place-1;
exp2 = i;
out(i) = exp1a/exp2;
end
out(sortIndex) = out/ M;
toc
disp(max(abs(pfdr-out)));
end
And the benchmark results for N = 10000/4 ; M = 100/4 :
Elapsed time is 0.898689 seconds.
Elapsed time is 0.007697 seconds.
2.220446049250313e-016
and for N = 10000 ; M = 100 ;
Elapsed time is 39.730695 seconds.
Elapsed time is 0.088870 seconds.
2.220446049250313e-016
First of all, try to analyze this using the profiler. Profiling should ALWAYS be the first step when trying to improve performance. We can all guess at what is causing your performance drop, but the only way to be sure and to focus on the right part is to inspect the profiler report.
I didn't run the profiler on your code, as I don't want to generate test data to do so, but I have some ideas about what work is being carried out in vain. In your function mean(sum(pr<=x))./sum(p<=x), you are repeatedly summing over p<=x. All in all, one call involves N comparisons and N-1 summations, so when all N values of p are calculated you get behavior that is quadratic in N for both parts.
If you step through a sorted version of p, you need fewer calculations and comparisons, as you can keep track of a running sum (i.e. behavior that is linear in N). I guess a similar method could be applied to the other part of the calculation.
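In rough operation counts (my estimate, not stated in the original answer): the arrayfun version performs the $O(NM)$ comparison sum(pr<=x) once for each of the $N$ entries of p, so roughly $O(N^2 M)$ work in total, whereas sorting both arrays and scanning them once costs about $O(NM \log(NM))$, which is why the sorted approaches below scale so much better.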
edit:
The implementation of my idea as expressed above:
function pfdr = fdr(p,pr)
[N, M] = size(pr);
[p, idxP] = sort(p);
[pr] = sort(pr(:));
pfdr = NaN(N,1);
parfor iP = 1:N
x = p(iP);
m = sum(pr<=x)/M;
pfdr(iP) = m/iP;
end
pfdr(idxP) = pfdr;
If you have access to the Parallel Computing Toolbox, the parfor loop will allow you to gain some performance. I used two basic ideas: mean(sum(pr<=x)) is actually equal to sum(pr(:)<=x)/M. On the other hand, since p is sorted, you can just take the index as the number of elements (under the assumption that every element is unique; otherwise you'll have to work with unique to do the fully rigorous analysis).
As you should already know very well by running the profiler yourself, the line m = sum(pr<=x)/M; is the main resource hog. This can be tackled similarly to p by making use of the sorted nature of pr.
I tested my code (both for identical results and for time consumption) against yours. For N=20e3; M=100, I get about 63 seconds to run your code and 43 seconds to run mine on my main computer (MATLAB 2011a on 64 bit Arch Linux, 8 GiB RAM, Core i7 860). For smaller values of M the gain is larger. But this gain is in part due to parallelization.
edit2: Apparently, I came to very similar results as Andrey, my result would have been very similar had I pursued the same approach.
However, I realised that there are some built-in functions that do more or less what you need, i.e. quite similar to determining the empirical cumulative density function. And this can be done by constructing the histogram:
function pfdr = fdr(p,pr)
[N, M] = size(pr);
[p, idxP] = sort(p);
count = histc(pr(:), [0; p]);
count = cumsum(count(1:N));
pfdr = count./(1:N).';
pfdr(idxP) = pfdr/M;
For the same M and N as above, this code takes 228 milliseconds on my computer. It takes 104 milliseconds for Andrey's parameters, so on my computer it turns out a bit slower, but I think this code is far more readable than intricate for loops (as was the case in both our examples).
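A usage sketch for this histc version (my illustration), checking it against the arrayfun reference from the question on small random data:
N = 1000; M = 25;
p = rand(N,1); pr = rand(N,M);
ref = arrayfun(@(x) mean(sum(pr<=x))./sum(p<=x), p); % reference from the question
assert(max(abs(fdr(p,pr) - ref)) < 1e-12)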
Following the discussion between me and Andrey in this question, this very late answer is just to prove to Andrey that vectorized solutions are still faster than JIT'ed loops, they sometimes just aren't as easy to find.
I am more than willing to remove this answer if it is deemed inappropriate by the OP.
Now, on to business. Here are the original arrayfun version, the looped version by Andrey, and the vectorized version by Egon:
function test
clc
N = 10000/4 ;
M = 100/4 ;
p = rand(N,1);
pr = rand(N,M);
%% first option
tic
pfdr = arrayfun( @(x) mean(sum(pr<=x))./sum(p<=x), p);
toc
%% second option
tic
out = zeros(numel(p),1);
[p2,sortIndex] = sort(p);
pr2 = sort(pr(:));
pr2(end+1) = Inf;
place = 1;
for i=1:numel(p2)
x = p2(i);
while pr2(place)<=x
place = place+1;
end
exp1a = place-1;
exp2 = i;
out(i) = exp1a/exp2;
end
out(sortIndex) = out/ M;
toc
%% third option
tic
[p2,sortIndex] = sort(p);
count = histc(pr2(:), [0; p2]);
count = cumsum(count(1:N));
out = count./(1:N).';
out(sortIndex) = out/M;
toc
end
Results on my laptop:
Elapsed time is 0.916196 seconds.
Elapsed time is 0.011429 seconds.
Elapsed time is 0.007328 seconds.
and for N = 10000; M = 100; :
Elapsed time is 38.082718 seconds.
Elapsed time is 0.127052 seconds.
Elapsed time is 0.042686 seconds.
So: vectorized is 2-3 times faster.
