Avoid loops in the computation of logistic equation? - performance

I am trying to calculate the nth value of a logistic equation in Python. It is easy to do it with a loop:
import timeit

tic = timeit.default_timer()
x = 0.23
i = 0
n = 1000000000
while (i < n):
    x = 4 * x * (1 - x)
    i += 1
toc = timeit.default_timer()
toc - tic
However, it is also quite time-consuming. Doing it in PyPy greatly improves the performance, as suggested by abarnert in "Is MATLAB faster than Python (little simple experiment)".
It has also been suggested that I avoid Python loops and use NumPy arrays and vector operations instead, but I do not see how that can help: NumPy operations seem similar to MATLAB ones, and I am not aware of any way to vectorize the code above in MATLAB either.
Is there a way to optimize the code without loops?

Without loops? Maybe, but this is probably not the best way to go. It's important to realize that loops are not per se slow: you try to avoid them in Python or MATLAB in high-performance code, but if you are writing C code, you don't have to care.
So one idea to optimize here would be to use cython to compile your code to C code:
python version:
def calc_x(x, n):
    i = 0
    while (i < n):
        x = 4 * x * (1 - x)
        i += 1
    return x
statically typed cython version:
def calc_x_cy(double x, long n):
    cdef long i = 0
    while (i < n):
        x = 4 * x * (1 - x)
        i += 1
    return x
And all of a sudden, you are almost two orders of magnitude faster:
%timeit calc_x(0.23, n) -> 1 loops, best of 3: 26.9 s per loop
%timeit calc_x_cy(0.23, n) -> 1 loops, best of 3: 370 ms per loop
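If you want the typed version outside a notebook, one way is to put it in a .pyx file and build it with a small setup script. A minimal sketch, assuming the typed function above is saved as calc_x_cy.pyx (the file name is just an example):

# setup.py -- minimal Cython build script (sketch)
from setuptools import setup
from Cython.Build import cythonize

# Build with: python setup.py build_ext --inplace
setup(ext_modules=cythonize("calc_x_cy.pyx"))

After building, the compiled module can be imported like any other: from calc_x_cy import calc_x_cy.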

Numba CUDA computation seems to be slower than a sequential run. Did I make obvious mistakes?

There are several threads covering similar topics, but unfortunately, these seem to be too complicated for me, so I would like to ask a similar question, hoping that someone will have a look at my code specifically to tell me if I got something wrong.
I am learning Numba CUDA right now, starting with the simple examples one can find on the net. I started with this tutorial:
https://github.com/ContinuumIO/gtc2017-numba/blob/master/4%20-%20Writing%20CUDA%20Kernels.ipynb
which shows how to do an addition of arrays in parallel. The system configuration they used to evaluate the times is not given. For the code replication, I use a Geforce GTX 1080 Ti and an Intel Core i7 8700K CPU.
I basically copied the addition script from the tutorial, but also added sequential code for comparison:
from numba import cuda
import numpy as np
import time
import math
@cuda.jit
def addition_kernel(x, y, out):
    tx = cuda.threadIdx.x
    ty = cuda.blockIdx.x
    block_size = cuda.blockDim.x
    grid_size = cuda.gridDim.x

    start = tx + ty * block_size
    stride = block_size * grid_size
    for i in range(start, x.shape[0], stride):
        out[i] = y[i] + x[i]

def add(n, x, y):
    for i in range(n):
        y[i] = y[i] + x[i]
if __name__ == "__main__":
    print(cuda.gpus[0])
    print("")

    n = 100000
    x = np.arange(n).astype(np.float32)
    y = 2 * x
    out = np.empty_like(x)

    x_device = cuda.to_device(x)
    y_device = cuda.to_device(y)
    out_device = cuda.device_array_like(x)

    # Set the number of threads in a block
    threadsperblock = 128
    # Calculate the number of thread blocks in the grid
    blockspergrid = 30  # math.ceil(n[0] / threadsperblock)

    # Now start the kernel
    start = time.process_time()
    cuda.synchronize()
    addition_kernel[blockspergrid, threadsperblock](x_device, y_device, out_device)
    cuda.synchronize()
    end = time.process_time()
    out_global_mem = out_device.copy_to_host()
    print("parallel time: ", end - start)

    start = time.process_time()
    add(n, x, y)
    end = time.process_time()
    print("sequential time: ", end - start)
The parallel time is on average around 0.14 seconds, while the code without the GPU kernel takes only 0.02 seconds.
This seems quite strange to me. Is there anything I did wrong? Or is this problem not a good example for parallelism? (I do not think so, since the for loop should be easy to run in parallel.)
What is odd is that I hardly notice a difference if I do not use the to_device() functions. As far as I understand, these should be important, as they avoid the communication between CPU and GPU after each iteration.
addition_kernel is compiled at runtime when it is called the first time, so in the middle of your measured time! The compilation of a kernel is a pretty intensive operation. You can force the compilation to be done eagerly (i.e. when the function is defined) by providing the types to Numba.
Note that the arrays are a bit too small for you to see a big improvement on GPUs. Moreover, the comparison with the CPU version is not really fair: you should also use Numba for the CPU implementation, or at least NumPy (but not an interpreted pure-CPython loop).
Here is an example:
import numba as nb
from numba import cuda

@cuda.jit('void(float32[::1], float32[::1], float32[::1])')
def addition_kernel(x, y, out):
    tx = cuda.threadIdx.x
    ty = cuda.blockIdx.x
    block_size = cuda.blockDim.x
    grid_size = cuda.gridDim.x

    start = tx + ty * block_size
    stride = block_size * grid_size
    for i in range(start, x.shape[0], stride):
        out[i] = y[i] + x[i]

@nb.njit('void(int64, float32[::1], float32[::1])')
def add(n, x, y):
    for i in range(n):
        y[i] = y[i] + x[i]
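A minimal timing sketch for the eager versions above (it reuses n, x_device, y_device, out_device and the launch configuration from the question; time.perf_counter is used because it measures wall-clock time, which includes time spent waiting for the GPU):

import time

# With the eager signatures above, both functions are compiled when they are
# defined, so the timed region no longer includes compilation.
cuda.synchronize()
start = time.perf_counter()
addition_kernel[blockspergrid, threadsperblock](x_device, y_device, out_device)
cuda.synchronize()
print("GPU kernel time:", time.perf_counter() - start)

start = time.perf_counter()
add(n, x, y)  # the @nb.njit CPU version defined above
print("Numba CPU time: ", time.perf_counter() - start)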

Efficiency of diag() - MATLAB

Motivation:
In writing out a matrix operation that was to be performed over tens of thousands of vectors I kept coming across the warning:
Requested 200000x200000 (298.0GB) array exceeds maximum array size
preference. Creation of arrays greater than this limit may take a long
time and cause MATLAB to become unresponsive. See array size limit or
preference panel for more information.
The reason for this was my use of diag() to get the values down the diagonal of a matrix product. Because MATLAB is generally optimized for vector/matrix operations, when I first write code I usually go for the vectorized form. In this case, however, MATLAB has to build the entire matrix in order to get the diagonal, which causes the memory and speed issues.
Experiment:
I decided to test the use of diag() vs a for loop to see if at any point it was more efficient to use diag():
num = 200000; % Matrix dimension
x = ones(num, 1);
y = 2 * ones(num, 1);
% z = diag(x*y'); % Expression to solve

% Loop approach
tic
z = zeros(num,1);
for i = 1 : num
    z(i) = x(i)*y(i);
end
loopTime = toc;

% Dividing the too-large matrix into processable chunks
fraction = [10, 20, 50, 100, 500, 1000, 5000, 10000, 20000];
chunkTime = zeros(size(fraction));
for k = 1 : length(fraction)
    f = fraction(k);
    % Operation to time
    tic
    z = zeros(num,1);
    for i = 1 : f
        first = (i-1) * (num / f);
        last = first + (num / f);
        z(first + 1 : last) = diag(x(first + 1 : last) * y(first + 1 : last)');
    end
    chunkTime(k) = toc;
end

% Plot results
figure;
hold on
plot(log10(fraction), log10(chunkTime));
plot(log10(fraction), repmat(log10(loopTime), 1, length(fraction)));
plot(log10(fraction), log10(chunkTime), 'g*'); % Plot points along time
legend('Partitioned Running Time', 'Loop Running Time');
xlabel('Log_{10}(Fractional Size)'), ylabel('Log_{10}(Running Time)'), title('Running Time Comparison');
This is the result of the test:
(NOTE: The red line represents the loop time as a threshold--it's not to say that the total loop time is constant regardless of the number of loops)
From the graph it is clear that only when the operation is broken down into roughly 200x200 square matrices does using diag become faster than performing the same operation with loops.
Question:
Can someone explain why I'm seeing these results? Also, I would think that with MATLAB's ever-more optimized design, there would be built-in handling of these massive matrices within a diag() function call. For example, it could just perform the i = j indexed operations. Is there a particular reason why this might be prohibitive?
I also haven't really thought of memory implications for diag using the partition method, although it's clear that as the partition size decreases, memory requirements drop.
Test of speed of diag vs. a loop.
Initialization:
n = 10000;
M = randn(n, n); %create a random matrix.
Test speed of diag:
tic;
d = diag(M);
toc;
Test speed of loop:
tic;
d = zeros(n, 1);
for i=1:n
    d(i) = M(i,i);
end;
toc;
This would test diag. Your code is not a clean test of diag...
Comment on where there might be confusion
diag only extracts the diagonal of a matrix. If x and y are vectors and you do d = diag(x * y'), MATLAB first constructs the n-by-n matrix x*y' and then calls diag on it. This is why you get the "requested array exceeds maximum array size" error. The MATLAB interpreter does not optimize this away: it does not realize that you only want the diagonal and construct just a vector instead of the full matrix x*y'.
Not sure if you're asking this, but the fastest way to calculate d = diag(x*y') where x and y are n by 1 vectors would simply be: d = x.*y
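In equation form, the i-th diagonal entry of the outer product only involves the i-th entries of x and y, so the full matrix is never needed:

\[ \bigl[\operatorname{diag}(x\,y^{\top})\bigr]_i = x_i\, y_i, \qquad i = 1, \dots, n. \]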

Efficient way of computing dot product inside double sum in python3

I'm looking into how to compute, as efficiently as possible in Python 3, a dot product inside a double sum of the form:
import cmath

sum_p = 0
for j in range(0,N):
    for k in range(0,N):
        sum_p += cmath.exp(-1j * sum(a*b for a,b in zip(x, [l - m for l, m in zip(r_p[j], r_p[k])])))
where r_np is an array of several thousand triples, and x a constant triple. Timing for a length of N=1000 triples is about 2.4 s. The same using numpy:
import numpy as np
sum_np = 0
for j in range(0,N):
    for k in range(0,N):
        sum_np = np.add(sum_np, np.exp(-1j * np.inner(x_np,(r_np[j] - r_np[k]))))
is actually slower, with a runtime of about 4.0 s. I presume this is because there is no big vectorization advantage: the only vectorized piece is the short 3-element dot product in np.inner, and that gain is eaten up by launching N^2 of those calls inside the loop.
However, I could gain a modest speedup over the first example by using plain Python 3 with map and mul:
from operator import mul

for j in range(0,N):
    for k in range(0,N):
        sum_p += cmath.exp(-1j * sum(map(mul, x, [l - m for l, m in zip(r_p[j], r_p[k])])))
with a runtime of about 2.0 s.
Attempts to either use an if condition to skip the case j = k, where
r_np[j] - r_np[k] = 0
and thus the dot product also becomes 0, or to split the sum up in two to achieve the same,
for j in range(0,N):
    for k in range(j+1,N):
        ...

for k in range(0,N):
    for j in range(k+1,N):
        ...
both made it even slower. So the whole thing scales as O(N^2), and I wonder whether, with some method like sorting or other tricks, one could get rid of the loops and make it scale as O(N log N).
The problem is that I need single-digit-second runtimes for a set of N~6000 triples, as I have thousands of those sums to compute. Otherwise I have to try scipy's weave, numba, pyrex or cython, or go down the C path entirely…
Thanks in advance for any help!
Edit:
this is how a data sample would look:
# numpy arrays
x_np = np.array([0,0,1], dtype=np.float64)
N=1000
xy = np.multiply(np.subtract(np.random.rand(N,2),0.5),8)
z = np.linspace(0,40,N).reshape(N,1)
r_np = np.hstack((xy,z))
# in python format
x = (0,0,1)
r_p = r_np.tolist()
I used this to generate test data:
x = (1, 2, 3)
r_p = [(i, j, k) for i in range(10) for j in range(10) for k in range(10)]
On my machine, this took 2.7 seconds with your algorithm.
Then I got rid of the zips and sum:
for j in range(0,N):
    for k in range(0,N):
        s = 0
        for t in range(3):
            s += x[t] * (r_p[j][t] - r_p[k][t])
        sum_p += cmath.exp(-1j * s)
This brought it down to 2.4 seconds.
Then I noted that x is constant so:
x * (p - q) = x1*p1 - x1*q1 + x2*p2 - x2*q2 + ...
So I changed the generation code to:
x = (1, 2, 3)
r_p = [(x[0] * i, x[1] * j, x[2] * k) for i in range(10) for j in range(10) for k in range(10)]
And the algorithm to:
for j in range(0,N):
    for k in range(0,N):
        s = 0
        for t in range(3):
            s += r_p[j][t] - r_p[k][t]
        sum_p += cmath.exp(-1j * s)
Which got me to 2.0 seconds.
Then I realized we can rewrite it as:
for j in range(0,N):
    for k in range(0,N):
        sum_p += cmath.exp(-1j * (sum(r_p[j]) - sum(r_p[k])))
This, surprisingly, got me to 1.1 seconds, which I can't really explain; maybe there is some caching going on?
Anyway, caching or not, you can precompute the sums of your triples and then you won't have to rely on the caching mechanism. I did that:
import time

sums = [sum(a) for a in r_p]
sum_p = 0
N = len(r_p)
start = time.clock()
for j in range(0,N):
    for k in range(0,N):
        sum_p += cmath.exp(-1j * (sums[j] - sums[k]))
Which got me to 0.73 seconds.
I hope this is good enough!
Update:
Here's one around 0.01 seconds with a single for loop. It seems mathematically sound, but it's giving slightly different results, which I'm guessing is due to precision issues. I'm not sure how to fix those, but I thought I'd post it in case you can live with the precision issues or someone knows how to fix them.
However, considering that I'm using fewer exp calls than your initial code, maybe this is actually the more correct version, and your initial approach is the one with precision issues.
sums = [sum(a) for a in r_p]
e_denom = sum([cmath.exp(1j * p) for p in sums])
sum_p = 0
N = len(r_p)
start = time.clock()
for j in range(0,N):
    sum_p += e_denom * cmath.exp(-1j * sums[j])
print(sum_p)
end = time.clock()
print(end - start)
Update 2:
The same, except with fewer multiplications and a sum function call:
sum_p = e_denom * sum([np.exp(-1j * p) for p in sums])
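For reference, the algebra behind the single-loop version (and this final one-liner) is just that the double sum over the precomputed sums s_j factors into a product of two independent sums:

\[ \sum_{j=1}^{N} \sum_{k=1}^{N} e^{-i\,(s_j - s_k)} \;=\; \Bigl(\sum_{j=1}^{N} e^{-i s_j}\Bigr) \Bigl(\sum_{k=1}^{N} e^{+i s_k}\Bigr), \]

where the second factor is exactly the e_denom computed above.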
That double loop is a time killer in numpy. If you use vectorized array operations, the evaluation is cut to under a second.
In [1764]: sum_np=0
In [1765]: for j in range(0,N):
               for k in range(0,N):
                   sum_np += np.exp(-1j * np.inner(x_np,(r_np[j] - r_np[k])))
In [1766]: sum_np
Out[1766]: (2116.3316526447466-1.0796252780664872e-11j)
In [1767]: np.exp(-1j * np.inner(x_np, (r_np[:N,None,:]-r_np[None,:N,:]))).sum((0,1))
Out[1767]: (2116.3316526447466-1.0796252780664872e-11j)
Timings:
In [1768]: timeit np.exp(-1j * np.inner(x_np, (r_np[:N,None,:]-r_np[None,:N,:]))).sum((0,1))
1 loops, best of 3: 506 ms per loop
In [1769]: %%timeit
sum_np=0
for j in range(0,N):
    for k in range(0,N):
        sum_np += np.exp(-1j * np.inner(x_np,(r_np[j] - r_np[k])))
1 loops, best of 3: 12.9 s per loop
Replacing np.inner with np.einsum shaves 20% off the time:
np.exp(-1j * np.einsum('k,ijk', x_np, r_np[:N,None,:]-r_np[None,:N,:])).sum((0,1))
Ok guys, thanks a lot for the help. IVlad's last code, which uses the identity sum_j sum_k a[j]*a[k] = (sum_j a[j]) * (sum_k a[k]), makes the biggest difference. This now scales better than O(N^2).
Precalculating the dot product before the sum makes hpaulj's numpy suggestion exactly as fast:
sum_np = 0
dotprods = np.inner(q_np,r_np)
sum_rkexp = np.exp(1j * dotprods).sum()
sum_np = sum_rkexp * np.exp(-1j * dotprods).sum()
both with a runtime of about 0.0003 s. However, I found one more thing that gives another ~50% speedup: instead of computing the exponential twice, I take the complex conjugate inside the sum:
sum_np = 0
dotprods = np.inner(q_np,r_np)
rkexp = np.exp(1j * dotprods)
sum_rkexp = rkexp.sum()
sum_np = sum_rkexp * np.conj(rkexp).sum()
which runs at around 0.0002 s. Compared to my first attempts with non-vectorized numpy, which took ~4 s, this is a speedup of about 2*10^4, and for my 'real data' arrays of N~6000, which took about 125 s, I now get 0.0005 s, an amazing speedup of about 2.5*10^5. Thanks a lot, IVlad and hpaulj, I learned a lot in the last day :)
P.S. I'm amazed by how quick you guys answer with stuff that took me half a day to just follow up ;)

Speeding up element-wise array multiplication in python

I have been playing around with numba and numexpr, trying to speed up a simple element-wise matrix multiplication. I have not been able to get better results; both are basically (speed-wise) equivalent to numpy's multiply function. Has anyone had any luck in this area? Am I using numba and numexpr wrong (I'm quite new to this), or is this altogether a bad approach for trying to speed this up? Here is a reproducible example; thank you in advance:
import numpy as np
from numba import autojit
import numexpr as ne

a = np.random.rand(10, 5000000)

# numpy
multiplication1 = np.multiply(a, a)

# numba
def multiplix(X, Y):
    M = X.shape[0]
    N = X.shape[1]
    D = np.empty((M, N), dtype=np.float)
    for i in range(M):
        for j in range(N):
            D[i, j] = X[i, j] * Y[i, j]
    return D

mul = autojit(multiplix)
multiplication2 = mul(a, a)

# numexpr
def numexprmult(X, Y):
    M = X.shape[0]
    N = X.shape[1]
    return ne.evaluate("X * Y")

multiplication3 = numexprmult(a, a)
What about using fortran and ctypes?
elementwise.F90:
subroutine elementwise( a, b, c, M, N ) bind(c, name='elementwise')
  use iso_c_binding, only: c_float, c_int
  integer(c_int), intent(in) :: M, N
  real(c_float), intent(in) :: a(M, N), b(M, N)
  real(c_float), intent(out):: c(M, N)
  integer :: i, j
  forall (i=1:M, j=1:N)
    c(i,j) = a(i,j) * b(i,j)
  end forall
end subroutine
elementwise.py:
from ctypes import CDLL, POINTER, c_int, c_float
import numpy as np
import time

fortran = CDLL('./elementwise.so')
fortran.elementwise.argtypes = [ POINTER(c_float),
                                 POINTER(c_float),
                                 POINTER(c_float),
                                 POINTER(c_int),
                                 POINTER(c_int) ]

# Setup
M = 10
N = 5000000
a = np.empty((M,N), dtype=c_float)
b = np.empty((M,N), dtype=c_float)
c = np.empty((M,N), dtype=c_float)
a[:] = np.random.rand(M,N)
b[:] = np.random.rand(M,N)

# Fortran call
start = time.time()
fortran.elementwise( a.ctypes.data_as(POINTER(c_float)),
                     b.ctypes.data_as(POINTER(c_float)),
                     c.ctypes.data_as(POINTER(c_float)),
                     c_int(M), c_int(N) )
stop = time.time()
print 'Fortran took ', stop - start, 'seconds'

# Numpy
start = time.time()
c = np.multiply(a,b)
stop = time.time()
print 'Numpy took ', stop - start, 'seconds'
I compiled the Fortran file using
gfortran -O3 -funroll-loops -ffast-math -floop-strip-mine -shared -fPIC \
-o elementwise.so elementwise.F90
The output yields a speed-up of ~10%:
$ python elementwise.py
Fortran took 0.213667869568 seconds
Numpy took 0.230120897293 seconds
$ python elementwise.py
Fortran took 0.209784984589 seconds
Numpy took 0.231616973877 seconds
$ python elementwise.py
Fortran took 0.214708089828 seconds
Numpy took 0.25369310379 seconds
How are you doing your timings?
The creation of your random array takes up most of the overall computation time, and if you include it in your timing you will hardly see any real difference in the results.
However, if you create it up front, you can actually compare the methods.
Here are my results, and I'm consistently seeing what you are seeing: numpy and numba give about the same results (with numba being a little bit faster).
(I don't have numexpr available)
In [1]: import numpy as np
In [2]: from numba import autojit
In [3]: a=np.random.rand(10,5000000)
In [4]: %timeit multiplication1 = np.multiply(a,a)
10 loops, best of 3: 90 ms per loop
In [5]: # numba
In [6]: def multiplix(X,Y):
   ...:     M = X.shape[0]
   ...:     N = X.shape[1]
   ...:     D = np.empty((M, N), dtype=np.float)
   ...:     for i in range(M):
   ...:         for j in range(N):
   ...:             D[i,j] = X[i, j] * Y[i, j]
   ...:     return D
   ...:
In [7]: mul = autojit(multiplix)
In [26]: %timeit multiplication1 = np.multiply(a,a)
10 loops, best of 3: 182 ms per loop
In [27]: %timeit multiplication1 = np.multiply(a,a)
10 loops, best of 3: 185 ms per loop
In [28]: %timeit multiplication1 = np.multiply(a,a)
10 loops, best of 3: 181 ms per loop
In [29]: %timeit multiplication2 = mul(a,a)
10 loops, best of 3: 179 ms per loop
In [30]: %timeit multiplication2 = mul(a,a)
10 loops, best of 3: 180 ms per loop
In [31]: %timeit multiplication2 = mul(a,a)
10 loops, best of 3: 178 ms per loop
Update:
I used the latest version of numba, just compiled it from source: '0.11.0-3-gea20d11-dirty'
I tested this with the default numpy in Fedora 19, '1.7.1'
and numpy '1.6.1' compiled from source, linked against:
Update 3:
My earlier results were of course incorrect: I had return D inside the inner loop, so it was skipping 90% of the calculations.
This provides more evidence for ali_m's assumption that it is really hard to do better than the already very optimized c code.
However, if you are trying to do something more complicated, e.g.,
np.sqrt(((X[:, None, :] - X) ** 2).sum(-1))
I can reproduce the figures Jake Vanderplas gets:
In [14]: %timeit pairwise_numba(X)
10000 loops, best of 3: 92.6 us per loop
In [15]: %timeit pairwise_numpy(X)
1000 loops, best of 3: 662 us per loop
So it seems that what you are doing has already been optimized by numpy to the point where it is hard to do any better.
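For context, here is a rough sketch of what the pairwise_numba / pairwise_numpy pair could look like. It follows the broadcasting expression above and uses the @njit decorator; it is an illustration of the comparison, not necessarily the exact code used in Jake Vanderplas' benchmark:

import numpy as np
from numba import njit

def pairwise_numpy(X):
    # Broadcasted pairwise Euclidean distances, as in the expression above.
    return np.sqrt(((X[:, None, :] - X) ** 2).sum(-1))

@njit
def pairwise_numba(X):
    # Explicit loops; Numba compiles this to fast machine code.
    n, d = X.shape
    D = np.empty((n, n), dtype=np.float64)
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(d):
                diff = X[i, k] - X[j, k]
                s += diff * diff
            D[i, j] = np.sqrt(s)
    return D

# e.g. X = np.random.rand(1000, 3)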
Edit: nevermind this answer, I'm wrong (see comment below).
I'm afraid it will be very, very hard to have a faster matrix multiplication in python than by using numpy's. NumPy usually uses internal fortran libraries like ATLAS/LAPACK that are very very well optimized.
To check if your version of NumPy was built with LAPACK support: open a terminal, go to your Python install directory and type:
for f in `find lib/python2.7/site-packages/numpy/* -name \*.so`; do echo $f; ldd $f;echo "\n";done | grep lapack
Note that the path can vary depending on your python version.
If some lines get printed, you surely have LAPACK support... so achieving faster matrix multiplication on a single core will be very hard.
Now I don't know about using multiple cores to perform matrix multiplication, so you might want to look into that (see ali_m's comment).
Use a GPU, for instance with the following package:
gnumpy
The speed of np.multiply heavily relies on the arrays being exactly the same size.
a = np.random.rand(80000,1)
b = np.random.rand(80000,1)
c = np.multiply(a, b)
is fast as hell whereas the following code takes more than a minute and uses up all my 16 GB of ram:
a = np.squeeze(np.random.rand(80000,1))
b = np.random.rand(80000,1)
c = np.multiply(a, b)
So my advice would be to use arrays of exactly the same dimensions. Hope this is useful for someone looking into how to speed up element-wise multiplication.
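The blow-up in the second case is NumPy broadcasting: a shape (80000,) array multiplied with a shape (80000, 1) array broadcasts to an 80000 x 80000 result, which is why the memory fills up. A tiny sketch (with small sizes, so it is safe to run) illustrating the difference:

import numpy as np

a = np.squeeze(np.random.rand(5, 1))           # shape (5,)
b = np.random.rand(5, 1)                       # shape (5, 1)

print(np.multiply(a, b).shape)                 # (5, 5): broadcast to a full matrix
print(np.multiply(a.reshape(-1, 1), b).shape)  # (5, 1): shapes match, stays element-wise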

Speeding up MATLAB code for FDR estimation

I have 2 input variables:
a vector of p-values (p) with N elements (unsorted)
an N x M matrix with p-values obtained by random permutations (pr), with M iterations. N is quite large, 10K to 100K or more. M is, let's say, 100.
I'm estimating the False Discovery Rate (FDR) for each element of p representing how many p-values from random permutations will pass if the current p-value (from p) will be the threshold.
I wrote the function with ARRAYFUN, but it takes a lot of time for large N (2 min for N=20K), comparable to a for loop.
function pfdr = fdr_from_random_permutations(p, pr)
%# ... skipping arguments checks
pfdr = arrayfun( @(x) mean(sum(pr<=x))./sum(p<=x), p);
Any ideas how to make it faster?
Comments about statistical issues here are also welcome.
The test data can be generated as p = rand(N,1); pr = rand(N,M);.
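In formula form, what the arrayfun line computes for each threshold x = p(i) is the average number of permutation p-values passing that threshold, divided by the number of observed p-values passing it:

\[ \mathrm{pfdr}(x) \;=\; \frac{\tfrac{1}{M}\,\#\{(i,j) : pr_{ij} \le x\}}{\#\{i : p_i \le x\}}. \]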
Well, the trick was indeed sorting the vectors. I give credit to @EgonGeerardyn for that. Also, there is no need to use mean: you can just divide everything by M afterwards. When p is sorted, finding the number of values that are less than the current x is just a running index. pr is a more interesting case: I used a running index called place to discover how many elements are less than x.
Edit (2): Here is the fastest version I came up with:
function Speedup2()
N = 10000/4 ;
M = 100/4 ;
p = rand(N,1); pr = rand(N,M);

tic
pfdr = arrayfun( @(x) mean(sum(pr<=x))./sum(p<=x), p);
toc

tic
out = zeros(numel(p),1);
[p,sortIndex] = sort(p);
pr = sort(pr(:));
pr(end+1) = Inf;
place = 1;
N = numel(pr);
for i=1:numel(p)
    x = p(i);
    while pr(place)<=x
        place = place+1;
    end
    exp1a = place-1;
    exp2 = i;
    out(i) = exp1a/exp2;
end
out(sortIndex) = out/ M;
toc

disp(max(abs(pfdr-out)));
end
And the benchmark results for N = 10000/4 ; M = 100/4 :
Elapsed time is 0.898689 seconds.
Elapsed time is 0.007697 seconds.
2.220446049250313e-016
and for N = 10000 ; M = 100 ;
Elapsed time is 39.730695 seconds.
Elapsed time is 0.088870 seconds.
2.220446049250313e-016
First of all, try to analyze this using the profiler. Profiling should ALWAYS be the first step when trying to improve performance. We can all guess at what is causing your performance drop, but the only way to be sure and focus on the right part is to inspect the profiler report.
I didn't run the profiler on your code, as I don't want to generate test data to do so; but I have some ideas about what work is being carried out in vain. In your function mean(sum(pr<=x))./sum(p<=x), you are repeatedly summing over p<=x. All in all, one call includes N comparisons and N-1 summations. So for both, you have behavior that is quadratic in N when all N values of p are calculated.
If you step through a sorted version of p, you need fewer calculations and comparisons, as you can keep track of a running sum (i.e. behavior that is linear in N). I guess a similar method could be applied to the other part of the calculation.
edit:
The implementation of my idea as expressed above:
function pfdr = fdr(p,pr)
[N, M] = size(pr);
[p, idxP] = sort(p);
[pr] = sort(pr(:));
pfdr = NaN(N,1);
parfor iP = 1:N
    x = p(iP);
    m = sum(pr<=x)/M;
    pfdr(iP) = m/iP;
end
pfdr(idxP) = pfdr;
If you have access to the parallel computing toolbox, the parfor loop will allow you to gain some performance. I used two basic ideas: mean(sum(pr<=x)) is actually equal to sum(pr(:)<=x)/M. On the other hand, since p is sorted, you can just take the index as the number of elements below the threshold (under the assumption that every element is unique; otherwise you'll have to work with unique to do the fully rigorous analysis).
As you should already know very well by running the profiler yourself, the line m = sum(pr<=x)/M; is the main resource hog. This can be tackled similarly to p by making use of the sorted nature of pr.
I tested my code (both for identical results and for time consumption) against yours. For N=20e3; M=100, I get about 63 seconds to run your code and 43 seconds to run mine on my main computer (MATLAB 2011a on 64 bit Arch Linux, 8 GiB RAM, Core i7 860). For smaller values of M the gain is larger. But this gain is in part due to parallelization.
edit 2: Apparently, I came to very similar results as Andrey; my result would have been very similar had I pursued the same approach.
However, I realised that there are some built-in functions that do more or less what you need, i.e. something quite similar to determining the empirical cumulative distribution function. And this can be done by constructing the histogram:
function pfdr = fdr(p,pr)
[N, M] = size(pr);
[p, idxP] = sort(p);
count = histc(pr(:), [0; p]);
count = cumsum(count(1:N));
pfdr = count./(1:N).';
pfdr(idxP) = pfdr/M;
For the same M and N as above, this code takes 228 milliseconds on my computer. It takes 104 milliseconds for Andrey's parameters, so on my computer it turns out a bit slower, but I think this code is far more readable than intricate for loops (as was the case in both our examples).
Following the discussion between me and Andrey in this question, this very late answer is just to prove to Andrey that vectorized solutions are still faster than JIT'ed loops; they sometimes just aren't as easy to find.
I am more than willing to remove this answer if it is deemed inappropriate by the OP.
Now, on to business, here's the original arrayfun, looped version by Andrey, and vectorized version by Egon:
function test
clc
N = 10000/4 ;
M = 100/4 ;
p = rand(N,1);
pr = rand(N,M);

%% first option
tic
pfdr = arrayfun( @(x) mean(sum(pr<=x))./sum(p<=x), p);
toc

%% second option
tic
out = zeros(numel(p),1);
[p2,sortIndex] = sort(p);
pr2 = sort(pr(:));
pr2(end+1) = Inf;
place = 1;
for i=1:numel(p2)
    x = p2(i);
    while pr2(place)<=x
        place = place+1;
    end
    exp1a = place-1;
    exp2 = i;
    out(i) = exp1a/exp2;
end
out(sortIndex) = out/ M;
toc

%% third option
tic
[p2,sortIndex] = sort(p);
count = histc(pr2(:), [0; p2]);
count = cumsum(count(1:N));
out = count./(1:N).';
out(sortIndex) = out/M;
toc
end
Results on my laptop:
Elapsed time is 0.916196 seconds.
Elapsed time is 0.011429 seconds.
Elapsed time is 0.007328 seconds.
and for N = 10000; M = 100; :
Elapsed time is 38.082718 seconds.
Elapsed time is 0.127052 seconds.
Elapsed time is 0.042686 seconds.
So: vectorized is 2-3 times faster.
