As a simple example to illustrate my point, I am trying to solve the equation f(t+1) = f(t) + f(t)*Tr(f^2) starting at t=0, where Tr is the trace of a matrix (the sum of its diagonal elements). Below I provide a basic code. It compiles with no errors but does not update the solution the way I want. My expected result, which I calculated by hand, is also below (it is very easy to check by hand via matrix multiplication).
In my sample code below I have two variables that store the solution: g holds f(t=0), and f stores f(t+1).
complex,dimension(3,3) :: f,g
integer :: k,l,m,p,q
Assume g=f(t=0) is defined as below
do l=1,3 !matrix index loops
   do k=1,3
      if (k == l) then
         g(k,l) = cmplx(0.2,0)
      else
         g(k,l) = cmplx(0,0)
      end if
   end do
end do
I have checked this result is indeed what I want it to be, so I know f at t=0 is defined properly.
Now I try to use this matrix at t=0 to find the matrix for all later times, governed by the equation f(t+1) = f(t) + f(t)*Tr(f^2); this is the part I am not implementing correctly.
do m=1,3 !loop for 3 time iterations
   do p=1,3 !loops for dummy indices for matrix trace
      do q=1,3
         g(1,1) = g(1,1) + g(1,1)*g(p,q)*g(p,q) !compute trace here
         f(1,1) = g(1,1)
         !f(2,2) = g(2,2) + g(2,2)*g(p,q)*g(p,q)
         !f(3,3) = g(3,3) + g(3,3)*g(p,q)*g(p,q)
         !assume all other matrix elements are zero except diagonal
      end do
   end do
end do
Printing this result is done by
print*, "calculated f where m=", m
do k=1,3
print*, (f(k,l), l=1,3)
end do
This is when I realize my code is not being implemented correctly.
When I print f(k,l), for t=1 I expect 0.224 times the identity matrix, and I do get this. However, for t=2 the output is wrong. So my code updates correctly for the first time iteration, but not after that.
I am looking for a solution as to how to properly implement the equation I want to obtain the result I am expecting.
I'll answer a couple of things you seem to be having trouble with. First, the trace. The trace of a 3x3 matrix A is A(1,1)+A(2,2)+A(3,3). The first and second indices are the same, so we use one loop variable. To compute the trace of an NxN matrix A:
trace = 0.
do i = 1, N
   trace = trace + A(i,i)
end do
I think you're trying to loop over p and q to compute your trace, which is incorrect. In that sum you'll include off-diagonal terms like A(2,3), which is wrong.
Second, to compute the update, I recommend you compute the updated f into fNew, and then your code would look something like:
do m=1,3 ! time
   ! -- Compute f^2 (with loops not shown)
   f2 = ...
   ! -- Compute trace of f2 (with loop not shown)
   trace = ...
   ! -- Compute new f
   do j=1,3
      do i=1,3
         fNew(i,j) = f(i,j) + trace*f(i,j)
      end do
   end do
   ! -- Now update f, perhaps recording fNew-f for some residual
   ! -- The LHS and RHS are both arrays of dimension (3,3),
   ! -- so Fortran will automatically perform an array operation
   f = fNew
end do
This method has two advantages. First, your code actually looks like the math you're trying to do and is easy to follow, which is very important for realistic problems that are not so simple. Second, if fNew(i,j) depended on f(i+1,j), for example, writing into a separate fNew avoids overwriting values at the current time level while they are still needed.
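For reference, here is one way the whole scheme could look as a complete program, with the omitted loops filled in. This is only a minimal sketch (fNew and f2 are illustrative names, and the print layout is arbitrary), not the only correct arrangement:
program trace_update
  implicit none
  complex :: f(3,3), fNew(3,3), f2(3,3)
  complex :: trace
  integer :: i, j, k, m

  ! f(t=0) = 0.2 * identity
  f = cmplx(0.0, 0.0)
  do i = 1, 3
     f(i,i) = cmplx(0.2, 0.0)
  end do

  do m = 1, 3                        ! three time iterations
     ! f^2 via explicit loops (matmul(f,f) would also do)
     f2 = cmplx(0.0, 0.0)
     do j = 1, 3
        do i = 1, 3
           do k = 1, 3
              f2(i,j) = f2(i,j) + f(i,k)*f(k,j)
           end do
        end do
     end do
     ! trace of f^2
     trace = cmplx(0.0, 0.0)
     do i = 1, 3
        trace = trace + f2(i,i)
     end do
     ! f(t+1) = f(t) + f(t)*Tr(f^2), computed into fNew first
     fNew = f + trace*f
     f = fNew
     print *, "f after time step", m
     do i = 1, 3
        print *, (f(i,j), j = 1, 3)
     end do
  end do
end program trace_update
At the first time step this prints 0.224 on the diagonal, matching the hand calculation.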
I was persuaded some time ago to drop my comfortable Matlab programming and start programming in Julia. I have been working for a long time with neural networks and I thought that, with Julia, I could get things done faster by parallelising the calculation of the gradient.
The gradient need not be calculated on the entire dataset in one go; instead one can split the calculation. For instance, by splitting the dataset in parts, we can calculate a partial gradient on each part. The total gradient is then calculated by adding up the partial gradients.
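As a tiny illustration of that principle, using the negative_loglikelihood_grad function defined further below and assuming N is even, splitting the data into two halves gives the same total gradient (a sketch, not tuned code):
g1 = negative_loglikelihood_grad(Y, X, W, 1, div(N,2))
g2 = negative_loglikelihood_grad(Y, X, W, div(N,2)+1, N)
grad = g1 + g2    # same as the gradient over the full dataset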
Though the principle is simple, when I parallelise with Julia I get a performance degradation, i.e. one process is faster than two processes! I am obviously doing something wrong... I have consulted other questions asked in the forum but I could still not piece together an answer. I think my problem lies in a lot of unnecessary data movement, but I can't fix it properly.
In order to avoid posting messy neural network code, I am posting below a simpler example that replicates my problem in the setting of linear regression.
The code-block below creates some data for a linear regression problem. The code explains the constants, but X is the matrix containing the data inputs. We randomly create a weight vector w which when multiplied with X creates some targets Y.
######################################
## CREATE LINEAR REGRESSION PROBLEM ##
######################################
# This code implements a simple linear regression problem
MAXITER = 100 # number of iterations for simple gradient descent
N = 10000 # number of data items
D = 50 # dimension of data items
X = randn(N, D) # create random matrix of data, data items appear row-wise
Wtrue = randn(D,1) # create arbitrary weight matrix to generate targets
Y = X*Wtrue # generate targets
The next code-block below defines functions for measuring the fitness of our regression (i.e. the negative log-likelihood) and the gradient of the weight vector w:
####################################
## DEFINE FUNCTIONS ##
####################################
@everywhere begin
#-------------------------------------------------------------------
function negative_loglikelihood(Y,X,W)
#-------------------------------------------------------------------
    # number of data items
    N = size(X,1)
    # accumulate here log-likelihood
    ll = 0
    for nn=1:N
        ll = ll - 0.5*sum((Y[nn,:] - X[nn,:]*W).^2)
    end
    return ll
end
#-------------------------------------------------------------------
function negative_loglikelihood_grad(Y,X,W, first_index,last_index)
#-------------------------------------------------------------------
    # number of data items
    N = size(X,1)
    # accumulate here gradient contributions by each data item
    grad = zeros(similar(W))
    for nn=first_index:last_index
        grad = grad + X[nn,:]' * (Y[nn,:] - X[nn,:]*W)
    end
    return grad
end
end
Note that the above functions are on purpose not vectorised! I choose not to vectorise, as the final code (the neural network case) will also not admit any vectorisation (let us not get into more details regarding this).
Finally, the code-block below shows a very simple gradient descent that tries to recover the parameter weight vector w from the given data Y and X:
####################################
## SOLVE LINEAR REGRESSION ##
####################################
# start from random initial solution
W = randn(D,1)
# learning rate, set here to some arbitrary small constant
eta = 0.000001
# the following for-loop implements simple gradient descent
for iter=1:MAXITER
    # get gradient
    ref_array = Array(RemoteRef, nworkers())
    # let each worker process part of matrix X
    for index=1:length(workers())
        # first index of subset of X that worker should work on
        first_index = (index-1)*int(ceil(N/nworkers())) + 1
        # last index of subset of X that worker should work on
        last_index = min((index)*(int(ceil(N/nworkers()))), N)
        ref_array[index] = @spawn negative_loglikelihood_grad(Y,X,W, first_index,last_index)
    end
    # gather the gradients calculated on parts of matrix X
    grad = zeros(similar(W))
    for index=1:length(workers())
        grad = grad + fetch(ref_array[index])
    end
    # now that we have the gradient we can update parameters W
    W = W + eta*grad;
    # report progress, monitor optimisation
    @printf("Iter %d neg_loglikel=%.4f\n", iter, negative_loglikelihood(Y,X,W))
end
As is hopefully visible, I tried to parallelise the calculation of the gradient in the easiest possible way here. My strategy is to break the calculation of the gradient in as many parts as available workers. Each worker is required to work only on part of matrix X, which part is specified by first_index and last_index. Hence, each worker should work with X[first_index:last_index,:]. For instance, for 4 workers and N = 10000, the work should be divided as follows:
worker 1 => first_index = 1, last_index = 2500
worker 2 => first_index = 2501, last_index = 5000
worker 3 => first_index = 5001, last_index = 7500
worker 4 => first_index = 7501, last_index = 10000
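(These bounds can be reproduced in isolation with the same arithmetic used in the loop below; a quick sketch:)
chunk = int(ceil(N/nworkers()))    # 2500 for N = 10000 and 4 workers
for index = 1:nworkers()
    println(index, " => ", (index-1)*chunk + 1, ":", min(index*chunk, N))
end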
Unfortunately, this entire code works faster if I have only one worker. If I add more workers via addprocs(), the code runs slower. One can aggravate this issue by creating more data items, for instance by using N=20000 instead.
With more data items, the degradation is even more pronounced.
In my particular computing environment with N=20000 and one core, the code runs in ~9 secs. With N=20000 and 4 cores it takes ~18 secs!
I tried many different things inspired by questions and answers in this forum, unfortunately to no avail. I realise that the parallelisation is naive and that data movement must be the problem, but I have no idea how to fix it properly. The documentation also seems a bit scarce on this issue (as is the nice book by Ivo Balbaert).
I would appreciate your help as I have been stuck for quite some while with this and I really need it for my work. For anyone wanting to run the code, to save you the trouble of copying-pasting you can get the code here.
Thanks for taking the time to read this very lengthy question! Help me turn this into a model answer that anyone new in Julia can then consult!
I would say that GD is not a good candidate for parallelization using any of the proposed methods: SharedArray, DistributedArray, or your own implementation of distributing chunks of data.
The problem does not lie in Julia, but in the GD algorithm itself.
Consider the code:
Main process:
for iter = 1:iterations    #iterations: "the more the better"
    δ = _gradient_descent_shared(X, y, θ)
    θ = θ - α * (δ/N)
end
The problem lies in the above for-loop, which is unavoidable: no matter how good _gradient_descent_shared is, the sheer number of sequential iterations kills the noble concept of parallelization.
After reading the question and the above suggestion I've started implementing GD using SharedArray. Please note, I'm not an expert in the field of SharedArrays.
The main process parts (simple implementation without regularization):
run_gradient_descent(X::SharedArray, y::SharedArray, θ::SharedArray, α, iterations) = begin
    N = length(y)
    for iter = 1:iterations
        δ = _gradient_descent_shared(X, y, θ)
        θ = θ - α * (δ/N)
    end
    θ
end

_gradient_descent_shared(X::SharedArray, y::SharedArray, θ::SharedArray, op=(+)) = begin
    if size(X,1) <= length(procs(X))
        return _gradient_descent_serial(X, y, θ)
    else
        rrefs = map(p -> (@spawnat p _gradient_descent_serial(X, y, θ)), procs(X))
        return mapreduce(r -> fetch(r), op, rrefs)
    end
end
The code common to all workers:
#= Returns the range of indices of a chunk for every worker on which it can work.
The function splits data examples (N rows into chunks),
not the parts of the particular example (features dimensionality remains intact).=#
@everywhere function _worker_range(S::SharedArray)
    idx = indexpids(S)
    if idx == 0
        return 1:size(S,1), 1:size(S,2)
    end
    nchunks = length(procs(S))
    splits = [round(Int, s) for s in linspace(0,size(S,1),nchunks+1)]
    splits[idx]+1:splits[idx+1], 1:size(S,2)
end

# Computations on a chunk of the full data.
@everywhere _gradient_descent_serial(X::SharedArray, y::SharedArray, θ::SharedArray) = begin
    prange = _worker_range(X)
    pX = sdata(X[prange[1], prange[2]])
    py = sdata(y[prange[1],:])
    tempδ = pX' * (pX * sdata(θ) .- py)
end
The data loading and training. Let me assume that we have:
features in X::Array of size (N,D), where N is the number of examples and D the dimensionality of the features
labels in y::Array of size (N,1)
The main code might look like this:
X=[ones(size(X,1)) X] #adding the artificial coordinate
N, D = size(X)
MAXITER = 500
α = 0.01
initialθ = SharedArray(Float64, (D,1))
sX = convert(SharedArray, X)
sy = convert(SharedArray, y)
X = nothing
y = nothing
gc()
finalθ = run_gradient_descent(sX, sy, initialθ, α, MAXITER);
After implementing this and running it (on the 8 cores of my Intel Core i7) I got a very slight speedup over serial GD (1 core) on my multiclass (19 classes) training data: 715 sec for serial GD vs. 665 sec for shared GD.
If my implementation is correct (please check - I'm counting on that), then parallelization of the GD algorithm is not worth it. You might well get better acceleration using stochastic GD on a single core.
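For comparison, a minimal stochastic-GD sketch in the same old-style Julia, assuming plain arrays X (N x D) and y (N x 1) and a weight vector θv (illustrative names; the per-example update matches the batch gradient above):
θv = vec(sdata(initialθ))      # work with a plain vector
for iter = 1:iterations
    i = rand(1:N)              # one random example per update
    xi = vec(X[i,:])
    θv = θv - α * xi * (dot(xi, θv) - y[i])
end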
If you want to reduce the amount of data movement, you should strongly consider using SharedArrays. You could preallocate just one output vector, and pass it as an argument to each worker. Each worker sets a chunk of it, just as you suggested.
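A rough sketch of that idea in the question's old-style Julia (partial_grad! and the one-column-per-worker layout are illustrative, not an established API; X and Y would ideally be SharedArrays too):
sgrad = SharedArray(Float64, (D, nworkers()))   # preallocated once; one column per worker

@everywhere function partial_grad!(sgrad, Y, X, W, lo, hi, col)
    acc = zeros(size(X,2))
    for nn = lo:hi
        acc = acc + vec(X[nn,:]) * (Y[nn] - dot(vec(X[nn,:]), vec(W)))
    end
    sgrad[:, col] = acc      # each worker writes only its own chunk
    nothing
end

# after all workers return, reduce locally instead of fetching arrays:
grad = sum(sgrad, 2)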
Background
I have a large set of vectors (orientation data in an axis-angle representation; the axis is the vector) to which I want to apply a clustering algorithm. I tried kmeans, but the computational time was too long (it never finished). So instead I am trying to implement the KFCG algorithm, which is faster (Kirke 2010):
Initially we have one cluster with the entire set of training vectors and the codevector C1, which is the centroid. In the first iteration of the algorithm, the clusters are formed by comparing the first element of training vector Xi with the first element of code vector C1. The vector Xi is grouped into cluster 1 if xi1 < c11, otherwise vector Xi is grouped into cluster 2, as shown in Figure 2(a), where the codevector dimension space is 2. In the second iteration, cluster 1 is split into two by comparing the second element xi2 of each vector Xi belonging to cluster 1 with the second element of that cluster's codevector; cluster 2 is split likewise, as shown in Figure 2(b). This procedure is repeated until the codebook reaches the size specified by the user.
I'm unsure what ratio is appropriate for the codebook, but it shouldn't matter for the code optimization. Also note mine is 3-D so the same process is done for the 3rd dimension.
My code attempts
I've tried implementing the above algorithm in Matlab 2013 (Student Version). Here are some different structures I've tried, BUT they take way too long (I have never seen them complete):
%training vectors:
Atgood = Nx4 vector (see test data below if want to test);
vecA = Atgood(:,1:3);
roA = size(vecA,1);
%Codebook size, Nsel, is ratio of data
remainFrac2=0.5;
Nseltemp = remainFrac2*roA; %codebook size
%Ensure selected size after nearest power of 2 is NOT greater than roA
if 2^round(log2(Nseltemp)) < roA
    NselIter = round(log2(Nseltemp));
else
    NselIter = ceil(log2(Nseltemp)-1);
end
Nsel = 2^NselIter; %power of 2 - for LGB and other algorithms
MAIN BLOCK TO OPTIMIZE:
%KFCG:
%%cluster = cell(1,Nsel); %Unsure #rows - Don't know how to initialize if need mean...
codevec(1,1:3) = mean(vecA,1);
count1=1;
count2=1;
ind=1;
for kk = 1:NselIter
    hh2 = 1:2:size(codevec,1)*2;
    for hh1 = 1:length(hh2)
        hh = hh2(hh1);
%         for ii = 1:roA
%             if vecA(ii,ind) < codevec(hh1,ind)
%                 cluster{1,hh}(count1,1:4) = Atgood(ii,:); %want all 4 elements
%                 count1 = count1+1;
%             else
%                 cluster{1,hh+1}(count2,1:4) = Atgood(ii,:); %want all 4
%                 count2 = count2+1;
%             end
%         end
        %EDIT: My ATTEMPT at optimizing above for loop:
        repcv = repmat(codevec(hh1,ind),[size(vecA,1),1]);
        splitind = vecA(:,ind) >= repcv;
        splitind2 = vecA(:,ind) < repcv;
        cluster{1,hh} = vecA(splitind,:);
        cluster{1,hh+1} = vecA(splitind2,:);
    end
    clear codevec
    %Only mean the 1x3 vector portion of the cluster - for centroid
    codevec = cell2mat((cellfun(@(x) mean(x(:,1:3),1),cluster,'UniformOutput',false))');
    if ind < 3
        ind = ind+1;
    else
        ind = 1;
    end
end
if length(codevec) ~= Nsel
    warning('codevec ~= Nsel');
end
Alternatively, instead of cells, I thought 3-D matrices would be faster. I tried, but it was slower with my method of appending the next row on each iteration (temp=[]; for...temp=[temp;new];).
Also, I wasn't sure what was best to loop with, for or while:
%If initialize cell to full length
while length(find(~cellfun('isempty',cluster))) < Nsel
Well, anyways, the first method was fastest for me.
Questions
Is the logic sound? Not in the sense of whether it matches the algorithm described, but from a coding perspective: are there any odd constructs I used (especially those multiple inner loops) that slow it down? Where can I speed it up (you can just point me to resources or previous questions)?
My array, Atgood, is 1,000,000x4, making NselIter = 19. Do I just need to find a way to decrease this size, or can the code be optimized?
Should this be asked on CodeReview? If so, I'll move it.
Testing Data
Here's some random vectors you can use to test:
for ii=1:1000 %My size is ~ 1,000,000
    omega = 2*rand(3,1)-1;
    omega = (omega/norm(omega))';
    Atgood(ii,1:4) = [omega,57];
end
Your biggest issue is re-iterating through all of vecA FOR EACH CODEVECTOR, rather than just the vectors that belong to the corresponding cluster. You're supposed to split each cluster on its own codevector. As it is, your cluster structure grows and grows, and each iteration processes more and more samples.
Your second issue is the loop around the comparisons, and the appending of samples to build up the clusters. Both can be solved by vectorizing the comparison operation. Oh, I just saw your edit where this was optimized - much better. But codevec(hh1,ind) is just a scalar, so you don't even need the repmat.
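In other words, a plain comparison against the scalar expands automatically:
splitind = vecA(:,ind) >= codevec(hh1,ind);   % no repmat needed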
Try this version:
% (preallocs added in edit)
cluster = cell(1,Nsel);
codevec = zeros(Nsel, 3);
codevec(1,:) = mean(Atgood(:,1:3),1);
cluster{1} = Atgood;
nClusters = 1;
ind = 1;
while nClusters < Nsel
    for c = 1:nClusters
        % split cluster c on component ind of ITS OWN codevector
        lower_cluster_logical = cluster{c}(:,ind) < codevec(c,ind);
        cluster{nClusters+c} = cluster{c}(~lower_cluster_logical,:);
        cluster{c} = cluster{c}(lower_cluster_logical,:);
        % recompute both centroids from the 1x3 vector portion
        codevec(c,:) = mean(cluster{c}(:,1:3), 1);
        codevec(nClusters+c,:) = mean(cluster{nClusters+c}(:,1:3), 1);
    end
    ind = rem(ind,3) + 1;     % cycle the comparison dimension 1 -> 2 -> 3 -> 1
    nClusters = nClusters*2;  % every existing cluster was split in two
end
I just have a question about Fortran optimisation (probably about programs in general):
There are two ways to carry out a basic operation: over the entire vector at once, or element by element in a loop, i.e.
x = array(:,1)
y = array(:,2)
z = array(:,3)
x1 = floor(x/k) + 1
y1 = floor(y/k) + 1
z1 = floor(z/k) + 1
OR
do i = 1, n
   x1(i) = floor(x(i)/k) + 1
   y1(i) = floor(y(i)/k) + 1
   z1(i) = floor(z(i)/k) + 1
end do
I can use OpenMP on the loop because there are 100 million entries, but I'm not sure it would help. Would it be faster to do it in the loop or outside of it? Experience and common sense tell me to do it outside. There are other components to the program, but I'm finding that most of the time is taken up by creating the new vectors x1, y1, z1, because there are so many x, y, z values to convert.
If you're concerned with execution speed, then I suggest you profile a version of the code which dispenses with what seem to be the temporary array slices x, y and z. Creating them requires copying a lot of data around the memory of your machine. You could simply write
x1 = floor(array(:,1)/k) + 1
y1 = floor(array(:,2)/k) + 1
z1 = floor(array(:,3)/k) + 1
Your compiler ought to be able to do this without making a copy of array but this is something you ought to check.
Depending on elements of your code which are not shown in your question you might even be able to declare x1,y1 and z1 to be pointers and write something like this:
array_over_k = floor(array/k) + 1
x1 => array_over_k(:,1)
y1 => array_over_k(:,2)
z1 => array_over_k(:,3)
Whichever way you do the calculations, you still have to do the calculations; but do you need to make all those copies of elements of the arrays?
This will be memory-bandwidth bound. I would go the first way, provided the arrays are contiguous in memory (i.e. not some weird non-contiguous pointers). But it's best to try it and measure; without a profiler one can easily be wrong. Also, you can use OpenMP (or just autoparallelization) for the first version as well.
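If you do try OpenMP on the loop version, a minimal sketch looks like this (assuming x1, y1, z1 are already allocated with n elements; compile with your compiler's OpenMP flag, e.g. -fopenmp for gfortran):
!$omp parallel do
do i = 1, n
   x1(i) = floor(array(i,1)/k) + 1
   y1(i) = floor(array(i,2)/k) + 1
   z1(i) = floor(array(i,3)/k) + 1
end do
!$omp end parallel do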
I am just learning MATLAB and I find it hard to understand the performance factors of loops vs vectorized functions.
In my previous question, Nested for loops extremely slow in MATLAB (preallocated), I realized that using a vectorized function vs. 4 nested loops made a 7x difference in running time.
In that example, instead of looping through all dimensions of a 4-dimensional array and calculating the median for each vector, it was much cleaner and faster to just call median(stack, n), where n is the working dimension for the median function.
But median is just a very easy example and I was just lucky that it had this dimension parameter implemented.
My question is that how do you write a function yourself which works as efficiently as one which has this dimension range implemented?
For example you have a function my_median_1D which only works on a 1-D vector and returns a number.
How do you write a function my_median_nD which acts like MATLAB's median, by taking an n-dimensional array and a "working dimension" parameter?
Update
I found the code for calculating median in higher dimensions
% In all other cases, use linear indexing to determine exact location
% of medians. Use linear indices to extract medians, then reshape at
% end to appropriate size.
cumSize = cumprod(s);
total = cumSize(end); % Equivalent to NUMEL(x)
numMedians = total / nCompare;
numConseq = cumSize(dim - 1); % Number of consecutive indices
increment = cumSize(dim); % Gap between runs of indices
ixMedians = 1;
y = repmat(x(1),numMedians,1); % Preallocate appropriate type
% Nested FOR loop tracks down medians by their indices.
for seqIndex = 1:increment:total
    for consIndex = half*numConseq:(half+1)*numConseq-1
        absIndex = seqIndex + consIndex;
        y(ixMedians) = x(absIndex);
        ixMedians = ixMedians + 1;
    end
end

% Average in second value if n is even
if 2*half == nCompare
    ixMedians = 1;
    for seqIndex = 1:increment:total
        for consIndex = (half-1)*numConseq:half*numConseq-1
            absIndex = seqIndex + consIndex;
            y(ixMedians) = meanof(x(absIndex),y(ixMedians));
            ixMedians = ixMedians + 1;
        end
    end
end

% Check last indices for NaN
ixMedians = 1;
for seqIndex = 1:increment:total
    for consIndex = (nCompare-1)*numConseq:nCompare*numConseq-1
        absIndex = seqIndex + consIndex;
        if isnan(x(absIndex))
            y(ixMedians) = NaN;
        end
        ixMedians = ixMedians + 1;
    end
end
Could you explain why this code is so effective compared to simple nested loops? It has nested loops just like the other function.
I don't understand how it could be 7x faster, and why it is so complicated.
Update 2
I realized that using median was not a good example, as it is a complicated function in itself, requiring sorting of the array or other neat tricks. I re-did the tests with mean instead, and the results are even more extreme:
19 seconds vs 0.12 seconds.
It means that the built-in way (ultimately via sum) is 160 times faster than the nested loops.
It is really hard for me to understand how an industry-leading language can have such an extreme performance difference depending on programming style, but I see the points mentioned in the answers below.
Update 2 (to address your updated question)
MATLAB is optimized to work well with arrays. Once you get used to it, it is actually really nice to type one line and have MATLAB do the full 4D looping itself without having to worry about it. MATLAB is often used for prototyping / one-off calculations, so it makes sense to save time for the person coding, even at the cost of some of C[++|#]'s flexibility.
This is why MATLAB internally does some loops really well - often by coding them as a compiled function.
The code snippet you give doesn't really contain the relevant line of code which does the main work, namely
% Sort along given dimension
x = sort(x,dim);
In other words, the code you show only needs to access the median values by their correct index in the now-sorted multi-dimensional array x (which doesn't take much time). The actual work accessing all array elements was done by sort, which is a built-in (i.e. compiled and highly optimized) function.
Original answer (about how to build your own fast functions working on arrays)
There are actually quite a few built-ins that take a dimension parameter: min(stack, [], n), max(stack, [], n), mean(stack, n), std(stack, [], n), median(stack, n), sum(stack, n)... Together with the fact that other built-in functions like exp() and sin() automatically work on each element of your whole array (i.e. sin(stack) automatically does four nested loops for you if stack is 4D), you can build up a lot of the functions you might need just by relying on the existing built-ins.
If this is not enough for a particular case, have a look at repmat, bsxfun, arrayfun and accumarray, which are very powerful functions for doing things "the MATLAB way". Just search SO for questions (or rather answers) using one of these; I learned a lot about MATLAB's strong points that way.
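As a small taste of bsxfun, here is how you could subtract each column's mean from a matrix without a loop (just an illustrative snippet):
A = rand(5,3);
A0 = bsxfun(@minus, A, mean(A,1));   % mean(A,1) is 1x3 and is expanded down the rows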
As an example, say you wanted to implement the p-norm of stack along dimension n, you could write
function result=pnorm(stack, p, n)
result = sum(stack.^p, n).^(1/p);
... where you effectively reuse the "which-dimension-capability" of sum.
Update
As Max points out in the comments, also have a look at the colon operator (:), which is a very powerful tool for selecting elements from an array (or even changing its shape, which is more generally done with reshape).
In general, have a look at the section Array Operations in the help - it contains repmat et al. mentioned above, but also cumsum and some more obscure helper functions which you should use as building blocks.
Vectorization
In addition to what's already been said, you should also understand that vectorization can involve parallelization: performing concurrent operations on data as opposed to sequential execution (think SIMD instructions), and even taking advantage of threads and multiple processors in some cases...
MEX-files
Now although the "interpreted vs. compiled" point has already been argued, no one has mentioned that you can extend MATLAB by writing MEX-files, which are compiled C functions that can be called directly as normal functions from inside MATLAB. This allows you to implement performance-critical parts in a lower-level language like C.
Column-major order
Finally, when trying to optimize some code, always remember that MATLAB stores matrices in column-major order. Accessing elements in that order can yield significant improvements compared to other arbitrary orders.
For example, in your previously linked question you were computing the median of a set of stacked images along some dimension. The order in which those dimensions are arranged greatly affects the performance. Illustration:
%# sequence of 10 images
fPath = fullfile(matlabroot,'toolbox','images','imdemos');
files = dir( fullfile(fPath,'AT3_1m4_*.tif') );
files = strcat(fPath,{filesep},{files.name}');
I = imread( files{1} );
%# stacked images along the 1st dimension: [numImages H W RGB]
stack1 = zeros([numel(files) size(I) 3], class(I));
for i=1:numel(files)
    I = imread( files{i} );
    stack1(i,:,:,:) = repmat(I, [1 1 3]); %# grayscale to RGB
end
%# stacked images along the 4th dimension: [H W RGB numImages]
stack4 = permute(stack1, [2 3 4 1]);
%# compute median image from each of these two stacks
tic, m1 = squeeze( median(stack1,1) ); toc
tic, m4 = median(stack4,4); toc
isequal(m1,m4)
The timing difference was huge:
Elapsed time is 0.257551 seconds. %# stack1
Elapsed time is 17.405075 seconds. %# stack4
Could you explain why this code is so effective compared to simple nested loops? It has nested loops just like the other function.
The problem with nested loops is not the nested loops themselves. It's the operations you perform inside.
Each function call (especially to a non-built-in function) incurs a little overhead; more so if the function performs, for example, error checking that takes the same amount of time regardless of input size. Thus, if a function has just 1 ms of overhead and you call it 1000 times, you have wasted a second. If you can call it once to perform a vectorized calculation, you pay the overhead only once.
Furthermore, the JIT compiler can help vectorize simple for-loops in which you, for example, only perform basic arithmetic operations. Thus, the loops with simple calculations in your post are sped up a lot, while the loops calling median are not.
In this case
M = median(A,dim) returns the median values for elements along the dimension of A specified by scalar dim
But for a general function you can try splitting your array with mat2cell (which works with n-D arrays, not just matrices) and applying your my_median_1D function through cellfun. Below I use median as an example to show that you get equivalent results, but you can instead pass any function defined in an m-file, or an anonymous function defined with the @(args) notation.
>> testarr = [[1 2 3]' [4 5 6]']
testarr =
1 4
2 5
3 6
>> median(testarr,2)
ans =
2.5000
3.5000
4.5000
>> shape = size(testarr)
shape =
3 2
>> cellfun(@median,mat2cell(testarr,repmat(1,1,shape(1)),[shape(2)]))
ans =
2.5000
3.5000
4.5000
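If you really need to wrap an arbitrary 1-D function yourself, one generic pattern (a sketch; my_apply_along is an illustrative name, and it assumes fun maps a vector to a scalar) is to permute the working dimension to the front, reshape to 2-D, apply the function column by column, and restore the shape:
function y = my_apply_along(fun, x, dim)
    sz = size(x);
    perm = [dim, setdiff(1:ndims(x), dim)];        % working dimension first
    xp = reshape(permute(x, perm), sz(dim), []);   % one column per slice
    yp = zeros(1, size(xp,2));
    for c = 1:size(xp,2)
        yp(c) = fun(xp(:,c));                      % fun sees a 1-D vector
    end
    szp = sz(perm);
    y = ipermute(reshape(yp, [1, szp(2:end)]), perm);
end
With the array above, my_apply_along(@median, testarr, 2) returns the same 3x1 result as median(testarr,2).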