bsxfun implementation in matrix multiplication - performance

As always trying to learn more from you, I was hoping I could receive some help with the following code.
I need to accomplish the following:
1) I have a vector:
x = [1 2 3 4 5 6 7 8 9 10 11 12]
2) and a matrix:
A =[11 14 1
5 8 18
10 8 19
13 20 16]
I need to be able to multiply each value from x with every value of A, this means:
new_matrix = [1* A
2* A
3* A
...
12* A]
This will give me a new_matrix of size (12*m x n), assuming A is (m x n); in this case (12*4 x 3).
How can I do this using bsxfun in MATLAB? And would this method be faster than a for-loop?
Regarding my for-loop, I need some help here as well... I am not able to store each "new_matrix" as the loop runs :(
for i=x
new_matrix = A.*x(i)
end
Thanks in advance!!
EDIT: After the solutions were given
First solution
clear all
clc
x=1:0.1:50;
A = rand(1000,1000);
tic
val = bsxfun(@times,A,permute(x,[3 1 2]));
out = reshape(permute(val,[1 3 2]),size(val,1)*size(val,3),[]);
toc
Output:
Elapsed time is 7.597939 seconds.
Second solution
clear all
clc
x=1:0.1:50;
A = rand(1000,1000);
tic
Ps = kron(x.',A);
toc
Output:
Elapsed time is 48.445417 seconds.

Send x to the third dimension, so that singleton expansion comes into effect when bsxfun is used for multiplication with A, extending the product result to the third dimension. Then, perform the bsxfun multiplication -
val = bsxfun(@times,A,permute(x,[3 1 2]))
Now, val is a 3D matrix and the desired output is expected to be a 2D matrix concatenated along the columns through the third dimension. This is achieved below -
out = reshape(permute(val,[1 3 2]),size(val,1)*size(val,3),[])
Hope that made sense! Spread the bsxfun word around! woo!! :)

The kron function does exactly that:
kron(x.',A)
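For illustration, here is a minimal sketch with the sample data from the question (the variable name new_matrix is mine), showing that kron builds exactly the stacked result:
x = [1 2 3 4 5 6 7 8 9 10 11 12];
A = [11 14 1; 5 8 18; 10 8 19; 13 20 16];
new_matrix = kron(x.', A); % 48x3: rows 1:4 are 1*A, rows 5:8 are 2*A, and so on up to 12*A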

Here is my benchmark of the methods mentioned so far, along with a few additions of my own:
function [t,v] = testMatMult()
% data
%{
x = [1 2 3 4 5 6 7 8 9 10 11 12];
A = [11 14 1; 5 8 18; 10 8 19; 13 20 16];
%}
x = 1:50;
A = randi(100, [1000,1000]);
% functions to test
fcns = {
@() func1_repmat(A,x)
@() func2_bsxfun_3rd_dim(A,x)
@() func2_forloop_3rd_dim(A,x)
@() func3_kron(A,x)
@() func4_forloop_matrix(A,x)
@() func5_forloop_cell(A,x)
@() func6_arrayfun(A,x)
};
% timeit
t = cellfun(@timeit, fcns, 'UniformOutput',true);
% check results
v = cellfun(@feval, fcns, 'UniformOutput',false);
isequal(v{:})
%for i=2:numel(v), assert(norm(v{1}-v{2}) < 1e-9), end
end
% Amro
function B = func1_repmat(A,x)
B = repmat(x, size(A,1), 1);
B = bsxfun(@times, B(:), repmat(A,numel(x),1));
end
% Divakar
function B = func2_bsxfun_3rd_dim(A,x)
B = bsxfun(@times, A, permute(x, [3 1 2]));
B = reshape(permute(B, [1 3 2]), [], size(A,2));
end
% Vissenbot
function B = func2_forloop_3rd_dim(A,x)
B = zeros([size(A) numel(x)], 'like',A);
for i=1:numel(x)
B(:,:,i) = x(i) .* A;
end
B = reshape(permute(B, [1 3 2]), [], size(A,2));
end
% Luis Mendo
function B = func3_kron(A,x)
B = kron(x(:), A);
end
% SergioHaram & TheMinion
function B = func4_forloop_matrix(A,x)
[m,n] = size(A);
p = numel(x);
B = zeros(m*p,n, 'like',A);
for i=1:numel(x)
B((i-1)*m+1:i*m,:) = x(i) .* A;
end
end
% Amro
function B = func5_forloop_cell(A,x)
B = cell(numel(x),1);
for i=1:numel(x)
B{i} = x(i) .* A;
end
B = cell2mat(B);
%B = vertcat(B{:});
end
% Amro
function B = func6_arrayfun(A,x)
B = cell2mat(arrayfun(@(xx) xx.*A, x(:), 'UniformOutput',false));
end
The results on my machine:
>> t
t =
0.1650 %# repmat (Amro)
0.2915 %# bsxfun in the 3rd dimension (Divakar)
0.4200 %# for-loop in the 3rd dim (Vissenbot)
0.1284 %# kron (Luis Mendo)
0.2997 %# for-loop with indexing (SergioHaram & TheMinion)
0.5160 %# for-loop with cell array (Amro)
0.4854 %# arrayfun (Amro)
(Those timings can slightly change between different runs, but this should give us an idea how the methods compare)
Note that some of these methods are going to cause out-of-memory errors for larger inputs (for example my solution based on repmat can easily run out of memory). Others will get significantly slower for larger sizes but won't error due to exhausted memory (the kron solution for instance).
I think that the bsxfun method func2_bsxfun_3rd_dim or the straightforward for-loop func4_forloop_matrix (thanks to MATLAB JIT) are the best solutions in this case.
Of course you can change the above benchmark parameters (size of x and A) and draw your own conclusions :)

Just to add an alternative, you can maybe use cellfun to achieve what you want. Here's an example (slightly modified from yours):
x = randi(2, 5, 3)-1;
a = randi(3,3);
%// bsxfun 3D (As implemented in the accepted solution)
val = bsxfun(@and, a, permute(x', [3 1 2])); %//'
out = reshape(permute(val,[1 3 2]),size(val,1)*size(val,3),[]);
%// cellfun (My solution)
val2 = cellfun(@(z) bsxfun(@and, a, z), num2cell(x, 2), 'UniformOutput', false);
out2 = cell2mat(val2); % or use cat(3, val2{:}) to get a 3D matrix equivalent to val and then permute/reshape like for out
%// compare
disp(nnz(out ~= out2));
Both give the same exact result.
For more info and tricks using cellfun, see: http://matlabgeeks.com/tips-tutorials/computation-using-cellfun/
And also this: https://stackoverflow.com/a/1746422/1121352

If your vector x has length 12 and your matrix is of size 4x3, I don't think that using one or the other would change much in terms of time. If you are working with a larger matrix and vector, that might become an issue.
So first of all, we want to multiply a vector with a matrix. In the for-loop method, that would give something like this:
s = size(A);
new_matrix(s(1),s(2),numel(x)) = zeros; %This pre-allocates. If you have a big vector or matrix, this will save a lot of time.
for i = 1:numel(x)
new_matrix(:,:,i) = A.*x(i);
end
This will give you a 3D matrix, with each slice along the third dimension being one result of your multiplication. If this is not what you are looking for, I'll be adding another solution which might be more time efficient with bigger matrices and vectors.
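If the stacked (12*m x n) layout from the question is what you actually need, here is a minimal sketch of the conversion, following the same permute/reshape idea as the accepted answer:
s = size(A); % as used above
new_matrix2D = reshape(permute(new_matrix, [1 3 2]), [], s(2));
% rows 1:s(1) hold x(1)*A, the next s(1) rows hold x(2)*A, and so on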

Related

Why does this simple code become slower to execute when I vectorize it?

I have a snippet of code which takes a vector A and creates a new vector C by summing A with both a left- and right-shifted version of itself (this is based on a centered finite difference formula). For example, if A = [1 2 3 4 5], then C = [0 6 9 12 0], where the edges are zero because these don't have entries on both sides.
I created two versions, one which loops over C element by element, and a second which processes the whole array. I measure the "vectorised" version to be around 50 times slower than the loop version... I was expecting the vectorised version to offer a speed improvement, but it seems that is not the case - what am I missing?
A = [1 2 3 4 5];
N = length(A);
C1 = zeros([1, N]);
C2 = zeros([1, N]);
%%% Loop version, element by element %%%
tic
for ntimes = 1:1e6
for m = 2:(N-1)
C1(m) = A(m+1) + A(m-1) + A(m);
end
end
t1 = toc;
disp(['Time taken for loop version = ',num2str(t1)])
%%% Vector version, process entire array at once %%%
tic
for ntimes = 1:1e6
C2(2:(N-1)) = A(3:N) + A(1:(N-2)) + A(2:(N-1));
end
t2 = toc;
disp(['Time taken for vector version = ',num2str(t2)])

Efficient way of computing multivariate gaussian varying the mean - Matlab

Is there an efficient way to do the computation of a multivariate gaussian (as below) that returns matrix p, i.e. making use of some sort of vectorization? I am aware that matrix p is symmetric, but for a matrix of size 40000x3, for example, this will still take quite a long time.
Matlab code example:
DataMatrix = [3 1 4; 1 2 3; 1 5 7; 3 4 7; 5 5 1; 2 3 1; 4 4 4];
[rows, cols ] = size(DataMatrix);
I = eye(cols);
p = zeros(rows);
for k = 1:rows
p(k,:) = mvnpdf(DataMatrix(:,:),DataMatrix(k,:),I);
end
Stage 1: Hack into source code
Iteratively we are performing mvnpdf(DataMatrix(:,:),DataMatrix(k,:),I)
The syntax is : mvnpdf(X,Mu,Sigma).
Thus, the correspondence with our input becomes :
X = DataMatrix(:,:);
Mu = DataMatrix(k,:);
Sigma = I
For the sizes relevant to our situation, the source code mvnpdf.m reduces to -
%// Store size parameters of X
[n,d] = size(X);
%// Get vector mean, and use it to center data
X0 = bsxfun(@minus,X,Mu);
%// Make sure Sigma is a valid covariance matrix
[R,err] = cholcov(Sigma,0);
%// Create array of standardized data, and compute log(sqrt(det(Sigma)))
xRinv = X0 / R;
logSqrtDetSigma = sum(log(diag(R)));
%// Finally get the quadratic form and thus, the final output
quadform = sum(xRinv.^2, 2);
p_out = exp(-0.5*quadform - logSqrtDetSigma - d*log(2*pi)/2)
Now, if Sigma is always an identity matrix, we would have R as an identity matrix too. Therefore, X0 / R would be the same as X0, which is saved as xRinv. So, essentially quadform = sum(X0.^2, 2);
Thus, the original code -
for k = 1:rows
p(k,:) = mvnpdf(DataMatrix(:,:),DataMatrix(k,:),I);
end
reduces to -
[n,d] = size(DataMatrix);
[R,err] = cholcov(I,0);
p_out = zeros(rows);
K = sum(log(diag(R))) + d*log(2*pi)/2;
for k = 1:rows
X0 = bsxfun(@minus,DataMatrix,DataMatrix(k,:));
quadform = sum(X0.^2, 2);
p_out(k,:) = exp(-0.5*quadform - K);
end
Now, if the input matrix is of size 40000x3, you might want to stop here. But with system resources permitting, you can vectorize everything as discussed next.
Stage 2: Vectorize everything
Now that we see what's actually going on and that the computations look parallelizable, it's time to step up to bsxfun in 3D with its good friend permute for a vectorized solution, like so -
%// Get size params and R
[n,d] = size(DataMatrix);
[R,err] = cholcov(I,0);
%// Calculate constants : "logSqrtDetSigma" and "d*log(2*pi)/2"
K1 = sum(log(diag(R)));
K2 = d*log(2*pi)/2;
%// Major thing happening here as we calculate "X0" for all iterations
%// in one go with permute and bsxfun
diffs = bsxfun(@minus,DataMatrix,permute(DataMatrix,[3 2 1]));
%// "Sigma" is an identity matrix, so it plays no part in "/R" at "xRinv = X0 / R".
%// Perform elementwise squaring and summing rows to get vectorized "quadform"
quadform1 = squeeze(sum(diffs.^2,2))
%// Finally use "quadform1" and get vectorized output as a 2D array
p_out = exp(-0.5*quadform1 - K1 - K2)
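(A rough memory estimate of my own, not from the original answer: for a 40000x3 DataMatrix, diffs is a 40000x3x40000 double array, about 40000*3*40000*8 bytes ≈ 38 GB, which is why the fully vectorized stage is only practical "with system resources permitting".)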

Reshape vector to matrix with column-wise zero padding in matlab

for an input matrix
in = [1 1;
1 2;
1 3;
1 4;
2 5;
2 6;
2 7;
3 8;
3 9;
3 10;
3 11];
i want to get the output matrix
out = [1 5 8;
2 6 9;
3 7 10;
4 0 11];
meaning I want to reshape the second input column into an output matrix, where all values corresponding to one value in the first input column are written into one column of the output matrix.
As there can be different numbers of entries for each value in the first input column (here 4 values for "1" and "3", but only 3 for "2"), the normal reshape function is not applicable. I need to pad all columns to the maximum number of rows.
Do you have an idea how to do this in a MATLAB-ish way?
The second input column can only contain positive numbers, so the padding values can be 0, -x, NaN, ...
The best I could come up with is this (loop-based):
maxNumElem = 0;
for i=in(1,1):in(end,1)
maxNumElem = max(maxNumElem,numel(find(in(:,1)==i)));
end
out = zeros(maxNumElem,in(end,1)-in(1,1));
for i=in(1,1):in(end,1)
tmp = in(in(:,1)==i,2);
out(1:length(tmp),i) = tmp;
end
Either of the following approaches assumes that column 1 of in is sorted, as in the example. If that's not the case, apply this initially to sort in according to that criterion:
in = sortrows(in,1);
Approach 1 (using accumarray)
Compute the required number of rows, using mode;
Use accumarray to gather the values corresponding to each column, filled with zeros at the end. The result is a cell;
Concatenate horizontally the contents of all cells.
Code:
[~, n] = mode(in(:,1)); %//step 1
out = accumarray(in(:,1), in(:,2), [], @(x){[x; zeros(n-numel(x),1)]}); %//step 2
out = [out{:}]; %//step 3
Alternatively, step 1 could be done with histc
n = max(histc(in(:,1), unique(in(:,1)))); %//step 1
or with accumarray:
n = max(accumarray(in(:,1), in(:,2), [], @(x) numel(x))); %//step 1
Approach 2 (using sparse)
Generate a row-index vector using this answer by @Dan, and then build your matrix with sparse:
a = arrayfun(@(x)(1:x), diff(find([1,diff(in(:,1).'),1])), 'uni', 0); %//'
out = full(sparse([a{:}], in(:,1), in(:,2)));
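To make this easier to follow, here is my own trace on the example in (not part of the original answer):
% diff(find([1,diff(in(:,1).'),1])) gives the run lengths of column 1: [4 3 4]
% arrayfun then builds {1:4, 1:3, 1:4}, so [a{:}] = [1 2 3 4 1 2 3 1 2 3 4]
% sparse(rowIdx, colIdx, vals) places in(:,2) at (row within its group, group ID),
% and full() turns the zero-padded sparse result into the desired out matrix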
Introduction to proposed solution and Code
Proposed here is a bsxfun-based masking approach that uses the binary operators available as builtins for use with bsxfun, and as such I would consider it very appropriate for problems like this. Of course, you must also be aware that bsxfun is a memory-hungry tool, so it could pose a problem if you are dealing with billions of elements, depending on the memory available to MATLAB.
Getting into the details of the proposed approach, we get the counts of each ID from column 1 of the input with histc. Then, the magic happens with bsxfun + @le to create a mask of positions in the output array (initialized with zeros) that are to be filled by the column-2 elements from the input. That's all you need to tackle the problem with this approach.
Solution Code
counts = histc(in(:,1),1:max(in(:,1)))'; %//' counts of each ID from column1
max_counts = max(counts); %// Maximum counts for each ID
mask = bsxfun(@le,[1:max_counts]',counts); %//' mask of locations where
%// column2 elements are to be placed
out = zeros(max_counts,numel(counts)); %// Initialize the output array
out(mask) = in(:,2); %// place the column2 elements in the output array
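As a quick sanity check, here is my own trace with the in/out example from the question:
% counts = [4 3 4] (run lengths of IDs 1, 2, 3), max_counts = 4
% mask = bsxfun(@le, [1:4]', counts) is a 4x3 logical; column 2 has only 3 true rows
% out(mask) = in(:,2) then fills column-wise, giving
% out = [1 5 8; 2 6 9; 3 7 10; 4 0 11]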
Benchmarking (for performance)
The benchmarking presented here compares the proposed solution in this post against the various methods presented in Luis's solution. This skips the original loopy approach presented in the problem as it appeared to be very slow for the input generated in the benchmarking code.
Benchmarking Code
num_ids = 5000;
counts_each_id = randi([10 100],num_ids,1);
num_runs = 20; %// number of iterations each approach is run for
%// Generate random input array
in = [];
for k = 1:num_ids
in = [in ; [repmat(k,counts_each_id(k),1) rand(counts_each_id(k),1)]];
end
%// Warm up tic/toc.
for k = 1:50000
tic(); elapsed = toc();
end
disp('------------- With HISTC + BSXFUN Masking approach')
tic
for iter = 1:num_runs
counts = histc(in(:,1),1:max(in(:,1)))';
max_counts = max(counts);
out = zeros(max_counts,numel(counts));
out(bsxfun(@le,[1:max_counts]',counts)) = in(:,2);
end
toc
clear counts max_counts out
disp('------------- With MODE + ACCUMARRAY approach')
tic
for iter = 1:num_runs
[~, n] = mode(in(:,1)); %//step 1
out = accumarray(in(:,1), in(:,2), [], @(x){[x; zeros(n-numel(x),1)]}); %//step 2
out = [out{:}];
end
toc
clear n out
disp('------------- With HISTC + ACCUMARRAY approach')
tic
for iter = 1:num_runs
n = max(histc(in(:,1), unique(in(:,1))));
out = accumarray(in(:,1), in(:,2), [], @(x){[x; zeros(n-numel(x),1)]}); %//step 2
out = [out{:}];
end
toc
clear n out
disp('------------- With ARRAYFUN + Sparse approach')
tic
for iter = 1:num_runs
a = arrayfun(@(x)(1:x), diff(find([1,diff(in(:,1).'),1])), 'uni', 0); %//'
out = full(sparse([a{:}], in(:,1), in(:,2)));
end
toc
clear a out
Results
------------- With HISTC + BSXFUN Masking approach
Elapsed time is 0.598359 seconds.
------------- With MODE + ACCUMARRAY approach
Elapsed time is 2.452778 seconds.
------------- With HISTC + ACCUMARRAY approach
Elapsed time is 2.579482 seconds.
------------- With ARRAYFUN + Sparse approach
Elapsed time is 1.455362 seconds.
slightly better, but still uses a loop :(
out=zeros(4,3);%set to zero matrix
for i = 1:max(in(:,1)); %find max in column 1, and loop for that number
ind = find(in(:,1)==i); %
out(1: size(in(ind,2),1),i)= in(ind,2);
end
don't know if you can avoid the loop...

Vectorizing three for loops

I'm quite new to Matlab and I need help in speeding up some part of my code. I am writing a Matlab application that performs 3D matrix convolution but unlike in standard convolution, the kernel is not constant, it needs to be calculated for each pixel of an image.
So far, I have ended up with a working code, but incredibly slow:
function result = calculateFilteredImages(images, T)
% images - matrix [480,360,10] of 10 grayscale images of height=480 and width=360
% represented as a value in the range [0..1]
% i.e. images(10,20,5) = 0.1231;
% T - some matrix [480,360,10, 3,3] of double values, calculated earlier
kerN = 5; %kernel size
mid=floor(kerN/2); %half the kernel size
offset=mid+1; %kernel offset
[h,w,n] = size(images);
%add padding so as not to get IndexOutOfBoundsEx during summation:
%[i.e. changes [1 2 3...10] to [0 0 1 2 ... 10 0 0]]
images = padarray(images,[mid, mid, mid]);
result(h,w,n)=0; %preallocate, faster than zeros(h,w,n)
kernel(kerN,kerN,kerN)=0; %preallocate
% the three parameters below are not important in this problem
% (are used to calculate sigma in x,y,z direction inside the loop)
sigMin=0.5;
sigMax=3;
d = 3;
for a=1:n;
tic;
for b=1:w;
for c=1:h;
M(:,:)=T(c,b,a,:,:); % M is now a 3x3 matrix
[R D] = eig(M); %get eigenvectors and eigenvalues - R and D are now 3x3 matrices
% eigenvalues
l1 = D(1,1);
l2 = D(2,2);
l3 = D(3,3);
sig1=sig( l1 , sigMin, sigMax, d);
sig2=sig( l2 , sigMin, sigMax, d);
sig3=sig( l3 , sigMin, sigMax, d);
% calculate kernel
for i=-mid:mid
for j=-mid:mid
for k=-mid:mid
x_new = [i,j,k] * R; %calculate new [i,j,k]
kernel(offset+i, offset+j, offset+k) = exp(- (((x_new(1))^2 )/(sig1^2) + ((x_new(2))^2)/(sig2^2) + ((x_new(3))^2)/(sig3^2)) /2);
end
end
end
% normalize
kernel=kernel/sum(kernel(:));
%perform summation
xm_sum=0;
for i=-mid:mid
for j=-mid:mid
for k=-mid:mid
xm_sum = xm_sum + kernel(offset+i, offset+j, offset+k) * images(c+mid+i, b+mid+j, a+mid+k);
end
end
end
result(c,b,a)=xm_sum;
end
end
toc;
end
end
I tried replacing the "calculating kernel" part with
sigma=[sig1 sig2 sig3]
[x,y,z] = ndgrid(-mid:mid,-mid:mid,-mid:mid);
k2 = arrayfun(@(x, y, z) exp(-(norm([x,y,z]*R./sigma)^2)/2), x,y,z);
but it turned out to be even slower than the loop. I went through several articles and tutorials on vectorization but I'm quite stuck with this one.
Can it be vectorized or somehow sped up using something else?
I'm new to Matlab, maybe there are some built-in functions that could help in this case?
Update
The profiling result:
Sample data which was used during profiling:
T.mat
grayImages.mat
As Dennis noted, this is a lot of code; cutting it down to the minimal slow part identified by the profiler will help. I'm not sure if my code is equivalent to yours, can you try it and profile it? The 'trick' to MATLAB vectorization is using .* and .^, which operate element-by-element instead of having to use loops. http://www.mathworks.com/help/matlab/ref/power.html
Take your rewritten part:
sigma=[sig1 sig2 sig3]
[x,y,z] = ndgrid(-mid:mid,-mid:mid,-mid:mid);
k2 = arrayfun(@(x, y, z) exp(-(norm([x,y,z]*R./sigma)^2)/2), x,y,z);
And just pick one sigma for now. Looping over 3 different sigmas isn't a performance problem if you can vectorize the underlying k2 formula.
EDIT: Changed the matrix_to_norm code to be x(:), and no commas. See Generate all possible combinations of the elements of some vectors (Cartesian product)
Then try:
% R & mid my test variables
R = [1 2 3; 4 5 6; 7 8 9];
mid = 5;
[x,y,z] = ndgrid(-mid:mid,-mid:mid,-mid:mid);
% meshgrid is also a possibility, check that you are getting the order you want
% Going to break the equation apart for now for clarity
% Matrix operation, should already be fast.
matrix_to_norm = [x(:) y(:) z(:)]*R/sig1
% Ditto
matrix_normed = norm(matrix_to_norm)
% Note the .^ - I believe you want element-by-element exponentiation, this will
% vectorize it.
k2 = exp(-0.5*(matrix_normed.^2))
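One caution on this sketch (my own observation, not from the original answer): norm(matrix_to_norm) returns a single matrix norm, so matrix_normed ends up scalar. If the goal is the per-row vector norms from the kernel formula, something along these lines (an assumption on my part) would be needed instead:
% per-row Euclidean norms instead of one matrix norm
matrix_normed = sqrt(sum(matrix_to_norm.^2, 2));
k2 = exp(-0.5*(matrix_normed.^2)); % element-by-element thanks to .^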

How to improve performance in matlab operation with 3 for loops?

I'm trying to improve the speed of the following code calculation:
for i=1:5440
for j=1:46
for k= 1:2
pol(i,j,k)= kr0*exp(0.8*k*0.1)*(abs((I(i)*exp(-0.1*j*2.5))^0.9)+0.0);
end
end
end
Where I is a vector with 5440 values.
Is there any way to avoid the three for loops and increase the speed of this operation? I can't find the right solution.
Thanks
Use bsxfun for vectorization:
f1 = @(a,b) (abs((a.*exp(-0.1*b*2.5)).^0.9)+0.0);
f2 = @(c,d) kr0*exp(0.8*c*0.1).*d;
pol = bsxfun(f2, permute(1:2, [3 1 2]), bsxfun(f1, I(:), 1:46));
Note that since the array 1:2 is on the third dimension, we need permute to convert a matrix of size 1x2 to a matrix of size 1x1x2.
Here is a benchmark for comparison:
kr0=1;
I=rand(5440,1);
[pol0, pol] = deal(zeros(5440, 46, 2));
tic
for mm = 1:10,
for i=1:5440
for j=1:46
for k= 1:2
pol0(i,j,k)= kr0*exp(0.8*k*0.1)*(abs((I(i)*exp(-0.1*j*2.5))^0.9)+0.0);
end
end
end
end
toc
tic
for mm=1:10
f1 = @(a,b) (abs((a.*exp(-0.1*b*2.5)).^0.9)+0.0);
f2 = @(c,d) kr0*exp(0.8*c*0.1).*d;
pol = bsxfun(f2, permute(1:2, [3 1 2]), bsxfun(f1, I(:), 1:46));
end
toc
isequal(pol0,pol)
Which returns
Elapsed time is 1.665443 seconds.
Elapsed time is 0.306089 seconds.
ans =
1
It is more than 5 times faster and the results are equal.
How about:
[i,j,k] = ndgrid(1:5440,1:46,1:2);
pol = kr0*exp(0.8*k*0.1) .* ( abs((I(i).*exp(-0.1*j*2.5)).^0.9) + 0.0);
MATLAB is column-major, so if you wanted to keep the loops, you should be able to get some speed up by looping your variables in the k, j, i order instead of i, j, k.
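A minimal sketch of that reordering (my own illustration of the column-major point, assuming kr0 and I are defined as in the question):
for k = 1:2
for j = 1:46
for i = 1:5440 % innermost loop walks down the first (row) index, matching column-major storage
pol(i,j,k) = kr0*exp(0.8*k*0.1)*(abs((I(i)*exp(-0.1*j*2.5))^0.9)+0.0);
end
end
end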
