Divide image blocks into two chunks - image

I'm working on an image that I have divided into non-overlapping blocks, and what I want to do next is apply some changes to every pair of adjacent chunks within the same block. For example, I have a block B1 that I divided into B11 and B12.
The objective is to apply SVD to B11 and B12 and compare their singular values, S11(2,2) and S12(2,2), and to do the same for every other block. But I don't know how to work on every pair of adjacent sub-blocks; I can only apply SVD to the blocks Bi.
It seems like I need a for loop to process every pair of sub-blocks, right? Or can the function mat2tiles do this?
This is an example explaining what I mean.

I finally found a way to achieve my objective, and I'm sharing it in case somebody else needs it too:
% First, a function that divides the image into non-overlapping blocks
function B = Block(IM, p, q)
% p: number of rows per block, q: number of columns per block
[m, n] = size(IM);
z = 1;
for m1 = 1:m/p
    for n1 = 1:n/q
        if m1*p <= m && n1*q <= n
            B{1,z} = IM((m1-1)*p+1:m1*p, (n1-1)*q+1:n1*q); % copy the current p-by-q block
            z = z + 1;
        end
    end
end
end
Now I will divide each block into sub-blocks and apply SVD:
Block_Num = (m*n)/(p*q);  % total number of blocks
fun = @svd;               % the function to apply is SVD
for i = 1:Block_Num
    sv = blkproc(B{i}, [4 4], fun);  % apply SVD to every 4x4 sub-block
end
And this gets the job done. I hope it will be useful for you too.
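To actually compare the singular values of two adjacent sub-blocks, as in the original objective (S11(2,2) vs. S12(2,2)), one option is to call svd on the two halves of each block directly instead of going through blkproc. Below is a minimal sketch, assuming each block B{i} has an even number of columns and is split into a left half B11 and a right half B12; the split and the empty branches are placeholders to adapt to your own scheme:
for i = 1:Block_Num
    cols = size(B{i}, 2);
    B11 = B{i}(:, 1:cols/2);      % first sub-block
    B12 = B{i}(:, cols/2+1:end);  % adjacent sub-block
    S11 = svd(B11);               % vector of singular values of B11
    S12 = svd(B12);               % vector of singular values of B12
    if S11(2) >= S12(2)           % compare the second singular values
        % ... modification for this case ...
    else
        % ... modification for the other case ...
    end
end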

Related

Parallel loops and image processing in matlab

I am going to implement a salient object detection method based on a simple linear feedback control system (LFCS). The control system model is represented as in the following equation:
I've come up with the following code, but the result is not what it should be. Specifically, the output should be something like the following image:
But the code produces this output:
The code is as follows.
% Calculation of Euclidean distance between adjacent superpixels, stored in the variable Euc
A = imread('aa.jpg');
[rows, columns, cnumberOfColorChannels] = size(A);
[L,N] = superpixels(A,400);
%% Determination of adjacent superpixels
glcms = graycomatrix(L,'NumLevels',N,'GrayLimits',[1,N],'Offset',[0,1;1,0]); %Create gray-level co-occurrence matrix from image
glcms = sum(glcms,3); % add together the two matrices
glcms = glcms + glcms.'; % add upper and lower triangles together, make it symmetric
glcms(1:N+1:end) = 0; % set the diagonal to zero, we don't want to see "1 is neighbor of 1"
idx = label2idx(L); % Convert label matrix to cell array of linear indices
numRows = size(A,1);
numCols = size(A,2);
%%Mean color in Lab color space for each channel
data = zeros(N,3);
for labelVal = 1:N
redIdx = idx{labelVal};
greenIdx = idx{labelVal}+numRows*numCols;
blueIdx = idx{labelVal}+2*numRows*numCols;
data(labelVal,1) = mean(A(redIdx));
data(labelVal,2) = mean(A(greenIdx));
data(labelVal,3) = mean(A(blueIdx));
end
Euc=zeros(N);
%% Calculation of Euclidean distance between adjacent superpixels, stored in Euc
for a=1:N
for b=1:N
if glcms(a,b)~=0
Euc(a,b)=sqrt(((data(a,1)-data(b,1))^2)+((data(a,2)-data(b,2))^2)+((data(a,3)-data(b,3))^2));
end
end
end
%%Creation of Connectivity matrix "W" between adjacent superpixels
W=zeros(N);
W_num=zeros(N);
W_den=zeros(N);
OMG1=0.1;
for c=1:N
for d=1:N
if(Euc(c,d)~=0)
W_num(c,d)=exp(-OMG1*(Euc(c,d)));
W_den(c,c)=W_num(c,d)+W_den(c,c); %
end
end
end
%Connectivity matrix W between adjacent superpixels
for e=1:N
for f=1:N
if(Euc(e,f)~=0)
W(e,f)=(W_num(e,f))/(W_den(e,e));
end
end
end
%% Calculation of geodesic distance between nonadjacent superpixels, stored in the variable "s_star_temp"
s_star_temp=zeros(N); %temporary variable for geodesic distance measurement
W_sparse=zeros(N);
W_sparse=sparse(W);
for g=1:N
for h=1:N
if W(g,h)==0 & g~=h;
s_star_temp(g,h)=graphshortestpath(W_sparse,g,h,'directed',false);
end
end
end
%% Calculation of connectivity matrix for nonadjacent superpixels, stored in the variable "S_star"
S_star=zeros(N);
OMG2=8;
for i=1:N
for j=1:N
if s_star_temp(i,j)~=0
S_star(i,j)=exp(-OMG2*s_star_temp(i,j));
end
end
end
%%Calculation of connectivity matrix "S" for measuring connectivity between all superpixels
S=zeros(N);
S=S_star+W;
%% Defining non-isolation level for connectivity matrix "W"
g_star=zeros(N);
for k=1:N
g_star(k,k)=max(W(k,:));
end
%%Limiting the range of g_star and calculation of isolation cue matrix "G"
alpha1=0.15;
alpha2=0.85;
G=zeros(N);
for l=1:N
G(l,l)=alpha1*(g_star(l,l)- min(g_star(:)))/(max(g_star(:))- min(g_star(:)))+(alpha2 - alpha1);
end
%% Determining the superpixels surrounding the image boundary
lr = L([1,end],:);
tb = L(:,[1,end]);
labels = unique([lr(:);tb(:)]);
%% Calculation of background likelihood for each superpixel, stored in "BgLike"
sum_temp=0;
temp=zeros(1,N);
BgLike=zeros(N,1);
BgLike_num=zeros(N);
BgLike_den=zeros(N);
for m=1:N
for n=1:N
if ismember(n,labels)==1
BgLike_num(m,m)=S(m,n)+ BgLike_num(m,m);
end
end
end
for o=1:N
for p=1:N
for q=1:N
if W(p,q)~=0
temp(q)=S(o,p)-S(o,q);
end
end
sum_temp=max(temp)+sum_temp;
temp=0;
end
BgLike_den(o,o)=sum_temp;
sum_temp=0;
end
for r=1:N
BgLike(r,1)= BgLike_num(r,r)/BgLike_den(r,r);
end
%%%% Calculation of foreground likelihood for each superpixel, stored in "FgLike"
FgLike=zeros(N,1);
for s=1:N
for t=1:N
FgLike(s,1)=(exp(-BgLike(t,1))) * Euc(s,t)+ FgLike(s,1);
end
end
The code above is a prerequisite for the following section; it produces the data and matrices the next section needs, and is provided here to make the whole process reproducible.
Specifically, I think it is the next section that does not give the desired results. I'm afraid I did not properly simulate the parallelism using for loops. Moreover, the terminating conditions (implemented with for and if statements to simulate a do-while loop) are never satisfied, so the loops run until the last iteration instead of terminating when the specified condition occurs. My main concern is whether the terminating conditions are implemented properly.
The pseudo-algorithm for the following code is shown in the image below:
%%parallel operations for background and foreground implemented here
T0 = 0 ;
Tf = 20 ;
Ts = 0.1 ;
Ti = T0:Ts:Tf ;
Nt=numel(Ti);
Y_Bg=zeros(N,Nt);
Y_Fg=zeros(N,Nt);
P_Back_Bg=zeros(N,N);
P_Back_Fg=zeros(N,N);
u_Bg=zeros(N,Nt);
u_Fg=zeros(N,Nt);
u_Bg_Star=zeros(N,Nt);
u_Fg_Star=zeros(N,Nt);
u_Bg_Normalized=zeros(N,Nt);
u_Fg_Normalized=zeros(N,Nt);
tau=0.1;
sigma_Bg=zeros(Nt,N);
Temp_Bg=0;
Temp_Fg=0;
C_Bg=zeros(Nt,N);
C_Fg=zeros(Nt,N);
%%System Initialization
for u=1:N
u_Bg(u,1)=(BgLike(u,1)- min(BgLike(:)))/(max(BgLike(:))- min(BgLike(:)));
u_Fg(u,1)=(FgLike(u,1)- min(FgLike(:)))/(max(FgLike(:))- min(FgLike(:)));
end
%% P_state and P_input
P_state=G*W;
P_input=eye(N)-G;
% State Initialization
X_Bg=zeros(N,Nt);
X_Fg=zeros(N,Nt);
for v=1:20 % v starts from 1 because we have no matrices with 0th column number
%The first column of X_Bg and X_Fg is 0 for system initialization
X_Bg(:,v+1)=P_state*X_Bg(:,v) + P_input*u_Bg(:,v);
X_Fg(:,v+1)=P_state*X_Fg(:,v) + P_input*u_Fg(:,v);
v=v+1;
if v==2
C_Bg(1,:)=1;
C_Fg(1,:)=1;
else
for w=1:N
for x=1:N
Temp_Fg=S(w,x)*X_Fg(x,v-1)+Temp_Fg;
Temp_Bg=S(w,x)*X_Bg(x,v-1)+Temp_Bg;
end
C_Fg(v-1,w)=inv(X_Fg(w,v-1)+((Temp_Bg)/(Temp_Fg)*(1-X_Fg(w,v-1))));
C_Bg(v-1,w)=inv(X_Bg(w,v-1)+((Temp_Fg)/(Temp_Bg))*(1-X_Bg(w,v-1)));
Temp_Bg=0;
Temp_Fg=0;
end
end
P_Bg=diag(C_Bg(v-1,:));
P_Fg=diag(C_Fg(v-1,:));
Y_Bg(:,v)= P_Bg*X_Bg(:,v);
Y_Fg(:,v)= P_Fg*X_Fg(:,v);
for y=1:N
Temp_sig_Bg=0;
Temp_sig_Fg=0;
for z=1:N
Temp_sig_Bg = Temp_sig_Bg +S(y,z)*abs(Y_Bg(y,v)- Y_Bg(z,v));
Temp_sig_Fg = Temp_sig_Fg +S(y,z)*abs(Y_Fg(y,v)- Y_Fg(z,v));
end
if Y_Bg(y,v)>= Y_Bg(y,v-1)
sign_Bg=1;
else
sign_Bg=-1;
end
if Y_Fg(y,v)>= Y_Fg(y,v-1)
sign_Fg=1;
else
sign_Fg=-1;
end
sigma_Bg(v-1,y)=sign_Bg*Temp_sig_Bg;
sigma_Fg(v-1,y)=sign_Fg*Temp_sig_Fg;
end
%Calculation of P_Back for background and foreground
P_Back_Bg=tau*diag(sigma_Bg(v-1,:));
P_Back_Fg=tau*diag(sigma_Fg(v-1,:));
u_Bg_Star(:,v)=u_Bg(:,v-1)+P_Back_Bg*Y_Bg(:,v);
u_Fg_Star(:,v)=u_Fg(:,v-1)+P_Back_Fg*Y_Fg(:,v);
for aa=1:N %Normalization of u_Bg and u_Fg
u_Bg(aa,v)=(u_Bg_Star(aa,v)- min(u_Bg_Star(:,v)))/(max(u_Bg_Star(:,v))-min(u_Bg_Star(:,v)));
u_Fg(aa,v)=(u_Fg_Star(aa,v)- min(u_Fg_Star(:,v)))/(max(u_Fg_Star(:,v))-min(u_Fg_Star(:,v)));
end
if (max(abs(Y_Fg(:,v)-Y_Fg(:,v-1)))<=0.0118) &&(max(abs(Y_Bg(:,v)-Y_Bg(:,v-1)))<=0.0118) %% epsilon= 0.0118
break;
end
end
Finally, the saliency map is generated by the following code.
K=4;
T=0.4;
phi_1=(2-(1-T)^(K-1))/((1-T)^(K-2));
phi_2=(1-T)^(K-1);
phi_3=1-phi_1;
for bb=1:N
Y_Output_Preliminary(bb,1)=Y_Fg(bb,v)/((Y_Fg(bb,v)+Y_Bg(bb,v)));
end
for hh=1:N
Y_Output(hh,1)=(phi_1*(T^K))/(phi_2*(1-Y_Output_Preliminary(hh,1))^K+(T^K))+phi_3;
end
V_rs=zeros(N);
V_Final=zeros(rows,columns);
for cc=1:rows
for dd=1:columns
V_rs(cc,dd)=Y_Output(L(cc,dd),1);
end
end
maxDist = 10; % Maximum chessboard distance from image
wSF=zeros(rows,columns);
wSB=zeros(rows,columns);
% Get the range of x and y indices whose chessboard distance from pixel (0,0) is less than 'maxDist'
xRange = (-(maxDist-1)):(maxDist-1);
yRange = (-(maxDist-1)):(maxDist-1);
% Create a meshgrid to get the pairs of (x,y) of the pixels
[pointsX, pointsY] = meshgrid(xRange, yRange);
pointsX = pointsX(:);
pointsY = pointsY(:);
% Remove pixel (0,0)
pixIndToRemove = (pointsX == 0 & pointsY == 0);
pointsX(pixIndToRemove) = [];
pointsY(pixIndToRemove) = [];
for ee=1:rows
for ff=1:columns
% Get a shifted copy of 'pointsX' and 'pointsY' that is centered
% around (x, y)
pointsX1 = pointsX + ee;
pointsY1 = pointsY + ff;
% Remove the pixels that are out of the image bounds
inBounds =...
pointsX1 >= 1 & pointsX1 <= rows &...
pointsY1 >= 1 & pointsY1 <= columns;
pointsX1 = pointsX1(inBounds);
pointsY1 = pointsY1(inBounds);
% Do stuff with 'pointsX1' and 'pointsY1'
wSF_temp=0;
wSB_temp=0;
for gg=1:size(pointsX1)
Temp=exp(-OMG1*(sqrt(double(A(pointsX1(gg),pointsY1(gg),1))-double(A(ee,ff,1)))^2+(double(A(pointsX1(gg),pointsY1(gg),2))-double(A(ee,ff,2)))^2 + (double(A(pointsX1(gg),pointsY1(gg),3))-double(A(ee,ff,3)))^2));
wSF_temp=wSF_temp+(Temp*V_rs(pointsX1(gg),pointsY1(gg)));
wSB_temp=wSB_temp+(Temp*(1-V_rs(pointsX1(gg),pointsY1(gg))));
end
wSF(ee,ff)= wSF_temp;
wSB(ee,ff)= wSB_temp;
V_Final(ee,ff)=V_rs(ee,ff)/(V_rs(ee,ff)+(wSB(ee,ff)/wSF(ee,ff))*(1-V_rs(ee,ff)));
end
end
imshow(V_Final,[]); %%Saliency map of the image
Part of your terminating criterion is this:
max(abs(Y_a(:,t)-Y_a(:,t-1)))<=eps
Say Y_a tends to 2. You are really close... In fact, the closest you can get without subsequent values being identical is Y_a(t)-Y_a(t-1) == 4.4409e-16. If the two values were any closer, their difference would be 0, because this is the precision with which floating-point values can be represented. So you have reached this fantastic level of closeness to your goal. Subsequent iterations are changing the target value by the smallest possible amount, 4.4409e-16. But your test is returning false! Why? Because eps == 2.2204e-16!
eps is shorthand for eps(1), the difference between 1 and the next representable larger value. Because of how floating-point values are represented, this difference is half the difference between 2 and the next representable larger value (which is given by eps(2)).
However, if Y_a tends to 1e-16, subsequent iterations could double or halve the value of Y_a and you'd still meet the stopping criterion!
Thus, what you need is to come up with a reasonable stopping criterion that is a fraction of the target value, something like this:
max(abs(Y_a(:,t)-Y_a(:,t-1))) <= 1e6 * eps(max(abs(Y_a(:,t))))
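For illustration, here is a small sketch you can run at the MATLAB prompt to see these magnitudes, followed by the same stopping test in relative form (the last two lines reuse the Y_a and t notation from above, and the 1e6 factor is an arbitrary safety margin you should tune):
eps(1)           % 2.2204e-16, same as plain eps
eps(2)           % 4.4409e-16, the spacing of doubles near 2
2 + eps(1) == 2  % true: the increment is smaller than the spacing near 2
2 + eps(2) == 2  % false: this increment is exactly the spacing near 2
tol = 1e6 * eps(max(abs(Y_a(:,t))));                 % relative tolerance
converged = max(abs(Y_a(:,t) - Y_a(:,t-1))) <= tol;  % stopping test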
Unsolicited advice
You should really look into vectorized operations in MATLAB. For example,
for y=1:N
Temp_sig_a=0;
for z=1:N
Temp_sig_a = Temp_sig_a + abs(Y_a(y,t)- Y_a(z,t));
end
sigma_a(t-1,y)= Temp_sig_a;
end
can be written as
for y=1:N
sigma_a(t-1,y) = sum(abs(Y_a(y,t) - Y_a(:,t)));
end
which in turn can be written as
sigma_a(t-1,:) = sum(abs(Y_a(:,t).' - Y_a(:,t)));
Avoiding loops is not only usually more efficient, but it also leads to shorter code that is easier to read.
Also, this:
P_FB_a = diag(sigma_a(t-1,:));
u_a(:,t) = u_a(:,t-1) + P_FB_a * Y_a(:,t);
is the same as
u_a(:,t) = u_a(:,t-1) + sigma_a(t-1,:).' .* Y_a(:,t);
but of course creating a diagonal matrix and doing a matrix multiplication with so many zeros is much more expensive than directly computing an element-wise multiplication.

Grouping data using loops (signal processing in MATLAB)

I am working in MATLAB with signal data that consists of consecutive dips, as shown below. I am trying to write code that sorts the contents of each dip into a separate group. What should the general structure of such code look like?
The following is my data. I am only interested in the portion of the signal that lies below a certain threshold d (the red line):
And here is the desired grouping:
Here is an unsuccessful attempt:
k=0; % Group number
for i = 1 : length(signal)
if signal(i) < d
k=k+1;
while signal(i) < d
NewSignal(i, k) = signal(i);
i = i + 1;
end
end
end
The code above generated 310 groups instead of the desired 12 groups.
Any explanation would be greatly appreciated.
Taking Benl's generated data, you can do the following:
%generate data
x=1:1000;
y=sin(x/20);
for ii=1:9
y=y+-10*exp(-(x-ii*100).^2./10);
end
y=awgn(y,4);
%set threshold
t=-4;
%threshold data
Y = char(double(y<t) + '0'); %// convert to string of zeros and ones
%search for start and ends
This idea is taken from here
[s, e] = regexp(Y, '1+', 'start', 'end');
%and now plot and see that each pair of starts and end
% represents a group
plot(x,y)
hold on
for k=1:numel(s)
line(s(k)*ones(2,1),ylim,'Color','k','LineStyle','--')
line(e(k)*ones(2,1),ylim,'Color','k','LineStyle','-')
end
hold off
legend('Data','Starts','Ends')
Comments: First of all, I chose an arbitrary threshold; it is up to you to find the "best" one for your data. Additionally, I didn't group the data explicitly; rather, this approach gives you the start and end of each epoch containing a dip (which you might call a group), so each pair of start and end indices identifies one group (a sketch of the explicit grouping follows below). Finally, I did not debug this approach for corner cases, such as dips falling at the very start or end of the signal...
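A minimal sketch of that explicit grouping, assuming the s and e vectors returned by the regexp call above (the variable name groups is just an illustration):
groups = arrayfun(@(a,b) y(a:b), s, e, 'UniformOutput', false);
% groups{k} holds the y-values of the k-th dip; x(s(k):e(k)) gives its x-values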
In MATLAB you cannot change the loop index of a for loop. A for loop:
for i = array
loops over each column of array in turn. In your code, 1 : length(signal) is an array, and each of its elements is visited in turn. Inside this loop there is a while loop that increments i. However, when this while loop ends and the next iteration of the for loop runs, i is reset to the next item in the array.
This code therefore needs two while loops:
i = 1; % Index
k = 0; % Group number
while i <= numel(signal)
if signal(i) < d
k = k + 1;
while signal(i) < d
NewSignal(i,k) = signal(i);
i = i + 1;
end
end
i = i + 1;
end
Easy, the function you're looking for is bwlabel, which when combined with logical indexing makes this simple.
To start I made some fake data which resembled your data
x=1:1000;
y=sin(x/20);
for ii=1:9
y=y+-10*exp(-(x-ii*100).^2./10);
end
y=awgn(y,4);
plot(x,y)
Then set your threshold and use 'bwlabel'
d=-4;% set the threshold
groupid=bwlabel(y<d);
bwlabel labels connected groups in a black-and-white image. What we've effectively done here is make a black-and-white (logical 0 and 1) 1D image in the logical vector y<d. bwlabel returns the number of the region at the index of the region. We're not interested in the 0 region, so to get the x values or y values of the nth region, simply use x(groupid==n). For example, with my test data:
x_4=x(groupid==4)
y_4=y(groupid==4)
x_4 = 398 399 400 401 402
y_4 = -5.5601 -7.8280 -9.1965 -7.9083 -5.8751
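To collect every group at once rather than one at a time, here is a minimal sketch along the same lines (the cell array names are just an illustration):
nGroups = max(groupid);  % number of detected dips
xGroups = arrayfun(@(n) x(groupid==n), 1:nGroups, 'UniformOutput', false);
yGroups = arrayfun(@(n) y(groupid==n), 1:nGroups, 'UniformOutput', false);
% xGroups{n} and yGroups{n} hold the samples of the n-th dip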

Compute double sum in matlab efficiently?

I am looking for an optimal way to program this summation ratio. As input I have two vectors v_mn and x_mn with (M*N)x1 elements each.
The ratio is of the form:
The vector x_mn is a 0-1 vector, so when x_mn=1 the ratio is the r given above, and when x_mn=0 the ratio is 0.
The vector v_mn is a vector which contains real numbers.
I computed the denominator like this, but it takes a lot of time.
function r_ij = denominator(v_mn, M, N, i, j)
%here x_ij=1, to get r_ij.
S = [];
for m = 1:M
for n = 1:N
if (m ~= i)
if (n ~= j)
S = [S v_mn(i, n)];
else
S = [S 0];
end
else
S = [S 0];
end
end
end
r_ij = 1+S;
end
Can you suggest a good way to do this in MATLAB? You can ignore the ratio and give me just the denominator, which is the more complicated part.
EDIT: I am sorry I did not explain it very well. The i and j are some numbers between 1..M and 1..N respectively. As you can see, the ratio r takes many values (M*N values), so here I calculated only the value for a given i and j. More precisely, I assumed x_ij=1. Also, I converted the vectors v_mn into a matrix; that's why I use a double index.
If you reshape your data, your summation is just a repeated matrix/vector multiplication.
Here's an implementation for a single m and n, along with a simple speed/equality test:
clc
%# some arbitrary test parameters
M = 250;
N = 1000;
v = rand(M,N); %# (you call it v_mn)
x = rand(M,N); %# (you call it x_mn)
m0 = randi(M,1); %# m of interest
n0 = randi(N,1); %# n of interest
%# "Naive" version
tic
S1 = 0;
for mm = 1:M %# (you call this m')
if mm == m0, continue; end
for nn = 1:N %# (you call this n')
if nn == n0, continue; end
S1 = S1 + v(m0,nn) * x(mm,nn);
end
end
r1 = v(m0,n0)*x(m0,n0) / (1+S1);
toc
%# MATLAB version: use matrix multiplication!
tic
minds = [1:m0-1 m0+1:M]; %# row (m) indices, excluding m0
ninds = [1:n0-1 n0+1:N]; %# column (n) indices, excluding n0
S2 = sum( x(minds, ninds) * v(m0, ninds).' );
r2 = v(m0,n0)*x(m0,n0) / (1+S2);
toc
%# Test if values are equal
abs(r1-r2) < 1e-12
Outputs on my machine:
Elapsed time is 0.327004 seconds. %# loop-version
Elapsed time is 0.002455 seconds. %# version with matrix multiplication
ans =
1 %# and yes, both are equal
So the speedup is ~133×
Now that's for a single value of m and n. To do this for all values of m and n, you can use an (optimized) double loop around it:
r = zeros(M,N);
for m0 = 1:M
xx = x([1:m0-1 m0+1:M], :);
vv = v(m0,:).';
for n0 = 1:N
ninds = [1:n0-1 n0+1:N];
denom = 1 + sum( xx(:,ninds) * vv(ninds) );
r(m0,n0) = v(m0,n0)*x(m0,n0)/denom;
end
end
which completes in ~15 seconds on my PC for M = 250, N= 1000 (R2010a).
EDIT: actually, with a little more thought, I was able to reduce it all down to this:
denom = zeros(M,N);
for mm = 1:M
xx = x([1:mm-1 mm+1:M],:);
denom(mm,:) = sum( xx*v(mm,:).' ) - sum( bsxfun(@times, xx, v(mm,:)) );
end
denom = denom + 1;
r_mn = x.*v./denom;
which completes in less than 1 second for M = 250 and N = 1000 :)
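As a quick sanity check (a sketch, assuming the double-loop result r from above and this vectorized r_mn were computed on the same x and v), the two should agree to within rounding error:
max(abs(r(:) - r_mn(:)))  % expect a value near machine precision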
For a start you need to pre-allocate your S matrix. It changes size on every loop iteration, so put
S = zeros(M*N, 1);
at the start of your function. This will also allow you to do away with your else conditional statements, i.e. they will reduce to this:
if (m ~= i)
if (n ~= j)
S((m-1)*N + n) = v_mn(i, n);
Otherwise, since you have to visit every element, I'm afraid it may not get much faster.
If you desperately need more speed you can look into MEX coding, which is code written in C/C++ but run from MATLAB.
http://www.mathworks.com.au/help/matlab/matlab_external/introducing-mex-files.html
Rather than first jumping into vectorization of the double loop, you may want to modify the above to make sure that it does what you want. In this code there is no summing of the data; instead, a vector S is being resized at each iteration. As well, the signature could include the matrices V and X so that the multiplication occurs as in the formula (rather than just relying on the value of X being zero or one, let us pass that matrix in).
The function could look more like the following (I've replaced the i,j inputs with m,n to be more like the equation):
function result = denominator(V,X,m,n)
% use the size of V to determine M and N
[M,N] = size(V);
% initialize the summed value to one (to account for one at the end)
result = 1;
% outer loop
for i=1:M
% ignore the case where m==i
if i~=m
for j=1:N
% ignore the case where n==j
if j~=n
result = result + V(m,j)*X(i,j);
end
end
end
end
Note how the first if is outside of the inner for loop since it does not depend on j. Try the above and see what happens!
You can vectorize from within MATLAB to speed up your calculations. Every time you use an element-wise operation like .^ or .*, or any matrix operation for that matter, MATLAB runs it with optimized internal code, which is much, much faster than iterating over each item yourself.
In this case, look at what you are doing in terms of matrices. First, in your loop you are only dealing with the m-th row of V, which we can treat as a vector in its own right.
If you look at your formula carefully, you can see that you almost get there if you just write this row as a column vector and multiply it from the left by the matrix X, using standard matrix multiplication. The resulting vector contains the sums over all n. To get the final result, just sum up this vector.
function result = denominator_vectorized(V,X,m,n)
% get the part of V with the first index m
Vm = V(m,:)';
% remove the parts of X you don't want to iterate over. Note that, since I
% am inside the function, I am only editing the value of X within the scope
% of this function.
X(m,:) = 0;
X(:,n) = 0;
%do the matrix multiplication and the summation at once
result = 1 + sum(X*Vm);
To show you how this optimizes your operation, I will compare it to the code proposed by another commenter:
function result = denominator(V,X,m,n)
% use the size of V to determine M and N
[M,N] = size(V);
% initialize the summed value to one (to account for one at the end)
result = 1;
% outer loop
for i=1:M
% ignore the case where m==i
if i~=m
for j=1:N
% ignore the case where n==j
if j~=n
result = result + V(m,j)*X(i,j);
end
end
end
end
The test:
V=rand(10000,10000);
X=rand(10000,10000);
disp('looped version')
tic
denominator(V,X,1,1)
toc
disp('matrix operation')
tic
denominator_vectorized(V,X,1,1)
toc
The result:
looped version
ans =
2.5197e+07
Elapsed time is 4.648021 seconds.
matrix operation
ans =
2.5197e+07
Elapsed time is 0.563072 seconds.
That is almost ten times the speed of the loop iteration. So, always look out for possible matrix operations in your code. If you have the Parallel Computing Toolbox installed and a CUDA-enabled graphics card, MATLAB will even perform these operations on your graphics card without any further effort on your part!
EDIT: That last bit is not entirely true. You still need to take a few steps to do operations on CUDA hardware, but they aren't a lot. See Matlab documentation.
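Those few steps essentially amount to moving the data onto the GPU and gathering the result back; here is a minimal sketch using gpuArray/gather from the Parallel Computing Toolbox (assuming a supported GPU with enough memory is available):
Vg = gpuArray(V);   % copy the inputs to the GPU
Xg = gpuArray(X);
resultg = denominator_vectorized(Vg, Xg, 1, 1);  % the same code runs on gpuArray inputs
result = gather(resultg);   % copy the result back to the CPU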

Vectorizing distance calculation between vectors

I have a 3 x 1000 (and later 3 x 10000) matrix cord, which contains the three-dimensional coordinates of my pixels.
My intention is to calculate the distance between all the pixels, and I do it with a for loop (see below), but I will soon have to calculate this for huge matrices and am wondering if I could vectorize the code to make it faster.
dist = zeros(size(cord,2),size(cord,2));
for i = 1:size(cord,2)
for j = 1:size(cord,2)
dist(i,j) = norm(cord(:,i)-cord(:,j));
dist(j,i) = dist(i,j);
end
end
pdist does exactly that. squareform is needed to get the result in the form of a square, symmetric matrix:
dist = squareform(pdist(cord.'));
Approach 1 (vectorized approach with bsxfun):
squeeze(sqrt(sum(bsxfun(@minus,cord,permute(cord,[1 3 2])).^2)))
Not sure if this will be faster though.
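As a side note (assuming you are on a recent MATLAB release): since R2016b, implicit expansion lets you drop bsxfun and write the same computation as
squeeze(sqrt(sum((cord - permute(cord,[1 3 2])).^2)))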
Approach 2:
Inspired by this very smart approach, and all credit to the poster. The code posted here is just slightly customized for your case and hopefully slightly better in terms of runtime. Here it is:
A = cord'; %//'
numA = size(cord,2);
helpA = ones(numA,9);
helpB = ones(numA,9);
for idx = 1:3
sqA_idx = A(:,idx).^2;
helpA(:,3*idx-1:3*idx) = [-2*A(:,idx), sqA_idx ];
helpB(:,3*idx-2:3*idx-1) = [sqA_idx , A(:,idx)];
end
dist1 = sqrt(helpA * helpB'); %// desired output
From your code, you have recognized that the dist matrix is symmetric
dist(i,j) = norm(cord(:,i)-cord(:,j));
dist(j,i) = dist(i,j);
You could change the inner loop to account for this and reduce by roughly one half the number of calculations needed
for j = i:size(cord,2)
Further, we can avoid the dist(j,i) = dist(i,j); at each iteration and just do that at the end by extracting the upper triangle part of dist and adding its transpose to the dist matrix to account for the symmetry
dist = zeros(size(cord,2),size(cord,2));
for i = 1:size(cord,2)
for j = i:size(cord,2)
dist(i,j) = norm(cord(:,i)-cord(:,j));
end
end
dist = dist + triu(dist)';
The above addition is fine since the main diagonal is all zeros.
It still performs poorly though and so we should take advantage of vectorization. We can do that as follows against the inner loop
dist = zeros(size(cord,2),size(cord,2));
for i = 1:size(cord,2)
dist(i,i+1:end) = sum((repmat(cord(:,i),1,size(cord,2)-i)-cord(:,i+1:end)).^2);
end
dist = dist + triu(dist)';
dist = sqrt(dist);
For every element in cord we need to calculate its distance with all other elements that follow it. We reproduce the element with repmat so that we can subtract it from every element that follows without the need for the loop. The differences are squared and summed and assigned to the dist matrix. We take care of the symmetry and then take the square root of the matrix to complete the norm operation.
With tic and toc, the original distance calculation with a random cord (cord = rand(3,num);) took ~93 seconds. This version took ~2.8 seconds.

MATLAB loop optimization

I have a matrix, matrix_logical(50000,100000), that is a sparse logical matrix (mostly false, some true). I have to produce a matrix, intersect(50000,50000), that, for each pair i,j of rows of matrix_logical(50000,100000), stores the number of columns for which rows i and j both have the value "true".
Here is the code I wrote:
% store in advance the nonzeros cols
for i=1:50000
nonzeros{i} = num2cell(find(matrix_logical(i,:)));
end
intersect = zeros(50000,50000);
for i=1:49999
a = cell2mat(nonzeros{i});
for j=(i+1):50000
b = cell2mat(nonzeros{j});
intersect(i,j) = numel(intersect(a,b));
end
end
Is it possible to further increase the performance? It takes too long to compute the matrix. I would like to avoid the double loop in the second part of the code.
matrix_logical is sparse, but it is not stored as sparse in MATLAB, because otherwise the performance becomes far worse.
Since the [i,j] entry counts the number of non zero elements in the element-wise multiplication of rows i and j, you can do it by multiplying matrix_logical with its transpose (you should convert to numeric data type first, e.g matrix_logical = single(matrix_logical)):
inter = matrix_logical * matrix_logical';
And it works both for sparse or full representation.
EDIT
In order to calculate numel(intersect(a,b))/numel(union(a,b)); (as asked in your comment), you can use the fact that for two sets a and b, you have
length(union(a,b)) = length(a) + length(b) - length(intersect(a,b))
so, you can do the following:
unLen = sum(matrix_logical,2);
tmp = repmat(unLen, 1, length(unLen)) + repmat(unLen', length(unLen), 1);
inter = matrix_logical * matrix_logical';
inter = inter ./ (tmp-inter);
If I understood you correctly, you want a logical AND of the rows:
intersct = zeros(50000, 50000)
for ii = 1:49999
for jj = ii:50000
intersct(ii, jj) = sum(matrix_logical(ii, :) & matrix_logical(jj, :));
intersct(jj, ii) = intersct(ii, jj);
end
end
Doesn't avoid the double loop, but at least works without the first loop and the slow find command.
Elaborating on my comment, here is a distance function suitable for pdist()
function out = distfun(xi,xj)
out = zeros(size(xj,1),1);
for i=1:size(xj,1)
out(i) = sum(sum( xi & xj(i,:) )) / sum(sum( xi | xj(i,:) ));
end
In my experience, sum(sum()) is faster for logicals than nnz(), thus its appearance above.
You would also need to use squareform() to reshape the output of pdist() appropriately:
squareform(pdist(matrix_logical, @distfun));
Note that pdist() includes a 'jaccard' distance measure, but it is actually the Jaccard distance and not the Jaccard index or coefficient, which is the value you are apparently after.
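If you do want to use the built-in measure anyway, the Jaccard index can be recovered from the Jaccard distance as index = 1 - distance. A minimal sketch (converting the logical matrix to double, since pdist expects numeric input):
jaccard_index = 1 - squareform(pdist(double(matrix_logical), 'jaccard'));
% off-diagonal entries are the pairwise Jaccard indices of the rows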
