I have an equation used to compute sigma, in which i is an index from 1 to N, * denotes the convolution operation, and Omega is the image domain.
I want to implement it in MATLAB. Currently, I have three options for implementing the equation. Could you look at my equation and tell me which one is correct? I have spent a lot of time looking for the difference between the methods, but I could not find it. Thanks in advance.
The difference between Method 1 and Method 2 is that Method 1 computes sigma after the loop, while Method 2 computes it inside the loop:
sigma(1:row,1:col,1:dim) = nu/d;
Does it give the same result?
=========== MATLAB code ==============
Method 1:
nu = 0;
d = 0;
I2 = I.^2;
[row,col] = size(I);
for i = 1:N
KuI2 = conv2(u(:,:,i).*I2,k,'same');
bc = b.*(c(:,:,i));
bcKuI = -2*bc.*conv2(u(:,:,i).*I,k,'same');
bc2Ku = bc.^2.*conv2(u(:,:,i),k,'same');
nu = nu + sum(sum(KuI2+bcKuI+bc2Ku));
ku = conv2(u(:,:,i),k,'same');
d = d + sum(sum(ku));
end
d = d + (d==0)*eps;
sigma(1:row,1:col,1:dim) = nu/d;
Method 2:
I2 = I.^2;
[row,col] = size(I);
for i = 1:dim
KuI2 = conv2(u(:,:,i).*I2,k,'same');
bc = b.*(c(:,:,i));
bcKuI = -2*bc.*conv2(u(:,:,i).*I,k,'same');
bc2Ku = bc.^2.*conv2(u(:,:,i),k,'same');
nu = sum(sum(KuI2+bcKuI+bc2Ku));
ku = conv2(u(:,:,i),k,'same');
d = sum(sum(ku));
d = d + (d==0)*eps;
sigma(1:row,1:col,i) = nu/d;
end
Method 3:
I2 = I.^2;
[row,col] = size(I);
for i = 1:dim
KuI2 = conv2(u(:,:,i).*I2,k,'same');
bc = b.*(c(:,:,i));
bcKuI = -2*bc.*conv2(u(:,:,i).*I,k,'same');
bc2Ku = bc.^2.*conv2(u(:,:,i),k,'same');
ku = conv2(u(:,:,i),k,'same');
d = ku + (ku==0)*eps;
sigma(:,:,i) = (KuI2+bcKuI+bc2Ku)./d;
end
sigma = sigma + (sigma==0).*eps;
I think that Method 1 assumes sigma1 = sigma2 = ... = sigmaN, because sigma is computed outside the loop as
sigma(1:row,1:col,1:dim) = nu/d;
where nu and d are the cumulative sums over all iterations.
Method 2, in contrast, gives sigma1 != sigma2 != ... != sigmaN, because each sigma is calculated inside the loop.
Hope it helps.
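A small numeric sketch of that difference (the values are made up, purely to illustrate that a ratio of sums is not the same as a per-iteration ratio):
% Hypothetical per-iteration numerators and denominators (nu_i, d_i).
nu_i = [2 9 4];
d_i  = [1 3 4];

% Method 1 style: accumulate first, divide once -> a single value for all slices.
sigma_m1 = sum(nu_i) / sum(d_i);   % 15/8 = 1.875

% Method 2 style: divide inside the loop -> one value per slice.
sigma_m2 = nu_i ./ d_i;            % [2  3  1]
Note also that Method 3 never applies sum(sum(...)), so it produces a per-pixel sigma map in each slice rather than a single scalar per slice.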
I would like to vectorize the following sum of products in order to speed up my MATLAB code. Would that be possible?
for i=1:N
A=A+hazard(i)*Z(i,:)'*Z(i,:);
end
where hazard is a vector (N x 1) and Z is a matrix (N x p).
Thanks!
With matrix multiplication only:
A = A + Z'*diag(hazard)*Z;
Note, however, that this requires more operations than Divakar's bsxfun approach, because diag(hazard) is an NxN matrix consisting mostly of zeros.
To save some time, you could define the inner matrix as sparse using spdiags, so that multiplication can be optimized:
A = A + full(Z'*spdiags(hazard, 0, zeros(N))*Z);
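A small variant of the same idea: spdiags can also build the N-by-N sparse diagonal directly from its dimensions, which avoids allocating the dense zeros(N) first (this is just an alternative call, not a different method):
A = A + full(Z' * spdiags(hazard, 0, N, N) * Z);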
Benchmarking
Timing code:
Z = rand(N,p);
hazard = rand(N,1);
timeit(@() Z'*diag(hazard)*Z)
timeit(@() full(Z'*spdiags(hazard, 0, zeros(N))*Z))
timeit(@() bsxfun(@times,Z,hazard)'*Z)
With N = 1000; p = 300;
ans =
0.1423
ans =
0.0441
ans =
0.0325
With N = 2000; p = 1000;
ans =
1.8889
ans =
0.7110
ans =
0.6600
With N = 1000; p = 2000;
ans =
1.8159
ans =
1.2471
ans =
1.2264
It is seen that the bsxfun-based approach is consistently faster.
You can use bsxfun and matrix multiplication:
A = bsxfun(@times,Z,hazard).'*Z + A
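If you are on MATLAB R2016b or newer, implicit expansion lets you write the same thing without bsxfun (this assumes a newer release; the result is identical):
A = (Z .* hazard).' * Z + A;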
I'm trying to use parfor because this loop takes over 96 seconds and I have more than one image to process, but I get this error:
The variable B in a parfor cannot be classified
This is the code I've written:
Io=im2double(imread('C:My path\0.1s.tif'));
Io=double(Io);
In=Io;
sigma=[1.8 20];
[X,Y] = meshgrid(-3:3,-3:3);
G = exp(-(X.^2+Y.^2)/(2*1.8^2));
dim = size(In);
B = zeros(dim);
c = parcluster
matlabpool(c)
parfor i = 1:dim(1)
for j = 1:dim(2)
% Extract local region.
iMin = max(i-3,1);
iMax = min(i+3,dim(1));
jMin = max(j-3,1);
jMax = min(j+3,dim(2));
I = In(iMin:iMax,jMin:jMax);
% Compute Gaussian intensity weights.
H = exp(-(I-In(i,j)).^2/(2*20^2));
% Calculate bilateral filter response.
F = H.*G((iMin:iMax)-i+3+1,(jMin:jMax)-j+3+1);
B(i,j) = sum(F(:).*I(:))/sum(F(:));
end
end
matlabpool close
Any idea?
Unfortunately, it's actually dim that is confusing MATLAB in this case. You can fix it by doing
[n, m] = size(In);
parfor i = 1:n
for j = 1:m
B(i, j) = ...
end
end
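For reference, a sketch of the full loop with that fix applied (the loop body is the same as in the question; only the sizes come from [n, m] now, and the pool setup is left out):
[n, m] = size(In);
B = zeros(n, m);

parfor i = 1:n
    for j = 1:m
        % Extract local region.
        iMin = max(i-3, 1);
        iMax = min(i+3, n);
        jMin = max(j-3, 1);
        jMax = min(j+3, m);
        I = In(iMin:iMax, jMin:jMax);

        % Compute Gaussian intensity weights.
        H = exp(-(I - In(i,j)).^2 / (2*20^2));

        % Calculate bilateral filter response.
        F = H .* G((iMin:iMax)-i+3+1, (jMin:jMax)-j+3+1);
        B(i,j) = sum(F(:).*I(:)) / sum(F(:));
    end
end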
For a fixed and given tform, the imwarp command in the Image Processing Toolbox
B = imwarp(A,tform)
is linear with respect to A, meaning there exists some sparse matrix W, depending on tform but independent of A, such that the above can be equivalently implemented as
B(:)=W*A(:)
for all A of fixed known dimensions [n,n]. My question is whether there are fast/efficient options for computing W. The matrix form is necessary when I need the transpose operation W.'*B(:), or if I need to do W\B(:) or similar linear algebraic things which I can't do directly through imwarp alone.
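For concreteness, once such a W is available, those operations become ordinary sparse linear algebra (a sketch, with the n-by-n B from above):
Bt   = reshape(W.' * B(:), n, n);   % apply the transpose/adjoint of the warp
Arec = reshape(W \ B(:), n, n);     % solve W*x = B(:) for the pre-warp image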
I know that it is possible to compute W column-by-column by doing
E = zeros(n);                     % basis image with a single nonzero pixel
W = spalloc(n^2, n^2, 4*n^2);     % preallocate the sparse warp matrix
for i = 1:n^2
    E(i) = 1;                     % i-th basis image
    tmp = imwarp(E, tform);       % warp it; this gives column i of W
    E(i) = 0;
    W(:,i) = tmp(:);
end
but this is brute force and slow.
The routine FUNC2MAT is somewhat more efficient in that it uses the loop only to compute and gather the sparse entry data I, J, S for each column W(:,i), and then, after the loop, constructs the overall sparse matrix in one call. It also offers the option of using a parfor loop. However, this is still slower than I would like.
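For illustration, a minimal sketch of that gather-then-assemble pattern (not the actual FUNC2MAT code), assuming, as the loop above already does, that imwarp(E,tform) comes back n-by-n:
E = zeros(n);
Icell = cell(n^2, 1); Jcell = cell(n^2, 1); Scell = cell(n^2, 1);
for i = 1:n^2
    E(i) = 1;
    tmp = imwarp(E, tform);          % assumed to return an n-by-n image
    E(i) = 0;
    idx = find(tmp);                 % nonzero entries of column i
    Icell{i} = idx;
    Jcell{i} = repmat(i, numel(idx), 1);
    Scell{i} = tmp(idx);
end
W = sparse(vertcat(Icell{:}), vertcat(Jcell{:}), vertcat(Scell{:}), n^2, n^2);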
Can anyone suggest more speed-optimal alternatives?
EDIT:
For those uncomfortable with my claim that imwarp(A,tform) is linear w.r.t. A, I include the demo script below, which tests that the superposition property is satisfied for random input images and tform data. It can be run repeatedly to see that the linearityError is always small and easily attributable to floating-point noise.
tform=affine2d(rand(3,2));
%tform=projective2d(rand(3));
fun=@(A) imwarp(A,tform,'cubic');
I1=rand(100); I2=rand(100);
c1=rand; c2=rand;
LHS=fun(c1*I1+c2*I2); %left hand side
RHS=c1*fun(I1)+c2*fun(I2); %right hand side
linearityError = norm(LHS(:)-RHS(:),'inf')
That's actually pretty simple:
W = sparse(B(:)/A(:));
Note that W is not unique, but this operation probably produces the most sparse result. Another way to calculate it would be
W = sparse( B(:) * pinv(A(:)) );
but that results in a much less sparse (yet still valid) result.
I constructed the warping matrix using the optical flow fields [u,v], and it works well for my application:
% this function computes the warping matrix
% M x N is the size of the image
function [ Fw ] = generateFwi( u, v, M, N )
Fw = zeros(M*N, M*N);
k = 1;
for i = 1:M
    for j = 1:N
        newcoord(1) = i + u(i,j);
        newcoord(2) = j + v(i,j);
        newi = newcoord(1);
        newj = newcoord(2);
        if newi > 0 && newj > 0
            newi1x = floor(newi);
            newi1y = floor(newj);
            newi2x = floor(newi);
            newi2y = ceil(newj);
            newi3x = ceil(newi); % four nearest points to the given point
            newi3y = floor(newj);
            newi4x = ceil(newi);
            newi4y = ceil(newj);
            x1 = [newi, newj; newi1x, newi1y];
            x2 = [newi, newj; newi2x, newi2y];
            x3 = [newi, newj; newi3x, newi3y];
            x4 = [newi, newj; newi4x, newi4y];
            w1 = pdist(x1, 'euclidean');
            w2 = pdist(x2, 'euclidean');
            w3 = pdist(x3, 'euclidean');
            w4 = pdist(x4, 'euclidean');
            if ceil(newi) == floor(newi) && ceil(newj) == floor(newj) % both the new coordinates are integers
                Fw(k, (newi1x-1)*N + newi1y) = 1;
            elseif ceil(newi) == floor(newi) % one of the new coordinates is an integer
                w = w1 + w2;
                w1new = w1/w;
                w2new = w2/w;
                W = w1new*w2new;
                y1coord = (newi1x-1)*N + newi1y;
                y2coord = (newi2x-1)*N + newi2y;
                if y1coord <= M*N && y2coord <= M*N
                    Fw(k, y1coord) = W/w2new;
                    Fw(k, y2coord) = W/w1new;
                end
            elseif ceil(newj) == floor(newj) % one of the new coordinates is an integer
                w = w1 + w3;
                w1 = w1/w;
                w3 = w3/w;
                W = w1*w3;
                y1coord = (newi1x-1)*N + newi1y;
                y2coord = (newi3x-1)*N + newi3y;
                if y1coord <= M*N && y2coord <= M*N
                    Fw(k, y1coord) = W/w3;
                    Fw(k, y2coord) = W/w1;
                end
            else % both the new coordinates are not integers
                w = w1 + w2 + w3 + w4;
                w1 = w1/w;
                w2 = w2/w;
                w3 = w3/w;
                w4 = w4/w;
                W = w1*w2*w3 + w2*w3*w4 + w3*w4*w1 + w4*w1*w2;
                y1coord = (newi1x-1)*N + newi1y;
                y2coord = (newi2x-1)*N + newi2y;
                y3coord = (newi3x-1)*N + newi3y;
                y4coord = (newi4x-1)*N + newi4y;
                if y1coord <= M*N && y2coord <= M*N && y3coord <= M*N && y4coord <= M*N
                    Fw(k, y1coord) = w2*w3*w4/W;
                    Fw(k, y2coord) = w3*w4*w1/W;
                    Fw(k, y3coord) = w4*w1*w2/W;
                    Fw(k, y4coord) = w1*w2*w3/W;
                end
            end
        else
            Fw(k,k) = 1;
        end
        k = k + 1;
    end
end
end
Suppose that I have an N-by-K matrix A and an N-by-P matrix B. I want to do the following calculations to get my final N-by-P matrix X:
X(n,p) = B(n,p) - dot(gamma(p,:),A(n,:))
where
gamma(p,k) = dot(A(:,k),B(:,p))/sum( A(:,k).^2 )
In MATLAB, my code looks like this:
for p = 1:P
for n = 1:N
for k = 1:K
gamma(p,k) = dot(A(:,k),B(:,p))/sum(A(:,k).^2);
end
x(n,p) = B(n,p) - dot(gamma(p,:),A(n,:));
end
end
which is highly inefficient since it uses three for loops. Is there a good way to speed up this code?
Use bsxfun for the division and matrix multiplication for the loops:
gamma = bsxfun(@rdivide, B.'*A, sum(A.^2));
x = B - A*gamma.';
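On MATLAB R2016b or newer, implicit expansion gives the same result without bsxfun (this assumes a newer release):
gamma = (B.' * A) ./ sum(A.^2);
x = B - A * gamma.';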
And here is a test script
N = 3;
K = 4;
P = 5;
A = rand(N, K);
B = rand(N, P);
for p = 1:P
for n = 1:N
for k = 1:K
gamma(p,k) = dot(A(:,k),B(:,p))/sum(A(:,k).^2);
end
x(n,p) = B(n,p) - dot(gamma(p,:),A(n,:));
end
end
gamma2 = bsxfun(@rdivide, B.'*A, sum(A.^2));
X2 = B - A*gamma2.';
isequal(x, X2)
isequal(gamma, gamma2)
which returns
ans =
1
ans =
1
It looks to me like you can hoist the gamma calculations out of the loop; at least, I don't see any dependence on the loop variable n in the gamma calculations.
So something like this:
for p = 1:P
for k = 1:K
gamma(p,k) = dot(A(:,k),B(:,p))/sum(A(:,k).^2);
end
end
for p = 1:P
for n = 1:N
x(n,p) = B(n,p) - dot(gamma(p,:),A(n,:));
end
end
I'm not familiar enough with your code (or MATLAB) to really know if you can merge the two loops, but if you can:
for p = 1:P
for k = 1:K
gamma(p,k) = dot(A(:,k),B(:,p))/sum(A(:,k).^2);
end
for n = 1:N
x(n,p) = B(n,p) - dot(gamma(p,:),A(n,:));
end
end
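If it helps, the inner n-loop in the merged version can also be replaced by a single matrix-vector product per p (same formula, just vectorized over n):
for p = 1:P
    for k = 1:K
        gamma(p,k) = dot(A(:,k), B(:,p)) / sum(A(:,k).^2);
    end
    x(:,p) = B(:,p) - A * gamma(p,:).';   % all n at once
end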
bsxfun is slow...
How about something like the following (I might have a transpose wrong):
modA  = A * diag(1 ./ sum(A.^2, 1));   % scale each column of A by 1/sum(A(:,k).^2)
gamma = B' * modA;
x     = B - A * gamma';
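A quick way to sanity-check this against the bsxfun answer above, on small random data (the sizes are arbitrary):
N = 3; K = 4; P = 5;
A = rand(N, K);
B = rand(N, P);

% Matrix-multiplication-only version.
modA   = A * diag(1 ./ sum(A.^2, 1));
gammaA = B' * modA;
xA     = B - A * gammaA';

% bsxfun version from the earlier answer.
gammaB = bsxfun(@rdivide, B.' * A, sum(A.^2));
xB     = B - A * gammaB.';

max(abs(xA(:) - xB(:)))   % should be at floating-point round-off level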