stationary wavelet transform (MATLAB) - image

Can anyone please explain what is being done in the following code?
The code performs image fusion using the Stationary Wavelet Transform.
%image decomposition using discrete stationary wavelet transform
[A1L1,H1L1,V1L1,D1L1] = swt2(im1,1,'sym2');
[A2L1,H2L1,V2L1,D2L1] = swt2(im2,1,'sym2');
[A1L2,H1L2,V1L2,D1L2] = swt2(A1L1,1,'sym2');
[A2L2,H2L2,V2L2,D2L2] = swt2(A2L1,1,'sym2');
% fusion at level2
AfL2 = 0.5*(A1L2+A2L2); % <-- what are these equations?
D = (abs(H1L2)-abs(H2L2))>=0;
HfL2 = D.*H1L2 + (~D).*H2L2;
D = (abs(V1L2)-abs(V2L2))>=0;
VfL2 = D.*V1L2 + (~D).*V2L2;
D = (abs(D1L2)-abs(D2L2))>=0;
DfL2 = D.*D1L2 + (~D).*D2L2;
% fusion at level1
D = (abs(H1L1)-abs(H2L1))>=0;
HfL1 = D.*H1L1 + (~D).*H2L1;
D = (abs(V1L1)-abs(V2L1))>=0;
VfL1 = D.*V1L1 + (~D).*V2L1;
D = (abs(D1L1)-abs(D2L1))>=0;
DfL1 = D.*D1L1 + (~D).*D2L1;
% fused image
AfL1 = iswt2(AfL2,HfL2,VfL2,DfL2,'sym2');
imf = iswt2(AfL1,HfL1,VfL1,DfL1,'sym2');

Here AfL2, HfL2, VfL2, DfL2 at fusion level 2 are the
approximation coefficients,
horizontal detail coefficients,
vertical detail coefficients, and
diagonal detail coefficients,
and the same holds at the next level. The equations are the fusion rules: the approximation coefficients of the two source images are simply averaged, while for each detail band the logical mask D selects, pixel by pixel, the coefficient with the larger absolute value, i.e. the stronger edge response.
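As a minimal sketch of those two rules (my paraphrase with hypothetical variable names, not code from the post):
Af = 0.5*(A1 + A2);      % average the approximation (low-frequency) bands
D  = abs(H1) >= abs(H2); % per-pixel mask: true where image 1 has the stronger detail
Hf = D.*H1 + (~D).*H2;   % keep the stronger detail coefficient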
It is really important to read the concept documents once so that you can understand the implementation easily. You can find the information at the following link; you can go directly to the block diagram, which explains the concept, and then to the physical implementation:
http://ijeetc.com/ijeetcadmin/upload/IJEETC_50e52508758cf.pdf

Related

Why is Python's Laplacian method different from the research paper?

I found a Python library for Laplacian Score feature selection, but its implementation seems different from the research paper.
I implemented the selection method according to the algorithm from the paper (https://papers.nips.cc/paper/2909-laplacian-score-for-feature-selection.pdf).
However, I found a Python library that implements the Laplacian method (https://github.com/jundongl/scikit-feature/blob/master/skfeature/function/similarity_based/lap_score.py).
So to check that my implementation was correct, I ran both versions on my dataset and got different answers. While debugging, I saw that the library uses a different formula when calculating the affinity matrix (the S matrix from the paper).
The paper uses this formula:
S_ij = exp(-||x_i - x_j||^2 / t)
while the library uses:
W_ij = exp(-norm(x_i - x_j) / (2*t^2))
Further investigation revealed that the library calculates the affinity matrix as follows:
t = kwargs['t']
# compute pairwise euclidean distances
D = pairwise_distances(X)
D **= 2
# sort the distance matrix D in ascending order
dump = np.sort(D, axis=1)
idx = np.argsort(D, axis=1)
idx_new = idx[:, 0:k+1]
dump_new = dump[:, 0:k+1]
# compute the pairwise heat kernel distances
dump_heat_kernel = np.exp(-dump_new/(2*t*t))
G = np.zeros((n_samples*(k+1), 3))
G[:, 0] = np.tile(np.arange(n_samples), (k+1, 1)).reshape(-1)
G[:, 1] = np.ravel(idx_new, order='F')
G[:, 2] = np.ravel(dump_heat_kernel, order='F')
# build the sparse affinity matrix W
W = csc_matrix((G[:, 2], (G[:, 0], G[:, 1])), shape=(n_samples, n_samples))
bigger = np.transpose(W) > W
W = W - W.multiply(bigger) + np.transpose(W).multiply(bigger)
return W
I'm not sure why the library squares each value in the distance matrix. I see that it also does some reordering, and it uses a different heat kernel formula.
So I'd just like to know whether either resource (the paper or the library) is wrong, whether they're somehow equivalent, or why they differ.
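One observation (my own reasoning, not a confirmed answer): because the code squares D before exponentiating, the library actually computes W_ij = exp(-||x_i - x_j||^2 / (2*t^2)), while the paper's kernel is S_ij = exp(-||x_i - x_j||^2 / t). The two agree under the reparameterization t_paper = 2*t_library^2, so the remaining differences are the k-nearest-neighbor sparsification and the symmetrization at the end.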

Calculate first, second, third derivative on a 3D image

I am having trouble calculating the first, second, and third derivatives of a 3D image in MATLAB.
I have 60 slices of a knee MRI in DICOM format, and I want to calculate the derivatives.
For a 2D image, when we want the derivative in the x or y direction, we use, for example, a Sobel (or similar) operator oriented along that direction.
But for the 3D volume built from my 60 DICOM slices, how can I calculate the first, second, and third derivatives in the x, y, and z directions?
I implemented the first derivative like this, where F is the 3D matrix holding all the slices and [k,l,m] = size(F), but I don't think it is correct. Please help:
switch direction % select the differentiation axis
case 'x'
D(1,:,:) = F(2,:,:) - F(1,:,:);               % forward difference at the first slice
D(k,:,:) = F(k,:,:) - F(k-1,:,:);             % backward difference at the last slice
D(2:k-1,:,:) = (F(3:k,:,:) - F(1:k-2,:,:))/2; % central differences in the interior
case 'y'
D(:,1,:) = F(:,2,:) - F(:,1,:);
D(:,l,:) = F(:,l,:) - F(:,l-1,:);
D(:,2:l-1,:) = (F(:,3:l,:) - F(:,1:l-2,:))/2;
case 'z'
D(:,:,1) = F(:,:,2) - F(:,:,1);
D(:,:,m) = F(:,:,m) - F(:,:,m-1);
D(:,:,2:m-1) = (F(:,:,3:m) - F(:,:,1:m-2))/2;
end
There is a function for that! Look up imgradient3 (https://www.mathworks.com/help/images/ref/imgradient3.html), which has options to select the kind of gradient computation; sobel is the default.
If you'd like directional gradients, consider imgradientxyz (https://www.mathworks.com/help/images/ref/imgradientxyz.html), which has the same options available but returns the directional gradients Gx, Gy, and Gz.
volData = load('mri');
sz = volData.siz;
vol = squeeze(volData.D);
[Gx, Gy, Gz] = imgradientxyz(vol);
Note that these functions were introduced in R2016a.
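If you are on an older release, a minimal alternative (my suggestion, not from the documentation above) is MATLAB's built-in gradient, which handles N-D arrays using central differences:
[Gx, Gy, Gz] = gradient(double(vol)); % note: Gx is along dim 2 (columns), Gy along dim 1 (rows)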
The "first derivative" in higher dimensions is called a gradient vector. There are many formulas to numerically approximate the gradient, and one of the most accurate approaches is disccused in a recent paper: "High Order Spatial Generalization of 2D and 3D Isotropic Discrete Gradient Operators with Fast Evaluation on GPUs" by Leclaire et al.
Higher-order derivatives in more than one dimension are tensors. The "second derivative" in particular is a rank-2 tensor and has 6 independent components, which to the lowest-order approximation (assuming unit voxel spacing) are
Dxx(x,y,z) = F(x+1,y,z) - 2*F(x,y,z) + F(x-1,y,z)
Dyy(x,y,z) = F(x,y+1,z) - 2*F(x,y,z) + F(x,y-1,z)
Dzz(x,y,z) = F(x,y,z+1) - 2*F(x,y,z) + F(x,y,z-1)
Dxy(x,y,z) = (F(x+1,y+1,z) - F(x+1,y-1,z) - F(x-1,y+1,z) + F(x-1,y-1,z))/4
Dxz(x,y,z) = (F(x+1,y,z+1) - F(x+1,y,z-1) - F(x-1,y,z+1) + F(x-1,y,z-1))/4
Dyz(x,y,z) = (F(x,y+1,z+1) - F(x,y+1,z-1) - F(x,y-1,z+1) + F(x,y-1,z-1))/4
The "third derivative" will be a rank-3 tensor and will have even more components. The formulas are lenghty and can be derived by considering a Taylor series expansion of F up to the 3rd order

How do I efficiently create a BW mask for this microscopic image?

Some background: I was tasked with writing a MATLAB program to count the number of yeast cells in visible-light microscopy images. To do that, I think the first step is cell segmentation. Before I got the real experimental image set, I developed a watershed-based algorithm on a test image set, which looks like this:
The first step of watershed is generating a BW mask for the cells. From the BW mask I then generate a bwdist image with imposed local minima, and with that I can compute the watershed easily.
As you can see, my algorithm relies on successfully generating the BW mask, because I need to derive the bwdist image and the markers from it. Originally, I generated the BW mask with the following steps (a sketch of how they fit together follows the list):
Generate the local standard deviation of the image: sdImage = stdfilt(grayImage, ones(9))
Threshold it to get the initial BW mask: binaryImage = sdImage < 8;
Use imclearborder to clear the background, then use some other code to add the cells on the border back.
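For reference, a minimal sketch of how those steps might fit together (my own assembly; grayImage is assumed to be already loaded, and the minimum depth passed to imextendedmin is a guess, not a value from this post):
sdImage = stdfilt(grayImage, ones(9));     % local standard deviation
binaryImage = sdImage < 8;                 % low-texture regions become foreground
binaryImage = imclearborder(binaryImage);  % drop regions touching the image border
D = -bwdist(~binaryImage);                 % inverted distance transform: cell centers become basins
D = imimposemin(D, imextendedmin(D, 2));   % impose local minima to use as markers
L = watershed(D);                          % label matrix of segmented cells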
That's the background; here is my problem.
Today I received the new, real data sets. The image resolution is much smaller, the lighting conditions are different from the test image set, and the color depth is also much smaller. These changes make my algorithm useless. Here it is:
stdfilt no longer generates a good, clean image; instead it generates output like this (note: I have adjusted the parameters of the stdfilt function and the BW threshold value, and the following is the best result I can get):
As you can see, there are light pixels in the centers of the cells that are not necessarily darker than the membrane, which leads the BW thresholding to generate output like this:
The BW images after thresholding have either incomplete membranes or fragmented cell centers, which makes them unsuitable for the subsequent steps.
I only started image processing recently and have no idea how I should proceed. If you have an idea, please help me! Thanks!
For your convenience, I have attached a Dropbox link to a subset of the images.
I think there's a fundamental problem in your approach. Your algorithm uses stdfilt in order to binarize the image. But what that essentially means is that you're assuming low "texture" in the background and within the cell. This works for your first image. However, in your second image there is "texture" within the cell, so this assumption is broken.
I think a stronger assumption is that there is a "ring" around each cell (valid for both images you posted). So I took the approach of detecting this ring instead.
So my approach is essentially:
Detect these rings (I use a 'log' filter and then binarize on positive values). However, this results in a lot of "chatter".
Remove some of the initial "chatter" by filtering out very small and very large regions.
Now fill in these rings. However, some "chatter" and some filled regions between cells remain.
Again remove small and large regions, but since the cells are now filled, increase the bounds for what is acceptable.
There are still some bad regions, and most of the bad areas are regions between cells. Regions between cells are detectable by observing the curvature around the boundary of the region: they "bend inwards" a lot, which is expressed mathematically as a large portion of the boundary having negative curvature. Also, to remove the rest of the "chatter", these regions have a large standard deviation in the curvature of their boundary, so boundaries with a large standard deviation are removed as well.
Overall, the most difficult part will be removing regions between cells and the "chatter" without removing the actual cells.
Anyway, here's the code (note there are a lot of heuristics and also it's very rough and based on code from older projects, homeworks, and stackoverflow answers so it's definitely far from finished):
cell = im2double(imread('cell1.png'));
if (size(cell,3) == 3)
cell = rgb2gray(cell);
end
figure(1), subplot(3,2,1)
imshow(cell,[]);
% Detect edges
hw = 5;
cell_filt = imfilter(cell, fspecial('log',2*hw+1,1));
subplot(3,2,2)
imshow(cell_filt,[]);
% Binarize, crop the filter half-width (hw) border, then filter out non-cell regions
mask = cell_filt > 0;
hw = 5;
mask = mask(hw:end-hw-1,hw:end-hw-1);
subplot(3,2,3)
imshow(mask,[]);
rp = regionprops(mask, 'PixelIdxList', 'Area');
rp = rp(vertcat(rp.Area) > 50 & vertcat(rp.Area) < 2000);
mask(:) = false;
mask(vertcat(rp.PixelIdxList)) = true;
subplot(3,2,4)
imshow(mask,[]);
% Now fill objects
mask1 = true(size(mask) + hw);
mask1(hw+1:end, hw+1:end) = mask;
mask1 = imfill(mask1,'holes');
mask1 = mask1(hw+1:end, hw+1:end);
mask2 = true(size(mask) + hw);
mask2(hw+1:end, 1:end-hw) = mask;
mask2 = imfill(mask2,'holes');
mask2 = mask2(hw+1:end, 1:end-hw);
mask3 = true(size(mask) + hw);
mask3(1:end-hw, 1:end-hw) = mask;
mask3 = imfill(mask3,'holes');
mask3 = mask3(1:end-hw, 1:end-hw);
mask4 = true(size(mask) + hw);
mask4(1:end-hw, hw+1:end) = mask;
mask4 = imfill(mask4,'holes');
mask4 = mask4(1:end-hw, hw+1:end);
mask = mask1 | mask2 | mask3 | mask4;
% Filter out large and small regions again
rp = regionprops(mask, 'PixelIdxList', 'Area');
rp = rp(vertcat(rp.Area) > 100 & vertcat(rp.Area) < 5000);
mask(:) = false;
mask(vertcat(rp.PixelIdxList)) = true;
subplot(3,2,5)
imshow(mask);
% Filter out regions with lots of positive concavity
% Get boundaries
[B,L] = bwboundaries(mask);
% Cycle over boundaries
for i = 1:length(B)
b = B{i};
% Filter boundary - use circular convolution
b(:,1) = cconv(b(:,1),fspecial('gaussian',[1 7],1)',size(b,1));
b(:,2) = cconv(b(:,2),fspecial('gaussian',[1 7],1)',size(b,1));
% Find curvature
curv_vec = zeros(size(b,1),1);
for j = 1:size(b,1)
p_b = b(mod(j-2,size(b,1))+1,:); % p_b = point before
p_m = b(mod(j,size(b,1))+1,:); % p_m = point middle
p_a = b(mod(j+2,size(b,1))+1,:); % p_a = point after
dx_ds = p_a(1)-p_m(1); % First derivative
dy_ds = p_a(2)-p_m(2); % First derivative
ddx_ds = p_a(1)-2*p_m(1)+p_b(1); % Second derivative
ddy_ds = p_a(2)-2*p_m(2)+p_b(2); % Second derivative
curv_vec(j) = dx_ds*ddy_ds-dy_ds*ddx_ds;
end
if (sum(curv_vec > 0)/length(curv_vec) > 0.4 || std(curv_vec) > 2.0)
L(L == i) = 0;
end
end
mask = L ~= 0;
subplot(3,2,6)
imshow(mask,[])
Output1:
Output2:

Questions about Gabor filter parameters

I am trying to implement a 2D Gabor filter, but I don't understand several of its parameters. For example, I use a general form of the 2D Gabor filter like
h(x, y, f, theta, sigma_x, sigma_y) = exp(-0.5 * (x_theta^2/sigma_x^2 + y_theta^2/sigma_y^2)) * cos(2*pi*f*x_theta),
i.e. an even-symmetric Gabor filter.
The question is: what do sigma_x and sigma_y mean?
In most papers, they are just presented as the 'standard deviation' of the Gaussian envelope along x and y. This has confused me for several days.
I read several Gabor implementations, and these two parameters don't directly determine the size of the filter. They are processed either as
if (isnan(SigmaX)==1) | isempty(SigmaX),
SigmaX = (3*sqrt(2*log(2)))/(2*pi*CtrFreq);
end
if (isnan(SigmaY)==1) | isempty(SigmaY),
SigmaY=sqrt(2*log(2))/(2*pi*tan(pi/8)*CtrFreq);
end
xlim=round(nstd*(SigmaX*abs(cos(Angle))+SigmaY*abs(sin(Angle))));
ylim=round(nstd*(SigmaY*abs(cos(Angle))+SigmaX*abs(sin(Angle))));
In this case, nstd is said to be the length of the impulse response, which I don't understand.
or
sigmax = wavelength*kx;
sigmay = wavelength*ky;
So I just wonder how I can determine the size of the filter.
Because of this, I also have several questions about wavelength and bandwidth (I don't have any background in signal processing).
In the second case above, wavelength is multiplied by kx or ky.
Why can't we use sigma_x or sigma_y directly?
What does this wavelength mean? Is it the size of the Gabor filter?
What does bandwidth mean? Is it the size of the Gabor filter?
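Regarding filter size, a common convention (my understanding, not something stated in the quoted code): the Gaussian envelope is effectively zero beyond about three standard deviations, so implementations usually extend the kernel nstd*sigma from the center in each direction; that appears to be what the xlim/ylim lines above compute, with an extra correction for the rotation angle. A minimal sketch:
nstd = 3;                       % cover ~99.7% of the Gaussian envelope
half_x = ceil(nstd * sigma_x);  % kernel half-width along x
half_y = ceil(nstd * sigma_y);  % kernel half-height along y
[x, y] = meshgrid(-half_x:half_x, -half_y:half_y);  % sampling grid for h(x, y)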
I implemented a simple program, but it doesn't seem correct:
function [GR, GI, G] = yGabora(f, sigma_x, sigma_y, theta)
the = theta * pi/180; % degrees to radians
% Rotation matrix
Rot = [ cos(the) sin(the);
-sin(the) cos(the)];
% Calculate the Gabor filter over a support of +/- one sigma
for x = -sigma_x:1:sigma_x
for y = -sigma_y:1:sigma_y
% Rotated sampling position for the Gaussian envelope
tmpRet = Rot*[x, y]';
xt = tmpRet(1);
yt = tmpRet(2);
h_even(x+sigma_x+1, y+sigma_y+1) = exp(-0.5*(xt^2/sigma_x^2 + yt^2/sigma_y^2)) * cos(2*pi*f*xt);
h_odd(x+sigma_x+1, y+sigma_y+1) = exp(-0.5*(xt^2/sigma_x^2 + yt^2/sigma_y^2)) * sin(2*pi*f*xt);
end
end
% Real part of G
GR = h_even;
% Imaginary part of G (as a purely imaginary matrix)
GI = 1i*h_odd;
% Complex Gabor filter
G = GR + GI;
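A quick way to exercise the function (my example values; note the loop bounds require integer sigmas, and a support of only one sigma truncates the Gaussian envelope, which may be part of why the output looks wrong):
[GR, GI, G] = yGabora(0.1, 8, 6, 30);
figure; imagesc(real(G)); axis image; colormap gray;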

Plot images as axis labels in MATLAB

I am plotting a 7x7 pixel 'image' in MATLAB, using the imagesc command:
imagesc(conf_matrix, [0 1]);
This represents a confusion matrix, between seven different objects. I have a thumbnail picture of each of the seven objects that I would like to use as the axes tick labels. Is there an easy way to do this?
I don't know an easy way. The axes property XTickLabel, which determines the labels, can only contain strings.
If you want a not-so-easy way, you could do something in the spirit of the following incomplete code (incomplete in the sense of not being a full solution), which creates one label:
h = imagesc(rand(7,7));
axh = gca;
figh = gcf;
xticks = get(gca,'xtick');
yticks = get(gca,'ytick');
set(gca,'XTickLabel','');
set(gca,'YTickLabel','');
pos = get(axh,'position'); % position of current axes in parent figure
pic = imread('coins.png');
x = pos(1);
y = pos(2);
dlta = (pos(3)-pos(1)) / length(xticks); % square size in units of the parent figure
% create image label
lblAx = axes('parent',figh,'position',[x+dlta/4,y-dlta/2,dlta/2,dlta/2]);
imagesc(pic,'parent',lblAx)
axis(lblAx,'off')
One problem is that the label will have the same colormap as the original image.
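A possible workaround (my suggestion; it assumes R2014b or later, where colormap accepts an axes handle) is to give the label axes its own colormap:
colormap(lblAx, gray); % per-axes colormap, independent of the main image's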
@Itmar Katz gives a solution very close to what I want to do, which I've marked as accepted. In the meantime, I made this quick-and-dirty solution using subplots, which I give here for completeness. It only works up to a certain input matrix size, though, and only displays well when the figure is square.
conf_mat = randn(5);
A = imread('peppers.png');
tick_images = {A, A, A, A, A};
n = length(conf_mat) + 1;
% plotting axis labels at left and top
for i = 1:(n-1)
subplot(n, n, i + 1);
imshow(tick_images{i});
subplot(n, n, i * n + 1);
imshow(tick_images{i});
end
% generating logical array for where the confusion matrix should be
idx = 1:(n*n);
idx(1:n) = 0;
idx(mod(idx, n)==1) = 0;
% plotting the confusion matrix
subplot(n, n, find(idx~=0));
imshow(conf_mat);
axis image
colormap(gray)
