Calculate first, second, and third derivatives on a 3D image

I really have a problem calculating the first, second, and third derivatives of a 3D image in MATLAB.
I have 60 slices of a knee MRI in DICOM format, and I want to calculate derivatives. For a 2D image, when we want the derivative in the x or y direction, we use, for example, a Sobel (or similar) operator along that direction. But for the 3D volume built from my 60 DICOM slices, how can I calculate the first, second, and third derivatives in the x, y, and z directions?
I implemented the first derivative like this, inside a switch over the requested direction. F is the 3D matrix holding all slices, and [k,l,m] = size(F). But I don't think it's correct; please help me:
case 'x'
    D(1,:,:)     = F(2,:,:) - F(1,:,:);
    D(k,:,:)     = F(k,:,:) - F(k-1,:,:);
    D(2:k-1,:,:) = (F(3:k,:,:) - F(1:k-2,:,:))/2;
case 'y'
    D(:,1,:)     = F(:,2,:) - F(:,1,:);
    D(:,l,:)     = F(:,l,:) - F(:,l-1,:);
    D(:,2:l-1,:) = (F(:,3:l,:) - F(:,1:l-2,:))/2;
case 'z'
    D(:,:,1)     = F(:,:,2) - F(:,:,1);
    D(:,:,m)     = F(:,:,m) - F(:,:,m-1);
    D(:,:,2:m-1) = (F(:,:,3:m) - F(:,:,1:m-2))/2;

There is a function for that! Look at imgradient3 (https://www.mathworks.com/help/images/ref/imgradient3.html), which has options to choose the kind of gradient computation; sobel is the default.
If you'd like directional gradients, consider imgradientxyz (https://www.mathworks.com/help/images/ref/imgradientxyz.html), which has the same options but returns the directional gradients Gx, Gy and Gz.
volData = load('mri');
sz = volData.siz;
vol = squeeze(volData.D);
[Gx, Gy, Gz] = imgradientxyz(vol);
Note that these functions were introduced in R2016a.
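On releases before that, a rough substitute (my suggestion, not an exact equivalent) is the base gradient function, which uses plain central differences rather than the Sobel-smoothed derivatives of imgradientxyz:
%// Central-difference fallback for pre-R2016a, reusing vol from above
[Gx, Gy, Gz] = gradient(double(vol));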

The "first derivative" in higher dimensions is called a gradient vector. There are many formulas to numerically approximate the gradient, and one of the most accurate approaches is disccused in a recent paper: "High Order Spatial Generalization of 2D and 3D Isotropic Discrete Gradient Operators with Fast Evaluation on GPUs" by Leclaire et al.
Higher order derivatives in more than one dimension are tensors. The "second derivative" in particular is a rank-2 tensor and has 6 independent components, which to the lowest order approximation (assuming unit voxel spacing) are
Dxx(x,y,z) = F(x+1,y,z) - 2*F(x,y,z) + F(x-1,y,z)
Dyy(x,y,z) = F(x,y+1,z) - 2*F(x,y,z) + F(x,y-1,z)
Dzz(x,y,z) = F(x,y,z+1) - 2*F(x,y,z) + F(x,y,z-1)
Dxy(x,y,z) = (F(x+1,y+1,z) - F(x+1,y-1,z) - F(x-1,y+1,z) + F(x-1,y-1,z))/4
Dxz(x,y,z) = (F(x+1,y,z+1) - F(x+1,y,z-1) - F(x-1,y,z+1) + F(x-1,y,z-1))/4
Dyz(x,y,z) = (F(x,y+1,z+1) - F(x,y+1,z-1) - F(x,y-1,z+1) + F(x,y-1,z-1))/4
The "third derivative" will be a rank-3 tensor and will have even more components. The formulas are lenghty and can be derived by considering a Taylor series expansion of F up to the 3rd order

Related

High Pass Butterworth Filter on images in MATLAB

I need to implement a high pass Butterworth filter in MATLAB for the purposes of image filtering. I have implemented one but it looks like it doesn't work. Here is the code I have written. Can anyone tell me what is wrong?
n=1;
d=50;
A=1.5;
im=imread('imagex.jpg');
h=size(im,1);
w=size(im,2);
[x y]=meshgrid(-floor(w/2):floor(w-1/2),-floor(h/2):floor(h-1/2));
hhp=(1./(d./(x.^2+y.^2).^0.5).^(2*n));
image_2Dfilter=fftshift(fft2(im));
Image_butterworth=image_2Dfilter;
imshow(Image_butterworth);
ifftshow(Image_butterworth);
For one thing, there is no such command called ifftshow. Secondly, you aren't filtering anything. All you're doing is visualizing the spectrum of the image.
In terms of visualizing the spectrum, how you're doing it right now is very dangerous. You are visualizing the coefficients at each spatial frequency component, and these coefficients are complex-valued. If you want to visualize the spectrum in a way that makes sense to most of us, it's better to look at either the magnitude or the phase; since this is a Butterworth filter, it's the magnitude of the spectrum we care about.
You can find the magnitude of the spectrum by using the abs function. Even when you do that, if you call imshow directly on the magnitude, you will get a visualization that is zero everywhere except for the middle. This is because the DC component is so large that the rest of the spectrum looks small in comparison.
Let me show you an example. This is the cameraman image that is part of the image processing toolbox:
im = imread('cameraman.tif');
figure;
imshow(im);
Now, let's visualize the spectrum, ensuring that the DC component is in the centre of the image - you already did this with fftshift. It's also a good idea to cast the image to double for the best precision. In addition, make sure you apply abs to find the magnitude:
fftim = fftshift(fft2(double(im)));
mag = abs(fftim);
figure;
imshow(mag, []);
As you can see, it's not very useful, for the reason I mentioned. A better way to visualize the spectrum of the image is to apply a log transformation, which compresses the dynamic range so it fits better for display. In other words, add 1 to the magnitude, then take the logarithm so that large values taper off. It doesn't matter which base you use, so I'll just use the natural logarithm, which is encapsulated by the log command:
figure;
imshow(log(1 + mag), []);
Now that's much better. Let's get to your filtering mechanism. Your Butterworth filter is slightly incorrect: the meshgrid of coordinates is wrong. The -1 at the end of each interval needs to go outside the floor:
[x y]=meshgrid(-floor(w/2):floor(w/2)-1,-floor(h/2):floor(h/2)-1);
Remember, you are defining a symmetric interval about the centre of the image, and what you had originally wasn't correct. I'd also like to mention that this looks like a high-pass filter, so the output should look like an edge detection. In addition, the definition of the Butterworth high-pass filter is incorrect. The correct definition of the filter in the frequency domain is:
H(u,v) = 1 / (1 + B*(D0 / D(u,v))^(2*n))
D(u,v) is the distance from the centre of the image in the frequency domain, D0 is the cutoff distance and B is a scale factor controlling the gain at the cutoff distance. n is the order of the filter. D0 in your case is d = 50. In practice, B = sqrt(2) - 1 so that at the cutoff distance D0, H(u,v) = 1 / sqrt(2) = 0.707, which is the 3 dB cutoff mostly seen in electronic circuit filters. Sometimes you'll see B set to 1 for simplicity, but it's common to set B = sqrt(2) - 1.
However, your current code isn't doing any filtering. To filter in the frequency domain, you simply multiply the spectrum of the image with the spectrum of the filter itself. This is equivalent to convolution in the spatial domain. Once you do that, you simply undo the fftshift that was performed on the image, take the inverse FFT and then eliminate any imaginary components that are due to numerical imprecision. Also, let's cast to uint8 to make sure that we respect the original image type.
That can be done like so:
%// Your code with meshgrid fix
n=1;
d=50;
h=size(im,1);
w=size(im,2);
fftim = fftshift(fft2(double(im)));
[x y]=meshgrid(-floor(w/2):floor(w/2)-1,-floor(h/2):floor(h/2)-1);
%hhp=(1./(d./(x.^2+y.^2).^0.5).^(2*n));
%%%%%%// New code
B = sqrt(2) - 1; %// Define B
D = sqrt(x.^2 + y.^2); %// Define distance to centre
hhp = 1 ./ (1 + B * ((d ./ D).^(2 * n)));
out_spec_centre = fftim .* hhp;
%// Uncentre spectrum
out_spec = ifftshift(out_spec_centre);
%// Inverse FFT, get real components, and cast
out = uint8(real(ifft2(out_spec)));
%// Show image
imshow(out);
If you want to see what the filtered spectrum looks like, just do this:
figure;
imshow(log(1 + abs(out_spec_centre)), []);
We get:
This makes sense. You see that the middle of the spectrum is slightly darker than its outer edges. That's because the high-pass Butterworth filter attenuates the lower frequency terms while preserving the higher ones, which show up as higher intensity.
Now, out contains your filtered image, and we finally get this:
That looks like a fine result! However, naively casting the image to uint8 truncates any negative values to 0 and any values greater than 255 to 255. Because this is an edge detection, you want to detect both the negative and positive transitions, so a good idea is to normalize the output to the range [0,1], multiply by 255, and then cast to uint8. This way, no change in the image is visualized as gray, negative changes are visualized as dark, and positive changes are visualized as white. You'd do something like this:
%// Your code with meshgrid fix
n=1;
d=50;
h=size(im,1);
w=size(im,2);
fftim = fftshift(fft2(double(im)));
[x y]=meshgrid(-floor(w/2):floor(w/2)-1,-floor(h/2):floor(h/2)-1);
%hhp=(1./(d./(x.^2+y.^2).^0.5).^(2*n));
%%%%%%// New code
B = sqrt(2) - 1; %// Define B
D = sqrt(x.^2 + y.^2); %// Define distance to centre
hhp = 1 ./ (1 + B * ((d ./ D).^(2 * n)));
out_spec_centre = fftim .* hhp;
%// Uncentre spectrum
out_spec = ifftshift(out_spec_centre);
%// Inverse FFT, get real components
out = real(ifft2(out_spec));
%// Normalize and cast
out = (out - min(out(:))) / (max(out(:)) - min(out(:)));
out = uint8(255*out);
%// Show image
imshow(out);
We get this:
I think that you should work a little bit differently:
n=1;
D0=50; % use D0 for the cutoff; d usually denotes the distance (u^2+v^2)^(1/2)
A=1.5; % normally the amplitude is 1
im=imread('cameraman.tif');
[M,N]=size(im); % an easy way to get h and w
% compute the 2D Fourier transform so we can multiply in the frequency domain
F=fft2(double(im));
% build the M x N frequency grid, arranged to match the unshifted FFT
u=0:(M-1);
v=0:(N-1);
idx=find(u>M/2);
u(idx)=u(idx)-M;
idy=find(v>N/2);
v(idy)=v(idy)-N;
[V,U]=meshgrid(v,u);
D=sqrt(U.^2+V.^2);
H=A*(1./(1 + (D0./D).^(2*n)));
% multiply element by element
G=H.*F;
g=real(ifft2(double(G)));
subplot(1,2,1); imshow(im); title('Input image');
subplot(1,2,2); imshow(g,[]); title('Filtered image');

Vector decomposition in matlab

This is my situation: I have a 30x30 image, and for each pixel (i,j) I want to calculate the radial and tangential components of the gradient with respect to the straight line passing through the centre of the image (15,15) and that point.
[dx, dy] = gradient(img);
for i=1:30
    for j=1:30
        pt = [dx(i, j), dy(i,j)];
        line = [i-15, j-15];
        costh = dot(line, pt)/(norm(line)*norm(pt));
        par(i,j) = norm(costh*line);
        tang(i,j) = norm(sin(acos(costh))*line);
    end
end
is this code correct?
I think there is a conceptual error in your code. I tried to reproduce your results with a different approach; see how it compares to yours.
[dy, dx] = gradient(img);
I inverted x and y because the usual convention in MATLAB is to have the first dimension along the rows of a matrix, while gradient does the opposite.
I created an array of the same size as img but with each pixel containing the angle of the vector from the center of the image to this point:
[I,J] = ind2sub(size(img), 1:numel(img));
theta=reshape(atan2d(I-ceil(size(img,1)/2), J-ceil(size(img,2)/2)), size(img))+180;
The function atan2d ensures that the 4 quadrants give distinct angle values.
Now the projections onto the radial and tangential directions can be obtained with trigonometry:
par=dx.*sind(theta)+dy.*cosd(theta);
tang=dx.*cosd(theta)-dy.*sind(theta);
(Note the minus sign in tang: the tangential direction is perpendicular to the radial one, so one of the two terms must change sign.)
Note the use of .* to achieve point-by-point multiplication; this is a big advantage of MATLAB's matrix computations, which saves you a loop.
Here's an example with a well-defined input image (no gradient along the rows and a constant gradient along the columns):
img=repmat(1:30, [30 1]);
The results:
subplot(1,2,1)
imagesc(par)
subplot(1,2,2)
imagesc(tang)
colorbar

Calculating translation value and rotation angle of a rotated 2D image

I have two images, one of which is the original image and the second a transformed version of it.
I have to find how many degrees the transformed image was rotated, using a 3x3 transformation matrix. I also need to find how far it was translated from the origin.
Both images are grayscale and held in matrix variables. They have the same size, [350 500].
I have found a few lecture notes like this.
The lecture notes give the standard matrix formula for rotation, and likewise a formula for the translation matrix (both appear in the code below). Everything is good, but there are two problems:
I could not imagine how to implement the formulas using MATLAB.
The formulas are shaped to find the x',y' values, but I have already got the x,x',y,y' values. I need to find the rotation angle (theta) and the translations tx and ty.
I want to know the equivalents of x, x', y, y' in the matrix.
I have got the following code:
rotationMatrix = [ cos(theta)  sin(theta)  0; ...
                  -sin(theta)  cos(theta)  0; ...
                       0           0       1];
translationMatrix = [ 1  0  tx; ...
                      0  1  ty; ...
                      0  0   1];
But as you can see, the variables tx, ty and theta are used before they are defined. How can I calculate theta, tx and ty?
PS: It is forbidden to use Image Processing Toolbox functions.
This is essentially a homography recovery problem. Given co-ordinates in one image and the corresponding co-ordinates in the other image, you are trying to recover the combined translation and rotation matrix that was used to warp the points from one image to the other.
You can combine the rotation and translation into a single matrix by multiplying the two matrices together; multiplying simply composites the two operations. You would thus get:
H = [cos(theta)  -sin(theta)  tx]
    [sin(theta)   cos(theta)  ty]
    [    0            0        1]
The idea behind this is to find the parameters by minimizing the error through least squares between each pair of points.
Basically, what you want to find is the following relationship:
xi_after = H*xi_before
H is the combined rotation and translation matrix required to map the co-ordinates from the one image to the other. H is also a 3 x 3 matrix, and knowing that the lower right entry (row 3, column 3) is 1, it makes things easier. Also, assuming that your points are in the augmented co-ordinate system, we essentially want to find this relationship for each pair of co-ordinates from the first image (x_i, y_i) to the other (x_i', y_i'):
[p_i*x_i']   [h11 h12 h13]   [x_i]
[p_i*y_i'] = [h21 h22 h23] * [y_i]
[  p_i   ]   [h31 h32  1 ]   [ 1 ]
The scale factor p_i accounts for homography scaling and vanishing points. Let's perform a matrix-vector multiplication of this equation. We can ignore the 3rd element as it isn't useful to us (for now):
p_i*x_i' = h11*x_i + h12*y_i + h13
p_i*y_i' = h21*x_i + h22*y_i + h23
Now let's take a look at the 3rd element. We know that p_i = h31*x_i + h32*y_i + 1. As such, substituting p_i into each of the equations, and rearranging to solve for x_i' and y_i', we thus get:
x_i' = h11*x_i + h12*y_i + h13 - h31*x_i*x_i' - h32*y_i*x_i'
y_i' = h21*x_i + h22*y_i + h23 - h31*x_i*y_i' - h32*y_i*y_i'
What you have here now are two equations for each unique pair of points. What we can do now is build an over-determined system of equations. Take each pair and build two equations out of them. You will then put it into matrix form, i.e.:
Ah = b
A is a matrix of coefficients built from each pair of equations using the co-ordinates from the first image, b stacks the corresponding points from the second image, and h contains the parameters you are solving for. Ultimately, you are solving the linear system of equations Ah = b.
You solve for the vector h through least squares. In MATLAB, you can do this via:
h = A \ b;
A sidenote for you: if the movement between images is truly just a rotation and translation, then h31 and h32 will both be (nearly) zero after we solve for the parameters. However, I always like to be thorough, so I will solve for h31 and h32 anyway.
NB: This method will only work if you have at least 4 unique pairs of points. There are 8 parameters to solve for and each point pair gives 2 equations, so A must have rank 8 for the system to have a unique solution (to throw some linear algebra terminology into the mix). You will not be able to solve this problem with fewer than 4 points.
If you want some MATLAB code, let's assume that your points are stored in sourcePoints and targetPoints. sourcePoints are from the first image and targetPoints are for the second image. Obviously, there should be the same number of points between both images. It is assumed that both sourcePoints and targetPoints are stored as M x 2 matrices. The first columns contain your x co-ordinates while the second columns contain your y co-ordinates.
numPoints = size(sourcePoints, 1);
%// Cast data to double to be sure
sourcePoints = double(sourcePoints);
targetPoints = double(targetPoints);
%//Extract relevant data
xSource = sourcePoints(:,1);
ySource = sourcePoints(:,2);
xTarget = targetPoints(:,1);
yTarget = targetPoints(:,2);
%//Create helper vectors
vec0 = zeros(numPoints, 1);
vec1 = ones(numPoints, 1);
xSourcexTarget = -xSource.*xTarget;
ySourcexTarget = -ySource.*xTarget;
xSourceyTarget = -xSource.*yTarget;
ySourceyTarget = -ySource.*yTarget;
%//Build matrix
A = [xSource ySource vec1 vec0 vec0 vec0 xSourcexTarget ySourcexTarget; ...
vec0 vec0 vec0 xSource ySource vec1 xSourceyTarget ySourceyTarget];
%//Build RHS vector
b = [xTarget; yTarget];
%//Solve homography by least squares
h = A \ b;
%// Reshape to a 3 x 3 matrix (optional)
%// Must transpose as reshape is performed
%// in column major format
h(9) = 1; %// Add in that h33 is 1 before we reshape
hmatrix = reshape(h, 3, 3)';
Once you are finished, you have a combined rotation and translation matrix. If you want the x and y translations, simply pick off column 3, rows 1 and 2 in hmatrix. Working with the vector h itself, h13 is element 3 and h23 is element 6. If you want the angle of rotation, apply the appropriate inverse trigonometric function to rows 1, 2 and columns 1, 2; for the h vector, these are elements 1, 2, 4 and 5. There will be a bit of inconsistency between these elements because the system was solved by least squares, so one way to get a good overall angle is to find the angles from all 4 elements and average them. Either way, this is a good starting point.
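For instance, a minimal extraction sketch (assuming hmatrix from the code above, and that the motion really was rigid so the upper-left 2 x 2 block is close to a rotation):
tx = hmatrix(1,3);                         %// x translation
ty = hmatrix(2,3);                         %// y translation
theta = atan2(hmatrix(2,1), hmatrix(1,1)); %// rotation angle in radians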
References
I learned about homography a while ago through Leow Wee Kheng's Computer Vision course. What I have told you is based on his slides: http://www.comp.nus.edu.sg/~cs4243/lecture/camera.pdf. Take a look at slides 30-32 if you want to know where I pulled this material from. However, the MATLAB code I wrote myself :)

Recognizing edges based on points and normals

I have a bit of a problem categorizing points based on relative normals.
What I would like to do is use the information shown below to fit a simplified polygon to the points, with some bias towards 90 degree angles.
I have rough (although not very accurate) normal lines for each point, but I'm not sure how to separate the data based on closeness of points and closeness of the normals. I plan to do a linear regression after chunking the points for each face, as the normal lines sometimes do not fit well with the actual faces (although they are close to each other within each face).
Example: http://a.imageshack.us/img842/8439/ptnormals.png
Ideally, I would like to be able to fit a rectangle around this data. However, the polygon does not need to be convex, nor does it have to be aligned with the axis.
Any hints as to how to achieve something like this would be awesome.
Thanks in advance
I am not sure if this is what you are looking for, but here's my attempt at solving the problem as I understood it:
I am using the angles of the normal vectors to find points belonging to each side of the rectangle (left, right, up, down), then simply fit a line to each.
%# create random data (replace those with your actual data)
num = randi([10 20]);
pT = zeros(num,2);
pT(:,1) = rand(num,1);
pT(:,2) = ones(num,1) + 0.01*randn(num,1);
aT = 90 + 10*randn(num,1);
num = randi([10 20]);
pB = zeros(num,2);
pB(:,1) = rand(num,1);
pB(:,2) = zeros(num,1) + 0.01*randn(num,1);
aB = 270 + 10*randn(num,1);
num = randi([10 20]);
pR = zeros(num,2);
pR(:,1) = ones(num,1) + 0.01*randn(num,1);
pR(:,2) = rand(num,1);
aR = 0 + 10*randn(num,1);
num = randi([10 20]);
pL = zeros(num,2);
pL(:,1) = zeros(num,1) + 0.01*randn(num,1);
pL(:,2) = rand(num,1);
aL = 180 + 10*randn(num,1);
pts = [pT;pR;pB;pL]; %# x/y coords
angle = mod([aT;aR;aB;aL],360); %# angle in degrees [0,360]
%# plot points and normals
plot(pts(:,1), pts(:,2), 'o'), hold on
theta = angle * pi / 180;
quiver(pts(:,1), pts(:,2), cos(theta), sin(theta), 0.4, 'Color','g')
hold off
%# divide points based on angle
[~,bin] = histc(angle,[0 45 135 225 315 360]);
bin(bin==5) = 1; %# combine last and first bin
%# fit line to each segment
hold on
for i=1:4
    %# indices of points in this segment
    idx = ( bin == i );
    %# x/y or y/x
    if i==2||i==4, xx=1; yy=2; else xx=2; yy=1; end
    %# fit line
    coeff = polyfit(pts(idx,xx), pts(idx,yy), 1);
    fit(:,1) = 0:0.05:1;
    fit(:,2) = polyval(coeff, fit(:,1));
    %# plot fitted line
    plot(fit(:,xx), fit(:,yy), 'Color','r', 'LineWidth',2)
end
hold off
I'd try the following:
1. Cluster the points based on proximity and similar angle (see the sketch after this list). I'd use single-linkage hierarchical clustering (LINKAGE in MATLAB), since you don't know a priori how many edges there will be. Single linkage favors linear structures, which is exactly what you're looking for. As the distance criterion between two points you can use the Euclidean distance between point coordinates multiplied by a function of the angle difference that increases steeply once the angles differ by more than, say, 20 or 30 degrees.
2. Do (robust) linear regression on the data. Using the normals may or may not help; my guess is that they won't help much. For simplicity, you may want to disregard the normals initially.
3. Find the intersections between the lines.
4. If you have to, you can always try to improve the fit, for example by constraining opposite lines to be parallel.
If that fails, you could try to implement the approach in THIS PAPER, which allows fitting multiple straight lines at once.
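A minimal sketch of step 1 (assuming pts and angle as in the answer above; the 30-degree threshold, the penalty factor and the cutoff are arbitrary choices):
%# pairwise Euclidean distances, and angle differences wrapped to [0,180]
Dxy  = pdist(pts);
Dang = pdist(angle, @(a,b) abs(mod(b-a+180,360)-180));
%# steeply penalize pairs whose normals differ by more than ~30 degrees
w    = 1 + 1e3*(Dang > 30);
Z    = linkage(Dxy .* w, 'single');  %# single-linkage clustering
grp  = cluster(Z, 'cutoff', 0.2, 'criterion', 'distance'); %# arbitrary cut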
You could get the mean value for the X and Y coordinates for each side and then just make lines based on that.

calculating parameters for defining subsections of quadratic bezier curves

I have a quadratic bezier curve described as (startX, startY) to (anchorX, anchorY) and using a control point (controlX, controlY).
I have two questions:
(1) I want to determine y points on that curve based on an x point.
(2) Then, given a line-segment on my bezier (defined by two intermediary points on my bezier curve (startX', startY', anchorX', anchorY')), I want to know the control point for that line-segment so that it overlaps the original bezier exactly.
Why? I want this information for an optimization. I am drawing lots of horizontal beziers. When the beziers are larger than the screen, performance suffers because the rendering engine ends up rendering beyond the extents of what is visible. The answers to this question will let me just render what is visible.
Part 1
The formula for a quadratic Bezier is:
B(t) = a(1-t)^2 + 2bt(1-t) + ct^2
     = a(1 - 2t + t^2) + 2bt - 2bt^2 + ct^2
     = (a-2b+c)t^2 + 2(b-a)t + a
where a, b and c are vectors (the start, control and end points). With Bx(t) given, we have:
x = (ax-2bx+cx)t^2 + 2(bx-ax)t + ax
where vx denotes the x component of v.
According to the quadratic formula,
t = [-2(bx-ax) ± 2√((bx-ax)^2 - ax(ax-2bx+cx))] / [2(ax-2bx+cx)]
  = [ax-bx ± √(bx^2 - ax*cx)] / (ax-2bx+cx)
Assuming a solution exists, plug that t back into the original equation to get the other components of B(t) at a given x.
Part 2
Rather than producing a second Bezier curve that coincides with part of the first (I don't feel like crunching symbols right now), you can simply limit the domain of the parameter t to a proper sub-interval of [0,1]. That is, use part 1 to find the values of t for two different values of x; call these t-values i and j. Draw B(t) for t ∈ [i,j]. Equivalently, draw B(t(j-i)+i) for t ∈ [0,1].
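As a quick MATLAB sketch of that domain restriction (example values of my own; a, b, c are the start, control and anchor points, and ti, tj are the i and j above):
a = [0 0]; b = [30 100]; c = [100 0]; %// start, control, anchor
ti = 0.2; tj = 0.8;                   %// t-values found in part 1
t = linspace(ti, tj, 50)';            %// restricted domain
pts = (1-t).^2*a + 2*t.*(1-t)*b + t.^2*c;
plot(pts(:,1), pts(:,2))              %// draws only that portion of the curve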
The t equation is wrong; you need to use eq. (1)
(1) x = (ax-2bx+cx)t^2 + 2(bx-ax)t + ax
and solve it using the quadratic formula for the roots (2):
(2) t = (-b ± √(b^2 - 4ac)) / (2a)
where
a = ax-2bx+cx
b = 2(bx-ax)
c = ax - x
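A small MATLAB sketch of this corrected solve, with example points of my own (a, b, c are the start, control and anchor points; x0 is the query x):
a = [0 0]; b = [30 100]; c = [100 0]; x0 = 25;
A = a(1) - 2*b(1) + c(1);  %// quadratic coefficient
B = 2*(b(1) - a(1));       %// linear coefficient
C = a(1) - x0;             %// constant term (note the -x0)
t = (-B + [1 -1]*sqrt(B^2 - 4*A*C)) / (2*A); %// both roots
t = t(t >= 0 & t <= 1);    %// keep only roots that lie on the curve
pt = (1-t').^2*a + 2*t'.*(1-t')*b + (t').^2*c %// the point(s) B(t) at x0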
